[ { "msg_contents": "Hi\n\nIn multixact.c I found some comments like the following:\n\n*\t\tSimilar to AtEOX_MultiXact but for COMMIT PREPARED\n* Discard the local MultiXactId cache like in AtEOX_MultiXact\n\nSince there's no function called \"AtEOX_MultiXact\" in the code,\nI think the \"AtEOX_MultiXact\" may be a typo.\n\nAtEOXact_MultiXact seems to be the right function here.\n\nBest regards,\nhouzj", "msg_date": "Thu, 8 Oct 2020 01:15:35 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "Probably typo in multixact.c" }, { "msg_contents": "On Thu, Oct 08, 2020 at 01:15:35AM +0000, Hou, Zhijie wrote:\n> Hi\n> \n> In multixact.c I found some comments like the following:\n> \n> *\t\tSimilar to AtEOX_MultiXact but for COMMIT PREPARED\n> * Discard the local MultiXactId cache like in AtEOX_MultiXact\n> \n> Since there's no function called \"AtEOX_MultiXact\" in the code,\n> I think the \"AtEOX_MultiXact\" may be a typo.\n> \n> AtEOXact_MultiXact seems to be the right function here.\n\nYes, that looks like a simple typo to me as well.\nAtEOXact_MultiXact() shares portions of the logics in\nPostPrepare_MultiXact and multixact_twophase_postcommit.\n--\nMichael", "msg_date": "Thu, 8 Oct 2020 10:26:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Probably typo in multixact.c" }, { "msg_contents": "On Thu, Oct 8, 2020 at 10:26:39AM +0900, Michael Paquier wrote:\n> On Thu, Oct 08, 2020 at 01:15:35AM +0000, Hou, Zhijie wrote:\n> > Hi\n> > \n> > In multixact.c I found some comments like the following:\n> > \n> > *\t\tSimilar to AtEOX_MultiXact but for COMMIT PREPARED\n> > * Discard the local MultiXactId cache like in AtEOX_MultiXact\n> > \n> > Since there's no function called \"AtEOX_MultiXact\" in the code,\n> > I think the \"AtEOX_MultiXact\" may be a typo.\n> > \n> > AtEOXact_MultiXact seems to be the right function here.\n> \n> Yes, that looks like a simple typo to 
me as well.\n> AtEOXact_MultiXact() shares portions of the logics in\n> PostPrepare_MultiXact and multixact_twophase_postcommit.\n\nFYI, this patch was applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 8 Oct 2020 12:42:28 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Probably typo in multixact.c" } ]
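The typo fix in the thread above is easy to check mechanically. Here is a minimal, self-contained sketch (the demo file is a stand-in, not the real source; in an actual check you would grep `src/backend/access/transam/multixact.c` in a PostgreSQL checkout). The pattern matches the stale `AtEOX_MultiXact` name in the comment without matching the correctly spelled `AtEOXact_MultiXact` function, because the character following `AtEOX` differs.

```shell
# Stand-in for multixact.c containing the stale comment name alongside
# the correctly spelled function (hypothetical demo file, not the real source).
cat > /tmp/multixact_demo.c <<'EOF'
/*
 * Similar to AtEOX_MultiXact but for COMMIT PREPARED
 */
void
AtEOXact_MultiXact(void)
{
}
EOF
# Only the stale name in the comment matches; "AtEOXact_MultiXact" does not,
# since the character after "AtEOX" is 'a' rather than '_'.
grep -c 'AtEOX_MultiXact' /tmp/multixact_demo.c   # prints 1
```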
[ { "msg_contents": "I want to progress work on stored procedures returning multiple result \nsets. Examples of how this could work on the SQL side have previously \nbeen shown [0]. We also have ongoing work to make psql show multiple \nresult sets [1]. This appears to work fine in the simple query \nprotocol. But the extended query protocol doesn't support multiple \nresult sets at the moment [2]. This would be desirable to be able to \nuse parameter binding, and also since one of the higher-level goals \nwould be to support the use case of stored procedures returning multiple \nresult sets via JDBC.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/4580ff7b-d610-eaeb-e06f-4d686896b93b%402ndquadrant.com\n[1]: https://commitfest.postgresql.org/29/2096/\n[2]: https://www.postgresql.org/message-id/9507.1534370765%40sss.pgh.pa.us\n\n(Terminology: I'm calling this project \"dynamic result sets\", which \nincludes several concepts: 1) multiple result sets, 2) those result sets \ncan have different structures, 3) the structure of the result sets is \ndecided at run time, not declared in the schema/procedure definition/etc.)\n\nOne possibility I rejected was to invent a third query protocol beside \nthe simple and extended one. This wouldn't really match with the \nrequirements of JDBC and similar APIs because the APIs for sending \nqueries don't indicate whether dynamic result sets are expected or \nrequired, you only indicate that later by how you process the result \nsets. So we really need to use the existing ways of sending off the \nqueries. Also, avoiding a third query protocol is probably desirable in \ngeneral to avoid extra code and APIs.\n\nSo here is my sketch on how this functionality could be woven into the \nextended query protocol. I'll go through how the existing protocol \nexchange works and then point out the additions that I have in mind.\n\nThese additions could be enabled by a _pq_ startup parameter sent by the \nclient. 
Alternatively, it might also work without that because the \nclient would just reject protocol messages it doesn't understand, but \nthat's probably less desirable behavior.\n\nSo here is how it goes:\n\nC: Parse\nS: ParseComplete\n\nAt this point, the server would know whether the statement it has parsed \ncan produce dynamic result sets. For a stored procedure, this would be \ndeclared with the procedure definition, so when the CALL statement is \nparsed, this can be noticed. I don't actually plan any other cases, but \nfor the sake of discussion, perhaps some variant of EXPLAIN could also \nreturn multiple result sets, and that could also be detected from \nparsing the EXPLAIN invocation.\n\nAt this point a client would usually do\n\nC: Describe (statement)\nS: ParameterDescription\nS: RowDescription\n\nNew would be that the server would now also respond with a new message, say,\n\nS: DynamicResultInfo\n\nthat indicates that dynamic result sets will follow later. The message \nwould otherwise be empty. (We could perhaps include the number of \nresult sets, but this might not actually be useful, and perhaps it's \nbetter not to spend effort on counting things that don't need to be \ncounted.)\n\n(If we don't guard this by a _pq_ startup parameter from the client, an \nold client would now error out because of an unexpected protocol message.)\n\nNow the normal bind and execute sequence follows:\n\nC: Bind\nS: BindComplete\n(C: Describe (portal))\n(S: RowDescription)\nC: Execute\nS: ... 
(DataRows)\nS: CommandComplete\n\nIn the case of a CALL with output parameters, this \"primary\" result set \ncontains one row with the output parameters (existing behavior).\n\nNow, if the client has seen DynamicResultInfo earlier, it should now go \ninto a new subsequence to get the remaining result sets, like this \n(naming obviously to be refined):\n\nC: NextResult\nS: NextResultReady\nC: Describe (portal)\nS: RowDescription\nC: Execute\n....\nS: CommandComplete\nC: NextResult\n...\nC: NextResult\nS: NoNextResult\nC: Sync\nS: ReadyForQuery\n\nI think this would all have to use the unnamed portal, but perhaps there \ncould be other uses with named portals. Some details to be worked out.\n\nOne could perhaps also do without the DynamicResultInfo message and just \nput extra information into the CommandComplete message indicating \"there \nare more result sets after this one\".\n\n(Following the model from the simple query protocol, CommandComplete \nreally means one result set complete, not the whole top-level command. \nReadyForQuery means the whole command is complete. This is perhaps \ndebatable, and interesting questions could also arise when considering \nwhat should happen in the simple query protocol when a query string \nconsists of multiple commands each returning multiple result sets. But \nit doesn't really seem sensible to cater to that.)\n\nOne thing that's missing in this sequence is a way to specify the \ndesired output format (text/binary) for each result set. This could be \nadded to the NextResult message, but at that point the client doesn't \nyet know the number of columns in the result set, so we could only do it \nglobally. Then again, since the result sets are dynamic, it's less \nlikely that a client would be coded to set per-column output codes. \nThen again, I would hate to bake such a restriction into the protocol, \nbecause someone is going to try. 
(I suspect what would be more useful in \npractice is to designate output formats per data type.) So if we wanted \nto have this fully featured, it might have to look something like this:\n\nC: NextResult\nS: NextResultReady\nC: Describe (dynamic) (new message subkind)\nS: RowDescription\nC: Bind (zero parameters, optionally format codes)\nS: BindComplete\nC: Describe (portal)\nS: RowDescription\nC: Execute\n...\n\nWhile this looks more complicated, client libraries could reuse existing \ncode that starts processing with a Bind message and continues to \nCommandComplete, and then just loops back around.\n\nThe mapping of this to libpq in a simple case could look like this:\n\nPQsendQueryParams(conn, \"CALL ...\", ...);\nPQgetResult(...); // gets output parameters\nPQnextResult(...); // new: sends NextResult+Bind\nPQgetResult(...); // and repeat\n\nAgain, it's not clear here how to declare the result column output \nformats. Since libpq doesn't appear to expose the Bind message \nseparately, I'm not sure what to do here.\n\nIn JDBC, the NextResult message would correspond to the \nStatement.getMoreResults() method. 
It will need a bit of conceptual \nadjustment because the first result set sent on the protocol is actually \nthe output parameters, which the JDBC API returns separately from a \nResultSet, so the initial CallableStatement.execute() call will need to \nprocess the primary result set and then send NextResult and obtain the \nfirst dynamic result as the first ResultSet for its API, but that can be \nhandled internally.\n\nThoughts so far?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 8 Oct 2020 09:46:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "dynamic result sets support in extended query protocol" }, { "msg_contents": "Are you proposing to bump up the protocol version (either major or\nminor)? I am asking because it seems you are going to introduce some\nnew message types.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n> I want to progress work on stored procedures returning multiple result\n> sets. Examples of how this could work on the SQL side have previously\n> been shown [0]. We also have ongoing work to make psql show multiple\n> result sets [1]. This appears to work fine in the simple query\n> protocol. But the extended query protocol doesn't support multiple\n> result sets at the moment [2]. 
This would be desirable to be able to\n> use parameter binding, and also since one of the higher-level goals\n> would be to support the use case of stored procedures returning\n> multiple result sets via JDBC.\n> \n> [0]:\n> https://www.postgresql.org/message-id/flat/4580ff7b-d610-eaeb-e06f-4d686896b93b%402ndquadrant.com\n> [1]: https://commitfest.postgresql.org/29/2096/\n> [2]:\n> https://www.postgresql.org/message-id/9507.1534370765%40sss.pgh.pa.us\n> \n> (Terminology: I'm calling this project \"dynamic result sets\", which\n> includes several concepts: 1) multiple result sets, 2) those result\n> sets can have different structures, 3) the structure of the result\n> sets is decided at run time, not declared in the schema/procedure\n> definition/etc.)\n> \n> One possibility I rejected was to invent a third query protocol beside\n> the simple and extended one. This wouldn't really match with the\n> requirements of JDBC and similar APIs because the APIs for sending\n> queries don't indicate whether dynamic result sets are expected or\n> required, you only indicate that later by how you process the result\n> sets. So we really need to use the existing ways of sending off the\n> queries. Also, avoiding a third query protocol is probably desirable\n> in general to avoid extra code and APIs.\n> \n> So here is my sketch on how this functionality could be woven into the\n> extended query protocol. I'll go through how the existing protocol\n> exchange works and then point out the additions that I have in mind.\n> \n> These additions could be enabled by a _pq_ startup parameter sent by\n> the client. Alternatively, it might also work without that because\n> the client would just reject protocol messages it doesn't understand,\n> but that's probably less desirable behavior.\n> \n> So here is how it goes:\n> \n> C: Parse\n> S: ParseComplete\n> \n> At this point, the server would know whether the statement it has\n> parsed can produce dynamic result sets. 
For a stored procedure, this\n> would be declared with the procedure definition, so when the CALL\n> statement is parsed, this can be noticed. I don't actually plan any\n> other cases, but for the sake of discussion, perhaps some variant of\n> EXPLAIN could also return multiple result sets, and that could also be\n> detected from parsing the EXPLAIN invocation.\n> \n> At this point a client would usually do\n> \n> C: Describe (statement)\n> S: ParameterDescription\n> S: RowDescription\n> \n> New would be that the server would now also respond with a new\n> message, say,\n> \n> S: DynamicResultInfo\n> \n> that indicates that dynamic result sets will follow later. The\n> message would otherwise be empty. (We could perhaps include the\n> number of result sets, but this might not actually be useful, and\n> perhaps it's better not to spent effort on counting things that don't\n> need to be counted.)\n> \n> (If we don't guard this by a _pq_ startup parameter from the client,\n> an old client would now error out because of an unexpected protocol\n> message.)\n> \n> Now the normal bind and execute sequence follows:\n> \n> C: Bind\n> S: BindComplete\n> (C: Describe (portal))\n> (S: RowDescription)\n> C: Execute\n> S: ... (DataRows)\n> S: CommandComplete\n> \n> In the case of a CALL with output parameters, this \"primary\" result\n> set contains one row with the output parameters (existing behavior).\n> \n> Now, if the client has seen DynamicResultInfo earlier, it should now\n> go into a new subsequence to get the remaining result sets, like this\n> (naming obviously to be refined):\n> \n> C: NextResult\n> S: NextResultReady\n> C: Describe (portal)\n> S: RowDescription\n> C: Execute\n> ....\n> S: CommandComplete\n> C: NextResult\n> ...\n> C: NextResult\n> S: NoNextResult\n> C: Sync\n> S: ReadyForQuery\n> \n> I think this would all have to use the unnamed portal, but perhaps\n> there could be other uses with named portals. 
Some details to be\n> worked out.\n> \n> One could perhaps also do without the DynamicResultInfo message and\n> just put extra information into the CommandComplete message indicating\n> \"there are more result sets after this one\".\n> \n> (Following the model from the simple query protocol, CommandComplete\n> really means one result set complete, not the whole top-level\n> command. ReadyForQuery means the whole command is complete. This is\n> perhaps debatable, and interesting questions could also arise when\n> considering what should happen in the simple query protocol when a\n> query string consists of multiple commands each returning multiple\n> result sets. But it doesn't really seem sensible to cater to that.)\n> \n> One thing that's missing in this sequence is a way to specify the\n> desired output format (text/binary) for each result set. This could\n> be added to the NextResult message, but at that point the client\n> doesn't yet know the number of columns in the result set, so we could\n> only do it globally. Then again, since the result sets are dynamic,\n> it's less likely that a client would be coded to set per-column output\n> codes. Then again, I would hate to bake such a restriction into the\n> protocol, because some is going to try. (I suspect what would be more\n> useful in practice is to designate output formats per data type.) 
So\n> if we wanted to have this fully featured, it might have to look\n> something like this:\n> \n> C: NextResult\n> S: NextResultReady\n> C: Describe (dynamic) (new message subkind)\n> S: RowDescription\n> C: Bind (zero parameters, optionally format codes)\n> S: BindComplete\n> C: Describe (portal)\n> S: RowDescription\n> C: Execute\n> ...\n> \n> While this looks more complicated, client libraries could reuse\n> existing code that starts processing with a Bind message and continues\n> to CommandComplete, and then just loops back around.\n> \n> The mapping of this to libpq in a simple case could look like this:\n> \n> PQsendQueryParams(conn, \"CALL ...\", ...);\n> PQgetResult(...); // gets output parameters\n> PQnextResult(...); // new: sends NextResult+Bind\n> PQgetResult(...); // and repeat\n> \n> Again, it's not clear here how to declare the result column output\n> formats. Since libpq doesn't appear to expose the Bind message\n> separately, I'm not sure what to do here.\n> \n> In JDBC, the NextResult message would correspond to the\n> Statement.getMoreResults() method. 
It will need a bit of conceptual\n> adjustment because the first result set sent on the protocol is\n> actually the output parameters, which the JDBC API returns separately\n> from a ResultSet, so the initial CallableStatement.execute() call will\n> need to process the primary result set and then send NextResult and\n> obtain the first dynamic result as the first ResultSet for its API,\n> but that can be handled internally.\n> \n> Thoughts so far?\n> \n> -- \n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n> \n> \n\n\n", "msg_date": "Thu, 08 Oct 2020 17:23:54 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 2020-10-08 10:23, Tatsuo Ishii wrote:\n> Are you proposing to bump up the protocol version (either major or\n> minor)? I am asking because it seems you are going to introduce some\n> new message types.\n\nIt wouldn't be a new major version. It could either be a new minor \nversion, or it would be guarded by a _pq_ protocol message to enable \nthis functionality from the client, as described. Or both? We haven't \ndone this sort of thing a lot, so some discussion on the details might \nbe necessary.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 9 Oct 2020 09:31:33 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "\nOn 10/8/20 3:46 AM, Peter Eisentraut wrote:\n> I want to progress work on stored procedures returning multiple result\n> sets.  Examples of how this could work on the SQL side have previously\n> been shown [0].  We also have ongoing work to make psql show multiple\n> result sets [1].  
This appears to work fine in the simple query\n> protocol.  But the extended query protocol doesn't support multiple\n> result sets at the moment [2].  This would be desirable to be able to\n> use parameter binding, and also since one of the higher-level goals\n> would be to support the use case of stored procedures returning\n> multiple result sets via JDBC.\n>\n> [0]:\n> https://www.postgresql.org/message-id/flat/4580ff7b-d610-eaeb-e06f-4d686896b93b%402ndquadrant.com\n> [1]: https://commitfest.postgresql.org/29/2096/\n> [2]:\n> https://www.postgresql.org/message-id/9507.1534370765%40sss.pgh.pa.us\n>\n> (Terminology: I'm calling this project \"dynamic result sets\", which\n> includes several concepts: 1) multiple result sets, 2) those result\n> sets can have different structures, 3) the structure of the result\n> sets is decided at run time, not declared in the schema/procedure\n> definition/etc.)\n>\n> One possibility I rejected was to invent a third query protocol beside\n> the simple and extended one.  This wouldn't really match with the\n> requirements of JDBC and similar APIs because the APIs for sending\n> queries don't indicate whether dynamic result sets are expected or\n> required, you only indicate that later by how you process the result\n> sets.  So we really need to use the existing ways of sending off the\n> queries.  Also, avoiding a third query protocol is probably desirable\n> in general to avoid extra code and APIs.\n>\n> So here is my sketch on how this functionality could be woven into the\n> extended query protocol.  I'll go through how the existing protocol\n> exchange works and then point out the additions that I have in mind.\n>\n> These additions could be enabled by a _pq_ startup parameter sent by\n> the client.  
Alternatively, it might also work without that because\n> the client would just reject protocol messages it doesn't understand,\n> but that's probably less desirable behavior.\n>\n> So here is how it goes:\n>\n> C: Parse\n> S: ParseComplete\n>\n> At this point, the server would know whether the statement it has\n> parsed can produce dynamic result sets.  For a stored procedure, this\n> would be declared with the procedure definition, so when the CALL\n> statement is parsed, this can be noticed.  I don't actually plan any\n> other cases, but for the sake of discussion, perhaps some variant of\n> EXPLAIN could also return multiple result sets, and that could also be\n> detected from parsing the EXPLAIN invocation.\n>\n> At this point a client would usually do\n>\n> C: Describe (statement)\n> S: ParameterDescription\n> S: RowDescription\n>\n> New would be that the server would now also respond with a new\n> message, say,\n>\n> S: DynamicResultInfo\n>\n> that indicates that dynamic result sets will follow later.  The\n> message would otherwise be empty.  (We could perhaps include the\n> number of result sets, but this might not actually be useful, and\n> perhaps it's better not to spent effort on counting things that don't\n> need to be counted.)\n>\n> (If we don't guard this by a _pq_ startup parameter from the client,\n> an old client would now error out because of an unexpected protocol\n> message.)\n>\n> Now the normal bind and execute sequence follows:\n>\n> C: Bind\n> S: BindComplete\n> (C: Describe (portal))\n> (S: RowDescription)\n> C: Execute\n> S: ... 
(DataRows)\n> S: CommandComplete\n>\n> In the case of a CALL with output parameters, this \"primary\" result\n> set contains one row with the output parameters (existing behavior).\n>\n> Now, if the client has seen DynamicResultInfo earlier, it should now\n> go into a new subsequence to get the remaining result sets, like this\n> (naming obviously to be refined):\n>\n> C: NextResult\n> S: NextResultReady\n> C: Describe (portal)\n> S: RowDescription\n> C: Execute\n> ....\n> S: CommandComplete\n> C: NextResult\n> ...\n> C: NextResult\n> S: NoNextResult\n> C: Sync\n> S: ReadyForQuery\n>\n> I think this would all have to use the unnamed portal, but perhaps\n> there could be other uses with named portals.  Some details to be\n> worked out.\n>\n> One could perhaps also do without the DynamicResultInfo message and\n> just put extra information into the CommandComplete message indicating\n> \"there are more result sets after this one\".\n>\n> (Following the model from the simple query protocol, CommandComplete\n> really means one result set complete, not the whole top-level command.\n> ReadyForQuery means the whole command is complete.  This is perhaps\n> debatable, and interesting questions could also arise when considering\n> what should happen in the simple query protocol when a query string\n> consists of multiple commands each returning multiple result sets. \n> But it doesn't really seem sensible to cater to that.)\n>\n> One thing that's missing in this sequence is a way to specify the\n> desired output format (text/binary) for each result set.  This could\n> be added to the NextResult message, but at that point the client\n> doesn't yet know the number of columns in the result set, so we could\n> only do it globally.  Then again, since the result sets are dynamic,\n> it's less likely that a client would be coded to set per-column output\n> codes. Then again, I would hate to bake such a restriction into the\n> protocol, because some is going to try.  
(I suspect what would be more\n> useful in practice is to designate output formats per data type.)  So\n> if we wanted to have this fully featured, it might have to look\n> something like this:\n>\n> C: NextResult\n> S: NextResultReady\n> C: Describe (dynamic) (new message subkind)\n> S: RowDescription\n> C: Bind (zero parameters, optionally format codes)\n> S: BindComplete\n> C: Describe (portal)\n> S: RowDescription\n> C: Execute\n> ...\n>\n> While this looks more complicated, client libraries could reuse\n> existing code that starts processing with a Bind message and continues\n> to CommandComplete, and then just loops back around.\n>\n> The mapping of this to libpq in a simple case could look like this:\n>\n> PQsendQueryParams(conn, \"CALL ...\", ...);\n> PQgetResult(...);  // gets output parameters\n> PQnextResult(...);  // new: sends NextResult+Bind\n> PQgetResult(...);  // and repeat\n>\n> Again, it's not clear here how to declare the result column output\n> formats.  Since libpq doesn't appear to expose the Bind message\n> separately, I'm not sure what to do here.\n>\n> In JDBC, the NextResult message would correspond to the\n> Statement.getMoreResults() method.  It will need a bit of conceptual\n> adjustment because the first result set sent on the protocol is\n> actually the output parameters, which the JDBC API returns separately\n> from a ResultSet, so the initial CallableStatement.execute() call will\n> need to process the primary result set and then send NextResult and\n> obtain the first dynamic result as the first ResultSet for its API,\n> but that can be handled internally.\n>\n> Thoughts so far?\n>\n\n\nExciting stuff. But I'm a bit concerned about the sequence of\nresultsets. The JDBC docco for CallableStatement says:\n\n A CallableStatement can return one ResultSet object or multiple\n ResultSet objects. 
Multiple ResultSet objects are handled using\n operations inherited from Statement.\n\n For maximum portability, a call's ResultSet objects and update\n counts should be processed prior to getting the values of output\n parameters.\n\nAnd this is more or less in line with the pattern that I've seen when\nconverting SPs from other systems - the OUT params are usually set at\nthe end with things like status flags and error messages.\n\nIf the OUT parameter resultset has to come first (which is how I read\nyour proposal - please correct me if I'm wrong) we'll have to stack up\nall the resultsets until the SP returns, then send the OUT params, then\nsend the remaining resultsets. That seems ... suboptimal.  The\nalternative would be to send the OUT params last. That might result in\nthe driver needing to do some lookahead and caching, but I don't think\nit's unmanageable. Of course, your protocol would also need changing.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Fri, 9 Oct 2020 13:32:48 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On Fri, 9 Oct 2020 at 13:33, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 10/8/20 3:46 AM, Peter Eisentraut wrote:\n> > I want to progress work on stored procedures returning multiple result\n> > sets. Examples of how this could work on the SQL side have previously\n> > been shown [0]. We also have ongoing work to make psql show multiple\n> > result sets [1]. This appears to work fine in the simple query\n> > protocol. But the extended query protocol doesn't support multiple\n> > result sets at the moment [2]. 
This would be desirable to be able to\n> > use parameter binding, and also since one of the higher-level goals\n> > would be to support the use case of stored procedures returning\n> > multiple result sets via JDBC.\n> >\n> > [0]:\n> >\n> https://www.postgresql.org/message-id/flat/4580ff7b-d610-eaeb-e06f-4d686896b93b%402ndquadrant.com\n> > [1]: https://commitfest.postgresql.org/29/2096/\n> > [2]:\n> > https://www.postgresql.org/message-id/9507.1534370765%40sss.pgh.pa.us\n> >\n> > (Terminology: I'm calling this project \"dynamic result sets\", which\n> > includes several concepts: 1) multiple result sets, 2) those result\n> > sets can have different structures, 3) the structure of the result\n> > sets is decided at run time, not declared in the schema/procedure\n> > definition/etc.)\n> >\n> > One possibility I rejected was to invent a third query protocol beside\n> > the simple and extended one. This wouldn't really match with the\n> > requirements of JDBC and similar APIs because the APIs for sending\n> > queries don't indicate whether dynamic result sets are expected or\n> > required, you only indicate that later by how you process the result\n> > sets. So we really need to use the existing ways of sending off the\n> > queries. Also, avoiding a third query protocol is probably desirable\n> > in general to avoid extra code and APIs.\n> >\n> > So here is my sketch on how this functionality could be woven into the\n> > extended query protocol. I'll go through how the existing protocol\n> > exchange works and then point out the additions that I have in mind.\n> >\n> > These additions could be enabled by a _pq_ startup parameter sent by\n> > the client. 
Alternatively, it might also work without that because\n> > the client would just reject protocol messages it doesn't understand,\n> > but that's probably less desirable behavior.\n> >\n> > So here is how it goes:\n> >\n> > C: Parse\n> > S: ParseComplete\n> >\n> > At this point, the server would know whether the statement it has\n> > parsed can produce dynamic result sets. For a stored procedure, this\n> > would be declared with the procedure definition, so when the CALL\n> > statement is parsed, this can be noticed. I don't actually plan any\n> > other cases, but for the sake of discussion, perhaps some variant of\n> > EXPLAIN could also return multiple result sets, and that could also be\n> > detected from parsing the EXPLAIN invocation.\n> >\n> > At this point a client would usually do\n> >\n> > C: Describe (statement)\n> > S: ParameterDescription\n> > S: RowDescription\n> >\n> > New would be that the server would now also respond with a new\n> > message, say,\n> >\n> > S: DynamicResultInfo\n> >\n> > that indicates that dynamic result sets will follow later. The\n> > message would otherwise be empty. (We could perhaps include the\n> > number of result sets, but this might not actually be useful, and\n> > perhaps it's better not to spent effort on counting things that don't\n> > need to be counted.)\n> >\n> > (If we don't guard this by a _pq_ startup parameter from the client,\n> > an old client would now error out because of an unexpected protocol\n> > message.)\n> >\n> > Now the normal bind and execute sequence follows:\n> >\n> > C: Bind\n> > S: BindComplete\n> > (C: Describe (portal))\n> > (S: RowDescription)\n> > C: Execute\n> > S: ... 
(DataRows)\n> > S: CommandComplete\n> >\n> > In the case of a CALL with output parameters, this \"primary\" result\n> > set contains one row with the output parameters (existing behavior).\n> >\n> > Now, if the client has seen DynamicResultInfo earlier, it should now\n> > go into a new subsequence to get the remaining result sets, like this\n> > (naming obviously to be refined):\n> >\n> > C: NextResult\n> > S: NextResultReady\n> > C: Describe (portal)\n> > S: RowDescription\n> > C: Execute\n> > ....\n> > S: CommandComplete\n> > C: NextResult\n> > ...\n> > C: NextResult\n> > S: NoNextResult\n> > C: Sync\n> > S: ReadyForQuery\n> >\n> > I think this would all have to use the unnamed portal, but perhaps\n> > there could be other uses with named portals. Some details to be\n> > worked out.\n> >\n> > One could perhaps also do without the DynamicResultInfo message and\n> > just put extra information into the CommandComplete message indicating\n> > \"there are more result sets after this one\".\n> >\n> > (Following the model from the simple query protocol, CommandComplete\n> > really means one result set complete, not the whole top-level command.\n> > ReadyForQuery means the whole command is complete. This is perhaps\n> > debatable, and interesting questions could also arise when considering\n> > what should happen in the simple query protocol when a query string\n> > consists of multiple commands each returning multiple result sets.\n> > But it doesn't really seem sensible to cater to that.)\n> >\n> > One thing that's missing in this sequence is a way to specify the\n> > desired output format (text/binary) for each result set. This could\n> > be added to the NextResult message, but at that point the client\n> > doesn't yet know the number of columns in the result set, so we could\n> > only do it globally. Then again, since the result sets are dynamic,\n> > it's less likely that a client would be coded to set per-column output\n> > codes. 
Then again, I would hate to bake such a restriction into the\n> > protocol, because someone is going to try. (I suspect what would be more\n> > useful in practice is to designate output formats per data type.) So\n> > if we wanted to have this fully featured, it might have to look\n> > something like this:\n> >\n> > C: NextResult\n> > S: NextResultReady\n> > C: Describe (dynamic) (new message subkind)\n> > S: RowDescription\n> > C: Bind (zero parameters, optionally format codes)\n> > S: BindComplete\n> > C: Describe (portal)\n> > S: RowDescription\n> > C: Execute\n> > ...\n> >\n> > While this looks more complicated, client libraries could reuse\n> > existing code that starts processing with a Bind message and continues\n> > to CommandComplete, and then just loops back around.\n> >\n> > The mapping of this to libpq in a simple case could look like this:\n> >\n> > PQsendQueryParams(conn, \"CALL ...\", ...);\n> > PQgetResult(...); // gets output parameters\n> > PQnextResult(...); // new: sends NextResult+Bind\n> > PQgetResult(...); // and repeat\n> >\n> > Again, it's not clear here how to declare the result column output\n> > formats. Since libpq doesn't appear to expose the Bind message\n> > separately, I'm not sure what to do here.\n> >\n> > In JDBC, the NextResult message would correspond to the\n> > Statement.getMoreResults() method. It will need a bit of conceptual\n> > adjustment because the first result set sent on the protocol is\n> > actually the output parameters, which the JDBC API returns separately\n> > from a ResultSet, so the initial CallableStatement.execute() call will\n> > need to process the primary result set and then send NextResult and\n> > obtain the first dynamic result as the first ResultSet for its API,\n> > but that can be handled internally.\n> >\n> > Thoughts so far?\n> >\n>\n>\n> Exciting stuff. But I'm a bit concerned about the sequence of\n> resultsets. 
The JDBC docco for CallableStatement says:\n>\n> A CallableStatement can return one ResultSet object or multiple\n> ResultSet objects. Multiple ResultSet objects are handled using\n> operations inherited from Statement.\n>\n> For maximum portability, a call's ResultSet objects and update\n> counts should be processed prior to getting the values of output\n> parameters.\n>\n> And this is more or less in line with the pattern that I've seen when\n> converting SPs from other systems - the OUT params are usually set at\n> the end with things like status flags and error messages.\n>\n> If the OUT parameter resultset has to come first (which is how I read\n> your proposal - please correct me if I'm wrong) we'll have to stack up\n> all the resultsets until the SP returns, then send the OUT params, then\n> send the remaining resultsets. That seems ... suboptimal. The\n> alternative would be to send the OUT params last. That might result in\n> the driver needing to do some lookahead and caching, but I don't think\n> it's unmanageable. 
Of course, your protocol would also need changing.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\nCurrently the JDBC driver does NOT do :\n\n At this point a client would usually do\n>\n> C: Describe (statement)\n> S: ParameterDescription\n> S: RowDescription\n\nWe do not do the Describe until we use a named statement and decide that\nthe extra round trip is worth it.\n\nMaking this assumption will cause a performance regression on all queries.\n\nIf we are going to make a protocol change there are a number of other\nthings the drivers want.\nhttps://github.com/pgjdbc/pgjdbc/blob/master/backend_protocol_v4_wanted_features.md\n\nThanks,\n\nDave\n
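The describe-on-demand behavior Dave describes can be sketched as a small client-side heuristic: stay on the unnamed statement (and skip the Describe round trip) until the same query has been executed often enough to justify a named prepared statement. Everything below — the class, the message lists, the threshold default — is an illustrative sketch, not pgjdbc's actual internals (pgjdbc exposes a similar knob as its prepareThreshold connection parameter):

```python
# Sketch of a driver-side heuristic like the one described above:
# keep using the unnamed statement (no Describe round trip) until a
# query is "hot" enough to be promoted to a named prepared statement.
# Threshold and names are illustrative assumptions, not pgjdbc code.

PREPARE_THRESHOLD = 5  # pgjdbc has a comparable prepareThreshold knob

class StatementCache:
    def __init__(self, threshold=PREPARE_THRESHOLD):
        self.threshold = threshold
        self.use_counts = {}   # sql text -> executions seen so far
        self.named = {}        # sql text -> server-side statement name

    def plan(self, sql):
        """Return the protocol messages the client would send for this
        execution: cheap unnamed flow, a one-time promotion with
        Describe, or reuse of the already-prepared named statement."""
        n = self.use_counts.get(sql, 0) + 1
        self.use_counts[sql] = n
        if sql in self.named:
            # already prepared: no Parse, no Describe
            return ["Bind", "Execute", "Sync"]
        if n >= self.threshold:
            self.named[sql] = f"S_{len(self.named) + 1}"
            # pay the Describe round trip exactly once, at promotion
            return ["Parse", "Describe", "Bind", "Execute", "Sync"]
        return ["Parse", "Bind", "Execute", "Sync"]  # unnamed flow
```

With a threshold of 3, the first two executions avoid Describe entirely, the third pays for it once, and later executions reuse the named statement — which is why an unconditional Describe in the protocol would regress the common case.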
", "msg_date": "Fri, 9 Oct 2020 14:39:38 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Hi,\n\nOn 2020-10-08 09:46:38 +0200, Peter Eisentraut wrote:\n> New would be that the server would now also respond with a new message, say,\n> \n> S: DynamicResultInfo\n\n> Now, if the client has seen DynamicResultInfo earlier, it should now go into\n> a new subsequence to get the remaining result sets, like this (naming\n> obviously to be refined):\n\nHm. Isn't this going to be a lot more latency sensitive than we'd like?\nThis would basically require at least one additional roundtrip for\neverything that *potentially* could return multiple result sets, even if\nno additional results are returned, right? And it'd add at least one\nadditional roundtrip for every result set that's actually sent.\n\nIs there really a good reason for forcing the client to issue\nNextResult, Describe, Execute for each of the dynamic result sets? It's\nnot like there's really a case for allowing the clients to skip them,\nright? Why aren't we sending something more like\n\nS: CommandPartiallyComplete\nS: RowDescription\nS: DataRow...\nS: CommandPartiallyComplete\nS: RowDescription\nS: DataRow...\n...\nS: CommandComplete\nC: Sync\n\ngated by a _pq_ parameter, of course.\n\n\n> I think this would all have to use the unnamed portal, but perhaps there\n> could be other uses with named portals. Some details to be worked out.\n\nWhich'd avoid this too, but:\n\n> One thing that's missing in this sequence is a way to specify the desired\n> output format (text/binary) for each result set.\n\nIs a good point. I personally think avoiding the back and forth is more\nimportant though. 
But if we could address both at the same time...\n\n\n> (I suspect what would be more useful in practice is to designate\n> output formats per data type.)\n\nYea, that'd be *really* useful. It sucks that we basically require\nmultiple round trips to make realistic use of the binary data for the\nfew types where it's a huge win (e.g. bytea).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Oct 2020 11:46:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Hi,\n\n\nOn Fri, 9 Oct 2020 at 14:46, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-10-08 09:46:38 +0200, Peter Eisentraut wrote:\n> > New would be that the server would now also respond with a new message,\n> say,\n> >\n> > S: DynamicResultInfo\n>\n> > Now, if the client has seen DynamicResultInfo earlier, it should now go\n> into\n> > a new subsequence to get the remaining result sets, like this (naming\n> > obviously to be refined):\n>\n> Hm. Isn't this going to be a lot more latency sensitive than we'd like?\n> This would basically require at least one additional roundtrip for\n> everything that *potentially* could return multiple result sets, even if\n> no additional results are returned, right? And it'd add at least one\n> additional roundtrip for every result set that's actually sent.\n>\n\nAgreed as mentioned.\n\n>\n> Is there really a good reason for forcing the client to issue\n> NextResult, Describe, Execute for each of the dynamic result sets? It's\n> not like there's really a case for allowing the clients to skip them,\n> right? 
Why aren't we sending something more like\n>\n> S: CommandPartiallyComplete\n> S: RowDescription\n> S: DataRow...\n> S: CommandPartiallyComplete\n> S: RowDescription\n> S: DataRow...\n> ...\n> S: CommandComplete\n> C: Sync\n>\n> gated by a _pq_ parameter, of course.\n>\n>\n> > I think this would all have to use the unnamed portal, but perhaps there\n> > could be other uses with named portals. Some details to be worked out.\n>\n> Which'd avoid this too, but:\n>\n> > One thing that's missing in this sequence is a way to specify the desired\n> > output format (text/binary) for each result set.\n>\n> Is a good point. I personally think avoiding the back and forth is more\n> important though. But if we could address both at the same time...\n>\n>\n> > (I suspect what would be more useful in practice is to designate\n> > output formats per data type.)\n>\n> Yea, that'd be *really* useful. It sucks that we basically require\n> multiple round trips to make realistic use of the binary data for the\n> few types where it's a huge win (e.g. bytea).\n>\n\nYes!!! Ideally in the startup message.\n\nDave\n
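Andres's single-round-trip variant is easy to model from the client side: every dynamic result set arrives in one server burst, with CommandPartiallyComplete acting as a separator and no client messages in between. A toy decoder over a simulated message stream — the message names and framing are assumptions taken from the sketch above, not an implemented protocol:

```python
# Toy client-side decoder for the single-round-trip flow sketched
# above: the server streams all dynamic result sets in one burst,
# separated by CommandPartiallyComplete, terminated by
# CommandComplete.  Framing is simulated as (msg_type, payload).

def collect_result_sets(stream):
    """Group a received message stream into a list of
    (row_description, rows) tuples, one per result set."""
    result_sets = []
    desc, rows = None, []
    for msg, payload in stream:
        if msg == "RowDescription":
            desc, rows = payload, []
        elif msg == "DataRow":
            rows.append(payload)
        elif msg in ("CommandPartiallyComplete", "CommandComplete"):
            result_sets.append((desc, rows))
            desc, rows = None, []
            if msg == "CommandComplete":
                break  # whole CALL finished; client would now Sync
    return result_sets
```

Against a synthetic stream this recovers each result set without any NextResult round trips, which is the latency argument in a nutshell.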
", "msg_date": "Fri, 9 Oct 2020 14:49:11 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Hi,\n\nOn 2020-10-09 14:49:11 -0400, Dave Cramer wrote:\n> On Fri, 9 Oct 2020 at 14:46, Andres Freund <andres@anarazel.de> wrote:\n> > > (I suspect what would be more useful in practice is to designate\n> > > output formats per data type.)\n> >\n> > Yea, that'd be *really* useful. It sucks that we basically require\n> > multiple round trips to make realistic use of the binary data for the\n> > few types where it's a huge win (e.g. bytea).\n> >\n> \n> Yes!!! 
Ideally in the startup message.\n\nI don't think startup is a good choice. For one, it's size limited. But\nmore importantly, before having successfully established a connection,\nthere's really no way the driver can know which types it should list as\nto be sent in binary (consider e.g. some postgis types, which'd greatly\nbenefit from being sent in binary, but also just version dependent\nstuff).\n\nThe hard part around this really is whether and how to deal with changes\nin type definitions. From types just being created - comparatively\nsimple - to extensions being dropped and recreated, with oids\npotentially being reused.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Oct 2020 11:59:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On Fri, 9 Oct 2020 at 14:59, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-10-09 14:49:11 -0400, Dave Cramer wrote:\n> > On Fri, 9 Oct 2020 at 14:46, Andres Freund <andres@anarazel.de> wrote:\n> > > > (I suspect what would be more useful in practice is to designate\n> > > > output formats per data type.)\n> > >\n> > > Yea, that'd be *really* useful. It sucks that we basically require\n> > > multiple round trips to make realistic use of the binary data for the\n> > > few types where it's a huge win (e.g. bytea).\n> > >\n> >\n> > Yes!!! Ideally in the startup message.\n>\n> I don't think startup is a good choice. For one, it's size limited. But\n> more importantly, before having successfully established a connection,\n> there's really no way the driver can know which types it should list as\n> to be sent in binary (consider e.g. 
some postgis types, which'd greatly\n> benefit from being sent in binary, but also just version dependent\n> stuff).\n>\n> For the most part we know exactly which types we want in binary for 99% of\nqueries.\n\n\n> The hard part around this really is whether and how to deal with changes\n> in type definitions. From types just being created - comparatively\n> simple - to extensions being dropped and recreated, with oids\n> potentially being reused.\n>\n\nFair point but this is going to be much more complex than just sending most\nof the results in binary which would speed up the overwhelming majority of\nqueries\n\nDave Cramer\n\n>\n>\n
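One reason the startup packet is a poor fit here is that extension type OIDs can't be known before connecting; built-in types, by contrast, have well-known stable OIDs, so a driver could assemble its binary wish-list for those up front and defer the rest to a post-connect catalog lookup. A sketch — the OIDs are the standard pg_type values for built-ins, while the function and its known/deferred split are purely illustrative:

```python
# The handful of built-in types many drivers hardcode for binary
# transfer, keyed by their stable pg_type OIDs.  Extension types
# (e.g. PostGIS geometry) have no stable OID and would need a
# lookup after the connection is established, which is the
# objection to doing this in the startup packet.

BUILTIN_OIDS = {
    "bool": 16, "bytea": 17, "int8": 20, "int2": 21,
    "int4": 23, "text": 25, "oid": 26, "float4": 700, "float8": 701,
}

def binary_registration(wanted_typnames):
    """Build the (oid, format) pairs a per-type registration message
    could carry; unknown names are deferred for a catalog lookup
    (e.g. against pg_type) once the session is up."""
    known, deferred = [], []
    for name in wanted_typnames:
        oid = BUILTIN_OIDS.get(name)
        if oid is not None:
            known.append((oid, "binary"))
        else:
            deferred.append(name)
    return known, deferred
```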
", "msg_date": "Fri, 9 Oct 2020 15:02:31 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 2020-10-09 21:02, Dave Cramer wrote:\n> For the most part we know exactly which types we want in binary for 99% \n> of queries.\n> \n> The hard part around this really is whether and how to deal with changes\n> in type definitions. From types just being created - comparatively\n> simple - to extensions being dropped and recreated, with oids\n> potentially being reused.\n> \n> \n> Fair point but this is going to be much more complex than just sending \n> most of the results in binary which would speed up the overwhelming \n> majority of queries\n\nI've been studying in more detail how the JDBC driver handles binary \nformat use. Having some kind of message \"use binary for these types\" \nwould match its requirements quite exactly. (I have also studied \nnpgsql, but it appears to work quite differently. More input from there \nand other places with similar requirements would be welcome.) The \nquestion as mentioned above is how to deal with type changes. Let's \nwork through a couple of options.\n\nWe could send the type/format list with every query. For example, we \ncould extend/enhance/alter the Bind message so that instead of a \nformat-per-column it sends a format-per-type. But then you'd need to \nsend the complete type list every time. The JDBC driver currently has \n20+ types already hardcoded and more optionally, so you'd send 100+ \nbytes for every query, plus required effort for encoding and decoding. 
\nThat seems unattractive.\n\nOr we send the type/format list once near the beginning of the session. \nThen we need to deal with types being recreated or updated etc.\n\nThe first option is that we \"lock\" the types against changes (ignoring \nwhether that's actually possible right now).  That would mean you \ncouldn't update an affected type/extension while a JDBC session is \nactive.  That's no good.  (Imagine connection pools with hours of server \nlifetime.)\n\nAnother option is that we invalidate the session when a thus-registered \ntype changes.  Also no good.  (We don't want an extension upgrade \nsuddenly breaking all open connections.)\n\nFinally, we could do it on a best-effort basis.  We use binary format \nfor registered types, until there is some invalidation event for the \ntype, at which point we revert to default/text format until the end of a \nsession (or until another protocol message arrives re-registering the \ntype).  This should work, because the result row descriptor contains the \nactual format type, and there is no guarantee that it's the same one \nthat was requested.\n\nSo how about that last option?  I imagine a new protocol message, say, \nTypeFormats, that contains a number of type/format pairs.  The message \nwould typically be sent right after the first ReadyForQuery, gets no \nresponse.  It could also be sent at any other time, but I expect that to \nbe less used in practice.  Binary format is used for registered types if \nthey have binary format support functions, otherwise text continues to \nbe used.  There is no error response for types without binary support. 
\n(There should probably be an error response for registering a type that \ndoes not exist.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 20 Oct 2020 11:57:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On Tue, 20 Oct 2020 at 05:57, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-10-09 21:02, Dave Cramer wrote:\n> > For the most part we know exactly which types we want in binary for 99%\n> > of queries.\n> >\n> > The hard part around this really is whether and how to deal with\n> changes\n> > in type definitions. From types just being created - comparatively\n> > simple - to extensions being dropped and recreated, with oids\n> > potentially being reused.\n> >\n> >\n> > Fair point but this is going to be much more complex than just sending\n> > most of the results in binary which would speed up the overwhelming\n> > majority of queries\n>\n> I've been studying in more detail how the JDBC driver handles binary\n> format use. Having some kind of message \"use binary for these types\"\n> would match its requirements quite exactly. (I have also studied\n> npgsql, but it appears to work quite differently. More input from there\n> and other places with similar requirements would be welcome.) The\n> question as mentioned above is how to deal with type changes. Let's\n> work through a couple of options.\n>\n\nI've added Vladimir (pgjdbc), Shay (npgsql) and Mark Paluch (r2dbc) to\nthis discussion.\nI'm sure there are others but I'm not acquainted with them\n\n>\n> We could send the type/format list with every query. For example, we\n> could extend/enhance/alter the Bind message so that instead of a\n> format-per-column it sends a format-per-type. 
But then you'd need to\n> send the complete type list every time. The JDBC driver currently has\n> 20+ types already hardcoded and more optionally, so you'd send 100+\n> bytes for every query, plus required effort for encoding and decoding.\n> That seems unattractive.\n>\n> Or we send the type/format list once near the beginning of the session.\n> Then we need to deal with types being recreated or updated etc.\n>\n> The first option is that we \"lock\" the types against changes (ignoring\n> whether that's actually possible right now). That would mean you\n> couldn't update an affected type/extension while a JDBC session is\n> active. That's no good. (Imagine connection pools with hours of server\n> lifetime.)\n>\n> Another option is that we invalidate the session when a thus-registered\n> type changes. Also no good. (We don't want an extension upgrade\n> suddenly breaking all open connections.)\n>\n> Agreed the first 2 options are not viable.\n\n\n> Finally, we could do it an a best-effort basis. We use binary format\n> for registered types, until there is some invalidation event for the\n> type, at which point we revert to default/text format until the end of a\n> session (or until another protocol message arrives re-registering the\n> type).\n\n\nDoes the driver tell the server what registered types it wants in binary ?\n\n\n> This should work, because the result row descriptor contains the\n> actual format type, and there is no guarantee that it's the same one\n> that was requested.\n>\n> So how about that last option? I imagine a new protocol message, say,\n> TypeFormats, that contains a number of type/format pairs. The message\n> would typically be sent right after the first ReadyForQuery, gets no\n> response.\n\n\nThis seems a bit hard to control. How long do you wait for no response?\n\n\n> It could also be sent at any other time, but I expect that to\n> be less used in practice. 
Binary format is used for registered types if\n> they have binary format support functions, otherwise text continues to\n> be used.  There is no error response for types without binary support.\n> (There should probably be an error response for registering a type that\n> does not exist.)\n>\n> I'm not sure we (pgjdbc) want all types with binary support functions sent\nautomatically. Turns out that decoding binary is sometimes slower than\ndecoding the text and the on wire overhead isn't significant.\nTimestamps/dates with timezone are also interesting as the binary output\ndoes not include the timezone.\n\nThe notion of a status change message is appealing however. I used the term\nstatus change on purpose as there are other server changes we would like to\nbe made aware of. For instance if someone changes the search path, we would\nlike to know. I'm sort of expanding the scope here but if we are imagining\n... :)\n\nDave\n
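The best-effort rule under discussion — binary for a registered type until an invalidation event, then revert to text for the rest of the session — can be modeled in a few lines. All names below are invented for illustration; the important property is that the row descriptor always reports the format actually used, so a client honoring it keeps working after a fallback:

```python
# Model of the best-effort rule: a registered type is sent in binary
# until an invalidation event for it arrives (e.g. its extension was
# dropped and recreated), after which the server reverts that type
# to text for the session.  All names are invented for illustration.

TEXT, BINARY = 0, 1

class SessionFormats:
    def __init__(self, registered_oids):
        self.registered = set(registered_oids)
        self.invalidated = set()

    def invalidate(self, oid):
        # type definition changed; stop trusting the registration
        self.invalidated.add(oid)

    def format_for(self, oid):
        """Format code the server would report in RowDescription."""
        if oid in self.registered and oid not in self.invalidated:
            return BINARY
        return TEXT
```

Note how a never-registered or invalidated type silently falls back to text rather than erroring, matching the "no error response" behavior described above.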
The \nquestion as mentioned above is how to deal with type changes.  Let's \nwork through a couple of options.I've added Vladimir (pgjdbc), Shay (npgsql) and Mark Paluch (r2dbc)  to this discussion. I'm sure there are others but I'm not acquainted with them\n\nWe could send the type/format list with every query.  For example, we \ncould extend/enhance/alter the Bind message so that instead of a \nformat-per-column it sends a format-per-type.  But then you'd need to \nsend the complete type list every time.  The JDBC driver currently has \n20+ types already hardcoded and more optionally, so you'd send 100+ \nbytes for every query, plus required effort for encoding and decoding. \nThat seems unattractive.\n\nOr we send the type/format list once near the beginning of the session. \nThen we need to deal with types being recreated or updated etc.\n\nThe first option is that we \"lock\" the types against changes (ignoring \nwhether that's actually possible right now).  That would mean you \ncouldn't update an affected type/extension while a JDBC session is \nactive.  That's no good.  (Imagine connection pools with hours of server \nlifetime.)\n\nAnother option is that we invalidate the session when a thus-registered \ntype changes.  Also no good.  (We don't want an extension upgrade \nsuddenly breaking all open connections.)\nAgreed the first 2 options are not viable. \nFinally, we could do it an a best-effort basis.  We use binary format \nfor registered types, until there is some invalidation event for the \ntype, at which point we revert to default/text format until the end of a \nsession (or until another protocol message arrives re-registering the \ntype).  Does the driver tell the server what registered types it wants in binary ? This should work, because the result row descriptor contains the \nactual format type, and there is no guarantee that it's the same one \nthat was requested.\n\nSo how about that last option?  
I imagine a new protocol message, say, \nTypeFormats, that contains a number of type/format pairs.  The message \nwould typically be sent right after the first ReadyForQuery, gets no \nresponse.  This seems a bit hard to control. How long do you wait for no response?  It could also be sent at any other time, but I expect that to \nbe less used in practice.  Binary format is used for registered types if \nthey have binary format support functions, otherwise text continues to \nbe used.  There is no error response for types without binary support. \n(There should probably be an error response for registering a type that \ndoes not exist.)\nI'm not sure we (pgjdbc) want all types with binary support functions sent automatically. Turns out that decoding binary is sometimes slower than decoding the text and the on wire overhead isn't significant. Timestamps/dates with timezone are also interesting as the binary output does not include the timezone.The notion of a status change message is appealing however. I used the term status change on purpose as there are other server changes we would like to be made aware of. For instance if someone changes the search path, we would like to know. I'm sort of expanding the scope here but if we are imagining ... :)Dave", "msg_date": "Tue, 20 Oct 2020 06:24:24 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Very interesting conversation, thanks for including me Dave. Here are some\nthoughts from the Npgsql perspective,\n\nRe the binary vs. text discussion... A long time ago, Npgsql became a\n\"binary-only\" driver, meaning that it never sends or receives values in\ntext encoding, and practically always uses the extended protocol. This was\nbecause in most (all?) 
cases, encoding/decoding binary is more efficient,\nand maintaining two encoders/decoders (one for text, one for binary) made\nless and less sense. So by default, Npgsql just requests \"all binary\" in\nall Bind messages it sends (there's an API for the user to request text, in\nwhich case they get pure strings which they're responsible for parsing).\nBinary handling is implemented for almost all PG types which support it,\nand I've hardly seen any complaints about this for the last few years. I'd\nbe interested in any arguments against this decision (Dave, when have you\nseen that decoding binary is slower than decoding text?).\n\nGiven the above, allowing the client to specify in advance which types\nshould be in binary sounds good, but wouldn't help Npgsql much (since by\ndefault it already requests binary for everything). It would slightly help\nin allowing binary-unsupported types to automatically come back as text\nwithout manual user API calls, but as I wrote above this is an extremely\nrare scenario that people don't care much about.\n\n> Is there really a good reason for forcing the client to issue NextResult,\nDescribe, Execute for each of the dynamic result sets?\n\nI very much agree - it should be possible to execute a procedure and\nconsume all results in a single roundtrip, otherwise this is quite a perf\nkiller.\n\nPeter, from your original message:\n\n> Following the model from the simple query protocol, CommandComplete\nreally means one result set complete, not the whole top-level command.\nReadyForQuery means the whole command is complete. This is perhaps\ndebatable, and interesting questions could also arise when considering what\nshould happen in the simple query protocol when a query string consists of\nmultiple commands each returning multiple result sets. But it doesn't\nreally seem sensible to cater to that\n\nNpgsql implements batching of multiple statements via the extended protocol\nin a similar way. 
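The batch shape described here can be modeled with a small sketch over message tags (a simulation of the backend's reply stream, not a real network client; the single-letter tags follow the protocol's message identifiers):

```python
# Modeled reply to a two-statement extended-protocol batch
# (Parse1/Bind1/Describe1/Execute1/Parse2/Bind2/Describe2/Execute2/Sync).
# Tags: '1' ParseComplete, '2' BindComplete, 'T' RowDescription,
# 'D' DataRow, 'C' CommandComplete, 'Z' ReadyForQuery.
replies = ["1", "2", "T", "D", "C", "1", "2", "T", "D", "D", "C", "Z"]

def split_batch(replies):
    """Group replies into per-statement chunks: each 'C' closes one
    statement, and the final 'Z' closes the whole batch."""
    statements, current = [], []
    for tag in replies:
        if tag == "Z":
            break
        current.append(tag)
        if tag == "C":
            statements.append(current)
            current = []
    return statements
```

So a consumer sees one CommandComplete per statement and exactly one ReadyForQuery per batch, which is why the same framing question arises for dynamic result sets.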
In other words, the .NET API allows users to pack\nmultiple SQL statements and execute them in one roundtrip, and Npgsql does\nthis by sending\nParse1/Bind1/Describe1/Execute1/Parse2/Bind2/Describe2/Execute2/Sync. So\nCommandComplete signals completion of a single statement in the batch,\nwhereas ReadyForQuery signals completion of the entire batch. This means\nthat the \"interesting questions\" mentioned above are possibly relevant to\nthe extended protocol as well.", "msg_date": "Tue, 20 Oct 2020 17:28:06 +0300", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Regarding decoding binary vs text performance: There can be a significant\nperformance cost to fetching the binary format over the text format for\ntypes such as text. 
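The trade-off can be illustrated with a tiny sketch (illustrative only): for int4 the text format must be parsed into a number while the binary format is a fixed-width big-endian value, whereas for a text column both wire formats carry the same bytes, so binary gains nothing there.

```python
import struct

# The same int4 value in both wire formats.
text_wire = b"12345"                    # text format: must be parsed
binary_wire = struct.pack("!i", 12345)  # binary format: fixed 4 bytes

assert int(text_wire) == 12345
assert struct.unpack("!i", binary_wire)[0] == 12345

# For a text/varchar column the payload is identical either way,
# so there is nothing to gain from requesting binary.
assert b"hello" == b"hello"
```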
See\nhttps://www.postgresql.org/message-id/CAMovtNoHFod2jMAKQjjxv209PCTJx5Kc66anwWvX0mEiaXwgmA%40mail.gmail.com\nfor the previous discussion.\n\nFrom the pgx driver (https://github.com/jackc/pgx) perspective:\n\nA \"use binary for these types\" message sent once at the beginning of the\nsession would not only be helpful for dynamic result sets but could\nsimplify use of the extended protocol in general.\n\nUpthread someone posted a page pgjdbc detailing desired changes to the\nbackend protocol (\nhttps://github.com/pgjdbc/pgjdbc/blob/master/backend_protocol_v4_wanted_features.md).\nI concur with almost everything there, but in particular the first\nsuggestion of the backend automatically converting binary values like it\ndoes text values would be huge. That combined with the \"use binary for\nthese types\" message could greatly simplify the driver side work in using\nthe binary format.\n\nCommandComplete vs ReadyForQuery -- pgx does the same as Npgsql in that it\nbatches multiple queries together in the extended protocol and uses\nCommandComplete for statement completion and ReadyForQuery for batch\ncompletion.\n\n\n\nOn Tue, Oct 20, 2020 at 9:28 AM Shay Rojansky <roji@roji.org> wrote:\n\n> Very interesting conversation, thanks for including me Dave. Here are some\n> thoughts from the Npgsql perspective,\n>\n> Re the binary vs. text discussion... A long time ago, Npgsql became a\n> \"binary-only\" driver, meaning that it never sends or receives values in\n> text encoding, and practically always uses the extended protocol. This was\n> because in most (all?) cases, encoding/decoding binary is more efficient,\n> and maintaining two encoders/decoders (one for text, one for binary) made\n> less and less sense. 
So by default, Npgsql just requests \"all binary\" in\n> all Bind messages it sends (there's an API for the user to request text, in\n> which case they get pure strings which they're responsible for parsing).\n> Binary handling is implemented for almost all PG types which support it,\n> and I've hardly seen any complaints about this for the last few years. I'd\n> be interested in any arguments against this decision (Dave, when have you\n> seen that decoding binary is slower than decoding text?).\n>\n> Given the above, allowing the client to specify in advance which types\n> should be in binary sounds good, but wouldn't help Npgsql much (since by\n> default it already requests binary for everything). It would slightly help\n> in allowing binary-unsupported types to automatically come back as text\n> without manual user API calls, but as I wrote above this is an extremely\n> rare scenario that people don't care much about.\n>\n> > Is there really a good reason for forcing the client to issue\n> NextResult, Describe, Execute for each of the dynamic result sets?\n>\n> I very much agree - it should be possible to execute a procedure and\n> consume all results in a single roundtrip, otherwise this is quite a perf\n> killer.\n>\n> Peter, from your original message:\n>\n> > Following the model from the simple query protocol, CommandComplete\n> really means one result set complete, not the whole top-level command.\n> ReadyForQuery means the whole command is complete. This is perhaps\n> debatable, and interesting questions could also arise when considering what\n> should happen in the simple query protocol when a query string consists of\n> multiple commands each returning multiple result sets. But it doesn't\n> really seem sensible to cater to that\n>\n> Npgsql implements batching of multiple statements via the extended\n> protocol in a similar way. 
In other words, the .NET API allows users to\n> pack multiple SQL statements and execute them in one roundtrip, and Npgsql\n> does this by sending\n> Parse1/Bind1/Describe1/Execute1/Parse2/Bind2/Describe2/Execute2/Sync. So\n> CommandComplete signals completion of a single statement in the batch,\n> whereas ReadyForQuery signals completion of the entire batch. This means\n> that the \"interesting questions\" mentioned above are possibly relevant to\n> the extended protocol as well.\n>", "msg_date": "Tue, 20 Oct 2020 18:55:41 -0500", "msg_from": "Jack Christensen <jack@jncsoftware.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Hi,\n\nOn 2020-10-20 18:55:41 -0500, Jack Christensen wrote:\n> Upthread someone posted a page pgjdbc detailing desired changes to the\n> backend protocol (\n> https://github.com/pgjdbc/pgjdbc/blob/master/backend_protocol_v4_wanted_features.md).\n\nA lot of the stuff on there seems way beyond what can be achieved in\nsomething incrementally added to the protocol. Fair enough in an article\nabout \"v4\" of the protocol. But I don't think we are - nor should we be\n- talking about a full new protocol version here. 
Instead we are talking\nabout extending the protocol, where the extensions are opt-in.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 20 Oct 2020 17:09:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On Tue, 20 Oct 2020 at 20:09, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-10-20 18:55:41 -0500, Jack Christensen wrote:\n> > Upthread someone posted a page pgjdbc detailing desired changes to the\n> > backend protocol (\n> >\n> https://github.com/pgjdbc/pgjdbc/blob/master/backend_protocol_v4_wanted_features.md\n> ).\n>\n> A lot of the stuff on there seems way beyond what can be achieved in\n> something incrementally added to the protocol. Fair enough in an article\n> about \"v4\" of the protocol. But I don't think we are - nor should we be\n> - talking about a full new protocol version here. Instead we are talking\n> about extending the protocol, where the extensions are opt-in.\n>\n\nYou are correct we are not talking about a whole new protocol, but why not ?\nSeems to me we would have a lot more latitude to get it right if we didn't\nhave this limitation.\n\nDave
", "msg_date": "Tue, 20 Oct 2020 20:17:45 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Hi,\n\nOn 2020-10-20 20:17:45 -0400, Dave Cramer wrote:\n> You are correct we are not talking about a whole new protocol, but why not ?\n> Seems to me we would have a lot more latitude to get it right if we didn't\n> have this limitation.\n\nA new protocol will face a much bigger adoption hurdle, and there's much\nstuff that we'll want to do that we'll have a hard time ever getting off\nthe ground. Whereas opt-in extensions are much easier to get off the ground.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 21 Oct 2020 10:49:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 2020-10-20 12:24, Dave Cramer wrote:\n> Finally, we could do it on a best-effort basis.  We use binary format\n> for registered types, until there is some invalidation event for the\n> type, at which point we revert to default/text format until the end\n> of a\n> session (or until another protocol message arrives re-registering the\n> type). \n> \n> Does the driver tell the server what registered types it wants in binary ?\n\nYes, the driver tells the server, \"whenever you send these types, send \nthem in binary\" (all other types keep sending in text).\n\n> This should work, because the result row descriptor contains the\n> actual format type, and there is no guarantee that it's the same one\n> that was requested.\n> \n> So how about that last option?  
I imagine a new protocol message, say,\n> TypeFormats, that contains a number of type/format pairs.  The message\n> would typically be sent right after the first ReadyForQuery, gets no\n> response. \n> \n> This seems a bit hard to control. How long do you wait for no response?\n\nIn this design, you don't need a response.\n\n> It could also be sent at any other time, but I expect that to\n> be less used in practice.  Binary format is used for registered\n> types if\n> they have binary format support functions, otherwise text continues to\n> be used.  There is no error response for types without binary support.\n> (There should probably be an error response for registering a type that\n> does not exist.)\n> \n> I'm not sure we (pgjdbc) want all types with binary support functions \n> sent automatically. Turns out that decoding binary is sometimes slower \n> than decoding the text and the on wire overhead isn't significant. \n> Timestamps/dates with timezone are also interesting as the binary output \n> does not include the timezone.\n\nIn this design, you pick the types you want.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 23 Oct 2020 10:59:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 2020-10-09 20:46, Andres Freund wrote:\n> Is there really a good reason for forcing the client to issue\n> NextResult, Describe, Execute for each of the dynamic result sets? It's\n> not like there's really a case for allowing the clients to skip them,\n> right? 
Why aren't we sending something more like\n> \n> S: CommandPartiallyComplete\n> S: RowDescription\n> S: DataRow...\n> S: CommandPartiallyComplete\n> S: RowDescription\n> S: DataRow...\n> ...\n> S: CommandComplete\n> C: Sync\n\nI want to post my current patch, to keep this discussion moving. There \nare still a number of pieces to pull together, but what I have is a \nself-contained functioning prototype.\n\nThe interesting thing about the above message sequence is that the \n\"CommandPartiallyComplete\" isn't actually necessary. Since an Execute \nmessage normally does not issue a RowDescription response, the \nappearance of one is already enough to mark the beginning of a new \nresult set. Moreover, libpq already handles this correctly, so we \nwouldn't need to change it at all.\n\nWe might still want to add a new protocol message, for clarity perhaps, \nand that would probably only be a few lines of code on either side, but \nthat would only serve for additional error checking and wouldn't \nactually be needed to identify what's going on.\n\nWhat else we need:\n\n- Think about what should happen if the Execute message specifies a row \ncount, and what should happen during subsequent Execute messages on the \nsame portal. I suspect that there isn't a particularly elegant answer, \nbut we need to pick some behavior.\n\n- Some way for psql to display multiple result sets. Proposals have been \nmade in [0] and [1]. (You need either patch or one like it for the \nregression tests in this patch to pass.)\n\n- Session-level default result formats setting, proposed in [2]. Not \nstrictly necessary, but would be most sensible to coordinate these two.\n\n- We don't have a way to test the extended query protocol. I have \nattached my test program, but we might want to think about something \nmore permanent. Proposals for this have already been made in [3].\n\n- Right now, this only supports returning dynamic result sets from a \ntop-level CALL. 
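The observation above — that a RowDescription arriving in response to Execute is by itself enough to mark the start of a new result set — can be sketched as a small routine over the reply stream (message tags only; a simulation, not libpq code):

```python
def split_result_sets(replies):
    """Split a CALL's reply stream into result sets: each 'T'
    (RowDescription) begins a new set, 'C' (CommandComplete) ends
    the command. No extra protocol message is needed to delimit sets."""
    sets, current = [], None
    for tag in replies:
        if tag == "T":
            if current is not None:
                sets.append(current)
            current = []
        elif tag == "D" and current is not None:
            current.append(tag)
        elif tag == "C":
            if current is not None:
                sets.append(current)
            current = None
    return sets

# Two dynamic result sets returned by one CALL: T D D T D C
stream = ["T", "D", "D", "T", "D", "C"]
```

This is why an existing client that already handles RowDescription after Execute can consume dynamic result sets without any new message type.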
Specifications for passing dynamic result sets from one \nprocedure to a calling procedure exist in the SQL standard and could be \nadded later.\n\n(All the SQL additions in this patch are per SQL standard. DB2 appears \nto be the closest existing implementation.)\n\n\n[0]:\nhttps://www.postgresql.org/message-id/flat/4580ff7b-d610-eaeb-e06f-4d686896b93b%402ndquadrant.com\n[1]: https://commitfest.postgresql.org/29/2096/\n[2]: https://commitfest.postgresql.org/31/2812/\n[3]: \nhttps://www.postgresql.org/message-id/4f733cca-5e07-e167-8b38-05b5c9066d04%402ndQuadrant.com\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Wed, 30 Dec 2020 15:33:56 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Hi Peter,\n\nOn 12/30/20 9:33 AM, Peter Eisentraut wrote:\n> On 2020-10-09 20:46, Andres Freund wrote:\n>> Is there really a good reason for forcing the client to issue\n>> NextResult, Describe, Execute for each of the dynamic result sets? It's\n>> not like there's really a case for allowing the clients to skip them,\n>> right?  Why aren't we sending something more like\n>>\n>> S: CommandPartiallyComplete\n>> S: RowDescription\n>> S: DataRow...\n>> S: CommandPartiallyComplete\n>> S: RowDescription\n>> S: DataRow...\n>> ...\n>> S: CommandComplete\n>> C: Sync\n> \n> I want to post my current patch, to keep this discussion moving.\n\nCFBot reports that tests are failing, although the patch applies.\n\nAlso, you dropped all the driver authors from the thread. 
Not sure if \nthat was intentional, but you might want to add them back if you need \ntheir input.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 15 Mar 2021 09:56:25 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 15.03.21 14:56, David Steele wrote:\n> Hi Peter,\n> \n> On 12/30/20 9:33 AM, Peter Eisentraut wrote:\n>> On 2020-10-09 20:46, Andres Freund wrote:\n>>> Is there really a good reason for forcing the client to issue\n>>> NextResult, Describe, Execute for each of the dynamic result sets? It's\n>>> not like there's really a case for allowing the clients to skip them,\n>>> right?  Why aren't we sending something more like\n>>>\n>>> S: CommandPartiallyComplete\n>>> S: RowDescription\n>>> S: DataRow...\n>>> S: CommandPartiallyComplete\n>>> S: RowDescription\n>>> S: DataRow...\n>>> ...\n>>> S: CommandComplete\n>>> C: Sync\n>>\n>> I want to post my current patch, to keep this discussion moving.\n> \n> CFBot reports that tests are failing, although the patch applies.\n\nYes, as explained in the message, you need another patch that makes psql \nshow the additional result sets. The cfbot cannot handle that kind of \nthing.\n\nIn the meantime, I have made a few small fixes, so I'm attaching another \npatch.", "msg_date": "Tue, 16 Mar 2021 13:23:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Here is an updated patch with some merge conflicts resolved, to keep it \nfresh. It's still pending in the commit fest from last time.\n\nMy focus right now is to work on the \"psql - add SHOW_ALL_RESULTS \noption\" patch (https://commitfest.postgresql.org/33/2096/) first, which \nis pretty much a prerequisite to this one. 
The attached patch set \ncontains a minimal variant of that patch in 0001 and 0002, just to get \nthis working, but disregard those for the purposes of code review.\n\nThe 0003 patch contains comprehensive documentation and test changes \nthat can explain the feature in its current form.", "msg_date": "Tue, 29 Jun 2021 15:39:37 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On Tue, Jun 29, 2021 at 7:10 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Here is an updated patch with some merge conflicts resolved, to keep it\n> fresh. It's still pending in the commit fest from last time.\n>\n> My focus right now is to work on the \"psql - add SHOW_ALL_RESULTS\n> option\" patch (https://commitfest.postgresql.org/33/2096/) first, which\n> is pretty much a prerequisite to this one. The attached patch set\n> contains a minimal variant of that patch in 0001 and 0002, just to get\n> this working, but disregard those for the purposes of code review.\n>\n> The 0003 patch contains comprehensive documentation and test changes\n> that can explain the feature in its current form.\n\nOne of the patch v3-0003-Dynamic-result-sets-from-procedures.patch\ndoes not apply on HEAD, please post an updated patch for it:\nHunk #1 FAILED at 57.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/include/commands/defrem.h.rej\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 22 Jul 2021 11:36:47 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "rebased patch set\n\nOn 22.07.21 08:06, vignesh C wrote:\n> On Tue, Jun 29, 2021 at 7:10 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> Here is an updated patch with some merge conflicts resolved, to keep it\n>> fresh. 
It's still pending in the commit fest from last time.\n>>\n>> My focus right now is to work on the \"psql - add SHOW_ALL_RESULTS\n>> option\" patch (https://commitfest.postgresql.org/33/2096/) first, which\n>> is pretty much a prerequisite to this one. The attached patch set\n>> contains a minimal variant of that patch in 0001 and 0002, just to get\n>> this working, but disregard those for the purposes of code review.\n>>\n>> The 0003 patch contains comprehensive documentation and test changes\n>> that can explain the feature in its current form.\n> \n> One of the patch v3-0003-Dynamic-result-sets-from-procedures.patch\n> does not apply on HEAD, please post an updated patch for it:\n> Hunk #1 FAILED at 57.\n> 1 out of 1 hunk FAILED -- saving rejects to file\n> src/include/commands/defrem.h.rej\n> \n> Regards,\n> Vignesh\n> \n>", "msg_date": "Mon, 30 Aug 2021 22:22:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On Mon, Aug 30, 2021 at 1:23 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> rebased patch set\n>\n> On 22.07.21 08:06, vignesh C wrote:\n> > On Tue, Jun 29, 2021 at 7:10 PM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> >>\n> >> Here is an updated patch with some merge conflicts resolved, to keep it\n> >> fresh. It's still pending in the commit fest from last time.\n> >>\n> >> My focus right now is to work on the \"psql - add SHOW_ALL_RESULTS\n> >> option\" patch (https://commitfest.postgresql.org/33/2096/) first, which\n> >> is pretty much a prerequisite to this one. 
The attached patch set\n> >> contains a minimal variant of that patch in 0001 and 0002, just to get\n> >> this working, but disregard those for the purposes of code review.\n> >>\n> >> The 0003 patch contains comprehensive documentation and test changes\n> >> that can explain the feature in its current form.\n> >\n> > One of the patch v3-0003-Dynamic-result-sets-from-procedures.patch\n> > does not apply on HEAD, please post an updated patch for it:\n> > Hunk #1 FAILED at 57.\n> > 1 out of 1 hunk FAILED -- saving rejects to file\n> > src/include/commands/defrem.h.rej\n> >\n> > Regards,\n> > Vignesh\n> >\n> >\n>\n> Hi,\n\n+ <term><literal>WITH RETURN</literal></term>\n+ <term><literal>WITHOUT RETURN</literal></term>\n+ <listitem>\n+ <para>\n+ This option is only valid for cursors defined inside a procedure.\n\nSince there are two options listed, I think using 'These options are' would\nbe better.\n\nFor CurrentProcedure(),\n\n+ return InvalidOid;\n+ else\n+ return llast_oid(procedure_stack);\n\nThe word 'else' can be omitted.\n\nCheers
", "msg_date": "Mon, 30 Aug 2021 14:11:34 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 30, 2021 at 02:11:34PM -0700, Zhihong Yu wrote:\n> On Mon, Aug 30, 2021 at 1:23 PM Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> wrote:\n> \n> > rebased patch set\n> \n> + <term><literal>WITH RETURN</literal></term>\n> + <term><literal>WITHOUT RETURN</literal></term>\n> + <listitem>\n> + <para>\n> + This option is only valid for cursors defined inside a procedure.\n> \n> Since there are two options listed, I think using 'These options are' would\n> be better.\n> \n> For CurrentProcedure(),\n> \n> + return InvalidOid;\n> + else\n> + return llast_oid(procedure_stack);\n> \n> The word 'else' can be omitted.\n\nThe cfbot reports that the patch doesn't apply anymore:\nhttp://cfbot.cputube.org/patch_36_2911.log.\n\nSince you mentioned 
that this patch depends on the SHOW_ALL_RESULTS psql patch\nwhich is still being worked on, I'm not expecting much activity here until the\nprerequisites are done. It also seems better to mark this patch as Waiting\non Author as further reviews are probably not really needed for now.\n\n\n", "msg_date": "Wed, 12 Jan 2022 18:20:55 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 12.01.22 11:20, Julien Rouhaud wrote:\n> Since you mentioned that this patch depends on the SHOW_ALL_RESULTS psql patch\n> which is still being worked on, I'm not expecting much activity here until the\n> prerequisites are done. It also seems better to mark this patch as Waiting\n> on Author as further reviews are probably not really needed for now.\n\nWell, a review on the general architecture and approach would have been \nuseful. But I understand that without the psql work, it's difficult for \na reviewer to even get started on this patch. It's also similarly \ndifficult for me to keep updating it. So I'll set it to Returned with \nfeedback for now and take it off the table. I want to get back to it \nwhen the prerequisites are more settled.\n\n\n", "msg_date": "Tue, 1 Feb 2022 15:40:46 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 01.02.22 15:40, Peter Eisentraut wrote:\n> On 12.01.22 11:20, Julien Rouhaud wrote:\n>> Since you mentioned that this patch depends on the SHOW_ALL_RESULTS \n>> psql patch\n>> which is still being worked on, I'm not expecting much activity here \n>> until the\n>> prerequisites are done.  
It also seems better to mark this patch as \n>> Waiting\n>> on Author as further reviews are probably not really needed for now.\n> \n> Well, a review on the general architecture and approach would have been \n> useful.  But I understand that without the psql work, it's difficult for \n> a reviewer to even get started on this patch.  It's also similarly \n> difficult for me to keep updating it.  So I'll set it to Returned with \n> feedback for now and take it off the table.  I want to get back to it \n> when the prerequisites are more settled.\n\nNow that the psql support for multiple result sets exists, I want to \nrevive this patch.  It's the same as the last posted version, except now \nit doesn't require any psql changes or any weird test modifications anymore.\n\n(Old news: This patch allows declaring a cursor WITH RETURN in a \nprocedure to make the cursor's data be returned as a result of the CALL \ninvocation.  The procedure needs to be declared with the DYNAMIC RESULT \nSETS attribute.)", "msg_date": "Fri, 14 Oct 2022 09:11:59 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "Hi\n\n\nOn Fri, Oct 14, 2022 at 9:12 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 01.02.22 15:40, Peter Eisentraut wrote:\n> > On 12.01.22 11:20, Julien Rouhaud wrote:\n> >> Since you mentioned that this patch depends on the SHOW_ALL_RESULTS\n> >> psql patch\n> >> which is still being worked on, I'm not expecting much activity here\n> >> until the\n> >> prerequisites are done. It also seems better to mark this patch as\n> >> Waiting\n> >> on Author as further reviews are probably not really needed for now.\n> >\n> > Well, a review on the general architecture and approach would have been\n> > useful. 
But I understand that without the psql work, it's difficult for\n> > a reviewer to even get started on this patch. It's also similarly\n> > difficult for me to keep updating it. So I'll set it to Returned with\n> > feedback for now and take it off the table. I want to get back to it\n> > when the prerequisites are more settled.\n>\n> Now that the psql support for multiple result sets exists, I want to\n> revive this patch. It's the same as the last posted version, except now\n> it doesn't require any psql changes or any weird test modifications\n> anymore.\n>\n> (Old news: This patch allows declaring a cursor WITH RETURN in a\n> procedure to make the cursor's data be returned as a result of the CALL\n> invocation. The procedure needs to be declared with the DYNAMIC RESULT\n> SETS attribute.)\n>\n\nI did a quick test of this patch, and it is working pretty well.\n\nI have two ideas.\n\n1. There could be a possibility to set \"dynamic result sets\" to unknown. The\nbehaviour of the \"dynamic result sets\" option is a little bit confusing. I\nexpect the number of result sets to be exactly the same as this number.\nBut the warning is raised only when this number is exceeded. For this\nimplementation the correct name should be something like \"max dynamic result\nsets\". At this moment, I find this \"dynamic result sets\" feature\nrather confusing, and because the effect is just a warning, I don't see\na strong benefit. I can see some benefit if I can declare that CALL will be\nwithout dynamic result sets, with an exact number of dynamic result sets, or\nwith an unknown number of dynamic result sets. And if the result is not\nas expected, then an exception should be raised (not a warning).\n\n2. Unfortunately, it doesn't work nicely with pagers. It starts a pager for\none result, and waits for the end, and starts a pager for the second result,\nand waits for the end. There is no possibility to see all results at one\ntime. 
The current behavior is correct, but I don't think it is user\nfriendly. I think I can teach pspg to support multiple documents. But I\nneed a more robust protocol and some separators - minimally an empty line\n(but some ascii control char can be safer). As second step we can introduce\nnew psql option like PSQL_MULTI_PAGER, that can be used when possible\nresult sets is higher than 1\n\nRegards\n\nPavel
", "msg_date": "Fri, 14 Oct 2022 19:22:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 14.10.22 19:22, Pavel Stehule wrote:\n> 1. there can be possibility to set \"dynamic result sets\" to unknown. The \n> behaviour of the \"dynamic result sets\" option is a little bit confusing. I\n> expect the number of result sets should be exactly the same as this number.\n> But the warning is raised only when this number is acrossed. 
For \n> this implementation the correct name should be like \"max dynamic result \n> sets\" or some like this. At this moment, I see this feature \"dynamic \n> result sets\" more confusing, and because the effect is just a warning, \n> then I don't see a strong benefit. I can see some benefit if I can \n> declare so CALL will be without dynamic result sets, or with exact \n> number of dynamic result sets or with unknown number of dynamic result \n> sets. And if the result is not expected, then an exception should be \n> raised (not warning).\n\nAll of this is specified by the SQL standard. (What I mean by that is \nthat if we want to deviate from that, we should have strong reasons \nbeyond \"it seems a bit odd\".)\n\n> 2. Unfortunately, it doesn't work nicely with pagers. It starts a pager \n> for one result, and waits for the end, and starts pager for the second \n> result, and waits for the end. There is not a possibility to see all \n> results at one time. The current behavior is correct, but I don't think \n> it is user friendly. I think I can teach pspg to support multiple \n> documents. But I need a more robust protocol and some separators - \n> minimally an empty line (but some ascii control char can be safer). As \n> second step we can introduce new psql option like PSQL_MULTI_PAGER, that \n> can be used when possible result sets is higher than 1\n\nI think that is unrelated to this patch. Multiple result sets already \nexist and libpq and psql handle them. This patch introduces another way \nin which multiple result sets can be produced on the server, but it \ndoesn't touch the client side. 
So your concerns should be added either \nas a new feature or possibly as a bug against existing psql functionality.\n\n\n", "msg_date": "Tue, 15 Nov 2022 15:58:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 14.10.22 09:11, Peter Eisentraut wrote:\n> Now that the psql support for multiple result sets exists, I want to \n> revive this patch.  It's the same as the last posted version, except now \n> it doesn't require any psql changes or any weird test modifications \n> anymore.\n> \n> (Old news: This patch allows declaring a cursor WITH RETURN in a \n> procedure to make the cursor's data be returned as a result of the CALL \n> invocation.  The procedure needs to be declared with the DYNAMIC RESULT \n> SETS attribute.)\n\nI added tests using the new psql \\bind command to test this \nfunctionality in the extended query protocol, which showed that this got \nbroken since I first wrote this patch. This \"blame\" is on the pipeline \nmode in libpq patch (acb7e4eb6b1c614c68a62fb3a6a5bba1af0a2659). I need \nto spend more time on this and figure out how to repair it. In the \nmeantime, here is an updated patch set with the current status.", "msg_date": "Tue, 22 Nov 2022 16:57:17 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 2022-Nov-22, Peter Eisentraut wrote:\n\n> I added tests using the new psql \\bind command to test this functionality in\n> the extended query protocol, which showed that this got broken since I first\n> wrote this patch. This \"blame\" is on the pipeline mode in libpq patch\n> (acb7e4eb6b1c614c68a62fb3a6a5bba1af0a2659). I need to spend more time on\n> this and figure out how to repair it. 
In the meantime, here is an updated\n> patch set with the current status.\n\nI looked at this a little bit to understand why it fails with \\bind. As\nyou say, it does interact badly with pipeline mode -- more precisely, it\ncollides with the queue handling that was added for pipeline. The\nproblem is that in extended query mode, we \"advance\" the queue in\nPQgetResult when asyncStatus is READY -- fe-exec.c line 2110 ff. But\nthe protocol relies on returning READY when the second RowDescriptor\nmessage is received (fe-protocol3.c line 319), so libpq gets confused\nand everything blows up. libpq needs the queue to stay put until all\nthe results from that query have been consumed.\n\nIf you comment out the pqCommandQueueAdvance() in fe-exec.c line 2124,\nyour example works correctly and no longer throws a libpq error (but of\ncourse, other things break).\n\nI suppose that in order for this to work, we would have to find another\nway to \"advance\" the queue that doesn't rely on the status being\nPGASYNC_READY.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 21 Dec 2022 20:41:20 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 2022-Dec-21, Alvaro Herrera wrote:\n\n> I suppose that in order for this to work, we would have to find another\n> way to \"advance\" the queue that doesn't rely on the status being\n> PGASYNC_READY.\n\nI think the way to make this work is to increase the coupling between\nfe-exec.c and fe-protocol.c by making the queue advance occur when\nCommandComplete is received. 
This is likely more correct protocol-wise\nthan what we're doing now: we would consider the command as done when\nthe server tells us it is done, rather than relying on internal libpq\nstate.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 22 Dec 2022 20:39:21 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 2022-Nov-22, Peter Eisentraut wrote:\n\n> I added tests using the new psql \\bind command to test this functionality in\n> the extended query protocol, which showed that this got broken since I first\n> wrote this patch. This \"blame\" is on the pipeline mode in libpq patch\n> (acb7e4eb6b1c614c68a62fb3a6a5bba1af0a2659). I need to spend more time on\n> this and figure out how to repair it.\n\nApplying this patch, your test queries seem to work correctly.\n\nThis is quite WIP, especially because there's a couple of scenarios\nuncovered by tests that I'd like to ensure correctness about, but if you\nwould like to continue adding tests for extended query and dynamic\nresult sets, it may be helpful.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)", "msg_date": "Mon, 30 Jan 2023 14:06:09 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 30.01.23 14:06, Alvaro Herrera wrote:\n> On 2022-Nov-22, Peter Eisentraut wrote:\n> \n>> I added tests using the new psql \\bind command to test this functionality in\n>> the extended query protocol, which showed that this got broken since I first\n>> wrote this patch. 
This \"blame\" is on the pipeline mode in libpq patch\n>> (acb7e4eb6b1c614c68a62fb3a6a5bba1af0a2659). I need to spend more time on\n>> this and figure out how to repair it.\n> \n> Applying this patch, your test queries seem to work correctly.\n\nGreat!\n\n> This is quite WIP, especially because there's a couple of scenarios\n> uncovered by tests that I'd like to ensure correctness about, but if you\n> would like to continue adding tests for extended query and dynamic\n> result sets, it may be helpful.\n\nI should note that it is debatable whether my patch extends the extended \nquery protocol or just uses it within its existing spec but in new ways. \n It just happened to work in old libpq versions without any changes. \nSo you should keep that in mind as you refine your patch, since the way \nthe protocol has been extended/creatively-used is still subject to review.\n\n\n\n", "msg_date": "Tue, 31 Jan 2023 12:07:07 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 31.01.23 12:07, Peter Eisentraut wrote:\n>> Applying this patch, your test queries seem to work correctly.\n> \n> Great!\n> \n>> This is quite WIP, especially because there's a couple of scenarios\n>> uncovered by tests that I'd like to ensure correctness about, but if you\n>> would like to continue adding tests for extended query and dynamic\n>> result sets, it may be helpful.\n> \n> I should note that it is debatable whether my patch extends the extended \n> query protocol or just uses it within its existing spec but in new ways. \n>  It just happened to work in old libpq versions without any changes. So \n> you should keep that in mind as you refine your patch, since the way the \n> protocol has been extended/creatively-used is still subject to review.\n\nAfter some consideration, I have an idea how to proceed with this. 
I \nhave split my original patch into two incremental patches. The first \npatch implements the original feature, but just for the simple query \nprotocol. (The simple query protocol already supports multiple result \nsets.) Attempting to return dynamic result sets using the extended \nquery protocol will result in an error. The second patch then adds the \nextended query protocol support back in, but it still has the issues \nwith libpq that we are discussing.\n\nI think this way we could have a chance to get the first part into PG16 \nor early into PG17, and then the second part can be worked on with less \nstress. This would also allow us to consider a minor protocol version \nbump, and the handling of binary format for dynamic result sets (like \nhttps://commitfest.postgresql.org/42/3777/), and maybe some other issues.\n\nThe attached patches are the same as before, rebased over master and \nsplit up as described. I haven't done any significant work on the \ncontents, but I will try to get the 0001 patch into a more polished \nstate soon.", "msg_date": "Mon, 20 Feb 2023 13:58:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" }, { "msg_contents": "On 20.02.23 13:58, Peter Eisentraut wrote:\n> The attached patches are the same as before, rebased over master and \n> split up as described.  I haven't done any significant work on the \n> contents, but I will try to get the 0001 patch into a more polished \n> state soon.\n\nI've done a bit of work on this patch, mainly cleaned up and expanded \nthe tests, and also added DO support, which is something that had been \nrequested (meaning you can return result sets from DO with this \nfacility). 
Here is a new version.", "msg_date": "Fri, 24 Feb 2023 12:26:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: dynamic result sets support in extended query protocol" } ]
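The thread above hinges on one ordering rule in libpq's pipeline mode: the command queue must not advance until every result set produced by a single command (for example, a CALL returning dynamic result sets) has been consumed, which is why the proposed fix advances the queue on CommandComplete rather than on the first ready state. The following is a toy model of that rule, not libpq code; the message names, queue contents, and procedure name are simplified assumptions made for illustration only.

```python
# Toy model of a pipelined command queue (NOT libpq internals).
# Each queued command may produce several result sets; a per-command
# "command_complete" marker ends its output, mirroring the idea of
# advancing the queue on CommandComplete instead of on the first
# ready-for-result state.

from collections import deque

def route_results(messages, advance_on_complete):
    """Pair each incoming result set with the queued command it belongs to.

    messages: sequence of ("result_set", payload) / ("command_complete", tag)
    advance_on_complete: if False, pop the queued command after its first
    result set (the premature ordering); if True, pop only on the marker.
    """
    queue = deque(["CALL p1()", "SELECT 1"])  # two pipelined commands
    routed = []
    for kind, payload in messages:
        if kind == "result_set":
            # Attribute the result set to the command at the queue head.
            owner = queue[0] if queue else "<no command>"
            routed.append((owner, payload))
            if not advance_on_complete and queue:
                queue.popleft()          # premature advance
        elif kind == "command_complete" and advance_on_complete:
            queue.popleft()              # advance only when told we're done
    return routed

# CALL p1() returns two dynamic result sets, then SELECT 1 returns one.
stream = [
    ("result_set", "rs1"), ("result_set", "rs2"), ("command_complete", "CALL"),
    ("result_set", "one"), ("command_complete", "SELECT"),
]

print(route_results(stream, advance_on_complete=False))
print(route_results(stream, advance_on_complete=True))
```

Advancing on the first result mis-attributes the CALL's second result set to the next queued command and leaves the final result with no owner at all, while advancing on the completion marker keeps every result set paired with the command that produced it.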
[ { "msg_contents": "@@ -1432,7 +1432,7 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn)\n ReorderBufferCleanupTXN(rb, subtxn);\n }\n\n- /* cleanup changes in the toplevel txn */\n+ /* cleanup changes in the txn */\n dlist_foreach_modify(iter, &txn->changes)\n {\n ReorderBufferChange *change;\n@@ -1533,7 +1533,7 @@ ReorderBufferTruncateTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn)\n ReorderBufferTruncateTXN(rb, subtxn);\n }\n\n- /* cleanup changes in the toplevel txn */\n+ /* cleanup changes in the txn */\n dlist_foreach_modify(iter, &txn->changes)\n {\n ReorderBufferChange *change;\n\nBoth the above functions are recursive and will clean the changes for\nboth the top-level transaction and subtransactions. So, I feel the\ncomments should be accordingly updated.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 8 Oct 2020 14:07:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Fix typos in reorderbuffer.c" }, { "msg_contents": "On Thu, 8 Oct 2020 at 17:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> @@ -1432,7 +1432,7 @@ ReorderBufferCleanupTXN(ReorderBuffer *rb,\n> ReorderBufferTXN *txn)\n> ReorderBufferCleanupTXN(rb, subtxn);\n> }\n>\n> - /* cleanup changes in the toplevel txn */\n> + /* cleanup changes in the txn */\n> dlist_foreach_modify(iter, &txn->changes)\n> {\n> ReorderBufferChange *change;\n> @@ -1533,7 +1533,7 @@ ReorderBufferTruncateTXN(ReorderBuffer *rb,\n> ReorderBufferTXN *txn)\n> ReorderBufferTruncateTXN(rb, subtxn);\n> }\n>\n> - /* cleanup changes in the toplevel txn */\n> + /* cleanup changes in the txn */\n> dlist_foreach_modify(iter, &txn->changes)\n> {\n> ReorderBufferChange *change;\n>\n> Both the above functions are recursive and will clean the changes for\n> both the top-level transaction and subtransactions.\n\nRight.\n\n> So, I feel the\n> comments should be accordingly updated.\n\n+1 for this change.\n\nRegards,\n\n-- \nMasahiko Sawada 
http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 8 Oct 2020 18:09:24 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fix typos in reorderbuffer.c" }, { "msg_contents": "On Thu, Oct 8, 2020 at 2:40 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 8 Oct 2020 at 17:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > So, I feel the\n> > comments should be accordingly updated.\n>\n> +1 for this change.\n>\n\nThanks, I have pushed this and along with it pushed a typo-fix in logical.c.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 9 Oct 2020 08:43:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix typos in reorderbuffer.c" } ]
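The comment fix above relies on ReorderBufferCleanupTXN() and ReorderBufferTruncateTXN() being recursive: each call first descends into the subtransactions and then cleans the change list of the transaction it was given, which may itself be a subtransaction — hence "cleanup changes in the txn" rather than "in the toplevel txn". Below is a minimal sketch of that recursion shape; the `Txn` class is a hypothetical stand-in, not the actual ReorderBuffer structures (the real code walks dlist-based C structures).

```python
# Toy model of the recursion shape in ReorderBufferCleanupTXN (assumed
# structure for illustration; not the PostgreSQL data types).

class Txn:
    def __init__(self, name, changes, subtxns=()):
        self.name = name
        self.changes = list(changes)
        self.subtxns = list(subtxns)

def cleanup_txn(txn, freed):
    # First recurse into the subtransactions, as the quoted C code does...
    for sub in txn.subtxns:
        cleanup_txn(sub, freed)
    # ...then clean up changes in *this* txn -- which may itself be a
    # subtransaction, so the comment should not say "toplevel txn".
    freed.extend(txn.changes)
    txn.changes.clear()
    return freed

top = Txn("top", ["c1", "c2"],
          [Txn("sub1", ["c3"]), Txn("sub2", ["c4", "c5"])])
print(cleanup_txn(top, []))  # subtxn changes freed before the parent's
```

In this sketch the subtransactions' changes are released before the parent's, mirroring the order suggested by the quoted diff, where the subtransaction loop precedes the loop over the transaction's own changes.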
[ { "msg_contents": "Hi:\n\n I found the following code in gen_partprune_steps_internal, which\nlooks the if-statement to be always true since list_length(results) > 1;\nI added an Assert(step_ids != NIL) and all the test cases passed.\nif the if-statement is always true, shall we remove it to avoid confusion?\n\n\ngen_partprune_steps_internal(GeneratePruningStepsContext *context,\n\n\n if (list_length(result) > 1)\n {\n List *step_ids = NIL;\n\n foreach(lc, result)\n {\n PartitionPruneStep *step = lfirst(lc);\n\n step_ids = lappend_int(step_ids, step->step_id);\n }\n Assert(step_ids != NIL);\n if (step_ids != NIL) // This should always be true.\n {\n PartitionPruneStep *step;\n\n step = gen_prune_step_combine(context, step_ids,\n\n PARTPRUNE_COMBINE_INTERSECT);\n result = lappend(result, step);\n }\n }\n\n\n-- \nBest Regards\nAndy Fan\n\nHi:  I found the following code in gen_partprune_steps_internal,  which looks the if-statement to be always true since list_length(results) > 1; I added an Assert(step_ids != NIL) and all the test cases passed. if the if-statement is always true,  shall we remove it to avoid confusion?gen_partprune_steps_internal(GeneratePruningStepsContext *context,        if (list_length(result) > 1)        {                List       *step_ids = NIL;                foreach(lc, result)                {                        PartitionPruneStep *step = lfirst(lc);                        step_ids = lappend_int(step_ids, step->step_id);                }                Assert(step_ids != NIL);                  if (step_ids != NIL) // This should always be true.                 
{                        PartitionPruneStep *step;                        step = gen_prune_step_combine(context, step_ids,                                                                                  PARTPRUNE_COMBINE_INTERSECT);                        result = lappend(result, step);                }        }-- Best RegardsAndy Fan", "msg_date": "Thu, 8 Oct 2020 17:55:43 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "Hi,\n\nOn Thu, Oct 8, 2020 at 6:56 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Hi:\n>\n> I found the following code in gen_partprune_steps_internal, which\n> looks the if-statement to be always true since list_length(results) > 1;\n> I added an Assert(step_ids != NIL) and all the test cases passed.\n> if the if-statement is always true, shall we remove it to avoid confusion?\n>\n>\n> gen_partprune_steps_internal(GeneratePruningStepsContext *context,\n>\n>\n> if (list_length(result) > 1)\n> {\n> List *step_ids = NIL;\n>\n> foreach(lc, result)\n> {\n> PartitionPruneStep *step = lfirst(lc);\n>\n> step_ids = lappend_int(step_ids, step->step_id);\n> }\n> Assert(step_ids != NIL);\n> if (step_ids != NIL) // This should always be true.\n> {\n> PartitionPruneStep *step;\n>\n> step = gen_prune_step_combine(context, step_ids,\n> PARTPRUNE_COMBINE_INTERSECT);\n> result = lappend(result, step);\n> }\n> }\n\nThat seems fine to me.\n\nLooking at this piece of code, I remembered that exactly the same\npiece of logic is also present in gen_prune_steps_from_opexps(), which\nlooks like this:\n\n /* Lastly, add a combine step to mutually AND these op steps, if needed */\n if (list_length(opsteps) > 1)\n {\n List *opstep_ids = NIL;\n\n foreach(lc, opsteps)\n {\n PartitionPruneStep *step = lfirst(lc);\n\n opstep_ids = lappend_int(opstep_ids, step->step_id);\n }\n\n if (opstep_ids != NIL)\n return gen_prune_step_combine(context, opstep_ids,\n 
PARTPRUNE_COMBINE_INTERSECT);\n return NULL;\n }\n else if (opsteps != NIL)\n return linitial(opsteps);\n\nI think we should remove this duplicative logic and return the\ngenerated steps in a list from this function, which the code in\ngen_partprune_steps_internal() then \"combines\" using an INTERSECT\nstep. See attached a patch to show what I mean.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 12 Oct 2020 17:36:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Mon, Oct 12, 2020 at 4:37 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi,\n>\n> On Thu, Oct 8, 2020 at 6:56 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > Hi:\n> >\n> > I found the following code in gen_partprune_steps_internal, which\n> > looks the if-statement to be always true since list_length(results) > 1;\n> > I added an Assert(step_ids != NIL) and all the test cases passed.\n> > if the if-statement is always true, shall we remove it to avoid\n> confusion?\n> >\n> >\n> > gen_partprune_steps_internal(GeneratePruningStepsContext *context,\n> >\n> >\n> > if (list_length(result) > 1)\n> > {\n> > List *step_ids = NIL;\n> >\n> > foreach(lc, result)\n> > {\n> > PartitionPruneStep *step = lfirst(lc);\n> >\n> > step_ids = lappend_int(step_ids, step->step_id);\n> > }\n> > Assert(step_ids != NIL);\n> > if (step_ids != NIL) // This should always be true.\n> > {\n> > PartitionPruneStep *step;\n> >\n> > step = gen_prune_step_combine(context, step_ids,\n> >\n> PARTPRUNE_COMBINE_INTERSECT);\n> > result = lappend(result, step);\n> > }\n> > }\n>\n> That seems fine to me.\n>\n> Looking at this piece of code, I remembered that exactly the same\n> piece of logic is also present in gen_prune_steps_from_opexps(), which\n> looks like this:\n>\n> /* Lastly, add a combine step to mutually AND these op steps, if\n> needed */\n> if 
(list_length(opsteps) > 1)\n> {\n> List *opstep_ids = NIL;\n>\n> foreach(lc, opsteps)\n> {\n> PartitionPruneStep *step = lfirst(lc);\n>\n> opstep_ids = lappend_int(opstep_ids, step->step_id);\n> }\n>\n> if (opstep_ids != NIL)\n> return gen_prune_step_combine(context, opstep_ids,\n> PARTPRUNE_COMBINE_INTERSECT);\n> return NULL;\n> }\n> else if (opsteps != NIL)\n> return linitial(opsteps);\n>\n> I think we should remove this duplicative logic and return the\n> generated steps in a list from this function, which the code in\n> gen_partprune_steps_internal() then \"combines\" using an INTERSECT\n> step. See attached a patch to show what I mean.\n>\n>\nThis changes LGTM, and \"make check\" PASSED, thanks for the patch!\n\n-- \nBest Regards\nAndy Fan\n
", "msg_date": "Wed, 14 Oct 2020 11:26:33 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "At Wed, 14 Oct 2020 11:26:33 +0800, Andy Fan <zhihui.fan1213@gmail.com> wrote in \n> On Mon, Oct 12, 2020 at 4:37 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > I think we should remove this duplicative logic and return the\n> > generated steps in a list from this function, which the code in\n> > gen_partprune_steps_internal() then \"combines\" using an INTERSECT\n> > step. 
See attached a patch to show what I mean.\n> >\n> >\n> This changes LGTM, and \"make check\" PASSED, thanks for the patch!\n\nFWIW, both looks fine to me.\n\nBy the way, I guess that some of the caller sites of\ngen_prune_step_combine(PARTPRUNE_COMBINE_INTERSECT) is useless if we\ndo that later?\n\n(Diff1 below)\n\nMmm. I was wrong. *All the other caller site* than that at the end of\ngen_partprune_steps_internal is useless?\n\n(Note: The Diff1 alone leads to assertion failure at partprune.c:945@master.\n See below.)\n\n\nBy the way, I'm confused to see the following portion in\ngen_partprune_steps_internal.\n\n>\t/*\n>\t * Finally, results from all entries appearing in result should be\n>\t * combined using an INTERSECT combine step, if more than one.\n>\t */\n>\tif (list_length(result) > 1)\n...\n>\t\t\tstep = gen_prune_step_combine(context, step_ids,\n>\t\t\t\t\t\t\t\t\t\t PARTPRUNE_COMBINE_INTERSECT);\n>\t\t\tresult = lappend(result, step);\n\nThe result contains both the source terms and the combined term. If I\nunderstand it correctly, we should replace the source terms with\ncombined one. 
(With this change the assertion above doesn't fire and\npasses all regression tests.)\n\n=====\n@@ -1180,13 +1163,9 @@ gen_partprune_steps_internal(GeneratePruningStepsContext *context,\n \t\t}\n \n \t\tif (step_ids != NIL)\n-\t\t{\n-\t\t\tPartitionPruneStep *step;\n-\n-\t\t\tstep = gen_prune_step_combine(context, step_ids,\n-\t\t\t\t\t\t\t\t\t\t PARTPRUNE_COMBINE_INTERSECT);\n-\t\t\tresult = lappend(result, step);\n-\t\t}\n+\t\t\tresult =\n+\t\t\t\tlist_make1(gen_prune_step_combine(context, step_ids,\n+\t\t\t\t\t\t\t\t\t\t\t\t PARTPRUNE_COMBINE_INTERSECT));\n \t}\n \n \treturn result;\n=====\n\n\nregards.\n\n\nDiff1\n======\n@@ -983,9 +983,7 @@ gen_partprune_steps_internal(GeneratePruningStepsContext *context,\n \t\t\telse if (is_andclause(clause))\n \t\t\t{\n \t\t\t\tList\t *args = ((BoolExpr *) clause)->args;\n-\t\t\t\tList\t *argsteps,\n-\t\t\t\t\t\t *arg_stepids = NIL;\n-\t\t\t\tListCell *lc1;\n+\t\t\t\tList\t *argsteps;\n \n \t\t\t\t/*\n \t\t\t\t * args may itself contain clauses of arbitrary type, so just\n@@ -998,21 +996,7 @@ gen_partprune_steps_internal(GeneratePruningStepsContext *context,\n \t\t\t\tif (context->contradictory)\n \t\t\t\t\treturn NIL;\n \n-\t\t\t\tforeach(lc1, argsteps)\n-\t\t\t\t{\n-\t\t\t\t\tPartitionPruneStep *step = lfirst(lc1);\n-\n-\t\t\t\t\targ_stepids = lappend_int(arg_stepids, step->step_id);\n-\t\t\t\t}\n-\n-\t\t\t\tif (arg_stepids != NIL)\n-\t\t\t\t{\n-\t\t\t\t\tPartitionPruneStep *step;\n-\n-\t\t\t\t\tstep = gen_prune_step_combine(context, arg_stepids,\n-\t\t\t\t\t\t\t\t\t\t\t\t PARTPRUNE_COMBINE_INTERSECT);\n-\t\t\t\t\tresult = lappend(result, step);\n-\t\t\t\t}\n+\t\t\t\tresult = list_concat(result, argsteps);\n \t\t\t\tcontinue;\n \t\t\t}\n==== \n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 14 Oct 2020 15:27:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { 
"msg_contents": "On Wed, Oct 14, 2020 at 11:26 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Mon, Oct 12, 2020 at 4:37 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n>\n>> Hi,\n>>\n>> On Thu, Oct 8, 2020 at 6:56 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> >\n>> > Hi:\n>> >\n>> > I found the following code in gen_partprune_steps_internal, which\n>> > looks the if-statement to be always true since list_length(results) > 1;\n>> > I added an Assert(step_ids != NIL) and all the test cases passed.\n>> > if the if-statement is always true, shall we remove it to avoid\n>> confusion?\n>> >\n>> >\n>> > gen_partprune_steps_internal(GeneratePruningStepsContext *context,\n>> >\n>> >\n>> > if (list_length(result) > 1)\n>> > {\n>> > List *step_ids = NIL;\n>> >\n>> > foreach(lc, result)\n>> > {\n>> > PartitionPruneStep *step = lfirst(lc);\n>> >\n>> > step_ids = lappend_int(step_ids, step->step_id);\n>> > }\n>> > Assert(step_ids != NIL);\n>> > if (step_ids != NIL) // This should always be true.\n>> > {\n>> > PartitionPruneStep *step;\n>> >\n>> > step = gen_prune_step_combine(context, step_ids,\n>> >\n>> PARTPRUNE_COMBINE_INTERSECT);\n>> > result = lappend(result, step);\n>> > }\n>> > }\n>>\n>> That seems fine to me.\n>>\n>> Looking at this piece of code, I remembered that exactly the same\n>> piece of logic is also present in gen_prune_steps_from_opexps(), which\n>> looks like this:\n>>\n>> /* Lastly, add a combine step to mutually AND these op steps, if\n>> needed */\n>> if (list_length(opsteps) > 1)\n>> {\n>> List *opstep_ids = NIL;\n>>\n>> foreach(lc, opsteps)\n>> {\n>> PartitionPruneStep *step = lfirst(lc);\n>>\n>> opstep_ids = lappend_int(opstep_ids, step->step_id);\n>> }\n>>\n>> if (opstep_ids != NIL)\n>> return gen_prune_step_combine(context, opstep_ids,\n>> PARTPRUNE_COMBINE_INTERSECT);\n>> return NULL;\n>> }\n>> else if (opsteps != NIL)\n>> return linitial(opsteps);\n>>\n>> I think we should remove this duplicative logic and return the\n>> 
generated steps in a list from this function, which the code in\n>> gen_partprune_steps_internal() then \"combines\" using an INTERSECT\n>> step.  See attached a patch to show what I mean.\n>>\n>>\n> This changes LGTM, and \"make check\" PASSED, thanks for the patch!\n>\n>\nI created https://commitfest.postgresql.org/30/2771/ so that this patch\nwill not\nbe lost.  Thanks!\n\n-- \nBest Regards\nAndy Fan\n
", "msg_date": "Tue, 20 Oct 2020 15:05:29 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "Hi Andy,\n\nOn Tue, Oct 20, 2020 at 4:05 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> On Wed, Oct 14, 2020 at 11:26 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> On Mon, Oct 12, 2020 at 4:37 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>> I think we should remove this duplicative logic and return the\n>>> generated steps in a list from this function, which the code in\n>>> gen_partprune_steps_internal() then \"combines\" using an INTERSECT\n>>> step.  See attached a patch to show what I mean.\n>>>\n>>\n>> This changes LGTM, and \"make check\" PASSED, thanks for the patch!\n>>\n>\n> I created https://commitfest.postgresql.org/30/2771/ so that this patch will not\n> be lost. 
Thanks!\n\nThanks for doing that.\n\nI had updated the patch last week to address Horiguchi-san's comments\nbut didn't manage to post a polished-enough version. I will try again\nthis week.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Oct 2020 21:46:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThe original patch still applies and passes make installcheck-world. An updated patch was mentioned but has not been attached. Updating status to Waiting on Author.\r\n\r\nCheers,\r\n\r\n-- Ryan Lambert\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Wed, 03 Mar 2021 23:44:52 +0000", "msg_from": "Ryan Lambert <ryan@rustprooflabs.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Tue, Oct 20, 2020 at 9:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Oct 20, 2020 at 4:05 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > On Wed, Oct 14, 2020 at 11:26 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >> On Mon, Oct 12, 2020 at 4:37 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >>> I think we should remove this duplicative logic and return the\n> >>> generated steps in a list from this function, which the code in\n> >>> gen_partprune_steps_internal() then \"combines\" using an INTERSECT\n> >>> step. See attached a patch to show what I mean.\n> >>>\n> >>\n> >> This changes LGTM, and \"make check\" PASSED, thanks for the patch!\n> >>\n> >\n> > I created https://commitfest.postgresql.org/30/2771/ so that this patch will not\n> > be lost. 
Thanks!\n>\n> Thanks for doing that.\n>\n> I had updated the patch last week to address Horiguchi-san's comments\n> but didn't manage to post a polished-enough version.  I will try again\n> this week.\n\nSorry, this seems to have totally slipped my mind.\n\nAttached is the patch I had promised.\n\nAlso, I have updated the title of the CF entry to \"Some cosmetic\nimprovements of partition pruning code\", which I think is more\nappropriate.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 4 Mar 2021 15:03:03 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Wed, Mar 3, 2021 at 11:03 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Sorry, this seems to have totally slipped my mind.\n>\n> Attached is the patch I had promised.\n>\n> Also, I have updated the title of the CF entry to \"Some cosmetic\n> improvements of partition pruning code\", which I think is more\n> appropriate.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\nThank you.  The updated patch passes installcheck-world.  I ran a handful\nof test queries with a small number of partitions and observed the same\nplans before and after the patch.  I cannot speak to the quality of the\ncode, though am happy to test any additional use cases that should be\nverified.\n\n\nRyan Lambert\n
", "msg_date": "Thu, 4 Mar 2021 15:50:41 -0700", "msg_from": "Ryan Lambert <ryan@rustprooflabs.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Fri, Mar 5, 2021 at 7:50 AM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n> On Wed, Mar 3, 2021 at 11:03 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> Sorry, this seems to have totally slipped my mind.\n>>\n>> Attached is the patch I had promised.\n>>\n>> Also, I have updated the title of the CF entry to \"Some cosmetic\n>> improvements of partition pruning code\", which I think is more\n>> appropriate.\n>\n> Thank you.  The updated patch passes installcheck-world.  I ran a handful of test queries with a small number of partitions and observed the same plans before and after the patch. I cannot speak to the quality of the code, though am happy to test any additional use cases that should be verified.\n\nThanks Ryan.\n\nThere's no need to test it extensively, because no functionality is\nchanged with this patch.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Mar 2021 15:38:14 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "Should the status of this patch be updated to ready for comitter to get in\nline for Pg 14 deadline?\n\n*Ryan Lambert*\n\nOn Sun, Mar 7, 2021 at 11:38 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Fri, Mar 5, 2021 at 7:50 AM Ryan Lambert <ryan@rustprooflabs.com>\n> wrote:\n> > On Wed, Mar 3, 2021 at 11:03 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >>\n> >> Sorry, this seems to have totally slipped my mind.\n> >>\n> >> Attached is the patch I had promised.\n> >>\n> >> Also, I have updated the title of the CF 
entry to \"Some cosmetic\n> >> improvements of partition pruning code\", which I think is more\n> >> appropriate.\n> >\n> > Thank you.  The updated patch passes installcheck-world.  I ran a\n> handful of test queries with a small number of partitions and observed the\n> same plans before and after the patch. I cannot speak to the quality of the\n> code, though am happy to test any additional use cases that should be\n> verified.\n>\n> Thanks Ryan.\n>\n> There's no need to test it extensively, because no functionality is\n> changed with this patch.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>\n
", "msg_date": "Mon, 22 Mar 2021 11:24:03 -0600", "msg_from": "Ryan Lambert <ryan@rustprooflabs.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "Hi Ryan,\n\nOn Tue, Mar 23, 2021 at 2:24 AM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n> Should the status of this patch be updated to ready for comitter to get in line for Pg 14 deadline?\n\nYes, I've done that.  Thanks for the reminder.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Mar 2021 21:53:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Thu, 4 Mar 2021 at 19:03, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Tue, Oct 20, 2020 at 9:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I had updated the patch last week to address Horiguchi-san's comments\n> > but didn't manage to post a polished-enough version. 
I will try again\n> > this week.\n>\n> Sorry, this seems to have totally slipped my mind.\n>\n> Attached is the patch I had promised.\n\nI've been looking at this patch today and spent quite a bit of time\nstaring at the following fragment:\n\n case PARTCLAUSE_MATCH_STEPS:\n- Assert(clause_steps != NIL);\n- result = list_concat(result, clause_steps);\n+ Assert(clause_step != NULL);\n+ steps = lappend(steps, clause_step);\n break;\n\nSo here, we used to use list_concat to add the steps that\nmatch_clause_to_partition_key() output, but now we lappend() the\nsingle step that match_clause_to_partition_key set in its output arg.\n\nThis appears to be ok as we only return PARTCLAUSE_MATCH_STEPS from\nmatch_clause_to_partition_key() when we process a ScalarArrayOpExpr.\nThere we just transform the IN(<list of consts>) into a Boolean OR\nclause with a set of OpExprs which are equivalent to the\nScalarArrayOpExpr. e.g. \"a IN (1,2)\" becomes \"a = 1 OR a = 2\". The\ncode path which processes the list of OR clauses in\ngen_partprune_steps_internal() will always just output a single\nPARTPRUNE_COMBINE_UNION combine step. So it does not appear that there\nare any behavioural changes there. The list_concat would always have\nbeen just adding a single item to the list before anyway.\n\nHowever, it does change the meaning of what PARTCLAUSE_MATCH_STEPS\ndoes. If we ever needed to expand what PARTCLAUSE_MATCH_STEPS does,\nthen we'll have less flexibility with the newly updated code. For\nexample if we needed to return multiple steps and only combine them at\nthe top level then we now can't. 
I feel there's a good possibility\nthat we'll never need to do that, but I'm not certain of that.\n\nI'm keen to hear your opinion on this.\n\nDavid\n\n\n", "msg_date": "Wed, 7 Apr 2021 19:43:19 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Wed, Apr 7, 2021 at 4:43 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 4 Mar 2021 at 19:03, Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Oct 20, 2020 at 9:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > I had updated the patch last week to address Horiguchi-san's comments\n> > > but didn't manage to post a polished-enough version. I will try again\n> > > this week.\n> >\n> > Sorry, this seems to have totally slipped my mind.\n> >\n> > Attached is the patch I had promised.\n>\n> I've been looking at this patch today and spent quite a bit of time\n> staring at the following fragment:\n\nThanks a lot for looking at this.\n\n> case PARTCLAUSE_MATCH_STEPS:\n> - Assert(clause_steps != NIL);\n> - result = list_concat(result, clause_steps);\n> + Assert(clause_step != NULL);\n> + steps = lappend(steps, clause_step);\n> break;\n>\n> So here, we used to use list_concat to add the steps that\n> match_clause_to_partition_key() output, but now we lappend() the\n> single step that match_clause_to_partition_key set in its output arg.\n>\n> This appears to be ok as we only return PARTCLAUSE_MATCH_STEPS from\n> match_clause_to_partition_key() when we process a ScalarArrayOpExpr.\n> There we just transform the IN(<list of consts>) into a Boolean OR\n> clause with a set of OpExprs which are equivalent to the\n> ScalarArrayOpExpr. e.g. \"a IN (1,2)\" becomes \"a = 1 OR a = 2\". The\n> code path which processes the list of OR clauses in\n> gen_partprune_steps_internal() will always just output a single\n> PARTPRUNE_COMBINE_UNION combine step. 
So it does not appear that there\n> are any behavioural changes there. The list_concat would always have\n> been just adding a single item to the list before anyway.\n\nRight, that was my observation as well.\n\n> However, it does change the meaning of what PARTCLAUSE_MATCH_STEPS\n> does. If we ever needed to expand what PARTCLAUSE_MATCH_STEPS does,\n> then we'll have less flexibility with the newly updated code. For\n> example if we needed to return multiple steps and only combine them at\n> the top level then we now can't. I feel there's a good possibility\n> that we'll never need to do that, but I'm not certain of that.\n>\n> I'm keen to hear your opinion on this.\n\nThat's a good point. So maybe gen_partprune_steps_internal() should\ncontinue to return a list of steps, the last of which would be an\nintersect step to combine the results of the earlier multiple steps.\nWe should still fix the originally reported issue that\ngen_prune_steps_from_opexps() seems to needlessly add an intersect\nstep.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Apr 2021 18:04:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Wed, 7 Apr 2021 at 21:04, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 4:43 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > However, it does change the meaning of what PARTCLAUSE_MATCH_STEPS\n> > does. If we ever needed to expand what PARTCLAUSE_MATCH_STEPS does,\n> > then we'll have less flexibility with the newly updated code. For\n> > example if we needed to return multiple steps and only combine them at\n> > the top level then we now can't. I feel there's a good possibility\n> > that we'll never need to do that, but I'm not certain of that.\n> >\n> > I'm keen to hear your opinion on this.\n>\n> That's a good point. 
So maybe gen_partprune_steps_internal() should\n> continue to return a list of steps, the last of which would be an\n> intersect step to combine the results of the earlier multiple steps.\n> We should still fix the originally reported issue that\n> gen_prune_steps_from_opexps() seems to needlessly add an intersect\n> step.\n\nI was hoping you'd just say that we'll likely not need to do that and\nif we ever did we could adapt the code at that time. :)\n\nThinking more about it, these steps we're talking about are generated\nfrom a recursive call to gen_partprune_steps_internal(). I'm finding\nit very hard to imagine that we'd want to combine steps generated in\nsome recursive call with steps from outside that same call. Right now\nwe recuse into AND BoolExprs OR BoolExprs. I'm struggling to think of\nwhy we'd want to combine a set of steps we generated processing some\nof those with steps from outside that BoolExpr. If we did, we might\nwant to consider teaching canonicalize_qual() to fix it beforehand.\n\ne.g.\n\npostgres=# explain select * from ab where (a = 1 and b = 1) or (a = 1\nand b = 2);\n QUERY PLAN\n---------------------------------------------------\n Seq Scan on ab (cost=0.00..49.55 rows=1 width=8)\n Filter: ((a = 1) AND ((b = 1) OR (b = 2)))\n(2 rows)\n\nIf canonicalize_qual() had been unable to rewrite that WHERE clause\nthen I could see that we might want to combine steps from other\nrecursive quals. I'm thinking right now that I'm glad\ncanonicalize_qual() does that hard work for us. 
(I think partprune.c\ncould handle the original WHERE clause as-is in this example\nanyway...)\n\nDavid\n\n\n", "msg_date": "Wed, 7 Apr 2021 21:53:19 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Wed, 7 Apr 2021 at 21:53, David Rowley <dgrowleyml@gmail.com> wrote:\n> If canonicalize_qual() had been unable to rewrite that WHERE clause\n> then I could see that we might want to combine steps from other\n> recursive quals. I'm thinking right now that I'm glad\n> canonicalize_qual() does that hard work for us. (I think partprune.c\n> could handle the original WHERE clause as-is in this example\n> anyway...)\n\nI made a pass over the v2 patch and since it's been a long time since\nI'd looked at partprune.c I ended doing further rewriting of the\ncomments you'd changed.\n\nThere's only one small code change as I didn't like the following:\n\n- return result;\n+ /* A single step or no pruning possible with the provided clauses. */\n+ return steps ? linitial(steps) : NULL;\n\nI ended up breaking that out into an if condition.\n\nAll the other changes are around the comments.\n\nCan you look over this and let me know if you're happy with the changes?\n\nDavid", "msg_date": "Wed, 7 Apr 2021 23:44:38 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Wed, Apr 7, 2021 at 8:44 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 7 Apr 2021 at 21:53, David Rowley <dgrowleyml@gmail.com> wrote:\n> > If canonicalize_qual() had been unable to rewrite that WHERE clause\n> > then I could see that we might want to combine steps from other\n> > recursive quals. I'm thinking right now that I'm glad\n> > canonicalize_qual() does that hard work for us. 
(I think partprune.c\n> > could handle the original WHERE clause as-is in this example\n> > anyway...)\n>\n> I made a pass over the v2 patch and since it's been a long time since\n> I'd looked at partprune.c I ended doing further rewriting of the\n> comments you'd changed.\n>\n> There's only one small code change as I didn't like the following:\n>\n> - return result;\n> + /* A single step or no pruning possible with the provided clauses. */\n> + return steps ? linitial(steps) : NULL;\n>\n> I ended up breaking that out into an if condition.\n>\n> All the other changes are around the comments.\n>\n> Can you look over this and let me know if you're happy with the changes?\n\nThanks David. Actually, I was busy updating the patch to revert to\ngen_partprune_steps_internal() returning a list and was almost done\nwith it when I saw your message.\n\nI read through v3 and can say that it certainly looks better than v2.\nIf you are happy with gen_partprune_steps_internal() no longer\nreturning a list, I would not object if you wanted to go ahead and\ncommit the v3.\n\nI've attached the patch I had ended up with and was about to post as\nv3, just in case you wanted to glance.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 7 Apr 2021 21:49:17 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Wed, Apr 7, 2021 at 6:53 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 7 Apr 2021 at 21:04, Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Wed, Apr 7, 2021 at 4:43 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > However, it does change the meaning of what PARTCLAUSE_MATCH_STEPS\n> > > does. If we ever needed to expand what PARTCLAUSE_MATCH_STEPS does,\n> > > then we'll have less flexibility with the newly updated code. 
For\n> > > example if we needed to return multiple steps and only combine them at\n> > > the top level then we now can't. I feel there's a good possibility\n> > > that we'll never need to do that, but I'm not certain of that.\n> > >\n> > > I'm keen to hear your opinion on this.\n> >\n> > That's a good point. So maybe gen_partprune_steps_internal() should\n> > continue to return a list of steps, the last of which would be an\n> > intersect step to combine the results of the earlier multiple steps.\n> > We should still fix the originally reported issue that\n> > gen_prune_steps_from_opexps() seems to needlessly add an intersect\n> > step.\n>\n> I was hoping you'd just say that we'll likely not need to do that and\n> if we ever did we could adapt the code at that time. :)\n>\n> Thinking more about it, these steps we're talking about are generated\n> from a recursive call to gen_partprune_steps_internal(). I'm finding\n> it very hard to imagine that we'd want to combine steps generated in\n> some recursive call with steps from outside that same call. Right now\n> we recuse into AND BoolExprs OR BoolExprs. I'm struggling to think of\n> why we'd want to combine a set of steps we generated processing some\n> of those with steps from outside that BoolExpr. If we did, we might\n> want to consider teaching canonicalize_qual() to fix it beforehand.\n>\n> e.g.\n>\n> postgres=# explain select * from ab where (a = 1 and b = 1) or (a = 1\n> and b = 2);\n> QUERY PLAN\n> ---------------------------------------------------\n> Seq Scan on ab (cost=0.00..49.55 rows=1 width=8)\n> Filter: ((a = 1) AND ((b = 1) OR (b = 2)))\n> (2 rows)\n>\n> If canonicalize_qual() had been unable to rewrite that WHERE clause\n> then I could see that we might want to combine steps from other\n> recursive quals. 
I'm thinking right now that I'm glad\n> canonicalize_qual() does that hard work for us.\n> (I think partprune.c\n> could handle the original WHERE clause as-is in this example\n> anyway...)\n\nActually, I am not sure that canonicalization always makes things\nbetter for partprune.c. I can show examples where canonicalization\ncauses partprune.c as it is today to not be able to prune as optimally\nas it could have with the original ones.\n\ncreate table ab (a int, b int) partition by range (a, b);\ncreate table ab0 partition of ab for values from (1, 1) to (1, 2);\ncreate table ab1 partition of ab for values from (1, 2) to (1, 3);\ncreate table ab2 partition of ab for values from (1, 3) to (1, 4);\ncreate table ab3 partition of ab for values from (2, 1) to (2, 2);\n\nexplain select * from ab where (a = 1 and b = 1) or (a = 1 and b = 2);\n QUERY PLAN\n---------------------------------------------------------------\n Append (cost=0.00..148.66 rows=3 width=8)\n -> Seq Scan on ab0 ab_1 (cost=0.00..49.55 rows=1 width=8)\n Filter: ((a = 1) AND ((b = 1) OR (b = 2)))\n -> Seq Scan on ab1 ab_2 (cost=0.00..49.55 rows=1 width=8)\n Filter: ((a = 1) AND ((b = 1) OR (b = 2)))\n -> Seq Scan on ab2 ab_3 (cost=0.00..49.55 rows=1 width=8)\n Filter: ((a = 1) AND ((b = 1) OR (b = 2)))\n(7 rows)\n\nexplain select * from ab where (a = 1 and b = 1) or (a = 1 and b = 3);\n QUERY PLAN\n---------------------------------------------------------------\n Append (cost=0.00..148.66 rows=3 width=8)\n -> Seq Scan on ab0 ab_1 (cost=0.00..49.55 rows=1 width=8)\n Filter: ((a = 1) AND ((b = 1) OR (b = 3)))\n -> Seq Scan on ab1 ab_2 (cost=0.00..49.55 rows=1 width=8)\n Filter: ((a = 1) AND ((b = 1) OR (b = 3)))\n -> Seq Scan on ab2 ab_3 (cost=0.00..49.55 rows=1 width=8)\n Filter: ((a = 1) AND ((b = 1) OR (b = 3)))\n(7 rows)\n\nI would've expected the 1st query to scan ab0 and ab1, whereas the 2nd\nquery to scan ab0 and ab2. 
But in the canonicalized version, the\nAND's 2nd arm is useless for multi-column range pruning, because it\nonly provides clauses for the 2nd key. With the original version,\nboth arms of the OR have ANDed clauses covering both keys, so pruning\nwith that would have produced the desired result.\n\nSo, if I am not entirely wrong, maybe it is exactly because of\ncanonicalization that partprune.c should be looking to peek across\nBoolExprs.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Apr 2021 23:07:23 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Thu, 8 Apr 2021 at 00:49, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Thanks David. Actually, I was busy updating the patch to revert to\n> gen_partprune_steps_internal() returning a list and was almost done\n> with it when I saw your message.\n>\n> I read through v3 and can say that it certainly looks better than v2.\n> If you are happy with gen_partprune_steps_internal() no longer\n> returning a list, I would not object if you wanted to go ahead and\n> commit the v3.\n>\n> I've attached the patch I had ended up with and was about to post as\n> v3, just in case you wanted to glance.\n\nThanks. I've made a pass over that and just fixed up the places that\nwere mixing up NIL and NULL.\n\nI applied most of my comments from my last version after adapting them\nto account for the variation in the functions return value. 
I also did\na bit more explaining about op steps and combine steps in the header\ncomment for gen_partprune_steps_internal.\n\nPatch attached.\n\nDavid", "msg_date": "Thu, 8 Apr 2021 20:34:28 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Thu, Apr 8, 2021 at 5:34 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 8 Apr 2021 at 00:49, Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > Thanks David. Actually, I was busy updating the patch to revert to\n> > gen_partprune_steps_internal() returning a list and was almost done\n> > with it when I saw your message.\n> >\n> > I read through v3 and can say that it certainly looks better than v2.\n> > If you are happy with gen_partprune_steps_internal() no longer\n> > returning a list, I would not object if you wanted to go ahead and\n> > commit the v3.\n> >\n> > I've attached the patch I had ended up with and was about to post as\n> > v3, just in case you wanted to glance.\n>\n> Thanks. I've made a pass over that and just fixed up the places that\n> were mixing up NIL and NULL.\n>\n> I applied most of my comments from my last version after adapting them\n> to account for the variation in the functions return value. I also did\n> a bit more explaining about op steps and combine steps in the header\n> comment for gen_partprune_steps_internal.\n\nThanks for updating the patch.\n\n+ * These partition pruning steps come in 2 forms; operation steps and combine\n+ * steps.\n\nMaybe you meant \"operator\" steps? 
IIRC, the reason why we named it\nPartitionPruneStepOp is that an op step is built to prune based on the\nsemantics of the operators that were involved in the matched clause.\nAlthough, they're abused for pruning based on nullness clauses too.\nMaybe, we should also updated the description of node struct as\nfollows to consider that last point:\n\n * PartitionPruneStepOp - Information to prune using a set of mutually ANDed\n * OpExpr and any IS [ NOT ] NULL clauses\n\n+ * Combine steps (PartitionPruneStepCombine) instruct the partition pruning\n+ * code how it should produce a single set of partitions from multiple input\n+ * operation steps.\n\nI think the last part should be: ...from multiple operation/operator\nand [ other ] combine steps.\n\nIf that sounds fine, likewise adjust the following sentences in the\nsame paragraph.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Apr 2021 18:03:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Thu, 8 Apr 2021 at 21:04, Amit Langote <amitlangote09@gmail.com> wrote:\n> + * These partition pruning steps come in 2 forms; operation steps and combine\n> + * steps.\n>\n> Maybe you meant \"operator\" steps? IIRC, the reason why we named it\n> PartitionPruneStepOp is that an op step is built to prune based on the\n> semantics of the operators that were involved in the matched clause.\n> Although, they're abused for pruning based on nullness clauses too.\n> Maybe, we should also updated the description of node struct as\n> follows to consider that last point:\n\nOh right. Thanks. 
I fixed that.\n\n> * PartitionPruneStepOp - Information to prune using a set of mutually ANDed\n> * OpExpr and any IS [ NOT ] NULL clauses\n>\n> + * Combine steps (PartitionPruneStepCombine) instruct the partition pruning\n> + * code how it should produce a single set of partitions from multiple input\n> + * operation steps.\n\nI didn't add that. I wasn't really sure if I understood why we'd talk\nabout PartitionPruneStepCombine in the PartitionPruneStepOp. I thought\nthe overview in gen_partprune_steps_internal was ok to link the two\ntogether and explain why they're both needed.\n\n> I think the last part should be: ...from multiple operation/operator\n> and [ other ] combine steps.\n\nChange that and pushed.\n\nDavid\n\n\n", "msg_date": "Thu, 8 Apr 2021 22:40:46 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Thu, Apr 8, 2021 at 7:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 8 Apr 2021 at 21:04, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Maybe, we should also updated the description of node struct as\n> > follows to consider that last point:\n>>\n> > * PartitionPruneStepOp - Information to prune using a set of mutually ANDed\n> > * OpExpr and any IS [ NOT ] NULL clauses\n>\n> I didn't add that. I wasn't really sure if I understood why we'd talk\n> about PartitionPruneStepCombine in the PartitionPruneStepOp. I thought\n> the overview in gen_partprune_steps_internal was ok to link the two\n> together and explain why they're both needed.\n\nSorry, maybe the way I wrote it was a bit confusing, but I meant to\nsuggest that we do what I have quoted above from my last email. 
That\nis, we should clarify in the description of PartitionPruneStepOp that\nit contains information derived from OpExprs and in some cases also IS\n[ NOT ] NULL clauses.\n\nThanks for the commit.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Apr 2021 20:58:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" }, { "msg_contents": "On Thu, Apr 8, 2021 at 7:59 PM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> On Thu, Apr 8, 2021 at 7:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > On Thu, 8 Apr 2021 at 21:04, Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > > Maybe, we should also updated the description of node struct as\n> > > follows to consider that last point:\n> >>\n> > > * PartitionPruneStepOp - Information to prune using a set of mutually\n> ANDed\n> > > * OpExpr and any IS [ NOT ] NULL clauses\n> >\n> > I didn't add that. I wasn't really sure if I understood why we'd talk\n> > about PartitionPruneStepCombine in the PartitionPruneStepOp. I thought\n> > the overview in gen_partprune_steps_internal was ok to link the two\n> > together and explain why they're both needed.\n>\n> Sorry, maybe the way I wrote it was a bit confusing, but I meant to\n> suggest that we do what I have quoted above from my last email. That\n> is, we should clarify in the description of PartitionPruneStepOp that\n> it contains information derived from OpExprs and in some cases also IS\n> [ NOT ] NULL clauses.\n>\n> Thanks for the commit.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>\n\nThanks for the patch.\n\nRecently I am reading the partition prune code again, and want to\npropose some tiny changes. 
That is helpful for me and hope it is\nhelpful for others as well, especially for the people who are not familiar\nwith these codes.\n\n-- v1-0001-Document-enhancement-for-RelOptInfo.partexprs-nul.patch\n\nJust add comments for RelOptInfo.partexprs & nullable_partexprs to\nremind the reader nullable_partexprs is just for partition wise join. and\nuse bms_add_member(relinfo->all_partrels, childRTindex); instead of\nbms_add_members(relinfo->all_partrels, childrelinfo->relids); which\nwould be more explicit to say add the child rt index to all_partrels.\n\n-- v1-0002-Split-gen_prune_steps_from_exprs-into-some-smalle.patch\n\nJust split the gen_prune_steps_from_opexps into some smaller chunks.\nThe benefits are the same as smaller functions.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Mon, 12 Apr 2021 15:58:11 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Wired if-statement in gen_partprune_steps_internal" } ]
[ { "msg_contents": "Hi,\n\nI hit an assertion failure. When asserts disabled, it works fine even with more tables (>5000).\n\nSteps to reproduce:\n\nCREATE TABLE users_table (user_id int, time timestamp, value_1 int, value_2 int, value_3 float, value_4 bigint);\n\n250 relations work fine, see the query (too long to copy & paste here): https://gist.github.com/onderkalaci/2b40a18d989da389ee4fb631e1ad7c0e#file-steps_to_assert_pg-sql-L41\n\n-- when # relations >500, we hit the assertion (too long to copy & paste here):\nSee the query: https://gist.github.com/onderkalaci/2b40a18d989da389ee4fb631e1ad7c0e#file-steps_to_assert_pg-sql-L45\n\n\nAnd, the backtrace:\n\n(lldb) bt\n* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT\n * frame #0: 0x00007fff639fa2c2 libsystem_kernel.dylib`__pthread_kill + 10\n frame #1: 0x00007fff63ab5bf1 libsystem_pthread.dylib`pthread_kill + 284\n frame #2: 0x00007fff639646a6 libsystem_c.dylib`abort + 127\n frame #3: 0x0000000102180a02 postgres`ExceptionalCondition(conditionName=<unavailable>, errorType=<unavailable>, fileName=<unavailable>, lineNumber=<unavailable>) at assert.c:67:2\n frame #4: 0x0000000101ece9b2 postgres`initial_cost_mergejoin(root=0x7ff0000000000000, workspace=0x00007ffeedf5b528, jointype=JOIN_INNER, mergeclauses=<unavailable>, outer_path=0x000000012ebf12d0, inner_path=0x4093d80000000000, outersortkeys=0x0000000000000000, innersortkeys=0x000000012ebf68e8, extra=0x00007ffeedf5b6f8) at costsize.c:3043:2\n frame #5: 0x0000000101eda01b postgres`try_mergejoin_path(root=0x0000000104a12618, joinrel=0x000000012ebeede0, outer_path=0x000000012ebf12d0, inner_path=0x00000001283d00e8, pathkeys=0x000000012ebf67e0, mergeclauses=0x000000012ebf6890, outersortkeys=0x0000000000000000, innersortkeys=0x000000012ebf68e8, jointype=JOIN_LEFT, extra=0x00007ffeedf5b6f8, is_partial=<unavailable>) at joinpath.c:615:2\n frame #6: 0x0000000101ed9426 postgres`sort_inner_and_outer(root=0x0000000104a12618, joinrel=0x000000012ebeede0, 
outerrel=<unavailable>, innerrel=<unavailable>, jointype=JOIN_LEFT, extra=0x00007ffeedf5b6f8) at joinpath.c:1038:3\n frame #7: 0x0000000101ed8f7a postgres`add_paths_to_joinrel(root=0x0000000104a12618, joinrel=0x000000012ebeede0, outerrel=0x000000012ebe7b48, innerrel=0x0000000127f146e0, jointype=<unavailable>, sjinfo=<unavailable>, restrictlist=0x000000012ebf42b0) at joinpath.c:269:3\n frame #8: 0x0000000101edbdc6 postgres`populate_joinrel_with_paths(root=0x0000000104a12618, rel1=0x000000012ebe7b48, rel2=0x0000000127f146e0, joinrel=0x000000012ebeede0, sjinfo=0x000000012809edc8, restrictlist=0x000000012ebf42b0) at joinrels.c:824:4\n frame #9: 0x0000000101edb57a postgres`make_join_rel(root=0x0000000104a12618, rel1=0x000000012ebe7b48, rel2=0x0000000127f146e0) at joinrels.c:760:2\n frame #10: 0x0000000101edb1ec postgres`make_rels_by_clause_joins(root=0x0000000104a12618, old_rel=0x000000012ebe7b48, other_rels_list=<unavailable>, other_rels=<unavailable>) at joinrels.c:312:11\n frame #11: 0x0000000101edada3 postgres`join_search_one_level(root=0x0000000104a12618, level=2) at joinrels.c:123:4\n frame #12: 0x0000000101ec7feb postgres`standard_join_search(root=0x0000000104a12618, levels_needed=8, initial_rels=0x000000012ebf4078) at allpaths.c:3097:3\n frame #13: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280a5618) at allpaths.c:2993:14\n frame #14: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280ab320) at allpaths.c:2993:14\n frame #15: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280b1028) at allpaths.c:2993:14\n frame #16: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280b6d30) at allpaths.c:2993:14\n frame #17: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280bca38) at allpaths.c:2993:14\n frame #18: 0x0000000101ec6b38 
postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280c2740) at allpaths.c:2993:14\n frame #19: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280c8448) at allpaths.c:2993:14\n frame #20: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280ce150) at allpaths.c:2993:14\n frame #21: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280d3e58) at allpaths.c:2993:14\n frame #22: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280d9b60) at allpaths.c:2993:14\n frame #23: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280df868) at allpaths.c:2993:14\n frame #24: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280e5570) at allpaths.c:2993:14\n frame #25: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280eb278) at allpaths.c:2993:14\n frame #26: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280f0f80) at allpaths.c:2993:14\n frame #27: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001280f8d88) at allpaths.c:2993:14\n frame #28: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128101810) at allpaths.c:2993:14\n frame #29: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012810a298) at allpaths.c:2993:14\n frame #30: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128112d20) at allpaths.c:2993:14\n frame #31: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012811b7a8) at allpaths.c:2993:14\n frame #32: 0x0000000101ec6b38 
postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128124230) at allpaths.c:2993:14\n frame #33: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012812ccb8) at allpaths.c:2993:14\n frame #34: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128135740) at allpaths.c:2993:14\n frame #35: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012813e1c8) at allpaths.c:2993:14\n frame #36: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128146c50) at allpaths.c:2993:14\n frame #37: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012814f6d8) at allpaths.c:2993:14\n frame #38: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128158160) at allpaths.c:2993:14\n frame #39: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128160be8) at allpaths.c:2993:14\n frame #40: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128169670) at allpaths.c:2993:14\n frame #41: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281720f8) at allpaths.c:2993:14\n frame #42: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012817ab80) at allpaths.c:2993:14\n frame #43: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128183608) at allpaths.c:2993:14\n frame #44: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012818c090) at allpaths.c:2993:14\n frame #45: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128194b18) at allpaths.c:2993:14\n frame #46: 0x0000000101ec6b38 
postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012819d5a0) at allpaths.c:2993:14\n frame #47: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281a6028) at allpaths.c:2993:14\n frame #48: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281aeab0) at allpaths.c:2993:14\n frame #49: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281b7538) at allpaths.c:2993:14\n frame #50: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281bffc0) at allpaths.c:2993:14\n frame #51: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281c8a48) at allpaths.c:2993:14\n frame #52: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281d14d0) at allpaths.c:2993:14\n frame #53: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281d9f58) at allpaths.c:2993:14\n frame #54: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281e29e0) at allpaths.c:2993:14\n frame #55: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281eb468) at allpaths.c:2993:14\n frame #56: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281f3ef0) at allpaths.c:2993:14\n frame #57: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001281fc978) at allpaths.c:2993:14\n frame #58: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128205400) at allpaths.c:2993:14\n frame #59: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012820de88) at allpaths.c:2993:14\n frame #60: 0x0000000101ec6b38 
postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128216910) at allpaths.c:2993:14\n frame #61: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012821f398) at allpaths.c:2993:14\n frame #62: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128227e20) at allpaths.c:2993:14\n frame #63: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282308a8) at allpaths.c:2993:14\n frame #64: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128239330) at allpaths.c:2993:14\n frame #65: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128241db8) at allpaths.c:2993:14\n frame #66: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012824a840) at allpaths.c:2993:14\n frame #67: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282532c8) at allpaths.c:2993:14\n frame #68: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012825bd50) at allpaths.c:2993:14\n frame #69: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282647d8) at allpaths.c:2993:14\n frame #70: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012826d260) at allpaths.c:2993:14\n frame #71: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128275ce8) at allpaths.c:2993:14\n frame #72: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012827e770) at allpaths.c:2993:14\n frame #73: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282871f8) at allpaths.c:2993:14\n frame #74: 0x0000000101ec6b38 
postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012828fc80) at allpaths.c:2993:14\n frame #75: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128298708) at allpaths.c:2993:14\n frame #76: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282a1190) at allpaths.c:2993:14\n frame #77: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282a9c18) at allpaths.c:2993:14\n frame #78: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282b26a0) at allpaths.c:2993:14\n frame #79: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282bb128) at allpaths.c:2993:14\n frame #80: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282c3bb0) at allpaths.c:2993:14\n frame #81: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282cc638) at allpaths.c:2993:14\n frame #82: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282d50c0) at allpaths.c:2993:14\n frame #83: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282ddb48) at allpaths.c:2993:14\n frame #84: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282e65d0) at allpaths.c:2993:14\n frame #85: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282ef058) at allpaths.c:2993:14\n frame #86: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001282f7ae0) at allpaths.c:2993:14\n frame #87: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128300568) at allpaths.c:2993:14\n frame #88: 0x0000000101ec6b38 
postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128308ff0) at allpaths.c:2993:14\n frame #89: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128311a78) at allpaths.c:2993:14\n frame #90: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012831a500) at allpaths.c:2993:14\n frame #91: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128322f88) at allpaths.c:2993:14\n frame #92: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012832ba10) at allpaths.c:2993:14\n frame #93: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128334498) at allpaths.c:2993:14\n frame #94: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012833cf20) at allpaths.c:2993:14\n frame #95: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001283459a8) at allpaths.c:2993:14\n frame #96: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012834e430) at allpaths.c:2993:14\n frame #97: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x0000000128356eb8) at allpaths.c:2993:14\n frame #98: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012835f940) at allpaths.c:2993:14\n frame #99: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x00000001283683c8) at allpaths.c:2993:14\n frame #100: 0x0000000101ec6b38 postgres`make_rel_from_joinlist(root=0x0000000104a12618, joinlist=0x000000012837e358) at allpaths.c:2993:14\n frame #101: 0x0000000101ec688f postgres`make_one_rel(root=0x0000000104a12618, joinlist=0x000000012837e358) at allpaths.c:227:8\n frame #102: 0x0000000101eec187 postgres`query_planner(root=0x0000000104a12618, 
qp_callback=<unavailable>, qp_extra=0x00007ffeedf5d000) at planmain.c:269:14\n frame #103: 0x0000000101eeea9b postgres`grouping_planner(root=0x0000000104a12618, inheritance_update=<unavailable>, tuple_fraction=<unavailable>) at planner.c:2058:17\n frame #104: 0x0000000101eed1a1 postgres`subquery_planner(glob=<unavailable>, parse=0x00000001049ad620, parent_root=<unavailable>, hasRecursion=<unavailable>, tuple_fraction=0) at planner.c:1015:3\n frame #105: 0x0000000101eec3b6 postgres`standard_planner(parse=0x00000001049ad620, query_string=<unavailable>, cursorOptions=256, boundParams=0x0000000000000000) at planner.c:405:9\n frame #106: 0x0000000101faeaf1 postgres`pg_plan_query(querytree=0x00000001049ad620, query_string=\"SELECT count(*) FROM users_table u_1123123123123123 LEFT JOIN users_table u0 USING (user_id) LEFT JOIN users_table u1 USING (user_id) LEFT JOIN users_table u2 USING (user_id) LEFT JOIN users_table u3 USING (user_id) LEFT JOIN users_table u4 USING (user_id) LEFT JOIN users_table u5 USING (user_id) LEFT JOIN users_table u6 USING (user_id) LEFT JOIN users_table u7 USING (user_id) LEFT JOIN users_table u8 USING (user_id) LEFT JOIN users_table u9 USING (user_id) LEFT JOIN users_table u10 USING (user_id) LEFT JOIN users_table u11 USING (user_id) LEFT JOIN users_table u12 USING (user_id) LEFT JOIN users_table u13 USING (user_id) LEFT JOIN users_table u14 USING (user_id) LEFT JOIN users_table u15 USING (user_id) LEFT JOIN users_table u16 USING (user_id) LEFT JOIN users_table u17 USING (user_id) LEFT JOIN users_table u18 USING (user_id) LEFT JOIN users_table u19 USING (user_id) LEFT JOIN users_table u20 USING (user_id) LEFT JOIN users_table u21 USING (user_id) LEFT JOIN users_table u22 USING (use\"..., cursorOptions=256, boundParams=0x0000000000000000) at postgres.c:875:9\n frame #107: 0x0000000101faec32 postgres`pg_plan_queries(querytrees=0x00000001275c20e0, query_string=\"SELECT count(*) FROM users_table u_1123123123123123 LEFT JOIN users_table u0 USING 
(user_id) LEFT JOIN users_table u1 USING (user_id) LEFT JOIN users_table u2 USING (user_id) LEFT JOIN users_table u3 USING (user_id) LEFT JOIN users_table u4 USING (user_id) LEFT JOIN users_table u5 USING (user_id) LEFT JOIN users_table u6 USING (user_id) LEFT JOIN users_table u7 USING (user_id) LEFT JOIN users_table u8 USING (user_id) LEFT JOIN users_table u9 USING (user_id) LEFT JOIN users_table u10 USING (user_id) LEFT JOIN users_table u11 USING (user_id) LEFT JOIN users_table u12 USING (user_id) LEFT JOIN users_table u13 USING (user_id) LEFT JOIN users_table u14 USING (user_id) LEFT JOIN users_table u15 USING (user_id) LEFT JOIN users_table u16 USING (user_id) LEFT JOIN users_table u17 USING (user_id) LEFT JOIN users_table u18 USING (user_id) LEFT JOIN users_table u19 USING (user_id) LEFT JOIN users_table u20 USING (user_id) LEFT JOIN users_table u21 USING (user_id) LEFT JOIN users_table u22 USING (use\"..., cursorOptions=256, boundParams=0x0000000000000000) at postgres.c:966:11\n frame #108: 0x0000000101fb09fa postgres`exec_simple_query(query_string=\"SELECT count(*) FROM users_table u_1123123123123123 LEFT JOIN users_table u0 USING (user_id) LEFT JOIN users_table u1 USING (user_id) LEFT JOIN users_table u2 USING (user_id) LEFT JOIN users_table u3 USING (user_id) LEFT JOIN users_table u4 USING (user_id) LEFT JOIN users_table u5 USING (user_id) LEFT JOIN users_table u6 USING (user_id) LEFT JOIN users_table u7 USING (user_id) LEFT JOIN users_table u8 USING (user_id) LEFT JOIN users_table u9 USING (user_id) LEFT JOIN users_table u10 USING (user_id) LEFT JOIN users_table u11 USING (user_id) LEFT JOIN users_table u12 USING (user_id) LEFT JOIN users_table u13 USING (user_id) LEFT JOIN users_table u14 USING (user_id) LEFT JOIN users_table u15 USING (user_id) LEFT JOIN users_table u16 USING (user_id) LEFT JOIN users_table u17 USING (user_id) LEFT JOIN users_table u18 USING (user_id) LEFT JOIN users_table u19 USING (user_id) LEFT JOIN users_table u20 USING (user_id) 
LEFT JOIN users_table u21 USING (user_id) LEFT JOIN users_table u22 USING (use\"...) at postgres.c:1158:19\n frame #109: 0x0000000101fb024e postgres`PostgresMain(argc=<unavailable>, argv=<unavailable>, dbname=<unavailable>, username=<unavailable>) at postgres.c:0\n frame #110: 0x0000000101f35f65 postgres`BackendRun(port=0x0000000000000001) at postmaster.c:4536:2\n frame #111: 0x0000000101f35830 postgres`BackendStartup(port=<unavailable>) at postmaster.c:4220:3\n frame #112: 0x0000000101f35005 postgres`ServerLoop at postmaster.c:1739:7\n frame #113: 0x0000000101f3321c postgres`PostmasterMain(argc=3, argv=0x00007fc7a7403250) at postmaster.c:1412:11\n frame #114: 0x0000000101e91e06 postgres`main(argc=3, argv=0x00007fc7a7403250) at main.c:210:3\n frame #115: 0x00007fff638bf3d5 libdyld.dylib`start + 1\n frame #116: 0x00007fff638bf3d5 libdyld.dylib`start + 1\n\n\n\nSELECT version();\n version\n-------------------------------------------------------------------------------------------------------------------\nPostgreSQL 13.0 on x86_64-apple-darwin18.7.0, compiled by Apple clang version 11.0.0 (clang-1100.0.33.17), 64-bit\n(1 row)\n\n\npg_config\nBINDIR = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/bin\nDOCDIR = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/share/doc\nHTMLDIR = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/share/doc\nINCLUDEDIR = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/include\nPKGINCLUDEDIR = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/include\nINCLUDEDIR-SERVER = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/include/server\nLIBDIR = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/lib\nPKGLIBDIR = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/lib\nLOCALEDIR = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/share/locale\nMANDIR = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/share/man\nSHAREDIR = 
/Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/share\nSYSCONFDIR = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/etc\nPGXS = /Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0/lib/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = '--prefix=/Users/onderkalaci/Documents/citus_code/pgenv/pgsql-13.0' '--enable-debug' '--enable-cassert' 'CFLAGS=-ggdb -Og -g3 -fno-omit-frame-pointer' '--with-openssl' '--with-icu' 'LDFLAGS=-L/usr/local/opt/readline/lib -L/usr/local/opt/openssl/lib ' 'CPPFLAGS=-I/usr/local/opt/readline/include -I/usr/local/opt/openssl/include/ ' 'PKG_CONFIG_PATH=/usr/local/opt/icu4c/lib/pkgconfig'\nCC = gcc\nCPPFLAGS = -I/usr/local/Cellar/icu4c/66.1/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk -I/usr/local/opt/readline/include -I/usr/local/opt/openssl/include/\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -g -ggdb -Og -g3 -fno-omit-frame-pointer\nCFLAGS_SL =\nLDFLAGS = -L/usr/local/opt/readline/lib -L/usr/local/opt/openssl/lib -Wl,-dead_strip_dylibs\nLDFLAGS_EX =\nLDFLAGS_SL =\nLIBS = -lpgcommon -lpgport -lssl -lcrypto -lz -lreadline -lm\nVERSION = PostgreSQL 13.0", "msg_date": "Thu, 8 Oct 2020 11:26:21 +0000", "msg_from": "Onder Kalaci <onderk@microsoft.com>", "msg_from_op": true, "msg_subject": "Assertion failure with
LEFT JOINs among >500 relations" }, { "msg_contents": "On Fri, 9 Oct 2020 at 08:16, Onder Kalaci <onderk@microsoft.com> wrote:\n> I hit an assertion failure. When asserts disabled, it works fine even with more tables (>5000).\n>\n> Steps to reproduce:\n> CREATE TABLE users_table (user_id int, time timestamp, value_1 int, value_2 int, value_3 float, value_4 bigint);\n> 250 relations work fine, see the query (too long to copy & paste here): https://gist.github.com/onderkalaci/2b40a18d989da389ee4fb631e1ad7c0e#file-steps_to_assert_pg-sql-L41\n\nI had a quick look at this and I can recreate it using the following\n(using psql)\n\nselect 'explain select count(*) from users_table ' || string_Agg('LEFT\nJOIN users_table u'|| x::text || ' USING (user_id)',' ') from\ngenerate_Series(1,379)x;\n\\gexec\n\nThat triggers the assert due to the Assert(outer_skip_rows <=\nouter_rows); failing in initial_cost_mergejoin().\n\nThe reason it fails is that outer_path_rows has become infinity due to\ncalc_joinrel_size_estimate continually multiplying in the join\nselectivity of 0.05 (due to our 200 default num distinct from lack of\nany stats) which after a number of iterations causes the number to\nbecome very large.\n\nInstead of running 379 joins from above, try with 378 and you get:\n\n Aggregate (cost=NaN..NaN rows=1 width=8)\n -> Nested Loop Left Join (cost=33329.16..NaN rows=Infinity width=0)\n Join Filter: (users_table.user_id = u378.user_id)\n -> Merge Left Join (cost=33329.16..<very large number> width=4)\n Merge Cond: (users_table.user_id = u377.user_id)\n -> Merge Left Join (cost=33240.99..<very large number> width=4)\n\nChanging the code in initial_cost_mergejoin() to add:\n\nif (outer_path_rows <= 0 || isnan(outer_path_rows))\n outer_path_rows = 1;\n+else if (isinf(outer_path_rows))\n+ outer_path_rows = DBL_MAX;\n\ndoes seem to fix the problem, but that's certainly not the right fix.\n\nPerhaps the right fix is to modify clamp_row_est() with:\n\n@@ -193,7 +194,9 @@ 
clamp_row_est(double nrows)\n * better and to avoid possible divide-by-zero when interpolating costs.\n * Make it an integer, too.\n */\n- if (nrows <= 1.0)\n+ if (isinf(nrows))\n+ nrows = rint(DBL_MAX);\n+ else if (nrows <= 1.0)\n nrows = 1.0;\n else\n nrows = rint(nrows);\n\nbut the row estimates are getting pretty insane well before then.\nDBL_MAX is 226 orders of magnitude more than the estimated number of\natoms in the observable universe, so it seems pretty unreasonable that\nsomeone might figure out a way to store that many tuples on a disk any\ntime soon.\n\nPerhaps DBL_MAX is way too big a number to clamp at. I'm just not sure\nwhat we should reduce it to so that it is reasonable.\n\nDavid\n\n\n", "msg_date": "Fri, 9 Oct 2020 11:27:17 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> The reason it fails is that outer_path_rows has become infinity due to\n> calc_joinrel_size_estimate continually multiplying in the join\n> selectivity of 0.05 (due to our 200 default num distinct from lack of\n> any stats) which after a number of iterations causes the number to\n> become very large.\n\n0.005, but yeah. We're estimating that each additional join inflates\nthe output size by about 6x (1270 * 0.005), and after a few hundred\nof those, it'll overflow.\n\n> Perhaps the right fix is to modify clamp_row_est() with:\n\nI thought of that too, but as you say, if the rowcount has overflowed a\ndouble then we've got way worse problems. It'd make more sense to try\nto keep the count to a saner value in the first place. 
\n\nIn the end, (a) this is an Assert, so not a problem for production\nsystems, and (b) it's going to take you longer than you want to\nwait to join 500+ tables, anyhow, unless maybe they're empty.\nI'm kind of disinclined to do anything in the way of a band-aid fix.\n\nIf somebody has an idea for a different way of estimating the join\nsize with no stats, we could talk about that. I notice though that\nthe only way a plan of this sort isn't going to blow up at execution\nis if the join multiplication factor is at most 1, ie the join\nkey is unique. But guess what, we already know what to do in that\ncase. Adding a unique or pkey constraint to users_table.user_id\ncauses the plan to collapse entirely (if they're left joins) or\nat least still produce a small rowcount estimate (if plain joins).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Oct 2020 19:16:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "On Fri, 9 Oct 2020 at 12:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > Perhaps the right fix is to modify clamp_row_est() with:\n>\n> I thought of that too, but as you say, if the rowcount has overflowed a\n> double then we've got way worse problems. It'd make more sense to try\n> to keep the count to a saner value in the first place.\n\nI wonder if there was something more logical we could do to maintain\nsane estimates too, but someone could surely still cause it to blow up\nby writing a long series of clause-less joins. 
We can't really get\naway from the fact that we must estimate those as inner_rows *\nouter_rows\n\nI admit it's annoying to add cycles to clamp_row_est() for such insane cases.\n\nDavid\n\n\n", "msg_date": "Fri, 9 Oct 2020 12:27:21 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I admit it's annoying to add cycles to clamp_row_est() for such insane cases.\n\nI poked at this a bit more closely, and noted that the actual problem is\nthat when we do this:\n\n\touter_skip_rows = rint(outer_path_rows * outerstartsel);\n\nwe have outer_path_rows = inf, outerstartsel = 0, and of course inf times\nzero is NaN. So we end up asserting \"NaN <= Inf\", not \"Inf <= Inf\"\n(which wouldn't have caused a problem).\n\nIf we did want to do something here, I'd consider something like\n\n\tif (isnan(outer_skip_rows))\n\t outer_skip_rows = 0;\n\tif (isnan(inner_skip_rows))\n\t inner_skip_rows = 0;\n\n(We shouldn't need that for outer_rows/inner_rows, since the endsel\nvalues can't be 0.) Messing with clamp_row_est would be a much more\nindirect way of fixing it, as well as having more widespread effects.\n\nIn the end though, I'm still not terribly excited about this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Oct 2020 19:59:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "On Fri, 9 Oct 2020 at 12:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we did want to do something here, I'd consider something like\n>\n> if (isnan(outer_skip_rows))\n> outer_skip_rows = 0;\n> if (isnan(inner_skip_rows))\n> inner_skip_rows = 0;\n\nAre you worried about the costs above the join that triggers that\ncoming out as NaN with that fix? It appears that's the case. 
Cost\ncomparisons of paths with that are not going to do anything along the\nlines of sane.\n\nI guess whether or not that matters depends on if we expect any real\nqueries to hit this, or if we just want to stop the Assert failure.\n\n... 500 joins. I'm willing to listen to the explanation use case, but\nin absence of that explanation, I'd be leaning towards \"you're doing\nit wrong\". If that turns out to be true, then perhaps your proposed\nfix is okay.\n\nDavid\n\n\n", "msg_date": "Fri, 9 Oct 2020 13:36:18 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Are you worried about the costs above the join that triggers that\n> coming out as NaN with that fix? It appears that's the case.\n\n[ pokes at that... ] Yeah, it looks like nestloop cost estimation\nalso has some issues with inf-times-zero producing NaN; it's just\nnot asserting about it.\n\nI notice there are some other ad-hoc isnan() checks scattered\nabout costsize.c, too. Maybe we should indeed consider fixing\nclamp_row_estimate to get rid of inf (and nan too, I suppose)\nso that we'd not need those. I don't recall the exact cases\nthat made us introduce those checks, but they were for cases\na lot more easily reachable than this one, I believe.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Oct 2020 22:06:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "On Fri, 9 Oct 2020 at 15:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I notice there are some other ad-hoc isnan() checks scattered\n> about costsize.c, too. Maybe we should indeed consider fixing\n> clamp_row_estimate to get rid of inf (and nan too, I suppose)\n> so that we'd not need those. 
I don't recall the exact cases\n> that made us introduce those checks, but they were for cases\n> a lot more easily reachable than this one, I believe.\n\nIs there actually a case where nrows could be NaN? If not, then it\nseems like a wasted check. Wouldn't it take one of the input\nrelations to have an Inf row estimate (which won't\nhappen after changing clamp_row_estimate()), or the selectivity\nestimate being NaN?\n\nDavid\n\n\n", "msg_date": "Fri, 9 Oct 2020 17:32:35 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Fri, 9 Oct 2020 at 15:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I notice there are some other ad-hoc isnan() checks scattered\n>> about costsize.c, too. Maybe we should indeed consider fixing\n>> clamp_row_estimate to get rid of inf (and nan too, I suppose)\n>> so that we'd not need those. I don't recall the exact cases\n>> that made us introduce those checks, but they were for cases\n>> a lot more easily reachable than this one, I believe.\n\n> Is there actually a case where nrows could be NaN? If not, then it\n> seems like a wasted check. Wouldn't it take one of the input\n> relations to have an Inf row estimate (which won't\n> happen after changing clamp_row_estimate()), or the selectivity\n> estimate being NaN?\n\nI'm fairly certain that every one of the existing NaN checks was put\nthere on the basis of hard experience. 
Possibly digging in the git\nhistory would offer more info about exactly where the NaNs came from.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Oct 2020 09:19:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "On Sat, 10 Oct 2020 at 02:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm fairly certain that every one of the existing NaN checks was put\n> there on the basis of hard experience. Possibly digging in the git\n> history would offer more info about exactly where the NaNs came from.\n\n\nI had a look at this and found there's been quite a number of fixes\nwhich added either the isnan checks or the <= 0 checks.\n\nNamely:\n\n-----------\ncommit 72826fb362c4aada6d2431df0b706df448806c02\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Fri Apr 15 17:45:41 2011 -0400\n\n Guard against incoming rowcount estimate of NaN in cost_mergejoin().\n\n Although rowcount estimates really ought not be NaN, a bug elsewhere\n could perhaps result in that, and that would cause Assert failure in\n cost_mergejoin, which I believe to be the explanation for bug #5977 from\n Anton Kuznetsov. Seems like a good idea to expend a couple more cycles\n to prevent that, even though the real bug is elsewhere. 
Not back-patching,\n though, because we don't encourage running production systems with\n Asserts on.\n\n\nThe discussion for that is in\nhttps://www.postgresql.org/message-id/flat/4602.1302705756%40sss.pgh.pa.us#69dd8c334aa714cfac4e0d9b04c5201c\n\ncommit 76281aa9647e6a5dfc646514554d0f519e3b8a58\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Sat Mar 26 12:03:12 2016 -0400\n\n Avoid a couple of zero-divide scenarios in the planner.\n\n\n\ncommit fd791e7b5a1bf53131ad15e68e4d4f8ca795fcb4\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Mon Mar 24 21:53:04 2008 +0000\n\n When a relation has been proven empty by constraint exclusion,\npropagate that\n knowledge up through any joins it participates in. We were doing\nthat already\n in some special cases but not in the general case. Also, defend\nagainst zero\n row estimates for the input relations in cost_mergejoin --- this\nfix may have\n eliminated the only scenario in which that can happen, but be safe. Per\n report from Alex Solovey.\n\n\nThat was reported in\nhttps://www.postgresql.org/message-id/flat/BLU136-DAV79FF310AC13FFC96FA2FDAEFD0%40phx.gbl#4cde17b2369fc7e0da83cc7d4aeeaa48\n\nThe problem was that an Append with no subpaths could have a 0 row estimate.\n-----------\n\nBecause there's been quite a few of these, and this report is yet\nanother one, I wonder if it's time to try and stamp these out at the\nsource rather than where the row counts are being used.\n\nI toyed around with the attached patch, but I'm still not that excited\nabout the clamping of infinite values to DBL_MAX. The test case I\nshowed above with generate_Series(1,379) still ends up with NaN cost\nestimates due to costing a sort with DBL_MAX rows. When I was writing\nthe patch, I had it in my head that the costs per row will always be\nlower than 1. I thought because of that that even if the row count is\ndangerously close to DBL_MAX, the costs will never be higher than the\nrow count... 
Turns out, I was wrong about that as clearly sorting a\nnumber of rows even close to DBL_MAX would beyond astronomically\nexpensive and cause the costs would go infinite.\n\nThe fd791e7b5 fix was for a subpath-less Append node having a 0-row\nestimate and causing problems in the costing of merge join. In the\npatch, I thought it would be better just to fix this by insisting that\nAppend always will have at least 1 row. That means even a dummy path\nwould have 1 row, which will become a const-false Result in the plan.\nI've had to add a special case to set the plan_rows back to 0 so that\nEXPLAIN shows 0 rows as it did before. That's not exactly pretty, but\nI still feel there is merit in insisting we never have 0-row paths to\nget away from these types of bugs once at for all.\n\nThe patch does fix the failing Assert. However, something along these\nlines seems more suitable for master only. The back branches maybe\nshould just get a more localised isinf() check and clamp to DBL_MAX\nthat I mentioned earlier in this thread.\n\nI've searched through the code to see if there are other possible\ncases where paths may be generated with a 0-row count. I imagine\nanything that has a qual and performs a selectivity estimate will\nalready have a clamp_row_est() since we'd see fractional row counts if\nit didn't. That leaves me with Append / Merge Append and each join\ntype + aggregates. Currently, it seems we never will generate a Merge\nAppend without any sub-paths. I wondered if I should just Assert\nthat's the case in create_merge_append_path(). I ended up just adding\na clamp_row_est call instead. calc_joinrel_size_estimate() seems to\nhandle all join path row estimates. 
That uses clamp_row_est.\nAggregate paths can reduce the number of rows, but I think all the row\nestimates from those will go through estimate_num_groups(), which\nappears to never be able to return 0.\n\nDavid", "msg_date": "Tue, 13 Oct 2020 22:10:15 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Because there's been quite a few of these, and this report is yet\n> another one, I wonder if it's time to try and stamp these out at the\n> source rather than where the row counts are being used.\n\nI'm on board with trying to get rid of NaN rowcount estimates more\ncentrally. I do not think it is a good idea to try to wire in a\nprohibition against zero rowcounts. That is actually the correct\nthing in assorted scenarios --- one example recently under discussion\nwas ModifyTable without RETURNING, and another is where we can prove\nthat a restriction clause is constant-false. At some point I think\nwe are going to want to deal honestly with those cases instead of\nsweeping them under the rug. So I'm disinclined to remove zero\ndefenses that we'll just have to put back someday.\n\nI think converting Inf to DBL_MAX, in hopes of avoiding creation of\nNaNs later, is fine. (Note that applying rint() to that is quite\nuseless --- in every floating-point system, values bigger than\n2^number-of-mantissa-bits are certainly integral.)\n\nI'm not sure why you propose to map NaN to one. Wouldn't mapping it\nto Inf (and thence to DBL_MAX) make at least as much sense? Probably\nmore in fact. We know that unwarranted one-row estimates are absolute\ndeath to our chances of picking a well-chosen plan.\n\n> I toyed around with the attached patch, but I'm still not that excited\n> about the clamping of infinite values to DBL_MAX. 
The test case I\n> showed above with generate_Series(1,379) still ends up with NaN cost\n> estimates due to costing a sort with DBL_MAX rows. When I was writing\n> the patch, I had it in my head that the costs per row will always be\n> lower than 1.\n\nYeah, that is a good point. Maybe instead of clamping to DBL_MAX,\nwe should clamp rowcounts to something that provides some headroom\nfor multiplication by per-row costs. A max rowcount of say 1e100\nshould serve fine, while still being comfortably more than any\nnon-insane estimate.\n\nSo now I'm imagining something like\n\n#define MAXIMUM_ROWCOUNT 1e100\n\nclamp_row_est(double nrows)\n{\n\t/* Get rid of NaN, Inf, and impossibly large row counts */\n\tif (isnan(nrows) || nrows >= MAXIMUM_ROWCOUNT)\n\t nrows = MAXIMUM_ROWCOUNT;\n\telse\n\t... existing logic ...\n\n\nPerhaps we should also have some sort of clamp for path cost\nestimates, at least to prevent them from being NaNs which\nis going to confuse add_path terribly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Oct 2020 11:16:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "Thanks for having a look at this.\n\nOn Wed, 14 Oct 2020 at 04:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm on board with trying to get rid of NaN rowcount estimates more\n> centrally. I do not think it is a good idea to try to wire in a\n> prohibition against zero rowcounts. That is actually the correct\n> thing in assorted scenarios --- one example recently under discussion\n> was ModifyTable without RETURNING, and another is where we can prove\n> that a restriction clause is constant-false. At some point I think\n> we are going to want to deal honestly with those cases instead of\n> sweeping them under the rug. So I'm disinclined to remove zero\n> defenses that we'll just have to put back someday.\n\nOK, that certainly limits the scope here. 
It just means we can't get\nrid of the <= 0 checks in join costing functions. The problem case\nthat this was added for was a dummy Append. We still have valid cases\nthat won't convert the join rel to a dummy rel with a dummy Append on\none side.\n\n> I think converting Inf to DBL_MAX, in hopes of avoiding creation of\n> NaNs later, is fine. (Note that applying rint() to that is quite\n> useless --- in every floating-point system, values bigger than\n> 2^number-of-mantissa-bits are certainly integral.)\n\nGood point.\n\n> I'm not sure why you propose to map NaN to one. Wouldn't mapping it\n> to Inf (and thence to DBL_MAX) make at least as much sense? Probably\n> more in fact. We know that unwarranted one-row estimates are absolute\n> death to our chances of picking a well-chosen plan.\n\nThat came around due to what the join costing functions were doing. i.e:\n\n/* Protect some assumptions below that rowcounts aren't zero or NaN */\nif (inner_path_rows <= 0 || isnan(inner_path_rows))\n inner_path_rows = 1;\n\n[1] didn't have an example case of how the NaNs were introduced, so I\nwas mostly just copying the logic that was added to fix that back in\n72826fb3.\n\n> > I toyed around with the attached patch, but I'm still not that excited\n> > about the clamping of infinite values to DBL_MAX. The test case I\n> > showed above with generate_Series(1,379) still ends up with NaN cost\n> > estimates due to costing a sort with DBL_MAX rows. When I was writing\n> > the patch, I had it in my head that the costs per row will always be\n> > lower than 1.\n>\n> Yeah, that is a good point. Maybe instead of clamping to DBL_MAX,\n> we should clamp rowcounts to something that provides some headroom\n> for multiplication by per-row costs. A max rowcount of say 1e100\n> should serve fine, while still being comfortably more than any\n> non-insane estimate.\n>\n> So now I'm imagining something like\n>\n> #define MAXIMUM_ROWCOUNT 1e100\n\nThat seems more reasonable. 
We likely could push it a bit higher, but\nI'm not all that motivated to since if that was true, then you could\nexpect the heat death of the universe to arrive before your query\nresults. In which case the user would likely struggle to find\nelectrons to power their computer.\n\n> clamp_row_est(double nrows)\n> {\n> /* Get rid of NaN, Inf, and impossibly large row counts */\n> if (isnan(nrows) || nrows >= MAXIMUM_ROWCOUNT)\n> nrows = MAXIMUM_ROWCOUNT;\n> else\n> ... existing logic ...\n\nI've got something along those lines in the attached.\n\n> Perhaps we should also have some sort of clamp for path cost\n> estimates, at least to prevent them from being NaNs which\n> is going to confuse add_path terribly.\n\nhmm. I'm not quite sure where to start with that one. Many of the\npath estimates will already go through clamp_row_est(). There are\nvarious special requirements, e.g Appends with no subpaths. So when to\napply it would depend on what path type it is. I'd say it would need\nlots of careful analysis and a scattering of new calls in pathnode.c\n\nI've ended up leaving the NaN checks in the join costing functions.\nThere was no case mentioned in [1] that showed how we hit that\nreported test case, so I'm not really confident enough to know I'm not\njust reintroducing the same problem again by removing that. The path\nrow estimate that had the NaN might not have been through\nclamp_row_est(). Many don't.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/7270.1302902842%40sss.pgh.pa.us", "msg_date": "Wed, 14 Oct 2020 15:53:55 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 14 Oct 2020 at 04:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So now I'm imagining something like\n>> #define MAXIMUM_ROWCOUNT 1e100\n\n> That seems more reasonable. 
We likely could push it a bit higher, but\n> I'm not all that motivated to since if that was true, then you could\n> expect the heat death of the universe to arrive before your query\n> results. In which case the user would likely struggle to find\n> electrons to power their computer.\n\nRight. But I'm thinking about joins in which both inputs are clamped to\nthat maximum estimate. If we allowed it to be as high as 1e200, then\nmultiplying the two input rowcounts together would itself overflow.\nAt 1e100, we can do that and also multiply in a ridiculous per-row cost,\nand we're still well below the overflow threshold. So this should go\npretty far towards preventing internal overflows in any one plan step's\ncost & rows calculations.\n\n(For comparison's sake, I believe the number of atoms in the observable\nuniverse is thought to be somewhere on the order of 1e80. So we are\npretty safe in thinking that no practically-useful rowcount estimate\nwill exceed 1e100; there is no need to make it higher.)\n\n> I've ended up leaving the NaN checks in the join costing functions.\n> There was no case mentioned in [1] that showed how we hit that\n> reported test case, so I'm not really confident enough to know I'm not\n> just reintroducing the same problem again by removing that. The path\n> row estimate that had the NaN might not have been through\n> clamp_row_est(). 
Many don't.\n\nHmm, I will try to find some time tomorrow to reconstruct that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Oct 2020 23:26:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "I wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n>> I've ended up leaving the NaN checks in the join costing functions.\n>> There was no case mentioned in [1] that showed how we hit that\n>> reported test case, so I'm not really confident enough to know I'm not\n>> just reintroducing the same problem again by removing that. The path\n>> row estimate that had the NaN might not have been through\n>> clamp_row_est(). Many don't.\n\n> Hmm, I will try to find some time tomorrow to reconstruct that.\n\nI'm confused now, because the v2 patch does remove those isnan calls?\n\nI rechecked the archives, and I agree that there's no data about\nexactly how we could have gotten a NaN here. My guess though is\ninfinity-times-zero in some earlier relation size estimate. So\nhopefully the clamp to 1e100 will make that impossible, or if it\ndoesn't then clamp_row_est() should still prevent a NaN from\npropagating to the next level up.\n\nI'm good with the v2 patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Oct 2020 13:00:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "On Sat, 17 Oct 2020 at 06:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm confused now, because the v2 patch does remove those isnan calls?\n\nI think that was a case of a last-minute change of mind and forgetting\nto attach the updated patch.\n\n> I rechecked the archives, and I agree that there's no data about\n> exactly how we could have gotten a NaN here. My guess though is\n> infinity-times-zero in some earlier relation size estimate. 
So\n> hopefully the clamp to 1e100 will make that impossible, or if it\n> doesn't then clamp_row_est() should still prevent a NaN from\n> propagating to the next level up.\n>\n> I'm good with the v2 patch.\n\nThanks a lot for having a look. I'll proceed in getting the v2 which I\nsent earlier into master.\n\nFor the backbranches, I think I'll go with something more minimal in the\nform of adding:\n\nif (outer_path_rows <= 0 || isnan(outer_path_rows))\n outer_path_rows = 1;\n+else if (isinf(outer_path_rows))\n+ outer_path_rows = DBL_MAX;\n\nand the same for the inner_path_rows to each area in costsize.c which\nhas that code.\n\nWondering your thoughts on that.\n\nDavid\n\n\n", "msg_date": "Mon, 19 Oct 2020 10:01:00 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Sat, 17 Oct 2020 at 06:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm good with the v2 patch.\n\n> Thanks a lot for having a look. I'll proceed in getting the v2 which I\n> sent earlier into master.\n\n> For the backbranches, I think I'll go with something more minimal in the\n> form of adding:\n\nTBH, I see no need to do anything in the back branches. This is not\n
This is not\n> an issue for production usage.\n\nI understand the Assert failure is pretty harmless, so non-assert\nbuilds shouldn't suffer too greatly. I just assumed that any large\nstakeholders invested in upgrading to a newer version of PostgreSQL\nmay like to run various tests with their application against an assert\nenabled version of PostgreSQL perhaps to gain some confidence in the\nupgrade. A failing assert is unlikely to inspire additional\nconfidence.\n\nI'm not set on backpatching, but that's just my thoughts.\n\nFWIW, the patch I'd thought of is attached.\n\nDavid", "msg_date": "Mon, 19 Oct 2020 12:18:14 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Mon, 19 Oct 2020 at 12:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> TBH, I see no need to do anything in the back branches. This is not\n>> an issue for production usage.\n\n> I understand the Assert failure is pretty harmless, so non-assert\n> builds shouldn't suffer too greatly. I just assumed that any large\n> stakeholders invested in upgrading to a newer version of PostgreSQL\n> may like to run various tests with their application against an assert\n> enabled version of PostgreSQL perhaps to gain some confidence in the\n> upgrade. A failing assert is unlikely to inspire additional\n> confidence.\n\nIf any existing outside regression tests hit such corner cases, then\n(a) we'd have heard about it, and (b) likely they'd fail in the older\nbranch as well. So I don't buy the argument that this will dissuade\nsomebody from upgrading.\n\nI do, on the other hand, buy the idea that if anyone is indeed working\nin this realm, they might be annoyed by a behavior change in a stable\nbranch. So it cuts both ways. 
On balance I don't think we should\ntouch this in the back branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Oct 2020 19:25:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "On Mon, 19 Oct 2020 at 12:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Mon, 19 Oct 2020 at 12:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> TBH, I see no need to do anything in the back branches. This is not\n> >> an issue for production usage.\n>\n> > I understand the Assert failure is pretty harmless, so non-assert\n> > builds shouldn't suffer too greatly. I just assumed that any large\n> > stakeholders invested in upgrading to a newer version of PostgreSQL\n> > may like to run various tests with their application against an assert\n> > enabled version of PostgreSQL perhaps to gain some confidence in the\n> > upgrade. A failing assert is unlikely to inspire additional\n> > confidence.\n>\n> If any existing outside regression tests hit such corner cases, then\n> (a) we'd have heard about it, and (b) likely they'd fail in the older\n> branch as well. So I don't buy the argument that this will dissuade\n> somebody from upgrading.\n\nhmm, well it was reported to us. Perhaps swapping the word \"upgrading\"\nfor \"migrating\".\n\nIt would be good to hear Onder's case to see if he has a good argument\nfor having a vested interest in pg13 not failing this way with asserts\nenabled.\n\n> I do, on the other hand, buy the idea that if anyone is indeed working\n> in this realm, they might be annoyed by a behavior change in a stable\n> branch. So it cuts both ways. 
On balance I don't think we should\n> touch this in the back branches.\n\nI guess we could resolve that concern by just changing the failing\nassert to become: Assert(outer_skip_rows <= outer_rows ||\nisinf(outer_rows));\n\nIt's pretty grotty but should address that concern.\n\nDavid\n\n\n", "msg_date": "Mon, 19 Oct 2020 12:37:49 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> It would be good to hear Onder's case to see if he has a good argument\n> for having a vested interest in pg13 not failing this way with asserts\n> enabled.\n\nYeah, some context for this report would be a good thing.\n(BTW, am I wrong to suppose that the same case fails the same\nway in our older branches? Certainly that Assert has been there\na long time.)\n\n> I guess we could resolve that concern by just changing the failing\n> assert to become: Assert(outer_skip_rows <= outer_rows ||\n> isinf(outer_rows));\n\nI can't really object to just weakening the Assert a tad.\nMy thoughts would have run towards checking for the NaN though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Oct 2020 20:06:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "On Mon, 19 Oct 2020 at 13:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (BTW, am I wrong to suppose that the same case fails the same\n> way in our older branches? 
Certainly that Assert has been there\n> a long time.)\n\nI only tested as far back as 9.5, but it does fail there.\n\nDavid\n\n\n", "msg_date": "Mon, 19 Oct 2020 13:18:43 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" }, { "msg_contents": "On Mon, 19 Oct 2020 at 13:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I guess we could resolve that concern by just changing the failing\n> > assert to become: Assert(outer_skip_rows <= outer_rows ||\n> > isinf(outer_rows));\n>\n> I can't really object to just weakening the Assert a tad.\n> My thoughts would have run towards checking for the NaN though.\n\nI ended up back-patching a change that does that.\n\nThanks for your input on this and for the report, Onder.\n\nDavid\n\n\n", "msg_date": "Tue, 20 Oct 2020 00:09:08 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with LEFT JOINs among >500 relations" } ]
[ { "msg_contents": "Over in the thread at [1], we've tentatively determined that the\nreason buildfarm member lorikeet is currently failing is that its\nnetwork stack returns ECONNABORTED for (some?) connection failures,\nwhereas our code is only expecting ECONNRESET. Fujii Masao therefore\nproposes that we treat ECONNABORTED the same as ECONNRESET. I think\nthis is a good idea, but after a bit of research I feel it does not\ngo far enough. I find these POSIX-standard errnos that also seem\nlikely candidates to be returned for a hard loss of connection:\n\n\tECONNABORTED\n\tEHOSTUNREACH\n\tENETDOWN\n\tENETUNREACH\n\nAll of these have been in POSIX since SUSv2, so it seems unlikely\nthat we need to #ifdef any of them. (It is in any case pretty silly\nthat we have #ifdefs around a very small minority of our references\nto ECONNRESET :-(.)\n\nThere are some other related errnos, such as ECONNREFUSED, that\ndon't seem like they'd be returned for a failure of a pre-existing\nconnection, so we don't need to include them in such tests.\n\nAccordingly, I propose the attached patch (an expansion of\nFujii-san's) that causes us to test for all five errnos anyplace\nwe had been checking for ECONNRESET. I felt that this was getting to\nthe point where we'd better centralize the knowledge of what to check,\nso the patch does that, via an inline function and an admittedly hacky\nmacro. I also upgraded some places such as strerror.c to have full\nsupport for these symbols.\n\nAll of the machines I have (even as far back as HPUX 10.20) also\ndefine ENETRESET and EHOSTDOWN. However, those symbols do not appear\nin SUSv2. ENETRESET was added at some later point, but EHOSTDOWN is\nstill not in POSIX. For the moment I've left these second-tier\nsymbols out of the patch, but there's a case for adding them. 
I'm\nnot sure whether there'd be any point in trying to #ifdef them.\n\nBTW, I took out the conditional defines of some of these errnos in\nlibpq's win32.h; AFAICS that's been dead code ever since we added\n#define's for them to win32_port.h. Am I missing something?\n\nThis seems like a bug fix to me, so I'm inclined to back-patch.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/E1kPc9v-0005L4-2l%40gemulon.postgresql.org", "msg_date": "Thu, 08 Oct 2020 15:15:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Expansion of our checks for connection-loss errors" }, { "msg_contents": "At Thu, 08 Oct 2020 15:15:54 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Over in the thread at [1], we've tentatively determined that the\n> reason buildfarm member lorikeet is currently failing is that its\n> network stack returns ECONNABORTED for (some?) connection failures,\n> whereas our code is only expecting ECONNRESET. Fujii Masao therefore\n> proposes that we treat ECONNABORTED the same as ECONNRESET. I think\n> this is a good idea, but after a bit of research I feel it does not\n> go far enough. I find these POSIX-standard errnos that also seem\n> likely candidates to be returned for a hard loss of connection:\n> \n> \tECONNABORTED\n> \tEHOSTUNREACH\n> \tENETDOWN\n> \tENETUNREACH\n> \n> All of these have been in POSIX since SUSv2, so it seems unlikely\n> that we need to #ifdef any of them. (It is in any case pretty silly\n> that we have #ifdefs around a very small minority of our references\n> to ECONNRESET :-(.)\n> \n> There are some other related errnos, such as ECONNREFUSED, that\n> don't seem like they'd be returned for a failure of a pre-existing\n> connection, so we don't need to include them in such tests.\n> \n> Accordingly, I propose the attached patch (an expansion of\n> Fujii-san's) that causes us to test for all five errnos anyplace\n> we had been checking for ECONNRESET. 
I felt that this was getting to\n> the point where we'd better centralize the knowledge of what to check,\n> so the patch does that, via an inline function and an admittedly hacky\n> macro. I also upgraded some places such as strerror.c to have full\n> support for these symbols.\n> \n> All of the machines I have (even as far back as HPUX 10.20) also\n> define ENETRESET and EHOSTDOWN. However, those symbols do not appear\n> in SUSv2. ENETRESET was added at some later point, but EHOSTDOWN is\n> still not in POSIX. For the moment I've left these second-tier\n> symbols out of the patch, but there's a case for adding them. I'm\n> not sure whether there'd be any point in trying to #ifdef them.\n> \n> BTW, I took out the conditional defines of some of these errnos in\n> libpq's win32.h; AFAICS that's been dead code ever since we added\n> #define's for them to win32_port.h. Am I missing something?\n> \n> This seems like a bug fix to me, so I'm inclined to back-patch.\n> \n> \t\t\tregards, tom lane\n> \n> [1] https://www.postgresql.org/message-id/flat/E1kPc9v-0005L4-2l%40gemulon.postgresql.org\n\n+1 for the direction.\n\nIn terms of connection errors, connect(2) and bind(2) can return\nEADDRNOTAVAIL. bind(2) and listen(2) can return EADDRINUSE. FWIW I\nrecently saw pgbench getting EADDRNOTAVAIL. (They have mapping from\nrespective WSA errors in TranslateSocketError())\n\nI'm not sure how we should treat EMFILE/ENFILE/ENOBUFS/ENOMEM from\naccept(2). 
(select(2) can return ENOMEM.)\n\nI'd make errno_is_connection_loss use ALL_CONNECTION_LOSS_ERRNOS to\navoid duplicating the definition of the errno list.\n\n-\tif (ret < 0 && WSAGetLastError() == WSAECONNRESET)\n+\tif (ret < 0 && errno_is_connection_loss(WSAGetLastError()))\n\nDon't we need to use TranslateSocketError() before?\n\n+\t\t/* We might get ECONNRESET etc here if using TCP and backend died */\n+\t\tif (errno_is_connection_loss(SOCK_ERRNO))\n\nPerhaps I'm confused, but SOCK_ERRNO doesn't seem portable between\nWindows and Linux.\n\n=====\n/*\n * These macros are needed to let error-handling code be portable between\n * Unix and Windows. (ugh)\n */\n#ifdef WIN32\n#define SOCK_ERRNO (WSAGetLastError())\n#define SOCK_STRERROR winsock_strerror\n#define SOCK_ERRNO_SET(e) WSASetLastError(e)\n#else\n#define SOCK_ERRNO errno\n#define SOCK_STRERROR strerror_r\n#define SOCK_ERRNO_SET(e) (errno = (e))\n#endif\n=====\n\nAFAICS SOCK_ERRNO is intended to be used idiomatically as:\n\n> SOCK_STRERROR(SOCK_ERRNO, ...)\n\nThe WSAE values from WSAGetLastError() and E values in errno are not\ncompatible and need translation by TranslateSocketError()?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 09 Oct 2020 10:05:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Expansion of our checks for connection-loss errors" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Thu, 08 Oct 2020 15:15:54 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> Accordingly, I propose the attached patch (an expansion of\n>> Fujii-san's) that causes us to test for all five errnos anyplace\n>> we had been checking for ECONNRESET.\n\n> +1 for the direction.\n\n> In terms of connection errors, connect(2) and bind(2) can return\n> EADDRNOTAVAIL. bind(2) and listen(2) can return EADDRINUSE. FWIW I\n> recetnly saw pgbench getting EADDRNOTAVAIL. 
(They have mapping from\n> respective WSA errors in TranslateSocketError())\n\nI do not think we have any issues with connection-time errors;\nor at least, if we do, the spots being touched here certainly\nshouldn't need to worry about them. These places are dealing\nwith already-established connections.\n\n> I'd make errno_is_connection_loss use ALL_CONNECTION_LOSS_ERRNOS to\n> avoid duplication definition of the errno list.\n\nHmm, might be worth doing, but I'm not sure. I am worried about\nwhether compilers will generate equally good code that way.\n\n> -\tif (ret < 0 && WSAGetLastError() == WSAECONNRESET)\n> +\tif (ret < 0 && errno_is_connection_loss(WSAGetLastError()))\n\n> Don't we need to use TranslateSocketError() before?\n\nOh, I missed that. But:\n\n> Perhaps I'm confused but SOCK_ERROR doesn't seem portable between\n> Windows and Linux.\n\nIn that case, nothing would have worked on Windows for the last\nten years, so you're mistaken. I think the actual explanation\nwhy this works, and why that test in parallel.c probably still\nworks even with my mistake, is that win32_port.h makes sure that\nour values of ECONNRESET etc match WSAECONNRESET etc.\n\nIOW, we'd not actually need TranslateSocketError at all, except\nthat it maps some not-similarly-named error codes for conditions\nthat don't exist in Unix into ones that do. 
We probably do want\nTranslateSocketError in this parallel.c test so that anything that\nit maps to one of the errno_is_connection_loss codes will be\nrecognized; but the basic cases would work anyway, unless I\nmisunderstand this stuff entirely.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Oct 2020 21:41:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Expansion of our checks for connection-loss errors" }, { "msg_contents": "At Thu, 08 Oct 2020 21:41:55 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Thu, 08 Oct 2020 15:15:54 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> >> Accordingly, I propose the attached patch (an expansion of\n> >> Fujii-san's) that causes us to test for all five errnos anyplace\n> >> we had been checking for ECONNRESET.\n> \n> > +1 for the direction.\n> \n> > In terms of connection errors, connect(2) and bind(2) can return\n> > EADDRNOTAVAIL. bind(2) and listen(2) can return EADDRINUSE. FWIW I\n> > recetnly saw pgbench getting EADDRNOTAVAIL. (They have mapping from\n> > respective WSA errors in TranslateSocketError())\n> \n> I do not think we have any issues with connection-time errors;\n> or at least, if we do, the spots being touched here certainly\n> shouldn't need to worry about them. These places are dealing\n> with already-established connections.\n\nerrcode_for_socket_access() is called for connect, bind and listen but\nI understand we don't consider the case since we don't have an actual\nissue related to the functions.\n\n> > I'd make errno_is_connection_loss use ALL_CONNECTION_LOSS_ERRNOS to\n> > avoid duplication definition of the errno list.\n> \n> Hmm, might be worth doing, but I'm not sure. 
I am worried about\n> whether compilers will generate equally good code that way.\n\nThe two are placed side-by-side so either will do for me.\n\n> > -\tif (ret < 0 && WSAGetLastError() == WSAECONNRESET)\n> > +\tif (ret < 0 && errno_is_connection_loss(WSAGetLastError()))\n> \n> > Don't we need to use TranslateSocketError() before?\n> \n> Oh, I missed that. But:\n> \n> > Perhaps I'm confused but SOCK_ERROR doesn't seem portable between\n> > Windows and Linux.\n> \n> In that case, nothing would have worked on Windows for the last\n> ten years, so you're mistaken. I think the actual explanation\n> why this works, and why that test in parallel.c probably still\n> works even with my mistake, is that win32_port.h makes sure that\n> our values of ECONNRESET etc match WSAECONNRESET etc.\n\nMmmmmmmmmm. Sure.\n\n> IOW, we'd not actually need TranslateSocketError at all, except\n> that it maps some not-similarly-named error codes for conditions\n> that don't exist in Unix into ones that do. We probably do want\n> TranslateSocketError in this parallel.c test so that anything that\n> it maps to one of the errno_is_connection_loss codes will be\n> recognized; but the basic cases would work anyway, unless I\n> misunderstand this stuff entirely.\n\nYeah, that seems to work.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 09 Oct 2020 11:53:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Expansion of our checks for connection-loss errors" }, { "msg_contents": "\n\nOn 2020/10/09 4:15, Tom Lane wrote:\n> Over in the thread at [1], we've tentatively determined that the\n> reason buildfarm member lorikeet is currently failing is that its\n> network stack returns ECONNABORTED for (some?) connection failures,\n> whereas our code is only expecting ECONNRESET. Fujii Masao therefore\n> proposes that we treat ECONNABORTED the same as ECONNRESET. 
I think\n> this is a good idea, but after a bit of research I feel it does not\n> go far enough. I find these POSIX-standard errnos that also seem\n> likely candidates to be returned for a hard loss of connection:\n> \n> \tECONNABORTED\n> \tEHOSTUNREACH\n> \tENETDOWN\n> \tENETUNREACH\n> \n> All of these have been in POSIX since SUSv2, so it seems unlikely\n> that we need to #ifdef any of them. (It is in any case pretty silly\n> that we have #ifdefs around a very small minority of our references\n> to ECONNRESET :-(.)\n> \n> There are some other related errnos, such as ECONNREFUSED, that\n> don't seem like they'd be returned for a failure of a pre-existing\n> connection, so we don't need to include them in such tests.\n> \n> Accordingly, I propose the attached patch (an expansion of\n> Fujii-san's) that causes us to test for all five errnos anyplace\n> we had been checking for ECONNRESET.\n\n+1\n\nThanks for expanding the patch!\n\n-#ifdef ECONNRESET\n-\t\t\tcase ECONNRESET:\n+\t\t\tcase ALL_CONNECTION_LOSS_ERRNOS:\n \t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t\t\t\t\t libpq_gettext(\"server closed the connection unexpectedly\\n\"\n \t\t\t\t\t\t\t\t\t\t\t\t\"\\tThis probably means the server terminated abnormally\\n\"\n \t\t\t\t\t\t\t\t\t\t\t\t\"\\tbefore or while processing the request.\\n\"));\n\nThis change causes the same error message to be reported for those five errno.\nThat is, we cannot identify which errno is actually reported, from the error\nmessage. But I just wonder if it's more helpful for the troubleshooting if we,\nfor example, append strerror() into the message so that we can easily\nidentify errno. Thought?\n\n\n> I felt that this was getting to\n> the point where we'd better centralize the knowledge of what to check,\n> so the patch does that, via an inline function and an admittedly hacky\n> macro. 
I also upgraded some places such as strerror.c to have full\n> support for these symbols.\n> \n> All of the machines I have (even as far back as HPUX 10.20) also\n> define ENETRESET and EHOSTDOWN. However, those symbols do not appear\n> in SUSv2. ENETRESET was added at some later point, but EHOSTDOWN is\n> still not in POSIX. For the moment I've left these second-tier\n> symbols out of the patch, but there's a case for adding them. I'm\n> not sure whether there'd be any point in trying to #ifdef them.\n> \n> BTW, I took out the conditional defines of some of these errnos in\n> libpq's win32.h; AFAICS that's been dead code ever since we added\n> #define's for them to win32_port.h. Am I missing something?\n> \n> This seems like a bug fix to me, so I'm inclined to back-patch.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 9 Oct 2020 22:21:29 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Expansion of our checks for connection-loss errors" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2020/10/09 4:15, Tom Lane wrote:\n>> -#ifdef ECONNRESET\n>> -\t\t\tcase ECONNRESET:\n>> +\t\t\tcase ALL_CONNECTION_LOSS_ERRNOS:\n>> \t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n>> \t\t\t\t\t\t\t\t libpq_gettext(\"server closed the connection unexpectedly\\n\"\n>> \t\t\t\t\t\t\t\t\t\t\t\t\"\\tThis probably means the server terminated abnormally\\n\"\n>> \t\t\t\t\t\t\t\t\t\t\t\t\"\\tbefore or while processing the request.\\n\"));\n\n> This change causes the same error message to be reported for those five errno.\n> That is, we cannot identify which errno is actually reported, from the error\n> message. But I just wonder if it's more helpful for the troubleshooting if we,\n> for example, append strerror() into the message so that we can easily\n> identify errno. 
Thought?\n\nHmm, excellent point. While our code response to all these errors\nshould be the same, you are right that that doesn't extend to emitting\nidentical error texts. For EHOSTUNREACH/ENETDOWN/ENETUNREACH, we\nshould say something like \"connection to server lost\", without claiming\nthat the server crashed. It is less clear what to do with ECONNABORTED,\nbut I'm inclined to put it in the network-problem bucket not the\nserver-crash bucket, despite lorikeet's behavior. Thoughts?\n\nThis also destroys the patch's idea that switch statements should be\nable to handle these all alike. If we group things as \"ECONNRESET means\nserver crash and the others are all network failures\", then I'd be\ninclined to leave the ECONNRESET cases alone and just introduce\nnew infrastructure to recognize all the network-failure errnos.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Oct 2020 10:17:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Expansion of our checks for connection-loss errors" }, { "msg_contents": "I wrote:\n> Hmm, excellent point. While our code response to all these errors\n> should be the same, you are right that that doesn't extend to emitting\n> identical error texts. For EHOSTUNREACH/ENETDOWN/ENETUNREACH, we\n> should say something like \"connection to server lost\", without claiming\n> that the server crashed. It is less clear what to do with ECONNABORTED,\n> but I'm inclined to put it in the network-problem bucket not the\n> server-crash bucket, despite lorikeet's behavior. Thoughts?\n\n> This also destroys the patch's idea that switch statements should be\n> able to handle these all alike. 
If we group things as \"ECONNRESET means\n> server crash and the others are all network failures\", then I'd be\n> inclined to leave the ECONNRESET cases alone and just introduce\n> new infrastructure to recognize all the network-failure errnos.\n\nActually, treating it that way seems like a good thing because it nets\nout as (nearly) no change to our error message behavior. The connection\nfailure errnos fall through to the default case, which produces a\nperfectly reasonable report that includes strerror(). The only big thing\nwe're changing is the set of errnos that errcode_for_socket_access will\nmap to ERRCODE_CONNECTION_FAILURE, so this is spiritually closer to your\noriginal patch.\n\nSome other changes in the attached v2:\n\n* I incorporated Kyotaro-san's suggested improvements.\n\n* I went ahead and included ENETRESET and EHOSTDOWN, figuring that\nif they exist we definitely want to class them as network failures.\nWe can worry about ifdef'ing them when and if we find a platform\nthat hasn't got them. (I don't see any non-ugly way to make the\nALL_NETWORK_FAILURE_ERRNOS macro vary for missing symbols, so I'd\nrather not deal with that unless it's proven necessary.)\n\n* I noticed that we were not terribly consistent about whether\nEPIPE is regarded as indicating a server failure like ECONNRESET\ndoes. So this patch also makes sure that EPIPE is treated like\nECONNRESET everywhere. (Hence, pqsecure_raw_read's error reporting\ndoes change, since it'll now report EPIPE as server failure.)\n\nI lack a way to test this on Windows, but otherwise it feels\nlike it's about ready.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 09 Oct 2020 12:14:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Expansion of our checks for connection-loss errors" } ]
[ { "msg_contents": "On Thu, Oct 8, 2020 at 10:13:53AM -0700, John W Higgins wrote:\n\n>It's not going to win a Turing award - but I thought this project was a\n>little more friendly then what I've seen in this thread towards a first\n>time contributor.\n\nInstead, it is unfriendly.\n\nIt takes a lot of motivation to \"try\" to submit a patch.\n\nGood luck, Maksim Kita.\n\nThanks for the support, John.\n\nregards,\n\nRanier Vilela", "msg_date": "Thu, 8 Oct 2020 18:27:39 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] ecpg: fix progname memory leak" } ]
[ { "msg_contents": "Hi\r\n\r\nI found some likely unnecessary if-conditions in the code.\r\n\r\n1. Some checks in else branches seem unnecessary.\r\n\r\nIn (/src/backend/replication/logical/reorderbuffer.c) \r\n① @@ -4068,7 +4068,7 @@ ReorderBufferToastAppendChunk(ReorderBuffer *rb, ReorderBufferTXN *txn,\r\n> bool       found;\r\n> if (!found)\r\n> {\r\n>...\r\n> }\r\n> else if (found && chunk_seq != ent->last_chunk_seq + 1)\r\n>...\r\n\r\nThe check of \"found\" in the else-if branch seems unnecessary.\r\n\r\n② (/src/backend/utils/init/postinit.c)\r\n@@ -924,11 +924,8 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\r\n\r\n> bool\t\tbootstrap = IsBootstrapProcessingMode();\r\n> if (bootstrap)\r\n> {\r\n>...\r\n> }\r\n> else if(...)\r\n> {...}\r\n> else\r\n> {\r\n> if (!bootstrap)\r\n> {\r\n> ...\r\n> }\r\n> }\r\n\r\nThe check of \"bootstrap\" in the else branch seems unnecessary.\r\n\r\n\r\n2. In (/src/interfaces/ecpg/compatlib/informix.c)\r\n@@ -944,7 +944,7 @@ rupshift(char *str)\r\n\r\n> for (len--; str[len] && str[len] == ' '; len--);\r\n\r\nThe first \"str[len]\" seems unnecessary since \" str[len] == ' '\" will check it as well.\r\n\r\nDo you think we should remove these if-conditions as a code cleanup?\r\n\r\nBest regards,\r\nhouzj", "msg_date": "Fri, 9 Oct 2020 00:59:20 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "Remove some unnecessary if-condition" }, { "msg_contents": "On Fri, Oct 9, 2020 at 6:29 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> Hi\n>\n> I found some likely unnecessary if-condition in code.\n>\n> 1. 
Some check in else branch seems unnecessary.\n>\n> In (/src/backend/replication/logical/reorderbuffer.c)\n> ① @@ -4068,7 +4068,7 @@ ReorderBufferToastAppendChunk(ReorderBuffer *rb, ReorderBufferTXN *txn,\n> > bool found;\n> > if (!found)\n> > {\n> >...\n> > }\n> > else if (found && chunk_seq != ent->last_chunk_seq + 1)\n> >...\n>\n> The check of \"found\" in else if branch seems unnecessary.\n>\n> ② (/src/backend/utils/init/postinit.c)\n> @@ -924,11 +924,8 @@ InitPostgres(const char *in_dbname, Oid dboid, const char *username,\n>\n> > bool bootstrap = IsBootstrapProcessingMode();\n> > if (bootstrap)\n> > {\n> >...\n> > }\n> > else if(...)\n> > {...}\n> > else\n> > {\n> > if (!bootstrap)\n> > {\n> > ...\n> > }\n> > }\n>\n> The check of \"bootstrap\" in else branch seems unnecessary.\n>\n>\n> 2.In (/src/interfaces/ecpg/compatlib/informix.c)\n> @@ -944,7 +944,7 @@ rupshift(char *str)\n>\n> > for (len--; str[len] && str[len] == ' '; len--);\n>\n> The first \"str[len]\" seems unnecessary since \" str[len] == ' '\" will check it as well.\n>\n> Do you think we should remove these if-condition for code clean ?\n\nTo me it looks good to clean up the conditions as you have done in the\npatch. Please add this to commitfest so that it's not forgotten. I\nhave verified the code and indeed the conditions you are removing are\nunnecessary. So the patch can be marked as CFP right away.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 12 Oct 2020 14:14:03 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove some unnecessary if-condition" }, { "msg_contents": "> To me it looks good to clean up the conditions as you have done in the patch.\r\n> Please add this to commitfest so that it's not forgotten. I have verified\r\n> the code and indeed the conditions you are removing are unnecessary. So\r\n> the patch can be marked as CFP right away.\r\n\r\nThank you for reviewing! 
Added it to the commitfest:\r\nhttps://commitfest.postgresql.org/30/2760/\r\n\r\n\r\nBest regards,\r\nhouzj\r\n\n\n", "msg_date": "Mon, 12 Oct 2020 11:42:31 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Remove some unnecessary if-condition" }, { "msg_contents": "On Mon, Oct 12, 2020 at 11:42:31AM +0000, Hou, Zhijie wrote:\n> Thank you for reviewing! added it to commitfest\n> https://commitfest.postgresql.org/30/2760/\n\n- if (!bootstrap)\n- {\n- pgstat_bestart();\n- CommitTransactionCommand();\n- }\n+ pgstat_bestart();\n+ CommitTransactionCommand();\nFWIW, I prefer the original style here. The if/elif dance is quite\nlong here, so when reading the code it is easy to miss that no\ntransaction commit should happen in bootstrap mode, as this is\nconditioned only at the top of the if logic.\n\nI would also keep the code in reorderbuffer.c in its original shape,\nbecause it does not actually hurt and changing it could introduce a\nback-patching hazard, even if that would be a conflict easy to fix.\n\nThere may be a point for the bit in informix.c, but similarly when you\nthink about back-patching I'd just keep it as it is.\n--\nMichael", "msg_date": "Wed, 14 Oct 2020 16:22:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove some unnecessary if-condition" } ]
[ { "msg_contents": "Hello hackers,\r\n\r\nWe know that pg_waldump can report size statistics for every kind of record. When I use\r\nthe feature I find it misses some of the size for XLOG_SWITCH records. When a user does\r\na pg_switch_wal(), postgres will discard the remaining space in the current WAL\r\nsegment, and the pg_waldump tool misses that discarded size.\r\n\r\nI think it would be better if pg_waldump could show this, so I have made a patch\r\nwhich regards the discarded size as part of the XLOG_SWITCH record; it works both when\r\ndisplaying the details of WAL records and for the statistics. Patch attached.\r\n\r\nWhat's your opinion?\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 9 Oct 2020 13:41:25 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "At Fri, 9 Oct 2020 13:41:25 +0800, \"movead.li@highgo.ca\" <movead.li@highgo.ca> wrote in \n> Hello hackers,\n> \n> We know that pg_waldump can statistics size for every kind of records. When I use\n> the feature I find it misses some size for XLOG_SWITCH records. When a user does\n> a pg_wal_switch(), then postgres will discard the remaining size in the current wal\n> segment, and the pg_waldump tool misses the discard size.\n> \n> I think it will be better if pg_waldump can show the matter, so I make a patch\n> which regards the discard size as a part of XLOG_SWITCH record, it works if we\n> want to display the detail of wal records or the statistics, and patch attached.\n> \n> What's your opinion?\n\nI think that the length of the XLOG_SWITCH record is nothing other than 24\nbytes. Just adding the padding/
garbage bytes to that length doesn't\nseem the right thing to me.\n\nIf we want pg_waldump to show that length somewhere, it could be shown\nat the end of that record explicitly:\n\nrmgr: XLOG len (rec/tot): 24/16776848, tx: 0, lsn: 0/02000148, prev 0/02000110, desc: SWITCH, trailing-bytes: 16776944\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 09 Oct 2020 17:46:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": ">I think that the length of the XLOG_SWITCH record is no other than 24\r\n>bytes. Just adding the padding? garbage bytes to that length doesn't\r\n>seem the right thing to me.\r\n>\r\n>If we want pg_waldump to show that length somewhere, it could be shown\r\n>at the end of that record explicitly:\r\n> \r\n>rmgr: XLOG len (rec/tot): 24/16776848, tx: 0, lsn: 0/02000148, prev 0/02000110, desc: SWITCH, trailing-bytes: 16776944\r\n\r\nThanks, I think it's a good idea; new patch attached.\r\n\r\nHere's how it looks:\r\nrmgr: XLOG len (rec/tot): 24/ 24, tx: 0, lsn: 0/030000D8, prev 0/03000060, desc: SWITCH, trailing-bytes: 16776936\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Sat, 10 Oct 2020 09:50:02 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "On Sat, Oct 10, 2020 at 09:50:02AM +0800, movead.li@highgo.ca wrote:\n>> I think that the length of the XLOG_SWITCH record is no other than 24\n>> bytes. Just adding the padding? 
garbage bytes to that length doesn't\n>> seem the right thing to me.\n> \n> Here's the lookes:\n> rmgr: XLOG len (rec/tot): 24/ 24, tx: 0, lsn: 0/030000D8, prev 0/03000060, desc: SWITCH, trailing-bytes: 16776936\n\n\n static void\n-XLogDumpRecordLen(XLogReaderState *record, uint32 *rec_len, uint32 *fpi_len)\n+XLogDumpRecordLen(XLogReaderState *record, uint32 *rec_len, uint32 *fpi_len, uint32 *junk_len)\n {\nIf you wish to add more information about an XLOG_SWITCH record, I\ndon't think that changing the signature of XLogDumpRecordLen() is\nappropriate, because the record length of this record is defined as\nHoriguchi-san mentioned upthread, and the meaning of junk_len is\nconfusing here. It seems to me that any extra information should be\nadded to xlog_desc() where there should be an extra code path for\n(info == XLOG_SWITCH). XLogReaderState should have all the\ninformation you are looking for.\n--\nMichael", "msg_date": "Mon, 12 Oct 2020 10:12:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "Thanks for the reply.\r\n\r\n>If you wish to add more information about a XLOG_SWITCH record, I\r\n>don't think that changing the signature of XLogDumpRecordLen() is\r\n>adapted because the record length of this record is defined as\r\n>Horiguchi-san mentioned upthread, and the meaning of junk_len is\r\n>confusing here. It seems to me that any extra information should be\r\n>added to xlog_desc() where there should be an extra code path for\r\n>(info == XLOG_SWITCH). XLogReaderState should have all the\r\n>information you are lookng for.\r\nWe use the 'junk_len' in two places: one is when we show the\r\ndetailed record information, the other is when we compute the percentages\r\nfor all kinds of WAL records (by --stat=record). 
The second place\r\ndoes not run xlog_desc(), so that is not a good place to do it.\r\n\r\nI still cannot understand why it is not appropriate to change the\r\nsignature of XLogDumpRecordLen(); maybe we can add a new function\r\nto calculate the 'junk_len' and rename 'junk_len' to 'skipped_size' or\r\n'switched_size'?\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Mon, 12 Oct 2020 09:46:37 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." 
}, { "msg_contents": "At Mon, 12 Oct 2020 09:46:37 +0800, \"movead.li@highgo.ca\" <movead.li@highgo.ca> wrote in \n> \n> Thanks for reply.\n> \n> >If you wish to add more information about a XLOG_SWITCH record, I\n> >don't think that changing the signature of XLogDumpRecordLen() is\n> >adapted because the record length of this record is defined as\n> >Horiguchi-san mentioned upthread, and the meaning of junk_len is\n> >confusing here. It seems to me that any extra information should be\n> >added to xlog_desc() where there should be an extra code path for\n> >(info == XLOG_SWITCH). XLogReaderState should have all the\n> >information you are lookng for.\n> We have two places to use the 'junk_len', one is when we show the \n> detail record information, another is when we statistics the percent\n> of all kind of wal record kinds(by --stat=record). The second place\n> will not run the xlog_desc(), so it's not a good chance to do the thing.\n> \n> I am still can not understand why it can't adapted to change the\n> signature of XLogDumpRecordLen(), maybe we can add a new function\n> to caculate the 'junk_len' and rename the 'junk_len' as 'skiped_size' or\n> 'switched_size'?\n\nThe reason is that XLogDumpRecordLen is a common function\namong all kinds of WAL records; it does not belong only to XLOG_SWITCH. And the\njunk_len is not useful for anything other than XLOG_SWITCH. Descriptions\nspecific to XLOG_SWITCH are provided by xlog_desc().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 14 Oct 2020 10:29:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "On Wed, Oct 14, 2020 at 10:29:44AM +0900, Kyotaro Horiguchi wrote:\n> The reason is the function XLogDumpRecordLen is a common function\n> among all kind of LOG records, not belongs only to XLOG_SWICH. 
And the\n> junk_len is not useful for other than XLOG_SWITCH. Descriptions\n> specifc to XLOG_SWITCH is provided by xlog_desc().\n\nYeah. In its current shape, it means that only pg_waldump would be\nable to know this information. If you make this information part of\nxlogdesc.c, any consumer of the WAL record descriptions would be able\nto show this information, so it would provide a consistent output for\nany kind of tools.\n\nOn top of that, it seems to me that the calculation used in the patch\nis wrong in two aspects at quick glance:\n1) startSegNo and endSegNo point always to two different segments with\na XLOG_SWITCH record, so you should check that ReadRecPtr is not at a\nsegment border instead before extracting SizeOfXLogLongPHD, no?\n2) This stuff should also check after the case of a WAL *page* border\nwhere you'd need to adjust based on SizeOfXLogShortPHD instead.\n--\nMichael", "msg_date": "Wed, 14 Oct 2020 15:52:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "Hi,\n\nOn 2020-10-14 15:52:43 +0900, Michael Paquier wrote:\n> Yeah. In its current shape, it means that only pg_waldump would be\n> able to know this information. If you make this information part of\n> xlogdesc.c, any consumer of the WAL record descriptions would be able\n> to show this information, so it would provide a consistent output for\n> any kind of tools.\n\nI'm not convinced by this argument. The only case where accounting for\nthe \"wasted\" length seems really interesting is for --stats=record - and\nfor that including it in the record description is useless. 
When looking\nat plain records the length is sufficiently deducible by looking at the\nnext record's LSN.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 14 Oct 2020 13:46:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "At Wed, 14 Oct 2020 13:46:13 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2020-10-14 15:52:43 +0900, Michael Paquier wrote:\n> > Yeah. In its current shape, it means that only pg_waldump would be\n> > able to know this information. If you make this information part of\n> > xlogdesc.c, any consumer of the WAL record descriptions would be able\n> > to show this information, so it would provide a consistent output for\n> > any kind of tools.\n> \n> I'm not convinced by this argument. The only case where accounting for\n> the \"wasted\" length seems really interesting is for --stats=record - and\n> for that including it in the record description is useless. When looking\n> at plain records the length is sufficiently deducable by looking at the\n> next record's LSN.\n\nI'm not sure of the exact motive of this proposal, but if we show the\nwasted length in the stats result, I think it should be separate from the\nexisting record types.\n\n XLOG/CHECKPOINT_SHUTDOWN 1 ( 0.50) ..\n ...\n Btree/INSERT_LEAF 63 ( 31.19) ..\n+ EMPTY 1 ( xx.xx) ..\n----------------------------------------\n Total ...\n\n\nBy the way, I noticed that --stats=record shows two lines for\nTransaction/COMMIT. The cause is that XLogDumpCountRecord assumes that\nall four high bits of xl_info are used to identify the record id.\n\nThe fourth bit of xl_info of XLOG records is used to signal\nwhether the record has an 'xinfo' field or not. 
So an XLOG record with\nrecid == 8 actually exists, but it is really a record with recid == 0\nthat has an xinfo field.\n\nI didn't come up with a cleaner solution, but the attached fixes that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 15 Oct 2020 11:44:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." 
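[Editor's illustration] The recid collision described above can be sketched in a few lines. This is a hypothetical illustration, not PostgreSQL code; the constant values are assumed to mirror the definitions in PostgreSQL's xlogrecord.h and xact.h:

```python
# Why --stats=record shows Transaction/COMMIT twice: the HAS_INFO flag
# bit leaks into the naive record id. Constants are assumed values.
XLR_INFO_MASK = 0x0F        # low nibble reserved for the WAL machinery
XLOG_XACT_OPMASK = 0x70     # bits that really identify the xact record type
XLOG_XACT_HAS_INFO = 0x80   # flag only: the record carries an xinfo field
XLOG_XACT_COMMIT = 0x00

def naive_record_id(xl_info):
    # What the accounting criticized above does: keep the whole high
    # nibble, so the HAS_INFO flag becomes part of the "record id".
    return xl_info & ~XLR_INFO_MASK

def xact_record_id(xl_info):
    # Masking with OPMASK drops the flag bit, merging the two buckets.
    return xl_info & XLOG_XACT_OPMASK

plain_commit = XLOG_XACT_COMMIT                            # "recid 0"
commit_with_xinfo = XLOG_XACT_COMMIT | XLOG_XACT_HAS_INFO  # "recid 8"
```

With these assumptions, `naive_record_id` puts the two COMMIT variants in different buckets while `xact_record_id` keeps them together, which is the behavior the attached fix aims for.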
(%) FPI size (%) Combined size (%)\r\n---- - --- ----------- --- -------- --- ------------- ---\r\nXLOG 5 ( 31.25) 300 ( 0.00) 0 ( 0.00) 300 ( 0.00)\r\nXLOGSwitchJunk 3 ( 18.75) 50330512 (100.00) 0 ( 0.00) 50330512 (100.00)\r\n\r\n\r\nmovead@movead-PC:/h2/pg/bin$ ./pg_waldump -p ../walbk/ -s 0/3000000 -e 0/6000000 --stat=record\r\nXLOG/SWITCH 3 ( 18.75) 72 ( 0.00) 0 ( 0.00) 72 ( 0.00)\r\nXLOG/SWITCH_JUNK 3 ( 18.75) 50330512 (100.00) 0 ( 0.00) 50330512 (100.00)\r\n\r\nThe shortcoming now is I do not know how to handle the 'count' of SWITCH_JUNK\r\nin pg_waldump results. Currently I regard SWITCH_JUNK as one count.\r\n\r\n>By the way, I noticed that --stats=record shows two lines for\r\n>Transaction/COMMIT. The cause is that XLogDumpCountRecord assumes the\r\n>all four bits of xl_info is used to identify record id.\r\nThanks,I didn't notice it before, and your patch added into v3 patch attached.\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Thu, 15 Oct 2020 12:56:02 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "At Thu, 15 Oct 2020 12:56:02 +0800, \"movead.li@highgo.ca\" <movead.li@highgo.ca> wrote in \n> Thanks for all the suggestions.\n> \n> >Yeah. In its current shape, it means that only pg_waldump would be\n> >able to know this information. If you make this information part of\n> >xlogdesc.c, any consumer of the WAL record descriptions would be able\n> >to show this information, so it would provide a consistent output for\n> >any kind of tools.\n> I have change the implement, move some code into xlog_desc().\n\nAndres suggested that we don't need that description with per-record\nbasis. Do you have a reason to do that? 
(For clarity, I'm not\nsuggesting that you should reving it.)\n\n> >On top of that, it seems to me that the calculation used in the patch\n> >is wrong in two aspects at quick glance:\n> >1) startSegNo and endSegNo point always to two different segments with\n> >a XLOG_SWITCH record, so you should check that ReadRecPtr is not at a\n> >segment border instead before extracting SizeOfXLogLongPHD, no?\n> Yes you are right, and I use 'record->EndRecPtr - 1' instead.\n\n+\tXLByteToSeg(record->EndRecPtr - 1, endSegNo, record->segcxt.ws_segsize);\n\nWe use XLByteToPrevSeg instead for this purpose.\n\n> >2) This stuff should also check after the case of a WAL *page* border\n> >where you'd need to adjust based on SizeOfXLogShortPHD instead.\n> The 'junk_len -= SizeOfXLogLongPHD' code is considered for when the\n> remain size of a wal segment can not afford a XLogRecord struct for\n> XLOG_SWITCH, it needn't care *page* border.\n> \n> >I'm not sure the exact motive of this proposal, but if we show the\n> >wasted length in the stats result, I think it should be other than\n> >existing record types.\n> Yes agree, and now it looks like below as new patch:\n\nYou forgot to add a correction for short headers.\n\n> movead@movead-PC:/h2/pg/bin$ ./pg_waldump -p ../walbk/ -s 0/3000000 -e 0/6000000 -z\n> Type N (%) Record size (%) FPI size (%) Combined size (%)\n> ---- - --- ----------- --- -------- --- ------------- ---\n> XLOG 5 ( 31.25) 300 ( 0.00) 0 ( 0.00) 300 ( 0.00)\n> XLOGSwitchJunk 3 ( 18.75) 50330512 (100.00) 0 ( 0.00) 50330512 (100.00)\n> \n> \n> movead@movead-PC:/h2/pg/bin$ ./pg_waldump -p ../walbk/ -s 0/3000000 -e 0/6000000 --stat=record\n> XLOG/SWITCH 3 ( 18.75) 72 ( 0.00) 0 ( 0.00) 72 ( 0.00)\n> XLOG/SWITCH_JUNK 3 ( 18.75) 50330512 (100.00) 0 ( 0.00) 50330512 (100.00)\n> \n> The shortcoming now is I do not know how to handle the 'count' of SWITCH_JUNK\n> in pg_waldump results. 
Currently I regard SWITCH_JUNK as one count.\n\n\n+\tif(RM_XLOG_ID == rmid && XLOG_SWITCH == info)\n\nWe need a comment for the special code path.\nWe don't follow this kind of convension. Rather use \"variable =\nconstant\".\n\n+\t{\n+\t\tjunk_len = xlog_switch_junk_len(record);\n+\t\tstats->count_xlog_switch++;\n+\t\tstats->junk_size += junk_len;\n\njunk_len is used only the last line above. We don't need that\ntemporary variable.\n\n+\t * If the wal switch record spread on two segments, we should extra minus the\n\nThis comment is sticking out of 80-column border. However, I'm not\nsure if we have reached a conclustion about the target console-width.\n\n+\t\t\t\tinfo = (rj << 4) & ~XLR_INFO_MASK;\n+\t\t\t\tswitch_junk_id = \"XLOG/SWITCH_JUNK\";\n+\t\t\t\tif( XLOG_SWITCH == info && stats->count_xlog_switch > 0)\n\nThis line order is strange. At least switch_junk_id is used only in\nthe if-then block.\n\nI'm not confindent on which is better, but I feel that this is not a\nwork for display side, but for the recorder side like attached.\n\n> >By the way, I noticed that --stats=record shows two lines for\n> >Transaction/COMMIT. The cause is that XLogDumpCountRecord assumes the\n> >all four bits of xl_info is used to identify record id.\n> Thanks,I didn't notice it before, and your patch added into v3 patch attached.\n\nSorry for the confusion, but it would be a separate topic even if we\nare going to fix that..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 15 Oct 2020 17:32:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "At Thu, 15 Oct 2020 17:32:10 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 15 Oct 2020 12:56:02 +0800, \"movead.li@highgo.ca\" <movead.li@highgo.ca> wrote in \n> > Thanks for all the suggestions.\n> > \n> > >Yeah. 
In its current shape, it means that only pg_waldump would be\n> > >able to know this information. If you make this information part of\n> > >xlogdesc.c, any consumer of the WAL record descriptions would be able\n> > >to show this information, so it would provide a consistent output for\n> > >any kind of tools.\n> > I have change the implement, move some code into xlog_desc().\n> \n> Andres suggested that we don't need that description with per-record\n> basis. Do you have a reason to do that? (For clarity, I'm not\n> suggesting that you should reving it.)\n\nSorry. Maybe I deleted wrong letters in the \"reving\" above.\n\n====\nAndres suggested that we don't need that description with per-record\nbasis. Do you have a reason to do that? (For clarity, I'm not\nsuggesting that you should remove it.)\n====\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 15 Oct 2020 17:38:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "Thanks for all the suggestion, and new patch attached.\r\n\r\n>Andres suggested that we don't need that description with per-record\r\n>basis. Do you have a reason to do that? (For clarity, I'm not\r\n>suggesting that you should reving it.)\r\nI think Andres is saying if we just log it in xlog_desc() then we can not get\r\nthe result for '--stats=record' case. And the patch solve the problem.\r\n\r\n>+ XLByteToSeg(record->EndRecPtr - 1, endSegNo, record->segcxt.ws_segsize);\r\n>We use XLByteToPrevSeg instead for this purpose.\r\nThanks and follow the suggestion.\r\n\r\n>You forgot to add a correction for short headers.\r\nInfact, we need to consider this matter when the remain size of a wal\r\nsegment can not afford a XLogRecord struct for XLOG_SWITCH. 
\r\nIt's mean that if record->ReadRecPtr is on A wal segment, then\r\nrecord->EndRecPtr is on (A+2) wal segment. So we need to minus\r\nthe longpagehead size on (A+1) wal segment.\r\nIn my thought we need not to care the short header, if my understand\r\nis wrong?\r\n\r\n>+ if(RM_XLOG_ID == rmid && XLOG_SWITCH == info)\r\n> \r\n>We need a comment for the special code path.\r\n>We don't follow this kind of convension. Rather use \"variable =\r\n>constant\".\r\n>+ {\r\n>+ junk_len = xlog_switch_junk_len(record);\r\n>+ stats->count_xlog_switch++;\r\n>+ stats->junk_size += junk_len;\r\n> \r\n>junk_len is used only the last line above. We don't need that\r\n>temporary variable.\r\n> \r\n>+ * If the wal switch record spread on two segments, we should extra minus the\r\n>This comment is sticking out of 80-column border. However, I'm not\r\n>sure if we have reached a conclustion about the target console-width.\r\n>+ info = (rj << 4) & ~XLR_INFO_MASK;\r\n>+ switch_junk_id = \"XLOG/SWITCH_JUNK\";\r\n>+ if( XLOG_SWITCH == info && stats->count_xlog_switch > 0)\r\n> \r\n>This line order is strange. At least switch_junk_id is used only in\r\n>the if-then block.\r\nThanks and follow the suggestions.\r\n\r\n \r\n>I'm not confindent on which is better, but I feel that this is not a\r\n>work for display side, but for the recorder side like attached.\r\nThe patch really seems more clearly, but the new 'OTHERS' may confuse\r\nthe users and we hard to handle it with '--rmgr=RMGR' option. 
So I have\r\nnot use this design in this patch, let's see other's opinion.\r\n\r\n>Sorry for the confusion, but it would be a separate topic even if we\r\n>are going to fix that..\r\nSorry, I remove the code, make sense we should discuss it in a\r\nseparate topic.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 16 Oct 2020 16:21:47 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "At Fri, 16 Oct 2020 16:21:47 +0800, \"movead.li@highgo.ca\" <movead.li@highgo.ca> wrote in \n> Thanks for all the suggestion, and new patch attached.\n> \n> >Andres suggested that we don't need that description with per-record\n> >basis. Do you have a reason to do that? (For clarity, I'm not\n> >suggesting that you should reving it.)\n> I think Andres is saying if we just log it in xlog_desc() then we can not get\n> the result for '--stats=record' case. And the patch solve the problem.\n\nMmm.\n\n> and\n> for that including it in the record description is useless. When looking\n> at plain records the length is sufficiently deducable by looking at the\n> next record's LSN.\n\nIt looks to me \"We can know that length by subtracting the LSN of\nXLOG_SWITCH from the next record's LSN so it doesn't add any\ninformation.\"\n\n> >+ XLByteToSeg(record->EndRecPtr - 1, endSegNo, record->segcxt.ws_segsize);\n> >We use XLByteToPrevSeg instead for this purpose.\n> Thanks and follow the suggestion.\n> \n> >You forgot to add a correction for short headers.\n> Infact, we need to consider this matter when the remain size of a wal\n> segment can not afford a XLogRecord struct for XLOG_SWITCH. \n> It's mean that if record->ReadRecPtr is on A wal segment, then\n> record->EndRecPtr is on (A+2) wal segment. 
So we need to minus\n> the longpagehead size on (A+1) wal segment.\n> In my thought we need not to care the short header, if my understand\n> is wrong?\n\nMaybe.\n\nWhen a page doesn't have sufficient space for a record, the record is\nsplit in two pieces and the last half is recorded after the header of\nthe next page. If that next page is in the next segment, the header is a\nlong header, and a short header otherwise.\n\nIf it were the last page of a segment:\n\nReadRecPtr\nv\n<--- SEG A ------->|<---------- SEG A+1 ----------------->|<-SEG A+2\n<XLOG_SWITCH_FIRST>|<LONG HEADER><XLOG_SWITCH_LAST><EMPTY>|<LONG HEADER>\n\nSo the length of <EMPTY> is:\n\n LOC(SEG A+2) - ReadRecPtr - LEN(long header) - LEN(XLOG_SWITCH)\n\n\nIf not, that is, if it were not the last page of a segment:\n\n<-------------------- SEG A ---------------------------->|<-SEG A+1\n< page n ->|<-- page n + 1 ---------->|....|<-last page->|<-first page\n<X_S_FIRST>|<SHORT HEADER><X_S_LAST><EMPTY..............>|<LONG HEADER>\n\nSo the length of <EMPTY> in this case is:\n\n LOC(SEG A+1) - ReadRecPtr - LEN(short header) - LEN(XLOG_SWITCH)\n\n> >I'm not confindent on which is better, but I feel that this is not a\n> >work for display side, but for the recorder side like attached.\n> The patch really seems more clearly, but the new 'OTHERS' may confuse\n> the users and we hard to handle it with '--rmgr=RMGR' option. So I have
So I have\n> not use this design in this patch, let's see other's opinion.\n\nYeah, I don't like the \"OTHERS\", too.\n\n> >Sorry for the confusion, but it would be a separate topic even if we\n> >are going to fix that..\n> Sorry, I remove the code, make sense we should discuss it in a\n> separate topic.\n\nAgreed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 16 Oct 2020 18:00:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": ">It looks to me \"We can know that length by subtracting the LSN of\r\n>XLOG_SWITCH from the next record's LSN so it doesn't add any\r\n>information.\"\r\nSorry,maybe I miss this before.\r\nBut I think it will be better to show it clearly.\r\n\r\n>So the length of <EMPTY> in this case is:\r\n> \r\n>LOC(SEG A+1) - ReadRecPtr - LEN(short header) - LEN(XLOG_SWITCH)\r\nThanks, I should not have missed this and fixed.\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 16 Oct 2020 17:39:58 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." 
}, { "msg_contents": "When I execute pg_waldump, I found that XLOG/SWITCH_JUNK appears twice.\r\nIs this problem solved by the way of correcting the previously discussed Transaction/COMMIT?\r\n\r\n$ ../bin/pg_waldump --stats=record ../data/pg_wal/000000010000000000000001\r\nType N (%) Record size (%) FPI size (%) Combined size (%)\r\n---- - --- ----------- --- -------- --- ------------- ---\r\nXLOG/CHECKPOINT_SHUTDOWN 5 ( 0.01) 570 ( 0.01) 0 ( 0.00) 570 ( 0.01)\r\nXLOG/CHECKPOINT_ONLINE 6 ( 0.02) 684 ( 0.02) 0 ( 0.00) 684 ( 0.01)\r\nXLOG/NEXTOID 3 ( 0.01) 90 ( 0.00) 0 ( 0.00) 90 ( 0.00)\r\nXLOG/FPI 290 ( 0.80) 14210 ( 0.34) 638216 ( 40.72) 652426 ( 11.30)\r\nTransaction/COMMIT 12 ( 0.03) 408 ( 0.01) 0 ( 0.00) 408 ( 0.01)\r\nTransaction/COMMIT 496 ( 1.36) 134497 ( 3.20) 0 ( 0.00) 134497 ( 2.33)\r\nStorage/CREATE 13 ( 0.04) 546 ( 0.01) 0 ( 0.00) 546 ( 0.01)\r\nCLOG/ZEROPAGE 1 ( 0.00) 30 ( 0.00) 0 ( 0.00) 30 ( 0.00)\r\nDatabase/CREATE 2 ( 0.01) 84 ( 0.00) 0 ( 0.00) 84 ( 0.00)\r\nStandby/LOCK 142 ( 0.39) 5964 ( 0.14) 0 ( 0.00) 5964 ( 0.10)\r\nStandby/RUNNING_XACTS 13 ( 0.04) 666 ( 0.02) 0 ( 0.00) 666 ( 0.01)\r\nStandby/INVALIDATIONS 136 ( 0.37) 12416 ( 0.30) 0 ( 0.00) 12416 ( 0.22)\r\nHeap2/CLEAN 132 ( 0.36) 8994 ( 0.21) 0 ( 0.00) 8994 ( 0.16)\r\nHeap2/FREEZE_PAGE 245 ( 0.67) 168704 ( 4.01) 0 ( 0.00) 168704 ( 2.92)\r\nHeap2/CLEANUP_INFO 2 ( 0.01) 84 ( 0.00) 0 ( 0.00) 84 ( 0.00)\r\nHeap2/VISIBLE 424 ( 1.16) 25231 ( 0.60) 352256 ( 22.48) 377487 ( 6.54)\r\nXLOG/SWITCH_JUNK 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)\r\nHeap2/MULTI_INSERT 1511 ( 4.15) 287727 ( 6.84) 12872 ( 0.82) 300599 ( 5.21)\r\nHeap2/MULTI_INSERT+INIT 46 ( 0.13) 71910 ( 1.71) 0 ( 0.00) 71910 ( 1.25)\r\nHeap/INSERT 8849 ( 24.31) 1288414 ( 30.62) 25648 ( 1.64) 1314062 ( 22.76)\r\nHeap/DELETE 25 ( 0.07) 1350 ( 0.03) 0 ( 0.00) 1350 ( 0.02)\r\nHeap/UPDATE 173 ( 0.48) 55238 ( 1.31) 5964 ( 0.38) 61202 ( 1.06)\r\nHeap/HOT_UPDATE 257 ( 0.71) 27585 ( 0.66) 1300 ( 0.08) 28885 ( 0.50)\r\nXLOG/SWITCH_JUNK 0 ( 0.00) 0 ( 
0.00) 0 ( 0.00) 0 ( 0.00)\r\nHeap/LOCK 180 ( 0.49) 9800 ( 0.23) 129812 ( 8.28) 139612 ( 2.42)\r\nHeap/INPLACE 214 ( 0.59) 44520 ( 1.06) 40792 ( 2.60) 85312 ( 1.48)\r\nHeap/INSERT+INIT 171 ( 0.47) 171318 ( 4.07) 0 ( 0.00) 171318 ( 2.97)\r\n\r\nRegards,\r\nShinya Kato", "msg_date": "Fri, 4 Dec 2020 04:20:47 +0000", "msg_from": "<Shinya11.Kato@nttdata.com>", "msg_from_op": false, "msg_subject": "RE: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "Thanks for taking a look on this.\n\nAt Fri, 4 Dec 2020 04:20:47 +0000, <Shinya11.Kato@nttdata.com> wrote in \n> When I execute pg_waldump, I found that XLOG/SWITCH_JUNK appears twice.\n> Is this problem solved by the way of correcting the previously discussed Transaction/COMMIT?\n> \n> $ ../bin/pg_waldump --stats=record ../data/pg_wal/000000010000000000000001\n> Type N (%) Record size (%) FPI size (%) Combined size (%)\n> ---- - --- ----------- --- -------- --- ------------- ---\n..\n> XLOG/SWITCH_JUNK 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)\n...\n> XLOG/SWITCH_JUNK 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)\n\nYeah, that's because of XLogDumpDisplayStats forgets to consider ri
If there's a record with info = 0x04\nfor other resources than RM_XLOG_ID, the spurious line is shown.\n\nThe first one is for XLOG_HEAP2_VISIBLE and the latter is for\nXLOG_HEAP_HOT_UPDATE, that is, both of which are not for XLOG_SWITCH..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 04 Dec 2020 15:20:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "Thanks for the reply. > Mr.Horiguchi.\r\n\r\nI reviewed the patch and found some problems.\r\n\r\n>+ if(startSegNo != endSegNo)\r\n>+ else if(record->ReadRecPtr / XLOG_BLCKSZ !=\r\n>+ if(rmid == RM_XLOG_ID && info == XLOG_SWITCH)\r\n>+ if(ri == RM_XLOG_ID)\r\n>+ if(info == XLOG_SWITCH)\r\nYou need to put a space after the \"if\".\r\n\r\n>@@ -24,6 +24,7 @@\r\n>#include \"common/logging.h\"\r\n>#include \"getopt_long.h\"\r\n>#include \"rmgrdesc.h\"\r\n>+#include \"catalog/pg_control.h\"\r\nI think the include statements should be arranged in alphabetical order.\r\n\r\n>+ info = (rj << 4) & ~XLR_INFO_MASK;\r\n>+ if(info == XLOG_SWITCH)\r\n>+ XLogDumpStatsRow(psprintf(\"XLOG/SWITCH_JUNK\"),\r\n>+ 0, total_count, stats->junk_size, total_rec_len,\r\n>+ 0, total_fpi_len, stats->junk_size, total_len);\r\n\r\nCan't be described in the same way as \"XLogDumpStatsRow(psprintf(\"%s/%s\", desc->rm_name, id)...\"?\r\nOnly this part looks strange.\r\n\r\nWhy are the \"count\" and \"fpi_len\" fields 0?\r\n\r\nI think you need to improve the duplicate output in column \"XLOG/SWITCH_JUNK\".\r\n\r\n\r\nRegards,\r\nShinya Kato\r\n", "msg_date": "Thu, 10 Dec 2020 01:34:08 +0000", "msg_from": "<Shinya11.Kato@nttdata.com>", "msg_from_op": false, "msg_subject": "RE: Wrong statistics for size of XLOG_SWITCH during pg_waldump." 
}, { "msg_contents": "Thanks for review, and sorry for reply so later.\r\n\r\n>I reviewed the patch and found some problems. \r\n>>+ if(startSegNo != endSegNo)\r\n>>+ else if(record->ReadRecPtr / XLOG_BLCKSZ !=\r\n>>+ if(rmid == RM_XLOG_ID && info == XLOG_SWITCH)\r\n>>+ if(ri == RM_XLOG_ID)\r\n>>+ if(info == XLOG_SWITCH)\r\n>You need to put a space after the \"if\".\r\nAll fix and thanks for point the issue. \r\n\r\n>>@@ -24,6 +24,7 @@\r\n>>#include \"common/logging.h\"\r\n>>#include \"getopt_long.h\"\r\n>>#include \"rmgrdesc.h\"\r\n>>+#include \"catalog/pg_control.h\"\r\n>I think the include statements should be arranged in alphabetical order.\r\nFix.\r\n\r\n>>+ info = (rj << 4) & ~XLR_INFO_MASK;\r\n>>+ if(info == XLOG_SWITCH)\r\n>>+ XLogDumpStatsRow(psprintf(\"XLOG/SWITCH_JUNK\"),\r\n>>+ 0, total_count, stats->junk_size, total_rec_len,\r\n>>+ 0, total_fpi_len, stats->junk_size, total_len);\r\n \r\n>Can't be described in the same way as \"XLogDumpStatsRow(psprintf(\"%s/%s\", desc->rm_name, id)...\"?\r\n>Only this part looks strange.\r\n>Why are the \"count\" and \"fpi_len\" fields 0?\r\nThe 'SWITCH_JUNK' is not a real record and it relys on 'XLOG_SWITCH' record, so I think we can't count\r\n'SWITCH_JUNK', so the \"count\" is 0. 
And it never contain FPI, so the \"fpi_len\" is 0.\r\n\r\nBut 0 value maybe looks strange, so in current version I show it like below:\r\nType N (%) Record size (%) FPI size (%) Combined size (%)\r\n---- - --- ----------- --- -------- --- ------------- ---\r\n...\r\nXLOG/SWITCH_JUNK - ( -) 11006248 ( 72.26) - ( -) 11006248 ( 65.78)\r\nTransaction/COMMIT 10 ( 0.03) 340 ( 0.00) 0 ( 0.00) 340 ( 0.00)\r\n\r\n>I think you need to improve the duplicate output in column \"XLOG/SWITCH_JUNK\".\r\nYes it's a bug and fixed.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 6 Jan 2021 11:14:37 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "RE: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": ">Thanks for review, and sorry for reply so later.\r\n>\r\n>>I reviewed the patch and found some problems.\r\n>>>+ if(startSegNo != endSegNo)\r\n>>>+ else if(record->ReadRecPtr / XLOG_BLCKSZ !=\r\n>>>+ if(rmid == RM_XLOG_ID && info == XLOG_SWITCH)\r\n>>>+ if(ri == RM_XLOG_ID)\r\n>>>+ if(info == XLOG_SWITCH)\r\n>>You need to put a space after the \"if\".\r\n>All fix and thanks for point the issue.\r\n>\r\n>>>@@ -24,6 +24,7 @@\r\n>>>#include \"common/logging.h\"\r\n>>>#include \"getopt_long.h\"\r\n>>>#include \"rmgrdesc.h\"\r\n>>>+#include \"catalog/pg_control.h\"\r\n>>I think the include statements should be arranged in alphabetical order.\r\n>Fix.\r\n\r\nThank you for your revision!\r\n\r\n>>>+ info = (rj << 4) & ~XLR_INFO_MASK;\r\n>>>+ if(info == XLOG_SWITCH)\r\n>>>+ XLogDumpStatsRow(psprintf(\"XLOG/SWITCH_JUNK\"),\r\n>>>+ 0, total_count, stats->junk_size, total_rec_len,\r\n>>>+ 0, total_fpi_len, stats->junk_size, total_len);\r\n>\r\n>>Can't be described in the same way as \"XLogDumpStatsRow(psprintf(\"%s/%s\", desc->rm_name, id)...\"?\r\n>>Only this part looks strange.\r\n>>Why are the 
"count\" and \"fpi_len\" fields 0?\r\n>The 'SWITCH_JUNK' is not a real record and it relys on 'XLOG_SWITCH' record, so I think we can't count\r\n>'SWITCH_JUNK', so the \"count\" is 0. And it never contain FPI, so the \"fpi_len\" is 0.\r\n>\r\n>But 0 value maybe looks strange, so in current version I show it like below:\r\n>Type N (%) Record size (%) FPI size (%) Combined size (%)\r\n>---- - --- ----------- --- -------- --- ------------- ---\r\n>...\r\n>XLOG/SWITCH_JUNK - ( -) 11006248 ( 72.26) - ( -) 11006248 ( 65.78)\r\n>Transaction/COMMIT 10 ( 0.03) 340 ( 0.00) 0 ( 0.00) 340 ( 0.00)\r\n>\r\n\r\nI just wanted to know why the \"count\" and \"fpi_len\" fields 0 are.\r\nSo, it would be nice to have 0 values. Sorry for confusing you.\r\n\r\nRegards,\r\nShinya Kato", "msg_date": "Thu, 7 Jan 2021 07:55:36 +0000", "msg_from": "<Shinya11.Kato@nttdata.com>", "msg_from_op": false, "msg_subject": "RE: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "On 1/7/21 2:55 AM, Shinya11.Kato@nttdata.com wrote:\n>>But 0 value maybe looks strange, so in current version I show it like below:\n>>Type N (%) Record size (%) FPI size (%) Combined size (%)\n>>---- - --- ----------- --- -------- --- ------------- ---\n>>...\n>>XLOG/SWITCH_JUNK - ( -) 11006248 ( 72.26) - ( -) 11006248 ( 65.78)\n>>Transaction/COMMIT 10 ( 0.03) 340 ( 0.00) 0 ( 0.00) 340 ( 0.00)\n> \n> I just wanted to know why the \"count\" and \"fpi_len\" fields 0 are.\n> So, it would be nice to have 0 values. Sorry for confusing you.\n\nKato, it's not clear to me if you were asking for - to be changed back to 0?\n\nYou marked the patch as Ready for Committer so I assume not, but it \nwould be better to say clearly that you think this patch is ready for a \ncommitter to look at.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 18 Mar 2021 10:26:03 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump."
}, { "msg_contents": ">>>But 0 value maybe looks strange, so in current version I show it like >below:\n>>>Type N (%) Record size (%) FPI size (%) Combined size (%)\n>>>---- - --- ----------- --- -------- --- ------------- --- ...\n>>>XLOG/SWITCH_JUNK - ( -) 11006248 ( 72.26) - ( -) 11006248 ( 65.78) \n>>>Transaction/COMMIT 10 ( 0.03) 340 ( 0.00) 0 ( 0.00) 340 ( 0.00)\n>> \n>> I just wanted to know why the \"count\" and \"fpi_len\" fields 0 are.\n>> So, it would be nice to have 0 values. Sorry for confusing you.\n>\n>Kato, it's not clear to me if you were asking for - to be changed back to 0?\n>\n>You marked the patch as Ready for Committer so I assume not, but it would be\n>better to say clearly that you think this patch is ready for a committer to look at.\n\nYes, I don't think it needs to be changed back to 0.\nI think this patch is ready for a committer to look at.\n\nRegards,\nShinya Kato\n\n\n", "msg_date": "Fri, 19 Mar 2021 06:06:47 +0000", "msg_from": "<Shinya11.Kato@nttdata.com>", "msg_from_op": false, "msg_subject": "RE: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "\n\nOn 2021/03/19 15:06, Shinya11.Kato@nttdata.com wrote:\n>>>> But 0 value maybe looks strange, so in current version I show it like >below:\n>>>> Type N (%) Record size (%) FPI size (%) Combined size (%)\n>>>> ---- - --- ----------- --- -------- --- ------------- --- ...\n>>>> XLOG/SWITCH_JUNK - ( -) 11006248 ( 72.26) - ( -) 11006248 ( 65.78)\n>>>> Transaction/COMMIT 10 ( 0.03) 340 ( 0.00) 0 ( 0.00) 340 ( 0.00)\n>>>\n>>> I just wanted to know why the \"count\" and \"fpi_len\" fields 0 are.\n>>> So, it would be nice to have 0 values. 
Sorry for confusing you.\n>>\n>> Kato, it's not clear to me if you were asking for - to be changed back to 0?\n>>\n>> You marked the patch as Ready for Committer so I assume not, but it would be\n>> better to say clearly that you think this patch is ready for a committer to look at.\n> \n> Yes, I don't think it needs to be changed back to 0.\n> I think this patch is ready for a committer to look at.\n\nWhat's the use case of this feature? I read through this thread briefly,\nbut I'm still not sure how useful this feature is.\n\nHoriguchi-san reported one issue upthread; --stats=record shows\ntwo lines for Transaction/COMMIT record. Probably this should be\nfixed separately.\n\nHoriguchi-san,\nDo you have updated version of that bug-fix patch?\nOr you started another thread for that issue?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 19 Mar 2021 18:27:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "On 2021/03/19 18:27, Fujii Masao wrote:\n> \n> \n> On 2021/03/19 15:06, Shinya11.Kato@nttdata.com wrote:\n>>>>> But 0 value maybe looks strange, so in current version I show it like >below:\n>>>>> Type N (%) Record size (%) FPI size (%) Combined size (%)\n>>>>> ---- - --- ----------- --- -------- --- ------------- --- ...\n>>>>> XLOG/SWITCH_JUNK - ( -) 11006248 ( 72.26) - ( -) 11006248 ( 65.78)\n>>>>> Transaction/COMMIT 10 ( 0.03) 340 ( 0.00) 0 ( 0.00) 340 ( 0.00)\n>>>>\n>>>> I just wanted to know why the \"count\" and \"fpi_len\" fields 0 are.\n>>>> So, it would be nice to have 0 values. 
Sorry for confusing you.\n>>>\n>>> Kato, it's not clear to me if you were asking for - to be changed back to 0?\n>>>\n>>> You marked the patch as Ready for Committer so I assume not, but it would be\n>>> better to say clearly that you think this patch is ready for a committer to look at.\n>>\n>> Yes, I don't think it needs to be changed back to 0.\n>> I think this patch is ready for a committer to look at.\n> \n> What's the use case of this feature? I read through this thread briefly,\n> but I'm still not sure how useful this feature is.\n> \n> Horiguchi-san reported one issue upthread; --stats=record shows\n> two lines for Transaction/COMMIT record. Probably this should be\n> fixed separately.\n> \n> Horiguchi-san,\n> Do you have updated version of that bug-fix patch?\n> Or you started another thread for that issue?\n\nI confirmed that only XACT records need to be processed differently.\nSo the patch that Horiguchi-san posted upthread looks good and enough\nto me. I added a bit more detail information into the comment in the patch.\nAttached is the updated version of the patch. Since this issue looks like\na bug, I'm thinking to back-patch that. Thought?\n\nBarring any objection, I will commit this.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 22 Mar 2021 11:22:07 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." 
}, { "msg_contents": ">-----Original Message-----\n>From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>Sent: Monday, March 22, 2021 11:22 AM\n>To: Shinya11.Kato@nttdata.com; david@pgmasters.net; movead.li@highgo.ca\n>Cc: pgsql-hackers@postgresql.org; andres@anarazel.de; michael@paquier.xyz;\n>ahsan.hadi@highgo.ca; horikyota.ntt@gmail.com\n>Subject: Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump.\n>\n>\n>\n>On 2021/03/19 18:27, Fujii Masao wrote:\n>>\n>>\n>> On 2021/03/19 15:06, Shinya11.Kato@nttdata.com wrote:\n>>>>>> But 0 value maybe looks strange, so in current version I show it like\n>>below:\n>>>>>> Type N (%) Record size (%) FPI size (%) Combined size (%)\n>>>>>> ---- - --- ----------- --- -------- --- ------------- --- ...\n>>>>>> XLOG/SWITCH_JUNK - ( -) 11006248 ( 72.26) - ( -) 11006248 ( 65.78)\n>>>>>> Transaction/COMMIT 10 ( 0.03) 340 ( 0.00) 0 ( 0.00) 340 ( 0.00)\n>>>>>\n>>>>> I just wanted to know why the \"count\" and \"fpi_len\" fields 0 are.\n>>>>> So, it would be nice to have 0 values. Sorry for confusing you.\n>>>>\n>>>> Kato, it's not clear to me if you were asking for - to be changed back to 0?\n>>>>\n>>>> You marked the patch as Ready for Committer so I assume not, but it\n>>>> would be better to say clearly that you think this patch is ready for a\n>committer to look at.\n>>>\n>>> Yes, I don't think it needs to be changed back to 0.\n>>> I think this patch is ready for a committer to look at.\n>>\n>> What's the use case of this feature? I read through this thread\n>> briefly, but I'm still not sure how useful this feature is.\n>>\n>> Horiguchi-san reported one issue upthread; --stats=record shows two\n>> lines for Transaction/COMMIT record. 
Probably this should be fixed\n>> separately.\n>>\n>> Horiguchi-san,\n>> Do you have updated version of that bug-fix patch?\n>> Or you started another thread for that issue?\n>\n>I confirmed that only XACT records need to be processed differently.\n>So the patch that Horiguchi-san posted upthread looks good and enough to me.\n>I added a bit more detail information into the comment in the patch.\n>Attached is the updated version of the patch. Since this issue looks like a bug,\n>I'm thinking to back-patch that. Thought?\n>\n>Barring any objection, I will commit this.\n\nI think it's good except for a typo \"thoes four bits\"\n\nRegards,\nShinya Kato\n\n\n\n", "msg_date": "Mon, 22 Mar 2021 05:03:19 +0000", "msg_from": "<Shinya11.Kato@nttdata.com>", "msg_from_op": false, "msg_subject": "RE: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "On 2021/03/22 14:03, Shinya11.Kato@nttdata.com wrote:\n>> Barring any objection, I will commit this.\n> \n> I think it's good except for a typo \"thoes four bits\"\n\nThanks for the review! Attached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 22 Mar 2021 14:07:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "At Mon, 22 Mar 2021 14:07:43 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/03/22 14:03, Shinya11.Kato@nttdata.com wrote:\n> >> Barring any objection, I will commit this.\n> >I think it's good except for a typo \"thoes four bits\"\n> \n> Thanks for the review! Attached is the updated version of the patch.\n\nThanks for picking it up. 
LGTM and applies cleanly.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 22 Mar 2021 17:49:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." }, { "msg_contents": "\n\nOn 2021/03/22 17:49, Kyotaro Horiguchi wrote:\n> At Mon, 22 Mar 2021 14:07:43 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2021/03/22 14:03, Shinya11.Kato@nttdata.com wrote:\n>>>> Barring any objection, I will commit this.\n>>> I think it's good except for a typo \"thoes four bits\"\n>>\n>> Thanks for the review! Attached is the updated version of the patch.\n> \n> Thanks for picking it up. LGTM and applies cleanly.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 23 Mar 2021 10:01:20 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Wrong statistics for size of XLOG_SWITCH during pg_waldump." } ]
[ { "msg_contents": "During the discussion on Unix-domain sockets on Windows, someone pointed \nout[0] abstract Unix-domain sockets. This is a variant of the normal \nUnix-domain sockets that don't use the file system but a separate \n\"abstract\" namespace. At the user interface, such sockets are \nrepresented by names starting with \"@\". I took a look at this and it \nwasn't hard to get working, so here is a patch. It's supposed to be \nsupported on Linux and Windows right now, but I haven't tested on Windows.\n\nI figure, there are so many different deployment options nowadays, this \ncould be useful somewhere. It relieves you from dealing with the file \nsystem, you don't have to set up /tmp or something under /var/run, you \ndon't need to make sure file system permissions are right. Also, there \nis no need for a lock file or file cleanup. (Unlike file-system \nnamespace sockets, abstract namespace sockets give an EADDRINUSE when \ntrying to bind to a name already in use.) Conversely, of course, you \ndon't get to use file-system permissions to manage access to the socket, \nbut that isn't essential functionality, so it's a trade-off users can \nmake on their own.\n\nAnd then some extra patches for surrounding cleanup. During testing I \nnoticed that the bind() failure hint \"Is another postmaster already \nrunning ...\" was shown in inappropriate situations, so I changed that to \nonly show for EADDRINUSE errors. 
(Maybe other error codes could be \nappropriate, but I couldn't find any more.)\n\nAnd then looking for other uses of EADDRINUSE I found some dead \nWindows-related code that can be cleaned up.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/20191218142419.fvv4ikm4wq4gnkco@isc.upenn.edu\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 9 Oct 2020 09:28:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "abstract Unix-domain sockets" }, { "msg_contents": "Updated patch set after some conflicts had emerged.\n\nOn 2020-10-09 09:28, Peter Eisentraut wrote:\n> During the discussion on Unix-domain sockets on Windows, someone pointed\n> out[0] abstract Unix-domain sockets. This is a variant of the normal\n> Unix-domain sockets that don't use the file system but a separate\n> \"abstract\" namespace. At the user interface, such sockets are\n> represented by names starting with \"@\". I took a look at this and it\n> wasn't hard to get working, so here is a patch. It's supposed to be\n> supported on Linux and Windows right now, but I haven't tested on Windows.\n> \n> I figure, there are so many different deployment options nowadays, this\n> could be useful somewhere. It relieves you from dealing with the file\n> system, you don't have to set up /tmp or something under /var/run, you\n> don't need to make sure file system permissions are right. Also, there\n> is no need for a lock file or file cleanup. (Unlike file-system\n> namespace sockets, abstract namespace sockets give an EADDRINUSE when\n> trying to bind to a name already in use.) Conversely, of course, you\n> don't get to use file-system permissions to manage access to the socket,\n> but that isn't essential functionality, so it's a trade-off users can\n> make on their own.\n> \n> And then some extra patches for surrounding cleanup. 
During testing I\n> noticed that the bind() failure hint \"Is another postmaster already\n> running ...\" was shown in inappropriate situations, so I changed that to\n> only show for EADDRINUSE errors. (Maybe other error codes could be\n> appropriate, but I couldn't find any more.)\n> \n> And then looking for other uses of EADDRINUSE I found some dead\n> Windows-related code that can be cleaned up.\n\nThis last piece has been committed.\n\n> \n> \n> [0]:\n> https://www.postgresql.org/message-id/20191218142419.fvv4ikm4wq4gnkco@isc.upenn.edu\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 22 Oct 2020 09:03:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On Thu, Oct 22, 2020 at 09:03:49AM +0200, Peter Eisentraut wrote:\n> On 2020-10-09 09:28, Peter Eisentraut wrote:\n>> During the discussion on Unix-domain sockets on Windows, someone pointed\n>> out[0] abstract Unix-domain sockets. This is a variant of the normal\n>> Unix-domain sockets that don't use the file system but a separate\n>> \"abstract\" namespace. At the user interface, such sockets are\n>> represented by names starting with \"@\". I took a look at this and it\n>> wasn't hard to get working, so here is a patch. It's supposed to be\n>> supported on Linux and Windows right now, but I haven't tested on Windows.\n\nYeah, peaking at the Windows docs, what you are trying to do here\nshould be supported (please note that I have not tested ). One\nreference:\nhttps://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/\n\n>> And then some extra patches for surrounding cleanup. During testing I\n>> noticed that the bind() failure hint \"Is another postmaster already\n>> running ...\" was shown in inappropriate situations, so I changed that to\n>> only show for EADDRINUSE errors. 
(Maybe other error codes could be\n>> appropriate, but I couldn't find any more.)\n>> \n>> And then looking for other uses of EADDRINUSE I found some dead\n>> Windows-related code that can be cleaned up.\n> \n> This last piece has been committed.\n\n+ <para>\n+ A value that starts with <literal>@</literal> specifies that a\n+ Unix-domain socket in the abstract namespace should be created\n+ (currently supported on Linux and Windows). In that case, this value\n+ does not specify a <quote>directory</quote> but a prefix from which\n+ the actual socket name is computed in the same manner as for the\n+ file-system namespace. While the abstract socket name prefix can be\n+ chosen freely, since it is not a file-system location, the convention\n+ is to nonetheless use file-system-like values such as\n+ <literal>@/tmp</literal>.\n+ </para>\n\nAs abstract namespaces don't have permissions, anyone knowing the name\nof the path, which should be unique, can have an access to the server.\nDo you think that the documentation should warn the user about that?\nThis feature is about easing the management part of the socket paths\nwhile throwing away the security aspect of it.\n\nWhen attempting to start a server that listens to the same port and\nuses the same abstract path, the second server started still shows\na hint referring to a file that does not exist:\nLOG: could not bind Unix address \"@tmp/.s.PGSQL.5432\": Address already\nin use\nHINT: Is another postmaster already running on port 5432? If not,\nremove socket file \"@tmp/.s.PGSQL.5432\" and retry.\n\nInstead of showing paths with at signs, wouldn't it be better to\nmention it is an abstract socket address?\n\nI am not sure that 0002 is an improvement. 
It would be more readable\nto move the part choosing what hint is adapted into a first block that\nselects the hint string rather than have the whole thing in a single\nelog() call.\n--\nMichael", "msg_date": "Mon, 9 Nov 2020 15:08:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On 2020-11-09 07:08, Michael Paquier wrote:\n> As abstract namespaces don't have permissions, anyone knowing the name\n> of the path, which should be unique, can have an access to the server.\n> Do you think that the documentation should warn the user about that?\n> This feature is about easing the management part of the socket paths\n> while throwing away the security aspect of it.\n\nWe could modify the documentation further. But note that the \ntraditional way of putting the socket into /tmp has the same properties, \nso this shouldn't be a huge shock.\n\n> When attempting to start a server that listens to the same port and\n> uses the same abstract path, the second server started still shows\n> a hint referring to a file that does not exist:\n> LOG: could not bind Unix address \"@tmp/.s.PGSQL.5432\": Address already\n> in use\n> HINT: Is another postmaster already running on port 5432? If not,\n> remove socket file \"@tmp/.s.PGSQL.5432\" and retry.\n> \n> Instead of showing paths with at signs, wouldn't it be better to\n> mention it is an abstract socket address?\n\nThe @ is the standard way of representing this in the user interface and \nthe configuration, so it seems sensible to me that way.\n\n> I am not sure that 0002 is an improvement. It would be more readable\n> to move the part choosing what hint is adapted into a first block that\n> selects the hint string rather than have the whole thing in a single\n> elog() call.\n\nCan you sketch how you would structure this? 
I realize it's not very \nelegant, but I couldn't come up with a better way that didn't involve \nhaving to duplicate some of the error messages into multiple branches.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Mon, 9 Nov 2020 09:04:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On 11/9/20 9:04 AM, Peter Eisentraut wrote:\n> On 2020-11-09 07:08, Michael Paquier wrote:\n>> As abstract namespaces don't have permissions, anyone knowing the name\n>> of the path, which should be unique, can have an access to the server.\n>> Do you think that the documentation should warn the user about that?\n>> This feature is about easing the management part of the socket paths\n>> while throwing away the security aspect of it.\n> \n> We could modify the documentation further. But note that the \n> traditional way of putting the socket into /tmp has the same properties, \n> so this shouldn't be a huge shock.\n\nOne issue with them is that they interact differently with kernel \nnamespaces than normal unix sockets do. Abstract sockets are handled by \nthe network namespaces, and not the file system namespaces. But I am not \nsure that this is our job to document.\n\nAndreas\n\n\n", "msg_date": "Mon, 9 Nov 2020 16:58:06 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On Mon, Nov 09, 2020 at 09:04:24AM +0100, Peter Eisentraut wrote:\n> On 2020-11-09 07:08, Michael Paquier wrote:\n> The @ is the standard way of representing this in the user interface and the\n> configuration, so it seems sensible to me that way.\n\nOk.\n\n> Can you sketch how you would structure this?
I realize it's not very\n> elegant, but I couldn't come up with a better way that didn't involve having\n> to duplicate some of the error messages into multiple branches.\n\nI think that I would use a StringInfo to build each sentence of the\nhint separately. The first sentence, \"Is another postmaster already\nrunning on port %d?\" is already known. Then the second sentence could\nbe built depending on the two other conditions. FWIW, I think that it\nis confusing to mention in the hint to remove a socket file that\ncannot be removed.\n--\nMichael", "msg_date": "Tue, 10 Nov 2020 15:24:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On 2020-11-10 07:24, Michael Paquier wrote:\n>> Can you sketch how you would structure this? I realize it's not very\n>> elegant, but I couldn't come up with a better way that didn't involve having\n>> to duplicate some of the error messages into multiple branches.\n> \n> I think that I would use a StringInfo to build each sentence of the\n> hint separately. The first sentence, \"Is another postmaster already\n> running on port %d?\" is already known. Then the second sentence could\n> be built depending on the two other conditions.\n\nI'm not sure concatenating sentences like that is okay for translatability.\n\n> FWIW, I think that it\n> is confusing to mention in the hint to remove a socket file that\n> cannot be removed.\n\nThinking about it further, I think the hint in the Unix-domain socket \ncase is bogus. A socket in the file-system namespace never reports \nEADDRINUSE anyway, it just overwrites the file. 
For sockets in the \nabstract namespace, you can get this error, but of course there is no \nfile to remove.\n\nPerhaps we should change the hint in both the Unix and the IP cases to:\n\n\"Is another postmaster already running at this address?\"\n\n(This also resolves the confusing reference to \"port\" in the Unix case.)\n\nOr we just drop the hint in the Unix case. The primary error message is \nclear enough.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Wed, 11 Nov 2020 13:39:17 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On Wed, Nov 11, 2020 at 01:39:17PM +0100, Peter Eisentraut wrote:\n> Thinking about it further, I think the hint in the Unix-domain socket case\n> is bogus. A socket in the file-system namespace never reports EADDRINUSE\n> anyway, it just overwrites the file. For sockets in the abstract namespace,\n> you can get this error, but of course there is no file to remove.\n> \n> Perhaps we should change the hint in both the Unix and the IP cases to:\n> \n> \"Is another postmaster already running at this address?\"\n> (This also resolves the confusing reference to \"port\" in the Unix case.)\n\nEr, it is perfectly possible for two postmasters to use the same unix\nsocket path, abstract or not, as long as they listen to different\nports (all nodes in a single TAP test do that for example). So we\nshould keep a reference to the port used in the log message, no?\n\n> Or we just drop the hint in the Unix case. 
The primary error message is\n> clear enough.\n\nDropping the hint for the abstract case sounds fine to me.\n--\nMichael", "msg_date": "Thu, 12 Nov 2020 16:12:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On 2020-11-12 08:12, Michael Paquier wrote:\n> On Wed, Nov 11, 2020 at 01:39:17PM +0100, Peter Eisentraut wrote:\n>> Thinking about it further, I think the hint in the Unix-domain socket case\n>> is bogus. A socket in the file-system namespace never reports EADDRINUSE\n>> anyway, it just overwrites the file. For sockets in the abstract namespace,\n>> you can get this error, but of course there is no file to remove.\n>>\n>> Perhaps we should change the hint in both the Unix and the IP cases to:\n>>\n>> \"Is another postmaster already running at this address?\"\n>> (This also resolves the confusing reference to \"port\" in the Unix case.)\n> Er, it is perfectly possible for two postmasters to use the same unix\n> socket path, abstract or not, as long as they listen to different\n> ports (all nodes in a single TAP test do that for example). So we\n> should keep a reference to the port used in the log message, no?\n\n\"Port\" is not a real thing for Unix-domain sockets, it's just something \nwe use internally and append to the socket file. The error message is \ncurrently something like\n\nERROR: could not bind Unix address \"/tmp/.s.PGSQL.5432\": Address \nalready in use\nHINT: Is another postmaster already running on port 5432? 
If not, \nremove socket file \"/tmp/.s.PGSQL.5432\" and retry.\n\nSo the mention of the \"port\" doesn't really add any information here and \njust introduces new terminology that isn't really relevant.\n\nMy idea is to change the message to:\n\nERROR: could not bind Unix address \"/tmp/.s.PGSQL.5432\": Address \nalready in use\nHINT: Is another postmaster already running at this address?\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Tue, 17 Nov 2020 23:18:12 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On Tue, Nov 17, 2020 at 11:18:12PM +0100, Peter Eisentraut wrote:\n> So the mention of the \"port\" doesn't really add any information here and\n> just introduces new terminology that isn't really relevant.\n> \n> My idea is to change the message to:\n> \n> ERROR: could not bind Unix address \"/tmp/.s.PGSQL.5432\": Address already in\n> use\n> HINT: Is another postmaster already running at this address?\n\nAre you saying that you would remove the hint telling to remove the\nsocket file even for the case of non-abstract files? For abstract\npaths, this makes sense. 
For both, removing the \"port\" part is indeed\na good idea as long as you keep around the full socket file name.\n--\nMichael", "msg_date": "Wed, 18 Nov 2020 11:00:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On Fri, Oct 9, 2020 at 3:28 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> During the discussion on Unix-domain sockets on Windows, someone pointed\n> out[0] abstract Unix-domain sockets.\n>\n\nThis reminds me on a somewhat random note that SSPI mode authentication\nshould work out of the box for unix domain sockets on Windows.\n\nThe main reason we probably can't use it as a default is that SSPI isn't\neasy to implement for pure language drivers, it requires Windows API calls\nto interact with the windows auth services. It's a pain in JDBC for example.", "msg_date": "Wed, 18 Nov 2020 11:05:50 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On Tue, Nov 17, 2020 at 7:00 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Nov 17, 2020 at 11:18:12PM +0100, Peter Eisentraut wrote:\n> > So the mention of the \"port\" doesn't really add any information here and\n> > just introduces new terminology that isn't really relevant.\n> >\n> > My idea is to change the message to:\n> >\n> > ERROR: could not bind Unix address \"/tmp/.s.PGSQL.5432\": Address\n> already in\n> > use\n> > HINT: Is another postmaster already running at this address?\n>\n> Are you saying that you would remove the hint telling to remove the\n> socket file even for the case of non-abstract files? For abstract\n> paths, this makes sense. For both, removing the \"port\" part is indeed\n> a good idea as long as you keep around the full socket file name.\n>\n>\n(resending to the list)\n\nGiven that \"port\" is a postgresql.conf setting its use here (and elsewhere)\nshould be taken to mean the value of that specific variable. To that end,\nI find the current description of port to be lacking - it should mention\nits usage as a qualifier when dealing with unix socket files (in addition\nto the existing wording under unix_socket_directories).\n\nIf we are going to debate semantics here \"bind unix address\" doesn't seem\ncorrect. could not create Unix socket file /tmp/.s.PGSQL.5432, it already\nexists.\n\nThe hint would be better written: Is another postmaster running with\nunix_socket_directories = /tmp and port = 5432?
If not, remove the unix\nsocket file /tmp/.s.PGSQL.5432 and retry.\n\nI don't see much benefit in trying to share logic/wording between the\nvarious messages and hints for the different ways the server can establish\ncommunication points.\n\nI agree that there isn't a useful hint for the abstract case as it\nshouldn't happen unless there is indeed another running instance with the\nsame configuration. Though a hint similar to the above, but without the\n\"remove and retry\" bit, probably wouldn't hurt.\n\nDavid J.", "msg_date": "Tue, 17 Nov 2020 20:35:39 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On 2020-11-18 04:35, David G. Johnston wrote:\n> Given that \"port\" is a postgresql.conf setting its use here (and \n> elsewhere) should be taken to mean the value of that specific variable. \n> To that end, I find the current description of port to be lacking - it \n> should mention its usage as a qualifier when dealing with unix socket \n> files (in addition to the existing wording under unix_socket_directories).\n> \n> If we are going to debate semantics here \"bind unix address\" doesn't \n> seem correct.  could not create Unix socket file /tmp/.s.PGSQL.5432, it \n> already exists.\n> \n> The hint would be better written: Is another postmaster running with \n> unix_socket_directories = /tmp and port = 5432?  If not, remove the unix \n> socket file /tmp/.s.PGSQL.5432 and retry.\n> \n> I don't see much benefit in trying to share logic/wording between the \n> various messages and hints for the different ways the server can \n> establish communication points.\n> \n> I agree that there isn't a useful hint for the abstract case as it \n> shouldn't happen unless there is indeed another running instance with \n> the same configuration.  
Though a hint similar to the above, but without \n> the \"remove and retry\" bit, probably wouldn't hurt.\n\nI think we are getting a bit sidetracked here with the message wording. \nThe reason I looked at this was that \"remove socket file and retry\" is \nnever an appropriate action with abstract sockets. And on further \nanalysis, it is never an appropriate action with any Unix-domain socket \n(because with file system namespace sockets, you never get an \nEADDRINUSE, so it's dead code). So my proposal here is to just delete \nthat line from the hint and leave the rest the same. There could be \nvalue in further refining and rephrasing this, but it ought to be a \nseparate thread.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Fri, 20 Nov 2020 16:06:43 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On Friday, November 20, 2020, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-11-18 04:35, David G. Johnston wrote:\n>\n>>\n>>\n>> I agree that there isn't a useful hint for the abstract case as it\n>> shouldn't happen unless there is indeed another running instance with the\n>> same configuration. Though a hint similar to the above, but without the\n>> \"remove and retry\" bit, probably wouldn't hurt.\n>>\n>\n> I think we are getting a bit sidetracked here with the message wording.\n> The reason I looked at this was that \"remove socket file and retry\" is\n> never an appropriate action with abstract sockets. And on further\n> analysis, it is never an appropriate action with any Unix-domain socket\n> (because with file system namespace sockets, you never get an EADDRINUSE,\n> so it's dead code). So my proposal here is to just delete that line from\n> the hint and leave the rest the same. 
There could be value in further\n> refining and rephrasing this, but it ought to be a separate thread.\n>\n\nIf there is dead code there is an underlying problem to address/discover,\nnot just removing the dead code. In this case are we saying that a new\nserver won’t ever fail to start because the socket file exists but instead\nwill just clobber the file with its own? Because given that error, and a\nserver process that failed to clean up after itself, the correction to take\nwould indeed seem to be to manually remove the file as the hint says. IOW,\nfix the code, not the message?\n\nDavid J.", "msg_date": "Fri, 20 Nov 2020 10:23:53 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On 2020-11-20 18:23, David G. Johnston wrote:\n> If there is dead code there is an underlying problem to \n> address/discover, not just removing the dead code.  In this case are we \n> saying that a new server won’t ever fail to start because the socket \n> file exists but instead will just clobber the file with its own? \n\nYes. 
(In practice, there will be an error with respect to the lock file\n> before you even get to that question, but that is different code\n> elsewhere.)\n>\n> > Because given that error, and a server process that failed to clean up\n> > after itself, the correction to take would indeed seem to be to manually\n> > remove the file as the hint says. IOW, fix the code, not the message?\n>\n> I don't understand that.\n>\n>\nSo presently there is no functioning code to prevent two PostgreSQL\ninstances from using the same socket so long as they do not also use the\nsame data directory? We only handle the case of an unclean crash - where\nthe pid and socket are both left behind - having the system tell the user\nto remove the pid lock file but then auto-replacing the socket (I was\nconflating the behavior with the pid lock file and the socket file).\n\nI would expect that we handle port misconfiguration also, by not\nauto-replacing the socket and instead have the existing error message (with\nmodified hint) remain behind. This provides behavior consistent with TCP\nport binding. Or is it the case that we always attempt to bind the TCP/IP\nport, regardless of the presence of a socket file, in which case the\nfailure for port binding does cover the socket situation as well? If this\nis the case, pointing that out in [1] and a code comment, while removing\nthat particular error as \"dead code\", would work.\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/docs/13/server-start.html#SERVER-START-FAILURES", "msg_date": "Mon, 23 Nov 2020 09:00:32 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On Fri, Nov 20, 2020 at 04:06:43PM +0100, Peter Eisentraut wrote:\n> I think we are getting a bit sidetracked here with the message wording. The\n> reason I looked at this was that \"remove socket file and retry\" is never an\n> appropriate action with abstract sockets. 
And on further analysis, it is\n> never an appropriate action with any Unix-domain socket (because with file\n> system namespace sockets, you never get an EADDRINUSE, so it's dead code).\n> So my proposal here is to just delete that line from the hint and leave the\n> rest the same.\n\nReading again this thread, +1 on that.\n--\nMichael", "msg_date": "Tue, 24 Nov 2020 10:57:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On Mon, Nov 23, 2020 at 9:00 AM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> Or is it the case that we always attempt to bind the TCP/IP port,\n> regardless of the presence of a socket file, in which case the failure for\n> port binding does cover the socket situation as well?\n>\n\nThis cannot always be the case since the listened-to IP address matters.\n\nI think the socket file error message hint is appropriate. I'd consider it\na bug if that code is effectively unreachable (the fact that the hint\nexists supports this conclusion). If we add \"abstract unix sockets\" where\nwe likewise prevent two servers from listening on the same channel, the\nabsence of such a check for the socket file is even more unexpected. At\nminimum we should at least declare whether we will even try and whether\nsuch a socket file check is best effort or simply generally reliable.\n\nDavid J.", "msg_date": "Tue, 24 Nov 2020 08:27:36 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On 2020-11-23 17:00, David G. Johnston wrote:\n> So presently there is no functioning code to prevent two PostgreSQL \n> instances from using the same socket so long as they do not also use the \n> same data directory?  We only handle the case of an unclean crash - \n> where the pid and socket are both left behind - having the system tell \n> the user to remove the pid lock file but then auto-replacing the socket \n> (I was conflating the behavior with the pid lock file and the socket file).\n> \n> I would expect that we handle port misconfiguration also, by not \n> auto-replacing the socket and instead have the existing error message \n> (with modified hint) remain behind.  This provides behavior consistent \n> with TCP port binding.  Or is it the case that we always attempt to bind \n> the TCP/IP port, regardless of the presence of a socket file, in which \n> case the failure for port binding does cover the socket situation as \n> well?  If this is the case, pointing that out in [1] and a code comment, \n> while removing that particular error as \"dead code\", would work.\n\nWe're subject to whatever the kernel behavior is. If the kernel doesn't \nreport address conflicts for Unix-domain sockets, then we can't do \nanything about that. 
Having an error message ready in case the kernel \ndoes report such an error is not useful if it never does.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Tue, 24 Nov 2020 16:45:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On Tue, Nov 24, 2020 at 8:45 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> We're subject to whatever the kernel behavior is. If the kernel doesn't\n> report address conflicts for Unix-domain sockets, then we can't do\n> anything about that. Having an error message ready in case the kernel\n> does report such an error is not useful if it never does.\n>\n\nIt's a file, we can check for its existence in user-space.\n\nDavid J.", "msg_date": "Tue, 24 Nov 2020 08:49:30 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On 2020-11-24 02:57, Michael Paquier wrote:\n> On Fri, Nov 20, 2020 at 04:06:43PM +0100, Peter Eisentraut wrote:\n>> I think we are getting a bit sidetracked here with the message wording. The\n>> reason I looked at this was that \"remove socket file and retry\" is never an\n>> appropriate action with abstract sockets. 
And on further analysis, it is\n>> never an appropriate action with any Unix-domain socket (because with file\n>> system namespace sockets, you never get an EADDRINUSE, so it's dead code).\n>> So my proposal here is to just delete that line from the hint and leave the\n>> rest the same.\n> \n> Reading again this thread, +1 on that.\n\ncommitted, thanks\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Wed, 25 Nov 2020 08:47:09 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: abstract Unix-domain sockets" }, { "msg_contents": "On 2020-11-24 16:49, David G. Johnston wrote:\n> On Tue, Nov 24, 2020 at 8:45 AM Peter Eisentraut \n> <peter.eisentraut@2ndquadrant.com \n> <mailto:peter.eisentraut@2ndquadrant.com>> wrote:\n> \n> We're subject to whatever the kernel behavior is.  If the kernel\n> doesn't\n> report address conflicts for Unix-domain sockets, then we can't do\n> anything about that.  Having an error message ready in case the kernel\n> does report such an error is not useful if it never does.\n> \n> \n> It's a file, we can check for its existence in user-space.\n\nBut not without race conditions. That's why we have the separate lock \nfile, so we can do this properly.\n\nAlso, even if one were to add code to check the file existence first, \nthis would be separate code and would not affect the behavior of the \nbind() call that we are discussing here.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Wed, 25 Nov 2020 08:49:26 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: abstract Unix-domain sockets" } ]
[ { "msg_contents": "Hi,\n\nRegarding the toast_tuple_target parameter of CREATE TABLE, the \ndocumentation says that it only affects External or Extended, but it \nactually affects the compression of Main as well.\n\nThe attached patch modifies the document to match the actual behavior.\n\nRegards,\n\n-- \nShinya Okano", "msg_date": "Fri, 09 Oct 2020 17:43:55 +0900", "msg_from": "Shinya Okano <btokanosn@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Add a description to the documentation that toast_tuple_target\n affects \"Main\"" }, { "msg_contents": "Hi,\n\nOn Fri, Oct 9, 2020 at 5:44 PM Shinya Okano <btokanosn@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> Regarding the toast_tuple_target parameter of CREATE TABLE, the\n> documentation says that it only affects External or Extended, but it\n> actually affects the compression of Main as well.\n>\n> The attached patch modifies the document to match the actual behavior.\n+1\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Tue, 13 Oct 2020 10:40:59 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add a description to the documentation that toast_tuple_target\n affects \"Main\"" }, { "msg_contents": "\n\nOn 2020/10/13 10:40, Kasahara Tatsuhito wrote:\n> Hi,\n> \n> On Fri, Oct 9, 2020 at 5:44 PM Shinya Okano <btokanosn@oss.nttdata.com> wrote:\n>>\n>> Hi,\n>>\n>> Regarding the toast_tuple_target parameter of CREATE TABLE, the\n>> documentation says that it only affects External or Extended, but it\n>> actually affects the compression of Main as well.\n>>\n>> The attached patch modifies the document to match the actual behavior.\n> +1\n\n+1\n\n+ we try to compress long column values or to move into TOAST tables, and\n\n\"we try to compress and/or move long column values into TOAST tables, and\" is better?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development 
Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 14 Oct 2020 01:30:06 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add a description to the documentation that toast_tuple_target\n affects \"Main\"" }, { "msg_contents": "On 2020-10-14 01:30, Fujii Masao wrote:\n> On 2020/10/13 10:40, Kasahara Tatsuhito wrote:\n>> On Fri, Oct 9, 2020 at 5:44 PM Shinya Okano \n>> <btokanosn@oss.nttdata.com> wrote:\n>>> Regarding the toast_tuple_target parameter of CREATE TABLE, the\n>>> documentation says that it only affects External or Extended, but it\n>>> actually affects the compression of Main as well.\n>>> \n>>> The attached patch modifies the document to match the actual \n>>> behavior.\n>> +1\n> \n> +1\n> \n> + we try to compress long column values or to move into TOAST \n> tables, and\n> \n> \"we try to compress and/or move long column values into TOAST tables,\n> and\" is better?\n\nThank you everyone for reviews.\nI attached the new version of the patch.\n\nRegards,\n\n-- \nShinya Okano", "msg_date": "Wed, 14 Oct 2020 16:21:37 +0900", "msg_from": "Shinya Okano <btokanosn@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add a description to the documentation that toast_tuple_target\n affects \"Main\"" }, { "msg_contents": "\n\nOn 2020/10/14 16:21, Shinya Okano wrote:\n> On 2020-10-14 01:30, Fujii Masao wrote:\n>> On 2020/10/13 10:40, Kasahara Tatsuhito wrote:\n>>> On Fri, Oct 9, 2020 at 5:44 PM Shinya Okano <btokanosn@oss.nttdata.com> wrote:\n>>>> Regarding the toast_tuple_target parameter of CREATE TABLE, the\n>>>> documentation says that it only affects External or Extended, but it\n>>>> actually affects the compression of Main as well.\n>>>>\n>>>> The attached patch modifies the document to match the actual behavior.\n>>> +1\n>>\n>> +1\n>>\n>> + we try to compress long column values or to move into TOAST tables, and\n>>\n>> \"we try to compress and/or move long column values into 
TOAST tables,\n>> and\" is better?\n> \n> Thank you everyone for reviews.\n> I attached the new version of the patch.\n\nThanks for updating the patch! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 15 Oct 2020 11:12:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add a description to the documentation that toast_tuple_target\n affects \"Main\"" } ]
[ { "msg_contents": "At function NIImportAffixes (src/backend/tsearch/spell.c).\n\nIf option \"flag\" is not handled, variable char flag[BUFSIZE] will remain\nuninitialized.\n\nregards,\nRanier Vilela", "msg_date": "Fri, 9 Oct 2020 09:36:42 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Uninitialized var utilized (src/backend/tsearch/spell.c)" }, { "msg_contents": "> On 9 Oct 2020, at 14:36, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> At function NIImportAffixes (src/backend/tsearch/spell.c).\n> \n> If option \"flag\" is not handled, variable char flag[BUFSIZE] will remain uninitialized.\n\nTo help reviewers, your report should contain an explanation of when that can\nhappen.\n\ncheers ./daniel\n\n", "msg_date": "Fri, 9 Oct 2020 16:07:58 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Uninitialized var utilized (src/backend/tsearch/spell.c)" }, { "msg_contents": "Em sex., 9 de out. de 2020 às 11:08, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n> > On 9 Oct 2020, at 14:36, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> > At function NIImportAffixes (src/backend/tsearch/spell.c).\n> >\n> > If option \"flag\" is not handled, variable char flag[BUFSIZE] will remain\n> uninitialized.\n>\n> To help reviewers, your report should contain an explanation of when that\n> can\n> happen.\n>\n> When option \"flag\" is not handled.\nif (STRNCMP(pstr, \"flag\") == 0)\n\nregards,\nRanier Vilela", "msg_date": "Fri, 9 Oct 2020 11:09:30 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Uninitialized var utilized (src/backend/tsearch/spell.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em sex., 9 de out. de 2020 às 11:08, Daniel Gustafsson <daniel@yesql.se>\n> escreveu:\n>> To help reviewers, your report should contain an explanation of when that\n>> can happen.\n\n> When option \"flag\" is not handled.\n> if (STRNCMP(pstr, \"flag\") == 0)\n\nI think what he means is that if the file contains no \"flag\" command\nbefore an affix entry then then we would arrive at NIAddAffix with an\nundefined flag buffer. That's illegal syntax according to a quick scan\nof the ispell(5) man page, which explains the lack of complaints; but\nit might be worth guarding against.\n\nAside from failing to initialize some variables that need it, it looks to\nme like NIImportAffixes is uselessly initializing some variables that\ndon't need it. I'd also be inclined to figure out which values are\nactually meant to be carried across lines, and declare the ones that\naren't inside the loop, just for clarity.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Oct 2020 10:37:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Uninitialized var utilized (src/backend/tsearch/spell.c)" }, { "msg_contents": "Em sex., 9 de out. de 2020 às 11:37, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Em sex., 9 de out. 
de 2020 às 11:08, Daniel Gustafsson <daniel@yesql.se>\n> > escreveu:\n> >> To help reviewers, your report should contain an explanation of when\n> that\n> >> can happen.\n>\n> > When option \"flag\" is not handled.\n> > if (STRNCMP(pstr, \"flag\") == 0)\n>\n> I think what he means is that if the file contains no \"flag\" command\n> before an affix entry then then we would arrive at NIAddAffix with an\n> undefined flag buffer. That's illegal syntax according to a quick scan\n> of the ispell(5) man page, which explains the lack of complaints; but\n> it might be worth guarding against.\n>\n> Aside from failing to initialize some variables that need it, it looks to\n> me like NIImportAffixes is uselessly initializing some variables that\n> don't need it. I'd also be inclined to figure out which values are\n> actually meant to be carried across lines, and declare the ones that\n> aren't inside the loop, just for clarity.\n>\nThanks Tom, for the great explanation.\n\nregards,\nRanier Vilela", "msg_date": "Fri, 9 Oct 2020 11:38:07 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Uninitialized var utilized (src/backend/tsearch/spell.c)" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16663\nLogged by: Denis Patron\nEmail address: denis.patron@previnet.it\nPostgreSQL version: 11.9\nOperating system: CentOS 7\nDescription: \n\nI have an index, which at the file system level, is made up of multiple\nsegments (file: <id>.1, <id>.2 ecc). When I DROP INDEX, the index is dropped\nin Postgresql but at the file system level, the segments are marked as\n\"deleted\". if I check with the lsof command, I see that the segments are in\nuse from an idle connection. This does not happen if the index is formed by\nonly one segment (in my case <1Gb). How can I prevent this?\r\nthanks", "msg_date": "Fri, 09 Oct 2020 13:24:15 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #16663: DROP INDEX did not free up disk space: idle connection\n hold file marked as deleted" }, { "msg_contents": "This is not a bug.\n\nAt Fri, 09 Oct 2020 13:24:15 +0000, PG Bug reporting form <noreply@postgresql.org> wrote in \n> The following bug has been logged on the website:\n> \n> Bug reference: 16663\n> Logged by: Denis Patron\n> Email address: denis.patron@previnet.it\n> PostgreSQL version: 11.9\n> Operating system: CentOS 7\n> Description: \n> \n> I have an index, which at the file system level, is made up of multiple\n> segments (file: <id>.1, <id>.2 ecc). When I DROP INDEX, the index is dropped\n> in Postgresql but at the file system level, the segments are marked as\n> \"deleted\". if I check with the lsof command, I see that the segments are in\n> use from an idle connection. This does not happen if the index is formed by\n> only one segment (in my case <1Gb). 
How can I prevent this?\n> thanks\n\nThat references to deleted files will dissapear at the beginning of\nthe next transaction.\n\nAt the time a relation including an index is dropped, the first\nsegment file (named as \"<id>\" without a suffix number) is left behind\nso the file is not shown as \"(deleted)\" in lsof output.\n\nThe next checkpoint removes the first segment.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 14 Oct 2020 12:05:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "Hi,\n\nOn 2020-10-14 12:05:10 +0900, Kyotaro Horiguchi wrote:\n> This is not a bug.\n>\n> At Fri, 09 Oct 2020 13:24:15 +0000, PG Bug reporting form <noreply@postgresql.org> wrote in\n> > The following bug has been logged on the website:\n> >\n> > Bug reference: 16663\n> > Logged by: Denis Patron\n> > Email address: denis.patron@previnet.it\n> > PostgreSQL version: 11.9\n> > Operating system: CentOS 7\n> > Description:\n> >\n> > I have an index, which at the file system level, is made up of multiple\n> > segments (file: <id>.1, <id>.2 ecc). When I DROP INDEX, the index is dropped\n> > in Postgresql but at the file system level, the segments are marked as\n> > \"deleted\". if I check with the lsof command, I see that the segments are in\n> > use from an idle connection. This does not happen if the index is formed by\n> > only one segment (in my case <1Gb). 
How can I prevent this?\n> > thanks\n>\n> That references to deleted files will dissapear at the beginning of\n> the next transaction.\n>\n> At the time a relation including an index is dropped, the first\n> segment file (named as \"<id>\" without a suffix number) is left behind\n> so the file is not shown as \"(deleted)\" in lsof output.\n\nI think we should consider either occasionally sending a sinval catchup\ninterrupt to backends that have been idle for a while, or to use a timer\nthat we use to limit the maximum time until we process sinvals. Just\nhaving to wait till all backends become busy and process sinval events\ndoesn't really seem like good approach to me.\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Tue, 13 Oct 2020 21:35:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "Andres Freund wrote\n> Hi,\n> \n> On 2020-10-14 12:05:10 +0900, Kyotaro Horiguchi wrote:\n>> This is not a bug.\n>>\n>> At Fri, 09 Oct 2020 13:24:15 +0000, PG Bug reporting form &lt;\n\n> noreply@\n\n> &gt; wrote in\n>> > The following bug has been logged on the website:\n>> >\n>> > Bug reference: 16663\n>> > Logged by: Denis Patron\n>> > Email address: \n\n> denis.patron@\n\n>> > PostgreSQL version: 11.9\n>> > Operating system: CentOS 7\n>> > Description:\n>> >\n>> > I have an index, which at the file system level, is made up of multiple\n>> > segments (file: \n> <id>\n> .1, \n> <id>\n> .2 ecc). When I DROP INDEX, the index is dropped\n>> > in Postgresql but at the file system level, the segments are marked as\n>> > \"deleted\". if I check with the lsof command, I see that the segments\n>> are in\n>> > use from an idle connection. This does not happen if the index is\n>> formed by\n>> > only one segment (in my case <1Gb). 
How can I prevent this?\n>> > thanks\n>>\n>> That references to deleted files will dissapear at the beginning of\n>> the next transaction.\n>>\n>> At the time a relation including an index is dropped, the first\n>> segment file (named as \"\n> <id>\n> \" without a suffix number) is left behind\n>> so the file is not shown as \"(deleted)\" in lsof output.\n> \n> I think we should consider either occasionally sending a sinval catchup\n> interrupt to backends that have been idle for a while, or to use a timer\n> that we use to limit the maximum time until we process sinvals. Just\n> having to wait till all backends become busy and process sinval events\n> doesn't really seem like good approach to me.\n> \n> Regards,\n> \n> Andres\n\n\n\nthanks for replying.\nthe problem is that I have a very large database, with indexes of up to 70\nGb. while I redo the indexes in concurrently mode, if an idle transaction is\nusing the index in question, the segment file (<id> _1 <id> _2 etc) of the\nindex remains in the filesystem (marked as deleted) as long as the idle\nconnection that it is blocking it does not make another transaction. this\nmeans that I can have hundreds of GB of space occupied by files marked\n\"deleted\", and this for hours. 
the risk is to run out of free space\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-bugs-f2117394.html\n\n\n", "msg_date": "Tue, 13 Oct 2020 23:47:34 -0700 (MST)", "msg_from": "\"denis.patron\" <denis.patron@previnet.it>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "On Wed, Oct 14, 2020 at 5:35 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-10-14 12:05:10 +0900, Kyotaro Horiguchi wrote:\n> > At the time a relation including an index is dropped, the first\n> > segment file (named as \"<id>\" without a suffix number) is left behind\n> > so the file is not shown as \"(deleted)\" in lsof output.\n>\n> I think we should consider either occasionally sending a sinval catchup\n> interrupt to backends that have been idle for a while, or to use a timer\n> that we use to limit the maximum time until we process sinvals. Just\n> having to wait till all backends become busy and process sinval events\n> doesn't really seem like good approach to me.\n\nOops, I also replied to this but now I see that I accidentally replied\nonly to Horiguchi-san and not the list! 
I was thinking that we should\nperhaps consider truncating the files to give back the disk space (as\nwe do for the first segment), so that it doesn't matter so much how\nlong other backends take to process SHAREDINVALSMGR_ID, close their\ndescriptors and release the inode.\n\n\n", "msg_date": "Thu, 15 Oct 2020 08:08:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Oct 14, 2020 at 5:35 PM Andres Freund <andres@anarazel.de> wrote:\n>> I think we should consider either occasionally sending a sinval catchup\n>> interrupt to backends that have been idle for a while, or to use a timer\n>> that we use to limit the maximum time until we process sinvals. Just\n>> having to wait till all backends become busy and process sinval events\n>> doesn't really seem like good approach to me.\n\n> Oops, I also replied to this but now I see that I accidentally replied\n> only to Horiguchi-san and not the list! I was thinking that we should\n> perhaps consider truncating the files to give back the disk space (as\n> we do for the first segment), so that it doesn't matter so much how\n> long other backends take to process SHAREDINVALSMGR_ID, close their\n> descriptors and release the inode.\n\n+1, I was also thinking that. 
It'd be pretty easy to fit into the\nexisting system structure (I think, without having looked at the relevant\ncode lately), and it would not add any overhead to normal processing.\nInstalling a timeout to handle this per Andres' idea inevitably *would*\nadd overhead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Oct 2020 15:14:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "On Thu, Oct 15, 2020 at 8:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Wed, Oct 14, 2020 at 5:35 PM Andres Freund <andres@anarazel.de> wrote:\n> >> I think we should consider either occasionally sending a sinval catchup\n> >> interrupt to backends that have been idle for a while, or to use a timer\n> >> that we use to limit the maximum time until we process sinvals. Just\n> >> having to wait till all backends become busy and process sinval events\n> >> doesn't really seem like good approach to me.\n>\n> > Oops, I also replied to this but now I see that I accidentally replied\n> > only to Horiguchi-san and not the list! I was thinking that we should\n> > perhaps consider truncating the files to give back the disk space (as\n> > we do for the first segment), so that it doesn't matter so much how\n> > long other backends take to process SHAREDINVALSMGR_ID, close their\n> > descriptors and release the inode.\n>\n> +1, I was also thinking that. It'd be pretty easy to fit into the\n> existing system structure (I think, without having looked at the relevant\n> code lately), and it would not add any overhead to normal processing.\n> Installing a timeout to handle this per Andres' idea inevitably *would*\n> add overhead.\n\nAlright, here is a first swing at making our behaviour more consistent\nin two ways:\n\n1. The first segment should be truncated even in recovery.\n2. 
Later segments should be truncated on commit.\n\nI don't know why the existing coding decides not to try to unlink the\nlater segments if the truncate of segment 0 failed. We already\ncommitted, we should plough on.", "msg_date": "Thu, 15 Oct 2020 14:26:36 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "Ouch. You beat me to it.\n\nAt Thu, 15 Oct 2020 14:26:36 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Thu, Oct 15, 2020 at 8:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > On Wed, Oct 14, 2020 at 5:35 PM Andres Freund <andres@anarazel.de> wrote:\n> > >> I think we should consider either occasionally sending a sinval catchup\n> > >> interrupt to backends that have been idle for a while, or to use a timer\n> > >> that we use to limit the maximum time until we process sinvals. Just\n> > >> having to wait till all backends become busy and process sinval events\n> > >> doesn't really seem like good approach to me.\n> >\n> > > Oops, I also replied to this but now I see that I accidentally replied\n> > > only to Horiguchi-san and not the list! I was thinking that we should\n> > > perhaps consider truncating the files to give back the disk space (as\n> > > we do for the first segment), so that it doesn't matter so much how\n> > > long other backends take to process SHAREDINVALSMGR_ID, close their\n> > > descriptors and release the inode.\n> >\n> > +1, I was also thinking that. 
It'd be pretty easy to fit into the\n> > existing system structure (I think, without having looked at the relevant\n> > code lately), and it would not add any overhead to normal processing.\n> > Installing a timeout to handle this per Andres' idea inevitably *would*\n> > add overhead.\n> \n> Alright, here is a first swing at making our behaviour more consistent\n> in two ways:\n> \n> 1. The first segment should be truncated even in recovery.\n> 2. Later segments should be truncated on commit.\n> \n> I don't know why the existing coding decides not to try to unlink the\n> later segments if the truncate of segment 0 failed. We already\n> committed, we should plough on.\n\nI was trying almost the same thing, except for how to emit the error\nmessage for truncation and not trying to unlink if truncation ends\nwith ENOENT for following segments.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 15 Oct 2020 10:42:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "thanks for the patch. 
\nDo you think it can be included in the next minor releases or the only\nsolution will be to recompile?\nregards\nDenis\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-bugs-f2117394.html\n\n\n", "msg_date": "Wed, 14 Oct 2020 23:57:11 -0700 (MST)", "msg_from": "\"denni.pat\" <denni.pat@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "On Thu, Oct 15, 2020 at 8:20 PM denni.pat <denni.pat@gmail.com> wrote:\n> thanks for the patch.\n> Do you think it can be included in the next minor releases or the only\n> solution will be to recompile?\n\nI would vote +1 for back-patching a fix for this problem (that is,\npushing it into the minor releases), because I agree that it's very\narguably a bug that we treat the segments differently, and looking\naround I do see reports of people having to terminate processes to get\ntheir disk space back. I'd definitely want a consensus on that plan\nfrom some experienced reviewers and testers, though. For anyone\nwanting to test this, you might want to set RELSEGSIZE to a smaller\nnumber in src/include/pg_config.h.\n\n\n", "msg_date": "Fri, 16 Oct 2020 12:54:32 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "Thomas,\nI got into the patch and I think it's worth being committed and\nbackpatched.\nBTW I noticed that sometimes the same comparisons are done twice, and I\nmade a very minor refactor of the code. 
PFA v2 of a patch if you don't mind.\nAs for the question on what to do with the additional segments if the first\none failed to be truncated, I don't consider myself experienced enough and\nsurely someone else's independent opinion is very much welcome.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 11 Nov 2020 18:13:13 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHi, I have tested the feature and it worked well. \r\nOne thing that doesn't matter is that the modify here seems unnecessary, right?\r\n\r\n> mdunlinkfork(RelFileNodeBackend rnode, ForkNumber forkNum, bool isRedo)\r\n> {\r\n> char\t *path;\r\n> -\tint\t\t\tret;\r\n> +\tint\t\t\tret = 0;\r\n> path = relpath(rnode, forkNum", "msg_date": "Thu, 19 Nov 2020 08:20:18 +0000", "msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": ">\n> One thing that doesn't matter is that the modify here seems unnecessary,\n> right?\n>\n> > mdunlinkfork(RelFileNodeBackend rnode, ForkNumber forkNum, bool isRedo)\n> > {\n> > char *path;\n> > - int ret;\n> > + int ret = 0;\n> > path = relpath(rnode, forkNum\n\n\nI suppose it is indeed necessary as otherwise the result of the comparison\nis not defined in case of 'else' block in the mdunlinkfork() :\n346 else\n347 {\n348 /* Prevent other backends' fds from holding on to the disk\nspace */\n349 do_truncate(path);\n.....\n356 * Delete any additional 
segments.\n357 */\n358 if (ret >= 0)\n----------^^^^^^^\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 19 Nov 2020 19:54:54 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "Yes, It's my fault. 
You're right.\n\nPavel Borisov <pashkin.elfe@gmail.com> wrote on Thu, Nov 19, 2020 at 11:55 PM:\n\n> One thing that doesn't matter is that the modify here seems unnecessary,\n>> right?\n>>\n>> > mdunlinkfork(RelFileNodeBackend rnode, ForkNumber forkNum, bool isRedo)\n>> > {\n>> > char *path;\n>> > - int ret;\n>> > + int ret = 0;\n>> > path = relpath(rnode, forkNum\n>\n>\n> I suppose it is indeed necessary as otherwise the result of the comparison\n> is not defined in case of 'else' block in the mdunlinkfork() :\n> 346 else\n> 347 {\n> 348 /* Prevent other backends' fds from holding on to the disk\n> space */\n> 349 do_truncate(path);\n> .....\n> 356 * Delete any additional segments.\n> 357 */\n> 358 if (ret >= 0)\n> ----------^^^^^^^\n>\n> --\n> Best regards,\n> Pavel Borisov\n>\n> Postgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n>\n\nSo in the present logic, *ret* is always 0 if it is not in recovery mode\n(and other *if* conditions are not satisfied). But when the *if* condition\nis satisfied, it is possible to skip the deletion of additional segments.\nConsidering that our goal is to always try to unlink additional segments,\n*ret* seems unnecessary here. The code flow looks like:\n\n> if (isRedo || .....)\n> {\n> int ret; /* move to here */\n> ....\n> }\n> else\n> { }\n>\n> /* Delete any additional segments. */\n> if (true)\n> ...\n\nOr is there any reason to allow us to skip the attempt to delete additional\nsegments in recovery mode?", "msg_date": "Fri, 20 Nov 2020 09:50:46 +0800", "msg_from": "Nail Carpenter <carpenter.nail.cz@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "I verified the patch \"v2-0001-Free-disk-space-for-dropped-relations-on-commit.patch\" on master branch \"0cc99327888840f2bf572303b68438e4caf62de9\". It works for me. 
Below is my test procedure and results.\r\n\r\n=== Before the patch ===\r\n#1 from psql console 1, create table and index then insert enough data\r\npostgres=# CREATE TABLE test_tbl ( a int, b text);\r\npostgres=# CREATE INDEX idx_test_tbl on test_tbl (a);\r\npostgres=# INSERT INTO test_tbl SELECT generate_series(1,80000000),'Hello world!';\r\npostgres=# INSERT INTO test_tbl SELECT generate_series(1,80000000),'Hello world!';\r\n\r\n#2 check files size \r\ndavid:12867$ du -h\r\n12G\t.\r\n\r\n#3 from psql console 2, drop the index\r\npostgres=# drop index idx_test_tbl;\r\n\r\n#4 check files size in different ways,\r\ndavid:12867$ du -h\r\n7.8G\t.\r\ndavid:12867$ ls -l\r\n...\r\n-rw------- 1 david david 0 Nov 23 20:07 16402\r\n...\r\n\r\n$ lsof -nP | grep '(deleted)' |grep pgdata\r\n...\r\npostgres 25736 david 45u REG 259,2 0 12592758 /home/david/sandbox/postgres/pgdata/base/12867/16402 (deleted)\r\npostgres 25736 david 49u REG 259,2 1073741824 12592798 /home/david/sandbox/postgres/pgdata/base/12867/16402.1 (deleted)\r\npostgres 25736 david 53u REG 259,2 1073741824 12592739 /home/david/sandbox/postgres/pgdata/base/12867/16402.2 (deleted)\r\npostgres 25736 david 59u REG 259,2 372604928 12592800 /home/david/sandbox/postgres/pgdata/base/12867/16402.3 (deleted)\r\n...\r\n\r\nThe index relnode id \"16402\" displays size \"0\" from postgres database folder, but when using lsof to check, all 16402.x are still in used by a psql connection except 16402 is set to 0. 
Check it again after an hour, lsof shows the same results.\r\n\r\n=== After the patch ===\r\nRepeat step 1 ~ 4, lsof shows all the index relnode files (in this case, the index relnode id 16389) are removed within about 1 minute.\r\n$ lsof -nP | grep '(deleted)' |grep pgdata\r\n...\r\npostgres 32707 david 66u REG 259,2 0 12592763 /home/david/sandbox/postgres/pgdata/base/12867/16389.1 (deleted)\r\npostgres 32707 david 70u REG 259,2 0 12592823 /home/david/sandbox/postgres/pgdata/base/12867/16389.2 (deleted)\r\npostgres 32707 david 74u REG 259,2 0 12592805 /home/david/sandbox/postgres/pgdata/base/12867/16389.3 (deleted)\r\n...\r\n\r\nOne thing interesting for me is that, if the index is created after data records has been inserted, then lsof doesn't show this issue.", "msg_date": "Tue, 24 Nov 2020 18:36:33 +0000", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nGiven we got two other reviews from Neil and David, I think I can finalize my own review and mark the patch as ready for committer if nobody has objections.\r\nThank you!\r\n\r\nPavel Borisov\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Tue, 24 Nov 2020 18:59:17 +0000", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "On Wed, Nov 25, 2020 at 8:00 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> The new status of this patch is: Ready for Committer\n\nThanks! One small thing bothered me about the last version of the\npatch. 
It tried to unlink when ENOENT had already been encountered by\nopen(2), so COMMIT of a DROP looks like:\n\nopenat(AT_FDCWD, \"base/14208/16384\", O_RDWR) = 54\nftruncate(54, 0) = 0\nclose(54) = 0\nopenat(AT_FDCWD, \"base/14208/16384.1\", O_RDWR) = -1 ENOENT\nunlink(\"base/14208/16384.1\") = -1 ENOENT\nopenat(AT_FDCWD, \"base/14208/16384_fsm\", O_RDWR) = -1 ENOENT\nunlink(\"base/14208/16384_fsm\") = -1 ENOENT\nopenat(AT_FDCWD, \"base/14208/16384_vm\", O_RDWR) = -1 ENOENT\nunlink(\"base/14208/16384_vm\") = -1 ENOENT\nopenat(AT_FDCWD, \"base/14208/16384_init\", O_RDWR) = -1 ENOENT\nunlink(\"base/14208/16384_init\") = -1 ENOENT\n\nSo I fixed that, by adding a return value to do_truncate() and\nchecking it. That's the version I plan to commit tomorrow, unless\nthere are further comments or objections. I've also attached a\nversion suitable for REL_11_STABLE and earlier branches (with a name\nthat cfbot should ignore), where things are slightly different. In\nthose branches, the register_forget_request() logic is elsewhere.\n\nWhile looking at trace output, I figured we should just use\ntruncate(2) on non-Windows, on the master branch only. 
It's not like\nit really makes much difference, but I don't see why we shouldn't\nallow ourselves to use ancient standardised Unix syscalls when we can.\nThat'd get us down to just the following when committing a DROP:\n\ntruncate(\"base/14208/16396\", 0) = 0\ntruncate(\"base/14208/16396.1\", 0) = -1 ENOENT\ntruncate(\"base/14208/16396_fsm\", 0) = -1 ENOENT\ntruncate(\"base/14208/16396_vm\", 0) = -1 ENOENT\ntruncate(\"base/14208/16396_init\", 0) = -1 ENOENT", "msg_date": "Mon, 30 Nov 2020 18:59:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "On Mon, Nov 30, 2020 at 6:59 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Nov 25, 2020 at 8:00 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > The new status of this patch is: Ready for Committer\n\n> ... That's the version I plan to commit tomorrow, unless\n> there are further comments or objections. ...\n\nDone, and back-patched.\n\nI thought a bit more about the fact that we fail to unlink\nhigher-numbered segments in certain error cases, potentially leaving\nstray files behind. As far as I can see, nothing we do in this\ncode-path is going to be a bullet-proof solution to that problem. One\nsimple idea would be for the checkpointer to refuse to unlink segment\n0 (thereby allowing the relfilenode to be recycled) until it has\nscanned the parent directory for any related files that shouldn't be\nthere.\n\n> While looking at trace output, I figured we should just use\n> truncate(2) on non-Windows, on the master branch only. 
It's not like\n> it really makes much difference, but I don't see why we shouldn't\n> allow ourselves to use ancient standardised Unix syscalls when we can.\n\nAlso pushed, but only to master.\n\n\n", "msg_date": "Tue, 1 Dec 2020 15:48:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "On Mon, Nov 30, 2020 at 06:59:40PM +1300, Thomas Munro wrote:\n> So I fixed that, by adding a return value to do_truncate() and\n> checking it. That's the version I plan to commit tomorrow, unless\n> there are further comments or objections. I've also attached a\n> version suitable for REL_11_STABLE and earlier branches (with a name\n> that cfbot should ignore), where things are slightly different. In\n> those branches, the register_forget_request() logic is elsewhere.\n\nHmm. Sorry for arriving late at the party. But is that really\nsomething suitable for a backpatch? Sure, it is not optimal to not\ntruncate all the segments when a transaction dropping a relation\ncommits, but this was not completely broken either.\n--\nMichael", "msg_date": "Tue, 1 Dec 2020 11:55:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "On Tue, Dec 1, 2020 at 3:55 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Nov 30, 2020 at 06:59:40PM +1300, Thomas Munro wrote:\n> > So I fixed that, by adding a return value to do_truncate() and\n> > checking it. That's the version I plan to commit tomorrow, unless\n> > there are further comments or objections. I've also attached a\n> > version suitable for REL_11_STABLE and earlier branches (with a name\n> > that cfbot should ignore), where things are slightly different. 
In\n> > those branches, the register_forget_request() logic is elsewhere.\n>\n> Hmm. Sorry for arriving late at the party. But is that really\n> something suitable for a backpatch? Sure, it is not optimal to not\n> truncate all the segments when a transaction dropping a relation\n> commits, but this was not completely broken either.\n\nI felt on balance it was a \"bug\", since it causes operational\ndifficulties for people and was clearly not our intended behaviour,\nand I announced this intention 6 weeks ago. Of course I'll be happy\nto revert it from the back-branches if that's the consensus. Any\nother opinions?\n\n\n", "msg_date": "Tue, 1 Dec 2020 16:06:48 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" }, { "msg_contents": "On Tue, Dec 01, 2020 at 04:06:48PM +1300, Thomas Munro wrote:\n> I felt on balance it was a \"bug\", since it causes operational\n> difficulties for people and was clearly not our intended behaviour,\n> and I announced this intention 6 weeks ago.\n\nOops, sorry for missing this discussion for such a long time :/\n\n> Of course I'll be happy to revert it from the back-branches if\n> that's the consensus. Any > other opinions?\n\nIf there are no other opinions, I am also fine to rely on your\njudgment.\n--\nMichael", "msg_date": "Tue, 1 Dec 2020 15:01:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #16663: DROP INDEX did not free up disk space: idle\n connection hold file marked as deleted" } ]
[ { "msg_contents": "I think that TupIsNull macro is no longer appropriate, to protect\nExecCopySlot.\n\nSee at tuptable.h:\n#define TupIsNull(slot) \\\n((slot) == NULL || TTS_EMPTY(slot))\n\nIf var node->group_pivot is NULL, ExecCopySlot will\ndereference a null pointer (first arg).\n\nMaybe, this can be related to a bug reported in the btree deduplication.\n\nregards,\nRanier Vilela", "msg_date": "Fri, 9 Oct 2020 12:24:16 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "On Fri, Oct 09, 2020 at 12:24:16PM -0300, Ranier Vilela wrote:\n>I think that TupIsNull macro is no longer appropriate, to protect\n>ExecCopySlot.\n>\n>See at tuptable.h:\n>#define TupIsNull(slot) \\\n>((slot) == NULL || TTS_EMPTY(slot))\n>\n>If var node->group_pivot is NULL, ExecCopySlot will\n>dereference a null pointer (first arg).\n>\n\nNo. The C standard says there's a \"sequence point\" [1] between the left\nand right arguments of the || operator, and that the expressions are\nevaluated from left to right. So the program will do the first check,\nand if the pointer really is NULL it won't do the second one (because\nthat is not necessary for determining the result). 
Similarly for the &&\noperator, of course.\n\nHad this been wrong, surely some of the other places TupIsNull would\nbe wrong too (and there are hundreds of them).\n\n>Maybe, this can be related to a bug reported in the btree deduplication.\n>\n\nNot sure which bug you mean, but this piece of code is pretty unrelated\nto btree in general, so I don't see any link.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 9 Oct 2020 19:05:02 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "On Fri, Oct 9, 2020 at 2:05 PM, Tomas Vondra <\ntomas.vondra@2ndquadrant.com> wrote:\n\n> On Fri, Oct 09, 2020 at 12:24:16PM -0300, Ranier Vilela wrote:\n> >I think that TupIsNull macro is no longer appropriate, to protect\n> >ExecCopySlot.\n> >\n> >See at tuptable.h:\n> >#define TupIsNull(slot) \\\n> >((slot) == NULL || TTS_EMPTY(slot))\n> >\n> >If var node->group_pivot is NULL, ExecCopySlot will\n> >dereference a null pointer (first arg).\n> >\n>\n> No. The C standard says there's a \"sequence point\" [1] between the left\n> and right arguments of the || operator, and that the expressions are\n> evaluated from left to right. So the program will do the first check,\n> and if the pointer really is NULL it won't do the second one (because\n> that is not necessary for determining the result). Similarly for the &&\n> operator, of course.\n>\nReally.\nThe trap is not on the second part of expression. 
Is in the first.\nIf the pointer is NULL, ExecCopySlot will be called.\n\nFor convenience, I will reproduce it:\nstatic inline TupleTableSlot *\nExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot)\n{\nAssert(!TTS_EMPTY(srcslot));\nAssertArg(srcslot != dstslot);\n\ndstslot->tts_ops->copyslot(dstslot, srcslot);\n\nreturn dstslot;\n}\n\nThe second arg is not empty? Yes.\nThe second arg is different from the first arg (NULL)? Yes.\n\ndstslot->tts_ops->copyslot(dstslot, srcslot); // dereference dstslot (which\nis NULL)\n\n\n>\n> Had this been wrong, surely some of the the other places TupIsNull would\n> be wrong too (and there are hundreds of them).\n>\n> >Maybe, this can be related to a bug reported in the btree deduplication.\n> >\n>\n> Not sure which bug you mean, but this piece of code is pretty unrelated\n> to btree in general, so I don't see any link.\n>\nSorry, can't find the thread.\nThe author of deduplication claimed that he thinks the problem may be in\nIncrementalSort,\nhe did not specify which part.\n\nregards,\nRanier Vilela\n\nEm sex., 9 de out. de 2020 às 14:05, Tomas Vondra <tomas.vondra@2ndquadrant.com> escreveu:On Fri, Oct 09, 2020 at 12:24:16PM -0300, Ranier Vilela wrote:\n>I think that TupIsNull macro is no longer appropriate, to protect\n>ExecCopySlot.\n>\n>See at tuptable.h:\n>#define TupIsNull(slot) \\\n>((slot) == NULL || TTS_EMPTY(slot))\n>\n>If var node->group_pivot is NULL, ExecCopySlot will\n>dereference a null pointer (first arg).\n>\n\nNo. The C standard says there's a \"sequence point\" [1] between the left\nand right arguments of the || operator, and that the expressions are\nevaluated from left to right. So the program will do the first check,\nand if the pointer really is NULL it won't do the second one (because\nthat is not necessary for determining the result). Similarly for the &&\noperator, of course.Really.The trap is not on the second part of expression. 
Is in the first.If the pointer is NULL, \nExecCopySlot will be called.For convenience, I will reproduce it:static inline TupleTableSlot *ExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot){\tAssert(!TTS_EMPTY(srcslot));\tAssertArg(srcslot != dstslot);\tdstslot->tts_ops->copyslot(dstslot, srcslot);\treturn dstslot;}The second arg is not empty? Yes.The second arg is different from the first arg (NULL)? Yes.\ndstslot->tts_ops->copyslot(dstslot, srcslot); // dereference dstslot (which is NULL) \n\nHad this been wrong, surely some of the the other places TupIsNull would\nbe wrong too (and there are hundreds of them).\n\n>Maybe, this can be related to a bug reported in the btree deduplication.\n>\n\nNot sure which bug you mean, but this piece of code is pretty unrelated\nto btree in general, so I don't see any link.Sorry, can't find the thread.The author of deduplication claimed that he thinks the problem may be in IncrementalSort, he did not specify which part.regards,Ranier Vilela", "msg_date": "Fri, 9 Oct 2020 17:25:02 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> The trap is not on the second part of expression. Is in the first.\n> If the pointer is NULL, ExecCopySlot will be called.\n\nYour initial comment indicated that you were worried about\nIncrementalSortState's group_pivot slot, but that is never going\nto be null in any execution function of nodeIncrementalSort.c,\nbecause ExecInitIncrementalSort always creates it.\n\n(The test whether it's NULL in ExecReScanIncrementalSort therefore\nseems rather useless and misleading, but it's not actually a bug.)\n\nThe places that use TupIsNull are just doing so because that's\nthe standard way to check whether a slot is empty. 
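(For illustration only, the pattern can be boiled down to a sketch -- the struct layout and flag value below are simplified stand-ins, not the real tuptable.h definitions:)

```c
/*
 * Illustration only: simplified stand-ins for the real TupleTableSlot
 * and flag definitions in tuptable.h (the actual struct has many more
 * fields and the flag value here is arbitrary).
 */
#include <stddef.h>
#include <stdbool.h>

#define TTS_FLAG_EMPTY (1 << 1)

typedef struct TupleTableSlot
{
	unsigned short tts_flags;
} TupleTableSlot;

#define TTS_EMPTY(slot) (((slot)->tts_flags & TTS_FLAG_EMPTY) != 0)
#define TupIsNull(slot) ((slot) == NULL || TTS_EMPTY(slot))

/*
 * Because || evaluates left to right and short-circuits, TTS_EMPTY()
 * is never reached when the pointer itself is NULL, so the macro is
 * safe to apply to a NULL pointer.
 */
bool
slot_is_null_or_empty(TupleTableSlot *slot)
{
	return TupIsNull(slot);
}
```

Since the || short-circuits, the flag half of the test never dereferences a NULL pointer; the macro as a whole answers "is there no tuple here".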
The null\ntest inside the macro is pointless in this context (and in a lot\nof its other use-cases, too) but we don't worry about that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Oct 2020 16:47:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Em sex., 9 de out. de 2020 às 17:47, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > The trap is not on the second part of expression. Is in the first.\n> > If the pointer is NULL, ExecCopySlot will be called.\n>\n> Your initial comment indicated that you were worried about\n> IncrementalSortState's group_pivot slot, but that is never going\n> to be null in any execution function of nodeIncrementalSort.c,\n> because ExecInitIncrementalSort always creates it.\n>\n> (The test whether it's NULL in ExecReScanIncrementalSort therefore\n> seems rather useless and misleading, but it's not actually a bug.)\n>\n> The places that use TupIsNull are just doing so because that's\n> the standard way to check whether a slot is empty. The null\n> test inside the macro is pointless in this context (and in a lot\n> of its other use-cases, too) but we don't worry about that.\n>\nSo I said that TupIsNull was not the most appropriate.\n\nDoesn't it look better?\n(node->group_pivot != NULL && TTS_EMPTY(node->group_pivot))\n\nregards,\nRanier Vilela\n\nEm sex., 9 de out. de 2020 às 17:47, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> The trap is not on the second part of expression. 
Is in the first.\n> If the pointer is NULL, ExecCopySlot will be called.\n\nYour initial comment indicated that you were worried about\nIncrementalSortState's group_pivot slot, but that is never going\nto be null in any execution function of nodeIncrementalSort.c,\nbecause ExecInitIncrementalSort always creates it.\n\n(The test whether it's NULL in ExecReScanIncrementalSort therefore\nseems rather useless and misleading, but it's not actually a bug.)\n\nThe places that use TupIsNull are just doing so because that's\nthe standard way to check whether a slot is empty.  The null\ntest inside the macro is pointless in this context (and in a lot\nof its other use-cases, too) but we don't worry about that.So I said that TupIsNull was not the most appropriate.Doesn't it look better?(node->group_pivot != NULL && TTS_EMPTY(node->group_pivot))regards,Ranier Vilela", "msg_date": "Fri, 9 Oct 2020 17:50:09 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "On Fri, Oct 9, 2020 at 1:28 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Sorry, can't find the thread.\n> The author of deduplication claimed that he thinks the problem may be in IncrementalSort,\n> he did not specify which part.\n\nNo I didn't.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 9 Oct 2020 13:57:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Em sex., 9 de out. 
de 2020 às 17:58, Peter Geoghegan <pg@bowt.ie> escreveu:\n\n> On Fri, Oct 9, 2020 at 1:28 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > Sorry, can't find the thread.\n> > The author of deduplication claimed that he thinks the problem may be in\n> IncrementalSort,\n> > he did not specify which part.\n>\n> No I didn't.\n>\nhttps://www.postgresql.org/message-id/CAH2-Wz=Ae84z0PXTBc+SSGi9EC8nGKn9D16OP-dgH47Jcrv0Ww@mail.gmail.com\n\" On Tue, Jul 28, 2020 at 8:47 AM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:\n> This is very likely to be related to incremental sort because it's a\"\n\nRanier Vilela\n\nEm sex., 9 de out. de 2020 às 17:58, Peter Geoghegan <pg@bowt.ie> escreveu:On Fri, Oct 9, 2020 at 1:28 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Sorry, can't find the thread.\n> The author of deduplication claimed that he thinks the problem may be in IncrementalSort,\n> he did not specify which part.\n\nNo I didn't.https://www.postgresql.org/message-id/CAH2-Wz=Ae84z0PXTBc+SSGi9EC8nGKn9D16OP-dgH47Jcrv0Ww@mail.gmail.com\"\nOn Tue, Jul 28, 2020 at 8:47 AM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:> This is very likely to be related to incremental sort because it's a\"Ranier Vilela", "msg_date": "Fri, 9 Oct 2020 18:00:39 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Greetings,\n\n* Ranier Vilela (ranier.vf@gmail.com) wrote:\n> Em sex., 9 de out. 
de 2020 às 14:05, Tomas Vondra <\n> tomas.vondra@2ndquadrant.com> escreveu:\n> \n> > On Fri, Oct 09, 2020 at 12:24:16PM -0300, Ranier Vilela wrote:\n> > >I think that TupIsNull macro is no longer appropriate, to protect\n> > >ExecCopySlot.\n> > >\n> > >See at tuptable.h:\n> > >#define TupIsNull(slot) \\\n> > >((slot) == NULL || TTS_EMPTY(slot))\n> > >\n> > >If var node->group_pivot is NULL, ExecCopySlot will\n> > >dereference a null pointer (first arg).\n\n[...]\n\n> The trap is not on the second part of expression. Is in the first.\n> If the pointer is NULL, ExecCopySlot will be called.\n\nYeah, that's interesting, and this isn't the only place there's a risk\nof that happening, in doing a bit of review of TupIsNull() callers:\n\nsrc/backend/executor/nodeGroup.c:\n\n if (TupIsNull(firsttupleslot))\n {\n outerslot = ExecProcNode(outerPlanState(node));\n if (TupIsNull(outerslot))\n {\n /* empty input, so return nothing */\n node->grp_done = true;\n return NULL;\n }\n /* Copy tuple into firsttupleslot */\n ExecCopySlot(firsttupleslot, outerslot);\n\nSeems that 'firsttupleslot' could possibly be a NULL pointer at this\npoint?\n\nsrc/backend/executor/nodeWindowAgg.c:\n\n /* Fetch next row if we didn't already */\n if (TupIsNull(agg_row_slot))\n {\n if (!window_gettupleslot(agg_winobj, winstate->aggregatedupto,\n agg_row_slot))\n break; /* must be end of partition */\n }\n\nIf agg_row_slot ends up being an actual NULL pointer, this looks likely\nto end up resulting in a crash too.\n\n /*\n * If this is the very first partition, we need to fetch the first input\n * row to store in first_part_slot.\n */\n if (TupIsNull(winstate->first_part_slot))\n {\n TupleTableSlot *outerslot = ExecProcNode(outerPlan);\n\n if (!TupIsNull(outerslot))\n ExecCopySlot(winstate->first_part_slot, outerslot);\n else\n {\n /* outer plan is empty, so we have nothing to do */\n winstate->partition_spooled = true;\n winstate->more_partitions = false;\n return;\n }\n }\n\nThis seems like another 
risky case, since we don't check that\nwinstate->first_part_slot is a non-NULL pointer.\n\n if (winstate->frameheadpos == 0 &&\n TupIsNull(winstate->framehead_slot))\n {\n /* fetch first row into framehead_slot, if we didn't already */\n if (!tuplestore_gettupleslot(winstate->buffer, true, true,\n winstate->framehead_slot))\n elog(ERROR, \"unexpected end of tuplestore\");\n }\n\nThere's a few of these 'framehead_slot' cases, and then some with\n'frametail_slot', all a similar pattern to above.\n\n> For convenience, I will reproduce it:\n> static inline TupleTableSlot *\n> ExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot)\n> {\n> Assert(!TTS_EMPTY(srcslot));\n> AssertArg(srcslot != dstslot);\n> \n> dstslot->tts_ops->copyslot(dstslot, srcslot);\n> \n> return dstslot;\n> }\n> \n> The second arg is not empty? Yes.\n> The second arg is different from the first arg (NULL)? Yes.\n> \n> dstslot->tts_ops->copyslot(dstslot, srcslot); // dereference dstslot (which\n> is NULL)\n\nRight, just to try and clarify further, the issue here is with this code:\n\nif (TupIsNull(node->group_pivot))\n ExecCopySlot(node->group_pivot, node->transfer_tuple);\n\nWith TupIsNull defined as:\n\n((slot) == NULL || TTS_EMPTY(slot))\n\nThat means we get:\n\nif ((node->group_pivot) == NULL || TTS_EMPTY(node->group_pivot))\n\tExecCopySlot(node->group_pivot, node->transfer_tuple);\n\nWhich means that if we reach this point with node->group_pivot as NULL,\nthen we'll pass that to ExecCopySlot() and eventually end up\ndereferencing it and crashing.\n\nI haven't tried to run back farther up to see if it's possible that\nthere's other checks which prevent node->group_pivot (and the other\ncases) from actually being a NULL pointer by the time we reach this\ncode, but I agree that it seems to be a bit concerning to have the code\nwritten this way- TupIsNull() allows the pointer *itself* to be NULL and\ncallers of it need to realize that (if nothing else by at least\ncommenting that there's 
other checks in place to make sure that it can't\nend up actually being a NULL pointer when we're passing it to some other\nfunction that'll dereference it).\n\nAs a side-note, this kind of further analysis and looking for other,\nsimilar, cases that might be problematic is really helpful and important\nto do whenever you come across a case like this, and will also lend a\nbit more validation that this is really an issue and something we need\nto look at and not a one-off mistake (which, as much as we'd like to\nthink we never make mistakes, isn't typically the case...).\n\nThanks,\n\nStephen", "msg_date": "Fri, 9 Oct 2020 17:02:46 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> So I said that TupIsNull was not the most appropriate.\n\n[ shrug... ] You're entitled to your opinion, but I see essentially\nno value in running around and trying to figure out which TupIsNull\ncalls actually can see a null pointer and which never will. It'd\nlikely introduce bugs, it would certainly not remove any, and there's\nno reason to believe that any meaningful performance improvement\ncould be gained.\n\n(It's possible that the compiler can remove some of the useless\ntests, so I'm satisfied to leave such micro-optimization to it.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Oct 2020 17:05:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Em sex., 9 de out. de 2020 às 18:02, Stephen Frost <sfrost@snowman.net>\nescreveu:\n\n> Greetings,\n>\n> * Ranier Vilela (ranier.vf@gmail.com) wrote:\n> > Em sex., 9 de out. 
de 2020 às 14:05, Tomas Vondra <\n> > tomas.vondra@2ndquadrant.com> escreveu:\n> >\n> > > On Fri, Oct 09, 2020 at 12:24:16PM -0300, Ranier Vilela wrote:\n> > > >I think that TupIsNull macro is no longer appropriate, to protect\n> > > >ExecCopySlot.\n> > > >\n> > > >See at tuptable.h:\n> > > >#define TupIsNull(slot) \\\n> > > >((slot) == NULL || TTS_EMPTY(slot))\n> > > >\n> > > >If var node->group_pivot is NULL, ExecCopySlot will\n> > > >dereference a null pointer (first arg).\n>\n> [...]\n>\n> > The trap is not on the second part of expression. Is in the first.\n> > If the pointer is NULL, ExecCopySlot will be called.\n>\n> Yeah, that's interesting, and this isn't the only place there's a risk\n> of that happening, in doing a bit of review of TupIsNull() callers:\n>\n> src/backend/executor/nodeGroup.c:\n>\n> if (TupIsNull(firsttupleslot))\n> {\n> outerslot = ExecProcNode(outerPlanState(node));\n> if (TupIsNull(outerslot))\n> {\n> /* empty input, so return nothing */\n> node->grp_done = true;\n> return NULL;\n> }\n> /* Copy tuple into firsttupleslot */\n> ExecCopySlot(firsttupleslot, outerslot);\n>\n> Seems that 'firsttupleslot' could possibly be a NULL pointer at this\n> point?\n>\n> src/backend/executor/nodeWindowAgg.c:\n>\n> /* Fetch next row if we didn't already */\n> if (TupIsNull(agg_row_slot))\n> {\n> if (!window_gettupleslot(agg_winobj, winstate->aggregatedupto,\n> agg_row_slot))\n> break; /* must be end of partition */\n> }\n>\n> If agg_row_slot ends up being an actual NULL pointer, this looks likely\n> to end up resulting in a crash too.\n>\n> /*\n> * If this is the very first partition, we need to fetch the first\n> input\n> * row to store in first_part_slot.\n> */\n> if (TupIsNull(winstate->first_part_slot))\n> {\n> TupleTableSlot *outerslot = ExecProcNode(outerPlan);\n>\n> if (!TupIsNull(outerslot))\n> ExecCopySlot(winstate->first_part_slot, outerslot);\n> else\n> {\n> /* outer plan is empty, so we have nothing to do */\n> winstate->partition_spooled = 
true;\n> winstate->more_partitions = false;\n> return;\n> }\n> }\n>\n> This seems like another risky case, since we don't check that\n> winstate->first_part_slot is a non-NULL pointer.\n>\n> if (winstate->frameheadpos == 0 &&\n> TupIsNull(winstate->framehead_slot))\n> {\n> /* fetch first row into framehead_slot, if we didn't\n> already */\n> if (!tuplestore_gettupleslot(winstate->buffer, true, true,\n> winstate->framehead_slot))\n> elog(ERROR, \"unexpected end of tuplestore\");\n> }\n>\n> There's a few of these 'framehead_slot' cases, and then some with\n> 'frametail_slot', all a similar pattern to above.\n>\n> > For convenience, I will reproduce it:\n> > static inline TupleTableSlot *\n> > ExecCopySlot(TupleTableSlot *dstslot, TupleTableSlot *srcslot)\n> > {\n> > Assert(!TTS_EMPTY(srcslot));\n> > AssertArg(srcslot != dstslot);\n> >\n> > dstslot->tts_ops->copyslot(dstslot, srcslot);\n> >\n> > return dstslot;\n> > }\n> >\n> > The second arg is not empty? Yes.\n> > The second arg is different from the first arg (NULL)? 
Yes.\n> >\n> > dstslot->tts_ops->copyslot(dstslot, srcslot); // dereference dstslot\n> (which\n> > is NULL)\n>\n> Right, just to try and clarify further, the issue here is with this code:\n>\n> if (TupIsNull(node->group_pivot))\n> ExecCopySlot(node->group_pivot, node->transfer_tuple);\n>\n> With TupIsNull defined as:\n>\n> ((slot) == NULL || TTS_EMPTY(slot))\n>\n> That means we get:\n>\n> if ((node->group_pivot) == NULL || TTS_EMPTY(node->group_pivot))\n> ExecCopySlot(node->group_pivot, node->transfer_tuple);\n>\n> Which means that if we reach this point with node->group_pivot as NULL,\n> then we'll pass that to ExecCopySlot() and eventually end up\n> dereferencing it and crashing.\n>\n> I haven't tried to run back farther up to see if it's possible that\n> there's other checks which prevent node->group_pivot (and the other\n> cases) from actually being a NULL pointer by the time we reach this\n> code, but I agree that it seems to be a bit concerning to have the code\n> written this way- TupIsNull() allows the pointer *itself* to be NULL and\n> callers of it need to realize that (if nothing else by at least\n> commenting that there's other checks in place to make sure that it can't\n> end up actually being a NULL pointer when we're passing it to some other\n> function that'll dereference it).\n>\n> As a side-note, this kind of further analysis and looking for other,\n> similar, cases that might be problematic is really helpful and important\n> to do whenever you come across a case like this, and will also lend a\n> bit more validation that this is really an issue and something we need\n> to look at and not a one-off mistake (which, as much as we'd like to\n> think we never make mistakes, isn't typically the case...).\n>\nSeveral places.\nTupIsNull it looks like a minefield...\n\nregards,\nRanier Vilela\n\nEm sex., 9 de out. de 2020 às 18:02, Stephen Frost <sfrost@snowman.net> escreveu:Greetings,\n\n* Ranier Vilela (ranier.vf@gmail.com) wrote:\n> Em sex., 9 de out. 
regards,Ranier Vilela", "msg_date": "Fri, 9 Oct 2020 18:09:40 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Greetings,\n\n* Ranier Vilela (ranier.vf@gmail.com) wrote:\n> Em sex., 9 de out. de 2020 às 18:02, Stephen Frost <sfrost@snowman.net>\n> escreveu:\n> > As a side-note, this kind of further analysis and looking for other,\n> > similar, cases that might be problematic is really helpful and important\n> > to do whenever you come across a case like this, and will also lend a\n> > bit more validation that this is really an issue and something we need\n> > to look at and not a one-off mistake (which, as much as we'd like to\n> > think we never make mistakes, isn't typically the case...).\n> >\n> Several places.\n> TupIsNull it looks like a minefield...\n\nIs it though? Tom already pointed out that the specific case you were\nconcerned about isn't an issue- I'd encourage you to go review the other\ncases that I found and see if you can find any cases where it's actually\ngoing to result in a crash.\n\nIf there is such a case, then perhaps we should consider changing\nthings, but if not, then perhaps there isn't any need to make a change.\nI do wonder if maybe some of those cases don't actually need the\nTupIsNull() check at all as we could prove that it won't be by the time\nwe reach that point, but it depends- and requires more review.\n\nThanks,\n\nStephen", "msg_date": "Fri, 9 Oct 2020 17:16:07 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "On Fri, Oct 9, 2020 at 2:04 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> > The author of deduplication claimed that he thinks the problem may be in IncrementalSort,\n>> > he did not specify which 
part.\n>>\n>> No I didn't.\n>\n> https://www.postgresql.org/message-id/CAH2-Wz=Ae84z0PXTBc+SSGi9EC8nGKn9D16OP-dgH47Jcrv0Ww@mail.gmail.com\n\nThat thread is obviously totally unrelated to what you're talking\nabout. I cannot imagine how you made the connection. The only\ncommonality is the term \"incremental sort\".\n\nMoreover, the point that I make in the thread that you link to is that\nthe bug in question could not possibly be related to the incremental\nsort commit. That was an initial quick guess that I made that turned\nout to be wrong.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 9 Oct 2020 14:17:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "On Fri, Oct 09, 2020 at 05:25:02PM -0300, Ranier Vilela wrote:\n>Em sex., 9 de out. de 2020 �s 14:05, Tomas Vondra <\n>tomas.vondra@2ndquadrant.com> escreveu:\n>\n>> On Fri, Oct 09, 2020 at 12:24:16PM -0300, Ranier Vilela wrote:\n>> >I think that TupIsNull macro is no longer appropriate, to protect\n>> >ExecCopySlot.\n>> >\n>> >See at tuptable.h:\n>> >#define TupIsNull(slot) \\\n>> >((slot) == NULL || TTS_EMPTY(slot))\n>> >\n>> >If var node->group_pivot is NULL, ExecCopySlot will\n>> >dereference a null pointer (first arg).\n>> >\n>>\n>> No. The C standard says there's a \"sequence point\" [1] between the left\n>> and right arguments of the || operator, and that the expressions are\n>> evaluated from left to right. So the program will do the first check,\n>> and if the pointer really is NULL it won't do the second one (because\n>> that is not necessary for determining the result). Similarly for the &&\n>> operator, of course.\n>>\n>Really.\n>The trap is not on the second part of expression. Is in the first.\n>If the pointer is NULL, ExecCopySlot will be called.\n>\n\nAh, OK. Now I see what you meant. 
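Concretely, the hazard under discussion can be sketched like this (simplified stand-in types, not the real executor structs; copy_slot is a hypothetical miniature of ExecCopySlot):

```c
/*
 * Illustration only: copy_slot is a hypothetical miniature of
 * ExecCopySlot, using a simplified stand-in slot type. The point is
 * that the destination pointer is dereferenced unconditionally, so a
 * NULL dstslot would crash here -- the caller must guarantee it.
 */
#include <stddef.h>

typedef struct TupleTableSlot
{
	int			tts_value;		/* stand-in for the slot's real contents */
} TupleTableSlot;

TupleTableSlot *
copy_slot(TupleTableSlot *dstslot, TupleTableSlot *srcslot)
{
	dstslot->tts_value = srcslot->tts_value;	/* NULL dstslot => crash */
	return dstslot;
}
```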
Well, yeah - calling ExecCopySlot with\nNULL would be bad, but as others pointed out most of the call sites\ndon't really have the issue for other reasons.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 10 Oct 2020 00:04:07 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "On Fri, Oct 09, 2020 at 05:50:09PM -0300, Ranier Vilela wrote:\n>Em sex., 9 de out. de 2020 �s 17:47, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>\n>> Ranier Vilela <ranier.vf@gmail.com> writes:\n>> > The trap is not on the second part of expression. Is in the first.\n>> > If the pointer is NULL, ExecCopySlot will be called.\n>>\n>> Your initial comment indicated that you were worried about\n>> IncrementalSortState's group_pivot slot, but that is never going\n>> to be null in any execution function of nodeIncrementalSort.c,\n>> because ExecInitIncrementalSort always creates it.\n>>\n>> (The test whether it's NULL in ExecReScanIncrementalSort therefore\n>> seems rather useless and misleading, but it's not actually a bug.)\n>>\n>> The places that use TupIsNull are just doing so because that's\n>> the standard way to check whether a slot is empty. The null\n>> test inside the macro is pointless in this context (and in a lot\n>> of its other use-cases, too) but we don't worry about that.\n>>\n>So I said that TupIsNull was not the most appropriate.\n>\n>Doesn't it look better?\n>(node->group_pivot != NULL && TTS_EMPTY(node->group_pivot))\n>\n\nMy (admittedly very subjective) opinion is that it looks much worse. The\nTupIsNull is pretty self-descriptive, unlike this proposed code.\n\nThat could be fixed by defining a new macro, perhaps something like\nSlotIsEmpty() or so. 
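A hypothetical SlotIsEmpty() -- no macro of that name exists in the tree, this is just a sketch of the idea -- would drop the pointer test and read only the flag, trusting that the slot was created at node init time:

```c
/*
 * Illustration only: SlotIsEmpty() is hypothetical -- no macro of that
 * name exists in PostgreSQL. It reads just the flag, so unlike
 * TupIsNull it is only valid on slots known to be non-NULL (e.g.
 * slots always created in the node's Init function).
 */
#include <stdbool.h>

#define TTS_FLAG_EMPTY (1 << 1)

typedef struct TupleTableSlot
{
	unsigned short tts_flags;
} TupleTableSlot;

#define TTS_EMPTY(slot) (((slot)->tts_flags & TTS_FLAG_EMPTY) != 0)
#define SlotIsEmpty(slot) TTS_EMPTY(slot)

bool
check_slot_empty(TupleTableSlot *slot)
{
	return SlotIsEmpty(slot);
}
```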
But as was already explained, Incremental Sort\ncan't actually have a NULL slot here, so it makes no difference there.\nAnd in the other places we can't just mechanically replace the macros\nbecause it'd quite likely silently hide pre-existing bugs.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 10 Oct 2020 00:12:59 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> My (admittedly very subjective) opinion is that it looks much worse. The\n> TupIsNull is pretty self-descriptive, unlike this proposed code.\n\n+1\n\n> That could be fixed by defining a new macro, perhaps something like\n> SlotIsEmpty() or so. But as was already explained, Incremental Sort\n> can't actually have a NULL slot here, so it makes no difference there.\n> And in the other places we can't just mechanically replace the macros\n> because it'd quite likely silently hide pre-existing bugs.\n\nIME, there are basically two use-cases for TupIsNull in the executor:\n\n1. Checking whether a lower-level plan node has returned an actual\ntuple or an EOF indicator. In current usage, both parts of the\nTupIsNull test are needed here, because some node types like to\nreturn NULL pointers while others do something like\n\"return ExecClearTuple(myslot)\".\n\n2. Manipulating a locally-managed slot. In just about every case\nof this sort, the slot is created during the node Init function,\nso that the NULL test in TupIsNull is unnecessary and what we are\nreally interested in is the empty-or-not state of the slot.\n\nThus, Ranier's concern would be valid if a node ever did anything\nwith a returned-from-lower-level slot after failing the TupIsNull\ncheck on it. 
But there's really no reason to do so, and furthermore\ndoing so would be a logic bug in itself. (Something like ExecCopySlot\ninto the slot, for example, is flat out wrong, because an upper level\nnode is *never* entitled to scribble on the output slot of a lower-level\nnode.) So I seriously, seriously doubt that there are any live bugs\nof this ilk.\n\nIn principle we could invent SlotIsEmpty() and apply it in use\ncases of type 2, but I don't really think that'd be a productive\nactivity. In return for saving a few cycles we'd have a nontrivial\nrisk of new bugs from using the wrong macro for the case at hand.\n\nI do wonder whether we should try to simplify the inter-node\nAPI by allowing only one of the two cases for EOF indicators.\nNot convinced it's worth troubling over, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Oct 2020 18:45:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Em sex., 9 de out. de 2020 às 19:45, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > My (admittedly very subjective) opinion is that it looks much worse. The\n> > TupIsNull is pretty self-descriptive, unlike this proposed code.\n>\n> +1\n>\n> > That could be fixed by defining a new macro, perhaps something like\n> > SlotIsEmpty() or so. But as was already explained, Incremental Sort\n> > can't actually have a NULL slot here, so it makes no difference there.\n> > And in the other places we can't just mechanically replace the macros\n> > because it'd quite likely silently hide pre-existing bugs.\n>\n> IME, there are basically two use-cases for TupIsNull in the executor:\n>\n> 1. Checking whether a lower-level plan node has returned an actual\n> tuple or an EOF indicator. 
In current usage, both parts of the\n> TupIsNull test are needed here, because some node types like to\n> return NULL pointers while others do something like\n> \"return ExecClearTuple(myslot)\".\n>\n> 2. Manipulating a locally-managed slot. In just about every case\n> of this sort, the slot is created during the node Init function,\n> so that the NULL test in TupIsNull is unnecessary and what we are\n> really interested in is the empty-or-not state of the slot.\n>\n> Thus, Ranier's concern would be valid if a node ever did anything\n> with a returned-from-lower-level slot after failing the TupIsNull\n> check on it. But there's really no reason to do so, and furthermore\n> doing so would be a logic bug in itself. (Something like ExecCopySlot\n> into the slot, for example, is flat out wrong, because an upper level\n> node is *never* entitled to scribble on the output slot of a lower-level\n> node.) So I seriously, seriously doubt that there are any live bugs\n> of this ilk.\n>\n> In principle we could invent SlotIsEmpty() and apply it in use\n> cases of type 2, but I don't really think that'd be a productive\n> activity. In return for saving a few cycles we'd have a nontrivial\n> risk of new bugs from using the wrong macro for the case at hand.\n>\n> I do wonder whether we should try to simplify the inter-node\n> API by allowing only one of the two cases for EOF indicators.\n> Not convinced it's worth troubling over, though.\n>\nThe problem is not only in nodeIncrementalSort.c, but in several others\ntoo, where people are using TupIsNull with ExecCopySlot.\nI would call this a design flaw.\nIf (TupIsNull)\n ExecCopySlot\n\nThe callers, think they are using TupIsNotNullAndEmpty.\nIf (TupIsNotNullAndEmpty)\n ExecCopySlot\n\nregards,\nRanier Vilela", "msg_date": "Fri, 9 Oct 2020 22:37:01 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "On Fri, Oct 9, 2020 at 6:41 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> The problem is not only in nodeIncrementalSort.c, but in several others\n> too, where people are using TupIsNull with ExecCopySlot.\n> I would call this a design flaw.\n> If (TupIsNull)\n> ExecCopySlot\n>\n> The callers, think they are using TupIsNotNullAndEmpty.\n> If (TupIsNotNullAndEmpty)\n> ExecCopySlot\n>\n\nIMO both names are problematic, too data value centric, not semantic.\nTupIsValid for the name and negating the existing tests would help to at\nleast clear that part up. Then, things operating on invalid tuples would\nbe expected to know about both representations. In the case of\nExecCopySlot there is nothing it can do with a null representation of an\ninvalid tuple so it would have to fail if presented one.
An assertion\nseems sufficient.\n\nDavid J.", "msg_date": "Fri, 9 Oct 2020 20:10:49 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Em sáb., 10 de out. de 2020 às 00:11, David G. Johnston <\ndavid.g.johnston@gmail.com> escreveu:\n\n> On Fri, Oct 9, 2020 at 6:41 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n>> The problem is not only in nodeIncrementalSort.c, but in several others\n>> too, where people are using TupIsNull with ExecCopySlot.\n>> I would call this a design flaw.\n>> If (TupIsNull)\n>> ExecCopySlot\n>>\n>> The callers, think they are using TupIsNotNullAndEmpty.\n>> If (TupIsNotNullAndEmpty)\n>> ExecCopySlot\n>>\n>\n> IMO both names are problematic, too data value centric, not semantic.\n> TupIsValid for the name and negating the existing tests would help to at\n> least clear that part up. Then, things operating on invalid tuples would\n> be expected to know about both representations.
In the case of\n> ExecCopySlot there is nothing it can do with a null representation of an\n> invalid tuple so it would have to fail if presented one. An assertion\n> seems sufficient.\n>\nIHMO, assertion it is not the solution.\n\nSteven suggested looking for some NULL pointer font above the calls.\nI say that it is not necessary, there is no NULL pointer.\nWhoever guarantees this is the combination, which for me is an assertion.\n\nIf (TupIsNull)\n ExecCopySlot\n\nIt works as a subject, but in release mode.\nIt is the equivalent of:\n\nIf (TupIsNull)\n Abort\n\nThe only problem for me is that we are running this assertion on the\nclients' machines.\n", "msg_date": "Sun, 11 Oct 2020 07:39:07 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "On Sun, Oct 11, 2020 at 3:31 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em sáb., 10 de out. de 2020 às 00:11, David G. Johnston <\n> david.g.johnston@gmail.com> escreveu:\n>\n>> On Fri, Oct 9, 2020 at 6:41 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>>\n>>> The problem is not only in nodeIncrementalSort.c, but in several others\n>>> too, where people are using TupIsNull with ExecCopySlot.\n>>> I would call this a design flaw.\n>>> If (TupIsNull)\n>>> ExecCopySlot\n>>>\n>>> The callers, think they are using TupIsNotNullAndEmpty.\n>>> If (TupIsNotNullAndEmpty)\n>>> ExecCopySlot\n>>>\n>>\n>> IMO both names are problematic, too data value centric, not semantic.\n>> TupIsValid for the name and negating the existing tests would help to at\n>> least clear that part up. Then, things operating on invalid tuples would\n>> be expected to know about both representations. In the case of\n>> ExecCopySlot there is nothing it can do with a null representation of an\n>> invalid tuple so it would have to fail if presented one.
An assertion\n>> seems sufficient.\n>>\n> IHMO, assertion it is not the solution.\n>\n> Steven suggested looking for some NULL pointer font above the calls.\n> I say that it is not necessary, there is no NULL pointer.\n> Whoever guarantees this is the combination, which for me is an assertion.\n>\n> If (TupIsNull)\n> ExecCopySlot\n>\n> It works as a subject, but in release mode.\n> It is the equivalent of:\n>\n> If (TupIsNull)\n> Abort\n>\n> The only problem for me is that we are running this assertion on the\n> clients' machines.\n>\n>\nI cannot make heads nor tails of what you are trying to communicate here.\n\nI'll agree that TupIsNull isn't the most descriptive choice of name, and is\nprobably being abused throughout the code base, but the overall intent and\nexisting flow seems fine. My only goal would be to make it a bit easier\nfor unfamiliar coders to pick up on the coding pattern and assumptions and\nmake coding errors there more obvious. Renaming and/or an assertion fits\nthat goal. Breaking the current abstraction level doesn't seem desirable.\n\nDavid J.", "msg_date": "Sun, 11 Oct 2020 10:52:55 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "Em dom., 11 de out. de 2020 às 14:53, David G. Johnston <\ndavid.g.johnston@gmail.com> escreveu:\n\n> On Sun, Oct 11, 2020 at 3:31 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n>> Em sáb., 10 de out. de 2020 às 00:11, David G. 
Johnston <\n>> david.g.johnston@gmail.com> escreveu:\n>>\n>>> On Fri, Oct 9, 2020 at 6:41 PM Ranier Vilela <ranier.vf@gmail.com>\n>>> wrote:\n>>>\n>>>> The problem is not only in nodeIncrementalSort.c, but in several others\n>>>> too, where people are using TupIsNull with ExecCopySlot.\n>>>> I would call this a design flaw.\n>>>> If (TupIsNull)\n>>>> ExecCopySlot,\n>>>>\n>>>> The callers, think they are using TupIsNotNullAndEmpty.\n>>>> If (TupIsNotNullAndEmpty)\n>>>> ExecCopySlot\n>>>>\n>>>\n>>> IMO both names are problematic, too data value centric, not semantic.\n>>> TupIsValid for the name and negating the existing tests would help to at\n>>> least clear that part up. Then, things operating on invalid tuples would\n>>> be expected to know about both representations. In the case of\n>>> ExecCopySlot there is nothing it can do with a null representation of an\n>>> invalid tuple so it would have to fail if presented one. An assertion\n>>> seems sufficient.\n>>>\n>> IHMO, assertion it is not the solution.\n>>\n>> Steven suggested looking for some NULL pointer font above the calls.\n>> I say that it is not necessary, there is no NULL pointer.\n>> Whoever guarantees this is the combination, which for me is an assertion.\n>>\n>> If (TupIsNull)\n>> ExecCopySlot\n>>\n>> It works as a subject, but in release mode.\n>> It is the equivalent of:\n>>\n>> If (TupIsNull)\n>> Abort\n>>\n>> The only problem for me is that we are running this assertion on the\n>> clients' machines.\n>>\n>>\n> I cannot make heads nor tails of what you are trying to communicate here.\n>\nOk. I will try to explain.\n\n1. TupIsNull in fact it should be called: TupIsNullOrEmpty\n2. Only Rename TupIsNull to example TupIsNullOrEmpty, improves, but it is\nnot the complete solution.\n3. 
The combination:\n if (TupIsNull(node->group_pivot))\n ExecCopySlot(node->group_pivot, node->transfer_tuple);\nfor me it acts partly as if it were an assertion, but at runtime.\nIf node->group_pivot is NULL, ExecCopySlot crashes, like an assertion.\n4. As it has been running for a while, without any complaints, probably the\ncallers have already guaranteed that node-> group_pivot is not NULL\n5. We can remove the first part of the macro and rename: TupIsNull to\nSlotEmpty\n6. With SlotEmpty macro, each TupIsNull needs to be carefully changed.\nif (SlotEmpty(node->group_pivot))\n ExecCopySlot(node->group_pivot, node->transfer_tuple);\n\n\n> I'll agree that TupIsNull isn't the most descriptive choice of name, and\n> is probably being abused throughout the code base, but the overall intent\n> and existing flow seems fine. My only goal would be to make it a bit\n> easier for unfamiliar coders to pick up on the coding pattern and\n> assumptions and make coding errors there more obvious. Renaming and/or an\n> assertion fits that goal. Breaking the current abstraction level doesn't\n> seem desirable.\n>\nIf only rename TupIsNull to TupIsNullOrEmpty:\n\n1. Why continue testing a pointer against NULL and call ExecCopySlot and\ncrash at runtime.\n2. Most likely, the pointer is not NULL, since it has already been well\ntested.\n3. The only thing that can be done, after TupIsNullOrEmpty, is return or\nfail, anything else needs to be tested again.\n\nI think that current abstraction is broken.\n\nregards,\nRanier Vilela", "msg_date": "Sun, 11 Oct 2020 22:34:40 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" }, { "msg_contents": "On Sun, Oct 11, 2020 at 6:27 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em dom., 11 de out. de 2020 às 14:53, David G. Johnston <\n> david.g.johnston@gmail.com> escreveu:\n>\n>> On Sun, Oct 11, 2020 at 3:31 AM Ranier Vilela <ranier.vf@gmail.com>\n>> wrote:\n>>\n>>> Em sáb., 10 de out. de 2020 às 00:11, David G. 
Johnston <\n>>> david.g.johnston@gmail.com> escreveu:\n>>>\n>>>> On Fri, Oct 9, 2020 at 6:41 PM Ranier Vilela <ranier.vf@gmail.com>\n>>>> wrote:\n>>>>\n>>>>> The problem is not only in nodeIncrementalSort.c, but in several\n>>>>> others too, where people are using TupIsNull with ExecCopySlot.\n>>>>> I would call this a design flaw.\n>>>>> If (TupIsNull)\n>>>>> ExecCopySlot,\n>>>>>\n>>>>> The callers, think they are using TupIsNotNullAndEmpty.\n>>>>> If (TupIsNotNullAndEmpty)\n>>>>> ExecCopySlot\n>>>>>\n>>>>\n>>>> IMO both names are problematic, too data value centric, not semantic.\n>>>> TupIsValid for the name and negating the existing tests would help to at\n>>>> least clear that part up. Then, things operating on invalid tuples would\n>>>> be expected to know about both representations. In the case of\n>>>> ExecCopySlot there is nothing it can do with a null representation of an\n>>>> invalid tuple so it would have to fail if presented one. An assertion\n>>>> seems sufficient.\n>>>>\n>>> IHMO, assertion it is not the solution.\n>>>\n>>> Steven suggested looking for some NULL pointer font above the calls.\n>>> I say that it is not necessary, there is no NULL pointer.\n>>> Whoever guarantees this is the combination, which for me is an assertion.\n>>>\n>>> If (TupIsNull)\n>>> ExecCopySlot\n>>>\n>>> It works as a subject, but in release mode.\n>>> It is the equivalent of:\n>>>\n>>> If (TupIsNull)\n>>> Abort\n>>>\n>>> The only problem for me is that we are running this assertion on the\n>>> clients' machines.\n>>>\n>>>\n>> I cannot make heads nor tails of what you are trying to communicate here.\n>>\n> Ok. I will try to explain.\n>\n> 1. TupIsNull in fact it should be called: TupIsNullOrEmpty\n> 2. Only Rename TupIsNull to example TupIsNullOrEmpty, improves, but it is\n> not the complete solution.\n> 3. 
The combination:\n> if (TupIsNull(node->group_pivot))\n> ExecCopySlot(node->group_pivot, node->transfer_tuple);\n> for me it acts partly as if it were an assertion, but at runtime.\n> If node->group_pivot is NULL, ExecCopySlot crashes, like an assertion.\n>\n\nOk, but for me it's not an assertion, it's a higher-level check that the\nvariable that is expected to hold data on subsequent loops is, at the\nbeginning of the loop, uninitialized. TupIsUninitialized comes to mind as\nbetter reflecting that fact.\n\n4. As it has been running for a while, without any complaints, probably the\n> callers have already guaranteed that node-> group_pivot is not NULL\n>\n5. We can remove the first part of the macro and rename: TupIsNull to\n> SlotEmpty\n> 6. With SlotEmpty macro, each TupIsNull needs to be carefully changed.\n> if (SlotEmpty(node->group_pivot))\n> ExecCopySlot(node->group_pivot, node->transfer_tuple);\n>\n\nI don't have a problem with introducing a SlotEmpty macro, and agree that\nwhen it is followed by \"ExecCopySlot\" it is an meaningful improvement (the\nblurring of the lines between a slot and its pointed-to-tuple bothers me as\nI get my first exposure this to code).\n\n\n>\n>\n>> I'll agree that TupIsNull isn't the most descriptive choice of name, and\n>> is probably being abused throughout the code base, but the overall intent\n>> and existing flow seems fine. My only goal would be to make it a bit\n>> easier for unfamiliar coders to pick up on the coding pattern and\n>> assumptions and make coding errors there more obvious. Renaming and/or an\n>> assertion fits that goal. Breaking the current abstraction level doesn't\n>> seem desirable.\n>>\n>\n\n> If only rename TupIsNull to TupIsNullOrEmpty:\n>\n> 1. Why continue testing a pointer against NULL and call ExecCopySlot and\n> crash at runtime.\n> 2. Most likely, the pointer is not NULL, since it has already been well\n> tested.\n> 3. 
The only thing that can be done, after TupIsNullOrEmpty, is return or\n> fail, anything else needs to be tested again.\n>\n> I think that current abstraction is broken.\n>\n\nI'm willing to agree that the abstraction is broken even if the end result\nof its use, in the existing codebase, hasn't resulted in any known bugs\n(again, the null pointer dereferencing seems like it should be picked up\nduring routine testing). That said, there are multiple solutions here that\nwould improve matters in varying degrees each having a proportional effort\nand risk profile in writing a patch (including the status-quo option).\n\nFor me, while I see room for improvement here, my total lack of actually\nwriting code using these interfaces means I defer to Tom Lane's final two\nconclusions in his last email regarding how productive this line of work\nwould be. I also read that to mean if there was a complete and thorough\npatch submitted it would be given a fair look. I would hope so since there\nis a meaningful decision to make with regards to making changes purely to\nbenefit future inexperienced coders. But it seems like worthy material for\nan inexperienced coder to compile and present and having the experienced\ncoders evaluate and critique, as Stephen Frost's post seemed to allude to.\n\nDavid J.", "msg_date": "Sun, 11 Oct 2020 19:44:00 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing null pointer\n (src/backend/executor/nodeIncrementalSort.c)" } ]
[ { "msg_contents": "-hackers,\n\nEnclosed find a patch to add a “truncate” option to subscription commands.\n\nWhen adding new tables to a subscription (either via `CREATE SUBSCRIPTION` or `REFRESH PUBLICATION`), tables on the target which are being newly subscribed will be truncated before the data copy step. This saves explicit coordination of a manual `TRUNCATE` on the target tables and allows the results of the initial data sync to be the same as on the publisher at the time of sync.\n\nTo preserve compatibility with existing behavior, the default value for this parameter is `false`.\n\nBest,\n\nDavid\n\n\n\n\n\n--\nDavid Christensen\nSenior Software and Database Engineer\nEnd Point Corporation\ndavid@endpoint.com\n785-727-1171", "msg_date": "Fri, 9 Oct 2020 13:54:01 -0500", "msg_from": "David Christensen <david@endpoint.com>", "msg_from_op": true, "msg_subject": "[PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "On Sat, Oct 10, 2020 at 12:24 AM David Christensen <david@endpoint.com> wrote:\n>\n> -hackers,\n>\n> Enclosed find a patch to add a “truncate” option to subscription commands.\n>\n> When adding new tables to a subscription (either via `CREATE SUBSCRIPTION` or `REFRESH PUBLICATION`), tables on the target which are being newly subscribed will be truncated before the data copy step. This saves explicit coordination of a manual `TRUNCATE` on the target tables and allows the results of the initial data sync to be the same as on the publisher at the time of sync.\n>\n\nSo IIUC, this will either truncate all the tables for a particular\nsubscription or none? Is it possible that the user wants some of\nthose tables to be truncated which made me think what exactly made you\npropose this feature? 
Basically, is it from user complaint, or is it\nsome optimization that you think will be helpful to users?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 10 Oct 2020 10:44:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "\n\n> On Oct 10, 2020, at 12:14 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Sat, Oct 10, 2020 at 12:24 AM David Christensen <david@endpoint.com> wrote:\n>> \n>> -hackers,\n>> \n>> Enclosed find a patch to add a “truncate” option to subscription commands.\n>> \n>> When adding new tables to a subscription (either via `CREATE SUBSCRIPTION` or `REFRESH PUBLICATION`), tables on the target which are being newly subscribed will be truncated before the data copy step. This saves explicit coordination of a manual `TRUNCATE` on the target tables and allows the results of the initial data sync to be the same as on the publisher at the time of sync.\n>> \n> \n> So IIUC, this will either truncate all the tables for a particular\n> subscription or none? \n\nCorrect, when creating or altering the subscription all newly added tables would be left alone (current behavior) or truncated (new functionality from the patch).\n\n> Is it possible that the user wants some of\n> those tables to be truncated which made me think what exactly made you\n> propose this feature? Basically, is it from user complaint, or is it\n> some optimization that you think will be helpful to users?\n\nThis comes from my own experience with setting up/modifying subscriptions with adding many multiple additional tables, some of which had data in the subscribing node. I would have found this feature very helpful. 
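To make the proposed workflow concrete, here is a sketch of how the option described above might be used. The `WITH (truncate = ...)` spelling and the object names (`mysub`, `mypub`, the connection string) are illustrative assumptions based on the patch summary, not taken from the patch itself:

```sql
-- Sketch only: assumes the new option is exposed like other
-- subscription options, with truncate = false as the compatible default.
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=app'
    PUBLICATION mypub
    WITH (truncate = true);

-- When tables are later added to the publication, the newly subscribed
-- local tables would be truncated before their initial data copy:
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION WITH (truncate = true);
```

With `truncate = true` the subscriber tables newly added by either command would match the publisher at sync time without a manually coordinated `TRUNCATE`.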
\n\nThanks,\n\nDavid\n\n", "msg_date": "Sat, 10 Oct 2020 07:16:47 -0500", "msg_from": "David Christensen <david@endpoint.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "On Fri, 9 Oct 2020 at 15:54, David Christensen <david@endpoint.com> wrote:\n\n>\n> Enclosed find a patch to add a “truncate” option to subscription commands.\n>\n> When adding new tables to a subscription (either via `CREATE SUBSCRIPTION`\n> or `REFRESH PUBLICATION`), tables on the target which are being newly\n> subscribed will be truncated before the data copy step. This saves\n> explicit coordination of a manual `TRUNCATE` on the target tables and\n> allows the results of the initial data sync to be the same as on the\n> publisher at the time of sync.\n>\n> To preserve compatibility with existing behavior, the default value for\n> this parameter is `false`.\n>\n>\nTruncate will fail for tables whose foreign keys refer to it. If such a\nfeature cannot handle foreign keys, the usefulness will be restricted.\n\nRegards,\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
", "msg_date": "Sun, 11 Oct 2020 15:13:54 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "> On Oct 11, 2020, at 1:14 PM, Euler Taveira <euler.taveira@2ndquadrant.com> wrote:\n> \n> \n>> On Fri, 9 Oct 2020 at 15:54, David Christensen <david@endpoint.com> wrote:\n> \n>> \n>> Enclosed find a patch to add a “truncate” option to subscription commands.\n>> \n>> When adding new tables to a subscription (either via `CREATE SUBSCRIPTION` or `REFRESH PUBLICATION`), tables on the target which are being newly subscribed will be truncated before the data copy step. This saves explicit coordination of a manual `TRUNCATE` on the target tables and allows the results of the initial data sync to be the same as on the publisher at the time of sync.\n>> \n>> To preserve compatibility with existing behavior, the default value for this parameter is `false`.\n>> \n> \n> Truncate will fail for tables whose foreign keys refer to it. If such a feature cannot handle foreign keys, the usefulness will be restricted.\n\nThis is true for existing “truncate” with FKs, so doesn’t seem to be any different to me.\n\nHypothetically if you checked all new tables and could verify if there were FK cycles only already in the new tables being added then “truncate cascade” would be fine. Arguably if they had existing tables that were part of an FK that wasn’t fully replicated they were already operating brokenly.\n\nBut you would definitely want to avoid “truncate cascade” if the FK target tables were already in the publication, unless we were willing to re-sync the other tables that would be truncated. 
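For reference, the foreign-key behavior being discussed here is the stock `TRUNCATE` semantics; with hypothetical tables:

```sql
CREATE TABLE customers (id int PRIMARY KEY);
CREATE TABLE orders (id int PRIMARY KEY,
                     customer_id int REFERENCES customers (id));

TRUNCATE customers;           -- fails: cannot truncate a table
                              -- referenced in a foreign key constraint
TRUNCATE customers, orders;   -- ok: the referencing table is truncated
                              -- in the same command
TRUNCATE customers CASCADE;   -- ok, but silently truncates orders too
```

This is why a per-table operation that truncates only its own table hits the RESTRICT error unless all referencing tables are handled in the same command, while CASCADE carries the risks noted in the discussion above.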
\n\nDavid
", "msg_date": "Sun, 11 Oct 2020 17:13:55 -0500", "msg_from": "David Christensen <david@endpoint.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "On Mon, Oct 12, 2020 at 3:44 AM David Christensen <david@endpoint.com> wrote:\n>\n>\n> On Oct 11, 2020, at 1:14 PM, Euler Taveira <euler.taveira@2ndquadrant.com> wrote:\n>\n> \n> On Fri, 9 Oct 2020 at 15:54, David Christensen <david@endpoint.com> wrote:\n>>\n>>\n>> Enclosed find a patch to add a “truncate” option to subscription commands.\n>>\n>> When adding new tables to a subscription (either via `CREATE SUBSCRIPTION` or `REFRESH PUBLICATION`), tables on the target which are being newly subscribed will be truncated before the data copy step. This saves explicit coordination of a manual `TRUNCATE` on the target tables and allows the results of the initial data sync to be the same as on the publisher at the time of sync.\n>>\n>> To preserve compatibility with existing behavior, the default value for this parameter is `false`.\n>>\n>\n> Truncate will fail for tables whose foreign keys refer to it. If such a feature cannot handle foreign keys, the usefulness will be restricted.\n>\n>\n> This is true for existing “truncate” with FKs, so doesn’t seem to be any different to me.\n>\n\nWhat would happen if there are multiple tables and truncate on only\none of them failed due to FK check? Does it give an error in such a\ncase, if so will the other tables be truncated?\n\n> Hypothetically if you checked all new tables and could verify if there were FK cycles only already in the new tables being added then “truncate cascade” would be fine. Arguably if they had existing tables that were part of an FK that wasn’t fully replicated they were already operating brokenly.\n>\n\nI think if both PK_table and FK_table are part of the same\nsubscription then there should be a problem as both them first get\ntruncated? 
If they are part of a different subscription (or if they\nare not subscribed due to whatever reason) then probably we need to\ndeal such cases carefully.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 12 Oct 2020 08:30:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "> On Oct 11, 2020, at 10:00 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Mon, Oct 12, 2020 at 3:44 AM David Christensen <david@endpoint.com> wrote:\n>> \n>> \n>> On Oct 11, 2020, at 1:14 PM, Euler Taveira <euler.taveira@2ndquadrant.com> wrote:\n>> \n>> \n>> On Fri, 9 Oct 2020 at 15:54, David Christensen <david@endpoint.com> wrote:\n>>> \n>>> \n>>> Enclosed find a patch to add a “truncate” option to subscription commands.\n>>> \n>>> When adding new tables to a subscription (either via `CREATE SUBSCRIPTION` or `REFRESH PUBLICATION`), tables on the target which are being newly subscribed will be truncated before the data copy step. This saves explicit coordination of a manual `TRUNCATE` on the target tables and allows the results of the initial data sync to be the same as on the publisher at the time of sync.\n>>> \n>>> To preserve compatibility with existing behavior, the default value for this parameter is `false`.\n>>> \n>> \n>> Truncate will fail for tables whose foreign keys refer to it. If such a feature cannot handle foreign keys, the usefulness will be restricted.\n>> \n>> \n>> This is true for existing “truncate” with FKs, so doesn’t seem to be any different to me.\n>> \n> \n> What would happen if there are multiple tables and truncate on only\n> one of them failed due to FK check? 
Does it give an error in such a\n> case, if so will the other tables be truncated?\n\nCurrently each SyncRep relation is sync’d separately in its own worker process; we are doing the truncate at the initialization step of this, so it’s inherently in its own transaction. I think if we are going to do any sort of validation on this, it would have to be at the point of the CREATE SUBSCRIPTION/REFRESH PUBLICATION where we have the relation list and can do sanity-checking there.\n\nObviously if someone changes the schema at some point between when it does this and when relation syncs start there is a race condition, but the same issue would affect other data sync things, so I don’t care to solve that as part of this patch.\n\n>> Hypothetically if you checked all new tables and could verify if there were FK cycles only already in the new tables being added then “truncate cascade” would be fine. Arguably if they had existing tables that were part of an FK that wasn’t fully replicated they were already operating brokenly.\n>> \n> \n> I think if both PK_table and FK_table are part of the same\n> subscription then there should be a problem as both them first get\n> truncated? If they are part of a different subscription (or if they\n> are not subscribed due to whatever reason) then probably we need to\n> deal such cases carefully.\n\nYou mean “should not be a problem” here? If so, I agree with that. 
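One possible shape for the sanity check mentioned above, run at `CREATE SUBSCRIPTION`/`REFRESH PUBLICATION` time, would be to look for foreign keys that reference a to-be-truncated table from outside the subscribed set. A rough sketch, where `subscribed_tables(relid)` is a hypothetical stand-in for however the relation list is materialized:

```sql
-- Flag tables whose truncation would be blocked (or would cascade)
-- because a table outside the subscription references them.
SELECT c.conrelid::regclass  AS referencing_table,
       c.confrelid::regclass AS referenced_table
FROM pg_constraint c
WHERE c.contype = 'f'
  AND c.confrelid IN (SELECT relid FROM subscribed_tables)
  AND c.conrelid NOT IN (SELECT relid FROM subscribed_tables);
```

Any rows returned would identify exactly the FK issues on which we could error out or emit a hint before starting the sync workers.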
Obviously if we determine this feature is only useful with this support we’d have to chase the entire dependency graph and make sure that is all contained in the set of newly-subscribed tables (or at least FK referents).\n\nI have not considered tables that are part of more than one subscription (is that possible?); we presumably should error out if any table exists already in a separate subscription, as we’d want to avoid truncating tables already part of an existing subscription.\n\nWhile I’m happy to take a stab at fixing some of the FK/PK issues, it seems easy to go down a rabbit hole. I’m not convinced that we couldn’t just detect FK issues and choose to not handle this case without decreasing the utility for at least some cases. (Perhaps we could give a hint as to the issues detected to point someone in the right direction.) Anyway, glad to keep discussing potential implications, etc.\n\nBest,\n\nDavid\n--\nDavid Christensen\nSenior Software and Database Engineer\nEnd Point Corporation\ndavid@endpoint.com\n785-727-1171", "msg_date": "Mon, 12 Oct 2020 13:01:35 -0500", "msg_from": "David Christensen <david@endpoint.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "Hi David,\n\nThe feature seems useful to me. The code will need to be refactored due to\nchanges in commit b05fe7b442\n\nPlease see the following comments.\n1. Is there a specific reason behind having new relstate for truncate?\nThe current state flow is\nINIT->DATASYNC->SYNCWAIT->CATCHUP->SYNCDONE->READY.\nCan we accommodate the truncate in either INIT or DATASYNC?\n\n2. 
+ StartTransactionCommand();\n + rel =\ntable_open(MyLogicalRepWorker->relid, RowExclusiveLock);\n +\n + rels = lappend(rels, rel);\n + relids = lappend_oid(relids,\nMyLogicalRepWorker->relid);\n +\n + ExecuteTruncateGuts(rels, relids,\nNIL, DROP_RESTRICT, false);\n + CommitTransactionCommand();\n\nTruncate is being performed in a separate transaction from the data copy; I think\nthat leaves a window\nopen for concurrent transactions to modify the data after truncate and\nbefore copy.\n\n3. Regarding the truncate of the referenced table, I think one approach can\nbe to perform the following:\ni. lock the referencing and referenced tables against writes\nii. drop the foreign key constraints,\niii. truncate\niv. sync\nv. recreate the constraints\nvi. release lock.\nHowever, I am not sure of the implications of locking these tables on the\nmain apply process.\n\n\nThank you,\n\n\nOn Mon, Oct 12, 2020 at 11:31 PM David Christensen <david@endpoint.com>\nwrote:\n\n> > On Oct 11, 2020, at 10:00 PM, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Mon, Oct 12, 2020 at 3:44 AM David Christensen <david@endpoint.com>\n> wrote:\n> >>\n> >>\n> >> On Oct 11, 2020, at 1:14 PM, Euler Taveira <\n> euler.taveira@2ndquadrant.com> wrote:\n> >>\n> >> \n> >> On Fri, 9 Oct 2020 at 15:54, David Christensen <david@endpoint.com>\n> wrote:\n> >>>\n> >>>\n> >>> Enclosed find a patch to add a “truncate” option to subscription\n> commands.\n> >>>\n> >>> When adding new tables to a subscription (either via `CREATE\n> SUBSCRIPTION` or `REFRESH PUBLICATION`), tables on the target which are\n> being newly subscribed will be truncated before the data copy step. 
This\n> saves explicit coordination of a manual `TRUNCATE` on the target tables and\n> allows the results of the initial data sync to be the same as on the\n> publisher at the time of sync.\n> >>>\n> >>> To preserve compatibility with existing behavior, the default value\n> for this parameter is `false`.\n> >>>\n> >>\n> >> Truncate will fail for tables whose foreign keys refer to it. If such a\n> feature cannot handle foreign keys, the usefulness will be restricted.\n> >>\n> >>\n> >> This is true for existing “truncate” with FKs, so doesn’t seem to be\n> any different to me.\n> >>\n> >\n> > What would happen if there are multiple tables and truncate on only\n> > one of them failed due to FK check? Does it give an error in such a\n> > case, if so will the other tables be truncated?\n>\n> Currently each SyncRep relation is sync’d separately in its own worker\n> process; we are doing the truncate at the initialization step of this, so\n> it’s inherently in its own transaction. I think if we are going to do any\n> sort of validation on this, it would have to be at the point of the CREATE\n> SUBSCRIPTION/REFRESH PUBLICATION where we have the relation list and can do\n> sanity-checking there.\n>\n> Obviously if someone changes the schema at some point between when it does\n> this and when relation syncs start there is a race condition, but the same\n> issue would affect other data sync things, so I don’t care to solve that as\n> part of this patch.\n>\n> >> Hypothetically if you checked all new tables and could verify if there\n> were FK cycles only already in the new tables being added then “truncate\n> cascade” would be fine. Arguably if they had existing tables that were part\n> of an FK that wasn’t fully replicated they were already operating brokenly.\n> >>\n> >\n> > I think if both PK_table and FK_table are part of the same\n> > subscription then there should be a problem as both them first get\n> > truncated? 
If they are part of a different subscription (or if they\n> > are not subscribed due to whatever reason) then probably we need to\n> > deal such cases carefully.\n>\n> You mean “should not be a problem” here? If so, I agree with that.\n> Obviously if we determine this features is only useful with this support\n> we’d have to chase the entire dependency graph and make sure that is all\n> contained in the set of newly-subscribed tables (or at least FK referents).\n>\n> I have not considered tables that are part of more than one subscription\n> (is that possible?); we presumably should error out if any table exists\n> already in a separate subscription, as we’d want to avoid truncating tables\n> already part of an existing subscription.\n>\n> While I’m happy to take a stab at fixing some of the FK/PK issues, it\n> seems easy to go down a rabbit hole. I’m not convinced that we couldn’t\n> just detect FK issues and choose to not handle this case without decreasing\n> the utility for at least some cases. (Perhaps we could give a hint as to\n> the issues detected to point someone in the right direction.) Anyway, glad\n> to keep discussing potential implications, etc.\n>\n> Best,\n>\n> David\n> --\n> David Christensen\n> Senior Software and Database Engineer\n> End Point Corporation\n> david@endpoint.com\n> 785-727-1171\n>\n>\n>\n>\n>
", "msg_date": "Wed, 28 Oct 2020 23:36:19 +0530", "msg_from": "Rahila Syed <rahilasyed90@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "Hi,\n\nAt this time I do not have time to make the necessary changes for this\ncommitfest so I am voluntarily withdrawing this patch, but will\nrevisit at a future time.\n\nBest,\n\nDavid\n\nOn Wed, Oct 28, 2020 at 1:06 PM Rahila Syed <rahilasyed90@gmail.com> wrote:\n>\n> Hi David,\n>\n> The feature seems useful to me. The code will need to be refactored due to changes in commit : b05fe7b442\n>\n> Please see the following comments.\n> 1. 
Is there a specific reason behind having new relstate for truncate?\n> The current state flow is INIT->DATATSYNC->SYNCWAIT->CATCHUP->SYNCDONE->READY.\n> Can we accommodate the truncate in either INIT or DATASYNC?\n>\n> 2. + StartTransactionCommand();\n> + rel = table_open(MyLogicalRepWorker->relid, RowExclusiveLock);\n> +\n> + rels = lappend(rels, rel);\n> + relids = lappend_oid(relids, MyLogicalRepWorker->relid);\n> +\n> + ExecuteTruncateGuts(rels, relids, NIL, DROP_RESTRICT, false);\n> + CommitTransactionCommand();\n>\n> Truncate is being performed in a separate transaction as data copy, I think that leaves a window\n> open for concurrent transactions to modify the data after truncate and before copy.\n>\n> 3. Regarding the truncate of the referenced table, I think one approach can be to perform the following:\n> i. lock the referencing and referenced tables against writes\n> ii. drop the foriegn key constraints,\n> iii.truncate\n> iv. sync\n> v. recreate the constraints\n> vi. release lock.\n> However, I am not sure of the implications of locking these tables on the main apply process.\n>\n>\n> Thank you,\n>\n>\n> On Mon, Oct 12, 2020 at 11:31 PM David Christensen <david@endpoint.com> wrote:\n>>\n>> > On Oct 11, 2020, at 10:00 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>> > On Mon, Oct 12, 2020 at 3:44 AM David Christensen <david@endpoint.com> wrote:\n>> >>\n>> >>\n>> >> On Oct 11, 2020, at 1:14 PM, Euler Taveira <euler.taveira@2ndquadrant.com> wrote:\n>> >>\n>> >> \n>> >> On Fri, 9 Oct 2020 at 15:54, David Christensen <david@endpoint.com> wrote:\n>> >>>\n>> >>>\n>> >>> Enclosed find a patch to add a “truncate” option to subscription commands.\n>> >>>\n>> >>> When adding new tables to a subscription (either via `CREATE SUBSCRIPTION` or `REFRESH PUBLICATION`), tables on the target which are being newly subscribed will be truncated before the data copy step. 
This saves explicit coordination of a manual `TRUNCATE` on the target tables and allows the results of the initial data sync to be the same as on the publisher at the time of sync.\n>> >>>\n>> >>> To preserve compatibility with existing behavior, the default value for this parameter is `false`.\n>> >>>\n>> >>\n>> >> Truncate will fail for tables whose foreign keys refer to it. If such a feature cannot handle foreign keys, the usefulness will be restricted.\n>> >>\n>> >>\n>> >> This is true for existing “truncate” with FKs, so doesn’t seem to be any different to me.\n>> >>\n>> >\n>> > What would happen if there are multiple tables and truncate on only\n>> > one of them failed due to FK check? Does it give an error in such a\n>> > case, if so will the other tables be truncated?\n>>\n>> Currently each SyncRep relation is sync’d separately in its own worker process; we are doing the truncate at the initialization step of this, so it’s inherently in its own transaction. I think if we are going to do any sort of validation on this, it would have to be at the point of the CREATE SUBSCRIPTION/REFRESH PUBLICATION where we have the relation list and can do sanity-checking there.\n>>\n>> Obviously if someone changes the schema at some point between when it does this and when relation syncs start there is a race condition, but the same issue would affect other data sync things, so I don’t care to solve that as part of this patch.\n>>\n>> >> Hypothetically if you checked all new tables and could verify if there were FK cycles only already in the new tables being added then “truncate cascade” would be fine. Arguably if they had existing tables that were part of an FK that wasn’t fully replicated they were already operating brokenly.\n>> >>\n>> >\n>> > I think if both PK_table and FK_table are part of the same\n>> > subscription then there should be a problem as both them first get\n>> > truncated? 
If they are part of a different subscription (or if they\n>> > are not subscribed due to whatever reason) then probably we need to\n>> > deal such cases carefully.\n>>\n>> You mean “should not be a problem” here? If so, I agree with that. Obviously if we determine this features is only useful with this support we’d have to chase the entire dependency graph and make sure that is all contained in the set of newly-subscribed tables (or at least FK referents).\n>>\n>> I have not considered tables that are part of more than one subscription (is that possible?); we presumably should error out if any table exists already in a separate subscription, as we’d want to avoid truncating tables already part of an existing subscription.\n>>\n>> While I’m happy to take a stab at fixing some of the FK/PK issues, it seems easy to go down a rabbit hole. I’m not convinced that we couldn’t just detect FK issues and choose to not handle this case without decreasing the utility for at least some cases. (Perhaps we could give a hint as to the issues detected to point someone in the right direction.) 
Anyway, glad to keep discussing potential implications, etc.\n>>\n>> Best,\n>>\n>> David\n>> --\n>> David Christensen\n>> Senior Software and Database Engineer\n>> End Point Corporation\n>> david@endpoint.com\n>> 785-727-1171\n>>\n>>\n>>\n>>\n\n\n", "msg_date": "Wed, 25 Nov 2020 12:45:51 -0600", "msg_from": "David Christensen <david@endpoint.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "On Thu, Nov 26, 2020 at 12:16 AM David Christensen <david@endpoint.com> wrote:\n>\n> Hi,\n>\n> At this time I do not have time to make the necessary changes for this\n> commitfest so I am voluntarily withdrawing this patch, but will\n> revisit at a future time.\n\nHi,\n\nThis feature looks useful in the sense that it avoids users having to\nmanually lookup all the tables on all the subscribers for truncation\n(in case they want the subscriber tables to exactly sync with the\npublisher tables).\n\nI have gone through the prior discussions on this thread. IMO, we can\nalways go ahead with TRUNCATE ... RESTRICT behavior to avoid some\nunnecessary truncation of subscriber local tables (if at all users\nhave such tables) that can arise due to CASCADE option. It looks like\nthere are some problems with the FK - PK dependencies. Below are my\nthoughts:\n\n1) Whether a table the sync worker is trying to truncate is having any\nreferencing (foreign key) tables on the subscriber? If yes, whether\nall the referencing tables are present in the list of subscription\ntables (output of fetch_table_list)? In this case, the sync worker is\ntruncating the primary key/referenced table.\n\nOne way to solve the above problem is by storing the table oids of the\nsubscription tables (output of fetch_table_list) in a new column in\nthe pg_subscription catalog (like subpublications text[] column). 
In\nthe sync worker, before truncation of a table, use\nheap_truncate_find_FKs to get all the referencing tables of the given\ntable and get all the subscription tables from the new pg_subscription\ncolumn. If all the referencing tables exist in the subscription\ntables, then truncate the table, otherwise don't, just skip it. There\ncan be a problem here if there are many subscription tables, the size\nof the new column in pg_subscription can be huge. However, we can\nchoose to store the table ids in this new column only when the\ntruncate option is specified.\n\nAnother way is to let each table sync worker scan the\npg_subscription_rel to get all the relations that belong to a\nsubscription. But I felt this was costly.\n\n2) Whether a table the sync worker is trying to truncate is a\nreferencing table for any of the subscriber tables that is not part of\nthe subscription list of tables? In this case, the table the sync\nworker is truncating is the foreign key/referencing table.\n\nThis isn't a problem actually, the sync worker can safely truncate the\ntable. This is also in line with the current TRUNCATE command\nbehaviour.\n\n3) I think we should allow the truncate option with CREATE\nSUBSCRIPTION, ALTER SUBSCRIPTION ... REFRESH/SET/ADD PUBLICATION,\nbasically wherever copy_data and refresh options can be specified. And\nthere's no need to store the truncate option in the pg_subscription\ncatalogue because we allow it to be specified with only DDLs.\n\n4) If there are a huge number of tables with lots of data, then all\nthe sync workers will have to spend an extra amount of time in\ntruncating the tables. At times the publications can use \"FOR ALL\nTABLES\" i.e. all the tables within a database, so truncating all of\nthem on the subscriber would be a time consuming task. 
I'm not sure if\nthis is okay.\n\n5) We can choose to skip the errors that arise out of\nExecuteTruncateGuts in a sync worker using PG_TRY/PG_CATCH or changing\nExecuteTruncateGuts API to return false on error instead of emitting\nan error.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 22 May 2021 09:58:20 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "On Sat, May 22, 2021 at 9:58 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Nov 26, 2020 at 12:16 AM David Christensen <david@endpoint.com> wrote:\n> >\n> > Hi,\n> >\n> > At this time I do not have time to make the necessary changes for this\n> > commitfest so I am voluntarily withdrawing this patch, but will\n> > revisit at a future time.\n>\n> Hi,\n>\n> This feature looks useful in the sense that it avoids users having to\n> manually lookup all the tables on all the subscribers for truncation\n> (in case they want the subscriber tables to exactly sync with the\n> publisher tables).\n>\n> I have gone through the prior discussions on this thread. IMO, we can\n> always go ahead with TRUNCATE ... RESTRICT behavior to avoid some\n> unnecessary truncation of subscriber local tables (if at all users\n> have such tables) that can arise due to CASCADE option. It looks like\n> there are some problems with the FK - PK dependencies. Below are my\n> thoughts:\n>\n> 1) Whether a table the sync worker is trying to truncate is having any\n> referencing (foreign key) tables on the subscriber? If yes, whether\n> all the referencing tables are present in the list of subscription\n> tables (output of fetch_table_list)? 
In this case, the sync worker is\n> truncating the primary key/referenced table.\n>\n> One way to solve the above problem is by storing the table oids of the\n> subscription tables (output of fetch_table_list) in a new column in\n> the pg_subscription catalog (like subpublications text[] column). In\n> the sync worker, before truncation of a table, use\n> heap_truncate_find_FKs to get all the referencing tables of the given\n> table and get all the subscription tables from the new pg_subscription\n> column. If all the referencing tables exist in the subscription\n> tables, then truncate the table, otherwise don't, just skip it.\n>\n\nHere, silently skipping doesn't seem like a good idea when the user\nhas asked to truncate the table. Shouldn't we allow it if the user has\nprovided say cascade with a truncate option?\n\n> There\n> can be a problem here if there are many subscription tables, the size\n> of the new column in pg_subscription can be huge. However, we can\n> choose to store the table ids in this new column only when the\n> truncate option is specified.\n>\n> Another way is to let each table sync worker scan the\n> pg_subscription_rel to get all the relations that belong to a\n> subscription. But I felt this was costly.\n>\n\nI feel it is better to use pg_subscription_rel especially because we\nwill do so when the user has given the truncate option and note that\nwe are already accessing it in sync worker for both reading and\nwriting. See LogicalRepSyncTableStart.\n\n> 2) Whether a table the sync worker is trying to truncate is a\n> referencing table for any of the subscriber tables that is not part of\n> the subscription list of tables? In this case, the table the sync\n> worker is truncating is the foreign key/referencing table.\n>\n> This isn't a problem actually, the sync worker can safely truncate the\n> table. 
This is also inline with the current TRUNCATE command\n> behaviour.\n>\n> 3) I think we should allow the truncate option with CREATE\n> SUBSCRIPTION, ALTER SUBSCRIPTION ... REFRESH/SET/ADD PUBLICATION,\n> basically wherever copy_data and refresh options can be specified. And\n> there's no need to store the truncate option in the pg_subscription\n> catalogue because we allow it to be specified with only DDLs.\n>\n\nmakes sense.\n\nOne other problem discussed in this thread was what to do when the\nsame table is part of multiple subscriptions and the user has provided\na truncate option while operating on such a subscription.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 24 May 2021 11:01:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "On Mon, May 24, 2021 at 11:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > 1) Whether a table the sync worker is trying to truncate is having any\n> > referencing (foreign key) tables on the subscriber? If yes, whether\n> > all the referencing tables are present in the list of subscription\n> > tables (output of fetch_table_list)? In this case, the sync worker is\n> > truncating the primary key/referenced table.\n> >\n> > One way to solve the above problem is by storing the table oids of the\n> > subscription tables (output of fetch_table_list) in a new column in\n> > the pg_subscription catalog (like subpublications text[] column). In\n> > the sync worker, before truncation of a table, use\n> > heap_truncate_find_FKs to get all the referencing tables of the given\n> > table and get all the subscription tables from the new pg_subscription\n> > column. 
If all the referencing tables exist in the subscription\n> > tables, then truncate the table, otherwise don't, just skip it.\n> >\n>\n> Here, silently skipping doesn't seem like a good idea when the user\n> has asked to truncate the table. Shouldn't we allow it if the user has\n> provided say cascade with a truncate option?\n\nWe could do that. In that case, the truncate option just can't be a\nboolean, but it has to be an enum accepting \"restrict\", \"cascade\",\nmaybe \"restart identity\" or \"continue identity\" too. I have a concern\nhere - what if the ExecuteTruncateGuts fails with the cascade option\nfor whatever reason? Should the table sync worker be trapped in that\nerror? Will that table ever finish initial table sync/data copy?\nBasically, how will this error info be known to the user other than\nfrom the subscriber logs?\n\nOr should it just continue by skipping the error?\n\nIf required, we could introduce another rel state, say,\nSUBREL_STATE_READY_WITH_TRUNCATION_DONE if the table is truncated as\nper the user expectation. Otherwise just SUBREL_STATE_READY if there\nhas been any error occurred while truncating.\n\nAnother thing is that, if we allow the cascade option we must document\nit saying that the truncate might cascade down to any subscriber local\ntables that are not part of the subscription.\n\nThoughts?\n\n> > There\n> > can be a problem here if there are many subscription tables, the size\n> > of the new column in pg_susbcription can be huge. However, we can\n> > choose to store the table ids in this new column only when the\n> > truncate option is specified.\n> >\n> > Another way is to let each table sync worker scan the\n> > pg_subscription_rel to get all the relations that belong to a\n> > subscription. 
But I felt this was costly.\n> >\n>\n> I feel it is better to use pg_subscription_rel especially because we\n> will do so when the user has given the truncate option and note that\n> we are already accessing it in sync worker for both reading and\n> writing. See LogicalRepSyncTableStart.\n\nNote that in pg_subscription_rel, there can exist multiple rows for\neach table for a given subscription. Say, t1 is a table that the sync\nworker is trying to truncate and copy. Say, t1_dep1, t1_dep2, t1_dep3\n.... are the dependent tables (we can find these using\nheap_truncate_find_FKs). Now, we need to see if all the t1_dep1,\nt1_dep2, t1_dep3 .... tables are in the pg_subscription_rel with the\nsame subscription id, then only we can delete all of them with\nExecuteTruncateGuts() using cascade option. If any of the t1_depX is\neither not in the pg_subscription_rel or it is subscribed in another\nsubscription, then is it okay if we scan pg_subscription_rel in a loop\nwith t1_depX relid's? Isn't it costlier? Or since it is a cache\nlookup, maybe that's okay?\n\n /* Try finding the mapping. */\n tup = SearchSysCache2(SUBSCRIPTIONRELMAP,\n ObjectIdGetDatum(relid),\n ObjectIdGetDatum(subid));\n\n> One other problem discussed in this thread was what to do when the\n> same table is part of multiple subscriptions and the user has provided\n> a truncate option while operating on such a subscription.\n\nI think we can just skip truncating a table when it is part of\nmultiple subscriptions. 
We can tell this clearly in the documentation.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 May 2021 14:10:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" }, { "msg_contents": "On Mon, May 24, 2021 at 2:10 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, May 24, 2021 at 11:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > 1) Whether a table the sync worker is trying to truncate is having any\n> > > referencing (foreign key) tables on the subscriber? If yes, whether\n> > > all the referencing tables are present in the list of subscription\n> > > tables (output of fetch_table_list)? In this case, the sync worker is\n> > > truncating the primary key/referenced table.\n> > >\n> > > One way to solve the above problem is by storing the table oids of the\n> > > subscription tables (output of fetch_table_list) in a new column in\n> > > the pg_subscription catalog (like subpublications text[] column). In\n> > > the sync worker, before truncation of a table, use\n> > > heap_truncate_find_FKs to get all the referencing tables of the given\n> > > table and get all the subscription tables from the new pg_subscription\n> > > column. If all the referencing tables exist in the subscription\n> > > tables, then truncate the table, otherwise don't, just skip it.\n> > >\n> >\n> > Here, silently skipping doesn't seem like a good idea when the user\n> > has asked to truncate the table. Shouldn't we allow it if the user has\n> > provided say cascade with a truncate option?\n>\n> We could do that. In that case, the truncate option just can't be a\n> boolean, but it has to be an enum accepting \"restrict\", \"cascade\",\n> maybe \"restart identity\" or \"continue identity\" too. 
I have a concern\n> here - what if the ExecuteTruncateGuts fails with the cascade option\n> for whatever reason? Should the table sync worker be trapped in that\n> error?\n>\n\nHow is it any different from any other error we got during table sync\n(say PK violation, out of memory, or any other such error)?\n\n>\n> > > There\n> > > can be a problem here if there are many subscription tables, the size\n> > > of the new column in pg_subscription can be huge. However, we can\n> > > choose to store the table ids in this new column only when the\n> > > truncate option is specified.\n> > >\n> > > Another way is to let each table sync worker scan the\n> > > pg_subscription_rel to get all the relations that belong to a\n> > > subscription. But I felt this was costly.\n> > >\n> >\n> > I feel it is better to use pg_subscription_rel especially because we\n> > will do so when the user has given the truncate option and note that\n> > we are already accessing it in sync worker for both reading and\n> > writing. See LogicalRepSyncTableStart.\n>\n> Note that in pg_subscription_rel, there can exist multiple rows for\n> each table for a given subscription. Say, t1 is a table that the sync\n> worker is trying to truncate and copy. Say, t1_dep1, t1_dep2, t1_dep3\n> .... are the dependent tables (we can find these using\n> heap_truncate_find_FKs). Now, we need to see if all the t1_dep1,\n> t1_dep2, t1_dep3 .... tables are in the pg_subscription_rel with the\n> same subscription id, then only we can delete all of them with\n> ExecuteTruncateGuts() using cascade option. If any of the t1_depX is\n> either not in the pg_subscription_rel or it is subscribed in another\n> subscription, then is it okay if we scan pg_subscription_rel in a loop\n> with t1_depX relid's?\n>\n\nWhy do you need to search in a loop? There is an index for relid, subid.\n\n> Isn't it costlier? Or since it is a cache\n> lookup, maybe that's okay?\n>\n> /* Try finding the mapping. 
*/\n> tup = SearchSysCache2(SUBSCRIPTIONRELMAP,\n> ObjectIdGetDatum(relid),\n> ObjectIdGetDatum(subid));\n>\n> > One other problem discussed in this thread was what to do when the\n> > same table is part of multiple subscriptions and the user has provided\n> > a truncate option while operating on such a subscription.\n>\n> I think we can just skip truncating a table when it is part of\n> multiple subscriptions. We can tell this clearly in the documentation.\n>\n\nOkay, I don't have any better ideas at this stage.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 25 May 2021 15:16:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add `truncate` option to subscription commands" } ]
[ { "msg_contents": "Hi\n\nI found some code places call list_delete_ptr can be replaced by list_delete_xxxcell which can be faster.\n\ndiff --git a/src/backend/optimizer/path/joinpath.c b/src/backend/optimizer/path/joinpath.c\nindex db54a6b..61ef7c8 100644\n--- a/src/backend/optimizer/path/joinpath.c\n+++ b/src/backend/optimizer/path/joinpath.c\n@@ -1005,8 +1005,8 @@ sort_inner_and_outer(PlannerInfo *root,\n \t\t/* Make a pathkey list with this guy first */\n \t\tif (l != list_head(all_pathkeys))\n \t\t\touterkeys = lcons(front_pathkey,\n-\t\t\t\t\t\t\t list_delete_ptr(list_copy(all_pathkeys),\n-\t\t\t\t\t\t\t\t\t\t\t front_pathkey));\n+\t\t\t\t\t\t\t list_delete_nth_cell(list_copy(all_pathkeys),\n+\t\t\t\t\t\t\t\t\t\t\t\t foreach_current_index(l)));\n \t\telse\n \t\t\touterkeys = all_pathkeys;\t/* no work at first one... */\n \ndiff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c\nindex fe777c3..d0f15b8 100644\n--- a/src/backend/rewrite/rewriteHandler.c\n+++ b/src/backend/rewrite/rewriteHandler.c\n@@ -650,7 +650,7 @@ adjustJoinTreeList(Query *parsetree, bool removert, int rt_index)\n \t\t\tif (IsA(rtr, RangeTblRef) &&\n \t\t\t\trtr->rtindex == rt_index)\n \t\t\t{\n-\t\t\t\tnewjointree = list_delete_ptr(newjointree, rtr);\n+\t\t\t\tnewjointree = list_delete_cell(newjointree, l);\n\n\nBest regards,\nhouzj", "msg_date": "Sat, 10 Oct 2020 02:44:49 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "Use list_delete_xxxcell O(1) instead of list_delete_ptr O(N) in some\n places" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nPatch applies cleanly on master & 13 and installcheck-world runs on 13 & master. 
Seem to follow the new style of using more the expressive macro's for the list interface, so looks good to me.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Wed, 14 Oct 2020 07:13:42 +0000", "msg_from": "Luc Vlaming <luc@swarm64.com>", "msg_from_op": false, "msg_subject": "Re: Use list_delete_xxxcell O(1) instead of list_delete_ptr O(N) in\n some\n places" }, { "msg_contents": "On Sat, 10 Oct 2020 at 15:45, Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> I found some code places call list_delete_ptr can be replaced by list_delete_xxxcell which can be faster.\n>\n> diff --git a/src/backend/optimizer/path/joinpath.c b/src/backend/optimizer/path/joinpath.c\n> index db54a6b..61ef7c8 100644\n> --- a/src/backend/optimizer/path/joinpath.c\n> +++ b/src/backend/optimizer/path/joinpath.c\n> @@ -1005,8 +1005,8 @@ sort_inner_and_outer(PlannerInfo *root,\n> /* Make a pathkey list with this guy first */\n> if (l != list_head(all_pathkeys))\n> outerkeys = lcons(front_pathkey,\n> - list_delete_ptr(list_copy(all_pathkeys),\n> - front_pathkey));\n> + list_delete_nth_cell(list_copy(all_pathkeys),\n> + foreach_current_index(l)));\n> else\n> outerkeys = all_pathkeys; /* no work at first one... */\n\nThat looks ok to me. 
It would be more optimal if we had a method to\nmove an element to the front of a list, or to any specified position,\nbut I can't imagine it's worth making such a function just for that.\nSo what you have there seems fine.\n\n> diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c\n> index fe777c3..d0f15b8 100644\n> --- a/src/backend/rewrite/rewriteHandler.c\n> +++ b/src/backend/rewrite/rewriteHandler.c\n> @@ -650,7 +650,7 @@ adjustJoinTreeList(Query *parsetree, bool removert, int rt_index)\n> if (IsA(rtr, RangeTblRef) &&\n> rtr->rtindex == rt_index)\n> {\n> - newjointree = list_delete_ptr(newjointree, rtr);\n> + newjointree = list_delete_cell(newjointree, l);\n\nI think you may as well just use newjointree =\nforeach_delete_current(newjointree, l);. The comment about why the\nlist_delete is ok inside a foreach is then irrelevant since\nforeach_delete_current() is designed for deleting the current foreach\ncell.\n\nLooking around for other places I found two more in equivclass.c.\nThese two do require an additional moving part to keep track of the\nindex we want to delete, so they're not quite as clear cut a win to\ndo. However, I don't think tracking the index makes the code overly\ncomplex, so I'm thinking they're both fine to do. 
Does anyone think\ndifferently?\n\nUpdated patch attached.\n\nDavid", "msg_date": "Fri, 16 Oct 2020 11:24:24 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use list_delete_xxxcell O(1) instead of list_delete_ptr O(N) in\n some places" }, { "msg_contents": "> > I found some code places call list_delete_ptr can be replaced by\r\n> list_delete_xxxcell which can be faster.\r\n> >\r\n> > diff --git a/src/backend/optimizer/path/joinpath.c\r\n> > b/src/backend/optimizer/path/joinpath.c\r\n> > index db54a6b..61ef7c8 100644\r\n> > --- a/src/backend/optimizer/path/joinpath.c\r\n> > +++ b/src/backend/optimizer/path/joinpath.c\r\n> > @@ -1005,8 +1005,8 @@ sort_inner_and_outer(PlannerInfo *root,\r\n> > /* Make a pathkey list with this guy first */\r\n> > if (l != list_head(all_pathkeys))\r\n> > outerkeys = lcons(front_pathkey,\r\n> > -\r\n> list_delete_ptr(list_copy(all_pathkeys),\r\n> > -\r\n> front_pathkey));\r\n> > +\r\n> list_delete_nth_cell(list_copy(all_pathkeys),\r\n> > +\r\n> > + foreach_current_index(l)));\r\n> > else\r\n> > outerkeys = all_pathkeys; /* no work at\r\n> first one... */\r\n> \r\n> That looks ok to me. 
It would be more optimal if we had a method to move\r\n> an element to the front of a list, or to any specified position, but I can't\r\n> imagine it's worth making such a function just for that.\r\n> So what you have there seems fine.\r\n> \r\n> > diff --git a/src/backend/rewrite/rewriteHandler.c\r\n> > b/src/backend/rewrite/rewriteHandler.c\r\n> > index fe777c3..d0f15b8 100644\r\n> > --- a/src/backend/rewrite/rewriteHandler.c\r\n> > +++ b/src/backend/rewrite/rewriteHandler.c\r\n> > @@ -650,7 +650,7 @@ adjustJoinTreeList(Query *parsetree, bool removert,\r\n> int rt_index)\r\n> > if (IsA(rtr, RangeTblRef) &&\r\n> > rtr->rtindex == rt_index)\r\n> > {\r\n> > - newjointree =\r\n> list_delete_ptr(newjointree, rtr);\r\n> > + newjointree =\r\n> > + list_delete_cell(newjointree, l);\r\n> \r\n> I think you may as well just use newjointree =\r\n> foreach_delete_current(newjointree, l);. The comment about why the\r\n> list_delete is ok inside a foreach is then irrelevant since\r\n> foreach_delete_current() is designed for deleting the current foreach cell.\r\n> \r\n> Looking around for other places I found two more in equivclass.c.\r\n> These two do require an additional moving part to keep track of the index\r\n> we want to delete, so they're not quite as clear cut a win to do. However,\r\n> I don't think tracking the index makes the code overly complex, so I'm\r\n> thinking they're both fine to do. 
Does anyone think differently?\r\n> \r\n> Updated patch attached.\r\n> \r\nThanks for reviewing the patch!\r\nAnd after checking the code again and I found two more places which can be improved.\r\n\r\n1.\r\n--- a/src/backend/parser/parse_expr.c\r\n+++ b/src/backend/parser/parse_expr.c\r\n@@ -1702,7 +1702,7 @@ transformMultiAssignRef(ParseState *pstate, MultiAssignRef *maref)\r\n \t\t */\r\n \t\tif (maref->colno == maref->ncolumns)\r\n \t\t\tpstate->p_multiassign_exprs =\r\n-\t\t\t\tlist_delete_ptr(pstate->p_multiassign_exprs, tle);\r\n+\t\t\t\tlist_delete_last(pstate->p_multiassign_exprs);\r\n\r\nBased on the logic above in function transformMultiAssignRef,\r\nI found 'tle' is always the last one in list ' pstate->p_multiassign_exprs ' ,\r\nSo list_delete_last seems can be used here.\r\n\r\n\r\n2.\r\n\r\n+\t\t\tnameEl_idx = foreach_current_index(option);\r\n \t\t}\r\n \t}\r\n \r\n@@ -405,7 +407,7 @@ generateSerialExtraStmts(CreateStmtContext *cxt, ColumnDef *column,\r\n \t\t}\r\n \t\tsname = rv->relname;\r\n \t\t/* Remove the SEQUENCE NAME item from seqoptions */\r\n-\t\tseqoptions = list_delete_ptr(seqoptions, nameEl);\r\n+\t\tseqoptions = list_delete_nth_cell(seqoptions, nameEl_idx);\r\n\r\nAdd a new var ' nameEl_idx ' to catch the index.\r\n\r\nBest regards,\r\nhouzj", "msg_date": "Fri, 16 Oct 2020 03:42:34 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Use list_delete_xxxcell O(1) instead of list_delete_ptr O(N) in\n some places" }, { "msg_contents": "On Fri, 16 Oct 2020 at 16:42, Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> And after checking the code again and I found two more places which can be improved.\n>\n> 1.\n> --- a/src/backend/parser/parse_expr.c\n> +++ b/src/backend/parser/parse_expr.c\n> @@ -1702,7 +1702,7 @@ transformMultiAssignRef(ParseState *pstate, MultiAssignRef *maref)\n> */\n> if (maref->colno == maref->ncolumns)\n> pstate->p_multiassign_exprs =\n> - 
list_delete_ptr(pstate->p_multiassign_exprs, tle);\n> + list_delete_last(pstate->p_multiassign_exprs);\n>\n> Based on the logic above in function transformMultiAssignRef,\n> I found 'tle' is always the last one in list ' pstate->p_multiassign_exprs ' ,\n> So list_delete_last seems can be used here.\n\n\nYeah. After a bit of looking I agree. There's a similar assumption\nthere already with:\n\n/*\n* Second or later column in a multiassignment. Re-fetch the\n* transformed SubLink or RowExpr, which we assume is still the last\n* entry in p_multiassign_exprs.\n*/\nAssert(pstate->p_multiassign_exprs != NIL);\ntle = (TargetEntry *) llast(pstate->p_multiassign_exprs);\n\n> 2.\n>\n> + nameEl_idx = foreach_current_index(option);\n> }\n> }\n>\n> @@ -405,7 +407,7 @@ generateSerialExtraStmts(CreateStmtContext *cxt, ColumnDef *column,\n> }\n> sname = rv->relname;\n> /* Remove the SEQUENCE NAME item from seqoptions */\n> - seqoptions = list_delete_ptr(seqoptions, nameEl);\n> + seqoptions = list_delete_nth_cell(seqoptions, nameEl_idx);\n>\n> Add a new var ' nameEl_idx ' to catch the index.\n\nYeah. That looks fine too.\n\nPushed.\n\nDavid\n\n\n", "msg_date": "Thu, 22 Oct 2020 14:40:07 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use list_delete_xxxcell O(1) instead of list_delete_ptr O(N) in\n some places" } ]
[ { "msg_contents": "A sub-patch extracted from the bigger patch in thread \"SQL-standard \nfunction body\"[0]: Make LANGUAGE SQL the default in CREATE FUNCTION and \nCREATE PROCEDURE, per SQL standard.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 10 Oct 2020 10:49:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Make LANGUAGE SQL the default" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> A sub-patch extracted from the bigger patch in thread \"SQL-standard \n> function body\"[0]: Make LANGUAGE SQL the default in CREATE FUNCTION and \n> CREATE PROCEDURE, per SQL standard.\n\nI'm suspicious of doing this, mainly because DO does not have that\ndefault. I think sticking with no-default is less likely to cause\nconfusion. Moreover, I don't really believe that having a default value\nhere is going to add any noticeable ease-of-use for anyone. What's much\nmore likely to happen is that we'll start getting novice questions about\nwhatever weird syntax errors you get when trying to feed plpgsql code to\nthe sql-language function parser. (I don't know what they are exactly,\nbut I'll bet a very fine dinner that they're less understandable to a\nnovice than \"no language specified\".)\n\nI don't see any reason why we can't figure out that an unquoted function\nbody is SQL, while continuing to make no assumptions about a body written\nas a string. The argument that defaulting to SQL makes the latter case\nSQL-compliant seems pretty silly anyway.\n\nI also continue to suspect that we are going to need to treat quoted\nand unquoted SQL as two different languages, possibly with not even\nthe same semantics. 
If that's how things shake out, claiming that the\nquoted-SQL version is the default because spec becomes even sillier.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Oct 2020 12:14:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make LANGUAGE SQL the default" }, { "msg_contents": "On Sat, Oct 10, 2020 at 18:14 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > A sub-patch extracted from the bigger patch in thread \"SQL-standard\n> > function body\"[0]: Make LANGUAGE SQL the default in CREATE FUNCTION and\n> > CREATE PROCEDURE, per SQL standard.\n>\n> I'm suspicious of doing this, mainly because DO does not have that\n> default. I think sticking with no-default is less likely to cause\n> confusion. Moreover, I don't really believe that having a default value\n> here is going to add any noticeable ease-of-use for anyone. What's much\n> more likely to happen is that we'll start getting novice questions about\n> whatever weird syntax errors you get when trying to feed plpgsql code to\n> the sql-language function parser. (I don't know what they are exactly,\n> but I'll bet a very fine dinner that they're less understandable to a\n> novice than \"no language specified\".)\n>\n> I don't see any reason why we can't figure out that an unquoted function\n> body is SQL, while continuing to make no assumptions about a body written\n> as a string. The argument that defaulting to SQL makes the latter case\n> SQL-compliant seems pretty silly anyway.\n>\n\n+1\n\nPavel\n\n\n> I also continue to suspect that we are going to need to treat quoted\n> and unquoted SQL as two different languages, possibly with not even\n> the same semantics. If that's how things shake out, claiming that the\n> quoted-SQL version is the default because spec becomes even sillier.\n>\n> regards, tom lane\n>\n>\n>\n\n", "msg_date": "Sat, 10 Oct 2020 18:30:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make LANGUAGE SQL the default" } ]
[ { "msg_contents": "Hi\n\nInline handler creates simple_eval_resowner (without parent).\n\nInside plpgsql_estate_setup this value is assigned to\nestate->simple_eval_resowner\n\n<-->if (simple_eval_resowner)\n<--><-->estate->simple_eval_resowner = simple_eval_resowner;\n<-->else\n<--><-->estate->simple_eval_resowner = shared_simple_eval_resowner;\n\nWhen we call procedure with inner COMMIT, then when \"before_lxid !=\nafter_lxid\" following code is\nexecuted.\n\n<--><-->estate->simple_eval_estate = NULL;\n<--><-->estate->simple_eval_resowner = NULL;\n<--><-->plpgsql_create_econtext(estate);\n\nand\n\nfragment from plpgsql_create_econtext\n\n<-->/*\n<--> * Likewise for the simple-expression resource owner.\n<--> */\n<-->if (estate->simple_eval_resowner == NULL)\n<-->{\n<--><-->if (shared_simple_eval_resowner == NULL)\n<--><--><-->shared_simple_eval_resowner =\n<--><--><--><-->ResourceOwnerCreate(TopTransactionResourceOwner,\n<--><--><--><--><--><--><--><--><-->\"PL/pgSQL simple expressions\");\n<--><-->estate->simple_eval_resowner = shared_simple_eval_resowner;\n<-->}\n\nIn this case simple_eval_resowner from inline handler is overwritten and\nonly shared_simple_eval_resowner will be used.\n\nSo is it \"estate->simple_eval_resowner = NULL;\" error (without other\nconditions)?\n\nRegards\n\nPavel\n\n", "msg_date": "Mon, 12 Oct 2020 16:15:51 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "broken logic of simple_eval_resowner after CALL and COMMIT inside\n procedure" }, { "msg_contents": "po 12. 10. 
2020 v 16:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> Inline handler creates simple_eval_resowner (without parent).\n>\n> Inside plpgsql_estate_setup this value is assigned to\n> estate->simple_eval_resowner\n>\n> <-->if (simple_eval_resowner)\n> <--><-->estate->simple_eval_resowner = simple_eval_resowner;\n> <-->else\n> <--><-->estate->simple_eval_resowner = shared_simple_eval_resowner;\n>\n> When we call procedure with inner COMMIT, then when \"before_lxid !=\n> after_lxid\" following code is\n> executed.\n>\n> <--><-->estate->simple_eval_estate = NULL;\n> <--><-->estate->simple_eval_resowner = NULL;\n> <--><-->plpgsql_create_econtext(estate);\n>\n> and\n>\n> fragment from plpgsql_create_econtext\n>\n> <-->/*\n> <--> * Likewise for the simple-expression resource owner.\n> <--> */\n> <-->if (estate->simple_eval_resowner == NULL)\n> <-->{\n> <--><-->if (shared_simple_eval_resowner == NULL)\n> <--><--><-->shared_simple_eval_resowner =\n> <--><--><--><-->ResourceOwnerCreate(TopTransactionResourceOwner,\n> <--><--><--><--><--><--><--><--><-->\"PL/pgSQL simple expressions\");\n> <--><-->estate->simple_eval_resowner = shared_simple_eval_resowner;\n> <-->}\n>\n> In this case simple_eval_resowner from inline handler is overwritten and\n> only shared_simple_eval_resowner will be used.\n>\n> So is it \"estate->simple_eval_resowner = NULL;\" error (without other\n> conditions)?\n>\n\nProbably it is described\n\n *\n * (However, if a DO block executes COMMIT or ROLLBACK, then\nexec_stmt_commit\n * or exec_stmt_rollback will unlink it from the DO's simple-expression\nEState\n * and create a new shared EState that will be used thenceforth. The\noriginal\n * EState will be cleaned up when we get back to plpgsql_inline_handler.\nThis\n * is a bit ugly, but it isn't worth doing better, since scenarios like this\n * can't result in indefinite accumulation of state trees.)\n\nPavel\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n\n", "msg_date": "Mon, 12 Oct 2020 16:36:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: broken logic of simple_eval_resowner after CALL and COMMIT inside\n procedure" } ]
[ { "msg_contents": "Would someone explain to me why assign_recovery_target_lsn and related GUC\nassign hooks throw errors, rather than doing so in the associated check\nhooks? An assign hook is not supposed to throw an error. Full stop, no\nexceptions. We wouldn't bother to separate those hooks otherwise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 12 Oct 2020 12:00:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Bizarre coding in recovery target GUC management" }, { "msg_contents": "On 2020-10-12 18:00, Tom Lane wrote:\n> Would someone explain to me why assign_recovery_target_lsn and related GUC\n> assign hooks throw errors, rather than doing so in the associated check\n> hooks? An assign hook is not supposed to throw an error. Full stop, no\n> exceptions. We wouldn't bother to separate those hooks otherwise.\n\nThat code is checking whether more than one recovery target GUC has been \nset. I don't think the check hook sees the right state to be able to \ncheck that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 23 Oct 2020 10:07:21 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Bizarre coding in recovery target GUC management" } ]
[ { "msg_contents": "Hackers,\n\nOver in general [1] Robert Inder griped about the not-so-recent change to\nour automatic checkpointing, and thus archiving, behavior where\nnon-activity results in nothing happening. In looking over the\ndocumentation I felt a few changes could be made to increase the chance\nthat a reader learns this key dynamic. Attached is a patch with those\nchanges. Copied inline for ease of review.\n\ncommit 8af7f653907688252d8663a80e945f6f5782b0de\nAuthor: David G. Johnston <david.g.johnston@gmail.com>\nDate: Mon Oct 12 21:32:32 2020 +0000\n\n Further note required activity aspect of automatic checkpoint and\narchiving\n\n A few spots in the documentation could use a reminder that checkpoints\n and archiving requires that actual WAL records be written in order to\nhappen\n automatically.\n\ndiff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml\nindex 42a8ed328d..c312fc9387 100644\n--- a/doc/src/sgml/backup.sgml\n+++ b/doc/src/sgml/backup.sgml\n@@ -722,6 +722,8 @@ test ! -f\n/mnt/server/archivedir/00000001000000A900000065 &amp;&amp; cp pg_wal/0\n short <varname>archive_timeout</varname> &mdash; it will bloat your\narchive\n storage. 
<varname>archive_timeout</varname> settings of a minute or\nso are\n usually reasonable.\n+ This is mitigated by the fact that empty WAL segments will not be\narchived\n+ even if the archive_timeout period has elapsed.\n </para>\n\n <para>\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex ee914740cc..306f78765c 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -3131,6 +3131,8 @@ include_dir 'conf.d'\n <listitem>\n <para>\n Maximum time between automatic WAL checkpoints.\n+ The automatic checkpoint will do nothing if no new WAL has been\n+ written since the last recorded checkpoint.\n If this value is specified without units, it is taken as seconds.\n The valid range is between 30 seconds and one day.\n The default is five minutes (<literal>5min</literal>).\n@@ -3337,18 +3339,17 @@ include_dir 'conf.d'\n </term>\n <listitem>\n <para>\n+ Force the completion of the current, non-empty, WAL segment when\n+ this amount of time (if non-zero) has elapsed since the last\n+ segment file switch.\n The <xref linkend=\"guc-archive-command\"/> is only invoked for\n completed WAL segments. Hence, if your server generates little WAL\n traffic (or has slack periods where it does so), there could be a\n long delay between the completion of a transaction and its safe\n recording in archive storage. To limit how old unarchived\n data can be, you can set <varname>archive_timeout</varname> to\nforce the\n- server to switch to a new WAL segment file periodically. When this\n- parameter is greater than zero, the server will switch to a new\n- segment file whenever this amount of time has elapsed since the\nlast\n- segment file switch, and there has been any database activity,\n- including a single checkpoint (checkpoints are skipped if there is\n- no database activity). 
Note that archived files that are closed\n+ server to switch to a new WAL segment file periodically.\n+ Note that archived files that are closed\n early due to a forced switch are still the same length as\ncompletely\n full files. Therefore, it is unwise to use a very short\n <varname>archive_timeout</varname> &mdash; it will bloat your\narchive\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAKqjJm83gnw2u0ugpkgc4bq58L%3DcLwbvmh69TwKKo83Y1CnANw%40mail.gmail.com", "msg_date": "Mon, 12 Oct 2020 14:54:28 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "[patch] [doc] Further note required activity aspect of automatic\n checkpoint and archving" }, { "msg_contents": "On 2020-10-12 23:54, David G. Johnston wrote:\n> --- a/doc/src/sgml/backup.sgml\n> +++ b/doc/src/sgml/backup.sgml\n> @@ -722,6 +722,8 @@ test ! -f \n> /mnt/server/archivedir/00000001000000A900000065 &amp;&amp; cp pg_wal/0\n>      short <varname>archive_timeout</varname> &mdash; it will bloat \n> your archive\n>      storage.  <varname>archive_timeout</varname> settings of a minute \n> or so are\n>      usually reasonable.\n> +    This is mitigated by the fact that empty WAL segments will not be \n> archived\n> +    even if the archive_timeout period has elapsed.\n>     </para>\n\nThis is hopefully not what happens. 
What this would mean is that I'd \nthen have a sequence of WAL files named, say,\n\n1, 2, 3, 7, 8, ...\n\nbecause a few in the middle were not archived because they were empty.\n\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -3131,6 +3131,8 @@ include_dir 'conf.d'\n>        <listitem>\n>         <para>\n>          Maximum time between automatic WAL checkpoints.\n> +        The automatic checkpoint will do nothing if no new WAL has been\n> +        written since the last recorded checkpoint.\n>          If this value is specified without units, it is taken as seconds.\n>          The valid range is between 30 seconds and one day.\n>          The default is five minutes (<literal>5min</literal>).\n\nI think what happens is that the checkpoint is skipped, not that the \ncheckpoint happens but does nothing. That is the wording you cited in \nthe other thread from \n<https://www.postgresql.org/docs/13/wal-configuration.html>.\n\n\n", "msg_date": "Fri, 15 Jan 2021 08:16:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Further note required activity aspect of automatic\n checkpoint and archving" }, { "msg_contents": "On Fri, Jan 15, 2021 at 12:16 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 2020-10-12 23:54, David G. Johnston wrote:\n> > --- a/doc/src/sgml/backup.sgml\n> > +++ b/doc/src/sgml/backup.sgml\n> > @@ -722,6 +722,8 @@ test ! -f\n> > /mnt/server/archivedir/00000001000000A900000065 &amp;&amp; cp pg_wal/0\n> > short <varname>archive_timeout</varname> &mdash; it will bloat\n> > your archive\n> > storage. <varname>archive_timeout</varname> settings of a minute\n> > or so are\n> > usually reasonable.\n> > + This is mitigated by the fact that empty WAL segments will not be\n> > archived\n> > + even if the archive_timeout period has elapsed.\n> > </para>\n>\n> This is hopefully not what happens. 
What this would mean is that I'd\n> then have a sequence of WAL files named, say,\n>\n> 1, 2, 3, 7, 8, ...\n>\n> because a few in the middle were not archived because they were empty.\n>\n\nThis addition assumes it is known that the archive process first fills the\nfiles to their maximum size and then archives them. That filling of the\nfile is what causes the next file in the sequence to be created. So if the\narchiving doesn't happen the files do not get filled and the status-quo\nprevails.\n\nIf the above wants to be made more explicit in this change maybe:\n\n\"This is mitigated by the fact that archiving, and thus filling, the active\nWAL segment will not happen if that segment is empty; it will continue as\nthe active segment.\"\n\n\n> > --- a/doc/src/sgml/config.sgml\n> > +++ b/doc/src/sgml/config.sgml\n> > @@ -3131,6 +3131,8 @@ include_dir 'conf.d'\n> > <listitem>\n> > <para>\n> > Maximum time between automatic WAL checkpoints.\n> > + The automatic checkpoint will do nothing if no new WAL has been\n> > + written since the last recorded checkpoint.\n> > If this value is specified without units, it is taken as\n> seconds.\n> > The valid range is between 30 seconds and one day.\n> > The default is five minutes (<literal>5min</literal>).\n>\n> I think what happens is that the checkpoint is skipped, not that the\n> checkpoint happens but does nothing. That is the wording you cited in\n> the other thread from\n> <https://www.postgresql.org/docs/13/wal-configuration.html>.\n>\n\nConsistency is good; and considering it further the skipped wording is\ngenerally better anyway.\n\n\"The automatic checkpoint will be skipped if no new WAL has been written\nsince the last recorded checkpoint.\"\n\nDavid J.\n\n", "msg_date": "Fri, 15 Jan 2021 12:50:43 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [patch] [doc] Further note required activity aspect of automatic\n checkpoint and archving" }, { "msg_contents": "Hi David,\n\nOn 1/15/21 2:50 PM, David G. Johnston wrote:\n> \n> If the above wants to be made more explicit in this change maybe:\n> \n> \"This is mitigated by the fact that archiving, and thus filling, the \n> active WAL segment will not happen if that segment is empty; it will \n> continue as the active segment.\"\n\n\"archiving, and thus filling\" seems awkward to me. Perhaps:\n\nThis is mitigated by the fact that WAL segments will not be archived \nuntil they have been filled with some data, even if the archive_timeout \nperiod has elapsed.\n\n> Consistency is good; and considering it further the skipped wording is \n> generally better anyway.\n> \n> \"The automatic checkpoint will be skipped if no new WAL has been written \n> since the last recorded checkpoint.\"\nLooks good to me.\n\nCould you produce a new patch so Peter has something complete to look at?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 18 Mar 2021 11:36:52 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Further note required activity aspect of automatic\n checkpoint and archving" }, { "msg_contents": "> On 18 Mar 2021, at 16:36, David Steele <david@pgmasters.net> wrote:\n\n> Could you produce a new patch so Peter has something complete to look at?\n\nAs this thread has been stalled for for a few commitfests by 
now I'm marking\nthis patch as returned with feedback. Feel free to open a new entry for an\nupdated patch.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 10:36:37 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Further note required activity aspect of automatic\n checkpoint and archving" } ]
[ { "msg_contents": "Hackers,\n\nOver in Bug# 16652 [1] Christoph failed to recognize the fact that signal\nsending functions are inherently one-way just as signals are. It seems\nworth heading off this situation in the future by making it clear how\nsignals behave and, in the specific case of pg_reload_conf, that the\nimportant feedback one would hope to get out of a success/failure response\nfrom the function call must instead be found in other locations.\n\nPlease see the attached patch, included inline as well.\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/16652-58dd6028047058a6%40postgresql.org\n\ncommit 6f0ba7c8fd131c906669882e4402930e548e4522\nAuthor: David G. Johnston <david.g.johnston@gmail.com>\nDate: Mon Oct 12 22:35:38 2020 +0000\n\n Clarify that signal functions have no feedback\n\n Bug# 16652 complains that the definition of success for pg_reload_conf\n doesn't include the outcome of actually reloading the configuration\n files. While this is a fairly easy gap to cross given knowledge of\n signals, being more explicit here doesn't hurt.\n\n Additionally, because of the special nature of pg_reload_conf, add\n links to the various locations where information related to the\n success or failure of a reload can be found. Lacking an existing\n holistic location in the documentation to point the reader just\n list the three resources explicitly.\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex e7cff980dd..75ff8acc93 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -23927,7 +23927,8 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n\n <para>\n The functions shown in <xref\n- linkend=\"functions-admin-signal-table\"/> send control signals to\n+ linkend=\"functions-admin-signal-table\"/> send uni-directional\n+ control signals to\n other server processes. 
Use of these functions is restricted to\n superusers by default but access may be granted to others using\n <command>GRANT</command>, with noted exceptions.\n@@ -23935,7 +23936,8 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n\n <para>\n Each of these functions returns <literal>true</literal> if\n- successful and <literal>false</literal> otherwise.\n+ the signal was successfully sent and <literal>false</literal>\n+ if the sending of the signal failed.\n </para>\n\n <table id=\"functions-admin-signal-table\">\n@@ -23983,7 +23985,14 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n server to reload their configuration files. (This is initiated by\n sending a <systemitem>SIGHUP</systemitem> signal to the postmaster\n process, which in turn sends <systemitem>SIGHUP</systemitem> to\neach\n- of its children.)\n+ of its children.) Inspection of the\n+ <link linkend=\"runtime-config-logging\">log file</link>,\n+ <link linkend=\"view-pg-file-settings\">pg_file_settings view</link>,\n+ and the\n+ <link linkend=\"view-pg-settings\">pg_settings view</link>,\n+ is recommended before and/or after executing\n+ this function to detect whether there are any issues in the\nconfiguration\n+ files preventing some of all of their setting changes from taking\neffect.\n </para></entry>\n </row>", "msg_date": "Mon, 12 Oct 2020 15:43:27 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "[patch] [doc] Clarify that signal functions have no feedback" }, { "msg_contents": "On 2020-10-13 00:43, David G. Johnston wrote:\n> Over in Bug# 16652 [1] Christoph failed to recognize the fact that \n> signal sending functions are inherently one-way just as signals are.  
It \n> seems worth heading off this situation in the future by making it clear \n> how signals behave and, in the specific case of pg_reload_conf, that the \n> important feedback one would hope to get out of a success/failure \n> response from the function call must instead be found in other locations.\n\nI agree that the documentation could be improved here. But I don't see \nhow the added advice actually helps in practice. How can you detect \nreload errors by inspecting pg_settings etc.?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 27 Oct 2020 09:19:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Clarify that signal functions have no feedback" }, { "msg_contents": "On Tue, Oct 27, 2020 at 1:19 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-10-13 00:43, David G. Johnston wrote:\n> > Over in Bug# 16652 [1] Christoph failed to recognize the fact that\n> > signal sending functions are inherently one-way just as signals are. It\n> > seems worth heading off this situation in the future by making it clear\n> > how signals behave and, in the specific case of pg_reload_conf, that the\n> > important feedback one would hope to get out of a success/failure\n> > response from the function call must instead be found in other locations.\n>\n> I agree that the documentation could be improved here. But I don't see\n> how the added advice actually helps in practice. 
How can you detect\n> reload errors by inspecting pg_settings etc.?\n>\n\nI decided I was trying to be too thorough here by including stuff other\nthan the file related view added mainly for this purpose (of which I missed\nincluding the one pertinent to the bug report - pg_hba_file_rules).\n\nAttached is a version 2 patch listing only pg_hba_file_rules and\npg_file_settings as the \"before reload\" places (as they do show current\nfile contents) to validate that the server understands the newly changed\ncontents of the pg_hba.conf file and the configuration settings.\n\nDavid J.", "msg_date": "Mon, 2 Nov 2020 09:02:21 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [patch] [doc] Clarify that signal functions have no feedback" }, { "msg_contents": "On 02/11/2020 18:02, David G. Johnston wrote:\n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index bf6004f321..43bc2cf086 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -23892,7 +23892,8 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> \n> <para>\n> The functions shown in <xref\n> - linkend=\"functions-admin-signal-table\"/> send control signals to\n> + linkend=\"functions-admin-signal-table\"/> send uni-directional\n> + control signals to\n> other server processes. 
Use of these functions is restricted to\n> superusers by default but access may be granted to others using\n> <command>GRANT</command>, with noted exceptions.\n\nThe \"uni-directional\" sounds a bit redundant, \"send\" implies that it's \nuni-directional I think.\n\n> @@ -23900,7 +23901,8 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> \n> <para>\n> Each of these functions returns <literal>true</literal> if\n> - successful and <literal>false</literal> otherwise.\n> + the signal was successfully sent and <literal>false</literal>\n> + if the sending of the signal failed.\n> </para>\n\nThis is a good clarification.\n\n> <table id=\"functions-admin-signal-table\">\n> @@ -23948,7 +23950,11 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> server to reload their configuration files. (This is initiated by\n> sending a <systemitem>SIGHUP</systemitem> signal to the postmaster\n> process, which in turn sends <systemitem>SIGHUP</systemitem> to each\n> - of its children.)\n> + of its children.) Inspection of the relevant\n> + <link linkend=\"view-pg-file-settings\">pg_file_settings</link>\n> + or\n> + <link linkend=\"view-pg-hba-file-rules\">pg_hba_file_rules</link> views\n> + is recommended after making changes but before signaling the server.\n> </para></entry>\n> </row>\n\nI don't understand this recommendation. What is the user supposed to \nlook for in those views? And why before signaling the server?\n\n[me reads what those views do]. Oh, I see, the idea is that you can use \nthose views to check the configuration for errors, before applying the \nchanges. 
How about this:\n\nYou can use the <link \nlinkend=\"view-pg-file-settings\">pg_file_settings</link> and <link \nlinkend=\"view-pg-hba-file-rules\">pg_hba_file_rules</link> views to check \nthe configuration files for possible errors, before reloading.\n\n- Heikki\n\n\n", "msg_date": "Tue, 17 Nov 2020 15:13:12 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Clarify that signal functions have no feedback" }, { "msg_contents": "On Tue, Nov 17, 2020 at 6:13 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 02/11/2020 18:02, David G. Johnston wrote:\n> > diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> > index bf6004f321..43bc2cf086 100644\n> > --- a/doc/src/sgml/func.sgml\n> > +++ b/doc/src/sgml/func.sgml\n> > @@ -23892,7 +23892,8 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> >\n> > <para>\n> > The functions shown in <xref\n> > - linkend=\"functions-admin-signal-table\"/> send control signals to\n> > + linkend=\"functions-admin-signal-table\"/> send uni-directional\n> > + control signals to\n> > other server processes. 
Use of these functions is restricted to\n> > superusers by default but access may be granted to others using\n> > <command>GRANT</command>, with noted exceptions.\n>\n> The \"uni-directional\" sounds a bit redundant, \"send\" implies that it's\n> uni-directional I think.\n>\n\nAgreed, the other two changes sufficiently address the original complaint.\n\nYou can use the <link\n> linkend=\"view-pg-file-settings\">pg_file_settings</link> and <link\n> linkend=\"view-pg-hba-file-rules\">pg_hba_file_rules</link> views to check\n> the configuration files for possible errors, before reloading.\n>\n\nI agree with adding \"why\" you want to check those links, and added a bit\nabout why doing so before reloading works, but the phrasing using \"You\"\nseems out-of-place in this region of the documentation.\n\nv3 attached\n\nDavid J.", "msg_date": "Tue, 17 Nov 2020 12:50:20 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [patch] [doc] Clarify that signal functions have no feedback" }, { "msg_contents": "On 17/11/2020 21:50, David G. Johnston wrote:\n> On Tue, Nov 17, 2020 at 6:13 AM Heikki Linnakangas <hlinnaka@iki.fi \n> <mailto:hlinnaka@iki.fi>> wrote:\n> You can use the <link\n> linkend=\"view-pg-file-settings\">pg_file_settings</link> and <link\n> linkend=\"view-pg-hba-file-rules\">pg_hba_file_rules</link> views to\n> check\n> the configuration files for possible errors, before reloading.\n> \n> \n> I agree with adding \"why\" you want to check those links, and added a bit \n> about why doing so before reloading works, but the phrasing using \"You\" \n> seems out-of-place in this region of the documentation.\n\nThere are plenty of \"you\"s in the docs. Matter of taste, for sure, but \nI'd prefer more active voice. Me being stubborn, I pushed this using my \nwording. 
:-)\n\nThanks!\n\n- Heikki\n\n\n", "msg_date": "Wed, 18 Nov 2020 10:32:03 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Clarify that signal functions have no feedback" } ]
[ { "msg_contents": "> I found some code path use list_delete_ptr while the loop of foreach() is\n> iterating.\n> \n> List_delete_ptr seems search the list again to find the target cell and\n> delete it.\n> >\tforeach(cell, list)\n> >\t{\n> >\t\tif (lfirst(cell) == datum)\n> >\t\t\treturn list_delete_cell(list, cell);\n> >\t}\n> \n> \n> If we already get the cell in foreach loop, I think we can use\n> list_delete_cell to avoid searching the list again.\n> \n> Please see the attachment for the patch.\n\nI have added it to commitfest.\nhttps://commitfest.postgresql.org/30/2761/\n\nBest regards.\n\n\n\n\n", "msg_date": "Tue, 13 Oct 2020 08:02:10 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Use list_delete_cell instead in some places" } ]
[ { "msg_contents": "Hi,\n\nWhile developing some improvements for TPC-DS queries I found out that with\nUNION ALL partial paths are not emitted. Whilst fixing that I also came across\nthe subquery costing which does not seem to consider parallelism when doing\nthe costing.\n\nI added a simplified testcase in pg-regress to show this goes wrong, and\nattached also a before and after explain output of tpc-ds SF100 query 5\nbased on version 12.4.\n\nI hope I followed all etiquette and these kind of improvements are welcome.\n\nKind regards,\nLuc\nSwarm64", "msg_date": "Tue, 13 Oct 2020 08:57:23 +0000", "msg_from": "Luc Vlaming <luc@swarm64.com>", "msg_from_op": true, "msg_subject": "allow partial union-all and improve parallel subquery costing" }, { "msg_contents": "Hi,\n\nIt seems I ran the wrong make checks to verify everything is correct (make check instead\nof make installcheck-world) and this uncovered another regress test change. I also noticed\nthe statistics are sometimes giving different row count results so I increased the row\nstatistics target to make sure the regress output is stable. Updated patch attached which\nnow successfully runs installcheck-world for v13 and master.\n\nKind regards,\nLuc\n\n________________________________________\nFrom: Luc Vlaming <luc@swarm64.com>\nSent: Tuesday, October 13, 2020 10:57 AM\nTo: pgsql-hackers\nSubject: allow partial union-all and improve parallel subquery costing\n\nHi,\n\nWhile developing some improvements for TPC-DS queries I found out that with\nUNION ALL partial paths are not emitted. 
Whilst fixing that I also came across\nthe subquery costing which does not seem to consider parallelism when doing\nthe costing.\n\nI added a simplified testcase in pg-regress to show this goes wrong, and\nattached also a before and after explain output of tpc-ds SF100 query 5\nbased on version 12.4.\n\nI hope I followed all etiquette and these kind of improvements are welcome.\n\nKind regards,\nLuc\nSwarm64", "msg_date": "Wed, 14 Oct 2020 07:38:08 +0000", "msg_from": "Luc Vlaming <luc@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: allow partial union-all and improve parallel subquery costing" }, { "msg_contents": "On 14.10.20 09:38, Luc Vlaming wrote:\n> Hi,\n> \n> It seems I ran the wrong make checks to verify everything is correct (make check instead\n> of make installcheck-world) and this uncovered another regress test change. I also noticed\n> the statistics are sometimes giving different row count results so I increased the row\n> statistics target to make sure the regress output is stable. Updated patch attached which\n> now successfully runs installcheck-world for v13 and master.\n> \n> Kind regards,\n> Luc\n> \n> ________________________________________\n> From: Luc Vlaming <luc@swarm64.com>\n> Sent: Tuesday, October 13, 2020 10:57 AM\n> To: pgsql-hackers\n> Subject: allow partial union-all and improve parallel subquery costing\n> \n> Hi,\n> \n> While developing some improvements for TPC-DS queries I found out that with\n> UNION ALL partial paths are not emitted. 
Whilst fixing that I also came across\n> the subquery costing which does not seem to consider parallelism when doing\n> the costing.\n> \n> I added a simplified testcase in pg-regress to show this goes wrong, and\n> attached also a before and after explain output of tpc-ds SF100 query 5\n> based on version 12.4.\n> \n> I hope I followed all etiquette and these kind of improvements are welcome.\n> \n> Kind regards,\n> Luc\n> Swarm64\n> \n\nHi,\n\nCreated a commitfest entry assuming this is the right thing to do so \nthat someone can potentially pick it up during the commitfest.\n\nKind regards,\nLuc\nSwarm64\n\n\n", "msg_date": "Fri, 23 Oct 2020 07:51:16 +0200", "msg_from": "Luc Vlaming <luc@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: allow partial union-all and improve parallel subquery costing" }, { "msg_contents": "On 23-10-2020 07:51, Luc Vlaming wrote:\n> On 14.10.20 09:38, Luc Vlaming wrote:\n>> Hi,\n>>\n>> It seems I ran the wrong make checks to verify everything is correct \n>> (make check instead\n>> of make installcheck-world) and this uncovered another regress test \n>> change. I also noticed\n>> the statistics are sometimes giving different row count results so I \n>> increased the row\n>> statistics target to make sure the regress output is stable. Updated \n>> patch attached which\n>> now successfully runs installcheck-world for v13 and master.\n>>\n>> Kind regards,\n>> Luc\n>>\n>> ________________________________________\n>> From: Luc Vlaming <luc@swarm64.com>\n>> Sent: Tuesday, October 13, 2020 10:57 AM\n>> To: pgsql-hackers\n>> Subject: allow partial union-all and improve parallel subquery costing\n>>\n>> Hi,\n>>\n>> While developing some improvements for TPC-DS queries I found out that \n>> with\n>> UNION ALL partial paths are not emitted. 
Whilst fixing that I also \n>> came across\n>> the subquery costing which does not seem to consider parallelism when \n>> doing\n>> the costing.\n>>\n>> I added a simplified testcase in pg-regress to show this goes wrong, and\n>> attached also a before and after explain output of tpc-ds SF100 query 5\n>> based on version 12.4.\n>>\n>> I hope I followed all etiquette and these kind of improvements are \n>> welcome.\n>>\n>> Kind regards,\n>> Luc\n>> Swarm64\n>>\n> \n> Hi,\n> \n> Created a commitfest entry assuming this is the right thing to do so \n> that someone can potentially pick it up during the commitfest.\n> \n> Kind regards,\n> Luc\n> Swarm64\n\nHi,\n\nProviding an updated patch based on latest master.\n\nCheers,\nLuc", "msg_date": "Wed, 30 Dec 2020 14:54:39 +0100", "msg_from": "Luc Vlaming <luc@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: allow partial union-all and improve parallel subquery costing" }, { "msg_contents": "Hi Luc,\n\nOn 12/30/20 8:54 AM, Luc Vlaming wrote:\n>>\n>> Created a commitfest entry assuming this is the right thing to do so \n>> that someone can potentially pick it up during the commitfest.\n> \n> Providing an updated patch based on latest master.\n\nLooks like you need another rebase: \nhttp://cfbot.cputube.org/patch_32_2787.log. Marked as Waiting for Author.\n\nYou may also want to give a more detailed description of what you have \ndone here and why it improves execution plans. 
This may help draw some \nreviewers.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 15 Mar 2021 09:09:16 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: allow partial union-all and improve parallel subquery costing" }, { "msg_contents": "On 3/15/21 9:09 AM, David Steele wrote:\n> \n> On 12/30/20 8:54 AM, Luc Vlaming wrote:\n>>>\n>>> Created a commitfest entry assuming this is the right thing to do so \n>>> that someone can potentially pick it up during the commitfest.\n>>\n>> Providing an updated patch based on latest master.\n> \n> Looks like you need another rebase: \n> http://cfbot.cputube.org/patch_32_2787.log. Marked as Waiting for Author.\n> \n> You may also want to give a more detailed description of what you have \n> done here and why it improves execution plans. This may help draw some \n> reviewers.\n\nSince no new patch has been provided, marking this Returned with Feedback.\n\nPlease resubmit to the next CF when you have a new patch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 8 Apr 2021 10:45:58 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: allow partial union-all and improve parallel subquery costing" }, { "msg_contents": "Hi David,\n\nOn 15-03-2021 14:09, David Steele wrote:\n> Hi Luc,\n> \n> On 12/30/20 8:54 AM, Luc Vlaming wrote:\n>>>\n>>> Created a commitfest entry assuming this is the right thing to do so \n>>> that someone can potentially pick it up during the commitfest.\n>>\n>> Providing an updated patch based on latest master.\n> \n> Looks like you need another rebase: \n> http://cfbot.cputube.org/patch_32_2787.log. Marked as Waiting for Author.\n> \n> You may also want to give a more detailed description of what you have \n> done here and why it improves execution plans. This may help draw some \n> reviewers.\n> \n> Regards,\n\nHere's an improved and rebased patch. 
Hope the description helps some \npeople. I will resubmit it to the next commitfest.\n\nRegards,\nLuc", "msg_date": "Mon, 12 Apr 2021 14:01:36 +0200", "msg_from": "Luc Vlaming <luc@swarm64.com>", "msg_from_op": true, "msg_subject": "Re: allow partial union-all and improve parallel subquery costing" }, { "msg_contents": "Le lundi 12 avril 2021, 14:01:36 CEST Luc Vlaming a écrit :\n> Here's an improved and rebased patch. Hope the description helps some\n> people. I will resubmit it to the next commitfest.\n> \n\nHello Luc,\n\nI've taken a look at this patch, and while I don't fully understand its \nimplications here are a couple remarks.\n\nI think you should add a test demonstrating the use of the new partial append \npath you add, for example using your base query:\n\nexplain (costs off)\nselect sum(two) from\n( \nselect *, 1::int from tenk1 a\nunion all\nselect *, 1::bigint from tenk1 b \n) t\n;\n\nI'm not sure I understand why the subquery scan rows estimate has not been \naccounted like you propose before, because the way it's done as of now \nbasically doubles the estimate for the subqueryscan, since we account for it \nalready being divided by it's number of workers, as mentioned in cost_append:\n\n/*\n * Apply parallel divisor to subpaths. 
Scale the number of rows\n * for each partial subpath based on the ratio of the parallel\n * divisor originally used for the subpath to the one we adopted.\n * Also add the cost of partial paths to the total cost, but\n * ignore non-partial paths for now.\n */\n\nDo we have other nodes for which we make this assumption ?\n\nAlso, adding a partial path comprised only of underlying partial paths might \nnot be enough: maybe we should add one partial path even in the case of mixed \npartial / nonpartial paths like it's done in add_paths_to_append_rel ?\n\nRegards,\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Fri, 23 Jul 2021 13:46:42 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: allow partial union-all and improve parallel subquery costing" }, { "msg_contents": "With the thread stalled and requests for a test (documentation really?) not\nresponded to I'm marking this patch Returned with Feedback.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 2 Dec 2021 15:10:15 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: allow partial union-all and improve parallel subquery costing" } ]
[ { "msg_contents": "I have some code that I've been using that supports adding and\nauthenticating Windows groups via the pg_ident file. This is useful for\nsysadmins as it lets them control database access outside the database\nusing Windows groups. It has a new\nindicator (+), that signifies the identifier is a Windows group, as in the\nfollowing example:\n\n# MAPNAME SYSTEM-USERNAME PG-USERNAME\n\"Users\" \"+User group\" postgres\n\nA new function was added to test if a user token is in the windows group:\n\n/*\n* Check if the user (sspiToken) is a member of the specified group\n*/\nstatic BOOL\nsspi_user_is_in_group(HANDLE sspiToken, LPCTSTR groupName)\n\nAttached is the patch.\n\nthanks,\nRussell Foster", "msg_date": "Tue, 13 Oct 2020 09:10:43 -0400", "msg_from": "Russell Foster <russell.foster.coding@gmail.com>", "msg_from_op": true, "msg_subject": "[Patch] Using Windows groups for SSPI authentication" }, { "msg_contents": "Russell Foster <russell.foster.coding@gmail.com> writes:\n> I have some code that I've been using that supports adding and\n> authenticating Windows groups via the pg_ident file. This is useful for\n> sysadmins as it lets them control database access outside the database\n> using Windows groups. It has a new\n> indicator (+), that signifies the identifier is a Windows group, as in the\n> following example:\n\n> # MAPNAME SYSTEM-USERNAME PG-USERNAME\n> \"Users\" \"+User group\" postgres\n\nWhile I don't object to adding functionality to access Windows groups,\nI do object to using syntax that makes random assumptions about what a\nuser name can or can't be.\n\nThere was a prior discussion of this in the context of some other patch\nthat had a similar idea. [ digs in archives... 
] Ah, here it is:\n\nhttps://www.postgresql.org/message-id/flat/4ba3ad54-bb32-98c6-033a-ccca7058fc2f%402ndquadrant.com\n\nIt doesn't look like we arrived at any firm consensus about what to\ndo instead, but maybe you can find some ideas there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Oct 2020 13:15:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Using Windows groups for SSPI authentication" }, { "msg_contents": "Going to take a guess at what you mean by:\n\nI do object to using syntax that makes random assumptions about what a\nuser name can or can't be.\n\nAre you referring to the \"+\" syntax in the ident file? I chose that because\nsomewhere else (hba?) using the same syntax for groups. The quotes are just\nthere to make the group name case sensitive.\n\n\nOn Tue, Oct 13, 2020 at 1:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Russell Foster <russell.foster.coding@gmail.com> writes:\n> > I have some code that I've been using that supports adding and\n> > authenticating Windows groups via the pg_ident file. This is useful for\n> > sysadmins as it lets them control database access outside the database\n> > using Windows groups. It has a new\n> > indicator (+), that signifies the identifier is a Windows group, as in\n> the\n> > following example:\n>\n> > # MAPNAME SYSTEM-USERNAME PG-USERNAME\n> > \"Users\" \"+User group\" postgres\n>\n> While I don't object to adding functionality to access Windows groups,\n> I do object to using syntax that makes random assumptions about what a\n> user name can or can't be.\n>\n> There was a prior discussion of this in the context of some other patch\n> that had a similar idea. [ digs in archives... 
] Ah, here it is:\n>\n>\n> https://www.postgresql.org/message-id/flat/4ba3ad54-bb32-98c6-033a-ccca7058fc2f%402ndquadrant.com\n>\n> It doesn't look like we arrived at any firm consensus about what to\n> do instead, but maybe you can find some ideas there.\n>\n> regards, tom lane\n>\n", "msg_date": "Tue, 13 Oct 2020 15:02:16 -0400", "msg_from": "Russell Foster <russell.foster.coding@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Patch] Using Windows groups for SSPI authentication" },
{ "msg_contents": "Russell Foster <russell.foster.coding@gmail.com> writes:\n> Going to take a guess at what you mean by:\n>> I do object to using syntax that makes random assumptions about what a\n>> user name can or can't be.\n\n> Are you referring to the \"+\" syntax in the ident file? I chose that because\n> somewhere else (hba?) using the same syntax for groups. The quotes are just\n> there to make the group name case sensitive.\n\nIf this were a Postgres group name, I'd say yeah we already broke\nthe case of spelling group names with a leading \"+\". (Which I'm\nnot very happy about either, but the precedent is there.)\n\nHowever, this isn't.
Unless I'm totally confused, the field you're\ntalking about is normally an external, operating-system-defined name.\nI do not think it's wise to make any assumptions about what those\ncan be.\n\nBy the same token, the idea of using a \"pg_\" prefix as discussed\nin the other thread will not work here :-(.\n\nAfter a few minutes' thought, the best I can can come up with is\nto extend the syntax of identmap files with an \"options\" field,\nso that your example becomes something like\n\n# MAPNAME SYSTEM-USERNAME PG-USERNAME OPTIONS\n\"Users\" \"User group\" postgres windows-group\n\nI'm envisioning OPTIONS as allowing a comma- or space-separated\nlist of keywords, which would give room to grow for other special\nfeatures we might want later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Oct 2020 15:32:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Using Windows groups for SSPI authentication" }, { "msg_contents": "Russell Foster <russell.foster.coding@gmail.com> writes:\n> I understand your concerns overall, and the solution you propose seems\n> reasonable. But are we just using \"windows-group\" because the code is not\n> there today to check for a user in another OS group?\n\nIt's not clear to me whether Windows groups have exact equivalents in\nother OSes. If we think the concept is generic, I'd be okay with\nspelling the keyword system-group or the like. The patch you\nproposed looked pretty Windows-specific though. Somebody with more\nSSPI knowledge than me would have to opine on whether \"sspi-group\"\nis a reasonable name.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Oct 2020 16:32:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Using Windows groups for SSPI authentication" }, { "msg_contents": "Right after I sent that I realized that sspi-group was a bad idea, not sure\nif that's even a thing.
Tried to cancel as it was still in moderation, but\nit made it through anyways! You are right, it is very windows specific. I\ncan make it windows-group as you said, and resubmit.\n\nOn Tue, Oct 13, 2020 at 4:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Russell Foster <russell.foster.coding@gmail.com> writes:\n> > I understand your concerns overall, and the solution you propose seems\n> > reasonable. But are we just using \"windows-group\" because the code is not\n> > there today to check for a user in another OS group?\n>\n> It's not clear to me whether Windows groups have exact equivalents in\n> other OSes. If we think the concept is generic, I'd be okay with\n> spelling the keyword system-group or the like. The patch you\n> proposed looked pretty Windows-specific though. Somebody with more\n> SSPI knowledge than me would have to opine on whether \"sspi-group\"\n> is a reasonable name.\n>\n> regards, tom lane\n>\n", "msg_date": "Tue, 13 Oct 2020 17:08:55 -0400", "msg_from": "Russell Foster <russell.foster.coding@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Patch] Using Windows groups for SSPI authentication" },
{ "msg_contents": "Greetings,\n\n* Russell Foster (russell.foster.coding@gmail.com) wrote:\n> Right after I sent that I realized that sspi-group was a bad idea, not sure\n> if that's even a thing. Tried to cancel as it was still in moderation, but\n> it made it through anyways! You are right, it is very windows specific. I\n> can make it windows-group as you said, and resubmit.\n\nPlease don't top-post on these lists..\n\n> On Tue, Oct 13, 2020 at 4:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Russell Foster <russell.foster.coding@gmail.com> writes:\n> > > I understand your concerns overall, and the solution you propose seems\n> > > reasonable. But are we just using \"windows-group\" because the code is not\n> > > there today to check for a user in another OS group?\n> >\n> > It's not clear to me whether Windows groups have exact equivalents in\n> > other OSes. If we think the concept is generic, I'd be okay with\n> > spelling the keyword system-group or the like. The patch you\n> > proposed looked pretty Windows-specific though.
Somebody with more\n> > SSPI knowledge than me would have to opine on whether \"sspi-group\"\n> > is a reasonable name.\n\nWhile not exactly the same, of course, they are more-or-less equivilant\nto Unix groups (it's even possible using NSS to get Unix groups to be\nbacked by Windows groups) and so calling it 'system-group' does seem\nlike it'd make sense, rather than calling it \"Windows groups\" or\nsimilar.\n\nOne unfortunate thing regarding this is that, unless things have\nchanged, this won't end up working with GSS (unless we add the unix\ngroup support and that's then backed by AD as I described above) since\nthe ability to check group membership using SSPI is an extension to the\nKerberos protocol, which never included group membership information in\nit, and therefore while this would work for Windows clients connecting\nto Windows servers, it won't work for Windows clients connecting to Unix\nservers with GSSAPI authentication.\n\nThe direction I had been thinking of addressing that was to add an\noption to pg_hba.conf's 'gss' auth method which would allow reaching out\nto check group membership against an AD server. In a similar vein, we\ncould add an option to the 'sspi' auth method to check the group\nmembership, rather than having this done in pg_ident.conf, which is\nreally intended to allow mapping between system usernames and PG\nusernames which are different, not really for controlling authentication\nbased on group membership when the username is the same.\n\nRussell, thoughts on that..?\n\nThanks,\n\nStephen", "msg_date": "Thu, 15 Oct 2020 11:31:16 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [Patch] Using Windows groups for SSPI authentication" }, { "msg_contents": "On Thu, Oct 15, 2020 at 11:31 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Please don't top-post on these lists..\nDidn't even know what that was, had to look it up. Hopefully it is\nresolved. 
Gmail does too many things for you!\n\n> While not exactly the same, of course, they are more-or-less equivilant\n> to Unix groups (it's even possible using NSS to get Unix groups to be\n> backed by Windows groups) and so calling it 'system-group' does seem\n> like it'd make sense, rather than calling it \"Windows groups\" or\n> similar.\n>\n> One unfortunate thing regarding this is that, unless things have\n> changed, this won't end up working with GSS (unless we add the unix\n> group support and that's then backed by AD as I described above) since\n> the ability to check group membership using SSPI is an extension to the\n> Kerberos protocol, which never included group membership information in\n> it, and therefore while this would work for Windows clients connecting\n> to Windows servers, it won't work for Windows clients connecting to Unix\n> servers with GSSAPI authentication.\n>\n> The direction I had been thinking of addressing that was to add an\n> option to pg_hba.conf's 'gss' auth method which would allow reaching out\n> to check group membership against an AD server. In a similar vein, we\n> could add an option to the 'sspi' auth method to check the group\n> membership, rather than having this done in pg_ident.conf, which is\n> really intended to allow mapping between system usernames and PG\n> usernames which are different, not really for controlling authentication\n> based on group membership when the username is the same.\n>\n> Russell, thoughts on that..?\n\nSo are you saying something like this where its an option to the sspi method?\n\n# TYPE DATABASE USER ADDRESS MASK METHOD\nhostssl all some_user 0.0.0.0 0.0.0.0 sspi group=\"Windows Group\"\n\nI guess the code wouldn't change much, unless you mean for it to do a\nmore generic ldap query. Seems OK to me, but I guess the hba could\nbecome more verbose. 
The map is nice as it allows your HBA to be very\nprecise in how your connections and database users are represented,\nand the ident map file is there to group those external identities. I\ncan't say I have a strong opinion either way though.\n\n\n", "msg_date": "Thu, 15 Oct 2020 15:56:51 -0400", "msg_from": "Russell Foster <russell.foster.coding@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Patch] Using Windows groups for SSPI authentication" }, { "msg_contents": "Greetings,\n\n* Russell Foster (russell.foster.coding@gmail.com) wrote:\n> On Thu, Oct 15, 2020 at 11:31 AM Stephen Frost <sfrost@snowman.net> wrote:\n> \n> > Please don't top-post on these lists..\n> Didn't even know what that was, had to look it up. Hopefully it is\n> resolved. Gmail does too many things for you!\n\nIndeed! This looks much better, thanks!\n\n> > While not exactly the same, of course, they are more-or-less equivilant\n> > to Unix groups (it's even possible using NSS to get Unix groups to be\n> > backed by Windows groups) and so calling it 'system-group' does seem\n> > like it'd make sense, rather than calling it \"Windows groups\" or\n> > similar.\n> >\n> > One unfortunate thing regarding this is that, unless things have\n> > changed, this won't end up working with GSS (unless we add the unix\n> > group support and that's then backed by AD as I described above) since\n> > the ability to check group membership using SSPI is an extension to the\n> > Kerberos protocol, which never included group membership information in\n> > it, and therefore while this would work for Windows clients connecting\n> > to Windows servers, it won't work for Windows clients connecting to Unix\n> > servers with GSSAPI authentication.\n> >\n> > The direction I had been thinking of addressing that was to add an\n> > option to pg_hba.conf's 'gss' auth method which would allow reaching out\n> > to check group membership against an AD server. 
In a similar vein, we\n> > could add an option to the 'sspi' auth method to check the group\n> > membership, rather than having this done in pg_ident.conf, which is\n> > really intended to allow mapping between system usernames and PG\n> > usernames which are different, not really for controlling authentication\n> > based on group membership when the username is the same.\n> >\n> > Russell, thoughts on that..?\n> \n> So are you saying something like this where its an option to the sspi method?\n> \n> # TYPE DATABASE USER ADDRESS MASK METHOD\n> hostssl all some_user 0.0.0.0 0.0.0.0 sspi group=\"Windows Group\"\n\nYes, something along those lines.\n\n> I guess the code wouldn't change much, unless you mean for it to do a\n> more generic ldap query. Seems OK to me, but I guess the hba could\n> become more verbose. The map is nice as it allows your HBA to be very\n> precise in how your connections and database users are represented,\n> and the ident map file is there to group those external identities. I\n> can't say I have a strong opinion either way though.\n\nNo, no, not suggesting you need to rewrite it as a generic LDAP query-\nthat would be a patch that I'd like to see but is a different feature\nfrom this and wouldn't even be applicable to SSPI (it'd be for GSS..\nand perhaps some other methods, but with SSPI we should use the SSPI\nmethods- I can't think of a reason to go to an LDAP query when the group\nmembership is directly available from SSPI, can you?).\n\nThe pg_ident is specifically intended to be a mapping from external user\nidentities to PG users. 
Reading back through the thread, in the end it\nseems like it really depends on what we're trying to solve here and\nperhaps it's my fault for misunderstanding your original goal, but maybe\nwe get two features out of this in the end, and for not much more code.\n\nBased on your example pg_ident.conf (which I took as more of a \"this is\nwhat using this would look like\" and not as literally as I think you\nmeant it, now that I read back through it), there's a use-case of:\n\n\"Allow anyone in this group to log in as this *specific* PG user\"\n\nThe other use-case is:\n\n\"Allow users in this group to be able to log into this PG server\"\n\n(The latter use-case potentially being further extended to\n\"automatically create the PG user if it doesn't already exist\",\nsomething which has been discussed elsewhere previously and is what\nfolks coming from other database systems may be used to).\n\nThe former would be more appropriate in pg_ident.conf, the latter would\nfit into pg_hba.conf, both are useful.\n\nTo the prior discussion around pg_ident.conf, I do think having the\nkeyword being 'system-group' would fit well, but something we need to\nthink about is that multiple auth methods work with pg_ident and we need\nto either implement the functionality for each of them, or make it clear\nthat it doesn't work- in particular, if you have 'system-group' as an\noption in pg_ident.conf and you're using 'peer' auth on a Unix system,\nwe either need to make it work (which should be pretty easy..?), or\nrefuse to accept that map for that auth-method if it's not going to\nwork.\n\nAs it relates to pg_hba.conf- if you don't think it'd be much additional\ncode and you'd be up for it, I do think it'd be awesome to address that\nuse-case as well, but I do agree it's a separate feature and probably\ncommitted independently.\n\nOr, if I've managed to misunderstand again, please let me know. 
:)\n\nThanks!\n\nStephen", "msg_date": "Fri, 16 Oct 2020 12:00:29 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [Patch] Using Windows groups for SSPI authentication" } ]
[ { "msg_contents": "Hi,\n\nI found in guc-file.l we can omit the else branch in AbsoluteConfigLocation().\n\ndiff --git a/src/backend/utils/misc/guc-file.l b/src/backend/utils/misc/guc-file.l\nindex c98e220295..9d4b3d7236 100644\n--- a/src/backend/utils/misc/guc-file.l\n+++ b/src/backend/utils/misc/guc-file.l\n@@ -522,23 +522,21 @@ AbsoluteConfigLocation(const char *location, const char *calling_file)\n\n if (is_absolute_path(location))\n return pstrdup(location);\n+\n+ if (calling_file != NULL)\n+ {\n+ strlcpy(abs_path, calling_file, sizeof(abs_path));\n+ get_parent_directory(abs_path);\n+ join_path_components(abs_path, abs_path, location);\n+ canonicalize_path(abs_path);\n+ }\n else\n {\n- if (calling_file != NULL)\n- {\n- strlcpy(abs_path, calling_file, sizeof(abs_path));\n- get_parent_directory(abs_path);\n- join_path_components(abs_path, abs_path, location);\n- canonicalize_path(abs_path);\n- }\n- else\n- {\n- AssertState(DataDir);\n- join_path_components(abs_path, DataDir, location);\n- canonicalize_path(abs_path);\n- }\n- return pstrdup(abs_path);\n+ AssertState(DataDir);\n+ join_path_components(abs_path, DataDir, location);\n+ canonicalize_path(abs_path);\n }\n+ return pstrdup(abs_path);\n }\n\n\n--\nBest regards\nJapin Li\n\n\n\n\n\n", "msg_date": "Tue, 13 Oct 2020 13:30:47 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Remove unnecessary else branch" }, { "msg_contents": "On 13/10/2020 16:30, Li Japin wrote:\n> Hi,\n> \n> I found in guc-file.l we can omit the else branch in AbsoluteConfigLocation().\n\nIt will compile the same, so it's just a matter of code readability or \ntaste which style is better here. 
I think we should leave it alone, it's \nfine as it is.\n\n- Heikki\n\n\n", "msg_date": "Tue, 13 Oct 2020 16:36:54 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary else branch" },
{ "msg_contents": "On Tue, Oct 13, 2020 at 6:37 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 13/10/2020 16:30, Li Japin wrote:\n> > Hi,\n> >\n> > I found in guc-file.l we can omit the else branch in\n> AbsoluteConfigLocation().\n>\n> It will compile the same, so it's just a matter of code readability or\n> taste which style is better here. I think we should leave it alone, it's\n> fine as it is.\n>\n> - Heikki\n>\n>\n>\nI agree with Heikki from the code execution point of view.\n\n\"canonicalize_path(abs_path);\" statement is also condition independent and\ncan be pulled out of both if and else blocks. Removing\nunnecessary statements makes the code more readable, but it is a matter of\nchoice/style.", "msg_date": "Tue, 13 Oct 2020 18:59:21 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary else branch" },
{ "msg_contents": "On Oct 13, 2020, at 9:59 PM, Hamid Akhtar <hamid.akhtar@gmail.com<mailto:hamid.akhtar@gmail.com>> wrote:\n\n\n\nOn Tue, Oct 13, 2020 at 6:37 PM Heikki Linnakangas <hlinnaka@iki.fi<mailto:hlinnaka@iki.fi>> wrote:\nOn 13/10/2020 16:30, Li Japin wrote:\n> Hi,\n>\n> I found in guc-file.l we can omit the else branch in AbsoluteConfigLocation().\n\nIt will compile the same, so it's just a matter of code readability or\ntaste which style is better here. I think we should leave it alone, it's\nfine as it is.\n\n- Heikki\n\n\n\nI agree with Heikki from the code execution point of view.\n\nIn code execution point of view they are same, however, the code is for user, i think the readability is also important.\n\n\n\"canonicalize_path(abs_path);\" statement is also condition independent and can be pulled out of both if and else blocks.
Removing unnecessary statements makes the code more readable, but it is a matter of choice/style.\n\n+1\n\ndiff --git a/src/backend/utils/misc/guc-file.l b/src/backend/utils/misc/guc-file.l\nindex c98e220295..b3549665ef 100644\n--- a/src/backend/utils/misc/guc-file.l\n+++ b/src/backend/utils/misc/guc-file.l\n@@ -522,23 +522,21 @@ AbsoluteConfigLocation(const char *location, const char *calling_file)\n\n if (is_absolute_path(location))\n return pstrdup(location);\n+\n+ if (calling_file != NULL)\n+ {\n+ strlcpy(abs_path, calling_file, sizeof(abs_path));\n+ get_parent_directory(abs_path);\n+ join_path_components(abs_path, abs_path, location);\n+ }\n else\n {\n- if (calling_file != NULL)\n- {\n- strlcpy(abs_path, calling_file, sizeof(abs_path));\n- get_parent_directory(abs_path);\n- join_path_components(abs_path, abs_path, location);\n- canonicalize_path(abs_path);\n- }\n- else\n- {\n- AssertState(DataDir);\n- join_path_components(abs_path, DataDir, location);\n- canonicalize_path(abs_path);\n- }\n- return pstrdup(abs_path);\n+ AssertState(DataDir);\n+ join_path_components(abs_path, DataDir, location);\n }\n+\n+ canonicalize_path(abs_path);\n+ return pstrdup(abs_path);\n }\n\n--\nBest regards\nJapin Li", "msg_date": "Tue, 13 Oct 2020 15:03:01 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", 
"msg_from_op": true, "msg_subject": "Re: Remove unnecessary else branch" }, { "msg_contents": "On Oct 13, 2020, at 9:59 PM, Hamid Akhtar <hamid.akhtar@gmail.com<mailto:hamid.akhtar@gmail.com>> wrote:\n\n\n\nOn Tue, Oct 13, 2020 at 6:37 PM Heikki Linnakangas <hlinnaka@iki.fi<mailto:hlinnaka@iki.fi>> wrote:\nOn 13/10/2020 16:30, Li Japin wrote:\n> Hi,\n>\n> I found in guc-file.l we can omit the else branch in AbsoluteConfigLocation().\n\nIt will compile the same, so it's just a matter of code readability or\ntaste which style is better here. I think we should leave it alone, it's\nfine as it is.\n\n- Heikki\n\n\n\nI agree with Heikki from the code execution point of view.\n\nIn code execution point of view they are same, however, the code is for user, i think the readability is also important.\n\n\n\"canonicalize_path(abs_path);\" statement is also condition independent and can be pulled out of both if and else blocks. Removing unnecessary statements makes the code more readable, but it is a matter of choice/style.\n\n+1\n\ndiff --git a/src/backend/utils/misc/guc-file.l b/src/backend/utils/misc/guc-file.l\nindex c98e220295..b3549665ef 100644\n--- a/src/backend/utils/misc/guc-file.l\n+++ b/src/backend/utils/misc/guc-file.l\n@@ -522,23 +522,21 @@ AbsoluteConfigLocation(const char *location, const char *calling_file)\n\n if (is_absolute_path(location))\n return pstrdup(location);\n+\n+ if (calling_file != NULL)\n+ {\n+ strlcpy(abs_path, calling_file, sizeof(abs_path));\n+ get_parent_directory(abs_path);\n+ join_path_components(abs_path, abs_path, location);\n+ }\n else\n {\n- if (calling_file != NULL)\n- {\n- strlcpy(abs_path, calling_file, sizeof(abs_path));\n- get_parent_directory(abs_path);\n- join_path_components(abs_path, abs_path, location);\n- canonicalize_path(abs_path);\n- }\n- else\n- {\n- AssertState(DataDir);\n- join_path_components(abs_path, DataDir, location);\n- canonicalize_path(abs_path);\n- }\n- return pstrdup(abs_path);\n+ AssertState(DataDir);\n+ join_path_components(abs_path, DataDir, location);\n }\n+\n+ canonicalize_path(abs_path);\n+ return pstrdup(abs_path);\n }\n\n--\nBest regards\nJapin Li", "msg_date": "Tue, 13 Oct 2020 15:03:13 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove unnecessary else branch" }, { "msg_contents": "Li Japin <japinli@hotmail.com> writes:\n> I agree with Heikki from the code execution point of view.\n\n> In code execution point of view they are same, however, the code is for user, i think the readability is also important.\n\nThere is another consideration here, which is avoiding creating\nback-patching hazards from gratuitous cross-branch code differences.\n\nIf you need to rewrite a chunk of logic anyway, then fixing\nsmall cosmetic issues in it is fine. Otherwise I think \"leave\nwell enough alone\" is a good guiding principle.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Oct 2020 11:25:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary else branch" } ]
[ { "msg_contents": "Hi\n\nOne customer reports issue related to pg_upgrade.\n\nI found a thread\nhttps://www.postgresql-archive.org/Upgrade-and-re-synchronization-with-logical-replication-pglogical-and-PG-10-td6001990.html\n\nBut I didn't find documentation of this limitation?\n\nRegards\n\nPavel", "msg_date": "Tue, 13 Oct 2020 18:20:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "lost replication slots after pg_upgrade" }, { "msg_contents": "On Tue, Oct 13, 2020 at 06:20:41PM +0200, Pavel Stehule wrote:\n> Hi\n> \n> One customer reports issue related to pg_upgrade.\n> \n> I found a thread https://www.postgresql-archive.org/\n> Upgrade-and-re-synchronization-with-logical-replication-pglogical-and-PG-10-td6001990.html\n> \n> But I didn't find documentation of this limitation?\n\nSo, what is the question? Peter Eisentraut is right that WAL is not\npreserved, so replication slots are not preserved. We do have\npg_upgrade instructions for upgrading binary replication, but I assume\npeople recreate the slots.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 13 Oct 2020 12:33:49 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: lost replication slots after pg_upgrade" }, { "msg_contents": "út 13. 10. 
2020 v 18:33 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Tue, Oct 13, 2020 at 06:20:41PM +0200, Pavel Stehule wrote:\n> > Hi\n> >\n> > One customer reports issue related to pg_upgrade.\n> >\n> > I found a thread https://www.postgresql-archive.org/\n> >\n> Upgrade-and-re-synchronization-with-logical-replication-pglogical-and-PG-10-td6001990.html\n> >\n> > But I didn't find documentation of this limitation?\n>\n> So, what is the question? Peter Eisentraut is right that WAL is not\n> preserved, so replication slots are not preserved. We do have\n> pg_upgrade instructions for upgrading binary replication, but I assume\n> people recreate the slots.\n>\n\nI cannot find related documentation.\n\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EnterpriseDB https://enterprisedb.com\n>\n> The usefulness of a cup is in its emptiness, Bruce Lee\n>\n>", "msg_date": "Tue, 13 Oct 2020 18:37:14 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: lost replication slots after pg_upgrade" }, { "msg_contents": "On Tue, Oct 13, 2020 at 06:37:14PM +0200, Pavel Stehule wrote:\n> \n> \n> út 13. 10. 2020 v 18:33 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n> \n>     On Tue, Oct 13, 2020 at 06:20:41PM +0200, Pavel Stehule wrote:\n>     > Hi\n>     >\n>     > One customer reports issue related to pg_upgrade.\n>     >\n>     > I found a thread https://www.postgresql-archive.org/\n>     >\n>     Upgrade-and-re-synchronization-with-logical-replication-pglogical-and-PG-10-td6001990.html\n>     >\n>     > But I didn't find documentation of this limitation?\n> \n>     So, what is the question?  Peter Eisentraut is right that WAL is not\n>     preserved, so replication slots are not preserved.  We do have\n>     pg_upgrade instructions for upgrading binary replication, but I assume\n>     people recreate the slots.\n> \n> \n> I cannot find related documentation.\n\nYou mean related documentation of how to manage changing replication\nslots?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 13 Oct 2020 12:57:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: lost replication slots after pg_upgrade" }, { "msg_contents": "út 13. 10. 2020 v 18:57 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Tue, Oct 13, 2020 at 06:37:14PM +0200, Pavel Stehule wrote:\n> >\n> >\n> > út 13. 10. 
2020 v 18:33 odesílatel Bruce Momjian <bruce@momjian.us>\n> napsal:\n> >\n> >     On Tue, Oct 13, 2020 at 06:20:41PM +0200, Pavel Stehule wrote:\n> >     > Hi\n> >     >\n> >     > One customer reports issue related to pg_upgrade.\n> >     >\n> >     > I found a thread https://www.postgresql-archive.org/\n> >     >\n> >\n> Upgrade-and-re-synchronization-with-logical-replication-pglogical-and-PG-10-td6001990.html\n> >     >\n> >     > But I didn't find documentation of this limitation?\n> >\n> >     So, what is the question?  Peter Eisentraut is right that WAL is not\n> >     preserved, so replication slots are not preserved.  We do have\n> >     pg_upgrade instructions for upgrading binary replication, but I\n> assume\n> >     people recreate the slots.\n> >\n> >\n> > I cannot find related documentation.\n>\n> You mean related documentation of how to manage changing replication\n> slots?\n>\n\nno, I just missi note, so after upgrade by pg_upgrade I have to recreate\nreplication slots. Some like\n\nafter pg_upgrade you should to do:\n\na) run analyze .... (it is a known case)\nb) recreate replication slots - these slots are not removed in the upgrade\nprocess.\n\nRegards\n\nPavel\n\n\n\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EnterpriseDB https://enterprisedb.com\n>\n> The usefulness of a cup is in its emptiness, Bruce Lee\n>\n>", "msg_date": "Tue, 13 Oct 2020 19:23:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: lost replication slots after pg_upgrade" }, { "msg_contents": "On 2020-10-13 19:23, Pavel Stehule wrote:\n> no, I just missi note, so after upgrade by pg_upgrade I have to recreate \n> replication slots. Some like\n> \n> after pg_upgrade you should to do:\n> \n> a) run analyze .... (it is a known case)\n> b) recreate replication slots - these slots are not removed in the \n> upgrade process.\n\nAn argument could be made that pg_upgrade should copy over logical \nreplication slots. The normal scenario would be that you pause your \nlogical subscriptions, run pg_upgrade on the publisher, then un-pause \nthe subscriptions. The subscribers then ought to be able to reconnect \nand continue consuming logical changes. Since the content of the \npublisher database is logically the same before and after the upgrade, \nthis should appear transparent to the subscribers. 
They'll just see \nthat the publisher was offline for a while.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 13 Oct 2020 23:34:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: lost replication slots after pg_upgrade" }, { "msg_contents": "On Tue, Oct 13, 2020 at 11:34:27PM +0200, Peter Eisentraut wrote:\n> On 2020-10-13 19:23, Pavel Stehule wrote:\n> > no, I just missi note, so after upgrade by pg_upgrade I have to recreate\n> > replication slots. Some like\n> > \n> > after pg_upgrade you should to do:\n> > \n> > a) run analyze .... (it is a known case)\n> > b) recreate replication slots - these slots are not removed in the\n> > upgrade process.\n> \n> An argument could be made that pg_upgrade should copy over logical\n> replication slots. The normal scenario would be that you pause your logical\n> subscriptions, run pg_upgrade on the publisher, then un-pause the\n> subscriptions. The subscribers then ought to be able to reconnect and\n> continue consuming logical changes. Since the content of the publisher\n> database is logically the same before and after the upgrade, this should\n> appear transparent to the subscribers. They'll just see that the publisher\n> was offline for a while.\n\nI guess that is possible since pg_upgrade resets the WAL location,\nthough not the WAL contents.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 13 Oct 2020 17:37:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: lost replication slots after pg_upgrade" } ]
[ { "msg_contents": "I grow quite weary of the number of buildfarm failures we see as a\nconsequence of the Linux PPC64 bug discussed in [1]. Although we can\nanticipate that the fix will roll out into new kernel builds before much\nlonger, that will have very little effect on the buildfarm situation,\ngiven that a lot of Mark's PPC64 armada is running hoary \"stable\" kernels.\nIt might be many years before there are no unpatched systems to worry about.\n\nI think it's time to give up and disable the infinite_recurse test on such\nplatforms. It's teaching us nothing and we waste valuable developer time\neyeballing failures to make sure they're just the same old same old.\nTesting the case on not-Linux-PPC64 is enough to verify that our own code\nworks.\n\nWe can use the same technique used in collate.linux.utf8.sql,\nnamely check the output of version() and abandon the test if it matches.\nTo minimize the maintenance pain from needing two expected-files, it seems\nprudent to split infinite_recurse into its own test script, which leads\nto the attached proposed patch.\n\nAny objections?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/20190723162703.GM22387%40telsasoft.com", "msg_date": "Tue, 13 Oct 2020 12:50:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Getting rid of intermittent PPC64 buildfarm failures" } ]
[ { "msg_contents": "Hi there, it's my first email, so I’d like to first thanks everyone working on this, using pgadmin had made a huge difference for me! I’m using it with serverless PostgreSQL databases on AWS, and for at the first one I went through the work of creating an EC2 instance I could ssh tunnel to so I could use PGAdmin. I’m now having to create more databases by hand for now in different accounts and ultimately this should be all automated, and I’d rather find another way, hence the question: what would it take, what would be the simplest way to get pgadmin to work using the https based Data API of AWS RDS <https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html> ? I’m unfortunately not a python developer, focusing on JavaScript client and server-side myself, but I hope I’m not the only one who’s like to do this!\n\nThanks for any feedback/pointers!\n\nBenoit", "msg_date": "Tue, 13 Oct 2020 12:05:43 -0700", "msg_from": "Benoit Marchant <marchant@mac.com>", "msg_from_op": true, "msg_subject": "Using pgadmin with AWS RDS https based Data API" } ]
[ { "msg_contents": "Commit 464824323e has added the support of the streaming of\nin-progress transactions into the built-in logical replication. The\nattached patch adds the statistics about transactions streamed to the\ndecoding output plugin from ReorderBuffer. Users can query the\npg_stat_replication_slots view to check these stats and call\npg_stat_reset_replication_slot to reset the stats of a particular\nslot. Users can pass NULL in pg_stat_reset_replication_slot to reset\nstats of all the slots.\n\nCommit 9868167500 has added the basic infrastructure to capture the\nstats of slot and this commit extends the statistics collector to\ntrack additional information about slots.\n\nThis patch was originally written by Ajin Cherian [1]. I have fixed\nbugs and modified some comments in the code.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAFPTHDZ8RnOovefzB%2BOMoRxLSD404WRLqWBUHe6bWqM5ew1bNA%40mail.gmail.com\n\n--\nWith Regards,\nAmit Kapila", "msg_date": "Wed, 14 Oct 2020 09:10:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Wed, Oct 14, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Commit 464824323e has added the support of the streaming of\n> in-progress transactions into the built-in logical replication. The\n> attached patch adds the statistics about transactions streamed to the\n> decoding output plugin from ReorderBuffer. Users can query the\n> pg_stat_replication_slots view to check these stats and call\n> pg_stat_reset_replication_slot to reset the stats of a particular\n> slot. 
Users can pass NULL in pg_stat_reset_replication_slot to reset\n> stats of all the slots.\n>\n> Commit 9868167500 has added the basic infrastructure to capture the\n> stats of slot and this commit extends the statistics collector to\n> track additional information about slots.\n>\n> This patch was originally written by Ajin Cherian [1]. I have fixed\n> bugs and modified some comments in the code.\n>\n> Thoughts?\n>\n> [1] - https://www.postgresql.org/message-id/CAFPTHDZ8RnOovefzB%2BOMoRxLSD404WRLqWBUHe6bWqM5ew1bNA%40mail.gmail.com\n\nI've applied the patch. It applies cleanly. I've reviewed the patch\nand have no comments to report.\nI have also run some tests to get streaming stats as well as reset the\nstats counter, everything seems to be working as expected.\nI am fine with the changes.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Mon, 19 Oct 2020 19:20:46 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Mon, Oct 19, 2020 at 1:52 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Wed, Oct 14, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Commit 464824323e has added the support of the streaming of\n> > in-progress transactions into the built-in logical replication. The\n> > attached patch adds the statistics about transactions streamed to the\n> > decoding output plugin from ReorderBuffer. Users can query the\n> > pg_stat_replication_slots view to check these stats and call\n> > pg_stat_reset_replication_slot to reset the stats of a particular\n> > slot. 
Users can pass NULL in pg_stat_reset_replication_slot to reset\n> > stats of all the slots.\n> >\n> > Commit 9868167500 has added the basic infrastructure to capture the\n> > stats of slot and this commit extends the statistics collector to\n> > track additional information about slots.\n> >\n> > This patch was originally written by Ajin Cherian [1]. I have fixed\n> > bugs and modified some comments in the code.\n> >\n> > Thoughts?\n> >\n> > [1] - https://www.postgresql.org/message-id/CAFPTHDZ8RnOovefzB%2BOMoRxLSD404WRLqWBUHe6bWqM5ew1bNA%40mail.gmail.com\n>\n> I've applied the patch. It applies cleanly. I've reviewed the patch\n> and have no comments to report.\n> I have also run some tests to get streaming stats as well as reset the\n> stats counter, everything seems to be working as expected.\n> I am fine with the changes.\n>\n\nThanks. One thing I have considered while updating this patch was to\nwrite a test case similar to what we have for spilled stats in\ntest_decoding/sql/stats.sql but I decided not to do it as that doesn't\nseem to add much value for the streaming case because we already have\nsome tests in test_decoding/sql/stream.sql which indicates that the\nstreaming is happening. If we could have a way to get the exact\nstreaming stats then it would have been better but while writing tests\nfor spilled stats we found that it is not possible because some\nbackground transactions (like autovacuum) might send the stats earlier\nmaking the actual number inconsistent. 
What do you think?\n\nSawada-San, do you have any thoughts on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 20 Oct 2020 10:59:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Tue, 20 Oct 2020 at 14:29, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 19, 2020 at 1:52 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > On Wed, Oct 14, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Commit 464824323e has added the support of the streaming of\n> > > in-progress transactions into the built-in logical replication. The\n> > > attached patch adds the statistics about transactions streamed to the\n> > > decoding output plugin from ReorderBuffer. Users can query the\n> > > pg_stat_replication_slots view to check these stats and call\n> > > pg_stat_reset_replication_slot to reset the stats of a particular\n> > > slot. Users can pass NULL in pg_stat_reset_replication_slot to reset\n> > > stats of all the slots.\n> > >\n> > > Commit 9868167500 has added the basic infrastructure to capture the\n> > > stats of slot and this commit extends the statistics collector to\n> > > track additional information about slots.\n> > >\n> > > This patch was originally written by Ajin Cherian [1]. I have fixed\n> > > bugs and modified some comments in the code.\n> > >\n> > > Thoughts?\n> > >\n> > > [1] - https://www.postgresql.org/message-id/CAFPTHDZ8RnOovefzB%2BOMoRxLSD404WRLqWBUHe6bWqM5ew1bNA%40mail.gmail.com\n> >\n> > I've applied the patch. It applies cleanly. I've reviewed the patch\n> > and have no comments to report.\n> > I have also run some tests to get streaming stats as well as reset the\n> > stats counter, everything seems to be working as expected.\n> > I am fine with the changes.\n> >\n>\n> Thanks. 
One thing I have considered while updating this patch was to\n> write a test case similar to what we have for spilled stats in\n> test_decoding/sql/stats.sql but I decided not to do it as that doesn't\n> seem to add much value for the streaming case because we already have\n> some tests in test_decoding/sql/stream.sql which indicates that the\n> streaming is happening. If we could have a way to get the exact\n> streaming stats then it would have been better but while writing tests\n> for spilled stats we found that it is not possible because some\n> background transactions (like autovacuum) might send the stats earlier\n> making the actual number inconsistent. What do you think?\n>\n> Sawada-San, do you have any thoughts on this matter?\n\nI basically agree with that. Reading the patch, I have a question that\nmight be relevant to this matter:\n\nThe patch has the following code:\n\n+ /*\n+ * Remember this information to be used later to update stats. We can't\n+ * update the stats here as an error while processing the changes would\n+ * lead to the accumulation of stats even though we haven't streamed all\n+ * the changes.\n+ */\n+ txn_is_streamed = rbtxn_is_streamed(txn);\n+ stream_bytes = txn->total_size;\n\nThe commend seems to mention only about when an error happened while\nprocessing the changes but I wonder if the same is true for the\naborted transaction. That is, if we catch an error due to concurrent\ntransaction abort while processing the changes, we stop to stream the\nchanges. But the patch accumulates the stats even in this case. 
If we\ndon’t want to accumulate the stats of the abort transaction and it’s\neasily reproducible, it might be better to add a test checking if we\ndon’t accumulate in that case.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 21 Oct 2020 11:44:47 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Wed, Oct 21, 2020 at 8:15 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 20 Oct 2020 at 14:29, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Thanks. One thing I have considered while updating this patch was to\n> > write a test case similar to what we have for spilled stats in\n> > test_decoding/sql/stats.sql but I decided not to do it as that doesn't\n> > seem to add much value for the streaming case because we already have\n> > some tests in test_decoding/sql/stream.sql which indicates that the\n> > streaming is happening. If we could have a way to get the exact\n> > streaming stats then it would have been better but while writing tests\n> > for spilled stats we found that it is not possible because some\n> > background transactions (like autovacuum) might send the stats earlier\n> > making the actual number inconsistent. What do you think?\n> >\n> > Sawada-San, do you have any thoughts on this matter?\n>\n> I basically agree with that. Reading the patch, I have a question that\n> might be relevant to this matter:\n>\n> The patch has the following code:\n>\n> + /*\n> + * Remember this information to be used later to update stats. 
We can't\n> + * update the stats here as an error while processing the changes would\n> + * lead to the accumulation of stats even though we haven't streamed all\n> + * the changes.\n> + */\n> + txn_is_streamed = rbtxn_is_streamed(txn);\n> + stream_bytes = txn->total_size;\n>\n> The comment seems to mention only the case where an error happens while\n> processing the changes, but I wonder if the same is true for an\n> aborted transaction. That is, if we catch an error due to a concurrent\n> transaction abort while processing the changes, we stop streaming the\n> changes. But the patch accumulates the stats even in this case.\n>\n\nIt would only add for the current stream and I don't think that is\nwrong because we would have sent some data (at least the start\nmessage) for which we send the stream_stop message later while\ndecoding the Abort message. I had thought that, to avoid this, we could\nupdate the stats in ReorderBufferProcessTXN at the end when we know\nstreaming is complete, but again that would miss the counter update for\nthe data we have sent before an error has occurred, and also updating\nthe streaming counters in ReorderBufferStreamTXN seems more logical to me.\n\n> If we\n> don’t want to accumulate the stats of the abort transaction and it’s\n> easily reproducible, it might be better to add a test checking if we\n> don’t accumulate in that case.\n>\n\nBut as explained above, I think we count it as we would have sent at\nleast one message (could be more) before we encounter this error.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Oct 2020 09:47:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Tue, Oct 20, 2020 at 4:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Thanks. 
One thing I have considered while updating this patch was to\n> write a test case similar to what we have for spilled stats in\n> test_decoding/sql/stats.sql but I decided not to do it as that doesn't\n> seem to add much value for the streaming case because we already have\n> some tests in test_decoding/sql/stream.sql which indicates that the\n> streaming is happening. If we could have a way to get the exact\n> streaming stats then it would have been better but while writing tests\n> for spilled stats we found that it is not possible because some\n> background transactions (like autovacuum) might send the stats earlier\n> making the actual number inconsistent. What do you think?\n>\n\nI agree. If the stat numbers can't be guaranteed to be consistent it's\nnot worth writing specific tests for this.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Thu, 22 Oct 2020 17:02:06 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Wed, Oct 14, 2020 at 9:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Commit 464824323e has added the support of the streaming of\n> in-progress transactions into the built-in logical replication. 
The\n> attached patch adds the statistics about transactions streamed to the\n> decoding output plugin from ReorderBuffer.\n\nI have reviewed the attached patch, I have one comment\n\n+ int64 streamTxns; /* number of transactions streamed to the decoding\noutput plugin */\n+ int64 streamCount; /* streaming invocation counter */\n+ int64 streamBytes; /* amount of data streamed to subscriber */\n\nI think instead of saying \"amount of data streamed to subscriber\" it\nshould be \" amount of data streamed to the decoding output plugin\"\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Oct 2020 11:51:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Thu, Oct 22, 2020 at 11:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Oct 14, 2020 at 9:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Commit 464824323e has added the support of the streaming of\n> > in-progress transactions into the built-in logical replication. The\n> > attached patch adds the statistics about transactions streamed to the\n> > decoding output plugin from ReorderBuffer.\n>\n> I have reviewed the attached patch, I have one comment\n>\n> + int64 streamTxns; /* number of transactions streamed to the decoding\n> output plugin */\n> + int64 streamCount; /* streaming invocation counter */\n> + int64 streamBytes; /* amount of data streamed to subscriber */\n>\n> I think instead of saying \"amount of data streamed to subscriber\" it\n> should be \" amount of data streamed to the decoding output plugin\"\n>\n\nThanks, I think a similar change is required in docs as well. One more\nthing I was considering whether to change docs to explain stream_count\nand stream_txns somewhat more clearly based on what I have posted for\nspilled_count and spilled_txns in the other thread [1]? 
Do you think\nthat patch is an improvement over what we have now? If yes, we can\nadapt the similar changes here as well, otherwise, we can leave it as\nit is.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LdPQucvp9St2D6NhO9aQ2KKr3U0yAbKDox2UC86Q%2B_zg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Oct 2020 14:09:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Thu, Oct 22, 2020 at 2:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 22, 2020 at 11:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Oct 14, 2020 at 9:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Commit 464824323e has added the support of the streaming of\n> > > in-progress transactions into the built-in logical replication. The\n> > > attached patch adds the statistics about transactions streamed to the\n> > > decoding output plugin from ReorderBuffer.\n> >\n> > I have reviewed the attached patch, I have one comment\n> >\n> > + int64 streamTxns; /* number of transactions streamed to the decoding\n> > output plugin */\n> > + int64 streamCount; /* streaming invocation counter */\n> > + int64 streamBytes; /* amount of data streamed to subscriber */\n> >\n> > I think instead of saying \"amount of data streamed to subscriber\" it\n> > should be \" amount of data streamed to the decoding output plugin\"\n> >\n>\n> Thanks, I think a similar change is required in docs as well.\n>\n\nI have fixed the above comment and rebased the patch. I have changed\nthe docs a bit to add more explanation about the counters. Let me know\nif you have any more comments. 
Thanks Dilip and Sawada-San for\nreviewing this patch.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 23 Oct 2020 10:24:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Fri, Oct 23, 2020 at 10:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 22, 2020 at 2:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> I have fixed the above comment and rebased the patch. I have changed\n> the docs a bit to add more explanation about the counters. Let me know\n> if you have any more comments. Thanks Dilip and Sawada-San for\n> reviewing this patch.\n>\n\nAttached is an updated patch with minor changes in docs and cosmetic\nchanges. I am planning to push this patch tomorrow unless there are\nany more comments/suggestions.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 28 Oct 2020 08:54:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Wed, Oct 28, 2020 at 08:54:53AM +0530, Amit Kapila wrote:\n>On Fri, Oct 23, 2020 at 10:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Thu, Oct 22, 2020 at 2:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>>\n>> I have fixed the above comment and rebased the patch. I have changed\n>> the docs a bit to add more explanation about the counters. Let me know\n>> if you have any more comments. Thanks Dilip and Sawada-San for\n>> reviewing this patch.\n>>\n>\n>Attached is an updated patch with minor changes in docs and cosmetic\n>changes. 
I am planning to push this patch tomorrow unless there are\n>any more comments/suggestions.\n>\n\n+1 and thanks for working on this\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 29 Oct 2020 00:46:05 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" }, { "msg_contents": "On Thu, Oct 29, 2020 at 5:16 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Oct 28, 2020 at 08:54:53AM +0530, Amit Kapila wrote:\n> >On Fri, Oct 23, 2020 at 10:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Thu, Oct 22, 2020 at 2:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> >\n> >>\n> >> I have fixed the above comment and rebased the patch. I have changed\n> >> the docs a bit to add more explanation about the counters. Let me know\n> >> if you have any more comments. Thanks Dilip and Sawada-San for\n> >> reviewing this patch.\n> >>\n> >\n> >Attached is an updated patch with minor changes in docs and cosmetic\n> >changes. I am planning to push this patch tomorrow unless there are\n> >any more comments/suggestions.\n> >\n>\n> +1 and thanks for working on this\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Oct 2020 15:06:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Track statistics for streaming of in-progress transactions" } ]
[ { "msg_contents": "Hi all,\n\nSince 510b8cbf, we have in-core equivalents for htonl(), ntohl() & co\nthrough pg_bswap.h that allow compiling with a built-in function if\nthe compiler used has one.\n\nAll the existing calls in the code tree have been changed with\n0ba99c84 for performance reasons (except the libpq examples), however\nthe FE/BE GSSAPI encryption code did not get this call in\nb0b39f7. I think that we had better switch to the built-in functions\nas well for this case. The argument of consistency matters here, but\nalso perhaps the argument of performance, where it may not be easy to\nmeasure a difference.\n\nAttached is a patch to do the switch. None of the files changed\ninclude arpa/inet.h. Any thoughts?\n\nThanks,\n--\nMichael", "msg_date": "Wed, 14 Oct 2020 14:53:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Some remaining htonl() and ntohl() calls in the code" }, { "msg_contents": "Hi,\n\nOn 2020-10-14 14:53:03 +0900, Michael Paquier wrote:\n> Since 510b8cbf, we have in-core equivalents for htonl(), ntohl() & co\n> through pg_bswap.h that allow compiling with a built-in function if\n> the compiler used has one.\n> \n> All the existing calls in the code tree have been changed with\n> 0ba99c84 for performance reasons (except the libpq examples), however\n> the FE/BE GSSAPI encryption code did not get this call in\n> b0b39f7. I think that we had better switch to the built-in functions\n> as well for this case. 
The argument of consistency matters here, but\n> also perhaps the argument of performance, where it may not be easy to\n> measure a difference.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 14 Oct 2020 13:41:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Some remaining htonl() and ntohl() calls in the code" }, { "msg_contents": "On Wed, Oct 14, 2020 at 01:41:23PM -0700, Andres Freund wrote:\n> +1\n\nThanks. I have applied that as of 86dba33, and did not see a need for\na back-patch.\n--\nMichael", "msg_date": "Fri, 16 Oct 2020 09:01:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Some remaining htonl() and ntohl() calls in the code" } ]
[ { "msg_contents": "As for the initscan, it looks to me that the code and comments\ndon't match (obviously I'm wrong, this is why I'm asking).\n\n /*\n * Determine the number of blocks we have to scan.\n *\n * It is sufficient to do this once at scan start, since any tuples\nadded\n * while the scan is in progress will be invisible to my snapshot\nanyway.\n\nAndy: I can understand up to this point.\n\n * That is not true when using a non-MVCC snapshot. However, we couldn't\n * guarantee to return tuples added after scan start anyway,\n\nAndy: For any isolation level other than \"READ Committed\", we should not\nread that; for \"READ UNCommitted\", we can still do the same. So I think\nI can understand it here.\n\n\n * since they\n * might go into pages we already scanned. To guarantee consistent\n * results for a non-MVCC snapshot, the caller must hold some\nhigher-level\n * lock that ensures the interesting tuple(s) won't change.)\n */\n\nAndy: I can't understand what \"To guarantee consistent results for a\nnon-MVCC snapshot\" means. It looks like something needs to be handled\ndifferently for a non-MVCC snapshot. Until now I think we CAN determine\nthe number of blocks only once for an MVCC snapshot, which should be\nvery common.\n\n if (scan->rs_parallel != NULL)\n scan->rs_nblocks = scan->rs_parallel->phs_nblocks;\n else\n scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_rd);\n\nAndy: However, I see the code checks the number of blocks at every\nrescan regardless of snapshot type, which I can't understand.\n\nThis behavior doesn't cause any trouble for me (I may care about this\nfor Index Scan, but it looks like IndexScan doesn't need to do that), so I am\nasking just for education purposes. Thanks!\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 14 Oct 2020 20:02:29 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "RelationGetNumberOfBlocks is called every time of heap_rescan." } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\nImprove psql \\df to choose functions by their arguments\n\n== OVERVIEW\n\nHaving to scroll through same-named functions with different argument types \nwhen you know exactly which one you want is annoying at best, error-causing \nat worst. This patch enables a quick narrowing of functions with the \nsame name but different arguments. For example, to see the full details \nof a function named \"myfunc\" with a TEXT argument, but not showing \nthe version of \"myfunc\" with a BIGINT argument, one can now do:\n\npsql=# \\df myfunc text\n\nFor this, we are fairly liberal in what we accept, and try to be as \nintuitive as possible.\n\nFeatures:\n\n* Type names are case insensitive. Whitespace is optional, but quoting is respected:\n\ngreg=# \\df myfunc text \"character varying\" INTEGER\n\n* Abbreviations of common types are permitted (because who really likes \nto type out \"character varying\"?), so the above could also be written as:\n\ngreg=# \\df myfunc text varchar int\n\n* The matching is greedy, so you can see everything matching a subset:\n\ngreg=# \\df myfunc timestamptz\n List of functions\n Schema | Name | Result data type | Argument data types | Type \n- --------+--------+------------------+-------------------------------------------+------\n public | myfunc | void | timestamp with time zone | func\n public | myfunc | void | timestamp with time zone, bigint | func\n public | myfunc | void | timestamp with time zone, bigint, boolean | func\n public | myfunc | void | timestamp with time zone, integer | func\n public | myfunc | void | timestamp with time zone, text, cidr | func\n(5 rows)\n\n* The appearance of a closing paren indicates we do not want the greediness:\n\ngreg=# \\df myfunc (timestamptz, bigint)\n List of functions\n Schema | Name | Result data type | Argument data types | Type \n- --------+--------+------------------+----------------------------------+------\n public | 
myfunc | void | timestamp with time zone, bigint | func\n(1 row)\n\n\n== TAB COMPLETION:\n\nI'm not entirely happy with this, but I figure piggybacking \nonto COMPLETE_WITH_FUNCTION_ARG is better than nothing at all.\nIdeally we'd walk prev*_wd to refine the returned list, but \nthat's an awful lot of complexity for very little gain, and I think \nthe current behavior of showing the complete list of args each time \nshould suffice.\n\n\n== DOCUMENTATION:\n\nThe new feature is briefly mentioned: wordsmithing help in the \nsgml section is appreciated. I'm not sure how many of the above features \nneed to be documented in detail.\n\nRegarding psql/help.c, I don't think this really warrants a change there. \nAs it is, we've gone to great lengths to keep this overloaded backslash \ncommand left justified with the rest!\n\n\n== TESTS:\n\nI put this into psql.c, which seems the best place. Mostly testing out \nbasic functionality, quoting, and the various abbreviations. Not much \nelse to test, near as I can tell, as this is a pure convenience addition \nand shouldn't affect anything else. Any extra words after a function name \nfor \\df were previously treated as an error.\n\n\n== IMPLEMENTATION:\n\nRather than messing with psqlscanslash, we simply slurp in the entire rest \nof the line via psql_scan_slash_option (all of which was previously ignored). \nThis is passed to describeFunction, which then uses strtokx to break it \ninto tokens. We look for a match by comparing the current proargtypes entry, \ncast to text, against the lowercase version of the token found by strtokx. 
If any of the tokens start with a closing \nparen, we immediately stop parsing and set pronargs to the current number \nof valid tokens, thereby forcing a match to one (or zero) functions.\n\ndcd972f6b945070ef4454ea39d25378427a90e89 df.patch\n\n-----BEGIN PGP SIGNATURE-----\n\niF0EAREDAB0WIQQlKd9quPeUB+lERbS8m5BnFJZKyAUCX4bsgwAKCRC8m5BnFJZK\nyGDvAJ9ix8jzwtTwKLDQUgu5yb/iBoC7EQCfQsf8LLZ0RWsiiMposi57u3S94nE=\n=rQj2\n-----END PGP SIGNATURE-----", "msg_date": "Wed, 14 Oct 2020 12:56:38 -0000", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "WIP psql \\df choose functions by their arguments" } ]
[ { "msg_contents": "Having committed the optimization for unicode normalization quick check,\nMichael Paquier suggested I might do the same for decomposition as well. I\nwrote:\n\n> I'll\n> do some performance testing soon. Note that a 25kB increase in size could\n> be present in frontend binaries as well in this case. While looking at\n> decomposition, I noticed that recomposition does a linear search through\n> all 6600+ entries, although it seems only about 800 are valid for that.\n> That could be optimized as well now, since with hashing we have more\n> flexibility in the ordering and can put the recomp-valid entries in front.\n> I'm not yet sure if it's worth the additional complexity. I'll take a look\n> and start a new thread.\n\nThe attached patch uses a perfect hash for codepoint decomposition, and for\nrecomposing reduces the linear search from 6604 entries to 942.\n\nThe performance is very nice, and if I'd known better I would have done\nthis first, since the decomp array is as big as the two quick check arrays\nput together:\n\nNormalize, decomp only\n\nselect count(normalize(t, NFD)) from (\nselect md5(i::text) as t from\ngenerate_series(1,100000) as i\n) s;\n\nmaster patchÏ\n887ms 231ms\n\nselect count(normalize(t, NFD)) from (\nselect repeat(U&'\\00E4\\00C5\\0958\\00F4\\1EBF\\3300\\1FE2\\3316\\2465\\322D', i % 3\n+ 1) as t from\ngenerate_series(1,100000) as i\n) s;\n\nmaster patch\n1110ms 208ms\n\n\nNormalize, decomp+recomp (note: 100x less data)\n\nselect count(normalize(t, NFC)) from (\nselect md5(i::text) as t from\ngenerate_series(1,1000) as i\n) s;\n\nmaster patch\n194ms 50.6ms\n\nselect count(normalize(t, NFC)) from (\nselect repeat(U&'\\00E4\\00C5\\0958\\00F4\\1EBF\\3300\\1FE2\\3316\\2465\\322D', i % 3\n+ 1) as t from\ngenerate_series(1,1000) as i\n) s;\n\nmaster patch\n137ms 39.4ms\n\n\nQuick check is another 2x faster on top of previous gains, since it tests\ncanonical class via the decomposition array:\n\n-- all chars are quickcheck 
YES\nselect count(*) from (\nselect md5(i::text) as t from\ngenerate_series(1,100000) as i\n) s\nwhere t is NFC normalized;\n\nmaster patch\n296ms 131ms\n\n\nSome other considerations:\n- As I alluded above, this adds ~26kB to libpq because of SASLPrep. Since\nthe decomp array was reordered to optimize linear search, it can no longer\nbe used for binary search. It's possible to build two arrays, one for\nfrontend and one for backend, but that's additional complexity. We could\nalso force frontend to do a linear search all the time, but that seems\nfoolish. I haven't checked if it's possible to exclude the hash from\nbackend's libpq.\n- I could split out the two approaches into separate patches, but it'd be\nrather messy.\n\nI'll add a CF entry for this.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 14 Oct 2020 12:58:21 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "speed up unicode decomposition and recomposition" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> Some other considerations:\n> - As I alluded above, this adds ~26kB to libpq because of SASLPrep. Since\n> the decomp array was reordered to optimize linear search, it can no longer\n> be used for binary search. It's possible to build two arrays, one for\n> frontend and one for backend, but that's additional complexity. We could\n> also force frontend to do a linear search all the time, but that seems\n> foolish. I haven't checked if it's possible to exclude the hash from\n> backend's libpq.\n\nIIUC, the only place libpq uses this is to process a password-sized string\nor two during connection establishment. It seems quite silly to add\n26kB in order to make that faster. 
Seems like a nice speedup on the\nbackend side, but I'd vote for keeping the frontend as-is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Oct 2020 13:06:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Wed, Oct 14, 2020 at 01:06:40PM -0400, Tom Lane wrote:\n> John Naylor <john.naylor@enterprisedb.com> writes:\n>> Some other considerations:\n>> - As I alluded above, this adds ~26kB to libpq because of SASLPrep. Since\n>> the decomp array was reordered to optimize linear search, it can no longer\n>> be used for binary search. It's possible to build two arrays, one for\n>> frontend and one for backend, but that's additional complexity. We could\n>> also force frontend to do a linear search all the time, but that seems\n>> foolish. I haven't checked if it's possible to exclude the hash from\n>> backend's libpq.\n> \n> IIUC, the only place libpq uses this is to process a password-sized string\n> or two during connection establishment. It seems quite silly to add\n> 26kB in order to make that faster. Seems like a nice speedup on the\n> backend side, but I'd vote for keeping the frontend as-is.\n\nAgreed. Let's only use the perfect hash in the backend. It would be\nnice to avoid an extra generation of the decomposition table for that,\nand a table ordered by codepoints is easier to look at. 
How much do\nyou think would be the performance impact if we don't use for the\nlinear search the most-optimized decomposition table?\n--\nMichael", "msg_date": "Thu, 15 Oct 2020 09:25:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Wed, Oct 14, 2020 at 8:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Oct 14, 2020 at 01:06:40PM -0400, Tom Lane wrote:\n> > IIUC, the only place libpq uses this is to process a password-sized\n> string\n> > or two during connection establishment. It seems quite silly to add\n> > 26kB in order to make that faster. Seems like a nice speedup on the\n> > backend side, but I'd vote for keeping the frontend as-is.\n>\n> Agreed. Let's only use the perfect hash in the backend. It would be\n> nice to avoid an extra generation of the decomposition table for that,\n> and a table ordered by codepoints is easier to look at. How much do\n> you think would be the performance impact if we don't use for the\n> linear search the most-optimized decomposition table?\n>\n\nWith those points in mind and thinking more broadly, I'd like to try harder\non recomposition. Even several times faster, recomposition is still orders\nof magnitude slower than ICU, as measured by Daniel Verite [1]. I only did\nit this way because I couldn't think of how to do the inverse lookup with a\nhash. But I think if we constructed the hash key like\n\npg_hton64((code1 << 32) | code2)\n\nand on the Perl side do something like\n\npack('N',$code1) . pack('N',$code2)\n\nthat might work. Or something that looks more like the C side. And make\nsure to use the lowest codepoint for the result. 
That way, we can still\nkeep the large decomp array ordered, making it easier to keep the current\nimplementation in the front end, and hopefully getting even better\nperformance in the backend.\n\n[1]\nhttps://www.postgresql.org/message-id/2c5e8df9-43b8-41fa-88e6-286e8634f00a%40manitou-mail.org\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Oct 14, 2020 at 8:25 PM Michael Paquier <michael@paquier.xyz> wrote:On Wed, Oct 14, 2020 at 01:06:40PM -0400, Tom Lane wrote:> IIUC, the only place libpq uses this is to process a password-sized string\n> or two during connection establishment.  It seems quite silly to add\n> 26kB in order to make that faster.  Seems like a nice speedup on the\n> backend side, but I'd vote for keeping the frontend as-is.\n\nAgreed.  Let's only use the perfect hash in the backend.  It would be\nnice to avoid an extra generation of the decomposition table for that,\nand a table ordered by codepoints is easier to look at.  How much do\nyou think would be the performance impact if we don't use for the\nlinear search the most-optimized decomposition table?With those points in mind and thinking more broadly, I'd like to try harder on recomposition. Even several times faster, recomposition is still orders of magnitude slower than ICU, as measured by Daniel Verite [1]. I only did it this way because I couldn't think of how to do the inverse lookup with a hash. But I think if we constructed the hash key likepg_hton64((code1 << 32) | code2)and on the Perl side do something likepack('N',$code1) . pack('N',$code2)that might work. Or something that looks more like the C side. And make sure to use the lowest codepoint for the result. 
That way, we can still\nkeep the large decomp array ordered, making it easier to keep the current\nimplementation in the front end, and hopefully getting even better\nperformance in the backend.\n\n[1]\nhttps://www.postgresql.org/message-id/2c5e8df9-43b8-41fa-88e6-286e8634f00a%40manitou-mail.org\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 14 Oct 2020 22:56:41 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> With those points in mind and thinking more broadly, I'd like to try harder\n> on recomposition. Even several times faster, recomposition is still orders\n> of magnitude slower than ICU, as measured by Daniel Verite [1].\n\nHuh. Has anyone looked into how they do it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Oct 2020 23:06:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "At Wed, 14 Oct 2020 23:06:28 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > With those points in mind and thinking more broadly, I'd like to try harder\n> > on recomposition. Even several times faster, recomposition is still orders\n> > of magnitude slower than ICU, as measured by Daniel Verite [1].\n> \n> Huh. Has anyone looked into how they do it?\n\nI'm not sure it is that, but it would be that.. It uses separate\ntables for decomposition and composition pointed from a trie?\n\nThat table is used after trying algorithmic decomposition/composition\nfor, for example, Hangul. 
It uses separate\n> tables for decomposition and composition pointed from a trie?\n>\n\nI think I've seen a trie recommended somewhere, maybe the official website.\nThat said, I was able to get the hash working for recomposition (split into\na separate patch, and both of them now leave frontend alone), and I'm\npleased to say it's 50-75x faster than linear search in simple tests. I'd\nbe curious how it compares to ICU now. Perhaps Daniel Verite would be\ninterested in testing again? (CC'd)\n\nselect count(normalize(t, NFC)) from (\nselect md5(i::text) as t from\ngenerate_series(1,100000) as i\n) s;\n\nmaster patch\n18800ms 257ms\n\nselect count(normalize(t, NFC)) from (\nselect repeat(U&'\\00E4\\00C5\\0958\\00F4\\1EBF\\3300\\1FE2\\3316\\2465\\322D', i % 3\n+ 1) as t from\ngenerate_series(1,100000) as i\n) s;\n\nmaster patch\n13000ms 254ms\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 15 Oct 2020 13:59:38 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Thu, Oct 15, 2020 at 01:59:38PM -0400, John Naylor wrote:\n> I think I've seen a trie recommended somewhere, maybe the official website.\n> That said, I was able to get the hash working for recomposition (split into\n> a separate patch, and both of them now leave frontend alone), and I'm\n> pleased to say it's 50-75x faster than linear search in simple tests. I'd\n> be curious how it compares to ICU now. Perhaps Daniel Verite would be\n> interested in testing again? (CC'd)\n\nYeah, that would be interesting to compare. Now the gains proposed by\nthis patch are already a good step forward, so I don't think that it\nshould be a blocker for a solution we have at hand as the numbers\nspeak by themselves here. 
So if something better gets proposed, we\ncould always change the decomposition and recomposition logic as\nneeded.\n\n> select count(normalize(t, NFC)) from (\n> select md5(i::text) as t from\n> generate_series(1,100000) as i\n> ) s;\n> \n> master patch\n> 18800ms 257ms\n\nMy environment was showing HEAD as being a bit faster with 15s, while\nthe patch gets \"only\" down to 290~300ms (compiled with -O2, as I guess\nyou did). Nice.\n\n+ # Then the second\n+ return -1 if $a2 < $b2;\n+ return 1 if $a2 > $b2;\nShould say \"second code point\" here?\n\n+ hashkey = pg_hton64(((uint64) start << 32) | (uint64) code);\n+ h = recompinfo.hash(&hashkey);\nThis choice should be documented, and most likely we should have\ncomments on the perl and C sides to keep track of the relationship\nbetween the two.\n\nThe binary sizes of libpgcommon_shlib.a and libpgcommon.a change\nbecause Decomp_hash_func() gets included, impacting libpq.\nStructurally, wouldn't it be better to move this part into its own,\nbackend-only, header? It could be possible to paint the difference\nwith some ifdef FRONTEND of course, or just keep things as they are\nbecause this can be useful for some out-of-core frontend tool? 
But if\nwe keep that as a separate header then any C part can decide to\ninclude it or not, so frontend tools could also make this choice.\nNote that we don't include unicode_normprops_table.h for frontends in\nunicode_norm.c, but that's the case of unicode_norm_table.h.\n--\nMichael", "msg_date": "Fri, 16 Oct 2020 12:32:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "\tJohn Naylor wrote:\n\n> I'd be curious how it compares to ICU now\n\nI've made another run of the test in [1] with your v2 patches\nfrom this thread against icu_ext built with ICU-67.1.\nThe results show the times in milliseconds to process\nabout 10 million short strings:\n\n operation | unpatched | patched | icu_ext \n------------+-----------+---------+---------\n nfc check |\t 7968 | 5989 | 4076\n nfc conv |\t 700894 | 15163 | 6808\n nfd check |\t 16399 | 9852 | 3849\n nfd conv |\t 17391 | 10916 | 6758\n nfkc check |\t 8259 | 6092 | 4301\n nfkc conv |\t 700241 | 15354 | 7034\n nfkd check |\t 16585 | 10049 | 4038\n nfkd conv |\t 17587 | 11109 | 7086\n\nThe ICU implementation still wins by a large margin, but\nthe improvements brought by the patch are gorgeous,\nespecially for the conversion to NFC/NFKC.\nThis test ran on a slower machine than what I used for [1], so\nthat's why all queries take longer.\n\nFor the two queries upthread, I get this:\n\n1)\nselect count(normalize(t, NFC)) from (\nselect md5(i::text) as t from\ngenerate_series(1,100000) as i\n) s;\ncount \n--------\n 100000\n(1 row)\n\nTime: 371.043 ms\n\nVS ICU:\n\nselect count(icu_normalize(t, 'NFC')) from (\nselect md5(i::text) as t from\ngenerate_series(1,100000) as i\n) s;\n count\t\n--------\n 100000\n(1 row)\n\nTime: 125.809 ms\n\n\n2)\nselect count(normalize(t, NFC)) from (\nselect repeat(U&'\\00E4\\00C5\\0958\\00F4\\1EBF\\3300\\1FE2\\3316\\2465\\322D', i % 3\n+ 1) as t 
from\ngenerate_series(1,100000) as i\n) s;\n count\t\n--------\n 100000\n(1 row)\nTime: 428.214 ms\n\n\nVS ICU:\n\nselect count(icu_normalize(t, 'NFC')) from (\nselect repeat(U&'\\00E4\\00C5\\0958\\00F4\\1EBF\\3300\\1FE2\\3316\\2465\\322D', i % 3\n+ 1) as t from\ngenerate_series(1,100000) as i\n) s;\n count\t\n--------\n 100000\n(1 row)\n\nTime: 147.713 ms\n\n\n[1]\nhttps://www.postgresql.org/message-id/2c5e8df9-43b8-41fa-88e6-286e8634f00a%40manitou-mail.org\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: https://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 16 Oct 2020 20:08:55 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Thu, Oct 15, 2020 at 11:32 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n>\n> The binary sizes of libpgcommon_shlib.a and libpgcommon.a change\n> because Decomp_hash_func() gets included, impacting libpq.\n>\n\nI don't see any difference on gcc/Linux in those two files, nor in\nunicode_norm_shlib.o -- I do see a difference in unicode_norm_srv.o as\nexpected. Could it depend on the compiler?\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 19 Oct 2020 10:34:33 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Fri, Oct 16, 2020 at 2:08 PM Daniel Verite <daniel@manitou-mail.org>\nwrote:\n\n> John Naylor wrote:\n>\n> > I'd be curious how it compares to ICU now\n>\n> I've made another run of the test in [1] with your v2 patches\n> from this thread against icu_ext built with ICU-67.1.\n> The results show the times in milliseconds to process\n> about 10 million short strings:\n>\n\nThanks for testing!\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 19 Oct 2020 10:36:00 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Mon, Oct 19, 2020 at 10:34:33AM -0400, John Naylor wrote:\n> I don't see any difference on gcc/Linux in those two files, nor in\n> unicode_norm_shlib.o -- I do see a difference in unicode_norm_srv.o as\n> expected. Could it depend on the compiler?\n\nHmm. My guess is that you don't have --enable-debug in your set of\nconfigure options? 
It is not unusual to have this one enabled for GCC\neven on production systems, and the size of the libs is impacted in\nthis case with your patch.\n--\nMichael", "msg_date": "Tue, 20 Oct 2020 16:22:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Tue, Oct 20, 2020 at 3:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Oct 19, 2020 at 10:34:33AM -0400, John Naylor wrote:\n> > I don't see any difference on gcc/Linux in those two files, nor in\n> > unicode_norm_shlib.o -- I do see a difference in unicode_norm_srv.o as\n> > expected. Could it depend on the compiler?\n>\n> Hmm. My guess is that you don't have --enable-debug in your set of\n> configure options? It is not unusual to have this one enabled for GCC\n> even on production systems, and the size of the libs is impacted in\n> this case with your patch.\n>\n\nI've confirmed that. How about a new header unicode_norm_hashfunc.h which\nwould include unicode_norm_table.h at the top. In unicode.c, we can include\none of these depending on frontend or backend.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 20 Oct 2020 08:03:12 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Tue, Oct 20, 2020 at 08:03:12AM -0400, John Naylor wrote:\n> I've confirmed that. How about a new header unicode_norm_hashfunc.h which\n> would include unicode_norm_table.h at the top. In unicode.c, we can include\n> one of these depending on frontend or backend.\n\nSounds good to me. Looking at the code, I would just generate the\nsecond file within generate-unicode_norm_table.pl.\n--\nMichael", "msg_date": "Wed, 21 Oct 2020 09:31:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "Attached v3 addressing review points below:\n\nOn Thu, Oct 15, 2020 at 11:32 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> + # Then the second\n> + return -1 if $a2 < $b2;\n> + return 1 if $a2 > $b2;\n> Should say \"second code point\" here?\n>\n\nDone. Also changed the tiebreaker to the composed codepoint. 
Beforehand, it\nwas the index into DecompMain[], which is only equivalent if the list is in\norder (it is but we don't want correctness to depend on that), and not very\nclear.\n\n\n> + hashkey = pg_hton64(((uint64) start << 32) | (uint64) code);\n> + h = recompinfo.hash(&hashkey);\n> This choice should be documented, and most likely we should have\n> comments on the perl and C sides to keep track of the relationship\n> between the two.\n>\n\nDone.\n\n\n> <separate headers>\n\n\nDone.\n\nOther cosmetic changes:\n- format recomp array comments like /* U+0045+032D -> U+1E18 */\n- make sure to comment #endif's that are far from the #if\n- small whitespace fixes\n\nNote: for the new header I simply adapted from unicode_norm_table.h the\nchoice of \"There is deliberately not an #ifndef PG_UNICODE_NORM_TABLE_H\nhere\", although I must confess I'm not sure what the purpose of that is, in\nthis case.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 21 Oct 2020 18:45:44 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "There was a mistake in v3 with pgindent/exclude_file_patterns, fixed in v4.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 21 Oct 2020 19:35:12 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Wed, Oct 21, 2020 at 07:35:12PM -0400, John Naylor wrote:\n> There was a mistake in v3 with pgindent/exclude_file_patterns, fixed in v4.\n\nThanks for the updated version, that was fast. 
I have found a couple\nof places that needed to be adjusted, like the comment at the top of\ngenerate-unicode_norm_table.pl or some comments, an incorrect include\nin the new headers and the indentation was not right in perl (we use\nperltidy v20170521, see the README in src/tools/pgindent).\n\nExcept that, this looks good to me. Attached is the updated version\nwith all my tweaks, that I would like to commit. If there are any\ncomments, please feel free of course.\n--\nMichael", "msg_date": "Thu, 22 Oct 2020 13:34:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Thu, Oct 22, 2020 at 12:34 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> Thanks for the updated version, that was fast. I have found a couple\n> of places that needed to be adjusted, like the comment at the top of\n> generate-unicode_norm_table.pl or some comments, an incorrect include\n> in the new headers and the indentation was not right in perl (we use\n> perltidy v20170521, see the README in src/tools/pgindent).\n>\n> Except that, this looks good to me. Attached is the updated version\n> with all my tweaks, that I would like to commit. If there are any\n> comments, please feel free of course.\n>\n\nLooks good to me.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 22 Oct 2020 05:50:52 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Thu, Oct 22, 2020 at 05:50:52AM -0400, John Naylor wrote:\n> Looks good to me.\n\nThanks. Committed, then. Great work!\n--\nMichael", "msg_date": "Fri, 23 Oct 2020 11:11:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Thu, Oct 22, 2020 at 10:11 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Thu, Oct 22, 2020 at 05:50:52AM -0400, John Naylor wrote:\n> > Looks good to me.\n>\n> Thanks. Committed, then. Great work!\n>\n\nThank you!\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 23 Oct 2020 05:54:26 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "I chanced to do an --enable-coverage test run today, and I got this\nweird message during \"make coverage-html\":\n\ngenhtml: WARNING: function data mismatch at /home/postgres/pgsql/src/common/unicode_norm.c:102\n\nI've never seen anything like that before. I suppose it means that\nsomething about 783f0cc64 is a bit fishy, but I don't know what.\n\nThe emitted coverage report looks fairly normal anyway. 
It says\nunicode_norm.c has zero test coverage, which is very possibly correct\nsince I wasn't running in UTF8 encoding, but I'm not entirely sure of\nthat either.\n\nThis is with RHEL8's lcov-1.13-4.el8 package. I suppose the first\nquestion is does anybody else see that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 12:07:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "\n\n> On Oct 23, 2020, at 9:07 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I chanced to do an --enable-coverage test run today, and I got this\n> weird message during \"make coverage-html\":\n> \n> genhtml: WARNING: function data mismatch at /home/postgres/pgsql/src/common/unicode_norm.c:102\n> \n> I've never seen anything like that before. I suppose it means that\n> something about 783f0cc64 is a bit fishy, but I don't know what.\n> \n> The emitted coverage report looks fairly normal anyway. It says\n> unicode_norm.c has zero test coverage, which is very possibly correct\n> since I wasn't running in UTF8 encoding, but I'm not entirely sure of\n> that either.\n> \n> This is with RHEL8's lcov-1.13-4.el8 package. I suppose the first\n> question is does anybody else see that?\n\nI don't see it on mac nor on ubuntu64. 
I get 70.6% coverage of lines and 90.9% of functions on ubuntu.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 23 Oct 2020 16:18:13 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Fri, Oct 23, 2020 at 04:18:13PM -0700, Mark Dilger wrote:\n> On Oct 23, 2020, at 9:07 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> genhtml: WARNING: function data mismatch at /home/postgres/pgsql/src/common/unicode_norm.c:102\n>> \n>> I've never seen anything like that before. I suppose it means that\n>> something about 783f0cc64 is a bit fishy, but I don't know what.\n>> \n>> The emitted coverage report looks fairly normal anyway. It says\n>> unicode_norm.c has zero test coverage, which is very possibly correct\n>> since I wasn't running in UTF8 encoding, but I'm not entirely sure of\n>> that either.\n> \n> I don't see it on mac nor on ubuntu64. I get 70.6% coverage of\n> lines and 90.9% of functions on ubuntu.\n\nI can see the problem on Debian GID with lcov 1.14-2. This points to\nthe second declaration of get_code_entry(). I think that genhtml,\nbecause it considers the code of unicode_norm.c as a whole without its\nCFLAGS, gets confused because it finds the same function to index as\ndefined twice. It expects only one definition, hence the warning. So\nI think that this can lead to some incorrect data in the HTML report,\nand the attached patch takes care of fixing that. 
Tom, does it take\ncare of the issue on your side?\n--\nMichael", "msg_date": "Sat, 24 Oct 2020 09:02:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Oct 23, 2020 at 04:18:13PM -0700, Mark Dilger wrote:\n>> On Oct 23, 2020, at 9:07 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> genhtml: WARNING: function data mismatch at /home/postgres/pgsql/src/common/unicode_norm.c:102\n\n> I can see the problem on Debian GID with lcov 1.14-2. This points to\n> the second declaration of get_code_entry(). I think that genhtml,\n> because it considers the code of unicode_norm.c as a whole without its\n> CFLAGS, gets confused because it finds the same function to index as\n> defined twice. It expects only one definition, hence the warning. So\n> I think that this can lead to some incorrect data in the HTML report,\n> and the attached patch takes care of fixing that. Tom, does it take\n> care of the issue on your side?\n\nGood catch! Yeah, that fixes it for me.\n\nI'd advise not putting conv_compare() between get_code_entry() and\nthe header comment for get_code_entry(). Looks good otherwise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 20:24:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Fri, Oct 23, 2020 at 08:24:06PM -0400, Tom Lane wrote:\n> I'd advise not putting conv_compare() between get_code_entry() and\n> the header comment for get_code_entry(). Looks good otherwise.\n\nIndeed. I have adjusted the position of the comment, and applied the\nfix. 
Thanks for the report.\n--\nMichael", "msg_date": "Sat, 24 Oct 2020 14:25:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "There is a latent bug in the way code pairs for recomposition are sorted\ndue to a copy-pasto on my part. Makes no difference now, but it could in\nthe future. While looking, it seems pg_bswap.h should actually be\nbackend-only. Both fixed in the attached.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 6 Nov 2020 18:20:00 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Fri, Nov 06, 2020 at 06:20:00PM -0400, John Naylor wrote:\n> There is a latent bug in the way code pairs for recomposition are sorted\n> due to a copy-pasto on my part. Makes no difference now, but it could in\n> the future. While looking, it seems pg_bswap.h should actually be\n> backend-only. Both fixed in the attached.\n\nThanks John. Both look right to me. I'll apply both in a bit.\n--\nMichael", "msg_date": "Sat, 7 Nov 2020 09:29:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" }, { "msg_contents": "On Sat, Nov 07, 2020 at 09:29:30AM +0900, Michael Paquier wrote:\n> Thanks John. Both look right to me. I'll apply both in a bit.\n\nDone that now. Just for the note: you forgot to run pgperltidy.\n--\nMichael", "msg_date": "Sat, 7 Nov 2020 10:23:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode decomposition and recomposition" } ]
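The recomposition lookup discussed in the thread above keys each canonically composable code point pair on a single 64-bit value, pg_hton64(((uint64) start << 32) | (uint64) code), which is fed to a generated perfect hash. Below is a minimal Python sketch of that keying scheme; the dict stands in for PostgreSQL's generated perfect hash function (an assumption for illustration only), and the sample pairs are real canonical compositions from UnicodeData.txt (U+0045+032D -> U+1E18 is the example cited in the thread's table comments):

```python
import struct

# A few real canonical compositions, (starter, combining mark) -> composed
# code point, standing in for the full table built from UnicodeData.txt.
RECOMP = {
    (0x0041, 0x0300): 0x00C0,  # A + combining grave -> U+00C0
    (0x0045, 0x032D): 0x1E18,  # U+0045+032D -> U+1E18, as in the generated table
    (0x006F, 0x0302): 0x00F4,  # o + combining circumflex -> U+00F4
}

def pair_key(start, code):
    # Pack the pair into one 64-bit big-endian key, mirroring the C side's
    # pg_hton64(((uint64) start << 32) | (uint64) code).
    return struct.pack(">Q", (start << 32) | code)

# Key the table on the packed bytes; the committed code instead runs these
# bytes through a perfect hash emitted by generate-unicode_norm_table.pl.
TABLE = {pair_key(s, c): composed for (s, c), composed in RECOMP.items()}

def recompose(start, code):
    """Return the composed code point for a pair, or None if none exists."""
    return TABLE.get(pair_key(start, code))

print(hex(recompose(0x0045, 0x032D)))  # prints 0x1e18
```

This is only a sketch of the key layout: a hash keyed on the ordered pair replaces the old linear scan over the composition list, which is where the order-of-magnitude speedup in NFC conversion comes from.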
[ { "msg_contents": "I noticed that chipmunk failed [1] with a rather interesting log:\n\n2020-10-14 08:57:01.661 EEST [27048:6] pg_regress/prepared_xacts LOG: statement: UPDATE pxtest1 SET foobar = 'bbb' WHERE foobar = 'aaa';\n2020-10-14 08:57:01.721 EEST [27048:7] pg_regress/prepared_xacts LOG: statement: SELECT * FROM pxtest1;\n2020-10-14 08:57:01.823 EEST [27048:8] pg_regress/prepared_xacts FATAL: postmaster exited during a parallel transaction\nTRAP: FailedAssertion(\"entry->trans == NULL\", File: \"pgstat.c\", Line: 909, PID: 27048)\n2020-10-14 08:57:01.861 EEST [27051:1] ERROR: could not attach to dynamic shared area\n2020-10-14 08:57:01.861 EEST [27051:2] STATEMENT: SELECT * FROM pxtest1;\n\nI do not know what happened to the postmaster, but seeing that chipmunk\nis a very small machine running a pretty old Linux kernel, it's plausible\nto guess that the OOM killer decided to pick on the postmaster. (I wonder\nwhether Heikki has taken any steps to prevent that on that machine.)\nMy concern today is not that the postmaster died, but that the subsequent\nresponse was an Assert failure. Not good.\n\nI tried to reproduce this by dint of manually kill -9'ing the postmaster\nduring the select_parallel regression test. Figuring that a slower\nmachine would give me a better chance of success at that, I used an old\nMac laptop that wasn't doing anything else. I did not succeed yet, but\nwhat I did reproducibly get (in five out of five tries) was a leader\nprocess that was permanently stuck on a latch, waiting for its worker(s)\nto die. 
Needless to say, there were no workers and never would be.\nThe stack trace varied a bit, but here's an interesting case:\n\n(gdb) bt\n#0 0x90b267ac in kevent ()\n#1 0x003b76e8 in WaitEventSetWaitBlock [inlined] () at latch.c:1506\n#2 0x003b76e8 in WaitEventSetWait (set=0x10003a8, timeout=-1, occurred_events=<value temporarily unavailable, due to optimizations>, nevents=1, wait_event_info=<value temporarily unavailable, due to optimizations>) at latch.c:1309\n#3 0x003b814c in WaitLatch (latch=<value temporarily unavailable, due to optimizations>, wakeEvents=17, timeout=-1, wait_event_info=134217729) at latch.c:411\n#4 0x0032e77c in WaitForBackgroundWorkerShutdown (handle=0x10280a4) at bgworker.c:1139\n#5 0x000bc6fc in WaitForParallelWorkersToExit (pcxt=0xc6f84c) at parallel.c:876\n#6 0x000bc99c in DestroyParallelContext (pcxt=0xc6f84c) at parallel.c:958\n#7 0x000bdc48 in dlist_is_empty [inlined] () at lib/ilist.h:1231\n#8 0x000bdc48 in AtEOXact_Parallel (isCommit=4 '\\004') at parallel.c:1224\n#9 0x000ccf24 in AbortTransaction () at xact.c:2702\n#10 0x000cd534 in AbortOutOfAnyTransaction () at xact.c:4623\n#11 0x00550b54 in ShutdownPostgres (code=<value temporarily unavailable, due to optimizations>, arg=<value temporarily unavailable, due to optimizations>) at postinit.c:1195\n#12 0x003b5ff0 in shmem_exit (code=1) at ipc.c:239\n#13 0x003b6168 in proc_exit_prepare (code=1) at ipc.c:194\n#14 0x003b6240 in proc_exit (code=1) at ipc.c:107\n#15 0x00541f6c in errfinish (filename=<value temporarily unavailable, due to optimizations>, lineno=<value temporarily unavailable, due to optimizations>, funcname=0x5bd55c \"WaitForParallelWorkersToExit\") at elog.c:578\n#16 0x000bc748 in WaitForParallelWorkersToExit (pcxt=0xc6f84c) at parallel.c:885\n#17 0x000bc7c8 in ReinitializeParallelDSM (pcxt=0xc6f84c) at parallel.c:471\n#18 0x0021e468 in ExecParallelReinitialize (planstate=0x109a9a0, pei=0xc3a09c, sendParams=0x0) at execParallel.c:906\n#19 0x00239f6c in ExecGather 
(pstate=0x109a848) at nodeGather.c:177\n#20 0x00221cd8 in ExecProcNodeInstr (node=0x109a848) at execProcnode.c:466\n#21 0x0024f7a8 in ExecNestLoop (pstate=0x1099700) at executor/executor.h:244\n#22 0x00221cd8 in ExecProcNodeInstr (node=0x1099700) at execProcnode.c:466\n#23 0x0022edbc in ExecProcNode [inlined] () at executor.h:244\n#24 0x0022edbc in fetch_input_tuple (aggstate=0x10993d0) at executor.h:589\n\nWe appear to have already realized that the postmaster died, since we're\ninside proc_exit. WaitForBackgroundWorkerShutdown is doing this:\n\n rc = WaitLatch(MyLatch,\n WL_LATCH_SET | WL_POSTMASTER_DEATH, 0,\n WAIT_EVENT_BGWORKER_SHUTDOWN);\n\nwhich one would certainly hope would not block at all if the postmaster\nis already dead, yet it's doing so. I guess that the kevent stuff is\nfailing to handle the case of another WaitLatch call after the postmaster\nis already known dead.\n\nIn case it helps, I checked the contents of the WaitEventSet:\n\n(gdb) p *LatchWaitSet\n$2 = {\n nevents = 2, \n nevents_space = 2, \n events = 0x10003cc, \n latch = 0xa1b673c, \n latch_pos = 0, \n exit_on_postmaster_death = 0 '\\0', \n kqueue_fd = 12, \n kqueue_ret_events = 0x10003ec, \n report_postmaster_not_running = 0 '\\0'\n}\n(gdb) p LatchWaitSet->events[0]\n$3 = {\n pos = 0, \n events = 1, \n fd = 10, \n user_data = 0x0\n}\n(gdb) p LatchWaitSet->events[1]\n$4 = {\n pos = 1, \n events = 16, \n fd = 3, \n user_data = 0x0\n}\n\n\nI thought possibly this was an ancient-macOS problem, but I've now\nreproduced substantially the same behavior on an up-to-date Catalina\nmachine (10.15.7), so I do not think we can write it off that way.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2020-10-14%2000%3A04%3A08\n\n\n", "msg_date": "Wed, 14 Oct 2020 14:58:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "kevent latch paths don't handle postmaster death well" }, { "msg_contents": "On Thu, Oct 
15, 2020 at 7:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We appear to have already realized that the postmaster died, since we're\n> inside proc_exit. WaitForBackgroundWorkerShutdown is doing this:\n>\n> rc = WaitLatch(MyLatch,\n> WL_LATCH_SET | WL_POSTMASTER_DEATH, 0,\n> WAIT_EVENT_BGWORKER_SHUTDOWN);\n>\n> which one would certainly hope would not block at all if the postmaster\n> is already dead, yet it's doing so. I guess that the kevent stuff is\n> failing to handle the case of another WaitLatch call after the postmaster\n> is already known dead.\n\nThe process exit event is like an 'edge', not a 'level'... hmm. It\nmight be enough to set report_postmaster_not_running = true the first\ntime it tells us so if we try to wait again we'll treat it like a\nlevel. I will look into it later today.\n\n\n", "msg_date": "Thu, 15 Oct 2020 08:36:37 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> The process exit event is like an 'edge', not a 'level'... hmm. It\n> might be enough to set report_postmaster_not_running = true the first\n> time it tells us so if we try to wait again we'll treat it like a\n> level. I will look into it later today.\n\nSeems like having that be per-WaitEventSet state is also not a great\nidea --- if we detect PM death while waiting on one WES, and then\nwait on another one, it won't work. 
A plain process-wide static\nvariable would be a better way I bet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Oct 2020 15:40:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "On 14/10/2020 21:58, Tom Lane wrote:\n> I noticed that chipmunk failed [1] with a rather interesting log:\n> \n> 2020-10-14 08:57:01.661 EEST [27048:6] pg_regress/prepared_xacts LOG: statement: UPDATE pxtest1 SET foobar = 'bbb' WHERE foobar = 'aaa';\n> 2020-10-14 08:57:01.721 EEST [27048:7] pg_regress/prepared_xacts LOG: statement: SELECT * FROM pxtest1;\n> 2020-10-14 08:57:01.823 EEST [27048:8] pg_regress/prepared_xacts FATAL: postmaster exited during a parallel transaction\n> TRAP: FailedAssertion(\"entry->trans == NULL\", File: \"pgstat.c\", Line: 909, PID: 27048)\n> 2020-10-14 08:57:01.861 EEST [27051:1] ERROR: could not attach to dynamic shared area\n> 2020-10-14 08:57:01.861 EEST [27051:2] STATEMENT: SELECT * FROM pxtest1;\n> \n> I do not know what happened to the postmaster, but seeing that chipmunk\n> is a very small machine running a pretty old Linux kernel, it's plausible\n> to guess that the OOM killer decided to pick on the postmaster. (I wonder\n> whether Heikki has taken any steps to prevent that on that machine.)\n\nFor the record, it was not the OOM killer. It was the buildfarm cron job \nthat did it:\n\nOct 14 08:57:01 raspberrypi /USR/SBIN/CRON[27050]: (pgbfarm) CMD \n(killall -q -9 postgres; cd /home/pgbfarm/build-farm-client/ && \n./run_branches.pl --run-all)\n\nApparently building and testing all the branches is now taking slightly \nmore than 24 h on that system, so the next day's cron job kills the \nprevious tests. 
I'm going to change the cron schedule so that it runs \nonly every other day.\n\n- Heikki\n\n\n", "msg_date": "Wed, 14 Oct 2020 23:10:22 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "On Thu, Oct 15, 2020 at 8:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > The process exit event is like an 'edge', not a 'level'... hmm. It\n> > might be enough to set report_postmaster_not_running = true the first\n> > time it tells us so if we try to wait again we'll treat it like a\n> > level. I will look into it later today.\n>\n> Seems like having that be per-WaitEventSet state is also not a great\n> idea --- if we detect PM death while waiting on one WES, and then\n> wait on another one, it won't work. A plain process-wide static\n> variable would be a better way I bet.\n\nI don't think that's a problem -- the kernel will report the event to\neach interested kqueue object. The attached fixes the problem for me.", "msg_date": "Thu, 15 Oct 2020 11:10:28 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Oct 15, 2020 at 8:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Seems like having that be per-WaitEventSet state is also not a great\n>> idea --- if we detect PM death while waiting on one WES, and then\n>> wait on another one, it won't work. A plain process-wide static\n>> variable would be a better way I bet.\n\n> I don't think that's a problem -- the kernel will report the event to\n> each interested kqueue object. The attached fixes the problem for me.\n\nOh, OK. I confirm this makes the kqueue path work like the EPOLL and POLL\npaths. 
(I can't test the WIN32 path.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Oct 2020 18:18:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "On Thu, Oct 15, 2020 at 11:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Thu, Oct 15, 2020 at 8:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Seems like having that be per-WaitEventSet state is also not a great\n> >> idea --- if we detect PM death while waiting on one WES, and then\n> >> wait on another one, it won't work. A plain process-wide static\n> >> variable would be a better way I bet.\n>\n> > I don't think that's a problem -- the kernel will report the event to\n> > each interested kqueue object. The attached fixes the problem for me.\n>\n> Oh, OK. I confirm this makes the kqueue path work like the EPOLL and POLL\n> paths. (I can't test the WIN32 path.)\n\nThanks. Pushed.\n\n(Hmm, I wonder about that Windows process exit event.)\n\n\n", "msg_date": "Thu, 15 Oct 2020 11:50:18 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> (Hmm, I wonder about that Windows process exit event.)\n\nIf anyone wants to test that, I can save you a little time building\ninfrastructure, perhaps. I used the attached program built into a .so.\nAfter creating the function, invoke it, and once it's blocked kill -9\nthe postmaster. 
If it successfully reports multiple WL_POSTMASTER_DEATH\nresults then it's good.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 14 Oct 2020 18:57:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "Hi,\n\nOn 2020-10-15 11:10:28 +1300, Thomas Munro wrote:\n> I don't think that's a problem -- the kernel will report the event to\n> each interested kqueue object. The attached fixes the problem for me.\n\nWill it do so even if the kqueue is created after postmaster death?\n\n- Andres\n\n\n", "msg_date": "Wed, 14 Oct 2020 15:59:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-10-15 11:10:28 +1300, Thomas Munro wrote:\n>> I don't think that's a problem -- the kernel will report the event to\n>> each interested kqueue object. The attached fixes the problem for me.\n\n> Will it do so even if the kqueue is created after postmaster death?\n\nI did not try to test it, but there's code that purports to handle that\nin latch.c, ~ line 1150, and the behavior it's expecting mostly agrees\nwith what I read in the macOS kevent man page. 
One thing I'd suggest\nis that EACCES probably needs to be treated as \"postmaster already dead\",\ntoo, in case the PID is now owned by another user ID.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Oct 2020 19:14:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "On Thu, Oct 15, 2020 at 12:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-10-15 11:10:28 +1300, Thomas Munro wrote:\n> >> I don't think that's a problem -- the kernel will report the event to\n> >> each interested kqueue object. The attached fixes the problem for me.\n>\n> > Will it do so even if the kqueue is created after postmaster death?\n>\n> I did not try to test it, but there's code that purports to handle that\n> in latch.c, ~ line 1150, and the behavior it's expecting mostly agrees\n> with what I read in the macOS kevent man page. One thing I'd suggest\n\nYep, I did handle the obvious races here.\n\n> is that EACCES probably needs to be treated as \"postmaster already dead\",\n> too, in case the PID is now owned by another user ID.\n\nGood point. I'll push that change later today.\n\n\n", "msg_date": "Thu, 15 Oct 2020 12:55:19 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "On Thu, Oct 15, 2020 at 12:55 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Oct 15, 2020 at 12:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I did not try to test it, but there's code that purports to handle that\n> > in latch.c, ~ line 1150, and the behavior it's expecting mostly agrees\n> > with what I read in the macOS kevent man page. 
One thing I'd suggest\n> > is that EACCES probably needs to be treated as \"postmaster already dead\",\n> > too, in case the PID is now owned by another user ID.\n>\n> Good point. I'll push that change later today.\n\nI tried to test this on my system but it seems like maybe FreeBSD\ncan't really report EACCES for EVFILT_PROC. From the man page and a\nquick inspection of the source, you only have to be able to \"see\" the\nprocess, and if you can't I think you'll get ESRCH, so EACCES may be\nfor other kinds of filters. I don't currently have any Apple gear to\nhand, but its man page uses the same language, but on the other hand I\ndo see EACCES in filt_procattach() in the darwin-xnu sources on github\nso I guess you can reach this case and get an ugly ereport (hopefully\nfollowed swiftly by a proc_exit() from the next wait on one of the\nlong lived WESs, or a FATAL if this was the creation of one of those).\n Fixed. Thanks!\n\n\n", "msg_date": "Thu, 15 Oct 2020 18:42:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "On Thu, Oct 15, 2020 at 6:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I tried to test this on my system but it seems like maybe FreeBSD\n> can't really report EACCES for EVFILT_PROC. From the man page and a\n> quick inspection of the source, you only have to be able to \"see\" the\n> process, and if you can't I think you'll get ESRCH, so EACCES may be\n> for other kinds of filters. 
I don't currently have any Apple gear to\n> hand, but its man page uses the same language, but on the other hand I\n> do see EACCES in filt_procattach() in the darwin-xnu sources on github\n> so I guess you can reach this case and get an ugly ereport (hopefully\n> followed swiftly by a proc_exit() from the next wait on one of the\n> long lived WESs, or a FATAL if this was the creation of one of those).\n\nI couldn't resist digging further into the Apple sources to figure out\nwhat was going on there, and I realised that the code path I was\nlooking at can only report EACCES if you asked for NOTE_EXITSTATUS,\nwhich appears to be an Apple extension to the original FreeBSD kqueue\nsystem designed to let you receive the exit status of the monitored\nprocess. That is indeed much more privileged information, and it's\nonly allowed for your own children. So it's possible that commit\n70516a17 was a waste of electrons, but I don't think it can hurt;\neither way, our system is toast if we get that error, so it's mostly\njust a question of what sort of noises we make as we fail, if indeed\nany system really can produce EACCES for NOTE_EXIT (maybe in some\nother code path I haven't found, or some other cousin BSD).\n\n\n", "msg_date": "Fri, 16 Oct 2020 10:32:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I couldn't resist digging further into the Apple sources to figure out\n> what was going on there, and I realised that the code path I was\n> looking at can only report EACCES if you asked for NOTE_EXITSTATUS,\n> which appears to be an Apple extension to the original FreeBSD kqueue\n> system designed to let you receive the exit status of the monitored\n> process. 
That is indeed much more privileged information, and it's\n> only allowed for your own children.\n\nAh.\n\n> So it's possible that commit\n> 70516a17 was a waste of electrons, but I don't think it can hurt;\n\nYeah, I'm not inclined to revert it. If we did get that errno,\nit'd be hard to interpret it in any way that didn't involve the\npostmaster being gone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Oct 2020 17:40:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: kevent latch paths don't handle postmaster death well" } ]
[ { "msg_contents": " /* don't print information if no JITing happened */\n if (!ji || ji->created_functions == 0)\n return;\n\nThis applies even when (es->format != EXPLAIN_FORMAT_TEXT), which I think is\nwrong. Jit use can be determined by cost, so I think jit details should be\nshown in non-text format whenever ji!=NULL, even if it's zeros. Arguably, bits\ncould be omitted if jit_expressions=off or jit_tuple_deforming=off, but I don't\nsee the point.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 14 Oct 2020 14:39:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "jit and explain nontext" }, { "msg_contents": "On Thu, 15 Oct 2020 at 08:39, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> /* don't print information if no JITing happened */\n> if (!ji || ji->created_functions == 0)\n> return;\n>\n> This applies even when (es->format != EXPLAIN_FORMAT_TEXT), which I think is\n> wrong. Jit use can be determined by cost, so I think jit details should be\n> shown in non-text format whenever ji!=NULL, even if it's zeros. Arguably, bits\n> could be omitted if jit_expressions=off or jit_tuple_deforming=off, but I don't\n> see the point.\n\nJust for some reference. Some wisdom was shared in [1], which made a\nlot of sense to me.\n\nIf we apply that, then we just need to decide if displaying any jit\nrelated fields without any jitted expressions is relevant.\n\nI'm a bit undecided.\n\n[1] https://www.postgresql.org/message-id/2276865.1593102811%40sss.pgh.pa.us\n\n\n", "msg_date": "Thu, 15 Oct 2020 14:02:15 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Just for some reference. 
Some wisdom was shared in [1], which made a\n> lot of sense to me.\n> If we apply that, then we just need to decide if displaying any jit\n> related fields without any jitted expressions is relevant.\n\nHmm, I dunno if my opinion counts as \"wisdom\", but what I was arguing for\nthere was that we should print stuff if it's potentially invoked by a\nrun-time decision, but not if it was excluded at plan time. I'm not\ntotally clear on whether jitting decisions are fixed by the plan tree\n(including its cost values) or if the executor can make different\ndecisions in different executions of the identical plan tree.\nIf the latter, then I agree with Justin that this is a bug.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Oct 2020 21:15:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On Thu, 15 Oct 2020 at 14:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Just for some reference. Some wisdom was shared in [1], which made a\n> > lot of sense to me.\n> > If we apply that, then we just need to decide if displaying any jit\n> > related fields without any jitted expressions is relevant.\n>\n> Hmm, I dunno if my opinion counts as \"wisdom\", but what I was arguing for\n> there was that we should print stuff if it's potentially invoked by a\n> run-time decision, but not if it was excluded at plan time. 
I'm not\n> totally clear on whether jitting decisions are fixed by the plan tree\n> (including its cost values) or if the executor can make different\n> decisions in different executions of the identical plan tree.\n> If the latter, then I agree with Justin that this is a bug.\n\nAs far as I know, the only exception where the executor overwrites the\nplanner's decision is in nodeValuesscan.c where it turns jit off\nbecause each VALUES will get evaluated just once, which would be a\nwaste of effort to JIT.\n\nApart from that the choice is baked in by the planner and set in\nPlannedStmt.jitFlags.\n\nDavid\n\n\n", "msg_date": "Thu, 15 Oct 2020 14:23:01 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On Thu, Oct 15, 2020 at 02:23:01PM +1300, David Rowley wrote:\n> On Thu, 15 Oct 2020 at 14:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > David Rowley <dgrowleyml@gmail.com> writes:\n> > > Just for some reference. Some wisdom was shared in [1], which made a\n> > > lot of sense to me.\n> > > If we apply that, then we just need to decide if displaying any jit\n> > > related fields without any jitted expressions is relevant.\n> >\n> > Hmm, I dunno if my opinion counts as \"wisdom\", but what I was arguing for\n> > there was that we should print stuff if it's potentially invoked by a\n> > run-time decision, but not if it was excluded at plan time. 
I'm not\n> > totally clear on whether jitting decisions are fixed by the plan tree\n> > (including its cost values) or if the executor can make different\n> > decisions in different executions of the identical plan tree.\n> > If the latter, then I agree with Justin that this is a bug.\n> \n> As far as I know, the only exception where the executor overwrites the\n> planner's decision is in nodeValuesscan.c where it turns jit off\n> because each VALUES will get evaluated just once, which would be a\n> waste of effort to JIT.\n> \n> Apart from that the choice is baked in by the planner and set in\n> PlannedStmt.jitFlags.\n\nWhat about the GUCs themselves ?\n\nThey can change after planning, which means a given execution of a plan might\nor might not use jit.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 14 Oct 2020 20:43:26 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On Thu, 15 Oct 2020 at 14:43, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Oct 15, 2020 at 02:23:01PM +1300, David Rowley wrote:\n> > On Thu, 15 Oct 2020 at 14:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Hmm, I dunno if my opinion counts as \"wisdom\", but what I was arguing for\n> > > there was that we should print stuff if it's potentially invoked by a\n> > > run-time decision, but not if it was excluded at plan time. 
I'm not\n> > > totally clear on whether jitting decisions are fixed by the plan tree\n> > > (including its cost values) or if the executor can make different\n> > > decisions in different executions of the identical plan tree.\n> > > If the latter, then I agree with Justin that this is a bug.\n> >\n> > As far as I know, the only exception where the executor overwrites the\n> > planner's decision is in nodeValuesscan.c where it turns jit off\n> > because each VALUES will get evaluated just once, which would be a\n> > waste of effort to JIT.\n> >\n> > Apart from that the choice is baked in by the planner and set in\n> > PlannedStmt.jitFlags.\n>\n> What about the GUCs themselves ?\n>\n> They can change after planning, which means a given execution of a plan might\n> or might not use jit.\n\nThat's a pretty good point. If we do SET enable_sort TO off; then\ncached plans are unaffected. That's not the case when someone does\nSET jit TO off; as we'll check that in provider_init() during\nexecution. Although, switching jit back on again works differently.\nIf the planner saw it was off then switching it on again won't have\nexisting plans use it. That's slightly weird, but perhaps it was done\nthat way to ensure there was a hard off switch. You might want to\nensure that queries don't break if there was some problem\nwith LLVM libraries.\n\nDavid\n\n\n", "msg_date": "Thu, 15 Oct 2020 14:51:38 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "Hi,\n\nOn 2020-10-15 14:51:38 +1300, David Rowley wrote:\n> That's a pretty good point. If we do SET enable_sort TO off; then\n> cached plans are unaffected.\n\nIn contrast to that we do however use the current work_mem for\nexecution, I believe. 
Which could be fairly nasty, if a plan was made\nfor a huge work_mem, for example.\n\n\n> That's not the case when someone does SET jit TO off; as we'll check\n> that in provider_init() during execution. Although, switching jit\n> back on again works differently. If the planner saw it was off then\n> switching it on again won't have existing plans use it. That's\n> slightly weird, but perhaps it was done that way to ensure there was a\n> hard off switch.\n\nIt was motivated by not wanting to just enable JIT for queries that were\nprepared within something like SET LOCAL jit=off;PREPARE; RESET\njit;. I'm open to revising it, but that's where it's coming from.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Oct 2020 16:00:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On Thu, Oct 15, 2020 at 02:51:38PM +1300, David Rowley wrote:\n> On Thu, 15 Oct 2020 at 14:43, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Thu, Oct 15, 2020 at 02:23:01PM +1300, David Rowley wrote:\n> > > On Thu, 15 Oct 2020 at 14:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > Hmm, I dunno if my opinion counts as \"wisdom\", but what I was arguing for\n> > > > there was that we should print stuff if it's potentially invoked by a\n> > > > run-time decision, but not if it was excluded at plan time. 
I'm not\n> > > > totally clear on whether jitting decisions are fixed by the plan tree\n> > > > (including its cost values) or if the executor can make different\n> > > > decisions in different executions of the identical plan tree.\n> > > > If the latter, then I agree with Justin that this is a bug.\n> > >\n> > > As far as I know, the only exception where the executor overwrites the\n> > > planner's decision is in nodeValuesscan.c where it turns jit off\n> > > because each VALUES will get evaluated just once, which would be a\n> > > waste of effort to JIT.\n> > >\n> > > Apart from that the choice is baked in by the planner and set in\n> > > PlannedStmt.jitfFlags.\n> >\n> > What about the GUCs themselves ?\n> >\n> > They can change after planning, which means a given execution of a plan might\n> > or might not use jit.\n> \n> That's a pretty good point.\n\nAdded at: https://commitfest.postgresql.org/30/2766/\n\ndiff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c\nindex 41317f1837..7345971507 100644\n--- a/src/backend/commands/explain.c\n+++ b/src/backend/commands/explain.c\n@@ -839,7 +839,8 @@ ExplainPrintJIT(ExplainState *es, int jit_flags, JitInstrumentation *ji)\n \tinstr_time\ttotal_time;\n \n \t/* don't print information if no JITing happened */\n-\tif (!ji || ji->created_functions == 0)\n+\tif (!ji || (ji->created_functions == 0 &&\n+\t\t\tes->format == EXPLAIN_FORMAT_TEXT))\n \t\treturn;\n \n \t/* calculate total time */\n-- \n2.17.0", "msg_date": "Sat, 17 Oct 2020 14:21:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On Sun, 18 Oct 2020 at 08:21, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> /* don't print information if no JITing happened */\n> - if (!ji || ji->created_functions == 0)\n> + if (!ji || (ji->created_functions == 0 &&\n> + es->format == EXPLAIN_FORMAT_TEXT))\n> return;\n\nIsn't that comment now outdated?\n\nI imagine 
something more like; /* Only show JIT details when we jitted\nsomething or when in non-text mode */ might be better after making\nthat code change.\n\nDavid\n\n\n", "msg_date": "Mon, 19 Oct 2020 11:20:16 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On 2020-10-17 21:21, Justin Pryzby wrote:\n> Added at:https://commitfest.postgresql.org/30/2766/\n> \n> diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c\n> index 41317f1837..7345971507 100644\n> --- a/src/backend/commands/explain.c\n> +++ b/src/backend/commands/explain.c\n> @@ -839,7 +839,8 @@ ExplainPrintJIT(ExplainState *es, int jit_flags, JitInstrumentation *ji)\n> \tinstr_time\ttotal_time;\n> \n> \t/* don't print information if no JITing happened */\n> -\tif (!ji || ji->created_functions == 0)\n> +\tif (!ji || (ji->created_functions == 0 &&\n> +\t\t\tes->format == EXPLAIN_FORMAT_TEXT))\n> \t\treturn;\n> \n> \t/* calculate total time */\n\nCan you show an output example of where this patch makes a difference? 
\nJust from reading the description, I would expect some kind of \nadditional JIT-related output from something like\n\nEXPLAIN (FORMAT YAML) SELECT 1;\n\nbut I don't see anything.\n\n\n\n", "msg_date": "Fri, 20 Nov 2020 16:56:38 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On Fri, Nov 20, 2020 at 04:56:38PM +0100, Peter Eisentraut wrote:\n> On 2020-10-17 21:21, Justin Pryzby wrote:\n> > Added at:https://commitfest.postgresql.org/30/2766/\n> > \n> > diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c\n> > index 41317f1837..7345971507 100644\n> > --- a/src/backend/commands/explain.c\n> > +++ b/src/backend/commands/explain.c\n> > @@ -839,7 +839,8 @@ ExplainPrintJIT(ExplainState *es, int jit_flags, JitInstrumentation *ji)\n> > \tinstr_time\ttotal_time;\n> > \t/* don't print information if no JITing happened */\n> > -\tif (!ji || ji->created_functions == 0)\n> > +\tif (!ji || (ji->created_functions == 0 &&\n> > +\t\t\tes->format == EXPLAIN_FORMAT_TEXT))\n> > \t\treturn;\n> > \t/* calculate total time */\n> \n> Can you show an output example of where this patch makes a difference? 
Just\n> from reading the description, I would expect some kind of additional\n> JIT-related output from something like\n> \n> EXPLAIN (FORMAT YAML) SELECT 1;\n\nIt matters if it was planned with jit but executed without jit.\n\npostgres=# DEALLOCATE p; SET jit=on; SET jit_above_cost=0; prepare p as select from generate_series(1,9); explain(format yaml) execute p; SET jit=off; explain(format yaml) execute p;\n\nPatched shows this for both explains:\n JIT: +\n Functions: 3 +\n\nUnpatched shows only in the first case.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 20 Nov 2020 10:16:22 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On 2020-11-20 17:16, Justin Pryzby wrote:\n> It matters if it was planned with jit but executed without jit.\n> \n> postgres=# DEALLOCATE p; SET jit=on; SET jit_above_cost=0; prepare p as select from generate_series(1,9); explain(format yaml) execute p; SET jit=off; explain(format yaml) execute p;\n> \n> Patched shows this for both explains:\n> JIT: +\n> Functions: 3 +\n> \n> Unpatched shows only in the first case.\n\nIn this context, I don't see the point of this change. 
If you set \njit=off explicitly, then there is no need to clutter the EXPLAIN output \nwith a bunch of zeroes about JIT.\n\n\n", "msg_date": "Sat, 21 Nov 2020 08:39:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On Sat, Nov 21, 2020 at 08:39:11AM +0100, Peter Eisentraut wrote:\n> On 2020-11-20 17:16, Justin Pryzby wrote:\n> > It matters if it was planned with jit but executed without jit.\n> > \n> > postgres=# DEALLOCATE p; SET jit=on; SET jit_above_cost=0; prepare p as select from generate_series(1,9); explain(format yaml) execute p; SET jit=off; explain(format yaml) execute p;\n> > \n> > Patched shows this for both explains:\n> > JIT: +\n> > Functions: 3 +\n> > \n> > Unpatched shows only in the first case.\n> \n> In this context, I don't see the point of this change. If you set jit=off\n> explicitly, then there is no need to clutter the EXPLAIN output with a bunch\n> of zeroes about JIT.\n\nI think the idea is that someone should be able to parse the YAML/XML/other\noutput by writing something like a['JIT']['Functions'] rather than something\nlike a.get('JIT',{}).get('Functions',0)\n\nThe standard seems to be that parameters that can change during execution\nshould change the *values* in the non-text output, but the *keys* should not\ndisappear just because (for example) parallel workers weren't available, or\nsomeone (else) turned off jit. 
We had discussion about this earlier in the\nyear:\nhttps://www.postgresql.org/message-id/20200728033622.GC20393@telsasoft.com\n\n(Since it's machine-readable output, Key: 0 is Consistency and Completeness,\nnot Clutter.)\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 21 Nov 2020 10:26:00 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On Sat, Nov 21, 2020 at 10:26:00AM -0600, Justin Pryzby wrote:\n> On Sat, Nov 21, 2020 at 08:39:11AM +0100, Peter Eisentraut wrote:\n> > On 2020-11-20 17:16, Justin Pryzby wrote:\n> > > It matters if it was planned with jit but executed without jit.\n> > > \n> > > postgres=# DEALLOCATE p; SET jit=on; SET jit_above_cost=0; prepare p as select from generate_series(1,9); explain(format yaml) execute p; SET jit=off; explain(format yaml) execute p;\n> > > \n> > > Patched shows this for both explains:\n> > > JIT: +\n> > > Functions: 3 +\n> > > \n> > > Unpatched shows only in the first case.\n> > \n> > In this context, I don't see the point of this change. If you set jit=off\n> > explicitly, then there is no need to clutter the EXPLAIN output with a bunch\n> > of zeroes about JIT.\n> \n> I think the idea is that someone should be able to parse the YAML/XML/other\n> output by writing something like a['JIT']['Functions'] rather than something\n> like a.get('JIT',{}).get('Functions',0)\n> \n> The standard seems to be that parameters that can change during execution\n> should change the *values* in the non-text output, but the *keys* should not\n> disappear just because (for example) parallel workers weren't available, or\n> someone (else) turned off jit. 
We had discussion about this earlier in the\n> year:\n> https://www.postgresql.org/message-id/20200728033622.GC20393@telsasoft.com\n> \n> (Since it's machine-readable output, Key: 0 is Consistency and Completeness,\n> not Clutter.)\n\nIf there's no interest or agreement in it, we should just close it.\nI have no personal need for it, but noticed it in passing.", "msg_date": "Mon, 30 Nov 2020 10:59:32 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sat, Nov 21, 2020 at 10:26:00AM -0600, Justin Pryzby wrote:\n>> On Sat, Nov 21, 2020 at 08:39:11AM +0100, Peter Eisentraut wrote:\n>>> In this context, I don't see the point of this change. If you set jit=off\n>>> explicitly, then there is no need to clutter the EXPLAIN output with a bunch\n>>> of zeroes about JIT.\n\n> If there's no interest or agreement in it, we should just close it.\n> I have no personal need for it, but noticed it in passing.\n\nI dug around a bit and saw that essentially all of the JIT control\nGUCs are consulted only at plan time (cf standard_planner, which\nfills PlannedStmt.jitFlags based on the then-active settings).\nSo the only thing that really counts as a \"run time decision\"\nhere is that if you set jit = off between planning and execution,\nor if we fail to load the JIT provider at all, then you'll get\nno JITting even though the planner expected it to happen.\n\nOn balance I agree with Peter's opinion that this isn't worth\nchanging. 
I would be for the patch if the executor had a little\nmore freedom of action, but as things stand there's not much\nfreedom there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 15 Jan 2021 14:53:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: jit and explain nontext" }, { "msg_contents": "On Fri, Jan 15, 2021 at 02:53:49PM -0500, Tom Lane wrote:\n> On balance I agree with Peter's opinion that this isn't worth\n> changing. I would be for the patch if the executor had a little\n> more freedom of action, but as things stand there's not much\n> freedom there.\n\nThanks for looking\nCF: withdrawn.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 15 Jan 2021 14:25:46 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: jit and explain nontext" } ]
[ { "msg_contents": "As already noted in another thread, buildfarm member chipmunk\nfailed today with an unexpected Assert [1]. I've now reproduced\nthis by manually killing the postmaster during the regression\ntests. The stack trace looks like\n\n#0 0x0000ffff91507598 in raise () from /lib64/libc.so.6\n#1 0x0000ffff914f3da0 in abort () from /lib64/libc.so.6\n#2 0x0000000000904bd0 in ExceptionalCondition (conditionName=conditionName@entry=0xa5ba88 \"entry->trans == NULL\", \n errorType=errorType@entry=0x95da10 \"FailedAssertion\", fileName=fileName@entry=0xa5b1b8 \"pgstat.c\", \n lineNumber=lineNumber@entry=909) at assert.c:69\n#3 0x0000000000749e64 in pgstat_report_stat (force=force@entry=true) at pgstat.c:909\n#4 0x0000000000749ee8 in pgstat_beshutdown_hook (code=<optimized out>, arg=<optimized out>) at pgstat.c:3248\n#5 0x00000000007b5cd0 in shmem_exit (code=code@entry=1) at ipc.c:272\n#6 0x00000000007b5dc4 in proc_exit_prepare (code=code@entry=1) at ipc.c:194\n#7 0x00000000007b5e74 in proc_exit (code=code@entry=1) at ipc.c:107\n#8 0x0000000000908c8c in errfinish (filename=<optimized out>, filename@entry=0x976260 \"parallel.c\", lineno=lineno@entry=885, \n funcname=funcname@entry=0x9765a8 <__func__.10> \"WaitForParallelWorkersToExit\") at elog.c:578\n#9 0x0000000000521ad4 in WaitForParallelWorkersToExit (pcxt=pcxt@entry=0x16af54f0) at parallel.c:885\n#10 0x0000000000522af8 in DestroyParallelContext (pcxt=0x16af54f0) at parallel.c:958\n#11 0x00000000005230cc in AtEOXact_Parallel (isCommit=isCommit@entry=false) at parallel.c:1231\n#12 0x0000000000530588 in AbortTransaction () at xact.c:2702\n#13 0x0000000000531234 in AbortOutOfAnyTransaction () at xact.c:4623\n#14 0x0000000000915cbc in ShutdownPostgres (code=<optimized out>, arg=<optimized out>) at postinit.c:1195\n#15 0x00000000007b5c78 in shmem_exit (code=code@entry=1) at ipc.c:239\n#16 0x00000000007b5dc4 in proc_exit_prepare (code=code@entry=1) at ipc.c:194\n#17 0x00000000007b5e74 in proc_exit 
(code=code@entry=1) at ipc.c:107\n#18 0x00000000007b7888 in WaitEventSetWaitBlock (nevents=1, occurred_events=0xfffff82c41b8, cur_timeout=-1, set=0x16a6b0d8)\n at latch.c:1429\n#19 WaitEventSetWait (set=0x16a6b0d8, timeout=-1, timeout@entry=0, occurred_events=occurred_events@entry=0xfffff82c41b8, \n nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=134217734) at latch.c:1309\n#20 0x00000000007b7994 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=33, timeout=timeout@entry=0, \n wait_event_info=wait_event_info@entry=134217734) at latch.c:411\n#21 0x0000000000671ccc in gather_readnext (gatherstate=<optimized out>) at nodeGather.c:386\n#22 gather_getnext (gatherstate=0x16bc4c28) at nodeGather.c:277\n#23 ExecGather (pstate=0x16bc4c28) at nodeGather.c:227\n#24 0x0000000000668434 in ExecProcNode (node=0x16bc4c28) at ../../../src/include/executor/executor.h:244\n#25 fetch_input_tuple (aggstate=aggstate@entry=0x16bc4628) at nodeAgg.c:589\n#26 0x000000000066aee8 in agg_retrieve_direct (aggstate=0x16bc4628) at nodeAgg.c:2451\n#27 ExecAgg (pstate=0x16bc4628) at nodeAgg.c:2171\n#28 0x0000000000655a0c in ExecProcNode (node=0x16bc4628) at ../../../src/include/executor/executor.h:244\n#29 ExecutePlan (execute_once=<optimized out>, dest=0x16bce798, direction=<optimized out>, numberTuples=0, \n sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x16bc4628, estate=0x16b65eb0)\n at execMain.c:1539\n#30 standard_ExecutorRun (queryDesc=0x16a90ca0, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n\nFundamentally, pgstat_report_stat() is Assert'ing that it can never\nbe called within an active transaction (i.e., without AtEOXact_PgStat\nhaving been called first). 
That fails in this scenario because while\nwe are trying to abort the active transaction, AtEOXact_Parallel\nsuffers a new FATAL error, so we abandon the attempt to run the\nShutdownPostgres on-exit hook and move on to the next one.\nWhen we get to pgstat_beshutdown_hook, that fails because\nAtEOXact_PgStat was never run.\n\nWe could decide that this is just an overly-optimistic assertion\nand fix it locally in pgstat.c. However, it seems to me that we\nhave bigger problems here. Were it not for the assertion failure,\nwe'd (probably) eventually get through all the on_proc_exit callbacks\nand do exit(1), which the postmaster would think is fine. But in\npoint of fact, we've missed out doing most of AbortTransaction().\nIs it really safe to allow the rest of the system to keep running\nin that scenario?\n\n(Yeah, I realize that with the postmaster gone, there's no \"rest\nof the system\" to worry about. But the same scenario could arise\nfrom elog(FATAL) triggered by a less dire failure.)\n\nSo what I'm wondering, basically, is if an elog(ERROR) or elog(FATAL)\noccurring after we've started to run proc_exit() should be promoted\nto a PANIC. If we don't do that, how can we convince ourselves that\nthe system is left in an acceptable state?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2020-10-14%2000%3A04%3A08\n\n\n", "msg_date": "Wed, 14 Oct 2020 17:37:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Failures during FATAL exit" } ]
[ { "msg_contents": "Restore replication protocol's duplicate command tags\n\nI removed the duplicate command tags for START_REPLICATION inadvertently\nin commit 07082b08cc5d, but the replication protocol requires them. The\nfact that the replication protocol was broken was not noticed because\nall our test cases use an optimized code path that exits early, failing\nto verify that the behavior is correct for non-optimized cases. Put\nthem back.\n\nAlso document this protocol quirk.\n\nAdd a test case that shows the failure. It might still succeed even\nwithout the patch when run on a fast enough server, but it suffices to\nshow the bug in enough cases that it would be noticed in buildfarm.\n\nAuthor: Álvaro Herrera <alvherre@alvh.no-ip.org>\nReported-by: Henry Hinze <henry.hinze@gmail.com>\nReviewed-by: Petr Jelínek <petr.jelinek@2ndquadrant.com>\nDiscussion: https://postgr.es/m/16643-eaadeb2a1a58d28c@postgresql.org\n\nBranch\n------\nREL_13_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/72e43fc313e93c95704c574bcf98805805668063\n\nModified Files\n--------------\ndoc/src/sgml/protocol.sgml | 8 +++--\nsrc/backend/replication/logical/worker.c | 1 -\nsrc/backend/replication/walsender.c | 3 +-\nsrc/test/subscription/t/100_bugs.pl | 55 +++++++++++++++++++++++++++++++-\n4 files changed, 61 insertions(+), 6 deletions(-)", "msg_date": "Wed, 14 Oct 2020 23:16:27 +0000", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "pgsql: Restore replication protocol's duplicate command tags" }, { "msg_contents": "On 2020-Oct-14, Alvaro Herrera wrote:\n\n> Add a test case that shows the failure. It might still succeed even\n> without the patch when run on a fast enough server, but it suffices to\n> show the bug in enough cases that it would be noticed in buildfarm.\n\nHm, this failed on sidewinder. 
I think the \"wait for catchup\" stuff in\nlogical replication is broken; I added a wait for sync workers to go\naway after the normal wait_for_catchup, but evidently it is not\nsufficient even with that.\n\n\n\n", "msg_date": "Wed, 14 Oct 2020 21:36:54 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: pgsql: Restore replication protocol's duplicate command tags" }, { "msg_contents": "On Thu, Oct 15, 2020 at 6:07 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Oct-14, Alvaro Herrera wrote:\n>\n> > Add a test case that shows the failure. It might still succeed even\n> > without the patch when run on a fast enough server, but it suffices to\n> > show the bug in enough cases that it would be noticed in buildfarm.\n>\n> Hm, this failed on sidewinder.\n>\n\nNow, curculio [1] also seems to be failing for the same reason.\n\n> I think the \"wait for catchup\" stuff in\n> logical replication is broken; I added a wait for sync workers to go\n> away after the normal wait_for_catchup, but evidently it is not\n> sufficient even with that.\n>\n>\n\nFor the initial table sync, we use below in some of the tests (see\n001_rep_changes):\n\n# Also wait for initial table sync to finish\nmy $synced_query =\n \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\nIN ('r', 's');\";\n$node_subscriber->poll_query_until('postgres', $synced_query)\n or die \"Timed out while waiting for subscriber to synchronize data\";\n\nIs it not possible to use the same thing in this test as well?\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2020-10-15%2005%3A30%3A43\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 15 Oct 2020 12:37:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Restore replication protocol's duplicate command tags" }, { "msg_contents": "On 2020-Oct-15, Amit Kapila wrote:\n\n> On Thu, Oct 15, 
2020 at 6:07 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2020-Oct-14, Alvaro Herrera wrote:\n\n> > Hm, this failed on sidewinder.\n> \n> Now, curculio [1] also seems to be failing for the same reason.\n\nYeah ... and now they're both green. Anyway clearly the test is\nunstable.\n\n> For the initial table sync, we use below in some of the tests (see\n> 001_rep_changes):\n\nAh yeah, thanks, this should work. Pushed, we'll see how it goes.\n\nThanks,\n\n\n", "msg_date": "Thu, 15 Oct 2020 09:52:21 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: pgsql: Restore replication protocol's duplicate command tags" } ]
[ { "msg_contents": "Hi all,\n\nIt happens that pgcrypto has the following leak if a digest cannot be\ninitialized:\n--- a/contrib/pgcrypto/openssl.c\n+++ b/contrib/pgcrypto/openssl.c\n@@ -202,6 +202,7 @@ px_find_digest(const char *name, PX_MD **res)\n }\n if (EVP_DigestInit_ex(ctx, md, NULL) == 0)\n {\n+ EVP_MD_CTX_destroy(ctx);\n pfree(digest);\n return -1;\n }\n\nThat's a bit annoying, because this memory is allocated directly by\nOpenSSL, and Postgres does not know how to free it until it gets\nregistered in the list of open_digests that would be used by the\ncleanup callback, so I think that we had better back-patch this fix.\n\nThoughts?\n--\nMichael", "msg_date": "Thu, 15 Oct 2020 16:22:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Possible memory leak in pgcrypto with EVP_MD_CTX" }, { "msg_contents": "On Thu, Oct 15, 2020 at 04:22:12PM +0900, Michael Paquier wrote:\n> That's a bit annoying, because this memory is allocated directly by\n> OpenSSL, and Postgres does not know how to free it until it gets\n> registered in the list of open_digests that would be used by the\n> cleanup callback, so I think that we had better back-patch this fix.\n\nHearing nothing, I have fixed the issue and back-patched it.\n\nWhile looking at it, I have noticed that e2838c58 has never actually\nworked with OpenSSL 0.9.6 because we lack an equivalent for\nEVP_MD_CTX_destroy() and EVP_MD_CTX_create(). This issue would be\neasy enough to fix as the size of EVP_MD_CTX is known in those\nversions of OpenSSL, but as we have heard zero complaints on this\nmatter I have left that out in the 9.5 and 9.6 branches. 
Back in\n2016, even 0.9.8 was barely used, so I can't even imagine somebody\nusing 0.9.6 with the most recent PG releases.\n--\nMichael", "msg_date": "Mon, 19 Oct 2020 10:08:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Possible memory leak in pgcrypto with EVP_MD_CTX" } ]
[ { "msg_contents": "In cached_plan_cost, we do consider the cost of planning, with the following\nalgorithm.\n\nint nrelations = list_length(plannedstmt->rtable);\n\nresult += 1000.0 * cpu_operator_cost * (nrelations + 1);\n\nI run into a case where 10 relations are joined, 3 of them have\nhundreds of partitions. at last nrelations = 421 for this case.\n\n| Plan Type | Estimate Cost | Real Execution Time(ms) | Real Planning\nTime(ms) |\n| Custom Plan | 100867.52 | 13 | 665.816\n |\n| Generic Plan | 104941.86 | 33(ms) | 0.76 (used\ncached plan) |\n\nAt last, it chooses the custom plan all the time. so the final performance\nis\n678ms+, however if it chooses the generic plan, it is 34ms in total. It\nlooks\nto me that the planning cost is estimated improperly.\n\nSince we do know the planning time exactly for a custom plan when we call\ncached_plan_cost, if we have a way to convert the real timing to cost, then\nwe\nprobably can fix this issue.\n\nThe cost unit is seq_page_scan, looks we know the latency of seq_page\nread, we can build such mapping. however, the correct seq_page_cost\ndetection needs we clear file system cache at least which is\nsomething we can't do in pg kernel[1]. So any suggestion on this topic?\n\nnote that both plans have no plan time partition prune and have run time\npartition prune, so the issue at [2] probably doesn't impact this.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/20191127164821.lspxyrf3c5r6zu5n%40development#cf34e9db80326709af892ac64bc4cb45\n\n[2]\nhttps://www.postgresql.org/message-id/CAKU4AWqUJmQdu9qf_pXxBYETkiXhTaXAQ_qtX7wxeLw27phdOw@mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 15 Oct 2020 21:12:19 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "improve the algorithm cached_plan_cost with real planning time?" }, { "msg_contents": "On Thu, Oct 15, 2020 at 9:12 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n> In cached_plan_cost, we do consider the cost of planning, with the\n> following\n> algorithm.\n>\n> int nrelations = list_length(plannedstmt->rtable);\n>\n> result += 1000.0 * cpu_operator_cost * (nrelations + 1);\n>\n> I run into a case where 10 relations are joined, 3 of them have\n> hundreds of partitions. at last nrelations = 421 for this case.\n>\n> | Plan Type | Estimate Cost | Real Execution Time(ms) | Real Planning\n> Time(ms) |\n> | Custom Plan | 100867.52 | 13 | 665.816\n> |\n> | Generic Plan | 104941.86 | 33(ms) | 0.76 (used\n> cached plan) |\n>\n> At last, it chooses the custom plan all the time. so the final performance\n> is\n> 678ms+, however if it chooses the generic plan, it is 34ms in total. It\n> looks\n> to me that the planning cost is estimated improperly.\n>\n> Since we do know the planning time exactly for a custom plan when we call\n> cached_plan_cost, if we have a way to convert the real timing to cost,\n> then we\n> probably can fix this issue.\n>\n> The cost unit is seq_page_scan, looks we know the latency of seq_page\n> read, we can build such mapping. however, the correct seq_page_cost\n> detection needs we clear file system cache at least which is\n> something we can't do in pg kernel[1]. So any suggestion on this topic?\n>\n\nOne of the simplest methods might be to just add a new GUC\nseq_page_latency to the user (and we can also provide tools to user\nto detect their IO latency [1]) If user set seq_page_latency, then we can\ndo the timing to cost translation. I got the seq_page_latency = 8us on\nmy local SSD environment before, if the above real case have similar\nnumber, then the planning cost should be 83227 while the current\nalgorithm sets it to 1055. 83227 in this case is big enough to choose\nthe generic plan.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/20191127164821.lspxyrf3c5r6zu5n%40development#cf34e9db80326709af892ac64bc4cb45\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 16 Oct 2020 03:18:19 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: improve the algorithm cached_plan_cost with real planning time?" } ]
[ { "msg_contents": "Hi Hackers,\n\nFirst, thanks for working on such a great database! :)\n\nWe're currently trying to automate our PostgreSQL setup by using Ansible.\nWe have an Ansible role for which we can specify supplemental extensions\nfor which a deployment must install.\n\nTo keep it simple across deployed version we simply ask to specify\nextension list, as simple as:\n\n - pgaudit\n - postgis\n - wal2json\n - ... and so on ...\n\n\nIn the installation steps, we simply install all of these packages and add\nthe version to the name. But it appears that some package names are either:\n\n - <package>_<version>\n - <package><version>\n\nSo, it is impossible to simply ask the package/extension name and\nprogrammatically add the version using a common pattern. I think that if we\nuse the underscore to specify the version, it should be the same across all\nversions.\n\nMaybe I'm missing something\nThanks\nBruno Lavoie", "msg_date": "Thu, 15 Oct 2020 10:23:33 -0400", "msg_from": "Bruno Lavoie <bl@brunol.com>", "msg_from_op": true, "msg_subject": "Packaging - Packages names consistency (RPM)" }, { "msg_contents": "Hi,\n\nRPM packager speaking: I agree that this is very annoying, and this is also in my todo list. Let me try to prioritize it.\n\nRegards, Devrim\n\nOn 15 October 2020 17:23:33 GMT+03:00, Bruno Lavoie <bl@brunol.com> wrote:\n>Hi Hackers,\n>\n>First, thanks for working on such a great database! :)\n>\n>We're currently trying to automate our PostgreSQL setup by using\n>Ansible.\n>We have an Ansible role for which we can specify supplemental\n>extensions\n>for which a deployment must install.\n>\n>To keep it simple across deployed version we simply ask to specify\n>extension list, as simple as:\n>\n> - pgaudit\n> - postgis\n> - wal2json\n> - ... and so on ...\n>\n>\n>In the installation steps, we simply install all of these packages and\n>add\n>the version to the name. But it appears that some package names are\n>either:\n>\n> - <package>_<version>\n> - <package><version>\n>\n>So, it is impossible to simply ask the package/extension name and\n>programmatically add the version using a common pattern. I think that\n>if we\n>use the underscore to specify the version, it should be the same across\n>all\n>versions.\n>\n>Maybe I'm missing something\n>Thanks\n>Bruno Lavoie\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.", "msg_date": "Thu, 15 Oct 2020 19:26:34 +0300", "msg_from": "=?ISO-8859-1?Q?Devrim_G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": false, "msg_subject": "Re: Packaging - Packages names consistency (RPM)" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\nImprove psql \\df to choose functions by their arguments\n\n== OVERVIEW\n\nHaving to scroll through same-named functions with different argument types\nwhen you know exactly which one you want is annoying at best, error causing\nat worst. This patch enables a quick narrowing of functions with the\nsame name but different arguments. For example, to see the full details\nof a function names \"myfunc\" with a TEXT argument, but not showing\nthe version of \"myfunc\" with a BIGINT argument, one can now do:\n\npsql=# \\df myfunc text\n\nFor this, we are fairly liberal in what we accept, and try to be as\nintuitive as possible.\n\nFeatures:\n\n* Type names are case insensitive. Whitespace is optional, but quoting is\nrespected:\n\ngreg=# \\df myfunc text \"character varying\" INTEGER\n\n* Abbreviations of common types is permitted (because who really likes\nto type out \"character varying\"?), so the above could also be written as:\n\ngreg=# \\df myfunc text varchar int\n\n* The matching is greedy, so you can see everything matching a subset:\n\ngreg=# \\df myfunc timestamptz\n List of functions\n Schema | Name | Result data type | Argument data types\n | Type\n-\n--------+--------+------------------+-------------------------------------------+------\n public | myfunc | void | timestamp with time zone\n | func\n public | myfunc | void | timestamp with time zone, bigint\n | func\n public | myfunc | void | timestamp with time zone, bigint,\nboolean | func\n public | myfunc | void | timestamp with time zone, integer\n | func\n public | myfunc | void | timestamp with time zone, text, cidr\n | func\n(5 rows)\n\n* The appearance of a closing paren indicates we do not want the greediness:\n\ngreg=# \\df myfunc (timestamptz, bigint)\n List of functions\n Schema | Name | Result data type | Argument data types |\nType\n-\n--------+--------+------------------+----------------------------------+------\n 
public | myfunc | void | timestamp with time zone, bigint |\nfunc\n(1 row)\n\n\n== TAB COMPLETION:\n\nI'm not entirely happy with this, but I figure piggybacking\nonto COMPLETE_WITH_FUNCTION_ARG is better than nothing at all.\nIdeally we'd walk prev*_wd to refine the returned list, but\nthat's an awful lot of complexity for very little gain, and I think\nthe current behavior of showing the complete list of args each time\nshould suffice.\n\n\n== DOCUMENTATION:\n\nThe new feature is briefly mentioned: wordsmithing help in the\nsgml section is appreciated. I'm not sure how many of the above features\nneed to be documented in detail.\n\nRegarding psql/help.c, I don't think this really warrants a change there.\nAs it is, we've gone through great lengths to keep this overloaded\nbackslash\ncommand left justified with the rest!\n\n\n== TESTS:\n\nI put this into psql.c, seems the best place. Mostly testing out\nbasic functionality, quoting, and the various abbreviations. Not much\nelse to test, near as I can tell, as this is a pure convienence addition\nand shouldn't affect anything else. Any extra words after a function name\nfor \\df was previously treated as an error.\n\n\n== IMPLEMENTATION:\n\nRather than messing with psqlscanslash, we simply slurp in the entire rest\nof the line via psql_scan_slash_option (all of which was previously\nignored).\nThis is passed to describeFunction, which then uses strtokx to break it\ninto tokens. We look for a match by comparing the current proargtypes\nentry,\ncasted to text, against the lowercase version of the token found by\nstrtokx.\nAlong the way, we convert things like \"timestamptz\" to the official version\n(i.e. \"timestamp with time zone\"). 
If any of the tokens start with a\nclosing\nparen, we immediately stop parsing and set pronargs to the current number\nof valid tokens, thereby forcing a match to one (or zero) functions.\n\n6ab7a45d541f2c31c5631b811f14081bf7b22271\nv1-psql-df-pick-function-by-type.patch\n\n- --\nGreg Sabino Mullane\nPGP Key: 0x14964AC8 202010151316\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n\n-----BEGIN PGP SIGNATURE-----\n\niF0EAREDAB0WIQQlKd9quPeUB+lERbS8m5BnFJZKyAUCX4iENQAKCRC8m5BnFJZK\nyIUKAKDiv1E9KgXuSO7lE9p+ttFdk02O2ACg44lu9VdKt3IggIrPiXBPKR8C85M=\n=QPSd\n-----END PGP SIGNATURE-----", "msg_date": "Thu, 15 Oct 2020 13:21:06 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "psql \\df choose functions by their arguments" }, { "msg_contents": "On Thu, Oct 15, 2020 at 01:21:06PM -0400, Greg Sabino Mullane wrote:\n> Improve psql \\df to choose functions by their arguments\n\nI think this is a good idea.\n\nThis isn't working for arrays:\n\npostgres=# \\df aa\n public | aa | integer | integer, integer | func\n public | aa | integer | integer, integer, integer | func\n public | aa | integer | integer[], integer, integer | func\n\npostgres=# \\df aa aa int[]\n\nI think it should use the same syntax as \\sf and \\ef, which require parenthesis\nand commas, not spaces.\n\nint x = 0\nwhile ((functoken = strtokx(x++ ? NULL : funcargs, \" \\t\\n\\r\", \".,();\", \"\\\"\", 0, false, true, pset.encoding)))\n\nI think x is just used as \"initial\", so I think you should make it boolean and\nthen set is_initial = false, or similar.\n\n+ pg_strcasecmp(functoken, \"bool\") == 0 ? 
\"'boolean'\"\n\nI think writing this all within a call to appendPQExpBuffer() is excessive.\nYou can make an array or structure to search through and then append the result\nto the buffer.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 28 Oct 2020 23:26:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: psql \\df choose functions by their arguments" }, { "msg_contents": "Thank you for looking this over.\n\n\n> This isn't working for arrays:\n> ...\n> postgres=# \\df aa aa int[]\n>\n\nArrays should work as expected, I think you have one too many \"aa\" in there?\n\n\n> I think it should use the same syntax as \\sf and \\ef, which require\n> parenthesis\n> and commas, not spaces.\n>\n\nHmm, that will not allow partial matches if we require a closing parens.\nRight now both commas and parens are accepted, but optional.\n\n\n> I think x is just used as \"initial\", so I think you should make it boolean\n> and\n> then set is_initial = false, or similar.\n>\n\nGood suggestion, it is done.\n\n\n> +\n> pg_strcasecmp(functoken, \"bool\") == 0 ? \"'boolean'\"\n>\n> I think writing this all within a call to appendPQExpBuffer() is excessive.\n> You can make an array or structure to search through and then append the\n> result\n> to the buffer.\n>\n\nHmm, like a custom struct we loop through? I will look into implementing\nthat and submit a new patch.\n\nCheers,\nGreg", "msg_date": "Thu, 29 Oct 2020 20:35:20 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql \\df choose functions by their arguments" }, { "msg_contents": "Thanks for the feedback, attached is version two of the patch. Major\nchanges:\n\n* Use booleans not generic \"int x\"\n* Build a quick list of abbreviations at the top of the function\n* Add array mapping for all types\n* Removed the tab-complete bit, it was too fragile and unhelpful\n\nCheers,\nGreg", "msg_date": "Sun, 1 Nov 2020 11:40:28 -0500", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql \\df choose functions by their arguments" }, { "msg_contents": "\n> * Removed the tab-complete bit, it was too fragile and unhelpful\n\nI can’t speak for the specific patch, but tab completion of proc args for \\df, \\ef and friends has long been a desired feature of mine, particularly when you are dealing with functions with huge numbers of arguments and the same name which I have (sadly) come across many times in the wild. 
\n\nBest,\n\nDavid\n\n", "msg_date": "Sun, 1 Nov 2020 11:05:25 -0600", "msg_from": "David Christensen <david@pgguru.net>", "msg_from_op": false, "msg_subject": "Re: psql \\df choose functions by their arguments" }, { "msg_contents": "Hi\r\n\r\n(sorry forget to cc the hacklist)\r\n\r\n> Improve psql \\df to choose functions by their arguments\r\n\r\nI think this is useful.\r\n\r\nI found some comments in the patch.\r\n\r\n1.\r\n> * Abbreviations of common types is permitted (because who really likes \r\n> to type out \"character varying\"?), so the above could also be written as:\r\n\r\nsome Abbreviations of common types are not added to the type_abbreviations[] Such as:\r\n\r\nInt8 => bigint\r\nInt2 => smallint\r\nInt4 ,int => integer\r\nFloat4 => real\r\nFloat8,float,double => double precision\r\n(as same as array type)\r\n\r\nSingle array seems difficult to handle it, may be we can use double array or use a struct.\r\n\r\n2.\r\nAnd I think It's better to update '/?' info about '\\df[+]' in function slashUsage(unsigned short int pager).\r\n\r\nBest regards,\r\nhouzj\r\n\n\n", "msg_date": "Tue, 3 Nov 2020 07:59:51 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: psql \\df choose functions by their arguments" }, { "msg_contents": "Thanks for looking this over!\n\n\n> some Abbreviations of common types are not added to the\n> type_abbreviations[] Such as:\n>\n> Int8 => bigint\n>\n\nI wasn't aiming to provide a canonical list, as I personally have never\nseen anyone use int8 instead of bigint when (for example) creating a\nfunction, but I'm not strongly opposed to expanding the list.\n\nSingle array seems difficult to handle it, may be we can use double array\n> or use a struct.\n>\n\nI think the single works out okay, as this is a simple write-once variable\nthat is not likely to get updated often.\n\n\n> And I think It's better to update '/?' 
info about '\\df[+]' in function\n> slashUsage(unsigned short int pager).\n>\n\nSuggestions welcome, but it's already pretty tight in there, so I couldn't\nthink of anything:\n\n fprintf(output, _(\" \\\\dew[+] [PATTERN] list foreign-data\nwrappers\\n\"));\n fprintf(output, _(\" \\\\df[anptw][S+] [PATRN] list [only\nagg/normal/procedures/trigger/window] functions\\n\"));\n fprintf(output, _(\" \\\\dF[+] [PATTERN] list text search\nconfigurations\\n\"));\n\nThe \\df option is already our longest one, even with the silly attempt to\nshorten PATTERN :)\n\nCheers,\nGreg", "msg_date": "Tue, 3 Nov 2020 09:27:04 -0500", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql \\df choose functions by their arguments" },
{ "msg_contents": "On Sun, Nov 1, 2020 at 12:05 PM David Christensen <david@pgguru.net> wrote:\n\n>\n> I can’t speak for the specific patch, but tab completion of proc args for\n> \\df, \\ef and friends has long been a desired feature of mine, particularly\n> when you are dealing with functions with huge numbers of arguments and the\n> same name which I have (sadly) come across many times in the wild.\n>\n\nIf someone can get this working against this current patch, that would be\ngreat, but I suspect it will require some macro-jiggering in tab-complete.c\nand possibly more, so yeah, could be something to add to the todo list.\n\nCheers,\nGreg", "msg_date": "Tue, 3 Nov 2020 09:31:32 -0500", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql \\df choose functions by their arguments" },
{ "msg_contents": "Attached is the latest patch against HEAD - basically fixes a few typos.\n\nCheers,\nGreg", "msg_date": "Wed, 30 Dec 2020 13:00:24 -0500", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql \\df choose functions by their arguments" },
{ "msg_contents": "On Thu, Dec 31, 2020 at 7:01 AM Greg Sabino Mullane <htamfids@gmail.com> wrote:\n> Attached is the latest patch against HEAD - basically fixes a few typos.\n\nHi Greg,\n\nIt looks like there is a collation dependency here that causes the\ntest to fail on some systems:\n\n=== ./src/test/regress/regression.diffs ===\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/psql.out\n/tmp/cirrus-ci-build/src/test/regress/results/psql.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/psql.out 2021-01-01\n16:05:25.749692000 +0000\n+++ /tmp/cirrus-ci-build/src/test/regress/results/psql.out 2021-01-01\n16:11:28.525632000 +0000\n@@ -5094,8 +5094,8 @@\npublic | mtest | integer | double precision, double precision, integer | func\npublic | mtest | integer | integer | func\npublic | mtest | integer | integer, text | func\n- public | mtest | integer | timestamp without time zone, timestamp\nwith time zone | func\npublic | mtest | integer | time without time zone, time with time zone | func\n+ public | mtest | integer | timestamp without time zone, timestamp\nwith time zone | func\n\n\n", "msg_date": "Sat, 2 Jan 2021 19:55:25 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql 
\\df choose functions by their arguments" }, { "msg_contents": "On Sat, Jan 2, 2021 at 1:56 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> ...\n> It looks like there is a collation dependency here that causes the\n> test to fail on some systems:\n>\n\nThanks for pointing that out. I tweaked the function definitions to\nhopefully sidestep the ordering issue - attached is v4.\n\nCheers,\nGreg", "msg_date": "Wed, 6 Jan 2021 15:48:14 -0500", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql \\df choose functions by their arguments" }, { "msg_contents": "Hi\n\nI tried this patch out last year but was overrolled by Other Stuff before I got\naround to providing any feedback, and was reminded of it just now when I was\ntrying to execute \"\\df somefunction text int\" or similar, which had me\nconfused until I remembered it's not a feature yet, so it would\ncertainly be very\nwelcome to have this.\n\n2020年11月3日(火) 23:27 Greg Sabino Mullane <htamfids@gmail.com>:\n>\n> Thanks for looking this over!\n>\n>>\n>> some Abbreviations of common types are not added to the type_abbreviations[] Such as:\n>>\n>> Int8 => bigint\n>\n>\n> I wasn't aiming to provide a canonical list, as I personally have never seen\n> anyone use int8 instead of bigint when (for example) creating a function, but\n> I'm not strongly opposed to expanding the list.\n\nI have vague memories of working with \"int8\" a bit (possibly related to an\nInformix migration), anyway it seems easy enough to add them for completeness\nas someone (possibly migrating from another database) might wonder why\nit's not working.\n\nJust a small code readability suggestion - in exec_command_d(), it seems\nneater to put the funcargs declaration in a block together with the\ncode with which uses it (see attached diff).\n\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Mon, 11 Jan 2021 16:45:36 +0900", "msg_from": "Ian Lawrence 
Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql \\df choose functions by their arguments" },
{ "msg_contents": "Thanks for the feedback: new version v5 (attached) has int8, plus the\nsuggested code formatting.\n\nCheers,\nGreg", "msg_date": "Thu, 14 Jan 2021 11:45:44 -0500", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql \\df choose functions by their arguments" },
{ "msg_contents": "2021年1月15日(金) 1:46 Greg Sabino Mullane <htamfids@gmail.com>:\n\n> Thanks for the feedback: new version v5 (attached) has int8, plus the\n> suggested code formatting.\n>\n> Cheers,\n> Greg\n>\n\nThanks for the update.\n\nIn my preceding mail I meant we should add int2, int4 and int8 for\ncompleteness\n(apologies, I was a bit unclear there), as AFAICS that covers all aliases,\neven if these\nthree are less widely used.\n\nFWIW one place where these do get used in substantial numbers is in the\nregression tests themselves:\n\n $ for L in 2 4 8; do git grep int$L src/test/regress/ | wc -l; done\n 544\n 2332\n 1353\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Tue, 19 Jan 2021 11:03:34 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql \\df choose functions by their arguments" }, 
{ "msg_contents": "Ha ha ha, my bad, I am not sure why I left those out. Here is a new patch\nwith int2, int4, and int8. Thanks for the email.\n\nCheers,\nGreg", "msg_date": "Tue, 19 Jan 2021 11:58:27 -0500", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql \\df choose functions by their arguments" }, { "msg_contents": "On 1/19/21 11:58 AM, Greg Sabino Mullane wrote:\n> Ha ha ha, my bad, I am not sure why I left those out. Here is a new \n> patch with int2, int4, and int8. Thanks for the email.\n\nIan, does the new patch look good to you?\n\nAlso, not sure why the target version for this patch is stable so I have \nupdated it to PG14.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 19 Mar 2021 11:40:07 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: psql \\df choose functions by their arguments" }, { "msg_contents": "Greg Sabino Mullane <htamfids@gmail.com> writes:\n> [ v6-psql-df-pick-function-by-type.patch ]\n\nI looked this over. I like the idea a lot, but not much of anything\nabout the implementation. I think the additional arguments should be\nmatched to the function types using the same rules as for \\dT. That\nallows patterns for the argument type names, which is particularly\nuseful if you want to do something like\n\t\\df foo * integer\nto find functions whose second argument is integer, without restricting\nthe first argument.\n\nAs a lesser quibble, splitting the arguments with strtokx is a hack;\nwe should let the normal psql scanner collect the arguments.\n\nSo that leads me to the attached, which I think is committable. 
Since\nwe're down to the last day of the CF, I'm going to push this shortly if\nthere aren't squawks soon.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 07 Apr 2021 15:57:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql \\df choose functions by their arguments" },
{ "msg_contents": "I like the wildcard aspect, but I have a few issues with the patch:\n\n* It doesn't respect some common abbreviations that work elsewhere (e.g.\nCREATE FUNCTION). So while \"int4\" works, \"int\" does not. Nor does \"float\",\nwhich thus requires the mandatory-double-quoted \"double precision\"\n\n* Adding commas to the args, as returned by psql itself via \\df, provides\nno matches.\n\n* There seems to be no way (?) to limit the functions returned if they\nshare a common root. The previous incantation allowed you to pull out\nfoo(int) from foo(int, bigint). This was a big motivation for writing this\npatch.\n\n* SQL error on \\df foo a..b as well as one on \\df foo (bigint bigint)\n\nCheers,\nGreg", "msg_date": "Wed, 7 Apr 2021 17:25:13 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql \\df choose functions by their arguments" },
{ "msg_contents": "Greg Sabino Mullane <htamfids@gmail.com> writes:\n> I like the wildcard aspect, but I have a few issues with the patch:\n\n> * It doesn't respect some common abbreviations that work elsewhere (e.g.\n> CREATE FUNCTION). So while \"int4\" works, \"int\" does not. Nor does \"float\",\n> which thus requires the mandatory-double-quoted \"double precision\"\n\n\"\\dT int\" doesn't match anything either. Maybe there's room to improve\non that, but I don't think this patch should deviate from what \\dT does.\n\n> * Adding commas to the args, as returned by psql itself via \\df, provides\n> no matches.\n\nThe docs are fairly clear that the args are to be space-separated, not\ncomma-separated. This fits with psql's general treatment of backslash\narguments, and I think trying to \"improve\" on it will just end badly.\n\n> * There seems to be no way (?) to limit the functions returned if they\n> share a common root. The previous incantation allowed you to pull out\n> foo(int) from foo(int, bigint). This was a big motivation for writing this\n> patch.\n\nHmm, are you trying to say that a invocation with N arg patterns should\nmatch only functions with exactly N arguments? We could do that, but\nI'm not convinced it's an improvement over what I did here. Default\narguments are a counterexample.\n\n> * SQL error on \\df foo a..b as well as one on \\df foo (bigint bigint)\n\nThe first one seems to be a bug, will look. 
As for the second, I still\ndon't agree that that should be within the mandated syntax.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Apr 2021 17:39:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql \\df choose functions by their arguments" }, { "msg_contents": "I wrote:\n> Greg Sabino Mullane <htamfids@gmail.com> writes:\n>> * SQL error on \\df foo a..b as well as one on \\df foo (bigint bigint)\n\n> The first one seems to be a bug, will look.\n\nArgh, silly typo (and I'd failed to test the schema-qualified-name case).\n\nWhile I was thinking about use-cases for this, I realized that at least\nfor me, being able to restrict \\do operator searches by input type would\nbe even more useful than is true for \\df. Operator names tend to be\noverloaded even more heavily than functions. So here's a v8 that\nalso fixes \\do in the same spirit.\n\n(With respect to the other point: for \\do it does seem to make sense\nto constrain the match to operators with exactly as many arguments\nas specified. I still say that's a bad idea for functions, though.)\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 07 Apr 2021 17:58:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql \\df choose functions by their arguments" }, { "msg_contents": "I wrote:\n> Greg Sabino Mullane <htamfids@gmail.com> writes:\n>> * There seems to be no way (?) to limit the functions returned if they\n>> share a common root. The previous incantation allowed you to pull out\n>> foo(int) from foo(int, bigint). This was a big motivation for writing this\n>> patch.\n\n> Hmm, are you trying to say that a invocation with N arg patterns should\n> match only functions with exactly N arguments? We could do that, but\n> I'm not convinced it's an improvement over what I did here. Default\n> arguments are a counterexample.\n\nI had an idea about that. 
I've not tested this, but I think it would be\na trivial matter of adding a coalesce() call to make the query act like\nthe type name for a not-present argument is an empty string, rather than\nNULL which is what it gets right now. Then you could do what I think\nyou're asking for with\n\n\\df foo integer \"\"\n\nAdmittedly this is a bit of a hack, but to me this seems like a\nminority use-case, so maybe that's good enough.\n\nAs for the point about \"int\" versus \"integer\" and so on, I wouldn't\nbe averse to installing a mapping layer for that, so long as we\ndid it to \\dT as well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Apr 2021 19:34:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql \\df choose functions by their arguments" }, { "msg_contents": "I wrote:\n> I had an idea about that. I've not tested this, but I think it would be\n> a trivial matter of adding a coalesce() call to make the query act like\n> the type name for a not-present argument is an empty string, rather than\n> NULL which is what it gets right now. Then you could do what I think\n> you're asking for with\n\n> \\df foo integer \"\"\n\nActually, what would make more sense is to treat \"-\" as specifying\na non-existent argument. There are precedents for that in, eg, \\c,\nand a dash is a little more robust than an empty-string argument.\nSo that leads me to 0001 attached.\n\n> As for the point about \"int\" versus \"integer\" and so on, I wouldn't\n> be averse to installing a mapping layer for that, so long as we\n> did it to \\dT as well.\n\nAnd for that, I suggest 0002. (We only need mappings for cases that\ndon't work out-of-the-box, so your list seemed a bit redundant.)\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 07 Apr 2021 22:00:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql \\df choose functions by their arguments" } ]
[ { "msg_contents": "Hi\n\nI am playing with fixing the speed of CALL statement in a non atomic\ncontext, and when I tested my patch I found another issue of CALL statement\n- an invalidation of plans doesn't work for CALL statement (in atomic\ncontext).\n\nCREATE OR REPLACE FUNCTION public.fx(a integer)\n RETURNS integer\n LANGUAGE plpgsql\nAS $function$\nbegin\n return a;\nend;\n$function$\n\ncreate or replace function fxo(a int)\nreturns int as $$\nbegin\n return fx(a);\nend;\n$$ language plpgsql;\n\ndrop function fx;\n\n-- create fx again\ncreate or replace function fx(a int)\nreturns int as $$\nbegin\n return a;\nend;\n$$ language plpgsql;\n\n-- should be ok\nselect fxo(10);\n\n-- but\ncreate procedure pe(a int)\nas $$\nbegin\nend;\n$$ language plpgsql;\n\ncreate or replace function fxo(a int)\nreturns int as $$\nbegin\n call pe(a);\n return fx(a);\nend;\n$$ language plpgsql;\n\n-- ok\nselect fxo(10);\n\npostgres=# drop procedure pe;\nDROP PROCEDURE\npostgres=# create procedure pe(a int)\nas $$\nbegin\nend;\n$$ language plpgsql;\nCREATE PROCEDURE\npostgres=# select fxo(10);\nERROR: cache lookup failed for function 16389\nCONTEXT: SQL statement \"CALL pe(a)\"\nPL/pgSQL function fxo(integer) line 2 at CALL\n\nRegards\n\nPavel", "msg_date": "Thu, 15 Oct 2020 19:54:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "plan cache doesn't clean plans with references to dropped procedures" },
{ "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I am playing with fixing the speed of CALL statement in a non atomic\n> context, and when I tested my patch I found another issue of CALL statement\n> - an invalidation of plans doesn't work for CALL statement (in atomic\n> context).\n\nYeah, that's not the plancache's fault. CALL doesn't register any\ndependencies for the parsed expression it keeps in its parsetree.\n\nI remain of the opinion that we need to decide whether CALL is\na utility command or an optimizable statement, and then make it\nfollow the relevant set of rules. It can't live halfway between,\nespecially not when none of the required infrastructure has been\nbuilt to allow it to act like an optimizable statement. (Hm,\nI could swear we discussed this before, but searching the archives\ndoesn't immediately turn up the thread. Anyway, you don't get to\ndo parse analysis in advance of execution when you are a utility\ncommand.)\n\nProbably the only feasible fix for the back branches is to go in the\nutility-command direction, which means ripping out the pre-parsed\nexpression in CallStmt. 
Somebody could look at making it act like an\noptimizable statement in the future; but that'll involve touching a\nnontrivial amount of code, and I'm not sure how much performance it'll\nreally buy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Oct 2020 14:17:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache doesn't clean plans with references to dropped\n procedures" }, { "msg_contents": "čt 15. 10. 2020 v 20:17 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I am playing with fixing the speed of CALL statement in a non atomic\n> > context, and when I tested my patch I found another issue of CALL\n> statement\n> > - an invalidation of plans doesn't work for CALL statement (in atomic\n> > context).\n>\n> Yeah, that's not the plancache's fault. CALL doesn't register any\n> dependencies for the parsed expression it keeps in its parsetree.\n>\n> I remain of the opinion that we need to decide whether CALL is\n> a utility command or an optimizable statement, and then make it\n> follow the relevant set of rules. It can't live halfway between,\n> especially not when none of the required infrastructure has been\n> built to allow it to act like an optimizable statement. (Hm,\n> I could swear we discussed this before, but searching the archives\n> doesn't immediately turn up the thread. Anyway, you don't get to\n> do parse analysis in advance of execution when you are a utility\n> command.)\n>\n> Probably the only feasible fix for the back branches is to go in the\n> utility-command direction, which means ripping out the pre-parsed\n> expression in CallStmt. 
Somebody could look at making it act like an\n> optimizable statement in the future; but that'll involve touching a\n> nontrivial amount of code, and I'm not sure how much performance it'll\n> really buy.\n>\n\nMaybe I wrote necessary code (or some part) for LET statement\n\nhttps://commitfest.postgresql.org/30/1608/\n\nAnyway, I think another related issue will be in work with optimized\n(cached) target.\n\n\n\n> regards, tom lane\n>\n", "msg_date": "Thu, 15 Oct 2020 20:51:46 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache doesn't clean plans with references to dropped\n procedures" } ]
[ { "msg_contents": "Hi,\n\nThere will be a breaking API change for JIT related API in LLVM\n12. Mostly about making control over various aspects easier, and then\nbuilding on top of that providing new features (like JIT compiling in\nthe background and making it easier to share JIT compiled output between\nprocesses).\n\nI've worked with Lang Hames to ensure that the new C API has feature\nparity...\n\nThe postgres changes are fairly localized, all in llvmjit.c - it's just\na few #ifdefs to support both LLVM 12 and before.\n\nThe two questions I have are:\n\n1) Which versions do we want to add LLVM 12 support? It'd be fairly\n easy to backport all the way. But it's not quite a bugfix... OTOH,\n it'd probably painful for packagers to have dependencies on different\n versions of LLVM for different versions of postgres.\n\n2) When do we want to add LLVM 12 support? PG will soon stop compiling\n against LLVM 12, which will be released in about 6 months. I worked\n with Lang to make most of the breaking changes in a branch (to be\n merged in the next few days), but it's possible that there will be a\n few smaller changes.\n\nI'd be inclined to add support for LLVM 12 to master soon, and then\nbackpatch that support around LLVM 12's release.\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Oct 2020 18:12:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "upcoming API changes for LLVM 12" }, { "msg_contents": "On Fri, Oct 16, 2020 at 2:12 PM Andres Freund <andres@anarazel.de> wrote:\n> There will be a breaking API change for JIT related API in LLVM\n> 12. 
Mostly about making control over various aspects easier, and then\n> building on top of that providing new features (like JIT compiling in\n> the background and making it easier to share JIT compiled output between\n> processes).\n>\n> I've worked with Lang Hames to ensure that the new C API has feature\n> parity...\n\nCool!\n\n> I'd be inclined to add support for LLVM 12 to master soon, and then\n> backpatch that support around LLVM 12's release.\n\n+1. I guess Fabien's animal \"seawasp\" will turn red next week.\nApparently it rebuilds bleeding edge LLVM weekly (though strangely\nlast week it went backwards... huh).\n\n\n", "msg_date": "Fri, 16 Oct 2020 15:37:43 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "On 2020-Oct-15, Andres Freund wrote:\n\n> There will be a breaking API change for JIT related API in LLVM\n> 12. Mostly about making control over various aspects easier, and then\n> building on top of that providing new features (like JIT compiling in\n> the background and making it easier to share JIT compiled output between\n> processes).\n> \n> I've worked with Lang Hames to ensure that the new C API has feature\n> parity...\n\nWhee, sounds pretty good ... (am I dreaming too much if I hope execution\nstarts with non-jitted and switches on the fly to jitted once\nbackground compilation finishes?)\n\n> 2) When do we want to add LLVM 12 support? PG will soon stop compiling\n> against LLVM 12, which will be released in about 6 months. I worked\n> with Lang to make most of the breaking changes in a branch (to be\n> merged in the next few days), but it's possible that there will be a\n> few smaller changes.\n\nhmm, how regular are LLVM releases? 
I mean, what if pg14 ends up being\nreleased sooner than LLVM12 – would there be a problem?\n\n\n", "msg_date": "Fri, 16 Oct 2020 02:45:51 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Hi,\n\nOn 2020-10-16 02:45:51 -0300, Alvaro Herrera wrote:\n> Whee, sounds pretty good ... (am I dreaming too much if I hope\n> execution starts with non-jitted and switches on the fly to jitted\n> once background compilation finishes?)\n\nThere's some more work needed to get there, but yes, the basics for that\nare there now. It'd perhaps be doable with threads now, but it's not\nclear we want that... We probably could build it with processes too -\nit'd require some memory management fun, but it's doable.\n\n\n> > 2) When do we want to add LLVM 12 support? PG will soon stop compiling\n> > against LLVM 12, which will be released in about 6 months. I worked\n> > with Lang to make most of the breaking changes in a branch (to be\n> > merged in the next few days), but it's possible that there will be a\n> > few smaller changes.\n> \n> hmm, how regular are LLVM releases? I mean, what if pg14 ends up being\n> released sooner than LLVM12 – would there be a problem?\n\nPretty unlikely - they're half yearly releases, and come out on a\nsomewhat regular schedule. They've moved a few weeks but not more. And\neven if they did - having a few #ifdefs for LLVM 12 would be ok anyway.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 16 Oct 2020 00:38:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-10-16 02:45:51 -0300, Alvaro Herrera wrote:\n>>> 2) When do we want to add LLVM 12 support? PG will soon stop compiling\n>>> against LLVM 12, which will be released in about 6 months. 
I worked\n>>> with Lang to make most of the breaking changes in a branch (to be\n>>> merged in the next few days), but it's possible that there will be a\n>>> few smaller changes.\n\n>> hmm, how regular are LLVM releases? I mean, what if pg14 ends up being\n>> released sooner than LLVM12 – would there be a problem?\n\n> Pretty unlikely - they're half yearly releases, and come out on a\n> somewhat regular schedule. They've moved a few weeks but not more. And\n> even if they did - having a few #ifdefs for LLVM 12 would be ok anyway.\n\nYeah. As long as we're not breaking the ability to build against older\nLLVM, I can't see a reason not to apply and back-patch these changes.\nWe usually want all supported PG versions to build against newer tool\nchains, and this seems to fall into that category.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Oct 2020 10:22:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Hi,\n\nOn 2020-10-16 10:22:57 -0400, Tom Lane wrote:\n> Yeah. As long as we're not breaking the ability to build against older\n> LLVM, I can't see a reason not to apply and back-patch these changes.\n> We usually want all supported PG versions to build against newer tool\n> chains, and this seems to fall into that category.\n\nCool! I just ran that branch against 3.9 (the currently oldest supported\nversion), and that still works.\n\n\nA related question is whether it'd be time to prune the oldest supported\nLLVM version. 3.9.0 was released 2016-08-31 (and 3.9.1, the only point\nrelease, was 2016-12-13). There's currently no *pressing* reason to\nreduce it, but it is the cause of few #ifdefs - but more importantly it\nincreases the test matrix.\n\nI'm inclined to just have a deterministic policy that we apply around\nrelease time going forward. E.g. 
don't support versions that are newer\nthan the newest available LLVM version in the second newest\nlong-term-supported distribution release of RHEL, Ubuntu, Debian?\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Fri, 16 Oct 2020 13:53:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> A related question is whether it'd be time to prune the oldest supported\n> LLVM version. 3.9.0 was released 2016-08-31 (and 3.9.1, the only point\n> release, was 2016-12-13). There's currently no *pressing* reason to\n> reduce it, but it is the cause of few #ifdefs - but more importantly it\n> increases the test matrix.\n\n> I'm inclined to just have a deterministic policy that we apply around\n> release time going forward. E.g. don't support versions that are newer\n> than the newest available LLVM version in the second newest\n> long-term-supported distribution release of RHEL, Ubuntu, Debian?\n\nMeh. I do not think these should be mechanistic one-size-fits-all\ndecisions. A lot hinges on just how messy it is to continue support\nfor a given tool. Moreover, the policy you propose above is\ncompletely out of line with our approach to every other toolchain\nwe use.\n\nI'd rather see an approach along the lines of \"it's time to drop\nsupport for LLVM version X because it can't do Y\", rather than\n\"... because Z amount of time has passed\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Oct 2020 17:04:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "On 2020-Oct-16, Andres Freund wrote:\n\n> A related question is whether it'd be time to prune the oldest supported\n> LLVM version. 3.9.0 was released 2016-08-31 (and 3.9.1, the only point\n> release, was 2016-12-13). 
There's currently no *pressing* reason to\n> reduce it, but it is the cause of few #ifdefs - but more importantly it\n> increases the test matrix.\n\nIs there a matrix of LLVM versions supported by live distros? It sounds\nlike pruning away 3.9 from branch master would be reasonable enough;\nOTOH looking at the current LLVM support code in Postgres it doesn't\nlook like you would actually save all that much. Maybe the picture\nchanges with things you're doing now, but it's not evident from what's\nin the tree now.\n\n> I'm inclined to just have a deterministic policy that we apply around\n> release time going forward. E.g. don't support versions that are newer\n> than the newest available LLVM version in the second newest\n> long-term-supported distribution release of RHEL, Ubuntu, Debian?\n\nIt seems fair to think that new Postgres releases should be put in\nproduction only with the newest LTS release of each OS -- no need to go\nback to the second newest. But I think we should use such a criteria to\ndrive discussion rather than as a battle axe chopping stuff away.\n\n\n", "msg_date": "Fri, 16 Oct 2020 19:28:19 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Hi Andres,\n\nOn Thu, Oct 15, 2020 at 6:12 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> There will be a breaking API change for JIT related API in LLVM\n> 12. Mostly about making control over various aspects easier, and then\n> building on top of that providing new features (like JIT compiling in\n> the background and making it easier to share JIT compiled output between\n> processes).\n>\n> I've worked with Lang Hames to ensure that the new C API has feature\n> parity...\n>\n\nI assume you're alluding to the removal of ORC legacy (v1) API? 
How far\nback was feature parity in the new API, or we could only switch starting\nwith LLVM 12?\n\n> The postgres changes are fairly localized, all in llvmjit.c - it's just\n> a few #ifdefs to support both LLVM 12 and before.\n>\n> The two questions I have are:\n>\n> 1) Which versions do we want to add LLVM 12 support? It'd be fairly\n> easy to backport all the way. But it's not quite a bugfix... OTOH,\n> it'd probably painful for packagers to have dependencies on different\n> versions of LLVM for different versions of postgres.\n>\n> 2) When do we want to add LLVM 12 support? PG will soon stop compiling\n> against LLVM 12, which will be released in about 6 months. I worked\n> with Lang to make most of the breaking changes in a branch (to be\n> merged in the next few days), but it's possible that there will be a\n> few smaller changes.\n\nI think this has already happened about two weeks ago when Lang's commit\n6154c4115cd4b78d landed in LLVM master.\n\nCheers,\nJesse\n\n\n", "msg_date": "Mon, 2 Nov 2020 10:28:33 -0800", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Hi,\n\nOn 2020-11-02 10:28:33 -0800, Jesse Zhang wrote:\n> On Thu, Oct 15, 2020 at 6:12 PM Andres Freund <andres@anarazel.de> wrote:\n> > There will be a breaking API change for JIT related API in LLVM\n> > 12. Mostly about making control over various aspects easier, and then\n> > building on top of that providing new features (like JIT compiling in\n> > the background and making it easier to share JIT compiled output between\n> > processes).\n> >\n> > I've worked with Lang Hames to ensure that the new C API has feature\n> > parity...\n> >\n> \n> I assume you're alluding to the removal of ORC legacy (v1) API?\n\nYes.\n\n\n> How far back was feature parity in the new API, or we could only switch starting\n> with LLVM 12?\n\nParity is in 12 only - I had to work with Lang for a while to get to\nparity. 
There really is no reason to switch earlier anyway.\n\n\n> > The postgres changes are fairly localized, all in llvmjit.c - it's just\n> > a few #ifdefs to support both LLVM 12 and before.\n> >\n> > The two questions I have are:\n> >\n> > 1) Which versions do we want to add LLVM 12 support? It'd be fairly\n> > easy to backport all the way. But it's not quite a bugfix... OTOH,\n> > it'd probably painful for packagers to have dependencies on different\n> > versions of LLVM for different versions of postgres.\n> >\n> > 2) When do we want to add LLVM 12 support? PG will soon stop compiling\n> > against LLVM 12, which will be released in about 6 months. I worked\n> > with Lang to make most of the breaking changes in a branch (to be\n> > merged in the next few days), but it's possible that there will be a\n> > few smaller changes.\n> \n> I think this has already happened about two weeks ago when Lang's commit\n> 6154c4115cd4b78d landed in LLVM master.\n\nYea, I just need to polish the support a bit more. 
Shouldn't be too\nmuch more work (right now it has too much unnecessary duplication, need\nto split some reindation out into a separate commit).\n\nhttps://github.com/anarazel/postgres/commits/llvm-12\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 2 Nov 2020 10:40:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-11-02 10:28:33 -0800, Jesse Zhang wrote:\n>> On Thu, Oct 15, 2020 at 6:12 PM Andres Freund <andres@anarazel.de> wrote:\n>>> There will be a breaking API change for JIT related API in LLVM\n>>> 12.\n\nseawasp, which runs some bleeding-edge version of clang, has been falling\nover for the last couple of weeks:\n\n/home/fabien/clgtk/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2 -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -D_DEBUG -D_GNU_SOURCE -I/home/fabien/clgtk/include -I../../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -flto=thin -emit-llvm -c -o llvmjit_types.bc llvmjit_types.c\nllvmjit.c:21:10: fatal error: 'llvm-c/OrcBindings.h' file not found\n#include <llvm-c/OrcBindings.h>\n ^~~~~~~~~~~~~~~~~~~~~~\n1 error generated.\n\nI suppose this is related to what you are talking about here.\nIf so, could we prioritize getting that committed? It's annoying\nto have the buildfarm failures page so full of this one issue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 Nov 2020 17:35:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Hi,\n\nOn 2020-11-08 17:35:20 -0500, Tom Lane wrote:\n> I suppose this is related to what you are talking about here.\n\nYes.\n\n\n> If so, could we prioritize getting that committed? 
It's annoying\n> to have the buildfarm failures page so full of this one issue.\n\nYea, I'll try to do that in the next few days (was plannin to last week,\nbut due to a hand injury I was typing one handed last week - makes it\npretty annoying to clean up code. But I just started being able to at\nleast use my left thumb again...).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 8 Nov 2020 15:18:54 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Yea, I'll try to do that in the next few days (was plannin to last week,\n> but due to a hand injury I was typing one handed last week - makes it\n> pretty annoying to clean up code. But I just started being able to at\n> least use my left thumb again...).\n\nOuch. Get well soon, and don't overstress your hand --- that's a\ngood recipe for long-term problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 Nov 2020 18:22:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Hi,\n\nOn 2020-11-08 18:22:50 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Yea, I'll try to do that in the next few days\n\nI pushed the change to master. If that doesn't show any problems, I'll\nbackpatch in a week or so. Seawasp runs only on master, so it should\nsatisfy the buildfarm at least.\n\n\n> > (was plannin to last week,\n> > but due to a hand injury I was typing one handed last week - makes it\n> > pretty annoying to clean up code. But I just started being able to at\n> > least use my left thumb again...).\n> \n> Ouch. Get well soon, and don't overstress your hand --- that's a\n> good recipe for long-term problems.\n\nThanks! I *am* planning not to write all that much for a while. 
But it's\nfrustrating / hard, as many other activities are even less an option...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Nov 2020 20:13:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I pushed the change to master.\n\nThanks!\n\n> If that doesn't show any problems, I'll\n> backpatch in a week or so. Seawasp runs only on master, so it should\n> satisfy the buildfarm at least.\n\nYeah, sounds like a good plan. FWIW, master builds clean for me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Nov 2020 23:16:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: upcoming API changes for LLVM 12" }, { "msg_contents": "Hi,\n\nOn 2020-11-09 20:13:43 -0800, Andres Freund wrote:\n> I pushed the change to master. If that doesn't show any problems, I'll\n> backpatch in a week or so. Seawasp runs only on master, so it should\n> satisfy the buildfarm at least.\n\nIt was a bit longer than a week, but I finally have done so... Let's see\nwhat the world^Wbuildfarm says.\n\n- Andres\n\n\n", "msg_date": "Mon, 7 Dec 2020 19:39:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: upcoming API changes for LLVM 12" } ]
[ { "msg_contents": "Hi!\n\nOn PgCon 2020 we had been discussing some caveats of synchronous replication [0] related to data durability in HA postgres installations.\n\nBut also there was raised important concern about streaming logical replication only after it \"actually happened\" for HA cluster.\nIs anyone working on it?If no, I propose to discuss design of this feature.\n\nWhy is it important? It's important for changed data capture (CDC).\nFor physical replication we can apply changed forward (just replay WAL) and backward (with help of pg_rewind).\nBut there is no clean way to undo logical replication.\n\nConsider someone having a data publication from HA cluster A to another postgres installation B. A consists of primary A1 and standby A2.\n\nWhen failover happens from A1 to A2 some part of A1 history can be committed locally on A. And streamed to B via logical replication. After failover to A2 B cannot continue CDC from A2 because B already applied part of a history from A1 which never existed for A2.\n\nDuring unconference session [0] there was proposed GUC that is 'post_synchronous_standby_names' of standbys that can't get data until the transaction has been sent to the sync standbys.\nThis will do the trick, though I'm not sure It's best possible interface for the feature.\nAny ideas on the feature will be appreciated.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n\n[0] https://wiki.postgresql.org/wiki/PgCon_2020_Developer_Unconference/Edge_cases_of_synchronous_replication_in_HA_solutions\n\n", "msg_date": "Fri, 16 Oct 2020 12:21:27 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Sending logical replication data only after synchronous replication\n happened" } ]
[ { "msg_contents": "Hi All,\n Logical replication protocol uses single byte character to identify\ndifferent chunks of logical repliation messages. The code uses\ncharacter literals for the same. These literals are used as bare\nconstants in code as well. That's true for almost all the code that\ndeals with wire protocol. With that it becomes difficult to identify\nthe code which deals with a particular message. For example code that\ndeals with message type 'B'. In various protocol 'B' has different\nmeaning and it gets difficult and time consuming to differentiate one\nusage from other and find all places which deal with one usage. Here's\na patch simplifying that for top level logical replication messages.\n\nI think I have covered the places that need change. But I might have\nmissed something, given that these literals are used at several other\nplaces (a problem this patch tries to fix :)).\n\nInitially I had used #define for the same, but Peter E suggested using\nEnums so that switch cases can detect any remaining items along with\nstronger type checks.\n\nPavan offleast suggested to create a wrapper\npg_send_logical_rep_message() on top of pg_sendbyte(), similarly for\npg_getmsgbyte(). I wanted to see if this change is acceptable. If so,\nI will change that as well. Comments/suggestions welcome.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 16 Oct 2020 12:55:26 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Enumize logical replication message actions" }, { "msg_contents": "\r\n> On Oct 16, 2020, at 3:25 PM, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\r\n> \r\n> Hi All,\r\n> Logical replication protocol uses single byte character to identify\r\n> different chunks of logical repliation messages. The code uses\r\n> character literals for the same. These literals are used as bare\r\n> constants in code as well. That's true for almost all the code that\r\n> deals with wire protocol. 
With that it becomes difficult to identify\r\n> the code which deals with a particular message. For example code that\r\n> deals with message type 'B'. In various protocol 'B' has different\r\n> meaning and it gets difficult and time consuming to differentiate one\r\n> usage from other and find all places which deal with one usage. Here's\r\n> a patch simplifying that for top level logical replication messages.\r\n> \r\n> I think I have covered the places that need change. But I might have\r\n> missed something, given that these literals are used at several other\r\n> places (a problem this patch tries to fix :)).\r\n> \r\n> Initially I had used #define for the same, but Peter E suggested using\r\n> Enums so that switch cases can detect any remaining items along with\r\n> stronger type checks.\r\n> \r\n> Pavan offleast suggested to create a wrapper\r\n> pg_send_logical_rep_message() on top of pg_sendbyte(), similarly for\r\n> pg_getmsgbyte(). I wanted to see if this change is acceptable. If so,\r\n> I will change that as well. Comments/suggestions welcome.\r\n> \r\n> -- \r\n> Best Wishes,\r\n> Ashutosh Bapat\r\n> <0001-Enumize-top-level-logical-replication-actions.patch>\r\n\r\nWhat about ’N’ for new tuples, ‘O’ for old tuple follows, ‘K’ for old key follows?\r\nThose are also logical replication protocol message, I think.\r\n\r\n--\r\nBest regards\r\nJapin Li\r\n\r\n", "msg_date": "Fri, 16 Oct 2020 08:08:40 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "At Fri, 16 Oct 2020 08:08:40 +0000, Li Japin <japinli@hotmail.com> wrote in \r\n> \r\n> > On Oct 16, 2020, at 3:25 PM, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\r\n> > \r\n> > Hi All,\r\n> > Logical replication protocol uses single byte character to identify\r\n> > different chunks of logical repliation messages. The code uses\r\n> > character literals for the same. 
These literals are used as bare\r\n> > constants in code as well. That's true for almost all the code that\r\n> > deals with wire protocol. With that it becomes difficult to identify\r\n> > the code which deals with a particular message. For example code that\r\n> > deals with message type 'B'. In various protocol 'B' has different\r\n> > meaning and it gets difficult and time consuming to differentiate one\r\n> > usage from other and find all places which deal with one usage. Here's\r\n> > a patch simplifying that for top level logical replication messages.\r\n> > \r\n> > I think I have covered the places that need change. But I might have\r\n> > missed something, given that these literals are used at several other\r\n> > places (a problem this patch tries to fix :)).\r\n> > \r\n> > Initially I had used #define for the same, but Peter E suggested using\r\n> > Enums so that switch cases can detect any remaining items along with\r\n> > stronger type checks.\r\n> > \r\n> > Pavan offleast suggested to create a wrapper\r\n> > pg_send_logical_rep_message() on top of pg_sendbyte(), similarly for\r\n> > pg_getmsgbyte(). I wanted to see if this change is acceptable. If so,\r\n> > I will change that as well. Comments/suggestions welcome.\r\n> > \r\n> > -- \r\n> > Best Wishes,\r\n> > Ashutosh Bapat\r\n> > <0001-Enumize-top-level-logical-replication-actions.patch>\r\n> \r\n> What about ’N’ for new tuples, ‘O’ for old tuple follows, ‘K’ for old key follows?\r\n> Those are also logical replication protocol message, I think.\r\n\r\nThey are flags stored in a message so they can be seen as different\r\nfrom the message type letters.\r\n\r\nAnyway if the values are determined after some meaning, I'm not sure\r\nenumerize them is good thing or not. 
In other words, 'U' conveys\r\nalmost same amount of information with LOGICAL_REP_MSG_UPDATE in the\r\ncontext of logical replcation protocol.\r\n\r\nWe have the same code pattern in PostgresMain and perhaps we don't\r\ngoing to change them into enums.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Fri, 16 Oct 2020 17:36:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, 16 Oct 2020 at 14:06, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Fri, 16 Oct 2020 08:08:40 +0000, Li Japin <japinli@hotmail.com> wrote\n> in\n> >\n> > > On Oct 16, 2020, at 3:25 PM, Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com> wrote:\n>\n> >\n> > What about ’N’ for new tuples, ‘O’ for old tuple follows, ‘K’ for old\n> key follows?\n> > Those are also logical replication protocol message, I think.\n>\n> They are flags stored in a message so they can be seen as different\n> from the message type letters.\n>\n\nI think we converting those into macros/enums will help but for now I have\ntackled only the top level message types.\n\n\n>\n> Anyway if the values are determined after some meaning, I'm not sure\n> enumerize them is good thing or not. In other words, 'U' conveys\n> almost same amount of information with LOGICAL_REP_MSG_UPDATE in the\n> context of logical replcation protocol.\n>\n> We have the same code pattern in PostgresMain and perhaps we don't\n> going to change them into enums.\n>\n\nThat's exactly the problem I am trying to solve. Take for example 'B' as I\nhave mentioned before. That string literal appears in 64 different places\nin the master branch. Which of those are the ones related to a \"BEGIN\"\nmessage in logical replication protocol is not clear, unless I thumb\nthrough each of those. In PostgresMain it's used to indicate a BIND\nmessage. 
Which of those 64 instances are also using 'B' to mean a bind\nmessage? Using enums or macros makes it clear. Just look\nup LOGICAL_REP_MSG_BEGIN. Converting all 'B' to their respective macros\nwill help but might be problematic for back-patching. So that's arguable.\nBut doing that in something as new as logical replication will be helpful,\nbefore it gets too late to change that.\n\nFurther logical repliation protocol is using the same literal e.g. 'O' to\nmean origin in some places and old tuple in some other. While comments\nthere help, it's not easy to locate all the code that deals with one\nmeaning or the other. This change will help with that. Another reason as to\nwhy logical replication.\n-- \nBest Wishes,\nAshutosh\n", "msg_date": "Fri, 16 Oct 2020 15:03:25 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, Oct 16, 2020 at 12:55 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi All,\n> Logical replication protocol uses single byte character to identify\n> different chunks of logical repliation messages. The code uses\n> character literals for the same. These literals are used as bare\n> constants in code as well. That's true for almost all the code that\n> deals with wire protocol. With that it becomes difficult to identify\n> the code which deals with a particular message. For example code that\n> deals with message type 'B'. In various protocol 'B' has different\n> meaning and it gets difficult and time consuming to differentiate one\n> usage from other and find all places which deal with one usage. Here's\n> a patch simplifying that for top level logical replication messages.\n>\n\n+1. 
I think this will make the code easier to read and understand. I\nthink it would be good to do this in some other parts as well but\nstarting with logical replication is a good idea as that area is still\nevolving.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Oct 2020 16:43:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "Hi,\n\nOn 2020-10-16 12:55:26 +0530, Ashutosh Bapat wrote:\n> Here's a patch simplifying that for top level logical replication\n> messages.\n\nI think that's a good plan. One big benefit for me is that it's much\neasier to search for an enum than for a single letter\nconstant. Including searching for all the places that deal with any sort\nof logical rep message type.\n\n\n> void\n> logicalrep_write_begin(StringInfo out, ReorderBufferTXN *txn)\n> {\n> -\tpq_sendbyte(out, 'B');\t\t/* BEGIN */\n> +\tpq_sendbyte(out, LOGICAL_REP_MSG_BEGIN);\t\t/* BEGIN */\n\nI think if we have the LOGICAL_REP_MSG_BEGIN we don't need the /* BEGIN */.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 19 Oct 2020 16:27:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "Thanks Andres for your review. Thanks Li, Horiguchi-san and Amit for your\ncomments.\n\nOn Tue, 20 Oct 2020 at 04:57, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-10-16 12:55:26 +0530, Ashutosh Bapat wrote:\n> > Here's a patch simplifying that for top level logical replication\n> > messages.\n>\n> I think that's a good plan. One big benefit for me is that it's much\n> easier to search for an enum than for a single letter\n> constant. 
Including searching for all the places that deal with any sort\n> of logical rep message type.\n\n\n>\n> > void\n> > logicalrep_write_begin(StringInfo out, ReorderBufferTXN *txn)\n> > {\n> > - pq_sendbyte(out, 'B'); /* BEGIN */\n> > + pq_sendbyte(out, LOGICAL_REP_MSG_BEGIN); /* BEGIN */\n>\n> I think if we have the LOGICAL_REP_MSG_BEGIN we don't need the /* BEGIN */.\n>\n\nYes. Fixed all places.\n\nI have attached two places - 0001 which is previous 0001 patch with your\ncomments addressed.\n\n0002 adds wrappers on top of pq_sendbyte() and pq_getmsgbyte() to send and\nreceive a logical replication message type respectively. These wrappers add\nmore protection to make sure that the enum definitions fit one byte. This\nalso removes the default case from apply_dispatch() so that we can detect\nany LogicalRepMsgType not handled by that function.\n\nThese two patches are intended to be committed together as a single commit.\nFor now the second one is separate so that it's easy to remove the changes\nif they are not acceptable.\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Thu, 22 Oct 2020 12:13:40 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "At Thu, 22 Oct 2020 12:13:40 +0530, Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com> wrote in \n> Thanks Andres for your review. Thanks Li, Horiguchi-san and Amit for your\n> comments.\n> \n> On Tue, 20 Oct 2020 at 04:57, Andres Freund <andres@anarazel.de> wrote:\n> \n> > Hi,\n> >\n> > On 2020-10-16 12:55:26 +0530, Ashutosh Bapat wrote:\n> > > Here's a patch simplifying that for top level logical replication\n> > > messages.\n> >\n> > I think that's a good plan. One big benefit for me is that it's much\n> > easier to search for an enum than for a single letter\n> > constant. 
Including searching for all the places that deal with any sort\n> > of logical rep message type.\n> \n> \n> >\n> > > void\n> > > logicalrep_write_begin(StringInfo out, ReorderBufferTXN *txn)\n> > > {\n> > > - pq_sendbyte(out, 'B'); /* BEGIN */\n> > > + pq_sendbyte(out, LOGICAL_REP_MSG_BEGIN); /* BEGIN */\n> >\n> > I think if we have the LOGICAL_REP_MSG_BEGIN we don't need the /* BEGIN */.\n> >\n> \n> Yes. Fixed all places.\n> \n> I have attached two places - 0001 which is previous 0001 patch with your\n> comments addressed.\n\nWe shouldn't have the default: in the switch() block in\napply_dispatch(). That prevents compilers from checking\ncompleteness. The content of the default: should be moved out to after\nthe switch() block.\n\napply_dispatch()\n{\n switch (action)\n\t{\n\t ....\n\t case LOGICAL_REP_MSG_STREAM_COMMIT(s);\n\t\t apply_handle_stream_commit(s);\n\t\t return;\n }\n\n ereport(ERROR, ...);\n} \n\n> 0002 adds wrappers on top of pq_sendbyte() and pq_getmsgbyte() to send and\n> receive a logical replication message type respectively. These wrappers add\n> more protection to make sure that the enum definitions fit one byte. This\n> also removes the default case from apply_dispatch() so that we can detect\n> any LogicalRepMsgType not handled by that function.\n\npg_send_logicalrep_msg_type() looks somewhat too-much. If we need\nsomething like that we shouldn't do this refactoring, I think.\n\npg_get_logicalrep_msg_type() seems doing the same check (that the\nvalue is compared aganst every keyword value) with\napply_dispatch(). 
Why do we need that function separately from\napply_dispatch()?\n\n\n> These two patches are intended to be committed together as a single commit.\n> For now the second one is separate so that it's easy to remove the changes\n> if they are not acceptable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Oct 2020 18:16:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n>\n>\n> We shouldn't have the default: in the switch() block in\n> apply_dispatch(). That prevents compilers from checking\n> completeness. The content of the default: should be moved out to after\n> the switch() block.\n>\n> apply_dispatch()\n> {\n> switch (action)\n> {\n> ....\n> case LOGICAL_REP_MSG_STREAM_COMMIT(s);\n> apply_handle_stream_commit(s);\n> return;\n> }\n>\n> ereport(ERROR, ...);\n> }\n>\n> > 0002 adds wrappers on top of pq_sendbyte() and pq_getmsgbyte() to send\n> and\n> > receive a logical replication message type respectively. These wrappers\n> add\n> > more protection to make sure that the enum definitions fit one byte. This\n> > also removes the default case from apply_dispatch() so that we can detect\n> > any LogicalRepMsgType not handled by that function.\n>\n> pg_send_logicalrep_msg_type() looks somewhat too-much. If we need\n> something like that we shouldn't do this refactoring, I think.\n>\n\nEnum is an integer, and we want to send byte. The function asserts that the\nenum fits a byte. If there's a way to declare byte long enums I would use\nthat. But I didn't find a way to do that.\n\n\npg_get_logicalrep_msg_type() seems doing the same check (that the\n> value is compared aganst every keyword value) with\n> apply_dispatch(). 
Why do we need that function separately from\n> apply_dispatch()?\n>\n>\nThe second patch removes the default case you quoted above. I think that's\nimportant to detect any unhandled case at compile time rather than at run\ntime. But we need some way to detect whether the values we get from wire\nare legit. pg_get_logicalrep_msg_type() does that. Further that function\ncan be used at places other than apply_dispatch() if required without each\nof those places having their own validation.\n\n-- \nBest Wishes,\nAshutosh\n\nOn Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\nWe shouldn't have the default: in the switch() block in\napply_dispatch(). That prevents compilers from checking\ncompleteness. The content of the default: should be moved out to after\nthe switch() block.\n\napply_dispatch()\n{\n    switch (action)\n        {\n           ....\n            case LOGICAL_REP_MSG_STREAM_COMMIT(s);\n                   apply_handle_stream_commit(s);\n                   return;\n    }\n\n    ereport(ERROR, ...);\n}    \n\n> 0002 adds wrappers on top of pq_sendbyte() and pq_getmsgbyte() to send and\n> receive a logical replication message type respectively. These wrappers add\n> more protection to make sure that the enum definitions fit one byte. This\n> also removes the default case from apply_dispatch() so that we can detect\n> any LogicalRepMsgType not handled by that function.\n\npg_send_logicalrep_msg_type() looks somewhat too-much.  If we need\nsomething like that we shouldn't do this refactoring, I think.Enum is an integer, and we want to send byte. The function asserts that the enum fits a byte. If there's a way to declare byte long enums I would use that. But I didn't find a way to do that.pg_get_logicalrep_msg_type() seems doing the same check (that the\nvalue is compared aganst every keyword value) with\napply_dispatch(). 
Why do we need that function separately from\napply_dispatch()?The second patch removes the default case you quoted above. I think that's important to detect any unhandled case at compile time rather than at run time. But we need some way to detect whether the values we get from wire are legit. pg_get_logicalrep_msg_type() does that. Further that function can be used at places other than apply_dispatch() if required without each of those places having their own validation.-- Best Wishes,Ashutosh", "msg_date": "Thu, 22 Oct 2020 16:37:18 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "At Thu, 22 Oct 2020 16:37:18 +0530, Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com> wrote in \n> On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> \n> >\n> >\n> > We shouldn't have the default: in the switch() block in\n> > apply_dispatch(). That prevents compilers from checking\n> > completeness. The content of the default: should be moved out to after\n> > the switch() block.\n> >\n> > apply_dispatch()\n> > {\n> > switch (action)\n> > {\n> > ....\n> > case LOGICAL_REP_MSG_STREAM_COMMIT(s);\n> > apply_handle_stream_commit(s);\n> > return;\n> > }\n> >\n> > ereport(ERROR, ...);\n> > }\n> >\n> > > 0002 adds wrappers on top of pq_sendbyte() and pq_getmsgbyte() to send\n> > and\n> > > receive a logical replication message type respectively. These wrappers\n> > add\n> > > more protection to make sure that the enum definitions fit one byte. This\n> > > also removes the default case from apply_dispatch() so that we can detect\n> > > any LogicalRepMsgType not handled by that function.\n> >\n> > pg_send_logicalrep_msg_type() looks somewhat too-much. If we need\n> > something like that we shouldn't do this refactoring, I think.\n> >\n> \n> Enum is an integer, and we want to send byte. 
The function asserts that the\n> enum fits a byte. If there's a way to declare byte long enums I would use\n> that. But I didn't find a way to do that.\n\nThat way of defining enums can contain two different symbols with the\nsame value. If we need to check the values are actually in the range\nof char, checking duplicate values has more importance from the\nstandpoint of likelihood.\n\nAFAICS there're two instances of this kind of enums, CoreceionMethod\nand TypeCat. None of them are not checked for width nor duplicates\nwhen they are used.\n\nEven if we need such a kind of check, it souldn't be a wrapper\nfunction that adds costs on non-assertion builds, but a replacing of\npq_sendbyte() done only on USE_ASSERT_CHECKING.\n\n> pg_get_logicalrep_msg_type() seems doing the same check (that the\n> > value is compared aganst every keyword value) with\n> > apply_dispatch(). Why do we need that function separately from\n> > apply_dispatch()?\n> >\n> >\n> The second patch removes the default case you quoted above. I think that's\n> important to detect any unhandled case at compile time rather than at run\n> time. But we need some way to detect whether the values we get from wire\n> are legit. pg_get_logicalrep_msg_type() does that. Further that function\n> can be used at places other than apply_dispatch() if required without each\n> of those places having their own validation.\n\nEven if that enum contains out-of-range values, that \"command\" is sent\nhaving truncated to uint8 and on the receiver side apply_dispatch()\ndoesn't identify the command and raises an error. That is equivalent\nto what pq_send_logicalrep_msg_type() does. 
(Also equivalent on the\npoint that symbols that are not used in regression are not checked.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Oct 2020 10:08:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "At Fri, 23 Oct 2020 10:08:44 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 22 Oct 2020 16:37:18 +0530, Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com> wrote in \n> > On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > wrote:\n> > pg_get_logicalrep_msg_type() seems doing the same check (that the\n> > > value is compared aganst every keyword value) with\n> > > apply_dispatch(). Why do we need that function separately from\n> > > apply_dispatch()?\n> > >\n> > >\n> > The second patch removes the default case you quoted above. I think that's\n> > important to detect any unhandled case at compile time rather than at run\n> > time. But we need some way to detect whether the values we get from wire\n> > are legit. pg_get_logicalrep_msg_type() does that. Further that function\n> > can be used at places other than apply_dispatch() if required without each\n> > of those places having their own validation.\n> \n> Even if that enum contains out-of-range values, that \"command\" is sent\n> having truncated to uint8 and on the receiver side apply_dispatch()\n> doesn't identify the command and raises an error. That is equivalent\n> to what pq_send_logicalrep_msg_type() does. (Also equivalent on the\n> point that symbols that are not used in regression are not checked.)\n\nSorry, this is about pq_send_logicalrep_msg_type(), not\npg_get..(). 
And I forgot to mention pg_get_logicalrep_msg_type().\n\nFor pg_get_logicalrep_msg_type(), it is just a repetition of what\napply_dispatch() does in its switch().\n\nIf I flattened the code, it looks like:\n\napply_dispatch(s)\n{\n LogicalRepMsgType msgtype = pq_getmsgtype(s);\n bool pass = false;\n\n switch (msgtype)\n {\n case LOGICAL_REP_MSG_BEGIN:\n ...\n case LOGICAL_REP_MSG_STREAM_COMMIT:\n pass = true;\n }\n if (!pass)\n ereport(ERROR, (errmsg(\"invalid logical replication message type\"..\n\n switch (msgtype)\n {\n case LOGICAL_REP_MSG_BEGIN:\n apply_handle_begin();\n break;\n ...\n case LOGICAL_REP_MSG_STREAM_COMMIT:\n apply_handle_stream_commit();\n break;\n } \n} \n\nThose two switch()es are apparently redundant. That code is exactly\nequivalent to:\n\napply_dispatch(s)\n{\n LogicalRepMsgType msgtype = pq_getmsgtype(s);\n\n switch (msgtype)\n {\n case LOGICAL_REP_MSG_BEGIN:\n apply_handle_begin();\n! return;\n ...\n case LOGICAL_REP_MSG_STREAM_COMMIT:\n apply_handle_stream_commit();\n! return;\n }\n\n ereport(ERROR, (errmsg(\"invalid logical replication message type\"..\n} \n \nwhich is smaller and faster.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Oct 2020 10:20:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On 2020-Oct-22, Ashutosh Bapat wrote:\n\n> On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n\n> > pg_send_logicalrep_msg_type() looks somewhat too-much. If we need\n> > something like that we shouldn't do this refactoring, I think.\n> \n> Enum is an integer, and we want to send byte. The function asserts that the\n> enum fits a byte. If there's a way to declare byte long enums I would use\n> that. 
But I didn't find a way to do that.\n\nI didn't look at the code, but maybe it's sufficient to add a\nStaticAssert?\n\n\n", "msg_date": "Thu, 22 Oct 2020 22:31:41 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "At Thu, 22 Oct 2020 22:31:41 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2020-Oct-22, Ashutosh Bapat wrote:\n> \n> > On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > wrote:\n> \n> > > pg_send_logicalrep_msg_type() looks somewhat too-much. If we need\n> > > something like that we shouldn't do this refactoring, I think.\n> > \n> > Enum is an integer, and we want to send byte. The function asserts that the\n> > enum fits a byte. If there's a way to declare byte long enums I would use\n> > that. But I didn't find a way to do that.\n> \n> I didn't look at the code, but maybe it's sufficient to add a\n> StaticAssert?\n\nThat check needs to visit all symbols in an enum and confirm that each\nof them is in a certain range.\n\nI thought of StaticAssert, but it cannot run code and I don't know\nof a syntax that loops through all symbols in an enumeration, so I think\nwe need to write a static assertion on every symbol in the\nenumeration, which seems rather silly.\n\nenum hoge\n{\n a = '1',\n b = '2',\n c = '3'\n};\n\nStaticAssertDecl((unsigned int)(a | b | c ...) 
<= 0xff, \"too large symbol value\");\n\nI didn't come up with a way to apply static assertion on each symbol\ndefinition line.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Oct 2020 15:20:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, Oct 23, 2020 at 5:20 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 22 Oct 2020 22:31:41 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > On 2020-Oct-22, Ashutosh Bapat wrote:\n> >\n> > > On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > > wrote:\n> >\n> > > > pg_send_logicalrep_msg_type() looks somewhat too-much. If we need\n> > > > something like that we shouldn't do this refactoring, I think.\n> > >\n> > > Enum is an integer, and we want to send byte. The function asserts that the\n> > > enum fits a byte. If there's a way to declare byte long enums I would use\n> > > that. But I didn't find a way to do that.\n\nThe pq_send_logicalrep_msg_type() function seemed a bit overkill to me.\n\nThe comment in the LogicalRepMsgType enum will sufficiently ensure\nnobody is going to accidentally add any bad replication message codes.\nAnd it's not like these are going to be changed often.\n\nWhy not simply downcast your enums when calling pq_sendbyte?\nThere are only a few of them.\n\ne.g. 
pq_sendbyte(out, (uint8)LOGICAL_REP_MSG_STREAM_COMMIT);\n\nKind Regards.\nPeter Smith\nFujitsu Australia.\n\n\n", "msg_date": "Fri, 23 Oct 2020 19:53:00 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "At Fri, 23 Oct 2020 19:53:00 +1100, Peter Smith <smithpb2250@gmail.com> wrote in \n> On Fri, Oct 23, 2020 at 5:20 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 22 Oct 2020 22:31:41 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > > On 2020-Oct-22, Ashutosh Bapat wrote:\n> > >\n> > > > On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > > > wrote:\n> > >\n> > > > > pg_send_logicalrep_msg_type() looks somewhat too-much. If we need\n> > > > > something like that we shouldn't do this refactoring, I think.\n> > > >\n> > > > Enum is an integer, and we want to send byte. The function asserts that the\n> > > > enum fits a byte. If there's a way to declare byte long enums I would use\n> > > > that. But I didn't find a way to do that.\n> \n> The pq_send_logicalrep_msg_type() function seemed a bit overkill to me.\n\nAh, yes, it is what I meant. I didn't come up with the word \"overkill\".\n\n> The comment in the LogicalRepMsgType enum will sufficiently ensure\n> nobody is going to accidentally add any bad replication message codes.\n> And it's not like these are going to be changed often.\n\nAgreed.\n\n> Why not simply downcast your enums when calling pq_sendbyte?\n> There are only a few of them.\n> \n> e.g. pq_sendbyte(out, (uint8)LOGICAL_REP_MSG_STREAM_COMMIT);\n\nIf you are worried about compiler warning, that explicit cast is not\nrequired. 
Even if the symbol is larger than 0xff, the upper bytes are\nsilently truncated off.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Oct 2020 20:32:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, 23 Oct 2020 at 06:50, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n>\n> Those two switch()es are apparently redundant. That code is exactly\n> equivalent to:\n>\n> apply_dispatch(s)\n> {\n> LogicalRepMsgType msgtype = pq_getmsgtype(s);\n>\n> switch (msgtype)\n> {\n> case LOGICAL_REP_MSG_BEGIN:\n> apply_handle_begin();\n> ! return;\n> ...\n> case LOGICAL_REP_MSG_STREAM_COMMIT:\n> apply_handle_begin();\n> ! return;\n> }\n>\n> ereport(ERROR, (errmsg(\"invalid logical replication message type\"..\n> }\n>\n> which is smaller and fast.\n>\n\nGood idea. Implemented in the latest patch posted with the next mail.\n\n
-- Best Wishes,Ashutosh", "msg_date": "Fri, 23 Oct 2020 18:20:36 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, 23 Oct 2020 at 17:02, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Fri, 23 Oct 2020 19:53:00 +1100, Peter Smith <smithpb2250@gmail.com>\n> wrote in\n> > On Fri, Oct 23, 2020 at 5:20 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Thu, 22 Oct 2020 22:31:41 -0300, Alvaro Herrera <\n> alvherre@alvh.no-ip.org> wrote in\n> > > > On 2020-Oct-22, Ashutosh Bapat wrote:\n> > > >\n> > > > > On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com>\n> > > > > wrote:\n> > > >\n> > > > > > pg_send_logicalrep_msg_type() looks somewhat too-much. If we\n> need\n> > > > > > something like that we shouldn't do this refactoring, I think.\n> > > > >\n> > > > > Enum is an integer, and we want to send byte. The function asserts\n> that the\n> > > > > enum fits a byte. If there's a way to declare byte long enums I\n> would use\n> > > > > that. But I didn't find a way to do that.\n> >\n> > The pq_send_logicalrep_msg_type() function seemed a bit overkill to me.\n>\n> Ah, yes, it is what I meant. I didn't come up with the word \"overkill\".\n>\n> > The comment in the LogicalRepMsgType enum will sufficiently ensure\n> > nobody is going to accidentally add any bad replication message codes.\n> > And it's not like these are going to be changed often.\n>\n> Agreed.\n>\n> > Why not simply downcast your enums when calling pq_sendbyte?\n> > There are only a few of them.\n> >\n> > e.g. pq_sendbyte(out, (uint8)LOGICAL_REP_MSG_STREAM_COMMIT);\n>\n> If you are worried about compiler warning, that explicit cast is not\n> required. 
Even if the symbol is larger than 0xff, the upper bytes are\n> silently truncated off.\n>\n>\nI agree with Peter that the prologue of LogicalRepMsgType is enough.\n\nI also agree with Kyotaro, that explicit cast is unnecessary.\n\nAll this together makes the second patch useless. Removed it. Instead used\nKyotaro's idea in previous mail.\n\nPFA updated patch.\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Fri, 23 Oct 2020 18:23:40 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, Oct 23, 2020 at 11:50 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 22 Oct 2020 22:31:41 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > On 2020-Oct-22, Ashutosh Bapat wrote:\n> >\n> > > On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > > wrote:\n> >\n> > > > pg_send_logicalrep_msg_type() looks somewhat too-much. If we need\n> > > > something like that we shouldn't do this refactoring, I think.\n> > >\n> > > Enum is an integer, and we want to send byte. The function asserts that the\n> > > enum fits a byte. If there's a way to declare byte long enums I would use\n> > > that. 
But I didn't find a way to do that.\n> >\n> > I didn't look at the code, but maybe it's sufficient to add a\n> > StaticAssert?\n>\n> That check needs to visit all symbols in a enum and confirm that each\n> of them is in a certain range.\n>\n\nCan we define something like LOGICAL_REP_MSG_LAST (also add a comment\nindicating this is a fake message and must be the last one) as the\nlast and just check that?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 23 Oct 2020 18:23:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, 23 Oct 2020 at 18:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Oct 23, 2020 at 11:50 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 22 Oct 2020 22:31:41 -0300, Alvaro Herrera <\n> alvherre@alvh.no-ip.org> wrote in\n> > > On 2020-Oct-22, Ashutosh Bapat wrote:\n> > >\n> > > > On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com>\n> > > > wrote:\n> > >\n> > > > > pg_send_logicalrep_msg_type() looks somewhat too-much. If we need\n> > > > > something like that we shouldn't do this refactoring, I think.\n> > > >\n> > > > Enum is an integer, and we want to send byte. The function asserts\n> that the\n> > > > enum fits a byte. If there's a way to declare byte long enums I\n> would use\n> > > > that. But I didn't find a way to do that.\n> > >\n> > > I didn't look at the code, but maybe it's sufficient to add a\n> > > StaticAssert?\n> >\n> > That check needs to visit all symbols in a enum and confirm that each\n> > of them is in a certain range.\n> >\n>\n> Can we define something like LOGICAL_REP_MSG_LAST (also add a comment\n> indicating this is a fake message and must be the last one) as the\n> last and just check that?\n>\n>\nI don't think that's required once I applied suggestions from Kyotaro and\nPeter. 
Please check the latest patch.\nUsually LAST is added to an enum when we need to cap the number of symbols\nor want to find the number of symbols. None of that is necessary here. Do\nyou see any other use?\n\n-- \nBest Wishes,\nAshutosh\n\nOn Fri, 23 Oct 2020 at 18:23, Amit Kapila <amit.kapila16@gmail.com> wrote:On Fri, Oct 23, 2020 at 11:50 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 22 Oct 2020 22:31:41 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > On 2020-Oct-22, Ashutosh Bapat wrote:\n> >\n> > > On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > > wrote:\n> >\n> > > > pg_send_logicalrep_msg_type() looks somewhat too-much.  If we need\n> > > > something like that we shouldn't do this refactoring, I think.\n> > >\n> > > Enum is an integer, and we want to send byte. The function asserts that the\n> > > enum fits a byte. If there's a way to declare byte long enums I would use\n> > > that. But I didn't find a way to do that.\n> >\n> > I didn't look at the code, but maybe it's sufficient to add a\n> > StaticAssert?\n>\n> That check needs to visit all symbols in a enum and confirm that each\n> of them is in a certain range.\n>\n\nCan we define something like LOGICAL_REP_MSG_LAST (also add a comment\nindicating this is a fake message and must be the last one) as the\nlast and just check that?I don't think that's required once I applied suggestions from Kyotaro and Peter. Please check the latest patch. Usually LAST is added to an enum when we need to cap the number of symbols or want to find the number of symbols. None of that is necessary here. 
Do you see any other use?-- Best Wishes,Ashutosh", "msg_date": "Fri, 23 Oct 2020 18:25:57 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, Oct 23, 2020 at 6:26 PM Ashutosh Bapat\n<ashutosh.bapat@2ndquadrant.com> wrote:\n>\n>\n>\n> On Fri, 23 Oct 2020 at 18:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Oct 23, 2020 at 11:50 AM Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote:\n>> >\n>> > At Thu, 22 Oct 2020 22:31:41 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n>> > > On 2020-Oct-22, Ashutosh Bapat wrote:\n>> > >\n>> > > > On Thu, 22 Oct 2020 at 14:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n>> > > > wrote:\n>> > >\n>> > > > > pg_send_logicalrep_msg_type() looks somewhat too-much. If we need\n>> > > > > something like that we shouldn't do this refactoring, I think.\n>> > > >\n>> > > > Enum is an integer, and we want to send byte. The function asserts that the\n>> > > > enum fits a byte. If there's a way to declare byte long enums I would use\n>> > > > that. But I didn't find a way to do that.\n>> > >\n>> > > I didn't look at the code, but maybe it's sufficient to add a\n>> > > StaticAssert?\n>> >\n>> > That check needs to visit all symbols in a enum and confirm that each\n>> > of them is in a certain range.\n>> >\n>>\n>> Can we define something like LOGICAL_REP_MSG_LAST (also add a comment\n>> indicating this is a fake message and must be the last one) as the\n>> last and just check that?\n>>\n>\n> I don't think that's required once I applied suggestions from Kyotaro and Peter. Please check the latest patch.\n> Usually LAST is added to an enum when we need to cap the number of symbols or want to find the number of symbols. None of that is necessary here. 
Do you see any other use?\n>\n\nYou mentioned in the beginning that you prefer to use Enum instead of\ndefine so that switch cases can detect any remaining items but I have\ntried adding extra enum value at the end and didn't handle that in\nswitch case but I didn't get any compilation warning or error. Do we\nneed something else to detect that at compile time?\n\nSome comments assuming we want to use enum either because I am missing\nsomething or due to some other reason we have not discussed yet.\n\n1.\n+ LOGICAL_REP_MSG_STREAM_ABORT = 'A',\n+} LogicalRepMsgType;\n\nThere is no need for a comma after the last message.\n\n2.\n+/*\n+ * Logical message types\n+ *\n+ * Used by logical replication wire protocol.\n+ *\n+ * Note: though this is an enum it should fit a single byte and should be a\n+ * printable character.\n+ */\n+typedef enum\n+{\n\nI think we can expand the comments to probably say why we need these\nto fit in a single byte or what problem it can cause if that rule is\ndisobeyed. This is to make the next person clear why we are imposing\nsuch a rule.\n\n3.\n+typedef enum\n+{\n..\n+} LogicalRepMsgType;\n\nThere are places in code where we use the enum name\n(LogicalRepMsgType) both in the start and end. See TypeCat,\nCoercionMethod, CoercionCodes, etc. I see places where we use the way\nyou have in the code. I would prefer the way we have used at places\nlike TypeCat as that makes it easier to read.\n\n4.\n switch (action)\n {\n- /* BEGIN */\n- case 'B':\n+ case LOGICAL_REP_MSG_BEGIN:\n apply_handle_begin(s);\n- break;\n- /* COMMIT */\n- case 'C':\n+ return;\n\nI think we can simply use 'return apply_handle_begin;' instead of\nadding return in another line. 
Again, I think we changed this handling\nin apply_dispatch() to improve the case where we can detect at the\ncompile time any missing enum but at this stage it is not clear to me\nif that is true.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 30 Oct 2020 09:17:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, Oct 30, 2020 at 2:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\nHi Amit\n\n> You mentioned in the beginning that you prefer to use Enum instead of\n> define so that switch cases can detect any remaining items but I have\n> tried adding extra enum value at the end and didn't handle that in\n> switch case but I didn't get any compilation warning or error. Do we\n> need something else to detect that at compile time?\n\nSee [1] some GCC compiler options that can expose missing cases like those.\n\ne.g. use -Wswitch or -Wswitch-enum\nDetection depends if the switch has a default case or not.\n\n> 4.\n> switch (action)\n> {\n> - /* BEGIN */\n> - case 'B':\n> + case LOGICAL_REP_MSG_BEGIN:\n> apply_handle_begin(s);\n> - break;\n> - /* COMMIT */\n> - case 'C':\n> + return;\n>\n> I think we can simply use 'return apply_handle_begin;' instead of\n> adding return in another line. 
Again, I think we changed this handling\n> in apply_dispatch() to improve the case where we can detect at the\n> compile time any missing enum but at this stage it is not clear to me\n> if that is true.\n\nIIUC getting rid of the default from the switch can make the missing\nenum detection easier because then you can use -Wswitch option to\nexpose the problem (instead of -Wswitch-enum which may give lots of\nfalse positives as well)\n\n===\n\n[1] https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wswitch\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 30 Oct 2020 16:07:21 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, Oct 30, 2020 at 10:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Oct 30, 2020 at 2:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Hi Amit\n>\n> > You mentioned in the beginning that you prefer to use Enum instead of\n> > define so that switch cases can detect any remaining items but I have\n> > tried adding extra enum value at the end and didn't handle that in\n> > switch case but I didn't get any compilation warning or error. Do we\n> > need something else to detect that at compile time?\n>\n> See [1] some GCC compiler options that can expose missing cases like those.\n>\n\nThanks, I am able to see the warnings now.\n\n> e.g. use -Wswitch or -Wswitch-enum\n> Detection depends if the switch has a default case or not.\n>\n> > 4.\n> > switch (action)\n> > {\n> > - /* BEGIN */\n> > - case 'B':\n> > + case LOGICAL_REP_MSG_BEGIN:\n> > apply_handle_begin(s);\n> > - break;\n> > - /* COMMIT */\n> > - case 'C':\n> > + return;\n> >\n> > I think we can simply use 'return apply_handle_begin;' instead of\n> > adding return in another line. 
Again, I think we changed this handling\n> > in apply_dispatch() to improve the case where we can detect at the\n> > compile time any missing enum but at this stage it is not clear to me\n> > if that is true.\n>\n> IIUC getting rid of the default from the switch can make the missing\n> enum detection easier because then you can use -Wswitch option to\n> expose the problem (instead of -Wswitch-enum which may give lots of\n> false positives as well)\n>\n\nFair enough. So, it makes sense to move the default out of the switch case.\n\nAshutosh, see if we can add in comments (or may be commit message) why\nwe preferred to use enum for these messages.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 30 Oct 2020 11:50:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, Oct 30, 2020 at 11:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 30, 2020 at 10:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > IIUC getting rid of the default from the switch can make the missing\n> > enum detection easier because then you can use -Wswitch option to\n> > expose the problem (instead of -Wswitch-enum which may give lots of\n> > false positives as well)\n> >\n>\n> Fair enough. So, it makes sense to move the default out of the switch case.\n>\n\nOne more thing I was thinking about this patch was whether it has any\nimpact w.r.t to Endianness as we are using four-bytes to represent\none-byte and it seems there is no issue with that because pq_sendbyte\naccepts just one-byte and sends that over the network. So, we could\nsee a problem only if we use any enum value which is more than\none-byte which we are anyway adding a warning message along with the\ndefinition of enum. So, we are safe here. 
Does that make sense?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 30 Oct 2020 15:00:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, 30 Oct 2020 at 09:16, Amit Kapila <amit.kapila16@gmail.com> wrote\n\n>\n> 1.\n> + LOGICAL_REP_MSG_STREAM_ABORT = 'A',\n> +} LogicalRepMsgType;\n>\n> There is no need for a comma after the last message.\n>\n> Done. Thanks for noticing it.\n\n\n> 2.\n> +/*\n> + * Logical message types\n> + *\n> + * Used by logical replication wire protocol.\n> + *\n> + * Note: though this is an enum it should fit a single byte and should be\n> a\n> + * printable character.\n> + */\n> +typedef enum\n> +{\n>\n> I think we can expand the comments to probably say why we need these\n> to fit in a single byte or what problem it can cause if that rule is\n> disobeyed. This is to make the next person clear why we are imposing\n> such a rule.\n>\n\nDone. Please check.\n\n\n>\n> 3.\n> +typedef enum\n> +{\n> ..\n> +} LogicalRepMsgType;\n>\n> There are places in code where we use the enum name\n> (LogicalRepMsgType) both in the start and end. See TypeCat,\n> CoercionMethod, CoercionCodes, etc. I see places where we use the way\n> you have in the code. I would prefer the way we have used at places\n> like TypeCat as that makes it easier to read.\n>\n\nNot my favourite style since changing the type name requires changing enum\nname to keep those consistent. But anyway done.\n\n\n>\n> 4.\n> switch (action)\n> {\n> - /* BEGIN */\n> - case 'B':\n> + case LOGICAL_REP_MSG_BEGIN:\n> apply_handle_begin(s);\n> - break;\n> - /* COMMIT */\n> - case 'C':\n> + return;\n>\n> I think we can simply use 'return apply_handle_begin;' instead of\n> adding return in another line. 
Again, I think we changed this handling\n> in apply_dispatch() to improve the case where we can detect at the\n> compile time any missing enum but at this stage it is not clear to me\n> if that is true.\n>\n\nI don't see much value in writing it like \"return apply_handle_begin()\";\ngives an impression that apply_handle_begin() and apply_dispatch() are\nreturning something which they are not. I would prefer return on separate\nline unless there's something more than style improvement.\n\nI have added rationale behind Enum in the commit message as you suggested\nin one of the later mails.\n\nPFA patch addressing your comments.\n-- \nBest Wishes,\nAshutosh", "msg_date": "Fri, 30 Oct 2020 17:05:36 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, 30 Oct 2020 at 14:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Oct 30, 2020 at 11:50 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Fri, Oct 30, 2020 at 10:37 AM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> > >\n> > > IIUC getting rid of the default from the switch can make the missing\n> > > enum detection easier because then you can use -Wswitch option to\n> > > expose the problem (instead of -Wswitch-enum which may give lots of\n> > > false positives as well)\n> > >\n> >\n> > Fair enough. So, it makes sense to move the default out of the switch\n> case.\n> >\n>\n> One more thing I was thinking about this patch was whether it has any\n> impact w.r.t to Endianness as we are using four-bytes to represent\n> one-byte and it seems there is no issue with that because pq_sendbyte\n> accepts just one-byte and sends that over the network. So, we could\n> see a problem only if we use any enum value which is more than\n> one-byte which we are anyway adding a warning message along with the\n> definition of enum. So, we are safe here. 
Does that make sense?\n>\n>\nyes. Endian-ness should be handled by the compiler transparently.\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Fri, 30 Oct 2020 17:06:57 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, Oct 30, 2020 at 5:05 PM Ashutosh Bapat\n<ashutosh.bapat@2ndquadrant.com> wrote:\n>\n>\n>\n> On Fri, 30 Oct 2020 at 09:16, Amit Kapila <amit.kapila16@gmail.com> wrote\n>>\n>> I think we can simply use 'return apply_handle_begin;' instead of\n>> adding return in another line. 
Again, I think we changed this handling\n>> in apply_dispatch() to improve the case where we can detect at the\n>> compile time any missing enum but at this stage it is not clear to me\n>> if that is true.\n>\n>\n> I don't see much value in writing it like \"return apply_handle_begin()\"; gives an impression that apply_handle_begin() and apply_dispatch() are returning something which they are not. I would prefer return on separate line unless there's something more than style improvement.\n>\n\nFair enough.\n\n> I have added rationale behind Enum in the commit message as you suggested in one of the later mails.\n>\n> PFA patch addressing your comments.\n>\n\nI don't like the word 'Enumize' in commit message. How about changing\nit to something like: (a) Add defines for logical replication protocol\nmessages, or (b) Associate names with logical replication protocol\nmessages.\n\n+ 2. It's easy to locate the code handling a given type.\n\nIn the above instead of 'type', shouldn't it be 'message'.\n\nOther than that the patch looks good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 30 Oct 2020 17:38:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, 30 Oct 2020 at 17:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> I don't like the word 'Enumize' in commit message. How about changing\n> it to something like: (a) Add defines for logical replication protocol\n> messages, or (b) Associate names with logical replication protocol\n> messages.\n>\n\nI have used \"Use Enum for top level logical replication message types\" in\nthe attached patch. But please free to use (a) if you feel so.\n\n\n>\n> + 2. It's easy to locate the code handling a given type.\n>\n> In the above instead of 'type', shouldn't it be 'message'.\n>\n\nUsed \"message type\". 
But please feel free to use \"message\" if you think\nthat's appropriate.\n\n\n>\n> Other than that the patch looks good to me.\n>\n>\nPatch with updated commit message and also the list of reviewers\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Fri, 30 Oct 2020 17:52:00 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Fri, Oct 30, 2020 at 5:52 PM Ashutosh Bapat\n<ashutosh.bapat@2ndquadrant.com> wrote:\n>\n> On Fri, 30 Oct 2020 at 17:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>>\n>>\n>> Other than that the patch looks good to me.\n>>\n>\n> Patch with updated commit message and also the list of reviewers\n>\n\nThanks, pushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 2 Nov 2020 14:15:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "Thanks Amit.\n\nOn Mon, 2 Nov 2020 at 14:15, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Oct 30, 2020 at 5:52 PM Ashutosh Bapat\n> <ashutosh.bapat@2ndquadrant.com> wrote:\n> >\n> > On Fri, 30 Oct 2020 at 17:37, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >>\n> >>\n> >> Other than that the patch looks good to me.\n> >>\n> >\n> > Patch with updated commit message and also the list of reviewers\n> >\n>\n> Thanks, pushed!\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Mon, 2 Nov 2020 14:23:57 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "Hi Hackers.\n\nLast month there was a commit [1] for replacing logical replication\nmessage type characters with enums of equivalent values.\n\nI was revisiting this code recently and I think due to oversight that\ninitial patch was incomplete. IIUC there are several more enum\nsubstitutions which should have been made.\n\nPSA my patch which adds those missing substitutions.\n\n---\n\n[1] https://github.com/postgres/postgres/commit/644f0d7cc9c2cb270746f2024c706554e0fbec82\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 25 Nov 2020 19:56:24 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Wed, Nov 25, 2020 at 2:26 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Hackers.\n>\n> Last month there was a commit [1] for replacing logical replication\n> message type characters with enums of equivalent values.\n>\n> I was revisiting this code recently and I think due to oversight that\n> initial patch was incomplete. IIUC there are several more enum\n> substitutions which should have been made.\n>\n> PSA my patch which adds those missing substitutions.\n>\n\nGood catch. 
I'll review it in a day or so.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 25 Nov 2020 14:52:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Wed, Nov 25, 2020 at 2:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 25, 2020 at 2:26 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi Hackers.\n> >\n> > Last month there was a commit [1] for replacing logical replication\n> > message type characters with enums of equivalent values.\n> >\n> > I was revisiting this code recently and I think due to oversight that\n> > initial patch was incomplete. IIUC there are several more enum\n> > substitutions which should have been made.\n> >\n> > PSA my patch which adds those missing substitutions.\n> >\n>\n> Good catch. I'll review it in a day or so.\n>\n\nThe patch looks good to me and I have pushed it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 26 Nov 2020 10:15:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" }, { "msg_contents": "On Thu, Nov 26, 2020 at 10:15 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Wed, Nov 25, 2020 at 2:52 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Wed, Nov 25, 2020 at 2:26 PM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> > >\n> > > Hi Hackers.\n> > >\n> > > Last month there was a commit [1] for replacing logical replication\n> > > message type characters with enums of equivalent values.\n> > >\n> > > I was revisiting this code recently and I think due to oversight that\n> > > initial patch was incomplete. IIUC there are several more enum\n> > > substitutions which should have been made.\n> > >\n> > > PSA my patch which adds those missing substitutions.\n> > >\n> >\n> > Good catch. 
I'll review it in a day or so.\n> >\n>\n> The patch looks good to me and I have pushed it.\n>\n\nThanks Peter and Amit for noticing the missing substitutions and fixing\nthose.\n\n--\nBest Wishes,\nAshutosh", "msg_date": "Thu, 26 Nov 2020 11:52:02 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Enumize logical replication message actions" } ]
[ { "msg_contents": "Hi, hackers.\nFor some distributions of data in tables, different loops in nested loop \njoins can take different time and process different amounts of entries. \nIt makes average statistics returned by explain analyze not very useful \nfor DBA.\nTo fix it, here is the patch that add printing of min and max statistics \nfor time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\nPlease don't hesitate to share any thoughts on this topic!\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 16 Oct 2020 10:42:43 +0300", "msg_from": "e.sokolova@postgrespro.ru", "msg_from_op": true, "msg_subject": "[PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "pá 16. 10. 2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru> napsal:\n\n> Hi, hackers.\n> For some distributions of data in tables, different loops in nested loop\n> joins can take different time and process different amounts of entries.\n> It makes average statistics returned by explain analyze not very useful\n> for DBA.\n> To fix it, here is the patch that add printing of min and max statistics\n> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\n> Please don't hesitate to share any thoughts on this topic!\n>\n\n+1\n\nThis is great feature - sometimes it can be pretty messy current limited\nformat\n\nPavel\n\n-- \n> Ekaterina Sokolova\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n\npá 16. 10. 2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru> napsal:Hi, hackers.\nFor some distributions of data in tables, different loops in nested loop \njoins can take different time and process different amounts of entries. 
\nIt makes average statistics returned by explain analyze not very useful \nfor DBA.\nTo fix it, here is the patch that add printing of min and max statistics \nfor time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\nPlease don't hesitate to share any thoughts on this topic!\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 16 Oct 2020 10:42:43 +0300", "msg_from": "e.sokolova@postgrespro.ru", "msg_from_op": true, "msg_subject": "[PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "pá 16. 10. 2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru> napsal:\n\n> Hi, hackers.\n> For some distributions of data in tables, different loops in nested loop\n> joins can take different time and process different amounts of entries.\n> It makes average statistics returned by explain analyze not very useful\n> for DBA.\n> To fix it, here is the patch that add printing of min and max statistics\n> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\n> Please don't hesitate to share any thoughts on this topic!\n>\n\n+1\n\nThis is great feature - sometimes it can be pretty messy current limited\nformat\n\nPavel\n\n-- \n> Ekaterina Sokolova\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company", "msg_date": "Fri, 16 Oct 2020 10:11:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Le ven. 16 oct. 2020 à 16:12, Pavel Stehule <pavel.stehule@gmail.com> a\nécrit :\n\n>\n>\n> pá 16. 10. 2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru> napsal:\n>\n>> Hi, hackers.\n>> For some distributions of data in tables, different loops in nested loop\n>> joins can take different time and process different amounts of entries.\n>> It makes average statistics returned by explain analyze not very useful\n>> for DBA.\n>> To fix it, here is the patch that add printing of min and max statistics\n>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\n>> Please don't hesitate to share any thoughts on this topic!\n>>\n>\n> +1\n>\n> This is great feature - sometimes it can be pretty messy current limited\n> format\n>\n\n+1, this can be very handy!\n\n>", "msg_date": "Fri, 16 Oct 2020 17:07:34 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On 16.10.2020 12:07, Julien Rouhaud wrote:\n> Le ven. 16 oct. 2020 à 16:12, Pavel Stehule <pavel.stehule@gmail.com \n> <mailto:pavel.stehule@gmail.com>> a écrit :\n>\n>\n>\n> pá 16. 10. 2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru\n> <mailto:e.sokolova@postgrespro.ru>> napsal:\n>\n> Hi, hackers.\n> For some distributions of data in tables, different loops in\n> nested loop\n> joins can take different time and process different amounts of\n> entries.\n> It makes average statistics returned by explain analyze not\n> very useful\n> for DBA.\n> To fix it, here is the patch that add printing of min and max\n> statistics\n> for time and rows across all loops in Nested Loop to EXPLAIN\n> ANALYSE.\n> Please don't hesitate to share any thoughts on this topic!\n>\n>\n> +1\n>\n> This is great feature - sometimes it can be pretty messy current\n> limited format\n>\n>\n> +1, this can be very handy!\n>\nCool.\nI have added your patch to the commitfest, so it won't get lost.\nhttps://commitfest.postgresql.org/30/2765/\n\nI will review the code next week. 
Unfortunately, I cannot give\n any feedback about usability of this feature.\n\n User visible change is:\n\n -               ->  Nested Loop (actual rows=N loops=N)\n +              ->  Nested Loop (actual min_rows=0 rows=0\n max_rows=0 loops=2)\n\n Pavel, Julien, could you please say if it looks good?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sat, 17 Oct 2020 01:11:24 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "so 17. 10. 2020 v 0:11 odesílatel Anastasia Lubennikova <\na.lubennikova@postgrespro.ru> napsal:\n\n> On 16.10.2020 12:07, Julien Rouhaud wrote:\n>\n> Le ven. 16 oct. 2020 à 16:12, Pavel Stehule <pavel.stehule@gmail.com> a\n> écrit :\n>\n>>\n>>\n>> pá 16. 10. 2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru> napsal:\n>>\n>>> Hi, hackers.\n>>> For some distributions of data in tables, different loops in nested loop\n>>> joins can take different time and process different amounts of entries.\n>>> It makes average statistics returned by explain analyze not very useful\n>>> for DBA.\n>>> To fix it, here is the patch that add printing of min and max statistics\n>>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\n>>> Please don't hesitate to share any thoughts on this topic!\n>>>\n>>\n>> +1\n>>\n>> This is great feature - sometimes it can be pretty messy current limited\n>> format\n>>\n>\n> +1, this can be very handy!\n>\n>> Cool.\n> I have added your patch to the commitfest, so it won't get lost.\n> https://commitfest.postgresql.org/30/2765/\n>\n> I will review the code next week. 
Unfortunately, I cannot give any\n> feedback about usability of this feature.\n>\n> User visible change is:\n>\n> - -> Nested Loop (actual rows=N loops=N)\n> + -> Nested Loop (actual min_rows=0 rows=0 max_rows=0\n> loops=2)\n>\n\nThis interface is ok - there is not too much space for creativity. I can\nimagine displaying variance or average - but I am afraid about very bad\nperformance impacts.\n\nRegards\n\nPavel\n\n>\n> Pavel, Julien, could you please say if it looks good?\n>\n> --\n> Anastasia Lubennikova\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\nso 17. 10. 2020 v 0:11 odesílatel Anastasia Lubennikova <a.lubennikova@postgrespro.ru> napsal:\n\nOn 16.10.2020 12:07, Julien Rouhaud\n wrote:\n\n\n\n\nLe ven. 16 oct. 2020 à\n 16:12, Pavel Stehule <pavel.stehule@gmail.com> a\n écrit :\n\n\n\n\n\n\n\npá 16. 10. 2020 v 9:43\n odesílatel <e.sokolova@postgrespro.ru>\n napsal:\n\nHi, hackers.\n For some distributions of data in tables, different\n loops in nested loop \n joins can take different time and process different\n amounts of entries. \n It makes average statistics returned by explain\n analyze not very useful \n for DBA.\n To fix it, here is the patch that add printing of min\n and max statistics \n for time and rows across all loops in Nested Loop to\n EXPLAIN ANALYSE.\n Please don't hesitate to share any thoughts on this\n topic!\n\n\n\n+1\n\n\nThis is great feature - sometimes it can be pretty\n messy current limited format\n\n\n\n\n\n\n+1, this can be very handy! \n\n\n\n\n\n\nCool.\n I have added your patch to the commitfest, so it won't get lost.\nhttps://commitfest.postgresql.org/30/2765/\n\n I will review the code next week.  
Unfortunately, I cannot give\n any feedback about usability of this feature.\n\n User visible change is:\n\n -               ->  Nested Loop (actual rows=N loops=N)\n +              ->  Nested Loop (actual min_rows=0 rows=0\n max_rows=0 loops=2)This interface is ok - there is not too much space for creativity. I can imagine displaying variance or average - but I am afraid about very bad performance impacts.RegardsPavel\n\n Pavel, Julien, could you please say if it looks good?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sat, 17 Oct 2020 06:14:54 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On Sat, Oct 17, 2020 at 12:15 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> so 17. 10. 2020 v 0:11 odesílatel Anastasia Lubennikova <a.lubennikova@postgrespro.ru> napsal:\n>>\n>> On 16.10.2020 12:07, Julien Rouhaud wrote:\n>>\n>> Le ven. 16 oct. 2020 à 16:12, Pavel Stehule <pavel.stehule@gmail.com> a écrit :\n>>>\n>>>\n>>>\n>>> pá 16. 10. 2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru> napsal:\n>>>>\n>>>> Hi, hackers.\n>>>> For some distributions of data in tables, different loops in nested loop\n>>>> joins can take different time and process different amounts of entries.\n>>>> It makes average statistics returned by explain analyze not very useful\n>>>> for DBA.\n>>>> To fix it, here is the patch that add printing of min and max statistics\n>>>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\n>>>> Please don't hesitate to share any thoughts on this topic!\n>>>\n>>>\n>>> +1\n>>>\n>>> This is great feature - sometimes it can be pretty messy current limited format\n>>\n>>\n>> +1, this can be very handy!\n>>\n>> Cool.\n>> I have added your patch to the commitfest, so it won't get lost.\n\nThanks! 
I'll also try to review it next week.\n\n>> https://commitfest.postgresql.org/30/2765/\n>>\n>> I will review the code next week. Unfortunately, I cannot give any feedback about usability of this feature.\n>>\n>> User visible change is:\n>>\n>> - -> Nested Loop (actual rows=N loops=N)\n>> + -> Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)\n>\n>\n> This interface is ok - there is not too much space for creativity.\n\nYes I also think it's ok. We should also consider usability for tools\nlike explain.depesz.com, I don't know if the current output is best.\nI'm adding Depesz and Pierre which are both working on this kind of\ntool for additional input.\n\n> I can imagine displaying variance or average - but I am afraid about very bad performance impacts.\n\nThe original counter (rows here) is already an average right?\nVariance could be nice too. Instrumentation will already spam\ngettimeofday() calls for nested loops, I don't think that computing\nvariance would add that much overhead?\n\n\n", "msg_date": "Sat, 17 Oct 2020 12:26:08 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "so 17. 10. 2020 v 6:26 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Sat, Oct 17, 2020 at 12:15 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > so 17. 10. 2020 v 0:11 odesílatel Anastasia Lubennikova <\n> a.lubennikova@postgrespro.ru> napsal:\n> >>\n> >> On 16.10.2020 12:07, Julien Rouhaud wrote:\n> >>\n> >> Le ven. 16 oct. 2020 à 16:12, Pavel Stehule <pavel.stehule@gmail.com>\n> a écrit :\n> >>>\n> >>>\n> >>>\n> >>> pá 16. 10. 
2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru> napsal:\n> >>>>\n> >>>> Hi, hackers.\n> >>>> For some distributions of data in tables, different loops in nested\n> loop\n> >>>> joins can take different time and process different amounts of\n> entries.\n> >>>> It makes average statistics returned by explain analyze not very\n> useful\n> >>>> for DBA.\n> >>>> To fix it, here is the patch that add printing of min and max\n> statistics\n> >>>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\n> >>>> Please don't hesitate to share any thoughts on this topic!\n> >>>\n> >>>\n> >>> +1\n> >>>\n> >>> This is great feature - sometimes it can be pretty messy current\n> limited format\n> >>\n> >>\n> >> +1, this can be very handy!\n> >>\n> >> Cool.\n> >> I have added your patch to the commitfest, so it won't get lost.\n>\n> Thanks! I'll also try to review it next week.\n>\n> >> https://commitfest.postgresql.org/30/2765/\n> >>\n> >> I will review the code next week. Unfortunately, I cannot give any\n> feedback about usability of this feature.\n> >>\n> >> User visible change is:\n> >>\n> >> - -> Nested Loop (actual rows=N loops=N)\n> >> + -> Nested Loop (actual min_rows=0 rows=0 max_rows=0\n> loops=2)\n> >\n> >\n> > This interface is ok - there is not too much space for creativity.\n>\n> Yes I also think it's ok. We should also consider usability for tools\n> like explain.depesz.com, I don't know if the current output is best.\n> I'm adding Depesz and Pierre which are both working on this kind of\n> tool for additional input.\n>\n> > I can imagine displaying variance or average - but I am afraid about\n> very bad performance impacts.\n>\n> The original counter (rows here) is already an average right?\n> Variance could be nice too. 
Instrumentation will already spam\n> gettimeofday() calls for nested loops, I don't think that computing\n> variance would add that much overhead?\n>\n\nThere is not any problem to write benchmark for worst case and test it\n\nso 17. 10. 2020 v 6:26 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:On Sat, Oct 17, 2020 at 12:15 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> so 17. 10. 2020 v 0:11 odesílatel Anastasia Lubennikova <a.lubennikova@postgrespro.ru> napsal:\n>>\n>> On 16.10.2020 12:07, Julien Rouhaud wrote:\n>>\n>> Le ven. 16 oct. 2020 à 16:12, Pavel Stehule <pavel.stehule@gmail.com> a écrit :\n>>>\n>>>\n>>>\n>>> pá 16. 10. 2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru> napsal:\n>>>>\n>>>> Hi, hackers.\n>>>> For some distributions of data in tables, different loops in nested loop\n>>>> joins can take different time and process different amounts of entries.\n>>>> It makes average statistics returned by explain analyze not very useful\n>>>> for DBA.\n>>>> To fix it, here is the patch that add printing of min and max statistics\n>>>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\n>>>> Please don't hesitate to share any thoughts on this topic!\n>>>\n>>>\n>>> +1\n>>>\n>>> This is great feature - sometimes it can be pretty messy current limited format\n>>\n>>\n>> +1, this can be very handy!\n>>\n>> Cool.\n>> I have added your patch to the commitfest, so it won't get lost.\n\nThanks!  I'll also try to review it next week.\n\n>> https://commitfest.postgresql.org/30/2765/\n>>\n>> I will review the code next week.  Unfortunately, I cannot give any feedback about usability of this feature.\n>>\n>> User visible change is:\n>>\n>> -               ->  Nested Loop (actual rows=N loops=N)\n>> +              ->  Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)\n>\n>\n> This interface is ok - there is not too much space for creativity.\n\nYes I also think it's ok. 
We should also consider usability for tools\nlike explain.depesz.com, I don't know if the current output is best.\nI'm adding Depesz and Pierre which are both working on this kind of\ntool for additional input.\n\n> I can imagine displaying variance or average - but I am afraid about very bad performance impacts.\n\nThe original counter (rows here) is already an average right?\nVariance could be nice too.  Instrumentation will already spam\ngettimeofday() calls for nested loops, I don't think that computing\nvariance would add that much overhead?There is not any problem to write benchmark for worst case and test it", "msg_date": "Sat, 17 Oct 2020 06:28:24 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On Sat, Oct 17, 2020 at 12:26:08PM +0800, Julien Rouhaud wrote:\n> >> - -> Nested Loop (actual rows=N loops=N)\n> >> + -> Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)\n> > This interface is ok - there is not too much space for creativity.\n> Yes I also think it's ok. We should also consider usability for tools\n> like explain.depesz.com, I don't know if the current output is best.\n> I'm adding Depesz and Pierre which are both working on this kind of\n> tool for additional input.\n\nThanks for heads up. 
This definitely will need some changes on my side,\nbut should be easy to handle.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Sat, 17 Oct 2020 10:23:45 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On Fri, Oct 16, 2020 at 3:11 PM Anastasia Lubennikova <\na.lubennikova@postgrespro.ru> wrote:\n\n> User visible change is:\n>\n>\n> - -> Nested Loop (actual rows=N loops=N)\n> + -> Nested Loop (actual min_rows=0 rows=0 max_rows=0\n> loops=2)\n>\nI'd be inclined to append both new rows to the end.\n\n(actual rows=N loops=N min_rows=N max_rows=N)\n\nrows * loops is still an important calculation.\n\nWhy not just add total_rows while we are at it - last in the listing?\n\n(actual rows=N loops=N min_rows=N max_rows=N total_rows=N)\n\nDavid J.\n\nOn Fri, Oct 16, 2020 at 3:11 PM Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:\n\nUser visible change is:\n\n -               ->  Nested Loop (actual rows=N loops=N)\n +              ->  Nested Loop (actual min_rows=0 rows=0\n max_rows=0 loops=2)I'd be inclined to append both new rows to the end.(actual rows=N loops=N min_rows=N max_rows=N)rows * loops is still an important calculation.Why not just add total_rows while we are at it - last in the listing?(actual rows=N loops=N min_rows=N max_rows=N total_rows=N)  David J.", "msg_date": "Sat, 17 Oct 2020 08:11:34 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On Sat, Oct 17, 2020 at 6:11 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n>\n> On 16.10.2020 12:07, Julien Rouhaud wrote:\n>\n> Le ven. 16 oct. 2020 à 16:12, Pavel Stehule <pavel.stehule@gmail.com> a écrit :\n>>\n>>\n>>\n>> pá 16. 10. 
2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru> napsal:\n>>>\n>>> Hi, hackers.\n>>> For some distributions of data in tables, different loops in nested loop\n>>> joins can take different time and process different amounts of entries.\n>>> It makes average statistics returned by explain analyze not very useful\n>>> for DBA.\n>>> To fix it, here is the patch that add printing of min and max statistics\n>>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\n>>> Please don't hesitate to share any thoughts on this topic!\n>>\n>>\n>> +1\n>>\n>> This is great feature - sometimes it can be pretty messy current limited format\n>\n>\n> +1, this can be very handy!\n>\n> Cool.\n> I have added your patch to the commitfest, so it won't get lost.\n> https://commitfest.postgresql.org/30/2765/\n>\n> I will review the code next week. Unfortunately, I cannot give any feedback about usability of this feature.\n>\n> User visible change is:\n>\n> - -> Nested Loop (actual rows=N loops=N)\n> + -> Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)\n\nThanks for working on this feature! Here are some comments on the patch.\n\nFirst, cosmetic issues. There are a lot of whitespace issues, the new\ncode is space indented while it should be tab indented. Also there\nare 3 patches included with some fixups, could you instead push a\nsingle patch?\n\nIt also misses some modification in the regression tests. 
For instance:\n\ndiff --git a/src/test/regress/expected/partition_prune.out\nb/src/test/regress/expected/partition_prune.out\nindex 50d2a7e4b9..db0b167ef4 100644\n--- a/src/test/regress/expected/partition_prune.out\n+++ b/src/test/regress/expected/partition_prune.out\n@@ -2065,7 +2065,7 @@ select explain_parallel_append('select avg(ab.a)\nfrom ab inner join lprt_a a on\n Workers Planned: 1\n Workers Launched: N\n -> Partial Aggregate (actual rows=N loops=N)\n- -> Nested Loop (actual rows=N loops=N)\n+ -> Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)\n -> Parallel Seq Scan on lprt_a a (actual rows=N loops=N)\n\nYou should update the explain_parallel_append() plpgsql function\ncreated in that test file to make sure that both \"rows\" and the two\nnew counters are changed to \"N\". There might be other similar changes\nneeded.\n\n\nNow, for the changes themselves. For the min/max time, you're\naggregating \"totaltime - instr->firsttuple\". Why removing the startup\ntime from the loop execution time? I think this should be kept.\nAlso, in explain.c you display the counters in the \"Nested loop\" node,\nbut this should be done in the inner plan node instead, as this is the\none being looped on. So the condition should probably be \"nloops > 1\"\nrather than testing if it's a NestLoop.\n\nI'm switching the patch to WoA.\n\n\n", "msg_date": "Sun, 18 Oct 2020 19:37:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "\n\nLe 17/10/2020 à 06:26, Julien Rouhaud a écrit :\n> On Sat, Oct 17, 2020 at 12:15 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>\n>> so 17. 10. 2020 v 0:11 odesílatel Anastasia Lubennikova <a.lubennikova@postgrespro.ru> napsal:\n>>>\n>>> On 16.10.2020 12:07, Julien Rouhaud wrote:\n>>>\n>>> Le ven. 16 oct. 2020 à 16:12, Pavel Stehule <pavel.stehule@gmail.com> a écrit :\n>>>>\n>>>>\n>>>>\n>>>> pá 16. 10. 
2020 v 9:43 odesílatel <e.sokolova@postgrespro.ru> napsal:\n>>>>>\n>>>>> Hi, hackers.\n>>>>> For some distributions of data in tables, different loops in nested loop\n>>>>> joins can take different time and process different amounts of entries.\n>>>>> It makes average statistics returned by explain analyze not very useful\n>>>>> for DBA.\n>>>>> To fix it, here is the patch that add printing of min and max statistics\n>>>>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\n>>>>> Please don't hesitate to share any thoughts on this topic!\n>>>>\n>>>>\n>>>> +1\n>>>>\n>>>> This is great feature - sometimes it can be pretty messy current limited format\n>>>\n>>>\n>>> +1, this can be very handy!\n>>>\n>>> Cool.\n>>> I have added your patch to the commitfest, so it won't get lost.\n> \n> Thanks! I'll also try to review it next week.\n> \n>>> https://commitfest.postgresql.org/30/2765/\n>>>\n>>> I will review the code next week. Unfortunately, I cannot give any feedback about usability of this feature.\n>>>\n>>> User visible change is:\n>>>\n>>> - -> Nested Loop (actual rows=N loops=N)\n>>> + -> Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)\n>>\n>>\n>> This interface is ok - there is not too much space for creativity.\n> \n> Yes I also think it's ok. We should also consider usability for tools\n> like explain.depesz.com, I don't know if the current output is best.\n> I'm adding Depesz and Pierre which are both working on this kind of\n> tool for additional input.\n\nSame for me and PEV2. It should be fairly easy to parse this new format.\n\n> \n>> I can imagine displaying variance or average - but I am afraid about very bad performance impacts.\n> \n> The original counter (rows here) is already an average right?\n> Variance could be nice too. Instrumentation will already spam\n> gettimeofday() calls for nested loops, I don't think that computing\n> variance would add that much overhead?\n\nThus, it's an average value. 
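Concretely, the figure being discussed is derived in explain.c as the accumulated tuple count divided by the loop count and rendered with %.0f. A minimal sketch of that derivation, illustrative only and not the actual PostgreSQL code (the helper name is made up):

```c
#include <stdio.h>

/*
 * Sketch of how the text format derives "rows": ntuples accumulated
 * across all loops, divided by nloops, rendered with %.0f -- i.e. a
 * rounded per-loop average.
 */
static void
format_actual_rows(char *buf, size_t buflen, double ntuples, double nloops)
{
    snprintf(buf, buflen, "rows=%.0f", ntuples / nloops);
}
```

For 991 tuples over 3 loops this prints rows=330, so 330 * 3 = 990 no longer matches the true total, and for 400 tuples over 1000 loops it prints rows=0.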
And to be mentioned: a rounded one! Which I\nfound a bit tricky to figure out.\n\n\n", "msg_date": "Mon, 19 Oct 2020 16:17:47 +0200", "msg_from": "Pierre Giraud <pierre.giraud@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hi,\n\nOn 2020-10-16 10:42:43 +0300, e.sokolova@postgrespro.ru wrote:\n> For some distributions of data in tables, different loops in nested loop\n> joins can take different time and process different amounts of entries. It\n> makes average statistics returned by explain analyze not very useful for\n> DBA.\n> To fix it, here is the patch that add printing of min and max statistics for\n> time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.\n> Please don't hesitate to share any thoughts on this topic!\n\nInteresting idea!\n\nI'm a bit worried that further increasing the size of struct\nInstrumentation will increase the overhead of EXPLAIN ANALYZE further -\nin some workloads we spend a fair bit of time in code handling that. It\nwould be good to try to find a few bad cases, and see what the overhead is.\n\nUnfortunately your patch is pretty hard to look at - you seem to have\nincluded your incremental hacking efforts?\n\n> From 7871ac1afe7837a6dc0676a6c9819cc68a5c0f07 Mon Sep 17 00:00:00 2001\n> From: \"e.sokolova\" <e.sokolova@postgrespro.ru>\n> Date: Fri, 4 Sep 2020 18:00:47 +0300\n> Subject: Add min and max statistics without case of\n> parallel workers. Tags: commitfest_hotfix.\n\n> From ebdfe117e4074d268e3e7c480b98d375d1d6f62b Mon Sep 17 00:00:00 2001\n> From: \"e.sokolova\" <e.sokolova@postgrespro.ru>\n> Date: Fri, 11 Sep 2020 23:04:34 +0300\n> Subject: Add case of parallel workers. Tags:\n> commitfest_hotfix.\n\n> From ecbf04d519e17b8968103364e89169ab965b41d7 Mon Sep 17 00:00:00 2001\n> From: \"e.sokolova\" <e.sokolova@postgrespro.ru>\n> Date: Fri, 18 Sep 2020 13:35:19 +0300\n> Subject: Fix bugs. 
Tags: commitfest_hotfix.\n\n> From 7566a98bbc33a24052e1334b0afe2cf341c0818f Mon Sep 17 00:00:00 2001\n> From: \"e.sokolova\" <e.sokolova@postgrespro.ru>\n> Date: Fri, 25 Sep 2020 20:09:22 +0300\n> Subject: Fix tests. Tags: commitfest_hotfix.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 19 Oct 2020 16:20:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "<rjuju123@gmail.com> wrote:\n> You should update the explain_parallel_append() plpgsql function\n> created in that test file to make sure that both \"rows\" and the two\n> new counters are changed to \"N\". There might be other similar changes\n> needed.\n\nThank you for watching this issue. I made the necessary changes in tests \nfollowing your advice.\n\n> Now, for the changes themselves. For the min/max time, you're\n> aggregating \"totaltime - instr->firsttuple\". Why removing the startup\n> time from the loop execution time? I think this should be kept.\n\nI think it's right remark. I fixed it.\n\n> Also, in explain.c you display the counters in the \"Nested loop\" node,\n> but this should be done in the inner plan node instead, as this is the\n> one being looped on. So the condition should probably be \"nloops > 1\"\n> rather than testing if it's a NestLoop.\n\nCondition \"nloops > 1\" is not the same as checking if it's NestLoop. \nThis condition will lead to printing extra statistics for nodes with \ndifferent types of join, not only Nested Loop Join. If this statistic is \nuseful for other plan nodes, it makes sense to change the condition.\n\n<andres@anarazel.de> wrote:\n> I'm a bit worried that further increasing the size of struct\n> Instrumentation will increase the overhead of EXPLAIN ANALYZE further -\n> in some workloads we spend a fair bit of time in code handling that. 
It\n> would be good to try to find a few bad cases, and see what the overhead \n> is.\n\nThank you for this comment, I will try to figure it out. Do you have \nsome examples of large overhead dependent on this struct? I think I need \nsome sample to know which way to think.\n\nThank you all for the feedback. I hope the new version of my patch will \nbe more correct and useful.\nPlease don't hesitate to share any thoughts on this topic!\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 23 Oct 2020 13:56:45 +0300", "msg_from": "e.sokolova@postgrespro.ru", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hello Ekaterina,\n\nseems like an interesting and useful improvement. I did a quick review\nof the patch - attached is a 0002 patch with a couple minor changes (the\n0001 is just your v1 patch, to keep cfbot happy).\n\n1) There's a couple changes to follow project code style (e.g. brackets\nafter \"if\" on a separate line, no brackets around single-line blocks,\netc.). I've reverted some unnecessary whitespace changes. Minor stuff.\n\n2) I don't think InstrEndLoop needs to check if min_t == 0 before\ninitializing it in the first loop. It certainly has to be 0, no? Same\nfor min_tuples. I also suggest comment explaining that we don't have to\ninitialize the max values.\n\n3) In ExplainNode, in the part processing per-worker stats, I think some\nof the fields are incorrectly referencing planstate->instrument instead\nof using the 'instrument' variable from WorkerInstrumentation.\n\n\nIn general, I agree with Andres this might add overhead to explain\nanalyze, although I doubt it's going to be measurable. But maybe try\ndoing some measurements for common and worst-cases.\n\nI wonder if we should have another option EXPLAIN option enabling this.\nI.e. 
by default we'd not collect/print this, and users would have to\npass some option to EXPLAIN. Or maybe we could tie this to VERBOSE?\n\nAlso, why print this only for nested loop, and not for all nodes with\n(nloops > 1)? I see there was some discussion why checking nodeTag is\nnecessary to identify NL, but that's not my point.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 31 Oct 2020 02:20:53 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hi,\r\n\r\nI noticed that this patch fails on the cfbot.\r\nFor this, I changed the status to: 'Waiting on Author'.\r\n\r\nCheers,\r\n//Georgios\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 10 Nov 2020 15:10:57 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Tomas Vondra писал 2020-10-31 04:20:\n\n> seems like an interesting and useful improvement. I did a quick review\n> of the patch - attached is a 0002 patch with a couple minor changes \n> (the\n> 0001 is just your v1 patch, to keep cfbot happy).\n\n Thank you for your review and changes!\n\n> 3) In ExplainNode, in the part processing per-worker stats, I think \n> some\n> of the fields are incorrectly referencing planstate->instrument instead\n> of using the 'instrument' variable from WorkerInstrumentation.\n\nIt's correct behavior because of struct WorkerInstrumentation contains \nstruct Instrumentation instrument. But planstate->instrument is struct \nInstrumentation too.\n\n> I wonder if we should have another option EXPLAIN option enabling this.\n> I.e. by default we'd not collect/print this, and users would have to\n> pass some option to EXPLAIN. 
Or maybe we could tie this to VERBOSE?\n\nIt's a good idea. Now the additional statistics are only printed when VERBOSE is set.\n\nNew version of this patch prints extra statistics for all cases of \nmultiple loops, not only for Nested Loop. Also I fixed the example by \nadding VERBOSE.\n\nPlease don't hesitate to share any thoughts on this topic!\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 12 Nov 2020 23:10:05 +0300", "msg_from": "e.sokolova@postgrespro.ru", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "> New version of this patch prints extra statistics for all cases of\n> multiple loops, not only for Nested Loop. Also I fixed the example by\n> adding VERBOSE.\n>\n> Please don't hesitate to share any thoughts on this topic!\n\nThanks a lot for working on this! I really like the extra details, and\nincluding it only with VERBOSE sounds good.\n\n> rows * loops is still an important calculation.\n>\n> Why not just add total_rows while we are at it - last in the listing?\n>\n> (actual rows=N loops=N min_rows=N max_rows=N total_rows=N)\n\nThis total_rows idea from David would really help us too, especially\nin the cases where the actual rows is rounded down to zero. We make an\nexplain visualisation tool, and it'd be nice to show people a better\ntotal than loops * actual rows. It would also help the accuracy of
It would also help the accuracy of\nsome of our tips, that use this number.\n\nApologies if this input is too late to be helpful.\n\nCheers,\nMichael\n\n\n", "msg_date": "Mon, 18 Jan 2021 11:45:09 +0000", "msg_from": "Michael Christofides <michael@pgmustard.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hello,\n\nOn Thu, 12 Nov 2020 23:10:05 +0300\ne.sokolova@postgrespro.ru wrote:\n\n> New version of this patch prints extra statistics for all cases of \n> multiple loops, not only for Nested Loop. Also I fixed the example by \n> adding VERBOSE.\n\nI think this extra statistics seems good because it is useful for DBA\nto understand explained plan. I reviewed this patch. Here are a few\ncomments:\n\n1) \npostgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2752.00 rows=991 width=8) (actual time=0.021..17.651 rows=991 loops=1)\n Output: a.i, b.j\n Join Filter: (a.i = b.j)\n Rows Removed by Join Filter: 99009\n -> Seq Scan on public.b (cost=0.00..2.00 rows=100 width=4) (actual time=0.009..0.023 rows=100 loops=1)\n Output: b.j\n -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 min_time=0.065 max_time=0.163 min_rows=1000 rows=1000 max_rows=1000 loops=100)\n Output: a.i\n Planning Time: 0.066 ms\n Execution Time: 17.719 ms\n(10 rows)\n\nI don't like this format where the extra statistics appear in the same\nline of existing information because the output format differs depended\non whether the plan node's loops > 1 or not. This makes the length of a\nline too long. Also, other information reported by VERBOSE doesn't change\nthe exiting row format and just add extra rows for new information. 
\n\nInstead, it seems good for me to add extra rows for the new statistics\nwithout changint the existing row format as other VERBOSE information,\nlike below.\n\n -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000 loops=100)\n Output: a.i\n Min Time: 0.065 ms\n Max Time: 0.163 ms\n Min Rows: 1000\n Max Rows: 1000\n\nor, like Buffers,\n\n -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000 loops=100)\n Output: a.i\n Loops: min_time=0.065 max_time=0.163 min_rows=1000 max_rows=1000\n\nand so on. What do you think about it?\n\n2)\nIn parallel scan, the extra statistics are not reported correctly.\n\npostgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=1000.00..2463.52 rows=991 width=8) (actual time=0.823..25.651 rows=991 loops=1)\n Output: a.i, b.j\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop (cost=0.00..1364.42 rows=413 width=8) (actual time=9.426..16.723 min_time=0.000 max_time=22.017 min_rows=0 rows=330 max_rows=991 loops=3)\n Output: a.i, b.j\n Join Filter: (a.i = b.j)\n Rows Removed by Join Filter: 33003\n Worker 0: actual time=14.689..14.692 rows=0 loops=1\n Worker 1: actual time=13.458..13.460 rows=0 loops=1\n -> Parallel Seq Scan on public.a (cost=0.00..9.17 rows=417 width=4) (actual time=0.049..0.152 min_time=0.000 max_time=0.202 min_rows=0 rows=333 max_rows=452 loops=3)\n Output: a.i\n Worker 0: actual time=0.040..0.130 rows=322 loops=1\n Worker 1: actual time=0.039..0.125 rows=226 loops=1\n -> Seq Scan on public.b (cost=0.00..2.00 rows=100 width=4) (actual time=0.006..0.026 min_time=0.012 max_time=0.066 min_rows=100 rows=100 max_rows=100 loops=1000)\n Output: b.j\n Worker 0: actual time=0.006..0.024 min_time=0.000 max_time=0.000 
min_rows=0 rows=100 max_rows=0 loops=322\n Worker 1: actual time=0.008..0.030 min_time=0.000 max_time=0.000 min_rows=0 rows=100 max_rows=0 loops=226\n Planning Time: 0.186 ms\n Execution Time: 25.838 ms\n(20 rows)\n\nThis reports max/min rows or time of inner scan as 0 in parallel workers,\nand as a result only the leader process's ones are accounted. To fix this,\nwe would change InstrAggNode as below.\n\n@@ -167,6 +196,10 @@ InstrAggNode(Instrumentation *dst, Instrumentation *add)\n dst->nloops += add->nloops;\n dst->nfiltered1 += add->nfiltered1;\n dst->nfiltered2 += add->nfiltered2;\n+ dst->min_t = Min(dst->min_t, add->min_t);\n+ dst->max_t = Max(dst->max_t, add->max_t);\n+ dst->min_tuples = Min(dst->min_tuples, add->min_tuples);\n+ dst->max_tuples = Max(dst->max_tuples, add->max_tuples);\n\n\n3)\nThere are garbage lines and I could not apply this patch.\n\ndiff --git a/src/test/regress/expected/timetz.out b/src/test/regress/expected/timetz.out\nindex 038bb5fa094..5294179aa45 100644\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 28 Jan 2021 21:37:13 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On Thu, Jan 28, 2021 at 8:38 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> postgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..2752.00 rows=991 width=8) (actual time=0.021..17.651 rows=991 loops=1)\n> Output: a.i, b.j\n> Join Filter: (a.i = b.j)\n> Rows Removed by Join Filter: 99009\n> -> Seq Scan on public.b (cost=0.00..2.00 rows=100 width=4) (actual time=0.009..0.023 rows=100 loops=1)\n> Output: b.j\n> -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 
width=4) (actual time=0.005..0.091 min_time=0.065 max_time=0.163 min_rows=1000 rows=1000 max_rows=1000 loops=100)\n> Output: a.i\n> Planning Time: 0.066 ms\n> Execution Time: 17.719 ms\n> (10 rows)\n>\n> I don't like this format where the extra statistics appear in the same\n> line of existing information because the output format differs depended\n> on whether the plan node's loops > 1 or not. This makes the length of a\n> line too long. Also, other information reported by VERBOSE doesn't change\n> the exiting row format and just add extra rows for new information.\n>\n> Instead, it seems good for me to add extra rows for the new statistics\n> without changint the existing row format as other VERBOSE information,\n> like below.\n>\n> -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000 loops=100)\n> Output: a.i\n> Min Time: 0.065 ms\n> Max Time: 0.163 ms\n> Min Rows: 1000\n> Max Rows: 1000\n>\n> or, like Buffers,\n>\n> -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000 loops=100)\n> Output: a.i\n> Loops: min_time=0.065 max_time=0.163 min_rows=1000 max_rows=1000\n>\n> and so on. What do you think about it?\n\nIt's true that the current output is a bit long, which isn't really\nconvenient to read. Using one of those alternative format would also\nhave the advantage of not breaking compatibility with tools that\nprocess those entries. I personally prefer the 2nd option with the\nextra \"Loops:\" line . 
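For what it's worth, a separate line should be easy for tools to consume without touching their existing "(actual ...)" parsing. A hypothetical consumer-side sketch in C, using the field names from the example above (the final field set is still open):

```c
#include <stdio.h>

/*
 * Hypothetical parser for the proposed extra line, e.g.
 *   Loops: min_time=0.065 max_time=0.163 min_rows=1000 max_rows=1000
 * Returns 1 when the line matches, 0 otherwise.
 */
static int
parse_loops_line(const char *line, double *min_t, double *max_t,
                 double *min_rows, double *max_rows)
{
    return sscanf(line,
                  " Loops: min_time=%lf max_time=%lf"
                  " min_rows=%lf max_rows=%lf",
                  min_t, max_t, min_rows, max_rows) == 4;
}
```

A line that doesn't start with "Loops:" simply fails the match, so output for nodes with a single loop is unaffected.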
For non text format, should we keep the current\nformat?\n\n> 2)\n> In parallel scan, the extra statistics are not reported correctly.\n>\n> postgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Gather (cost=1000.00..2463.52 rows=991 width=8) (actual time=0.823..25.651 rows=991 loops=1)\n> Output: a.i, b.j\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Nested Loop (cost=0.00..1364.42 rows=413 width=8) (actual time=9.426..16.723 min_time=0.000 max_time=22.017 min_rows=0 rows=330 max_rows=991 loops=3)\n> Output: a.i, b.j\n> Join Filter: (a.i = b.j)\n> Rows Removed by Join Filter: 33003\n> Worker 0: actual time=14.689..14.692 rows=0 loops=1\n> Worker 1: actual time=13.458..13.460 rows=0 loops=1\n> -> Parallel Seq Scan on public.a (cost=0.00..9.17 rows=417 width=4) (actual time=0.049..0.152 min_time=0.000 max_time=0.202 min_rows=0 rows=333 max_rows=452 loops=3)\n> Output: a.i\n> Worker 0: actual time=0.040..0.130 rows=322 loops=1\n> Worker 1: actual time=0.039..0.125 rows=226 loops=1\n> -> Seq Scan on public.b (cost=0.00..2.00 rows=100 width=4) (actual time=0.006..0.026 min_time=0.012 max_time=0.066 min_rows=100 rows=100 max_rows=100 loops=1000)\n> Output: b.j\n> Worker 0: actual time=0.006..0.024 min_time=0.000 max_time=0.000 min_rows=0 rows=100 max_rows=0 loops=322\n> Worker 1: actual time=0.008..0.030 min_time=0.000 max_time=0.000 min_rows=0 rows=100 max_rows=0 loops=226\n> Planning Time: 0.186 ms\n> Execution Time: 25.838 ms\n> (20 rows)\n>\n> This reports max/min rows or time of inner scan as 0 in parallel workers,\n> and as a result only the leader process's ones are accounted. 
To fix this,\n> we would change InstrAggNode as below.\n>\n> @@ -167,6 +196,10 @@ InstrAggNode(Instrumentation *dst, Instrumentation *add)\n> dst->nloops += add->nloops;\n> dst->nfiltered1 += add->nfiltered1;\n> dst->nfiltered2 += add->nfiltered2;\n> + dst->min_t = Min(dst->min_t, add->min_t);\n> + dst->max_t = Max(dst->max_t, add->max_t);\n> + dst->min_tuples = Min(dst->min_tuples, add->min_tuples);\n> + dst->max_tuples = Max(dst->max_tuples, add->max_tuples);\n\nAgreed.\n\n> 3)\n> There are garbage lines and I could not apply this patch.\n>\n> diff --git a/src/test/regress/expected/timetz.out b/src/test/regress/expected/timetz.out\n> index 038bb5fa094..5294179aa45 100644\n\nI also switch the patch to \"waiting on author\" on the commit fest app.\n\n\n", "msg_date": "Mon, 1 Feb 2021 13:28:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On Mon, 1 Feb 2021 13:28:45 +0800\nJulien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Thu, Jan 28, 2021 at 8:38 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > postgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;\n> > QUERY PLAN\n> > ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Nested Loop (cost=0.00..2752.00 rows=991 width=8) (actual time=0.021..17.651 rows=991 loops=1)\n> > Output: a.i, b.j\n> > Join Filter: (a.i = b.j)\n> > Rows Removed by Join Filter: 99009\n> > -> Seq Scan on public.b (cost=0.00..2.00 rows=100 width=4) (actual time=0.009..0.023 rows=100 loops=1)\n> > Output: b.j\n> > -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 min_time=0.065 max_time=0.163 min_rows=1000 rows=1000 max_rows=1000 loops=100)\n> > Output: a.i\n> > Planning Time: 0.066 ms\n> > Execution Time: 17.719 ms\n> > (10 rows)\n> 
>\n> > I don't like this format where the extra statistics appear in the same\n> > line of existing information because the output format differs depended\n> > on whether the plan node's loops > 1 or not. This makes the length of a\n> > line too long. Also, other information reported by VERBOSE doesn't change\n> > the exiting row format and just add extra rows for new information.\n> >\n> > Instead, it seems good for me to add extra rows for the new statistics\n> > without changint the existing row format as other VERBOSE information,\n> > like below.\n> >\n> > -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000 loops=100)\n> > Output: a.i\n> > Min Time: 0.065 ms\n> > Max Time: 0.163 ms\n> > Min Rows: 1000\n> > Max Rows: 1000\n> >\n> > or, like Buffers,\n> >\n> > -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000 loops=100)\n> > Output: a.i\n> > Loops: min_time=0.065 max_time=0.163 min_rows=1000 max_rows=1000\n> >\n> > and so on. What do you think about it?\n> \n> It's true that the current output is a bit long, which isn't really\n> convenient to read. Using one of those alternative format would also\n> have the advantage of not breaking compatibility with tools that\n> process those entries. I personally prefer the 2nd option with the\n> extra \"Loops:\" line . For non text format, should we keep the current\n> format?\n\nFor non text format, I think \"Max/Min Rows\", \"Max/Min Times\" are a bit\nsimple and the meaning is unclear. 
Instead, similar to a style of \"Buffers\",\ndoes it make sense using \"Max/Min Rows in Loops\" and \"Max/Min Times in Loops\"?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 1 Feb 2021 22:13:15 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Thank you all for your feedback and reforms.\nI attach a new version of the patch with the some changes and fixes. \nHere's a list of the major changes:\n1) New format of extra statistics. This is now contained in a line \nseparate from the main statistics.\n\nJulien Rouhaud писал 2021-02-01 08:28:\n> On Thu, Jan 28, 2021 at 8:38 PM Yugo NAGATA <nagata@sraoss.co.jp> \n> wrote:\n>> \n>> postgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;\n>> \n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Nested Loop (cost=0.00..2752.00 rows=991 width=8) (actual \n>> time=0.021..17.651 rows=991 loops=1)\n>> Output: a.i, b.j\n>> Join Filter: (a.i = b.j)\n>> Rows Removed by Join Filter: 99009\n>> -> Seq Scan on public.b (cost=0.00..2.00 rows=100 width=4) \n>> (actual time=0.009..0.023 rows=100 loops=1)\n>> Output: b.j\n>> -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 width=4) \n>> (actual time=0.005..0.091 min_time=0.065 max_time=0.163 min_rows=1000 \n>> rows=1000 max_rows=1000 loops=100)\n>> Output: a.i\n>> Planning Time: 0.066 ms\n>> Execution Time: 17.719 ms\n>> (10 rows)\n>> \n>> I don't like this format where the extra statistics appear in the same\n>> line of existing information because the output format differs \n>> depended\n>> on whether the plan node's loops > 1 or not. This makes the length of \n>> a\n>> line too long. 
Also, other information reported by VERBOSE doesn't \n>> change\n>> the exiting row format and just add extra rows for new information.\n>> \n>> Instead, it seems good for me to add extra rows for the new statistics\n>> without changint the existing row format as other VERBOSE information,\n>> like below.\n>> \n>> -> Seq Scan on public.a (cost=0.00..15.00 rows=1000 width=4) \n>> (actual time=0.005..0.091 rows=1000 loops=100)\n>> Output: a.i\n>> Loops: min_time=0.065 max_time=0.163 min_rows=1000 \n>> max_rows=1000\n>> \n>> and so on. What do you think about it?\n> \n\n2) Correction of the case of parallel scan\n\n>> In parallel scan, the extra statistics are not reported correctly.\n>> \n>> This reports max/min rows or time of inner scan as 0 in parallel \n>> workers,\n>> and as a result only the leader process's ones are accounted. To fix \n>> this,\n>> we would change InstrAggNode as below.\n>> \n> \n\n3) Adding extra statistics about total number of rows (total rows). \nThere were many wishes for this here.\n\nPlease don't hesitate to share any thoughts on this topic.\n\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 25 Mar 2021 12:52:42 +0300", "msg_from": "e.sokolova@postgrespro.ru", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "> diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c\n> index afc45429ba4..723eccca013 100644\n> --- a/src/backend/commands/explain.c\n> +++ b/src/backend/commands/explain.c\n> @@ -1589,29 +1589,82 @@ ExplainNode(PlanState *planstate, List *ancestors,\n> \t\tdouble\t\tstartup_ms = 1000.0 * planstate->instrument->startup / nloops;\n> \t\tdouble\t\ttotal_ms = 1000.0 * planstate->instrument->total / nloops;\n> \t\tdouble\t\trows = planstate->instrument->ntuples / nloops;\n> +\t\tdouble\t\ttotal_rows = planstate->instrument->ntuples;\n> +\t\tdouble\t\tmin_r = 
planstate->instrument->min_tuples;\n> +\t\tdouble\t\tmax_r = planstate->instrument->max_tuples;\n> +\t\tdouble\t\tmin_t_ms = 1000.0 * planstate->instrument->min_t;\n> +\t\tdouble\t\tmax_t_ms = 1000.0 * planstate->instrument->max_t;\n> \n> \t\tif (es->format == EXPLAIN_FORMAT_TEXT)\n> \t\t{\n> -\t\t\tif (es->timing)\n> -\t\t\t\tappendStringInfo(es->str,\n> -\t\t\t\t\t\t\t\t \" (actual time=%.3f..%.3f rows=%.0f loops=%.0f)\",\n> -\t\t\t\t\t\t\t\t startup_ms, total_ms, rows, nloops);\n> +\t\t\tif (nloops > 1 && es->verbose)\n> +\t\t\t{\n> +\t\t\t\tif (es->timing)\n> +\t\t\t\t{\n> +\t\t\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\t\t\t \" (actual time=%.3f..%.3f rows=%.0f loops=%.0f)\\n\",\n> +\t\t\t\t\t\t\t\t\t startup_ms, total_ms, rows, nloops);\n> +\t\t\t\t\tExplainIndentText(es);\n> +\t\t\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\t\t\t \"Loop: min_time=%.3f max_time=%.3f min_rows=%.0f max_rows=%.0f total_rows=%.0f\",\n> +\t\t\t\t\t\t\t\t\t min_t_ms, max_t_ms, min_r, max_r, total_rows);\n\nLines with \"colon\" format shouldn't use equal signs, and should use two spaces\nbetween fields. 
See:\nhttps://www.postgresql.org/message-id/20200619022001.GY17995@telsasoft.com\nhttps://www.postgresql.org/message-id/20200402054120.GC14618@telsasoft.com\nhttps://www.postgresql.org/message-id/20200407042521.GH2228%40telsasoft.com\n\n\n> +\t\t\t\t}\n> +\t\t\t\telse\n> +\t\t\t\t{\n> +\t\t\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\t\t\t \" (actual rows=%.0f loops=%.0f)\\n\",\n> +\t\t\t\t\t\t\t\t\t rows, nloops);\n> +\t\t\t\t\tExplainIndentText(es);\n> +\t\t\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\t\t\t \"Loop: min_rows=%.0f max_rows=%.0f total_rows=%.0f\",\n> +\t\t\t\t\t\t\t\t\t min_r, max_r, total_rows);\n> +\t\t\t\t}\n> +\t\t\t}\n> \t\t\telse\n> -\t\t\t\tappendStringInfo(es->str,\n> -\t\t\t\t\t\t\t\t \" (actual rows=%.0f loops=%.0f)\",\n> -\t\t\t\t\t\t\t\t rows, nloops);\n> +\t\t\t{\n> +\t\t\t\tif (es->timing)\n> +\t\t\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\t\t\t \" (actual time=%.3f..%.3f rows=%.0f loops=%.0f)\",\n> +\t\t\t\t\t\t\t\t\t startup_ms, total_ms, rows, nloops);\n> +\t\t\t\telse\n> +\t\t\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\t\t\t \" (actual rows=%.0f loops=%.0f)\",\n> +\t\t\t\t\t\t\t\t\t rows, nloops);\n> +\t\t\t}\n> \t\t}\n> \t\telse\n\nSince this is now on a separate line, the \"if (nloops > 1 && es->verbose)\"\ncan be after the existing \"if (es->timing)\", and shouldn't need its own\n\"if (es->timing)\". It should conditionally add a separate line, rather than\nduplicating the \"(actual.*\" line.\n\n> -\t\t\tif (es->timing)\n> +\t\t\tif (nloops > 1 && es->verbose)\n\nIn non-text mode, think you should not check \"nloops > 1\". Rather, print the\nfield as 0.\n\nThe whole logic is duplicated for parallel workers. This could instead be a\nfunction, called from both places. I think this would handle the computation\nas well as the output. 
This would make the patch shorter.\n\n> +\t\t\t\t\t\tExplainPropertyFloat(\"Min Time\", \"ms\",\n> +\t\t\t\t\t\t\t\t\t\t\t min_t_ms, 3, es);\n> +\t\t\t\t\t\tExplainPropertyFloat(\"Max Time\", \"ms\",\n> +\t\t\t\t\t\t\t\t\t\t\t max_t_ms, 3, es);\n\nI think the labels in non-text format should say \"Loop Min Time\" or similar.\n\n> diff --git a/src/include/executor/instrument.h b/src/include/executor/instrument.h\n> index aa8eceda5f4..93ba7c83461 100644\n> --- a/src/include/executor/instrument.h\n> +++ b/src/include/executor/instrument.h\n> @@ -66,7 +66,13 @@ typedef struct Instrumentation\n> \t/* Accumulated statistics across all completed cycles: */\n> \tdouble\t\tstartup;\t\t/* total startup time (in seconds) */\n> \tdouble\t\ttotal;\t\t\t/* total time (in seconds) */\n> +\tdouble\t\tmin_t;\t\t\t/* time of fastest loop (in seconds) */\n> +\tdouble\t\tmax_t;\t\t\t/* time of slowest loop (in seconds) */\n> \tdouble\t\tntuples;\t\t/* total tuples produced */\n> +\tdouble\t\tmin_tuples;\t\t/* min counter of produced tuples for all\n> +\t\t\t\t\t\t\t\t * loops */\n> +\tdouble\t\tmax_tuples;\t\t/* max counter of produced tuples for all\n> +\t\t\t\t\t\t\t\t * loops */\n\nAnd these variables should have a loop_ prefix like loop_min_t ?\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 25 Mar 2021 10:30:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Thank you for working on this issue. Your comments helped me make this \npatch more correct.\n\n> Lines with \"colon\" format shouldn't use equal signs, and should use two \n> spaces\n> between fields.\nDone. 
Now extra line looks like \"Loop min_rows: %.0f max_rows: %.0f \ntotal_rows: %.0f\" or \"Loop min_time: %.3f max_time: %.3f min_rows: \n%.0f max_rows: %.0f total_rows: %.0f\".\n\n> Since this is now on a separate line, the \"if (nloops > 1 && \n> es->verbose)\"\n> can be after the existing \"if (es->timing)\", and shouldn't need its own\n> \"if (es->timing)\". It should conditionally add a separate line, rather \n> than\n> duplicating the \"(actual.*\" line.\n> \n>> -\t\t\tif (es->timing)\n>> +\t\t\tif (nloops > 1 && es->verbose)\nNew version of patch contains this correction. It helped make the patch \nshorter.\n\n> In non-text mode, think you should not check \"nloops > 1\". Rather, \n> print the\n> field as 0.\nThe fields will not be zeros. New line will almost repeat the line with \nmain statistics.\n\n> I think the labels in non-text format should say \"Loop Min Time\" or \n> similar.\n> \n> And these variables should have a loop_ prefix like loop_min_t ?\nThere are good ideas. I changed it.\n\nI apply new version of this patch. I hope it got better.\nPlease don't hesitate to share any thoughts on this topic.\n\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 14 Apr 2021 14:27:36 +0300", "msg_from": "e.sokolova@postgrespro.ru", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On Wed, Apr 14, 2021 at 4:57 PM <e.sokolova@postgrespro.ru> wrote:\n>\n> Thank you for working on this issue. Your comments helped me make this\n> patch more correct.\n>\n> > Lines with \"colon\" format shouldn't use equal signs, and should use two\n> > spaces\n> > between fields.\n> Done. 
Now extra line looks like \"Loop min_rows: %.0f max_rows: %.0f\n> total_rows: %.0f\" or \"Loop min_time: %.3f max_time: %.3f min_rows:\n> %.0f max_rows: %.0f total_rows: %.0f\".\n>\n> > Since this is now on a separate line, the \"if (nloops > 1 &&\n> > es->verbose)\"\n> > can be after the existing \"if (es->timing)\", and shouldn't need its own\n> > \"if (es->timing)\". It should conditionally add a separate line, rather\n> > than\n> > duplicating the \"(actual.*\" line.\n> >\n> >> - if (es->timing)\n> >> + if (nloops > 1 && es->verbose)\n> New version of patch contains this correction. It helped make the patch\n> shorter.\n>\n> > In non-text mode, think you should not check \"nloops > 1\". Rather,\n> > print the\n> > field as 0.\n> The fields will not be zeros. New line will almost repeat the line with\n> main statistics.\n>\n> > I think the labels in non-text format should say \"Loop Min Time\" or\n> > similar.\n> >\n> > And these variables should have a loop_ prefix like loop_min_t ?\n> There are good ideas. I changed it.\n>\n> I apply new version of this patch. 
I hope it got better.\n> Please don't hesitate to share any thoughts on this topic.\n\nThe patch does not apply on Head, I'm changing the status to \"Waiting\nfor Author\":\n1 out of 2 hunks FAILED -- saving rejects to file\nsrc/test/regress/expected/partition_prune.out.rej\npatching file src/test/regress/sql/partition_prune.sql\nHunk #1 FAILED at 467.\nHunk #2 succeeded at 654 (offset -3 lines).\n1 out of 2 hunks FAILED -- saving rejects to file\nsrc/test/regress/sql/partition_prune.sql.rej\n\nPlease post a new patch rebased on head.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 14 Jul 2021 16:46:18 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hi, hackers.\n\nHere is the new version of patch that add printing of min, max and total\nstatistics for time and rows across all loops to EXPLAIN ANALYSE.\n\n1) Please add VERBOSE to display extra statistics.\n2) Format of extra statistics is:\n\n a) FORMAT TEXT\n\n> Loop min_time: N max_time: N min_rows: N max_rows: N total_rows: N\n> Output: ...\n\n b) FORMAT JSON \n\n> ...\n> \"Actual Total Time\": N,\n> \"Loop Min Time\": N,\n> \"Loop Max Time\": N,\n> \"Actual Rows\": N,\n> \"Loop Min Rows\": N,\n> \"Loop Max Rows\": N,\n> \"Loop Total Rows\": N,\n> \"Actual Loops\": N,\n> ...\n\nI hope you find this patch useful.\nPlease don't hesitate to share any thoughts on this topic!\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 17 Aug 2021 15:30:10 +0300", "msg_from": "Ekaterina Sokolova <e.sokolova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hi, and sorry to take such a long break from this patch.\n\nOn Wed, Apr 14, 2021 at 02:27:36PM +0300, e.sokolova@postgrespro.ru wrote:\n> diff --git a/src/backend/commands/explain.c 
b/src/backend/commands/explain.c\n> index b62a76e7e5a..bf8c37baefd 100644\n> --- a/src/backend/commands/explain.c\n> +++ b/src/backend/commands/explain.c\n> @@ -1615,6 +1615,11 @@ ExplainNode(PlanState *planstate, List *ancestors,\n> \t\tdouble\t\tstartup_ms = 1000.0 * planstate->instrument->startup / nloops;\n> \t\tdouble\t\ttotal_ms = 1000.0 * planstate->instrument->total / nloops;\n> \t\tdouble\t\trows = planstate->instrument->ntuples / nloops;\n> +\t\tdouble\t\tloop_total_rows = planstate->instrument->ntuples;\n> +\t\tdouble\t\tloop_min_r = planstate->instrument->min_tuples;\n> +\t\tdouble\t\tloop_max_r = planstate->instrument->max_tuples;\n> +\t\tdouble\t\tloop_min_t_ms = 1000.0 * planstate->instrument->min_t;\n> +\t\tdouble\t\tloop_max_t_ms = 1000.0 * planstate->instrument->max_t;\n> \n> \t\tif (es->format == EXPLAIN_FORMAT_TEXT)\n> \t\t{\n> @@ -1626,6 +1631,19 @@ ExplainNode(PlanState *planstate, List *ancestors,\n> \t\t\t\tappendStringInfo(es->str,\n> \t\t\t\t\t\t\t\t \" (actual rows=%.0f loops=%.0f)\",\n> \t\t\t\t\t\t\t\t rows, nloops);\n> +\t\t\tif (nloops > 1 && es->verbose)\n> +\t\t\t{\n> + appendStringInfo(es->str, \"\\n\");\n> +\t\t\t\tExplainIndentText(es);\n> +\t\t\t\tif (es->timing)\n> +\t\t\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\t\t\t \"Loop min_time: %.3f max_time: %.3f min_rows: %.0f max_rows: %.0f total_rows: %.0f\",\n> +\t\t\t\t\t\t\t\t\t loop_min_t_ms, loop_max_t_ms, loop_min_r, loop_max_r, loop_total_rows);\n\nNow that I see it, I think it should say it with spaces, and not underscores,\nlike\n| Loop Min Time: %.3f Max Time: %.3f ...\n\n\"Memory Usage:\" already has spaces in its fields names, so this is more\nconsistent, and isn't doing anything new.\n\nI think the min/max/total should be first, and the timing should follow, if\nenabled. The \"if(timing)\" doesn't even need to duplicate the output, it could\nappend just the timing part.\n\nI refactored this all into a separate function. 
I don't see why we'd repeat\nthese.\n\n+ double loop_total_rows = planstate->instrument->ntuples;\n+ double loop_min_r = planstate->instrument->min_tuples;\n+ double loop_max_r = planstate->instrument->max_tuples;\n+ double loop_min_t_ms = 1000.0 * planstate->instrument->min_t;\n+ double loop_max_t_ms = 1000.0 * planstate->instrument->max_t;\n\nI realize the duplication doesn't originate with your patch. But because of\nthe duplication, there can be inconsistencies; for example, you wrote \"ms\" in\none place and \"s\" in another. Maybe you copied from before\nf90c708a048667befbf6bbe5f48ae9695cb89de4 (an issue I reported the first time I\nwas looking at this patch).\n\nI think the non-text format timing stuff needs to be within \"if (timing)\".\n\nI'm curious to hear what you and others think of the refactoring.\n\nIt'd be nice if there's a good way to add a test case for verbose output\ninvolving parallel workers, but the output is unstable ...\n\n-- \nJustin", "msg_date": "Sun, 21 Nov 2021 22:55:06 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "On Sun, Nov 21, 2021 at 8:55 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> I'm curious to hear what you and others think of the refactoring.\n>\n> It'd be nice if there's a good way to add a test case for verbose output\n> involving parallel workers, but the output is unstable ...\n>\n\nI've reviewed this patch, and it works as expected - the refactoring\nchanges by Justin also appear to make sense to me.\n\nI've briefly thought whether this needs documentation (currently the patch\nincludes none), but there does not appear to be a good place to add\ndocumentation about this from a quick glance, so it seems acceptable to\nleave this out given the lack of more detailed EXPLAIN documentation in\ngeneral.\n\nThe one item that still feels a bit open to me is benchmarking, based on\nAndres' 
comment a while ago:\n\nOn Mon, Oct 19, 2020 at 4:20 PM Andres Freund <andres@anarazel.de> wrote:\n\n> I'm a bit worried that further increasing the size of struct\n> Instrumentation will increase the overhead of EXPLAIN ANALYZE further -\n> in some workloads we spend a fair bit of time in code handling that. It\n> would be good to try to find a few bad cases, and see what the overhead is.\n>\n\nWhilst no specific bad cases were provided, I wonder if even a simple\npgbench with auto_explain (and log_analyze=1) would be a way to test this?\n\nThe overhead of the Instrumentation struct size should show regardless of\nwhether a plan actually includes a Nested Loop.\n\nThanks,\nLukas\n\n-- \nLukas Fittl\n", "msg_date": "Thu, 6 Jan 2022 19:33:28 -0800", "msg_from": "Lukas Fittl <lukas@fittl.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hi, hackers.\n\nI apply the new version of patch.\n\nJustin Pryzby <pryzby@telsasoft.com> wrote:\n> I'm curious to hear what you and others think of the refactoring.\nThank you so much. With your changes, the patch has become more \nunderstandable and readable.\n\n> It'd be nice if there's a good way to add a test case for verbose \n> output\n> involving parallel workers, but the output is unstable ...\nDone!\n\nLukas Fittl <lukas@fittl.com> wrote:\n> I've briefly thought whether this needs documentation (currently the \n> patch includes none),\n> but there does not appear to be a good place to add documentation about \n> this from a\n> quick glance, so it seems acceptable to leave this out given the lack \n> of more detailed\n> EXPLAIN documentation in general.\nYou're right! I added feature description to the patch header.\n\n> Whilst no specific bad cases were provided, I wonder if even a simple \n> pgbench with\n> auto_explain (and log_analyze=1) would be a way to test this?\nI wanted to measure overheads, but could't choose correct way. Thanks \nfor idea with auto_explain.\nI loaded it and made 10 requests of pgbench (number of clients: 1, of \nthreads: 1).\nI'm not sure I chose the right way to measure overhead, so any \nsuggestions are welcome.\nCurrent results are in file overhead_v0.txt.\n\nPlease feel free to share your suggestions and comments. 
Regards,\n\n-- \nEkaterina Sokolova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 03 Feb 2022 00:59:03 +0300", "msg_from": "Ekaterina Sokolova <e.sokolova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hi,\n\nOn Thu, Feb 03, 2022 at 12:59:03AM +0300, Ekaterina Sokolova wrote:\n>\n> I apply the new version of patch.\n>\n> I wanted to measure overheads, but could't choose correct way. Thanks for\n> idea with auto_explain.\n> I loaded it and made 10 requests of pgbench (number of clients: 1, of\n> threads: 1).\n> I'm not sure I chose the right way to measure overhead, so any suggestions\n> are welcome.\n> Current results are in file overhead_v0.txt.\n>\n> Please feel free to share your suggestions and comments. Regards,\n>\n\n> | master latency (ms) | master tps | | new latency (ms) | new tps\n> --------------------------------------------------------------------------\n> 1 | 2,462 | 406,190341 | | 4,485 | 222,950527\n> 2 | 3,877 | 257,89813 | | 4,141 | 241,493395\n> 3 | 3,789 | 263,935811 | | 2,837 | 352,522297\n> 4 | 3,53 | 283,310196 | | 5,510 | 181,488203\n> 5 | 3,413 | 292,997363 | | 6,475 | 154,432999\n> 6 | 3,757 | 266,148564 | | 4,073 | 245,507218\n> 7 | 3,752 | 266,560043 | | 3,901 | 256,331385\n> 8 | 4,389 | 227,847524 | | 4,658 | 214,675196\n> 9 | 4,341 | 230,372282 | | 4,220 | 236,983672\n> 10 | 3,893 | 256,891104 | | 7.059 | 141,667139\n> --------------------------------------------------------------------------\n> avg| 3,7203 | 275,215136 | | 4,03 | 224,8052031\n>\n>\n> master/new latency | 0,92315 |\n> master/new tps | 1,22424 |\n>\n> new/master latency | 1,08325 |\n> new/master tps | 0,81683 |\n\nThe overhead is quite significant (at least for OLTP-style workload).\n\nI think this should be done with a new InstrumentOption, like\nINSTRUMENT_LOOP_DETAILS or something like that, and set it where 
appropriate.\nOtherwise you will have to pay that overhead even if you won't use the new\nfields at all. It could be EXPLAIN (ANALYZE, VERBOSE OFF), but also simply\nusing pg_stat_statements which doesn't seem acceptable.\n\nOne problem is that some extensions (like pg_stat_statements) can rely on\nINSTRUMENT_ALL but may or may not care about those extra counters. Maybe we\nshould remove that alias and instead provide two (like INSTRUMENT_ALL_VERBOSE\nand INSTRUMENT_ALL_SOMETHINGELSE, I don't have any bright name right now) so\nthat authors can decide what they need instead of silently having such\nextension ruin the performance for no reason.\n\nAbout the implementation itself:\n\n+static void show_loop_info(Instrumentation *instrument, bool isworker,\n+ ExplainState *es);\n\nI think this should be done as a separate refactoring commit.\n\n+ /*\n+ * this is first loop\n+ *\n+ * We only initialize the min values. We don't need to bother with the\n+ * max, because those are 0 and the non-zero values will get updated a\n+ * couple lines later.\n+ */\n+ if (instr->nloops == 0)\n+ {\n+ instr->min_t = totaltime;\n+ instr->min_tuples = instr->tuplecount;\n+ }\n+\n+ if (instr->min_t > totaltime)\n+ instr->min_t = totaltime;\n+\n+ if (instr->max_t < totaltime)\n+ instr->max_t = totaltime;\n+\n instr->ntuples += instr->tuplecount;\n+\n+ if (instr->min_tuples > instr->tuplecount)\n+ instr->min_tuples = instr->tuplecount;\n+\n+ if (instr->max_tuples < instr->tuplecount)\n+ instr->max_tuples = instr->tuplecount;\n+\n instr->nloops += 1;\n\nWhy do you need to initialize min_t and min_tuples but not max_t and\nmax_tuples while both will initially be 0 and possibly updated afterwards?\n\nI think you should either entirely remove this if (instr->nloops == 0) part, or\nhandle some else block.\n\n\n", "msg_date": "Mon, 7 Mar 2022 13:08:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for 
Nested Loop" }, { "msg_contents": "This patch got some very positive feedback and some significant amount\nof work earlier in the release cycle. The feedback from Julien earlier\nthis month seemed pretty minor.\n\nEkaterina, is there any chance you'll be able to work on this this\nweek and do you think it has a chance of making this release? Julien,\ndo you think it's likely to be possible to polish for this release?\n\nOtherwise I guess we should move it to the next CF but it seems a\nshame given how much work has been done and how close it is.\n\nOn Mon, 7 Mar 2022 at 00:17, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Thu, Feb 03, 2022 at 12:59:03AM +0300, Ekaterina Sokolova wrote:\n> >\n> > I apply the new version of patch.\n> >\n> > I wanted to measure overheads, but could't choose correct way. Thanks for\n> > idea with auto_explain.\n> > I loaded it and made 10 requests of pgbench (number of clients: 1, of\n> > threads: 1).\n> > I'm not sure I chose the right way to measure overhead, so any suggestions\n> > are welcome.\n> > Current results are in file overhead_v0.txt.\n> >\n> > Please feel free to share your suggestions and comments. 
Regards,\n> >\n>\n> > | master latency (ms) | master tps | | new latency (ms) | new tps\n> > --------------------------------------------------------------------------\n> > 1 | 2,462 | 406,190341 | | 4,485 | 222,950527\n> > 2 | 3,877 | 257,89813 | | 4,141 | 241,493395\n> > 3 | 3,789 | 263,935811 | | 2,837 | 352,522297\n> > 4 | 3,53 | 283,310196 | | 5,510 | 181,488203\n> > 5 | 3,413 | 292,997363 | | 6,475 | 154,432999\n> > 6 | 3,757 | 266,148564 | | 4,073 | 245,507218\n> > 7 | 3,752 | 266,560043 | | 3,901 | 256,331385\n> > 8 | 4,389 | 227,847524 | | 4,658 | 214,675196\n> > 9 | 4,341 | 230,372282 | | 4,220 | 236,983672\n> > 10 | 3,893 | 256,891104 | | 7.059 | 141,667139\n> > --------------------------------------------------------------------------\n> > avg| 3,7203 | 275,215136 | | 4,03 | 224,8052031\n> >\n> >\n> > master/new latency | 0,92315 |\n> > master/new tps | 1,22424 |\n> >\n> > new/master latency | 1,08325 |\n> > new/master tps | 0,81683 |\n>\n> The overhead is quite significant (at least for OLTP-style workload).\n>\n> I think this should be done with a new InstrumentOption, like\n> INSTRUMENT_LOOP_DETAILS or something like that, and set it where appropriate.\n> Otherwise you will have to pay that overhead even if you won't use the new\n> fields at all. It could be EXPLAIN (ANALYZE, VERBOSE OFF), but also simply\n> using pg_stat_statements which doesn't seem acceptable.\n>\n> One problem is that some extensions (like pg_stat_statements) can rely on\n> INSTRUMENT_ALL but may or may not care about those extra counters. 
Maybe we\n> should remove that alias and instead provide two (like INSTRUMENT_ALL_VERBOSE\n> and INSTRUMENT_ALL_SOMETHINGELSE, I don't have any bright name right now) so\n> that authors can decide what they need instead of silently having such\n> extension ruin the performance for no reason.\n>\n> About the implementation itself:\n>\n> +static void show_loop_info(Instrumentation *instrument, bool isworker,\n> + ExplainState *es);\n>\n> I think this should be done as a separate refactoring commit.\n>\n> + /*\n> + * this is first loop\n> + *\n> + * We only initialize the min values. We don't need to bother with the\n> + * max, because those are 0 and the non-zero values will get updated a\n> + * couple lines later.\n> + */\n> + if (instr->nloops == 0)\n> + {\n> + instr->min_t = totaltime;\n> + instr->min_tuples = instr->tuplecount;\n> + }\n> +\n> + if (instr->min_t > totaltime)\n> + instr->min_t = totaltime;\n> +\n> + if (instr->max_t < totaltime)\n> + instr->max_t = totaltime;\n> +\n> instr->ntuples += instr->tuplecount;\n> +\n> + if (instr->min_tuples > instr->tuplecount)\n> + instr->min_tuples = instr->tuplecount;\n> +\n> + if (instr->max_tuples < instr->tuplecount)\n> + instr->max_tuples = instr->tuplecount;\n> +\n> instr->nloops += 1;\n>\n> Why do you need to initialize min_t and min_tuples but not max_t and\n> max_tuples while both will initially be 0 and possibly updated afterwards?\n>\n> I think you should either entirely remove this if (instr->nloops == 0) part, or\n> handle some else block.\n>\n>\n\n\n-- \ngreg\n\n\n", "msg_date": "Mon, 28 Mar 2022 15:09:12 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "> > +static void show_loop_info(Instrumentation *instrument, bool isworker,\n> > + ExplainState *es);\n> >\n> > I think this should be done as a separate refactoring commit.\n\nRight - the 0001 patch I sent seems independently 
beneficial, and makes the\nchanges in 0002 more apparent. My 0001 could also be applied after the feature\nfreeze and before branching for v16..\n\n\n", "msg_date": "Mon, 28 Mar 2022 18:28:32 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 28, 2022 at 03:09:12PM -0400, Greg Stark wrote:\n> This patch got some very positive feedback and some significant amount\n> of work earlier in the release cycle. The feedback from Julien earlier\n> this month seemed pretty minor.\n> \n> Ekaterina, is there any chance you'll be able to work on this this\n> week and do you think it has a chance of making this release? Julien,\n> do you think it's likely to be possible to polish for this release?\n\nMost of the comments I have are easy to fix. But I think that the real problem\nis the significant overhead shown by Ekaterina that for now would apply even if\nyou don't consume the new stats, for instance if you have pg_stat_statements.\nAnd I'm still not sure of what is the best way to avoid that.\n\n\n", "msg_date": "Tue, 29 Mar 2022 10:53:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" }, { "msg_contents": "This message lost track of the email headers so CFBOT isn't processing the new\npatches. Which I'm attempting to remedy now.\nhttps://www.postgresql.org/message-id/flat/ae576cac3f451d318374f2a2e494aab1@postgrespro.ru\n\nOn Fri, Apr 01, 2022 at 11:46:47PM +0300, Ekaterina Sokolova wrote:\n> Hi, hackers. Thank you for your attention to this topic.\n> \n> Julien Rouhaud wrote:\n> > +static void show_loop_info(Instrumentation *instrument, bool isworker,\n> > + ExplainState *es);\n> > \n> > I think this should be done as a separate refactoring commit.\n> Sure. I divided the patch. Now Justin's refactor commit is separated. 
Also I\n> actualized it a bit.\n> \n> > Most of the comments I have are easy to fix. But I think that the real\n> > problem\n> > is the significant overhead shown by Ekaterina that for now would apply\n> > even if\n> > you don't consume the new stats, for instance if you have\n> > pg_stat_statements.\n> > And I'm still not sure of what is the best way to avoid that.\n> I took your advice about InstrumentOption. Now INSTRUMENT_EXTRA exists.\n> So currently it's no overheads during basic load. Operations using\n> INSTRUMENT_ALL contain overheads (because of INSTRUMENT_EXTRA is a part of\n> INSTRUMENT_ALL), but they are much less significant than before. I apply new\n> overhead statistics collected by pgbench with auto _explain enabled.\n> \n> > Why do you need to initialize min_t and min_tuples but not max_t and\n> > max_tuples while both will initially be 0 and possibly updated\n> > afterwards?\n> We need this initialization for min values so comment about it located above\n> the block of code with initialization.\n> \n> I am convinced that the latest changes have affected the patch in a positive\n> way. I'll be pleased to hear your thoughts on this.", "msg_date": "Sat, 2 Apr 2022 07:38:26 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add extra statistics to explain for Nested Loop" } ]
[ { "msg_contents": "Hi\n\nIn /src/backend/executor/nodeAgg.c\n\nI found the following comment still use work mem,\nSince hash_mem has been introduced, Is it more accurate to use hash_mem here ?\n\n@@ -1827,7 +1827,7 @@ hash_agg_set_limits(double hashentrysize, double input_groups, int used_bits,\n \t/*\n \t * Don't set the limit below 3/4 of hash_mem. In that case, we are at the\n \t * minimum number of partitions, so we aren't going to dramatically exceed\n-\t * work mem anyway.\n+\t * hash_mem anyway.\n\nBest regards,\nhouzj", "msg_date": "Fri, 16 Oct 2020 09:03:52 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "Possible typo in nodeAgg.c" }, { "msg_contents": "On Fri, Oct 16, 2020 at 09:03:52AM +0000, Hou, Zhijie wrote:\n> Hi\n> \n> In /src/backend/executor/nodeAgg.c\n> \n> I found the following comment still use work mem,\n> Since hash_mem has been introduced, Is it more accurate to use hash_mem here ?\n> \n> @@ -1827,7 +1827,7 @@ hash_agg_set_limits(double hashentrysize, double input_groups, int used_bits,\n> \t/*\n> \t * Don't set the limit below 3/4 of hash_mem. In that case, we are at the\n> \t * minimum number of partitions, so we aren't going to dramatically exceed\n> -\t * work mem anyway.\n> +\t * hash_mem anyway.\n\nCan someone comment on this? Is the text change correct?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 2 Nov 2023 20:49:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Possible typo in nodeAgg.c" }, { "msg_contents": "On Fri, 3 Nov 2023 at 13:49, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Oct 16, 2020 at 09:03:52AM +0000, Hou, Zhijie wrote:\n> > /*\n> > * Don't set the limit below 3/4 of hash_mem. 
In that case, we are at the\n> > * minimum number of partitions, so we aren't going to dramatically exceed\n> > - * work mem anyway.\n> > + * hash_mem anyway.\n>\n> Can someone comment on this? Is the text change correct?\n\n\"work mem\" is incorrect. I'd prefer it if we didn't talk about\nhash_mem as if it were a thing. It's work_mem * hash_mem_multiplier.\nBecause of the underscore, using \"hash_mem\" to mean this makes it look\nlike we're talking about a variable by that name. Maybe it would be\nbetter to refer to the variable name that's used to store the result\nof get_hash_memory_limit(), i.e. hash_mem_limit. \"the limit\" should\nlikely use \"*mem_limit\" instead as there are multiple limits\nmentioned.\n\nIt would also be better if this comment explained what's special about\n4 * partition_mem. It seems to have nothing to do with the 3/4\nmentioned in the comment.\n\nDavid\n\n\n", "msg_date": "Fri, 3 Nov 2023 14:30:07 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible typo in nodeAgg.c" } ]
[ { "msg_contents": "In px_crypt_md5() we have this section, with the second assignment to err being\nunchecked:\n\n /* */\n err = px_find_digest(\"md5\", &ctx);\n if (err)\n return NULL;\n err = px_find_digest(\"md5\", &ctx1);\n\nEven though we know that the digest algorithm exists when we reach the second\ncall, we must check the returnvalue from each call to px_find_digest to handle\nallocation errors. Depending on which lib is backing pgcrypto, px_find_digest\nmay perform resource allocation which can fail on the subsequent call. It does\nfall in the not-terrible-likely-to-happen category but there is a non-zero risk\nwhich would lead to using a broken context. The attached checks the err\nreturnvalue and exits in case it indicates an error.\n\ncheers ./daniel", "msg_date": "Fri, 16 Oct 2020 14:43:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Potential use of uninitialized context in pgcrypto" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Even though we know that the digest algorithm exists when we reach the second\n> call, we must check the returnvalue from each call to px_find_digest to handle\n> allocation errors.\n\nAgreed, it's a bug. Will push in a bit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Oct 2020 11:47:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential use of uninitialized context in pgcrypto" } ]
[ { "msg_contents": "It occurs to me that statistics collector stats such as\npg_statio_*_tables.idx_blks_hit are highly misleading in practice\nbecause they fail to take account of the difference between internal\npages and leaf pages in B-Tree indexes. These two types of pages are\nin fundamentally different categories, and I think that failing to\nrecognize that at the level of these system views makes them much less\nuseful. Somebody should probably write a patch that makes this\ndifference clear from the system views. Possibly by using some\ngeneralized notion of \"record\" pages instead of leaf pages, and\n\"metadata\" pages instead of internal pages. That would even work with\nhash indexes, I think.\n\nConsider the following example, which is based on a standard nbtree\nindex, but could work in almost the same way with other index access\nmethods:\n\nWe have a pgbench_accounts pkey after initialization by pgbench at\nscale 1500. It has 409,837 leaf pages and 1,451 internal pages,\nmeaning that about one third of one percent of all pages in the index\nare internal pages. Occasionally, with indexes on large text strings\nwe might notice that as many as 1% of all index pages are internal\npages, but that's very much on the high side. Generally speaking,\nwe're virtually guaranteed to have *all* internal pages in\nshared_buffers once a steady state has been reached. Once the cache\nwarms up, point lookups (like the queries pgbench performs) will only\nhave to access one leaf page at most, which amounts to only one I/O at\nmost. (This asymmetry is the main reason why B-Trees are generally\nvery effective when buffered in a buffer cache.)\n\nIf we run the pgbench queries against this database/example index\nwe'll find that we have to access 4 index pages per query execution --\nthe root, two additional internal pages, plus a leaf page. 
Based on\nthe reasonable assumptions I'm making, 3 out of 4 of those pages will\nbe hits when steady state is reached with pgbench's SELECT-only\nworkload, regardless of how large shared_buffers is or how bloated the\nindex is (we only need 1451 buffers for that, and those are bound to\nget hot quickly).\n\nThe overall effect is idx_blks_hit changes over time in a way that\nmakes no sense -- even to an expert. Let's say we start with this\nentire 3213 MB pgbench index in shared_buffers. We should only get\nincrements in idx_blks_hit, never increments in idx_blks_read - that\nmuch makes sense. If we then iteratively shrink shared_buffers (or\nequivalently, make the index grow without adding a new level), the\nproportion of page accesses that increment idx_blks_read (rather than\nincrementing idx_blks_hit) goes up roughly linearly as misses increase\nlinearly - which also makes sense. But here is the silly part: we\ncannot really have a hit rate of less than 75% if you compare\nidx_blks_hit to idx_blks_read, unless and until we can barely even fit\n1% of the index in memory (at which point it's hard to distinguish\nfrom noise). So if we naively consume the current view we'll see a hit\nrate that starts at 100%, and very slowly shrinks to 75%, which is\nwhere we bottom out (more or less, roughly speaking). This behavior\nseems pretty hard to defend to me.\n\nIf somebody fixed this by putting internal pages into their own bucket\nin the system view, then motivated users would quickly learn that\ninternal page stats aren't really useful -- they are only included for\ncompleteness. They're such a small contributor to the overall hit rate\nthat they can simply be ignored completely. The thing that users ought\nto focus on is leaf page hit rate. Now index hit rate (by which I mean\nleaf page hit rate) actually makes sense. 
Note that Heroku promoted\nsimple heuristics like this for many years.\n\nI suppose that a change like this could end up affecting other things,\nsuch as EXPLAIN ANALYZE statistics. OTOH we only break out index pages\nseparately for bitmap scans at the moment, so maybe it could be fairly\nwell targeted. And, maybe this is unappealing given the current\nstatistics collector limitations. I'm not volunteering to work on it\nright now, but it would be nice to fix this. Please don't wait for me\nto do it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 16 Oct 2020 15:35:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Stats collector's idx_blks_hit value is highly misleading in practice" }, { "msg_contents": "On Fri, Oct 16, 2020 at 03:35:51PM -0700, Peter Geoghegan wrote:\n>It occurs to me that statistics collector stats such as\n>pg_statio_*_tables.idx_blks_hit are highly misleading in practice\n>because they fail to take account of the difference between internal\n>pages and leaf pages in B-Tree indexes. These two types of pages are\n>in fundamentally different categories, and I think that failing to\n>recognize that at the level of these system views makes them much less\n>useful. Somebody should probably write a patch that makes this\n>difference clear from the system views. Possibly by using some\n>generalized notion of \"record\" pages instead of leaf pages, and\n>\"metadata\" pages instead of internal pages. That would even work with\n>hash indexes, I think.\n>\n>Consider the following example, which is based on a standard nbtree\n>index, but could work in almost the same way with other index access\n>methods:\n>\n>We have a pgbench_accounts pkey after initialization by pgbench at\n>scale 1500. It has 409,837 leaf pages and 1,451 internal pages,\n>meaning that about one third of one percent of all pages in the index\n>are internal pages. 
Occasionally, with indexes on large text strings\n>we might notice that as many as 1% of all index pages are internal\n>pages, but that's very much on the high side. Generally speaking,\n>we're virtually guaranteed to have *all* internal pages in\n>shared_buffers once a steady state has been reached. Once the cache\n>warms up, point lookups (like the queries pgbench performs) will only\n>have to access one leaf page at most, which amounts to only one I/O at\n>most. (This asymmetry is the main reason why B-Trees are generally\n>very effective when buffered in a buffer cache.)\n>\n>If we run the pgbench queries against this database/example index\n>we'll find that we have to access 4 index pages per query execution --\n>the root, two additional internal pages, plus a leaf page. Based on\n>the reasonable assumptions I'm making, 3 out of 4 of those pages will\n>be hits when steady state is reached with pgbench's SELECT-only\n>workload, regardless of how large shared_buffers is or how bloated the\n>index is (we only need 1451 buffers for that, and those are bound to\n>get hot quickly).\n>\n>The overall effect is idx_blks_hit changes over time in a way that\n>makes no sense -- even to an expert. Let's say we start with this\n>entire 3213 MB pgbench index in shared_buffers. We should only get\n>increments in idx_blks_hit, never increments in idx_blks_read - that\n>much makes sense. If we then iteratively shrink shared_buffers (or\n>equivalently, make the index grow without adding a new level), the\n>proportion of page accesses that increment idx_blks_read (rather than\n>incrementing idx_blks_hit) goes up roughly linearly as misses increase\n>linearly - which also makes sense. But here is the silly part: we\n>cannot really have a hit rate of less than 75% if you compare\n>idx_blks_hit to idx_blks_read, unless and until we can barely even fit\n>1% of the index in memory (at which point it's hard to distinguish\n>from noise). 
So if we naively consume the current view we'll see a hit\n>rate that starts at 100%, and very slowly shrinks to 75%, which is\n>where we bottom out (more or less, roughly speaking). This behavior\n>seems pretty hard to defend to me.\n>\n\nYeah. The behavior is technically correct, but it's not very useful for\npractical purposes. And most people don't even realize it behaves like\nthis :-( It's possible to compensate for this effect and estimate the\nactually \"interesting\" hit rate, but if we could have it directly that\nwould be great.\n\n>If somebody fixed this by putting internal pages into their own bucket\n>in the system view, then motivated users would quickly learn that\n>internal page stats aren't really useful -- they are only included for\n>completeness. They're such a small contributor to the overall hit rate\n>that they can simply be ignored completely. The thing that users ought\n>to focus on is leaf page hit rate. Now index hit rate (by which I mean\n>leaf page hit rate) actually makes sense. Note that Heroku promoted\n>simple heuristics like this for many years.\n>\n>I suppose that a change like this could end up affecting other things,\n>such as EXPLAIN ANALYZE statistics. OTOH we only break out index pages\n>separately for bitmap scans at the moment, so maybe it could be fairly\n>well targeted. And, maybe this is unappealing given the current\n>statistics collector limitations. I'm not volunteering to work on it\n>right now, but it would be nice to fix this. Please don't wait for me\n>to do it.\n>\n\nIt seems to me this should not be a particularly difficult patch in\nprinciple, so suitable for new contributors. The main challenge would be\npassing information about what page we're dealing with (internal/leaf)\nto the place actually calling pgstat_count_buffer_(read|hit). That\nhappens in ReadBufferExtended, which just has no idea what page it's\ndealing with. 
Not sure how to do that cleanly ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 31 Oct 2020 02:46:17 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Stats collector's idx_blks_hit value is highly misleading in\n practice" }, { "msg_contents": "On Fri, Oct 30, 2020 at 6:46 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Yeah. The behavior is technically correct, but it's not very useful for\n> practical purposes. And most people don't even realize it behaves like\n> this :-( It's possible to compensate for this effect and estimate the\n> actually \"interesting\" hit rate, but if we could have it directly that\n> would be great.\n\nIt's important that the information we provide in system views (and\nother instrumentation) reflect reality, even when the underlying\nmechanisms are not well understood by most users. DBAs often observe\ncorrelations and arrive at useful conclusions without truly\nunderstanding what's happening. Individual hackers have occasionally\nexpressed skepticism of exposing the internals of the system through\ninstrumentation; they object on the grounds that users are unlikely to\nunderstand what they see anyway. It seems to me that this completely\nmisses the point. You don't necessarily have to truly understand\nwhat's going on to have mechanical sympathy for the system. You don't\nneed to be a physicist to do folk physics.\n\nTo my mind the best example of this is wait events, which first\nappeared in proprietary database systems. Wait events expose\ninformation about mechanisms that couldn't possibly be fully\nunderstood by the end consumer. Because technically the details were\ntrade secrets. 
That didn't stop them from being very useful in\npractice.\n\n> It seems to me this should not be a particularly difficult patch in\n> principle, so suitable for new contributors. The main challenge would be\n> passing information about what page we're dealing with (internal/leaf)\n> to the place actually calling pgstat_count_buffer_(read|hit). That\n> happens in ReadBufferExtended, which just has no idea what page it's\n> dealing with. Not sure how to do that cleanly ...\n\nIt would be a bit messy to pass down a flag like that, but it could be\ndone. I think the idea of generalized definitions of internal pages\nand leaf pages (\"metadata pages and record pages\") could work well,\nbut would require a little thought in some cases. I'm thinking of GIN.\nI doubt it would really matter what the final determination is about\n(say) which particular generalized page bucket GIN pending list pages\nget placed in. It will be a little arbitrary in a few corner cases,\nbut it hardly matters at all. Right now we have something that's\ntechnically correct but also practically useless.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 31 Oct 2020 10:16:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Stats collector's idx_blks_hit value is highly misleading in\n practice" }, { "msg_contents": "On Fri, Oct 30, 2020 at 9:46 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n>\n> On Fri, Oct 16, 2020 at 03:35:51PM -0700, Peter Geoghegan wrote:\n\n> >I suppose that a change like this could end up affecting other things,\n> >such as EXPLAIN ANALYZE statistics. OTOH we only break out index pages\n> >separately for bitmap scans at the moment, so maybe it could be fairly\n> >well targeted. And, maybe this is unappealing given the current\n> >statistics collector limitations. I'm not volunteering to work on it\n> >right now, but it would be nice to fix this. 
Please don't wait for me\n> >to do it.\n> >\n>\n> It seems to me this should not be a particularly difficult patch in\n> principle, so suitable for new contributors. The main challenge would be\n> passing information about what page we're dealing with (internal/leaf)\n> to the place actually calling pgstat_count_buffer_(read|hit). That\n> happens in ReadBufferExtended, which just has no idea what page it's\n> dealing with. Not sure how to do that cleanly ...\n\nIs this a TODO candidate? What would be a succinct title for it?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Thu, 3 Feb 2022 19:08:22 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Stats collector's idx_blks_hit value is highly misleading in\n practice" }, { "msg_contents": "On Thu, Feb 3, 2022 at 7:08 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> Is this a TODO candidate? What would be a succinct title for it?\n\nI definitely think that it's worth working on. I suppose it follows\nthat it should go on the TODO list.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 4 Feb 2022 11:18:43 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Stats collector's idx_blks_hit value is highly misleading in\n practice" }, { "msg_contents": "On Fri, Feb 4, 2022 at 11:19 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Feb 3, 2022 at 7:08 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> > Is this a TODO candidate? What would be a succinct title for it?\n>\n> I definitely think that it's worth working on. 
I suppose it follows\n> that it should go on the TODO list.\n\nAdded TODO item \"Teach stats collector to differentiate between\ninternal and leaf index pages\"\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 11:39:12 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Stats collector's idx_blks_hit value is highly misleading in\n practice" }, { "msg_contents": "Hello,\n\nI would like to get some feedback on that task.\n\n> pg_statio_*_tables.idx_blks_hit are highly misleading in practice\n> because they fail to take account of the difference between internal\n> pages and leaf pages in B-Tree indexes.\n\nI see it is still the case, so the issue is relevant, isn't it ?\n\n> The main challenge would be\n> passing information about what page we're dealing with (internal/leaf)\n> to the place actually calling pgstat_count_buffer_(read|hit). That\n> happens in ReadBufferExtended, which just has no idea what page it's\n> dealing with. 
Not sure how to do that cleanly ...\n\nI do not immediately see the way to pass the information in a\ncompletely clean manner.\n\nEither\n(1) ReadBufferExtended needs to know the type of an index page (leaf/internal)\nor\n(2) caller of ReadBufferExtended that can check the page type needs to learn\nif there was a hit and call pgstat_count_buffer_(read|hit) accordingly.\n\nIn either case necessary code changes seem quite invasive to me.\nI have attached a code snippet to illustrate the second idea.\n\nRegards,\nSergey", "msg_date": "Wed, 29 Jun 2022 22:42:44 +0200", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Stats collector's idx_blks_hit value is highly misleading in\n practice" }, { "msg_contents": "Hi again,\n\nHaving played with the task for a little while, I am no longer sure\nit completely justifies the effort involved.\nThe reason being the task requires modifying the buffer pool in one\nway or the other, which implies\n(a) significant effort on performance testing and\n(b) changes in the buffer pool interfaces that community might not\nwelcome just to get 1-2 extra statistics numbers.\n\nAny ideas ?\n\nRegards,\nSergey\n\n\n", "msg_date": "Thu, 11 Aug 2022 07:42:04 +0200", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Stats collector's idx_blks_hit value is highly misleading in\n practice" } ]
[ { "msg_contents": "Forgetting to assign the return value of list APIs such as lappend() is \na perennial favorite. The compiler can help point out such mistakes. \nGCC has an attribute warn_unused_results. Also C++ has standardized \nthis under the name \"nodiscard\", and C has a proposal to do the same \n[0]. In my patch I call the symbol pg_nodiscard, so that perhaps in a \ndistant future one only has to do s/pg_nodiscard/nodiscard/ or something \nsimilar. Also, the name is short enough that it doesn't mess up the \nformatting of function declarations too much.\n\nI have added pg_nodiscard decorations to all the list functions where I \nfound it sensible, as well as repalloc() for good measure, since \nrealloc() is usually mentioned as an example where this function \nattribute is useful.\n\nI have found two places in the existing code where this creates \nwarnings. Both places are correct as is, but make assumptions about the \ninternals of the list APIs and it seemed better just to fix the warning \nthan to write a treatise about why it's correct as is.\n\n\n[0]: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2051.pdf\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 17 Oct 2020 08:57:51 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "warn_unused_results" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Forgetting to assign the return value of list APIs such as lappend() is \n> a perennial favorite. The compiler can help point out such mistakes. \n> GCC has an attribute warn_unused_results. Also C++ has standardized \n> this under the name \"nodiscard\", and C has a proposal to do the same \n> [0]. In my patch I call the symbol pg_nodiscard, so that perhaps in a \n> distant future one only has to do s/pg_nodiscard/nodiscard/ or something \n> similar. 
Also, the name is short enough that it doesn't mess up the \n> formatting of function declarations too much.\n\n+1 in principle (I've not read the patch in detail); but I wonder what\npgindent does with these added keywords.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Oct 2020 11:58:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warn_unused_results" }, { "msg_contents": "On 2020-10-17 17:58, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Forgetting to assign the return value of list APIs such as lappend() is\n>> a perennial favorite. The compiler can help point out such mistakes.\n>> GCC has an attribute warn_unused_results. Also C++ has standardized\n>> this under the name \"nodiscard\", and C has a proposal to do the same\n>> [0]. In my patch I call the symbol pg_nodiscard, so that perhaps in a\n>> distant future one only has to do s/pg_nodiscard/nodiscard/ or something\n>> similar. Also, the name is short enough that it doesn't mess up the\n>> formatting of function declarations too much.\n> \n> +1 in principle (I've not read the patch in detail); but I wonder what\n> pgindent does with these added keywords.\n\npgindent doesn't seem to want to change anything about the patched files \nas I had sent them.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 23 Oct 2020 10:14:12 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: warn_unused_results" }, { "msg_contents": "On Sat, Oct 17, 2020 at 08:57:51AM +0200, Peter Eisentraut wrote:\n> Forgetting to assign the return value of list APIs such as lappend() is a\n> perennial favorite. The compiler can help point out such mistakes. GCC has\n> an attribute warn_unused_results. 
Also C++ has standardized this under the\n> name \"nodiscard\", and C has a proposal to do the same [0]. In my patch I\n> call the symbol pg_nodiscard, so that perhaps in a distant future one only\n> has to do s/pg_nodiscard/nodiscard/ or something similar. Also, the name is\n> short enough that it doesn't mess up the formatting of function declarations\n> too much.\n\nI have seen as well this stuff being a source of confusion in the\npast.\n\n> I have added pg_nodiscard decorations to all the list functions where I\n> found it sensible, as well as repalloc() for good measure, since realloc()\n> is usually mentioned as an example where this function attribute is useful.\n\n+#ifdef __GNUC__\n+#define pg_nodiscard __attribute__((warn_unused_result))\n+#else\n+#define pg_nodiscard\n+#endif\n\nThis is accepted by clang, and MSVC has visibly an equivalent for\nthat, as of VS 2012:\n#elif defined(_MSC_VER) && (_MSC_VER >= 1700)\n#define pg_nodiscard _Check_return_\nWe don't care about the 1700 condition as we support only >= 1800 on\nHEAD, and in this case the addition of pg_nodiscard would be required\non the definition and the declaration. Should it be added? It is\nmuch more invasive than the gcc/clang equivalent though.. \n\n> I have found two places in the existing code where this creates warnings.\n> Both places are correct as is, but make assumptions about the internals of\n> the list APIs and it seemed better just to fix the warning than to write a\n> treatise about why it's correct as is.\n\nFWIW, I saw an extra case with parsePGArray() today. I am not sure\nabout the addition of repalloc(), as it is quite obvious that one has\nto use its result. 
Lists are fine, these are proper to PG internals\nand beginners tend to be easily confused in the way to use them.\n--\nMichael", "msg_date": "Mon, 9 Nov 2020 15:56:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: warn_unused_results" }, { "msg_contents": "On 2020-11-09 07:56, Michael Paquier wrote:\n> This is accepted by clang, and MSVC has visibly an equivalent for\n> that, as of VS 2012:\n> #elif defined(_MSC_VER) && (_MSC_VER >= 1700)\n> #define pg_nodiscard _Check_return_\n> We don't care about the 1700 condition as we support only >= 1800 on\n> HEAD, and in this case the addition of pg_nodiscard would be required\n> on the definition and the declaration. Should it be added? It is\n> much more invasive than the gcc/clang equivalent though..\n\nAFAICT from the documentation, this only applies for special \"analyze\" \nruns, not as a normal compiler warning. Do we have any support for \nanalyze runs with MSVC?\n\n> FWIW, I saw an extra case with parsePGArray() today.\n\nAFAICT, that's more in the category of checking for error returns, which \nis similar to the \"fortify\" options that some environments have for \nchecking the return of fwrite() etc.\n\n> I am not sure\n> about the addition of repalloc(), as it is quite obvious that one has\n> to use its result. 
Lists are fine, these are proper to PG internals\n> and beginners tend to be easily confused in the way to use them.\n\nrealloc() is listed in the GCC documentation as the reason this option \nexists, and glibc tags its realloc() with this attribute, so doing the \nsame for repalloc() seems sensible.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Mon, 9 Nov 2020 08:23:31 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: warn_unused_results" }, { "msg_contents": "On Mon, Nov 09, 2020 at 08:23:31AM +0100, Peter Eisentraut wrote:\n> On 2020-11-09 07:56, Michael Paquier wrote:\n>> This is accepted by clang, and MSVC has visibly an equivalent for\n>> that, as of VS 2012:\n>> #elif defined(_MSC_VER) && (_MSC_VER >= 1700)\n>> #define pg_nodiscard _Check_return_\n>> We don't care about the 1700 condition as we support only >= 1800 on\n>> HEAD, and in this case the addition of pg_nodiscard would be required\n>> on the definition and the declaration. Should it be added? It is\n>> much more invasive than the gcc/clang equivalent though..\n> \n> AFAICT from the documentation, this only applies for special \"analyze\" runs,\n> not as a normal compiler warning. Do we have any support for analyze runs\n> with MSVC?\n\nYou can run them by passing down /p:RunCodeAnalysis=true to MSBFLAGS\nwhen calling the build script. There are more options like\n/p:CodeAnalysisRuleSet to define a set of rules. By default the\noutput is rather noisy though now that I look at it. And having to\nadd the flag to the definition and the declaration is annoying, so\nwhat you are doing would be enough without MSVC.\n\n>> I am not sure\n>> about the addition of repalloc(), as it is quite obvious that one has\n>> to use its result. 
Lists are fine, these are proper to PG internals\n>> and beginners tend to be easily confused in the way to use them.\n> \n> realloc() is listed in the GCC documentation as the reason this option\n> exists, and glibc tags its realloc() with this attribute, so doing the same\n> for repalloc() seems sensible.\n\nGood point.\n--\nMichael", "msg_date": "Tue, 10 Nov 2020 12:34:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: warn_unused_results" }, { "msg_contents": "On 2020-11-10 04:34, Michael Paquier wrote:\n>>> I am not sure\n>>> about the addition of repalloc(), as it is quite obvious that one has\n>>> to use its result. Lists are fine, these are proper to PG internals\n>>> and beginners tend to be easily confused in the way to use them.\n>> realloc() is listed in the GCC documentation as the reason this option\n>> exists, and glibc tags its realloc() with this attribute, so doing the same\n>> for repalloc() seems sensible.\n> Good point.\n\ncommitted\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Wed, 11 Nov 2020 11:07:59 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: warn_unused_results" } ]
[ { "msg_contents": "While reviewing what became commit fe4d022, I was surprised at the sequence of\nrelfilenode values that RelationInitPhysicalAddr() computed for pg_class,\nduring ParallelWorkerMain(), when running the last command of this recipe:\n\n begin;\n cluster pg_class using pg_class_oid_index;\n set force_parallel_mode = 'regress';\n values (1);\n\nThere's $OLD_NODE (relfilenode in the committed relation map) and $NEW_NODE\n(relfilenode in this transaction's active_local_updates). The worker performs\nRelationInitPhysicalAddr(pg_class) four times:\n\n1) $OLD_NODE in BackgroundWorkerInitializeConnectionByOid().\n2) $OLD_NODE in RelationCacheInvalidate() directly.\n3) $OLD_NODE in RelationReloadNailed(), indirectly via RelationCacheInvalidate().\n4) $NEW_NODE indirectly as part of the executor running the query.\n\nI did expect $OLD_NODE in (1), since ParallelWorkerMain() calls\nBackgroundWorkerInitializeConnectionByOid() before\nStartParallelWorkerTransaction(). I expected $NEW_NODE in (2) and (3); that\ndidn't happen, because ParallelWorkerMain() calls InvalidateSystemCaches()\nbefore RestoreRelationMap(). Let's move InvalidateSystemCaches() later.\nInvalidation should follow any worker initialization step that changes the\nresults of relcache validation; otherwise, we'd need to ensure the\nInvalidateSystemCaches() will not validate any relcache entry. Invalidation\nshould precede any step that reads from a cache; otherwise, we'd need to redo\nthat step after inval. (Currently, no step reads from a cache.) Many steps,\ne.g. AttachSerializableXact(), have no effect on relcache validation, so it's\narbitrary whether they happen before or after inval. I'm putting inval as\nlate as possible, because I think it's easier to confirm that a step doesn't\nread from a cache than to confirm that a step doesn't affect relcache\nvalidation. 
An also-reasonable alternative would be to move inval and its\nprerequisites as early as possible.\n\nFor reasons described in the attached commit message, this doesn't have\nuser-visible consequences today. Innocent-looking relcache.c changes might\nupheave that, so I'm proposing this on robustness grounds. No need to\nback-patch.", "msg_date": "Sat, 17 Oct 2020 04:53:06 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Timing of relcache inval at parallel worker init" }, { "msg_contents": "At Sat, 17 Oct 2020 04:53:06 -0700, Noah Misch <noah@leadboat.com> wrote in \n> While reviewing what became commit fe4d022, I was surprised at the sequence of\n> relfilenode values that RelationInitPhysicalAddr() computed for pg_class,\n> during ParallelWorkerMain(), when running the last command of this recipe:\n> \n> begin;\n> cluster pg_class using pg_class_oid_index;\n> set force_parallel_mode = 'regress';\n> values (1);\n> \n> There's $OLD_NODE (relfilenode in the committed relation map) and $NEW_NODE\n> (relfilenode in this transaction's active_local_updates). The worker performs\n> RelationInitPhysicalAddr(pg_class) four times:\n> \n> 1) $OLD_NODE in BackgroundWorkerInitializeConnectionByOid().\n> 2) $OLD_NODE in RelationCacheInvalidate() directly.\n> 3) $OLD_NODE in RelationReloadNailed(), indirectly via RelationCacheInvalidate().\n> 4) $NEW_NODE indirectly as part of the executor running the query.\n> \n> I did expect $OLD_NODE in (1), since ParallelWorkerMain() calls\n> BackgroundWorkerInitializeConnectionByOid() before\n> StartParallelWorkerTransaction(). I expected $NEW_NODE in (2) and (3); that\n> didn't happen, because ParallelWorkerMain() calls InvalidateSystemCaches()\n> before RestoreRelationMap(). 
Let's move InvalidateSystemCaches() later.\n> Invalidation should follow any worker initialization step that changes the\n> results of relcache validation; otherwise, we'd need to ensure the\n> InvalidateSystemCaches() will not validate any relcache entry. Invalidation\n> should precede any step that reads from a cache; otherwise, we'd need to redo\n> that step after inval. (Currently, no step reads from a cache.) Many steps,\n> e.g. AttachSerializableXact(), have no effect on relcache validation, so it's\n> arbitrary whether they happen before or after inval. I'm putting inval as\n\nI agree with both of these points.\n\n> late as possible, because I think it's easier to confirm that a step doesn't\n> read from a cache than to confirm that a step doesn't affect relcache\n> validation. An also-reasonable alternative would be to move inval and its\n> prerequisites as early as possible.\n\nThe steps this patch moves to before the invalidation seem to\nbe at a lower layer than snapshots, so this ordering seems reasonable.\n\n> For reasons described in the attached commit message, this doesn't have\n> user-visible consequences today. Innocent-looking relcache.c changes might\n> upheave that, so I'm proposing this on robustness grounds. No need to\n> back-patch.\n\nI'm not sure about the necessity but lower-to-upper initialization\norder is neat. 
I agree about not back-patching.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 20 Oct 2020 17:35:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Timing of relcache inval at parallel worker init" }, { "msg_contents": "On Sat, Oct 17, 2020 at 7:53 AM Noah Misch <noah@leadboat.com> wrote:\n> Let's move InvalidateSystemCaches() later.\n> Invalidation should follow any worker initialization step that changes the\n> results of relcache validation; otherwise, we'd need to ensure the\n> InvalidateSystemCaches() will not validate any relcache entry.\n\nThe thinking behind the current placement was this: just before the\ncall to InvalidateSystemCaches(), we restore the active and\ntransaction snapshots. I think that we must now flush the caches\nbefore anyone does any more lookups; otherwise, they might get wrong\nanswers. So, putting this code later makes the assumption that no such\nlookups happen meanwhile. That feels a little risky to me; at the very\nleast, it should be clearly spelled out in the comments that no system\ncache lookups can happen in the functions we call in the interim.\nWould it be obvious to a future developer that e.g.\nRestoreEnumBlacklist cannot perform any syscache lookups? 
It doesn't\nseem so to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 21 Oct 2020 11:31:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Timing of relcache inval at parallel worker init" }, { "msg_contents": "On Wed, Oct 21, 2020 at 11:31:54AM -0400, Robert Haas wrote:\n> On Sat, Oct 17, 2020 at 7:53 AM Noah Misch <noah@leadboat.com> wrote:\n> > Let's move InvalidateSystemCaches() later.\n> > Invalidation should follow any worker initialization step that changes the\n> > results of relcache validation; otherwise, we'd need to ensure the\n> > InvalidateSystemCaches() will not validate any relcache entry.\n> \n> The thinking behind the current placement was this: just before the\n> call to InvalidateSystemCaches(), we restore the active and\n> transaction snapshots. I think that we must now flush the caches\n> before anyone does any more lookups; otherwise, they might get wrong\n> answers. So, putting this code later makes the assumption that no such\n> lookups happen meanwhile. That feels a little risky to me; at the very\n> least, it should be clearly spelled out in the comments that no system\n> cache lookups can happen in the functions we call in the interim.\n\nMy comment edits did attempt that. I could enlarge those comments, at the\nrisk of putting undue weight on the topic. 
One could also arrange for an\nassertion failure if something takes a snapshot in the unwelcome period,\nbetween StartParallelWorkerTransaction() and InvalidateSystemCaches().\nLooking closer, we have live bugs from lookups during RestoreGUCState():\n\n$ echo \"begin; create user alice; set session authorization alice; set force_parallel_mode = regress; select 1\" | psql -X\nBEGIN\nCREATE ROLE\nSET\nSET\nERROR: role \"alice\" does not exist\nCONTEXT: while setting parameter \"session_authorization\" to \"alice\"\n\n$ echo \"begin; create text search configuration foo (copy=english); set default_text_search_config = foo; set force_parallel_mode = regress; select 1\" | psql -X\nBEGIN\nCREATE TEXT SEARCH CONFIGURATION\nSET\nSET\nERROR: invalid value for parameter \"default_text_search_config\": \"public.foo\"\nCONTEXT: while setting parameter \"default_text_search_config\" to \"public.foo\"\n\nI gather $SUBJECT is the tip of an iceberg, so I'm withdrawing the patch and\nabandoning the topic.\n\n\n", "msg_date": "Sat, 24 Oct 2020 08:29:10 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: Timing of relcache inval at parallel worker init" } ]
[ { "msg_contents": "[doc] improve tableoid description\n\nHi\n\nAttached patch aims to improve the description of the tableoid system column [1]\nby:\n\n- mentioning it's useful for determining table names for partitioned tables as\n well as for those in inheritance hierarchies\n- mentioning the possibility of casting tableoid to regclass (which is simpler\n than the currently suggested join on pg_class, which is only needed if\n the schema name is absolutely required)\n\n[1] https://www.postgresql.org/docs/current/ddl-system-columns.html\n\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Sat, 17 Oct 2020 22:04:38 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "[doc] improve tableoid description" }, { "msg_contents": "On Sat, Oct 17, 2020 at 6:35 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> [doc] improve tableoid description\n>\n> Hi\n>\n> Attached patch aims to improve the description of the tableoid system column [1]\n> by:\n>\n> - mentioning it's useful for determining table names for partitioned tables as\n> well as for those in inheritance hierarchies\n\nThis looks fine.\n\n> - mentioning the possibility of casting tableoid to regclass (which is simpler\n> than the currently suggested join on pg_class, which is only needed if\n> the schema name is absolutely required)\n\nMentioning casting to regclass is worthwhile but it's not performance\nefficient if there are many tableoids. In that case, joining with\npg_class.oid is quite efficient. That line further suggests using\nregnamespace which is not as efficient as joining with\npg_namespace.oid. But pg_namespace won't have as many entries as\npg_class so casting to regnamespace might be fine. 
Should we suggest\nboth the methods somehow?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 19 Oct 2020 16:52:04 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [doc] improve tableoid description" }, { "msg_contents": "2020年10月19日(月) 20:22 Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>:\n>\n> On Sat, Oct 17, 2020 at 6:35 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> >\n> > [doc] improve tableoid description\n> >\n> > Hi\n> >\n> > Attached patch aims to improve the description of the tableoid system column [1]\n> > by:\n> >\n> > - mentioning it's useful for determining table names for partitioned tables as\n> > well as for those in inheritance hierarchies\n>\n> This looks fine.\n>\n> > - mentioning the possibility of casting tableoid to regclass (which is simpler\n> > than the currently suggested join on pg_class, which is only needed if\n> > the schema name is absolutely required)\n>\n> Mentioning casting to regclass is worthwhile but it's not performance\n> efficient if there are many tableoids. In that case, joining with\n> pg_class.oid is quite efficient.\n\nTrue.\n\n> That line further suggests using\n> regnamespace which is not as efficient as joining with\n> pg_namespace.oid. But pg_namespace won't have as many entries as\n> pg_class so casting to regnamespace might be fine. 
Should we suggest\n> both the methods somehow?\n\nOn further reflection, I think trying to explain all that is going to\nend up as a\nmini-tutorial which is beyond the scope of the explanation of a column, so\nthe existing reference to pg_class should be enough.\n\nRevised patch attached just mentioning partitioned tables.\n\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Mon, 19 Oct 2020 21:28:39 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [doc] improve tableoid description" }, { "msg_contents": "On Mon, Oct 19, 2020 at 5:58 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> > That line further suggests using\n> > regnamespace which is not as efficient as joining with\n> > pg_namespace.oid. But pg_namespace won't have as many entries as\n> > pg_class so casting to regnamespace might be fine. Should we suggest\n> > both the methods somehow?\n>\n> On further reflection, I think trying to explain all that is going to\n> end up as a\n> mini-tutorial which is beyond the scope of the explanation of a column, so\n> the existing reference to pg_class should be enough.\n\n\n\n>\n> Revised patch attached just mentioning partitioned tables.\n\n From a user's point of view, it makes sense to differentiate between\npartitioning and inheritance, though internally the first uses the\nlater.\n\nMaybe we could just generalize the sentence as \"tableoid can be used\nto obtain the table name either by joining against the oid column of\npg_class or casting it to regclass as appropriate.\" Or just \"\"tableoid\ncan be used to obtain the table name.\". Probably the users would find\nout how to do that from some other part of the document.\n\n <structfield>tableoid</structfield> can be joined against the\n <structfield>oid</structfield> column of\n <structname>pg_class</structname> to obtain the table name.\n\nBut even without that change, the current patch is useful. 
Please add\nit to commitfest so it's not forgotten.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 20 Oct 2020 17:34:06 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [doc] improve tableoid description" }, { "msg_contents": "On 2020-10-19 14:28, Ian Lawrence Barwick wrote:\n> On further reflection, I think trying to explain all that is going to\n> end up as a\n> mini-tutorial which is beyond the scope of the explanation of a column, so\n> the existing reference to pg_class should be enough.\n> \n> Revised patch attached just mentioning partitioned tables.\n\ncommitted\n\n\n", "msg_date": "Sat, 21 Nov 2020 08:29:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [doc] improve tableoid description" }, { "msg_contents": "2020年11月21日(土) 16:29 Peter Eisentraut <peter.eisentraut@enterprisedb.com>:\n>\n> On 2020-10-19 14:28, Ian Lawrence Barwick wrote:\n> > On further reflection, I think trying to explain all that is going to\n> > end up as a\n> > mini-tutorial which is beyond the scope of the explanation of a column, so\n> > the existing reference to pg_class should be enough.\n> >\n> > Revised patch attached just mentioning partitioned tables.\n>\n> committed\n\nThanks!\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Nov 2020 09:35:10 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [doc] improve tableoid description" } ]
[ { "msg_contents": "I overflowed my homedir while testing with pg_reload, and got:\n|pg_restore: error: could not write to large object (result: 18446744073709551615, expected: 30)\n\nsrc/bin/pg_dump/pg_backup_archiver.c\n\n f (res != AH->lo_buf_used)\n fatal(\"could not write to large object (result: %lu, expected: %lu)\",\n (unsigned long) res, (unsigned long) AH->lo_buf_used);\n\n\n; 18446744073709551615 - 1<<64\n -1\n\nI guess casting to long was the best option c. 2002 (commit 6faf8024f) but I\ngather the modern way is with %z.\n\nI confirmed this fixes the message.\n|pg_restore: error: could not write to large object (result: -1, expected: 16384)\n\n\n-- \nJustin", "msg_date": "Sat, 17 Oct 2020 20:02:32 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg_restore error message during ENOSPC with largeobj" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I overflowed my homedir while testing with pg_reload, and got:\n> |pg_restore: error: could not write to large object (result: 18446744073709551615, expected: 30)\n\nBleah.\n\n> I guess casting to long was the best option c. 2002 (commit 6faf8024f) but I\n> gather the modern way is with %z.\n\nIsn't the real problem that lo_write returns int, not size_t?\n\nAFAICT, every other call site stores the result in an int,\nit's just this one that's out in left field.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Oct 2020 22:41:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_restore error message during ENOSPC with largeobj" }, { "msg_contents": "I wrote:\n> Isn't the real problem that lo_write returns int, not size_t?\n\nAfter looking at it some more, I decided that we'd just been lazy\nto begin with: we should be handling this as a regular SQL error\ncondition. 
Pushed at 929c69aa19.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Oct 2020 12:27:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_restore error message during ENOSPC with largeobj" } ]
[ { "msg_contents": "If you go into src/test/thread/ and type \"make\", you get\na bunch of \"undefined reference to `pg_fprintf'\" failures.\nThat's because thread_test.c #include's postgres.h but\nthe Makefile doesn't bother to link it with libpgport,\narguing (falsely) that that might not exist yet.\n\nPresumably, this has been busted on all platforms since\n96bf88d52, and for many years before that on platforms\nthat have always used src/port/snprintf.c.\n\nConfigure's use of the program works anyway because it doesn't\nuse the Makefile and thread_test.c doesn't #include postgres.h\nwhen IN_CONFIGURE.\n\nIt doesn't really seem sane to me to support two different build\nenvironments for thread_test, especially when one of them is so\nlittle-used that it can be broken for years before we notice.\nSo I'd be inclined to rip out the Makefile and just consider\nthat thread_test.c is *only* meant to be used by configure.\nIf we wish to resurrect the standalone build method, we could\nprobably do so by adding LIBS to the Makefile's link command\n... but what's the point, and what will keep it from getting\nbroken again later?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Oct 2020 13:20:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Non-configure build of thread_test has been broken for awhile" }, { "msg_contents": "On 2020-Oct-18, Tom Lane wrote:\n\n> It doesn't really seem sane to me to support two different build\n> environments for thread_test, especially when one of them is so\n> little-used that it can be broken for years before we notice.\n> So I'd be inclined to rip out the Makefile and just consider\n> that thread_test.c is *only* meant to be used by configure.\n> If we wish to resurrect the standalone build method, we could\n> probably do so by adding LIBS to the Makefile's link command\n> ... 
but what's the point, and what will keep it from getting\n> broken again later?\n\nStandalone usage of that program is evidently non-existent, so +1 for\nremoving the Makefile and just keep the configure compile path for it.\n\nBTW the only animal reporting without thread-safety in the buildfarm is\ngaur.\n\n\n", "msg_date": "Mon, 19 Oct 2020 18:41:43 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Non-configure build of thread_test has been broken for awhile" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Oct-18, Tom Lane wrote:\n>> It doesn't really seem sane to me to support two different build\n>> environments for thread_test, especially when one of them is so\n>> little-used that it can be broken for years before we notice.\n>> So I'd be inclined to rip out the Makefile and just consider\n>> that thread_test.c is *only* meant to be used by configure.\n>> If we wish to resurrect the standalone build method, we could\n>> probably do so by adding LIBS to the Makefile's link command\n>> ... but what's the point, and what will keep it from getting\n>> broken again later?\n\n> Standalone usage of that program is evidently non-existent, so +1 for\n> removing the Makefile and just keep the configure compile path for it.\n\nI concluded that if thread_test.c will only be used by configure,\nthen we should stick it under $(SRCDIR)/config/ and nuke the\nsrc/test/thread/ subdirectory altogether. See attached.\n\n> BTW the only animal reporting without thread-safety in the buildfarm is\n> gaur.\n\nYeah. 
At some point maybe we should just drop support for non-thread-safe\nplatforms, but I'm not proposing to do that yet.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 20 Oct 2020 12:25:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Non-configure build of thread_test has been broken for awhile" }, { "msg_contents": "On Tue, Oct 20, 2020 at 12:25:48PM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2020-Oct-18, Tom Lane wrote:\n> >> It doesn't really seem sane to me to support two different build\n> >> environments for thread_test, especially when one of them is so\n> >> little-used that it can be broken for years before we notice.\n> >> So I'd be inclined to rip out the Makefile and just consider\n> >> that thread_test.c is *only* meant to be used by configure.\n> >> If we wish to resurrect the standalone build method, we could\n> >> probably do so by adding LIBS to the Makefile's link command\n> >> ... but what's the point, and what will keep it from getting\n> >> broken again later?\n> \n> > Standalone usage of that program is evidently non-existent, so +1 for\n> > removing the Makefile and just keep the configure compile path for it.\n> \n> I concluded that if thread_test.c will only be used by configure,\n> then we should stick it under $(SRCDIR)/config/ and nuke the\n> src/test/thread/ subdirectory altogether. See attached.\n\nSounds good.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 26 Oct 2020 19:30:15 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Non-configure build of thread_test has been broken for awhile" } ]
[ { "msg_contents": "Hi\r\n\r\nFound one more place needed to be changed(long -> int64).\r\n\r\nAlso changed the output for int64 data(Debug mode on & define EXEC_SORTDEBUG )\r\n\r\nAnd, maybe there's a typo in \" src\\backend\\executor\\nodeIncrementalSort.c\" as below.\r\nObviously, the \">=\" is meaningless, right?\r\n\r\n-\t\tSO1_printf(\"Sorting presorted prefix tuplesort with >= %ld tuples\\n\", nTuples);\r\n+\t\tSO1_printf(\"Sorting presorted prefix tuplesort with %ld tuples\\n\", nTuples);\r\n\r\nPlease take a check at the attached patch file.\r\n\r\nPrevious disscution:\r\nhttps://www.postgresql.org/message-id/CAApHDvpky%2BUhof8mryPf5i%3D6e6fib2dxHqBrhp0Qhu0NeBhLJw%40mail.gmail.com\r\n\r\nBest regards\r\nTang", "msg_date": "Mon, 19 Oct 2020 03:57:00 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Use of \"long\" in incremental sort code" }, { "msg_contents": "Hi\r\n\r\n>Found one more place needed to be changed(long -> int64).\r\n>\r\n>Also changed the output for int64 data(Debug mode on & define EXEC_SORTDEBUG )\r\n>\r\n>And, maybe there's a typo in \" src\\backend\\executor\\nodeIncrementalSort.c\" as below.\r\n>Obviously, the \">=\" is meaningless, right?\r\n>\r\n>And, maybe there's a typo in \" src\\backend\\executor\\nodeIncrementalSort.c\" as below.\r\n>Obviously, the \">=\" is meaningless, right?\r\n>\r\n>-\t\tSO1_printf(\"Sorting presorted prefix tuplesort with >= %ld tuples\\n\", nTuples);\r\n>+\t\tSO1_printf(\"Sorting presorted prefix tuplesort with %ld tuples\\n\", nTuples);\r\n>\r\n>Please take a check at the attached patch file.\r\n\r\nI have added it to commit fest.\r\nhttps://commitfest.postgresql.org/30/2772/\r\n\r\nBest regards\r\nTang\r\n\r\n-----Original Message-----\r\nFrom: Tang, Haiying <tanghy.fnst@cn.fujitsu.com> \r\nSent: Monday, October 19, 2020 12:57 PM\r\nTo: David Rowley <dgrowleyml@gmail.com>; James Coleman <jtc331@gmail.com>\r\nCc: 
pgsql-hackers@postgresql.org\r\nSubject: RE: Use of \"long\" in incremental sort code\r\n\r\nHi\r\n\r\nFound one more place needed to be changed(long -> int64).\r\n\r\nAlso changed the output for int64 data(Debug mode on & define EXEC_SORTDEBUG )\r\n\r\nAnd, maybe there's a typo in \" src\\backend\\executor\\nodeIncrementalSort.c\" as below.\r\nObviously, the \">=\" is meaningless, right?\r\n\r\n-\t\tSO1_printf(\"Sorting presorted prefix tuplesort with >= %ld tuples\\n\", nTuples);\r\n+\t\tSO1_printf(\"Sorting presorted prefix tuplesort with %ld tuples\\n\", nTuples);\r\n\r\nPlease take a check at the attached patch file.\r\n\r\nPrevious disscution:\r\nhttps://www.postgresql.org/message-id/CAApHDvpky%2BUhof8mryPf5i%3D6e6fib2dxHqBrhp0Qhu0NeBhLJw%40mail.gmail.com\r\n\r\nBest regards\r\nTang\r\n\r\n\r\n\r\n\n\n", "msg_date": "Wed, 21 Oct 2020 06:06:52 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Wed, Oct 21, 2020 at 06:06:52AM +0000, Tang, Haiying wrote:\n>Hi\n>\n>>Found one more place needed to be changed(long -> int64).\n>>\n>>Also changed the output for int64 data(Debug mode on & define EXEC_SORTDEBUG )\n>>\n>>And, maybe there's a typo in \" src\\backend\\executor\\nodeIncrementalSort.c\" as below.\n>>Obviously, the \">=\" is meaningless, right?\n>>\n>>And, maybe there's a typo in \" src\\backend\\executor\\nodeIncrementalSort.c\" as below.\n>>Obviously, the \">=\" is meaningless, right?\n>>\n>>-\t\tSO1_printf(\"Sorting presorted prefix tuplesort with >= %ld tuples\\n\", nTuples);\n>>+\t\tSO1_printf(\"Sorting presorted prefix tuplesort with %ld tuples\\n\", nTuples);\n>>\n>>Please take a check at the attached patch file.\n>\n>I have added it to commit fest.\n>https://commitfest.postgresql.org/30/2772/\n>\n\nThanks, the changes seem fine to me. 
I'll do a bit more review and get\nit pushed.\n\n\nregards\nTomas\n\n\n", "msg_date": "Wed, 21 Oct 2020 23:00:05 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "Hi,\n\nI took another look at this, and 99% of the patch (the fixes to sort\ndebug messages) seems fine to me. Attached is the part I plan to get\ncommitted, including commit message etc.\n\n\nThe one change I decided to remove is this change in tuplesort_free:\n\n- long spaceUsed;\n+ int64 spaceUsed;\n\nThe reason why I think this variable should be 'long' is that we're\nusing it for this:\n\n spaceUsed = LogicalTapeSetBlocks(state->tapeset);\n\nand LogicalTapeSetBlocks is defined like this:\n\n extern long LogicalTapeSetBlocks(LogicalTapeSet *lts);\n\nFWIW the \"long\" is not introduced by incremental sort - it used to be in\ntuplesort_end, the incremental sort patch just moved it to a different\nfunction. It's a bit confusing that tuplesort_updatemax has this:\n\n int64 spaceUsed;\n\nBut I'd argue this is actually wrong, and should be \"long\" instead. (And\nthis actually comes from the incremental sort patch, by me.)\n\n\nFWIW while looking at what the other places calling LogicalTapeSetBlocks\ndo, and I noticed this:\n\n uint64 disk_used = LogicalTapeSetBlocks(...);\n\nin the disk-based hashagg patch. So that's a third data type ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 3 Nov 2020 03:53:53 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Tue, Nov 03, 2020 at 03:53:53AM +0100, Tomas Vondra wrote:\n>Hi,\n>\n>I took another look at this, and 99% of the patch (the fixes to sort\n>debug messages) seems fine to me. 
Attached is the part I plan to get\n>committed, including commit message etc.\n>\n\nI've pushed this part. Thanks for the patch, Haiying Tang.\n\n>\n>The one change I decided to remove is this change in tuplesort_free:\n>\n>- long spaceUsed;\n>+ int64 spaceUsed;\n>\n>The reason why I think this variable should be 'long' is that we're\n>using it for this:\n>\n> spaceUsed = LogicalTapeSetBlocks(state->tapeset);\n>\n>and LogicalTapeSetBlocks is defined like this:\n>\n> extern long LogicalTapeSetBlocks(LogicalTapeSet *lts);\n>\n>FWIW the \"long\" is not introduced by incremental sort - it used to be in\n>tuplesort_end, the incremental sort patch just moved it to a different\n>function. It's a bit confusing that tuplesort_updatemax has this:\n>\n> int64 spaceUsed;\n>\n>But I'd argue this is actually wrong, and should be \"long\" instead. (And\n>this actually comes from the incremental sort patch, by me.)\n>\n>\n>FWIW while looking at what the other places calling LogicalTapeSetBlocks\n>do, and I noticed this:\n>\n> uint64 disk_used = LogicalTapeSetBlocks(...);\n>\n>in the disk-based hashagg patch. So that's a third data type ...\n>\n\nIMHO this should simply switch the current int64 variable to long, as it\nwas before. Not sure about about the hashagg uint64 variable.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 3 Nov 2020 22:42:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Wed, 4 Nov 2020 at 10:42, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> IMHO this should simply switch the current int64 variable to long, as it\n> was before. Not sure about about the hashagg uint64 variable.\n\nIMO, we should just get rid of the use of \"long\" here. 
As far as I'm\nconcerned, using long in the core code at all is just unnecessary and\njust increases the chances of having bugs.\n\nHow often do people forget that we support a 64-bit platform that has\nsizeof(long) == 4?\n\nCan't we use size_t and ssize_t if we really need a processor\nword-sized type? And use int64/uint64 when we really want a 64-bit\ntype.\n\nDavid\n\n\n", "msg_date": "Thu, 5 Nov 2020 10:58:44 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "\nOn 11/4/20 10:58 PM, David Rowley wrote:\n> On Wed, 4 Nov 2020 at 10:42, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> IMHO this should simply switch the current int64 variable to long, as it\n>> was before. Not sure about about the hashagg uint64 variable.\n> \n> IMO, we should just get rid of the use of \"long\" here. As far as I'm\n> concerned, using long in the core code at all is just unnecessary and\n> just increases the chances of having bugs.\n> \n> How often do people forget that we support a 64-bit platform that has\n> sizeof(long) == 4?\n> \n> Can't we use size_t and ssize_t if we really need a processor\n> word-sized type? And use int64/uint64 when we really want a 64-bit\n> type.\n> \n\nPerhaps. But I guess it's a bit strange to have function declared as \nreturning long, but store the result in int64 everywhere. 
That was the \npoint I was trying to make - it's not just a matter of changing all the \nvariables to int64, IMHO.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 4 Nov 2020 23:13:41 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Tue, Nov 3, 2020 at 4:42 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Tue, Nov 03, 2020 at 03:53:53AM +0100, Tomas Vondra wrote:\n> >Hi,\n> >\n> >I took another look at this, and 99% of the patch (the fixes to sort\n> >debug messages) seems fine to me. Attached is the part I plan to get\n> >committed, including commit message etc.\n> >\n>\n> I've pushed this part. Thanks for the patch, Haiying Tang.\n>\n> >\n> >The one change I decided to remove is this change in tuplesort_free:\n> >\n> >- long spaceUsed;\n> >+ int64 spaceUsed;\n> >\n> >The reason why I think this variable should be 'long' is that we're\n> >using it for this:\n> >\n> > spaceUsed = LogicalTapeSetBlocks(state->tapeset);\n> >\n> >and LogicalTapeSetBlocks is defined like this:\n> >\n> > extern long LogicalTapeSetBlocks(LogicalTapeSet *lts);\n> >\n> >FWIW the \"long\" is not introduced by incremental sort - it used to be in\n> >tuplesort_end, the incremental sort patch just moved it to a different\n> >function. It's a bit confusing that tuplesort_updatemax has this:\n> >\n> > int64 spaceUsed;\n> >\n> >But I'd argue this is actually wrong, and should be \"long\" instead. (And\n> >this actually comes from the incremental sort patch, by me.)\n> >\n> >\n> >FWIW while looking at what the other places calling LogicalTapeSetBlocks\n> >do, and I noticed this:\n> >\n> > uint64 disk_used = LogicalTapeSetBlocks(...);\n> >\n> >in the disk-based hashagg patch. 
So that's a third data type ...\n> >\n>\n> IMHO this should simply switch the current int64 variable to long, as it\n> was before. Not sure about about the hashagg uint64 variable.\n\nIs there anything that actually limits tape code to using at most 4GB\non 32-bit systems?\n\nJames\n\n\n", "msg_date": "Wed, 4 Nov 2020 18:53:32 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On 05.11.2020 02:53, James Coleman wrote:\n> On Tue, Nov 3, 2020 at 4:42 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> On Tue, Nov 03, 2020 at 03:53:53AM +0100, Tomas Vondra wrote:\n>>> Hi,\n>>>\n>>> I took another look at this, and 99% of the patch (the fixes to sort\n>>> debug messages) seems fine to me. Attached is the part I plan to get\n>>> committed, including commit message etc.\n>>>\n>> I've pushed this part. Thanks for the patch, Haiying Tang.\n>>\n>>> The one change I decided to remove is this change in tuplesort_free:\n>>>\n>>> - long spaceUsed;\n>>> + int64 spaceUsed;\n>>>\n>>> The reason why I think this variable should be 'long' is that we're\n>>> using it for this:\n>>>\n>>> spaceUsed = LogicalTapeSetBlocks(state->tapeset);\n>>>\n>>> and LogicalTapeSetBlocks is defined like this:\n>>>\n>>> extern long LogicalTapeSetBlocks(LogicalTapeSet *lts);\n>>>\n>>> FWIW the \"long\" is not introduced by incremental sort - it used to be in\n>>> tuplesort_end, the incremental sort patch just moved it to a different\n>>> function. It's a bit confusing that tuplesort_updatemax has this:\n>>>\n>>> int64 spaceUsed;\n>>>\n>>> But I'd argue this is actually wrong, and should be \"long\" instead. 
(And\n>>> this actually comes from the incremental sort patch, by me.)\n>>>\n>>>\n>>> FWIW while looking at what the other places calling LogicalTapeSetBlocks\n>>> do, and I noticed this:\n>>>\n>>> uint64 disk_used = LogicalTapeSetBlocks(...);\n>>>\n>>> in the disk-based hashagg patch. So that's a third data type ...\n>>>\n>> IMHO this should simply switch the current int64 variable to long, as it\n>> was before. Not sure about about the hashagg uint64 variable.\n> Is there anything that actually limits tape code to using at most 4GB\n> on 32-bit systems?\n\nAt first glance, I haven't found anything that could limit tape code. It \nuses BufFile, which is not limited by the OS file size limit.\nStill, If we want to change 'long' in LogicalTapeSetBlocks, we should \nprobably also update nBlocksWritten and other variables.\n\nAs far as I see, the major part of the patch was committed, so l update \nthe status of the CF entry to \"Committed\". Feel free to create a new \nentry, if you're going to continue working on the remaining issue.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 23 Nov 2020 14:11:04 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" } ]
[ { "msg_contents": "Hi\n\nThe errdetail emitted when creating/modifying an ENUM value is misleading:\n\n postgres=# CREATE TYPE enum_valtest AS ENUM (\n 'foo',\n 'ああああああああああああああああああああああ'\n );\n ERROR: invalid enum label \"ああああああああああああああああああああああ\"\n DETAIL: Labels must be 63 characters or less.\n\nAttached trivial patch changes the message to:\n\n DETAIL: Labels must be 63 bytes or less.\n\nThis matches the documentation, which states:\n\n The length of an enum value's textual label is limited by the NAMEDATALEN\n setting compiled into PostgreSQL; in standard builds this means at most\n 63 bytes.\n\n https://www.postgresql.org/docs/current/datatype-enum.html\n\nI don't see any particular need to backpatch this.\n\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Mon, 19 Oct 2020 13:18:07 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "[patch] ENUM errdetail should mention bytes, not chars" }, { "msg_contents": "On Mon, Oct 19, 2020 at 12:18 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> Hi\n>\n> The errdetail emitted when creating/modifying an ENUM value is misleading:\n>\n> postgres=# CREATE TYPE enum_valtest AS ENUM (\n> 'foo',\n> 'ああああああああああああああああああああああ'\n> );\n> ERROR: invalid enum label \"ああああああああああああああああああああああ\"\n> DETAIL: Labels must be 63 characters or less.\n>\n> Attached trivial patch changes the message to:\n>\n> DETAIL: Labels must be 63 bytes or less.\n>\n> This matches the documentation, which states:\n>\n> The length of an enum value's textual label is limited by the NAMEDATALEN\n> setting compiled into PostgreSQL; in standard builds this means at most\n> 63 bytes.\n>\n> https://www.postgresql.org/docs/current/datatype-enum.html\n>\n> I don't see any particular need to backpatch this.\n\nIndeed the message is wrong, and patch LGTM.\n\n\n", "msg_date": "Mon, 19 Oct 2020 12:34:24 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", 
"msg_from_op": false, "msg_subject": "Re: [patch] ENUM errdetail should mention bytes, not chars" }, { "msg_contents": "On 2020-10-19 06:34, Julien Rouhaud wrote:\n>> ERROR: invalid enum label \"ああああああああああああああああああああああ\"\n>> DETAIL: Labels must be 63 characters or less.\n>>\n>> Attached trivial patch changes the message to:\n>>\n>> DETAIL: Labels must be 63 bytes or less.\n>>\n>> This matches the documentation, which states:\n>>\n>> The length of an enum value's textual label is limited by the NAMEDATALEN\n>> setting compiled into PostgreSQL; in standard builds this means at most\n>> 63 bytes.\n>>\n>> https://www.postgresql.org/docs/current/datatype-enum.html\n>>\n>> I don't see any particular need to backpatch this.\n> \n> Indeed the message is wrong, and patch LGTM.\n\nCommitted. Btw., the patch didn't update the regression test output.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 27 Oct 2020 12:00:33 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [patch] ENUM errdetail should mention bytes, not chars" }, { "msg_contents": "2020年10月27日(火) 20:00 Peter Eisentraut <peter.eisentraut@2ndquadrant.com>:\n>\n> On 2020-10-19 06:34, Julien Rouhaud wrote:\n> >> ERROR: invalid enum label \"ああああああああああああああああああああああ\"\n> >> DETAIL: Labels must be 63 characters or less.\n> >>\n> >> Attached trivial patch changes the message to:\n> >>\n> >> DETAIL: Labels must be 63 bytes or less.\n> >>\n> >> This matches the documentation, which states:\n> >>\n> >> The length of an enum value's textual label is limited by the NAMEDATALEN\n> >> setting compiled into PostgreSQL; in standard builds this means at most\n> >> 63 bytes.\n> >>\n> >> https://www.postgresql.org/docs/current/datatype-enum.html\n> >>\n> >> I don't see any particular need to backpatch this.\n> >\n> > Indeed the message is wrong, and patch LGTM.\n>\n> 
Committed.\n\nThanks!\n\n> Btw., the patch didn't update the regression test output.\n\nWhoops... /me hangs head in shame and slinks away...\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Oct 2020 10:35:03 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [patch] ENUM errdetail should mention bytes, not chars" } ]
[ { "msg_contents": "Hi\n\nI found some code like the following:\n\n> StringInfoData s;\n> ...\n> values[6] = CStringGetTextDatum(s.data);\n\nThe length of string can be found in ' StringInfoData.len', \nbut the macro CStringGetTextDatum will use strlen to count the length again.\nI think we can use PointerGetDatum(cstring_to_text_with_len(s.data, s.len)) to improve.\n\n> #define CStringGetTextDatum(s) PointerGetDatum(cstring_to_text(s))\n> text *\n> cstring_to_text(const char *s)\n> {\n> \treturn cstring_to_text_with_len(s, strlen(s));\n> }\n\n\nThere may have more places that can get the length of string in advance,\nBut that may need new variable to store it ,So I just find all StringInfoData cases.\n\nBest regards,\nhouzj", "msg_date": "Mon, 19 Oct 2020 06:32:57 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "Use PointerGetDatum(cstring_to_text_with_len()) instead of\n CStringGetTextDatum() to avoid duplicate strlen" }, { "msg_contents": "On 19/10/2020 09:32, Hou, Zhijie wrote:\n> Hi\n> \n> I found some code like the following:\n> \n>> StringInfoData s;\n>> ...\n>> values[6] = CStringGetTextDatum(s.data);\n> \n> The length of string can be found in ' StringInfoData.len',\n> but the macro CStringGetTextDatum will use strlen to count the length again.\n> I think we can use PointerGetDatum(cstring_to_text_with_len(s.data, s.len)) to improve.\n> \n>> #define CStringGetTextDatum(s) PointerGetDatum(cstring_to_text(s))\n>> text *\n>> cstring_to_text(const char *s)\n>> {\n>> \treturn cstring_to_text_with_len(s, strlen(s));\n>> }\n> \n> \n> There may have more places that can get the length of string in advance,\n> But that may need new variable to store it ,So I just find all StringInfoData cases.\n\nNone of these calls are performance-critical, so it hardly matters. I \nwould rather keep them short and simple.\n\nIt might make sense to create a new macro or function for this, though. 
\nSomething like:\n\nstatic inline text *\nStringInfoGetTextDatum(StringInfo s)\n{\n return cstring_to_text_with_len(s->data, s->len);\n}\n\nThat would perhaps make existing code a bit shorter and nicer to read.\n\n- Heikki\n\n\n", "msg_date": "Mon, 19 Oct 2020 15:07:01 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Use PointerGetDatum(cstring_to_text_with_len()) instead of\n CStringGetTextDatum() to avoid duplicate strlen" } ]
[ { "msg_contents": "In [0] it was discussed that hash support for row types/record would be \nhandy. So I implemented that.\n\nThe implementation hashes each field and combines the hash values. Most \nof the code structure can be borrowed from the record comparison \nfunctions/btree support. To combine the hash values, I adapted the code \nfrom the array hashing functions. (The hash_combine()/hash_combine64() \nfunctions also looked sensible, but they don't appear to work in a way \nthat satisfies the hash_func regression test. Could be documented better.)\n\nThe main motivation is to support UNION [DISTINCT] as discussed in [0], \nbut this also enables other hash-related functionality such as hash \njoins (as one regression test accidentally revealed) and hash partitioning.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/52beaf44-ccc3-0ba1-45c7-74aa251cd6ab%402ndquadrant.com#9559845e0ee2129c483b745b9843c571\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 19 Oct 2020 10:01:14 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Hash support for row types" }, { "msg_contents": "Hi,\n\nOn 2020-10-19 10:01:14 +0200, Peter Eisentraut wrote:\n> In [0] it was discussed that hash support for row types/record would be\n> handy. So I implemented that.\n\n> The implementation hashes each field and combines the hash values. Most of\n> the code structure can be borrowed from the record comparison\n> functions/btree support. To combine the hash values, I adapted the code\n> from the array hashing functions. (The hash_combine()/hash_combine64()\n> functions also looked sensible, but they don't appear to work in a way that\n> satisfies the hash_func regression test. 
Could be documented better.)\n> \n> The main motivation is to support UNION [DISTINCT] as discussed in [0], but\n> this also enables other hash-related functionality such as hash joins (as\n> one regression test accidentally revealed) and hash partitioning.\n\nHow does this deal with row types with a field that doesn't have a hash\nfunction? Erroring out at runtime could cause queries that used to\nsucceed, e.g. because all fields have btree ops, to fail, if we just have\na generic unconditionally present hash opclass? Is that an OK\n\"regression\"?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 19 Oct 2020 16:32:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hash support for row types" }, { "msg_contents": "On 2020-10-20 01:32, Andres Freund wrote:\n> How does this deal with row types with a field that doesn't have a hash\n> function? Erroring out at runtime could cause queries that used to\n> succeed, e.g. because all fields have btree ops, to fail, if we just have\n> a generic unconditionally present hash opclass? Is that an OK\n> \"regression\"?\n\nGood point. There is actually code in the type cache that is supposed \nto handle that, so I'll need to adjust that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 20 Oct 2020 17:10:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Hash support for row types" }, { "msg_contents": "On Tue, Oct 20, 2020 at 11:10 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-10-20 01:32, Andres Freund wrote:\n> > How does this deal with row types with a field that doesn't have a hash\n> > function? Erroring out at runtime could cause queries that used to\n> > succeed, e.g. 
because all fields have btree ops, to fail, if we just have\n> > a generic unconditionally present hash opclass? Is that an OK\n> > \"regression\"?\n>\n> Good point. There is actually code in the type cache that is supposed\n> to handle that, so I'll need to adjust that.\n\nDo we need to worry about what happens if somebody modifies the\nopclass/opfamily definitions?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 20 Oct 2020 14:41:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hash support for row types" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Do we need to worry about what happens if somebody modifies the\n> opclass/opfamily definitions?\n\nThere's a lot of places that you can break by doing that. I'm not\ntoo concerned about it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Oct 2020 16:36:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hash support for row types" }, { "msg_contents": "On 2020-10-20 17:10, Peter Eisentraut wrote:\n> On 2020-10-20 01:32, Andres Freund wrote:\n>> How does this deal with row types with a field that doesn't have a hash\n>> function? Erroring out at runtime could cause queries that used to\n>> succeed, e.g. because all fields have btree ops, to fail, if we just have\n>> a generic unconditionally present hash opclass? Is that an OK\n>> \"regression\"?\n> \n> Good point. There is actually code in the type cache that is supposed\n> to handle that, so I'll need to adjust that.\n\nHere is an updated patch with the type cache integration added.\n\nTo your point, this now checks each fields hashability before \nconsidering a row type as hashable. 
It can still have run-time errors \nfor untyped record datums, but that's not something we can do anything \nabout.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 23 Oct 2020 09:49:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Hash support for row types" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Here is an updated patch with the type cache integration added.\n\n> To your point, this now checks each fields hashability before \n> considering a row type as hashable. It can still have run-time errors \n> for untyped record datums, but that's not something we can do anything \n> about.\n\nThis looks good code-wise. A couple small niggles on the tests:\n\n* The new test in with.sql claims to be testing row hashing, but\nit's not very apparent that any such thing actually happens. Maybe\nEXPLAIN the query, as well as execute it, to confirm that a\nhash-based plan is used.\n\n* Is it worth devising a test case in which hashing is not possible\nbecause one of the columns isn't hashable? I have mixed feelings\nabout this because the set of suitable column types may decrease\nto empty over time, making it hard to maintain the test case.\n\nI marked it RFC.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Nov 2020 14:51:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hash support for row types" }, { "msg_contents": "I wrote a new patch to add a lot more tests around hash-based plans. \nThis is intended to apply separately from the other patch, and the other \npatch would then \"flip\" some of the test cases.\n\nOn 2020-11-13 20:51, Tom Lane wrote:\n> * The new test in with.sql claims to be testing row hashing, but\n> it's not very apparent that any such thing actually happens. 
Maybe\n> EXPLAIN the query, as well as execute it, to confirm that a\n> hash-based plan is used.\n\nThe recursive union requires hashing, but this is not visible in the \nplan. You only get an error if there is no hashing support for a type. \nI have added a test for this.\n\nFor the non-recursive union, I have added more tests that show this in \nthe plans.\n\n> * Is it worth devising a test case in which hashing is not possible\n> because one of the columns isn't hashable? I have mixed feelings\n> about this because the set of suitable column types may decrease\n> to empty over time, making it hard to maintain the test case.\n\nI used the money type for now. If someone adds hash support for that, \nwe'll change it. I don't think this will change too rapidly, though.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Tue, 17 Nov 2020 14:25:59 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Hash support for row types" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I wrote a new patch to add a lot more tests around hash-based plans. \n> This is intended to apply separately from the other patch, and the other \n> patch would then \"flip\" some of the test cases.\n\nOK, that addresses my concerns.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Nov 2020 14:33:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hash support for row types" }, { "msg_contents": "On 2020-11-17 20:33, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> I wrote a new patch to add a lot more tests around hash-based plans.\n>> This is intended to apply separately from the other patch, and the other\n>> patch would then \"flip\" some of the test cases.\n> \n> OK, that addresses my concerns.\n\nThanks. 
I have committed the tests and then subsequently the feature patch.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Thu, 19 Nov 2020 09:44:43 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Hash support for row types" } ]
[ { "msg_contents": "A follow-up to the recently added support for OUT parameters for \nprocedures. The JDBC driver sends OUT parameters with type void. This \nmakes sense when calling a function, so that the parameters are ignored \nin ParseFuncOrColumn(). For a procedure call we want to treat them as \nunknown. This is of course a bit of a hack on top of another hack, but \nit's small and contained and gets the job done.\n\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 19 Oct 2020 11:19:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Make procedure OUT parameters work with JDBC" }, { "msg_contents": "\nOn 10/19/20 5:19 AM, Peter Eisentraut wrote:\n> A follow-up to the recently added support for OUT parameters for\n> procedures.  The JDBC driver sends OUT parameters with type void. \n> This makes sense when calling a function, so that the parameters are\n> ignored in ParseFuncOrColumn().  For a procedure call we want to treat\n> them as unknown.  This is of course a bit of a hack on top of another\n> hack, but it's small and contained and gets the job done.\n>\n>\n\nI've tested this and it works as expected. +1 to apply.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 19 Oct 2020 07:15:48 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Make procedure OUT parameters work with JDBC" }, { "msg_contents": "On Mon, 19 Oct 2020, 19:16 Andrew Dunstan, <andrew@dunslane.net> wrote:\n\n>\n> On 10/19/20 5:19 AM, Peter Eisentraut wrote:\n> > A follow-up to the recently added support for OUT parameters for\n> > procedures. 
The JDBC driver sends OUT parameters with type void.\n> > This makes sense when calling a function, so that the parameters are\n> > ignored in ParseFuncOrColumn(). For a procedure call we want to treat\n> > them as unknown. This is of course a bit of a hack on top of another\n> > hack, but it's small and contained and gets the job done.\n> >\n>\n\nThe JDBC spec defines CallableStatement.registerOutPararameter(...)\nvariants that take SQLType enumeration value and optionally type name.\n\nIt's important that this change not break correct and fully specified use\nof the CallableStatement interface.\n\nOn Mon, 19 Oct 2020, 19:16 Andrew Dunstan, <andrew@dunslane.net> wrote:\nOn 10/19/20 5:19 AM, Peter Eisentraut wrote:\n> A follow-up to the recently added support for OUT parameters for\n> procedures.  The JDBC driver sends OUT parameters with type void. \n> This makes sense when calling a function, so that the parameters are\n> ignored in ParseFuncOrColumn().  For a procedure call we want to treat\n> them as unknown.  This is of course a bit of a hack on top of another\n> hack, but it's small and contained and gets the job done.\n>The JDBC spec defines CallableStatement.registerOutPararameter(...) variants that take SQLType enumeration value and optionally type name.It's important that this change not break correct and fully specified use of the CallableStatement interface.", "msg_date": "Tue, 20 Oct 2020 08:35:56 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Make procedure OUT parameters work with JDBC" }, { "msg_contents": "\nOn 10/19/20 8:35 PM, Craig Ringer wrote:\n>\n>\n> On Mon, 19 Oct 2020, 19:16 Andrew Dunstan, <andrew@dunslane.net\n> <mailto:andrew@dunslane.net>> wrote:\n>\n>\n> On 10/19/20 5:19 AM, Peter Eisentraut wrote:\n> > A follow-up to the recently added support for OUT parameters for\n> > procedures.  The JDBC driver sends OUT parameters with type void. 
\n> > This makes sense when calling a function, so that the parameters are\n> > ignored in ParseFuncOrColumn().  For a procedure call we want to\n> treat\n> > them as unknown.  This is of course a bit of a hack on top of\n> another\n> > hack, but it's small and contained and gets the job done.\n> >\n>\n>\n> The JDBC spec defines CallableStatement.registerOutPararameter(...)\n> variants that take SQLType enumeration value and optionally type name.\n>\n> It's important that this change not break correct and fully specified\n> use of the CallableStatement interface.\n\n\nThe JDBC driver currently implements this but discards any type\ninformation and sends VOIDOID. This patch accommodates that. This\nactually works fine, except in the case of overloaded procedures, where\nthe workaround is to include an explicit cast in the CALL statement.\n\nModifying the JDBC driver to send real type info for these cases is\nsomething to be done, but there are some difficulties in that the class\nwhere it's handled doesn't have enough context. And there will also\nalways be cases where it really doesn't know what to send (user defined\ntypes etc.), so sending VOIDOID or UNKNOWNOID will still be done.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Tue, 20 Oct 2020 08:55:37 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Make procedure OUT parameters work with JDBC" }, { "msg_contents": "On 2020-10-19 13:15, Andrew Dunstan wrote:\n> On 10/19/20 5:19 AM, Peter Eisentraut wrote:\n>> A follow-up to the recently added support for OUT parameters for\n>> procedures.  The JDBC driver sends OUT parameters with type void.\n>> This makes sense when calling a function, so that the parameters are\n>> ignored in ParseFuncOrColumn().  For a procedure call we want to treat\n>> them as unknown.  
This is of course a bit of a hack on top of another\n>> hack, but it's small and contained and gets the job done.\n> \n> I've tested this and it works as expected. +1 to apply.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 27 Oct 2020 09:12:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Make procedure OUT parameters work with JDBC" } ]
[ { "msg_contents": "Hi,\n\nAssertion added in commits 6b2c4e59d016 is failing with following test:\n\nCREATE TABLE sales\n(\n prod_id int,\n prod_quantity int,\n sold_month date\n) PARTITION BY RANGE(sold_month);\n\nCREATE TABLE public.sales_p1 PARTITION OF public.sales\nFOR VALUES FROM (MINVALUE) TO ('2019-02-15');\n\nCREATE TABLE sales_p2(like sales including all);\nALTER TABLE sales ATTACH PARTITION sales_p2\nFOR VALUES FROM ('2019-02-15') TO ('2019-03-15');\n\nCREATE TABLE fail PARTITION OF public.sales\nFOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\n\n\nHere is the backtrace:\n\n(gdb) bt\n#0 0x00007fa373333277 in raise () from /lib64/libc.so.6\n#1 0x00007fa373334968 in abort () from /lib64/libc.so.6\n#2 0x0000000000abecdc in ExceptionalCondition (conditionName=0xc5de6d\n\"cmpval >= 0\", errorType=0xc5cf03 \"FailedAssertion\", fileName=0xc5d03e\n\"partbounds.c\", lineNumber=3092) at assert.c:69\n#3 0x000000000086189c in check_new_partition_bound\n(relname=0x7fff225f5ef0 \"fail\", parent=0x7fa3744868a0, spec=0x2e98888,\npstate=0x2e905e8) at partbounds.c:3092\n#4 0x00000000006b44dc in DefineRelation (stmt=0x2e83198, relkind=114\n'r', ownerId=10, typaddress=0x0, queryString=0x2dc07c0 \"CREATE TABLE\nfail PARTITION OF public.sales \\nFOR VALUES FROM ('2019-01-15') TO\n('2019-02-15');\") at tablecmds.c:1011\n#5 0x0000000000941430 in ProcessUtilitySlow (pstate=0x2e83080,\npstmt=0x2dc19b8, queryString=0x2dc07c0 \"CREATE TABLE fail PARTITION OF\npublic.sales \\nFOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x2dc1aa8, qc=0x7fff225f67c0) at utility.c:1163\n#6 0x000000000094123e in standard_ProcessUtility (pstmt=0x2dc19b8,\nqueryString=0x2dc07c0 \"CREATE TABLE fail PARTITION OF public.s ales\n\\nFOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x2dc1 aa8, qc=0x7fff225f67c0) at utility.c:1071\n#7 0x0000000000940349 in 
ProcessUtility (pstmt=0x2dc19b8,\nqueryString=0x2dc07c0 \"CREATE TABLE fail PARTITION OF public.sales\n\\nFO R VALUES FROM ('2019-01-15') TO ('2019-02-15');\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x2dc1aa8, qc=0 x7fff225f67c0) at utility.c:524\n#8 0x000000000093f163 in PortalRunUtility (portal=0x2e22ab0,\npstmt=0x2dc19b8, isTopLevel=true, setHoldSnapshot=false, dest=0x2dc1\naa8, qc=0x7fff225f67c0) at pquery.c:1159\n#9 0x000000000093f380 in PortalRunMulti (portal=0x2e22ab0,\nisTopLevel=true, setHoldSnapshot=false, dest=0x2dc1aa8, altdest=0x2dc1\naa8, qc=0x7fff225f67c0) at pquery.c:1305\n#10 0x000000000093e882 in PortalRun (portal=0x2e22ab0,\ncount=9223372036854775807, isTopLevel=true, run_once=true,\ndest=0x2dc1aa8, altdest=0x2dc1aa8, qc=0x7fff225f67c0) at pquery.c:779\n#11 0x00000000009389e8 in exec_simple_query (query_string=0x2dc07c0\n\"CREATE TABLE fail PARTITION OF public.sales \\nFOR VALUES FROM\n('2019-01-15') TO ('2019-02-15');\") at postgres.c:1239\n\nRegards,\nAmul Sul\n\n\n", "msg_date": "Mon, 19 Oct 2020 16:58:22 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Assertion failure when ATTACH partition followed by CREATE PARTITION." 
}, { "msg_contents": "Hi,\n\nOn Mon, Oct 19, 2020 at 4:58 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> Assertion added in commits 6b2c4e59d016 is failing with following test:\n>\n> CREATE TABLE sales\n> (\n> prod_id int,\n> prod_quantity int,\n> sold_month date\n> ) PARTITION BY RANGE(sold_month);\n>\n> CREATE TABLE public.sales_p1 PARTITION OF public.sales\n> FOR VALUES FROM (MINVALUE) TO ('2019-02-15');\n>\n> CREATE TABLE sales_p2(like sales including all);\n> ALTER TABLE sales ATTACH PARTITION sales_p2\n> FOR VALUES FROM ('2019-02-15') TO ('2019-03-15');\n>\n> CREATE TABLE fail PARTITION OF public.sales\n> FOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\n>\n\nThe reported issue has nothing to do with the ATTACH PARTITION stmt this can\nalso be reproducible with the following CREATE stmts:\n\nCREATE TABLE sales\n(\n prod_id int,\n prod_quantity int,\n sold_month date\n) PARTITION BY RANGE(sold_month);\n\nCREATE TABLE sales_p1 PARTITION OF sales\nFOR VALUES FROM (MINVALUE) TO ('2019-02-15');\n\nCREATE TABLE sales_p2 PARTITION OF sales\nFOR VALUES FROM ('2019-02-15') TO ('2019-03-15');\n\nCREATE TABLE fail PARTITION OF sales\nFOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\n\nAFAICU, the assert assumption is not correct. In the attached patch, I have\nremoved that assert and the related comment. 
Also, minor adjustments to the\ncode fetching correct datum.\n\nRegards,\nAmul\n\n>\n> Here is the backtrace:\n>\n> (gdb) bt\n> #0 0x00007fa373333277 in raise () from /lib64/libc.so.6\n> #1 0x00007fa373334968 in abort () from /lib64/libc.so.6\n> #2 0x0000000000abecdc in ExceptionalCondition (conditionName=0xc5de6d\n> \"cmpval >= 0\", errorType=0xc5cf03 \"FailedAssertion\", fileName=0xc5d03e\n> \"partbounds.c\", lineNumber=3092) at assert.c:69\n> #3 0x000000000086189c in check_new_partition_bound\n> (relname=0x7fff225f5ef0 \"fail\", parent=0x7fa3744868a0, spec=0x2e98888,\n> pstate=0x2e905e8) at partbounds.c:3092\n> #4 0x00000000006b44dc in DefineRelation (stmt=0x2e83198, relkind=114\n> 'r', ownerId=10, typaddress=0x0, queryString=0x2dc07c0 \"CREATE TABLE\n> fail PARTITION OF public.sales \\nFOR VALUES FROM ('2019-01-15') TO\n> ('2019-02-15');\") at tablecmds.c:1011\n> #5 0x0000000000941430 in ProcessUtilitySlow (pstate=0x2e83080,\n> pstmt=0x2dc19b8, queryString=0x2dc07c0 \"CREATE TABLE fail PARTITION OF\n> public.sales \\nFOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\",\n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n> dest=0x2dc1aa8, qc=0x7fff225f67c0) at utility.c:1163\n> #6 0x000000000094123e in standard_ProcessUtility (pstmt=0x2dc19b8,\n> queryString=0x2dc07c0 \"CREATE TABLE fail PARTITION OF public.s ales\n> \\nFOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\",\n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n> dest=0x2dc1 aa8, qc=0x7fff225f67c0) at utility.c:1071\n> #7 0x0000000000940349 in ProcessUtility (pstmt=0x2dc19b8,\n> queryString=0x2dc07c0 \"CREATE TABLE fail PARTITION OF public.sales\n> \\nFO R VALUES FROM ('2019-01-15') TO ('2019-02-15');\",\n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n> dest=0x2dc1aa8, qc=0 x7fff225f67c0) at utility.c:524\n> #8 0x000000000093f163 in PortalRunUtility (portal=0x2e22ab0,\n> pstmt=0x2dc19b8, isTopLevel=true, setHoldSnapshot=false, dest=0x2dc1\n> aa8, 
qc=0x7fff225f67c0) at pquery.c:1159\n> #9 0x000000000093f380 in PortalRunMulti (portal=0x2e22ab0,\n> isTopLevel=true, setHoldSnapshot=false, dest=0x2dc1aa8, altdest=0x2dc1\n> aa8, qc=0x7fff225f67c0) at pquery.c:1305\n> #10 0x000000000093e882 in PortalRun (portal=0x2e22ab0,\n> count=9223372036854775807, isTopLevel=true, run_once=true,\n> dest=0x2dc1aa8, altdest=0x2dc1aa8, qc=0x7fff225f67c0) at pquery.c:779\n> #11 0x00000000009389e8 in exec_simple_query (query_string=0x2dc07c0\n> \"CREATE TABLE fail PARTITION OF public.sales \\nFOR VALUES FROM\n> ('2019-01-15') TO ('2019-02-15');\") at postgres.c:1239\n>\n> Regards,\n> Amul Sul", "msg_date": "Tue, 27 Oct 2020 10:56:01 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure when ATTACH partition followed by CREATE\n PARTITION." }, { "msg_contents": "Not sure if Tom saw this yet.\n\nOn Tue, Oct 27, 2020 at 10:56:01AM +0530, Amul Sul wrote:\n> On Mon, Oct 19, 2020 at 4:58 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Assertion added in commits 6b2c4e59d016 is failing with following test:\n> >\n> > CREATE TABLE sales\n> > (\n> > prod_id int,\n> > prod_quantity int,\n> > sold_month date\n> > ) PARTITION BY RANGE(sold_month);\n> >\n> > CREATE TABLE public.sales_p1 PARTITION OF public.sales\n> > FOR VALUES FROM (MINVALUE) TO ('2019-02-15');\n> >\n> > CREATE TABLE sales_p2(like sales including all);\n> > ALTER TABLE sales ATTACH PARTITION sales_p2\n> > FOR VALUES FROM ('2019-02-15') TO ('2019-03-15');\n> >\n> > CREATE TABLE fail PARTITION OF public.sales\n> > FOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\n> \n> The reported issue has nothing to do with the ATTACH PARTITION stmt this can\n> also be reproducible with the following CREATE stmts:\n> \n> CREATE TABLE sales\n> (\n> prod_id int,\n> prod_quantity int,\n> sold_month date\n> ) PARTITION BY RANGE(sold_month);\n> \n> CREATE TABLE sales_p1 PARTITION OF sales\n> FOR VALUES FROM (MINVALUE) TO 
('2019-02-15');\n> \n> CREATE TABLE sales_p2 PARTITION OF sales\n> FOR VALUES FROM ('2019-02-15') TO ('2019-03-15');\n> \n> CREATE TABLE fail PARTITION OF sales\n> FOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\n> \n> AFAICU, the assert assumption is not correct. In the attached patch, I have\n> removed that assert and the related comment. Also, minor adjustments to the\n> code fetching correct datum.\n> \n> Regards,\n> Amul\n> \n> >\n> > Here is the backtrace:\n> >\n> > (gdb) bt\n> > #0 0x00007fa373333277 in raise () from /lib64/libc.so.6\n> > #1 0x00007fa373334968 in abort () from /lib64/libc.so.6\n> > #2 0x0000000000abecdc in ExceptionalCondition (conditionName=0xc5de6d\n> > \"cmpval >= 0\", errorType=0xc5cf03 \"FailedAssertion\", fileName=0xc5d03e\n> > \"partbounds.c\", lineNumber=3092) at assert.c:69\n> > #3 0x000000000086189c in check_new_partition_bound\n> > (relname=0x7fff225f5ef0 \"fail\", parent=0x7fa3744868a0, spec=0x2e98888,\n> > pstate=0x2e905e8) at partbounds.c:3092\n> > #4 0x00000000006b44dc in DefineRelation (stmt=0x2e83198, relkind=114\n> > 'r', ownerId=10, typaddress=0x0, queryString=0x2dc07c0 \"CREATE TABLE\n> > fail PARTITION OF public.sales \\nFOR VALUES FROM ('2019-01-15') TO\n> > ('2019-02-15');\") at tablecmds.c:1011\n> > #5 0x0000000000941430 in ProcessUtilitySlow (pstate=0x2e83080,\n> > pstmt=0x2dc19b8, queryString=0x2dc07c0 \"CREATE TABLE fail PARTITION OF\n> > public.sales \\nFOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\",\n> > context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n> > dest=0x2dc1aa8, qc=0x7fff225f67c0) at utility.c:1163\n> > #6 0x000000000094123e in standard_ProcessUtility (pstmt=0x2dc19b8,\n> > queryString=0x2dc07c0 \"CREATE TABLE fail PARTITION OF public.sales\n> > \\nFOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\",\n> > context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n> > dest=0x2dc1aa8, qc=0x7fff225f67c0) at utility.c:1071\n> > #7 0x0000000000940349 in ProcessUtility 
(pstmt=0x2dc19b8,\n> > queryString=0x2dc07c0 \"CREATE TABLE fail PARTITION OF public.sales\n> > \\nFOR VALUES FROM ('2019-01-15') TO ('2019-02-15');\",\n> > context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n> > dest=0x2dc1aa8, qc=0x7fff225f67c0) at utility.c:524\n> > #8 0x000000000093f163 in PortalRunUtility (portal=0x2e22ab0,\n> > pstmt=0x2dc19b8, isTopLevel=true, setHoldSnapshot=false, dest=0x2dc1aa8, qc=0x7fff225f67c0) at pquery.c:1159\n> > #9 0x000000000093f380 in PortalRunMulti (portal=0x2e22ab0,\n> > isTopLevel=true, setHoldSnapshot=false, dest=0x2dc1aa8, altdest=0x2dc1aa8, qc=0x7fff225f67c0) at pquery.c:1305\n> > #10 0x000000000093e882 in PortalRun (portal=0x2e22ab0,\n> > count=9223372036854775807, isTopLevel=true, run_once=true,\n> > dest=0x2dc1aa8, altdest=0x2dc1aa8, qc=0x7fff225f67c0) at pquery.c:779\n> > #11 0x00000000009389e8 in exec_simple_query (query_string=0x2dc07c0\n> > \"CREATE TABLE fail PARTITION OF public.sales \\nFOR VALUES FROM\n> > ('2019-01-15') TO ('2019-02-15');\") at postgres.c:1239\n> >\n> > Regards,\n> > Amul Sul\n\n\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n", "msg_date": "Fri, 30 Oct 2020 15:01:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure when ATTACH partition followed by CREATE\n PARTITION." }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Not sure if Tom saw this yet.\n\nIndeed, I'd not been paying attention. Fix looks good, pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Oct 2020 17:01:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assertion failure when ATTACH partition followed by CREATE\n PARTITION." } ]
[ { "msg_contents": "Hello,\n\nWe have an interface to pause the WAL replay (pg_wal_replay_pause) and\nto know whether the WAL replay pause is requested\n(pg_is_wal_replay_paused). But there is no way to know whether the\nrecovery is actually paused or not. Actually, the recovery process\nmight process an extra WAL before pausing the recovery. So does it\nmake sense to provide a new interface to tell whether the recovery is\nactually paused or not?\n\nOne solution could be that we convert the XLogCtlData->recoveryPause\nfrom bool to tri-state variable (0-> recovery not paused 1-> pause\nrequested 2-> actually paused).\n\nAny opinion on this?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Oct 2020 19:40:49 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Is Recovery actually paused?" }, { "msg_contents": "On Mon, 19 Oct 2020 at 15:11, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> We have an interface to pause the WAL replay (pg_wal_replay_pause) and\n> to know whether the WAL replay pause is requested\n> (pg_is_wal_replay_paused). But there is no way to know whether the\n> recovery is actually paused or not. Actually, the recovery process\n> might process an extra WAL before pausing the recovery. So does it\n> make sense to provide a new interface to tell whether the recovery is\n> actually paused or not?\n>\n> One solution could be that we convert the XLogCtlData->recoveryPause\n> from bool to tri-state variable (0-> recovery not paused 1-> pause\n> requested 2-> actually paused).\n>\n> Any opinion on this?\n\nWhy would we want this? 
What problem are you trying to solve?\n\nIf we do care, why not fix pg_is_wal_replay_paused() so it responds as you wish?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 20 Oct 2020 08:41:18 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, Oct 20, 2020 at 1:11 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Mon, 19 Oct 2020 at 15:11, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > We have an interface to pause the WAL replay (pg_wal_replay_pause) and\n> > to know whether the WAL replay pause is requested\n> > (pg_is_wal_replay_paused). But there is no way to know whether the\n> > recovery is actually paused or not. Actually, the recovery process\n> > might process an extra WAL before pausing the recovery. So does it\n> > make sense to provide a new interface to tell whether the recovery is\n> > actually paused or not?\n> >\n> > One solution could be that we convert the XLogCtlData->recoveryPause\n> > from bool to tri-state variable (0-> recovery not paused 1-> pause\n> > requested 2-> actually paused).\n> >\n> > Any opinion on this?\n>\n> Why would we want this? 
What problem are you trying to solve?\n\nThe requirement is to know the last replayed WAL on the standby so\nunless we can guarantee that the recovery is actually paused we can\nnever get the safe last_replay_lsn value.\n\n> If we do care, why not fix pg_is_wal_replay_paused() so it responds as you wish?\n\nMaybe we can also do that but pg_is_wal_replay_paused is an existing\nAPI and the behavior is to know whether a recovery pause is\nrequested or not, so I am not sure it is a good idea to change the\nbehavior of the existing API?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Oct 2020 13:22:21 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, Oct 20, 2020 at 1:22 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 20, 2020 at 1:11 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> >\n> > On Mon, 19 Oct 2020 at 15:11, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > We have an interface to pause the WAL replay (pg_wal_replay_pause) and\n> > > to know whether the WAL replay pause is requested\n> > > (pg_is_wal_replay_paused). But there is no way to know whether the\n> > > recovery is actually paused or not. Actually, the recovery process\n> > > might process an extra WAL before pausing the recovery. So does it\n> > > make sense to provide a new interface to tell whether the recovery is\n> > > actually paused or not?\n> > >\n> > > One solution could be that we convert the XLogCtlData->recoveryPause\n> > > from bool to tri-state variable (0-> recovery not paused 1-> pause\n> > > requested 2-> actually paused).\n> > >\n> > > Any opinion on this?\n> >\n> > Why would we want this? 
What problem are you trying to solve?\n>\n> The requirement is to know the last replayed WAL on the standby so\n> unless we can guarantee that the recovery is actually paused we can\n> never get the safe last_replay_lsn value.\n>\n> > If we do care, why not fix pg_is_wal_replay_paused() so it responds as you wish?\n>\n> Maybe we can also do that but pg_is_wal_replay_paused is an existing\n> API and the behavior is to know whether the recovery paused is\n> requested or not, So I am not sure is it a good idea to change the\n> behavior of the existing API?\n>\n\nAttached is the POC patch to show what I have in mind.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 20 Oct 2020 14:19:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, 20 Oct 2020 at 09:50, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> > > Why would we want this? What problem are you trying to solve?\n> >\n> > The requirement is to know the last replayed WAL on the standby so\n> > unless we can guarantee that the recovery is actually paused we can\n> > never get the safe last_replay_lsn value.\n> >\n> > > If we do care, why not fix pg_is_wal_replay_paused() so it responds as you wish?\n> >\n> > Maybe we can also do that but pg_is_wal_replay_paused is an existing\n> > API and the behavior is to know whether the recovery paused is\n> > requested or not, So I am not sure is it a good idea to change the\n> > behavior of the existing API?\n> >\n>\n> Attached is the POC patch to show what I have in mind.\n\nIf you don't like it, I doubt anyone else cares for the exact current\nbehavior either. 
Thanks for pointing those issues out.\n\nIt would make sense to alter pg_wal_replay_pause() so that it blocks\nuntil paused.\n\nI suggest you add the 3-value state as you suggest, but make\npg_is_wal_replay_paused() respond:\nif paused, true\nif requested, wait until paused, then return true\nelse false\n\nThat then solves your issues with a smoother interface.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 20 Oct 2020 10:30:03 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, Oct 20, 2020 at 3:00 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Tue, 20 Oct 2020 at 09:50, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > > > Why would we want this? What problem are you trying to solve?\n> > >\n> > > The requirement is to know the last replayed WAL on the standby so\n> > > unless we can guarantee that the recovery is actually paused we can\n> > > never get the safe last_replay_lsn value.\n> > >\n> > > > If we do care, why not fix pg_is_wal_replay_paused() so it responds as you wish?\n> > >\n> > > Maybe we can also do that but pg_is_wal_replay_paused is an existing\n> > > API and the behavior is to know whether the recovery paused is\n> > > requested or not, So I am not sure is it a good idea to change the\n> > > behavior of the existing API?\n> > >\n> >\n> > Attached is the POC patch to show what I have in mind.\n>\n> If you don't like it, I doubt anyone else cares for the exact current\n> behavior either. 
Thanks for pointing those issues out.\n>\n> It would make sense to alter pg_wal_replay_pause() so that it blocks\n> until paused.\n>\n> I suggest you add the 3-value state as you suggest, but make\n> pg_is_wal_replay_paused() respond:\n> if paused, true\n> if requested, wait until paused, then return true\n> else false\n>\n> That then solves your issues with a smoother interface.\n>\n\nMake sense to me, I will change as per the suggestion.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Oct 2020 17:59:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, Oct 20, 2020 at 5:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 20, 2020 at 3:00 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> >\n> > On Tue, 20 Oct 2020 at 09:50, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > > > Why would we want this? What problem are you trying to solve?\n> > > >\n> > > > The requirement is to know the last replayed WAL on the standby so\n> > > > unless we can guarantee that the recovery is actually paused we can\n> > > > never get the safe last_replay_lsn value.\n> > > >\n> > > > > If we do care, why not fix pg_is_wal_replay_paused() so it responds as you wish?\n> > > >\n> > > > Maybe we can also do that but pg_is_wal_replay_paused is an existing\n> > > > API and the behavior is to know whether the recovery paused is\n> > > > requested or not, So I am not sure is it a good idea to change the\n> > > > behavior of the existing API?\n> > > >\n> > >\n> > > Attached is the POC patch to show what I have in mind.\n> >\n> > If you don't like it, I doubt anyone else cares for the exact current\n> > behavior either. 
Thanks for pointing those issues out.\n> >\n> > It would make sense to alter pg_wal_replay_pause() so that it blocks\n> > until paused.\n> >\n> > I suggest you add the 3-value state as you suggest, but make\n> > pg_is_wal_replay_paused() respond:\n> > if paused, true\n> > if requested, wait until paused, then return true\n> > else false\n> >\n> > That then solves your issues with a smoother interface.\n> >\n>\n> Make sense to me, I will change as per the suggestion.\n\nI have noticed one more issue, the problem is that if the recovery\nprocess is currently not processing any WAL and just waiting for the\nWAL to become available then the pg_is_wal_replay_paused will be stuck\nforever. Having said that there is the same problem even if we design\nthe new interface which checks whether the recovery is actually paused\nor not because until the recovery process gets the next wal it will\nnot check whether the recovery pause is requested or not so the actual\nrecovery paused flag will never be set.\n\nOne idea could be, if the recovery process is waiting for WAL and a\nrecovery pause is requested then we can assume that the recovery is\npaused because before processing the next wal it will always check\nwhether the recovery pause is requested or not.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Oct 2020 16:46:17 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Oct 21, 2020 at 7:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have noticed one more issue, the problem is that if the recovery\n> process is currently not processing any WAL and just waiting for the\n> WAL to become available then the pg_is_wal_replay_paused will be stuck\n> forever. 
Having said that there is the same problem even if we design\n> the new interface which checks whether the recovery is actually paused\n> or not because until the recovery process gets the next wal it will\n> not check whether the recovery pause is requested or not so the actual\n> recovery paused flag will never be set.\n>\n> One idea could be, if the recovery process is waiting for WAL and a\n> recovery pause is requested then we can assume that the recovery is\n> paused because before processing the next wal it will always check\n> whether the recovery pause is requested or not.\n\nThat seems fine, because the user's question is presumably whether the\npause has taken effect so that no more records will be replayed\nbarring an un-paused.\n\nHowever, it might be better to implement this by having the system\nabsorb the pause immediately when it's in this state, rather than\ntrying to detect this state and treat it specially.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 21 Oct 2020 11:14:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, 21 Oct 2020 at 12:16, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 20, 2020 at 5:59 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Oct 20, 2020 at 3:00 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > >\n> > > On Tue, 20 Oct 2020 at 09:50, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > > > > Why would we want this? 
What problem are you trying to solve?\n> > > > >\n> > > > > The requirement is to know the last replayed WAL on the standby so\n> > > > > unless we can guarantee that the recovery is actually paused we can\n> > > > > never get the safe last_replay_lsn value.\n> > > > >\n> > > > > > If we do care, why not fix pg_is_wal_replay_paused() so it responds as you wish?\n> > > > >\n> > > > > Maybe we can also do that but pg_is_wal_replay_paused is an existing\n> > > > > API and the behavior is to know whether the recovery paused is\n> > > > > requested or not, So I am not sure is it a good idea to change the\n> > > > > behavior of the existing API?\n> > > > >\n> > > >\n> > > > Attached is the POC patch to show what I have in mind.\n> > >\n> > > If you don't like it, I doubt anyone else cares for the exact current\n> > > behavior either. Thanks for pointing those issues out.\n> > >\n> > > It would make sense to alter pg_wal_replay_pause() so that it blocks\n> > > until paused.\n> > >\n> > > I suggest you add the 3-value state as you suggest, but make\n> > > pg_is_wal_replay_paused() respond:\n> > > if paused, true\n> > > if requested, wait until paused, then return true\n> > > else false\n> > >\n> > > That then solves your issues with a smoother interface.\n> > >\n> >\n> > Make sense to me, I will change as per the suggestion.\n>\n> I have noticed one more issue, the problem is that if the recovery\n> process is currently not processing any WAL and just waiting for the\n> WAL to become available then the pg_is_wal_replay_paused will be stuck\n> forever. 
Having said that there is the same problem even if we design\n> the new interface which checks whether the recovery is actually paused\n> or not because until the recovery process gets the next wal it will\n> not check whether the recovery pause is requested or not so the actual\n> recovery paused flag will never be set.\n>\n> One idea could be, if the recovery process is waiting for WAL and a\n> recovery pause is requested then we can assume that the recovery is\n> paused because before processing the next wal it will always check\n> whether the recovery pause is requested or not.\n\nIf ReadRecord() is waiting for WAL (at bottom of recovery loop), then\nwhen it does return it will immediately move to pause (at top of next\nloop). Which makes it easy to cover these cases.\n\nIt would be easy enough to create another variable that shows \"waiting\nfor WAL\", since that is in itself a useful and interesting thing to be\nable to report.\n\npg_is_wal_replay_paused() and pg_wal_replay_pause() would then return\nwhenever it is either (fully paused || waiting for WAL &&\npause_requested))\n\nWe can then create a new function called pg_wal_replay_status() that\nreturns multiple values: RECOVERING | WAITING_FOR_WAL | PAUSED\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 21 Oct 2020 17:07:37 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "At Wed, 21 Oct 2020 11:14:24 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Wed, Oct 21, 2020 at 7:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > One idea could be, if the recovery process is waiting for WAL and a\n> > recovery pause is requested then we can assume that the recovery is\n> > paused because before processing the next wal it will always check\n> > whether the recovery pause is requested or not.\n..\n> However, it might be better to implement this by having the system\n> absorb the pause immediately when it's in this state, rather than\n> trying to detect this state and treat it specially.\n\nThe paused state is shown in pg_stat_activity.wait_event and it is\nstrange that pg_is_wal_replay_paused() is inconsistent with the\ncolumn. To make them consistent, we need to call recoveryPausesHere()\nat the end of WaitForWALToBecomeAvailable() and let\npg_wal_replay_pause() call WakeupRecovery().\n\nI think we don't need a separate function to find the state.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Oct 2020 10:28:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Thu, Oct 22, 2020 at 6:59 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 21 Oct 2020 11:14:24 -0400, Robert Haas <robertmhaas@gmail.com> wrote in\n> > On Wed, Oct 21, 2020 at 7:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > One idea could be, if the recovery process is waiting for WAL and a\n> > > recovery pause is requested then we can assume that the recovery is\n> > > paused because before processing the next wal it will always check\n> > > whether the recovery pause is requested or not.\n> ..\n> > However, it might be better to implement this by having the system\n> > absorb the pause immediately when it's in this state, rather than\n> > trying to detect this state and treat it specially.\n>\n> The paused state is shown in pg_stat_activity.wait_event and it is\n> strange that pg_is_wal_replay_paused() is inconsistent with the\n> column.\n\nRight\n\nTo make them consistent, we need to call recoveryPausesHere()\n> at the end of WaitForWALToBecomeAvailable() and let\n> pg_wal_replay_pause() call WakeupRecovery().\n>\n> I think we don't need a separate function to find the state.\n\nThe idea makes sense to me. I will try to change the patch as per the\nsuggestion.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Oct 2020 19:50:49 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Thu, Oct 22, 2020 at 7:50 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Oct 22, 2020 at 6:59 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 21 Oct 2020 11:14:24 -0400, Robert Haas <robertmhaas@gmail.com> wrote in\n> > > On Wed, Oct 21, 2020 at 7:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > One idea could be, if the recovery process is waiting for WAL and a\n> > > > recovery pause is requested then we can assume that the recovery is\n> > > > paused because before processing the next wal it will always check\n> > > > whether the recovery pause is requested or not.\n> > ..\n> > > However, it might be better to implement this by having the system\n> > > absorb the pause immediately when it's in this state, rather than\n> > > trying to detect this state and treat it specially.\n> >\n> > The paused state is shown in pg_stat_activity.wait_event and it is\n> > strange that pg_is_wal_replay_paused() is inconsistent with the\n> > column.\n>\n> Right\n>\n> To make them consistent, we need to call recoveryPausesHere()\n> > at the end of WaitForWALToBecomeAvailable() and let\n> > pg_wal_replay_pause() call WakeupRecovery().\n> >\n> > I think we don't need a separate function to find the state.\n>\n> The idea makes sense to me. I will try to change the patch as per the\n> suggestion.\n\nHere is the patch based on this idea.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 22 Oct 2020 20:36:48 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Thu, 22 Oct 2020 20:36:48 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Thu, Oct 22, 2020 at 7:50 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Oct 22, 2020 at 6:59 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Wed, 21 Oct 2020 11:14:24 -0400, Robert Haas <robertmhaas@gmail.com> wrote in\n> > > > On Wed, Oct 21, 2020 at 7:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > One idea could be, if the recovery process is waiting for WAL and a\n> > > > > recovery pause is requested then we can assume that the recovery is\n> > > > > paused because before processing the next wal it will always check\n> > > > > whether the recovery pause is requested or not.\n> > > ..\n> > > > However, it might be better to implement this by having the system\n> > > > absorb the pause immediately when it's in this state, rather than\n> > > > trying to detect this state and treat it specially.\n> > >\n> > > The paused state is shown in pg_stat_activity.wait_event and it is\n> > > strange that pg_is_wal_replay_paused() is inconsistent with the\n> > > column.\n> >\n> > Right\n> >\n> > To make them consistent, we need to call recoveryPausesHere()\n> > > at the end of WaitForWALToBecomeAvailable() and let\n> > > pg_wal_replay_pause() call WakeupRecovery().\n> > >\n> > > I think we don't need a separate function to find the state.\n> >\n> > The idea makes sense to me. I will try to change the patch as per the\n> > suggestion.\n> \n> Here is the patch based on this idea.\n\nI reviewed this patch. \n\nFirst, I made a recovery conflict situation using a table lock.\n\nStandby:\n#= begin;\n#= select * from t;\n\nPrimary:\n#= begin;\n#= lock t in ;\n\nAfter this, WAL of the table lock cannot be replayed due to a lock acquired\nin the standby.\n\nSecond, during the delay, I executed pg_wal_replay_pause() and\npg_is_wal_replay_paused(). 
Then, pg_is_wal_replay_paused was blocked until\nmax_standby_streaming_delay had expired, and eventually returned true.\n\nI can also see the same behaviour by setting recovery_min_apply_delay.\n\nSo, pg_is_wal_replay_paused waits for recovery to get paused and this works\nsuccessfully as expected.\n\nHowever, I suspect users don't expect pg_is_wal_replay_paused to wait.\nEspecially, if max_standby_streaming_delay is -1, this will be blocked forever, \nalthough this setting may not be usual. In addition, some users may set \nrecovery_min_apply_delay to a large value. If such users call pg_is_wal_replay_paused,\nit could wait for a long time.\n\nAt least, I think we need some description in the documentation to explain\nthat pg_is_wal_replay_paused could wait for a while. \n\nAlso, how about adding a new boolean argument to pg_is_wal_replay_paused to\ncontrol whether this waits for recovery to get paused or not? By setting its\ndefault value to true or false, users can use the old format for calling this\nand the backward compatibility can be maintained.\n\n\nAs another comment, while pg_is_wal_replay_paused is blocking, I cannot cancel\nthe query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n\n\n+ errhint(\"Recovery control functions can only be executed during recovery.\"))); \n\nThere are a few tabs at the end of this line.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 30 Nov 2020 15:45:34 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Mon, Nov 30, 2020 at 12:17 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\nThanks for looking into this.\n\n> On Thu, 22 Oct 2020 20:36:48 +0530\n> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > On Thu, Oct 22, 2020 at 7:50 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Thu, Oct 22, 2020 at 6:59 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > At Wed, 21 Oct 2020 11:14:24 -0400, Robert Haas <robertmhaas@gmail.com> wrote in\n> > > > > On Wed, Oct 21, 2020 at 7:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > > One idea could be, if the recovery process is waiting for WAL and a\n> > > > > > recovery pause is requested then we can assume that the recovery is\n> > > > > > paused because before processing the next wal it will always check\n> > > > > > whether the recovery pause is requested or not.\n> > > > ..\n> > > > > However, it might be better to implement this by having the system\n> > > > > absorb the pause immediately when it's in this state, rather than\n> > > > > trying to detect this state and treat it specially.\n> > > >\n> > > > The paused state is shown in pg_stat_activity.wait_event and it is\n> > > > strange that pg_is_wal_replay_paused() is inconsistent with the\n> > > > column.\n> > >\n> > > Right\n> > >\n> > > To make them consistent, we need to call recoveryPausesHere()\n> > > > at the end of WaitForWALToBecomeAvailable() and let\n> > > > pg_wal_replay_pause() call WakeupRecovery().\n> > > >\n> > > > I think we don't need a separate function to find the state.\n> > >\n> > > The idea makes sense to me. 
I will try to change the patch as per the\n> > > suggestion.\n> >\n> > Here is the patch based on this idea.\n>\n> I reviewd this patch.\n>\n> First, I made a recovery conflict situation using a table lock.\n>\n> Standby:\n> #= begin;\n> #= select * from t;\n>\n> Primary:\n> #= begin;\n> #= lock t in ;\n>\n> After this, WAL of the table lock cannot be replayed due to a lock acquired\n> in the standby.\n>\n> Second, during the delay, I executed pg_wal_replay_pause() and\n> pg_is_wal_replay_paused(). Then, pg_is_wal_replay_paused was blocked until\n> max_standby_streaming_delay was expired, and eventually returned true.\n>\n> I can also see the same behaviour by setting recovery_min_apply_delay.\n>\n> So, pg_is_wal_replay_paused waits for recovery to get paused and this works\n> successfully as expected.\n>\n> However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> although this setting may not be usual. In addition, some users may set\n> recovery_min_apply_delay for a large. If such users call pg_is_wal_replay_paused,\n> it could wait for a long time.\n>\n> At least, I think we need some descriptions on document to explain\n> pg_is_wal_replay_paused could wait while a time.\n\nOk\n\n> Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> control whether this waits for recovery to get paused or not? By setting its\n> default value to true or false, users can use the old format for calling this\n> and the backward compatibility can be maintained.\n\nSo basically, if the wait_recovery_pause flag is false then we will\nimmediately return true if the pause is requested? I agree that it is\ngood to have an API to know whether the recovery pause is requested or\nnot but I am not sure is it good idea to make this API serve both the\npurpose? 
Anyone else have any thoughts on this?\n\n>\n> As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n>\n>\n> + errhint(\"Recovery control functions can only be executed during recovery.\")));\n>\n> There are a few tabs at the end of this line.\n\nI will fix.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Nov 2020 14:40:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Mon, Nov 30, 2020 at 2:40 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 30, 2020 at 12:17 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> Thanks for looking into this.\n>\n> > On Thu, 22 Oct 2020 20:36:48 +0530\n> > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > On Thu, Oct 22, 2020 at 7:50 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Thu, Oct 22, 2020 at 6:59 AM Kyotaro Horiguchi\n> > > > <horikyota.ntt@gmail.com> wrote:\n> > > > >\n> > > > > At Wed, 21 Oct 2020 11:14:24 -0400, Robert Haas <robertmhaas@gmail.com> wrote in\n> > > > > > On Wed, Oct 21, 2020 at 7:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > > > One idea could be, if the recovery process is waiting for WAL and a\n> > > > > > > recovery pause is requested then we can assume that the recovery is\n> > > > > > > paused because before processing the next wal it will always check\n> > > > > > > whether the recovery pause is requested or not.\n> > > > > ..\n> > > > > > However, it might be better to implement this by having the system\n> > > > > > absorb the pause immediately when it's in this state, rather than\n> > > > > > trying to detect this state and treat it specially.\n> > > > >\n> > > > > The paused state is shown in pg_stat_activity.wait_event and it is\n> > > > > strange that pg_is_wal_replay_paused() is 
inconsistent with the\n> > > > > column.\n> > > >\n> > > > Right\n> > > >\n> > > > To make them consistent, we need to call recoveryPausesHere()\n> > > > > at the end of WaitForWALToBecomeAvailable() and let\n> > > > > pg_wal_replay_pause() call WakeupRecovery().\n> > > > >\n> > > > > I think we don't need a separate function to find the state.\n> > > >\n> > > > The idea makes sense to me. I will try to change the patch as per the\n> > > > suggestion.\n> > >\n> > > Here is the patch based on this idea.\n> >\n> > I reviewd this patch.\n> >\n> > First, I made a recovery conflict situation using a table lock.\n> >\n> > Standby:\n> > #= begin;\n> > #= select * from t;\n> >\n> > Primary:\n> > #= begin;\n> > #= lock t in ;\n> >\n> > After this, WAL of the table lock cannot be replayed due to a lock acquired\n> > in the standby.\n> >\n> > Second, during the delay, I executed pg_wal_replay_pause() and\n> > pg_is_wal_replay_paused(). Then, pg_is_wal_replay_paused was blocked until\n> > max_standby_streaming_delay was expired, and eventually returned true.\n> >\n> > I can also see the same behaviour by setting recovery_min_apply_delay.\n> >\n> > So, pg_is_wal_replay_paused waits for recovery to get paused and this works\n> > successfully as expected.\n> >\n> > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > although this setting may not be usual. In addition, some users may set\n> > recovery_min_apply_delay for a large. If such users call pg_is_wal_replay_paused,\n> > it could wait for a long time.\n> >\n> > At least, I think we need some descriptions on document to explain\n> > pg_is_wal_replay_paused could wait while a time.\n>\n> Ok\n\nFixed this, added some comments in .sgml as well as in function header\n\n> > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > control whether this waits for recovery to get paused or not? 
By setting its\n> > default value to true or false, users can use the old format for calling this\n> > and the backward compatibility can be maintained.\n>\n> So basically, if the wait_recovery_pause flag is false then we will\n> immediately return true if the pause is requested? I agree that it is\n> good to have an API to know whether the recovery pause is requested or\n> not but I am not sure is it good idea to make this API serve both the\n> purpose? Anyone else have any thoughts on this?\n>\n> >\n> > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n> >\n> >\n> > + errhint(\"Recovery control functions can only be executed during recovery.\")));\n> >\n> > There are a few tabs at the end of this line.\n>\n> I will fix.\n\nFixed this as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 10 Dec 2020 11:25:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, 10 Dec 2020 11:25:23 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > although this setting may not be usual. In addition, some users may set\n> > > recovery_min_apply_delay for a large. 
If such users call pg_is_wal_replay_paused,\n> > > it could wait for a long time.\n> > >\n> > > At least, I think we need some descriptions on document to explain\n> > > pg_is_wal_replay_paused could wait while a time.\n> >\n> > Ok\n> \n> Fixed this, added some comments in .sgml as well as in function header\n\nThank you for fixing this.\n\nAlso, is it better to fix the description of pg_wal_replay_pause from\n\"Pauses recovery.\" to \"Request to pause recovery.\" in accordance with \npg_is_wal_replay_paused?\n\n> > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > control whether this waits for recovery to get paused or not? By setting its\n> > > default value to true or false, users can use the old format for calling this\n> > > and the backward compatibility can be maintained.\n> >\n> > So basically, if the wait_recovery_pause flag is false then we will\n> > immediately return true if the pause is requested? I agree that it is\n> > good to have an API to know whether the recovery pause is requested or\n> > not but I am not sure is it good idea to make this API serve both the\n> > purpose? Anyone else have any thoughts on this?\n> >\n\nI think the current pg_is_wal_replay_paused() already has another purpose;\nthis waits for recovery to actually get paused. If we want to limit this API's\npurpose only to return the pause state, it seems better to fix this to return\nthe actual state at the cost of losing backward compatibility. If we want\nto know whether a pause is requested, we may add a new API like\npg_is_wal_replay_pause_requested(). Also, if we want to wait for recovery to actually\nget paused, we may add an option to pg_wal_replay_pause() for this purpose.\n\nHowever, this might be bikeshedding. If anyone doesn't care that\npg_is_wal_replay_paused() can make users wait for a long time, I don't care either.\n\n> > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > the query. 
I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n\nHow about this fix? I think users may want to cancel pg_is_wal_replay_paused() while\nthis is blocking.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 13 Jan 2021 18:55:43 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> On Thu, 10 Dec 2020 11:25:23 +0530\n> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > > although this setting may not be usual. In addition, some users may set\n> > > > recovery_min_apply_delay for a large. If such users call pg_is_wal_replay_paused,\n> > > > it could wait for a long time.\n> > > >\n> > > > At least, I think we need some descriptions on document to explain\n> > > > pg_is_wal_replay_paused could wait while a time.\n> > >\n> > > Ok\n> >\n> > Fixed this, added some comments in .sgml as well as in function header\n>\n> Thank you for fixing this.\n>\n> Also, is it better to fix the description of pg_wal_replay_pause from\n> \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> pg_is_wal_replay_paused?\n\nOkay\n\n>\n> > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > > control whether this waits for recovery to get paused or not? By setting its\n> > > > default value to true or false, users can use the old format for calling this\n> > > > and the backward compatibility can be maintained.\n> > >\n> > > So basically, if the wait_recovery_pause flag is false then we will\n> > > immediately return true if the pause is requested? 
I agree that it is\n> > > good to have an API to know whether the recovery pause is requested or\n> > > not but I am not sure is it good idea to make this API serve both the\n> > > purpose? Anyone else have any thoughts on this?\n> > >\n>\n> I think the current pg_is_wal_replay_paused() already has another purpose;\n> this waits recovery to actually get paused. If we want to limit this API's\n> purpose only to return the pause state, it seems better to fix this to return\n> the actual state at the cost of lacking the backward compatibility. If we want\n> to know whether pause is requested, we may add a new API like\n> pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n> get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n>\n> However, this might be a bikeshedding. If anyone don't care that\n> pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n\nI don't think that it will be blocked ever, because\npg_wal_replay_pause is sending the WakeupRecovery() which means the\nrecovery process will not be stuck on waiting for the WAL.\n\n> > > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n>\n> How about this fix? I think users may want to cancel pg_is_wal_replay_paused() during\n> this is blocking.\n\nYeah, we can do this. I will send the updated patch after putting\nsome more thought into these comments. Thanks again for the feedback.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Jan 2021 15:35:41 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > On Thu, 10 Dec 2020 11:25:23 +0530\n> > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > > > although this setting may not be usual. In addition, some users may set\n> > > > > recovery_min_apply_delay for a large. If such users call pg_is_wal_replay_paused,\n> > > > > it could wait for a long time.\n> > > > >\n> > > > > At least, I think we need some descriptions on document to explain\n> > > > > pg_is_wal_replay_paused could wait while a time.\n> > > >\n> > > > Ok\n> > >\n> > > Fixed this, added some comments in .sgml as well as in function header\n> >\n> > Thank you for fixing this.\n> >\n> > Also, is it better to fix the description of pg_wal_replay_pause from\n> > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> > pg_is_wal_replay_paused?\n>\n> Okay\n>\n> >\n> > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > > > control whether this waits for recovery to get paused or not? By setting its\n> > > > > default value to true or false, users can use the old format for calling this\n> > > > > and the backward compatibility can be maintained.\n> > > >\n> > > > So basically, if the wait_recovery_pause flag is false then we will\n> > > > immediately return true if the pause is requested? I agree that it is\n> > > > good to have an API to know whether the recovery pause is requested or\n> > > > not but I am not sure is it good idea to make this API serve both the\n> > > > purpose? 
Anyone else have any thoughts on this?\n> > > >\n> >\n> > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > this waits recovery to actually get paused. If we want to limit this API's\n> > purpose only to return the pause state, it seems better to fix this to return\n> > the actual state at the cost of lacking the backward compatibility. If we want\n> > to know whether pause is requested, we may add a new API like\n> > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n> > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> >\n> > However, this might be a bikeshedding. If anyone don't care that\n> > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n>\n> I don't think that it will be blocked ever, because\n> pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> recovery process will not be stuck on waiting for the WAL.\n>\n> > > > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > > > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n> >\n> > How about this fix? I think users may want to cancel pg_is_wal_replay_paused() during\n> > this is blocking.\n>\n> Yeah, we can do this. I will send the updated patch after putting\n> some more thought into these comments. Thanks again for the feedback.\n>\n\nPlease find the updated patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 13 Jan 2021 17:49:43 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Wed, 13 Jan 2021 17:49:43 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > On Thu, 10 Dec 2020 11:25:23 +0530\n> > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > > > > although this setting may not be usual. In addition, some users may set\n> > > > > > recovery_min_apply_delay for a large. If such users call pg_is_wal_replay_paused,\n> > > > > > it could wait for a long time.\n> > > > > >\n> > > > > > At least, I think we need some descriptions on document to explain\n> > > > > > pg_is_wal_replay_paused could wait while a time.\n> > > > >\n> > > > > Ok\n> > > >\n> > > > Fixed this, added some comments in .sgml as well as in function header\n> > >\n> > > Thank you for fixing this.\n> > >\n> > > Also, is it better to fix the description of pg_wal_replay_pause from\n> > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> > > pg_is_wal_replay_paused?\n> >\n> > Okay\n> >\n> > >\n> > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > > > > control whether this waits for recovery to get paused or not? By setting its\n> > > > > > default value to true or false, users can use the old format for calling this\n> > > > > > and the backward compatibility can be maintained.\n> > > > >\n> > > > > So basically, if the wait_recovery_pause flag is false then we will\n> > > > > immediately return true if the pause is requested? I agree that it is\n> > > > > good to have an API to know whether the recovery pause is requested or\n> > > > > not but I am not sure is it good idea to make this API serve both the\n> > > > > purpose? 
Anyone else have any thoughts on this?\n> > > > >\n> > >\n> > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > > this waits recovery to actually get paused. If we want to limit this API's\n> > > purpose only to return the pause state, it seems better to fix this to return\n> > > the actual state at the cost of lacking the backward compatibility. If we want\n> > > to know whether pause is requested, we may add a new API like\n> > > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n> > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > >\n> > > However, this might be a bikeshedding. If anyone don't care that\n> > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> >\n> > I don't think that it will be blocked ever, because\n> > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > recovery process will not be stuck on waiting for the WAL.\n\nYes, it will not get stuck waiting for the WAL. However, it can get stuck while resolving\na recovery conflict. The process could wait for max_standby_streaming_delay or\nmax_standby_archive_delay at most before recovery gets completely paused.\n\nAlso, it could wait for recovery_min_apply_delay if it has a valid value. It is possible\nthat a user sets this parameter to a large value, so it could wait for a long time. However,\nthis will be avoided by calling recoveryPausesHere() or CheckAndSetRecoveryPause() in\nrecoveryApplyDelay().\n\n> > > > > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > > > > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n> > >\n> > > How about this fix? I think users may want to cancel pg_is_wal_replay_paused() during\n> > > this is blocking.\n> >\n> > Yeah, we can do this. I will send the updated patch after putting\n> > some more thought into these comments. 
Thanks again for the feedback.\n> >\n> \n> Please find the updated patch.\n\nThanks. I confirmed that I can cancel pg_is_wal_replay_paused() while it is stuck.\n\n\nAlthough it is a very trivial comment, I think that the new line before\nHandleStartupProcInterrupts() is unnecessary.\n\n@@ -6052,12 +6062,20 @@ recoveryPausesHere(bool endOfRecovery)\n \t\t\t\t(errmsg(\"recovery has paused\"),\n \t\t\t\t errhint(\"Execute pg_wal_replay_resume() to continue.\")));\n \n-\twhile (RecoveryIsPaused())\n+\twhile (RecoveryPauseRequested())\n \t{\n+\n \t\tHandleStartupProcInterrupts();\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 14 Jan 2021 22:18:06 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Jan 13, 2021 at 9:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > On Thu, 10 Dec 2020 11:25:23 +0530\n> > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > > > > although this setting may not be usual. In addition, some users may set\n> > > > > > recovery_min_apply_delay for a large. 
If such users call pg_is_wal_replay_paused,\n> > > > > > it could wait for a long time.\n> > > > > >\n> > > > > > At least, I think we need some descriptions on document to explain\n> > > > > > pg_is_wal_replay_paused could wait while a time.\n> > > > >\n> > > > > Ok\n> > > >\n> > > > Fixed this, added some comments in .sgml as well as in function header\n> > >\n> > > Thank you for fixing this.\n> > >\n> > > Also, is it better to fix the description of pg_wal_replay_pause from\n> > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> > > pg_is_wal_replay_paused?\n> >\n> > Okay\n> >\n> > >\n> > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > > > > control whether this waits for recovery to get paused or not? By setting its\n> > > > > > default value to true or false, users can use the old format for calling this\n> > > > > > and the backward compatibility can be maintained.\n> > > > >\n> > > > > So basically, if the wait_recovery_pause flag is false then we will\n> > > > > immediately return true if the pause is requested? I agree that it is\n> > > > > good to have an API to know whether the recovery pause is requested or\n> > > > > not but I am not sure is it good idea to make this API serve both the\n> > > > > purpose? Anyone else have any thoughts on this?\n> > > > >\n> > >\n> > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > > this waits recovery to actually get paused. If we want to limit this API's\n> > > purpose only to return the pause state, it seems better to fix this to return\n> > > the actual state at the cost of lacking the backward compatibility. If we want\n> > > to know whether pause is requested, we may add a new API like\n> > > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n> > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > >\n> > > However, this might be a bikeshedding. 
If anyone don't care that\n> > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> >\n> > I don't think that it will be blocked ever, because\n> > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > recovery process will not be stuck on waiting for the WAL.\n> >\n> > > > > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > > > > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n> > >\n> > > How about this fix? I think users may want to cancel pg_is_wal_replay_paused() during\n> > > this is blocking.\n> >\n> > Yeah, we can do this. I will send the updated patch after putting\n> > some more thought into these comments. Thanks again for the feedback.\n> >\n>\n> Please find the updated patch.\n\nI've looked at the patch. Here are review comments:\n\n+ /* Recovery pause state */\n+ RecoveryPauseState recoveryPause;\n\nNow that the value can have tri-state, how about renaming it to\nrecoveryPauseState?\n\n---\n bool\n RecoveryIsPaused(void)\n+{\n+ bool recoveryPause;\n+\n+ SpinLockAcquire(&XLogCtl->info_lck);\n+ recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) ?\ntrue : false;\n+ SpinLockRelease(&XLogCtl->info_lck);\n+\n+ return recoveryPause;\n+}\n+\n+bool\n+RecoveryPauseRequested(void)\n {\n bool recoveryPause;\n\n SpinLockAcquire(&XLogCtl->info_lck);\n- recoveryPause = XLogCtl->recoveryPause;\n+ recoveryPause = (XLogCtl->recoveryPause !=\nRECOVERY_IN_PROGRESS) ? true : false;\n SpinLockRelease(&XLogCtl->info_lck);\n\n return recoveryPause;\n }\n\nWe can write like recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED);\n\nAlso, since these functions do the almost same thing, I think we can\nhave a common function to get XLogCtl->recoveryPause, say\nGetRecoveryPauseState() or GetRecoveryPause(), and both\nRecoveryIsPaused() and RecoveryPauseRequested() use the returned\nvalue. 
What do you think?\n\n---\n+static void\n+CheckAndSetRecoveryPause(void)\n\nMaybe we need to declare the prototype of this function like other\nfunctions in xlog.c.\n\n---\n+ /*\n+ * If recovery is not in progress anymore then report an error this\n+ * could happen if the standby is promoted while we were waiting for\n+ * recovery to get paused.\n+ */\n+ if (!RecoveryInProgress())\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"recovery is not in progress\"),\n+ errhint(\"Recovery control functions can only be\nexecuted during recovery.\")));\n\nI think we can improve the error message so that we can tell users the\nstandby has been promoted during the wait. For example,\n\n errmsg(\"the standby was promoted during waiting for\nrecovery to be paused\")));\n\n---\n+ /* test for recovery pause if user has requested the pause */\n+ if (((volatile XLogCtlData *) XLogCtl)->recoveryPause)\n+ recoveryPausesHere(false);\n+\n+ now = GetCurrentTimestamp();\n+\n\nHmm, if the recovery pauses here, the wal receiver isn't launched even\nwhen wal_retrieve_retry_interval has passed, which seems not good. I\nthink we want the recovery to be paused but want the wal receiver to\ncontinue receiving WAL.\n\nAnd why do we need to set 'now' here?\n\n---\n/*\n * Wait until shared recoveryPause flag is cleared.\n *\n * endOfRecovery is true if the recovery target is reached and\n * the paused state starts at the end of recovery because of\n * recovery_target_action=pause, and false otherwise.\n *\n * XXX Could also be done with shared latch, avoiding the pg_usleep loop.\n * Probably not worth the trouble though. 
This state shouldn't be one that\n * anyone cares about server power consumption in.\n */\nstatic void\nrecoveryPausesHere(bool endOfRecovery)\n\nWe can improve the first sentence in the above function comment to\n\"Wait until shared recoveryPause is set to RECOVERY_IN_PROGRESS\" or\nsomething.\n\n---\n- PG_RETURN_BOOL(RecoveryIsPaused());\n+ if (!RecoveryPauseRequested())\n+ PG_RETURN_BOOL(false);\n+\n+ /* loop until the recovery is actually paused */\n+ while(!RecoveryIsPaused())\n+ {\n+ pg_usleep(10000L); /* wait for 10 msec */\n+\n+ /* meanwhile if recovery is resume requested then return false */\n+ if (!RecoveryPauseRequested())\n+ PG_RETURN_BOOL(false);\n+\n+ CHECK_FOR_INTERRUPTS();\n+\n+ /*\n+ * If recovery is not in progress anymore then report an error this\n+ * could happen if the standby is promoted while we were waiting for\n+ * recovery to get paused.\n+ */\n+ if (!RecoveryInProgress())\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"recovery is not in progress\"),\n+ errhint(\"Recovery control functions can only be\nexecuted during recovery.\")));\n+ }\n+\n+ PG_RETURN_BOOL(true);\n\nWe have the same !RecoveryPauseRequested() check twice, how about the\nfollowing arrangement?\n\n for (;;)\n {\n if (!RecoveryPauseRequested())\n PG_RETURN_BOOL(false);\n\n if (RecoveryIsPaused())\n break;\n\n pg_usleep(10000L);\n\n CHECK_FOR_INTERRUPTS();\n\n if (!RecoveryInProgress())\n ereport(...);\n }\n\nPG_RETURN_BOOL(true);\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Sat, 16 Jan 2021 12:28:34 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Sat, Jan 16, 2021 at 12:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> ---\n> + /* test for recovery pause if user has requested the pause */\n> + if (((volatile XLogCtlData *) XLogCtl)->recoveryPause)\n> + recoveryPausesHere(false);\n> +\n> + now = GetCurrentTimestamp();\n> +\n>\n> Hmm, if the recovery pauses here, the wal receiver isn't launched even\n> when wal_retrieve_retry_interval has passed, which seems not good. I\n> think we want the recovery to be paused but want the wal receiver to\n> continue receiving WAL.\n\nI had misunderstood the code and the patch, please ignore this\ncomment. Pausing the recovery here is fine with me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Sun, 17 Jan 2021 07:21:42 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sun, Jan 17, 2021 at 3:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Jan 16, 2021 at 12:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > ---\n> > + /* test for recovery pause if user has requested the pause */\n> > + if (((volatile XLogCtlData *) XLogCtl)->recoveryPause)\n> > + recoveryPausesHere(false);\n> > +\n> > + now = GetCurrentTimestamp();\n> > +\n> >\n> > Hmm, if the recovery pauses here, the wal receiver isn't launched even\n> > when wal_retrieve_retry_interval has passed, which seems not good. I\n> > think we want the recovery to be paused but want the wal receiver to\n> > continue receiving WAL.\n>\n> I had misunderstood the code and the patch, please ignore this\n> comment. 
Pausing the recovery here is fine with me.\n\nThanks for the review Sawada-San, I will work on your other comments\nand post the patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 17 Jan 2021 11:11:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, Jan 14, 2021 at 6:49 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> On Wed, 13 Jan 2021 17:49:43 +0530\n> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > >\n> > > > On Thu, 10 Dec 2020 11:25:23 +0530\n> > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > > > > > although this setting may not be usual. In addition, some users may set\n> > > > > > > recovery_min_apply_delay for a large. If such users call pg_is_wal_replay_paused,\n> > > > > > > it could wait for a long time.\n> > > > > > >\n> > > > > > > At least, I think we need some descriptions on document to explain\n> > > > > > > pg_is_wal_replay_paused could wait while a time.\n> > > > > >\n> > > > > > Ok\n> > > > >\n> > > > > Fixed this, added some comments in .sgml as well as in function header\n> > > >\n> > > > Thank you for fixing this.\n> > > >\n> > > > Also, is it better to fix the description of pg_wal_replay_pause from\n> > > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> > > > pg_is_wal_replay_paused?\n> > >\n> > > Okay\n> > >\n> > > >\n> > > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > > > > > control whether this waits for recovery to get paused or not? 
By setting its\n> > > > > > > default value to true or false, users can use the old format for calling this\n> > > > > > > and the backward compatibility can be maintained.\n> > > > > >\n> > > > > > So basically, if the wait_recovery_pause flag is false then we will\n> > > > > > immediately return true if the pause is requested? I agree that it is\n> > > > > > good to have an API to know whether the recovery pause is requested or\n> > > > > > not but I am not sure is it good idea to make this API serve both the\n> > > > > > purpose? Anyone else have any thoughts on this?\n> > > > > >\n> > > >\n> > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > > > this waits recovery to actually get paused. If we want to limit this API's\n> > > > purpose only to return the pause state, it seems better to fix this to return\n> > > > the actual state at the cost of lacking the backward compatibility. If we want\n> > > > to know whether pause is requested, we may add a new API like\n> > > > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n> > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > > >\n> > > > However, this might be a bikeshedding. If anyone don't care that\n> > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> > >\n> > > I don't think that it will be blocked ever, because\n> > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > > recovery process will not be stuck on waiting for the WAL.\n>\n> Yes, there is no stuck on waiting for the WAL. However, it can be stuck during resolving\n> a recovery conflict. The process could wait for max_standby_streaming_delay or\n> max_standby_archive_delay at most before recovery get completely paused.\n\nOkay, I agree that it is possible so for handling this we have a\ncouple of options\n1. 
The pg_is_wal_replay_paused() interface will wait for recovery to\nactually get paused, but the user has an option to cancel that. So I\nagree that there is currently no option to just know that a recovery\npause is requested without waiting for it to actually get paused if it\nis requested. So one option is we can provide another interface as\nyou mentioned, pg_is_wal_replay_pause_requested(), which can just\nreturn the request status. I am not sure how useful it is.\n\n2. Pass an option to pg_is_wal_replay_paused whether to wait for\nrecovery to actually get paused or not.\n\n3. Pass an option to pg_wal_replay_pause(), whether to wait for\nrecovery pause or just request and return.\n\nI like option 1; any other opinions on this?\n\n> Also, it could wait for recovery_min_apply_delay if it has a valid value. It is possible\n> that a user set this parameter to a large value, so it could wait for a long time. However,\n> this will be avoided by calling recoveryPausesHere() or CheckAndSetRecoveryPause() in\n> recoveryApplyDelay().\n\nRight\n\n> > > > > > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > > > > > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n> > > >\n> > > > How about this fix? I think users may want to cancel pg_is_wal_replay_paused() during\n> > > > this is blocking.\n> > >\n> > > Yeah, we can do this. I will send the updated patch after putting\n> > > some more thought into these comments. Thanks again for the feedback.\n> > >\n> >\n> > Please find the updated patch.\n>\n> Thanks. 
I confirmed that I can cancel pg_is_wal_repaly_paused() during stuck.\n\nThanks\n\n> Although it is a very trivial comment, I think that the new line before\n> HandleStartupProcInterrupts() is unnecessary.\n>\n> @@ -6052,12 +6062,20 @@ recoveryPausesHere(bool endOfRecovery)\n> (errmsg(\"recovery has paused\"),\n> errhint(\"Execute pg_wal_replay_resume() to continue.\")));\n>\n> - while (RecoveryIsPaused())\n> + while (RecoveryPauseRequested())\n> {\n> +\n> HandleStartupProcInterrupts();\n>\n>\n\nI will fix in the next version.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 17 Jan 2021 11:33:52 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sat, Jan 16, 2021 at 8:59 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jan 13, 2021 at 9:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > >\n> > > > On Thu, 10 Dec 2020 11:25:23 +0530\n> > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > > > > > although this setting may not be usual. In addition, some users may set\n> > > > > > > recovery_min_apply_delay for a large. 
If such users call pg_is_wal_replay_paused,\n> > > > > > > it could wait for a long time.\n> > > > > > >\n> > > > > > > At least, I think we need some descriptions on document to explain\n> > > > > > > pg_is_wal_replay_paused could wait while a time.\n> > > > > >\n> > > > > > Ok\n> > > > >\n> > > > > Fixed this, added some comments in .sgml as well as in function header\n> > > >\n> > > > Thank you for fixing this.\n> > > >\n> > > > Also, is it better to fix the description of pg_wal_replay_pause from\n> > > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> > > > pg_is_wal_replay_paused?\n> > >\n> > > Okay\n> > >\n> > > >\n> > > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > > > > > control whether this waits for recovery to get paused or not? By setting its\n> > > > > > > default value to true or false, users can use the old format for calling this\n> > > > > > > and the backward compatibility can be maintained.\n> > > > > >\n> > > > > > So basically, if the wait_recovery_pause flag is false then we will\n> > > > > > immediately return true if the pause is requested? I agree that it is\n> > > > > > good to have an API to know whether the recovery pause is requested or\n> > > > > > not but I am not sure is it good idea to make this API serve both the\n> > > > > > purpose? Anyone else have any thoughts on this?\n> > > > > >\n> > > >\n> > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > > > this waits recovery to actually get paused. If we want to limit this API's\n> > > > purpose only to return the pause state, it seems better to fix this to return\n> > > > the actual state at the cost of lacking the backward compatibility. If we want\n> > > > to know whether pause is requested, we may add a new API like\n> > > > pg_is_wal_replay_paluse_requeseted(). 
Also, if we want to wait recovery to actually\n> > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > > >\n> > > > However, this might be a bikeshedding. If anyone don't care that\n> > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> > >\n> > > I don't think that it will be blocked ever, because\n> > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > > recovery process will not be stuck on waiting for the WAL.\n> > >\n> > > > > > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > > > > > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n> > > >\n> > > > How about this fix? I think users may want to cancel pg_is_wal_replay_paused() during\n> > > > this is blocking.\n> > >\n> > > Yeah, we can do this. I will send the updated patch after putting\n> > > some more thought into these comments. Thanks again for the feedback.\n> > >\n> >\n> > Please find the updated patch.\n>\n> I've looked at the patch. Here are review comments:\n>\n> + /* Recovery pause state */\n> + RecoveryPauseState recoveryPause;\n>\n> Now that the value can have tri-state, how about renaming it to\n> recoveryPauseState?\n\nThis makes sense to me.\n\n> ---\n> bool\n> RecoveryIsPaused(void)\n> +{\n> + bool recoveryPause;\n> +\n> + SpinLockAcquire(&XLogCtl->info_lck);\n> + recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) ?\n> true : false;\n> + SpinLockRelease(&XLogCtl->info_lck);\n> +\n> + return recoveryPause;\n> +}\n> +\n> +bool\n> +RecoveryPauseRequested(void)\n> {\n> bool recoveryPause;\n>\n> SpinLockAcquire(&XLogCtl->info_lck);\n> - recoveryPause = XLogCtl->recoveryPause;\n> + recoveryPause = (XLogCtl->recoveryPause !=\n> RECOVERY_IN_PROGRESS) ? 
true : false;\n> SpinLockRelease(&XLogCtl->info_lck);\n>\n> return recoveryPause;\n> }\n>\n> We can write like recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED);\n\nIn RecoveryPauseRequested, we just want to know whether the pause is\nrequested or not, even if the pause requested and not yet pause then\nalso we want to return true. So how\nrecoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) will work?\n\n> Also, since these functions do the almost same thing, I think we can\n> have a common function to get XLogCtl->recoveryPause, say\n> GetRecoveryPauseState() or GetRecoveryPause(), and both\n> RecoveryIsPaused() and RecoveryPauseRequested() use the returned\n> value. What do you think?\n\nYeah we can do that.\n\n> ---\n> +static void\n> +CheckAndSetRecoveryPause(void)\n>\n> Maybe we need to declare the prototype of this function like other\n> functions in xlog.c.\n\nOkay\n\n> ---\n> + /*\n> + * If recovery is not in progress anymore then report an error this\n> + * could happen if the standby is promoted while we were waiting for\n> + * recovery to get paused.\n> + */\n> + if (!RecoveryInProgress())\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"recovery is not in progress\"),\n> + errhint(\"Recovery control functions can only be\n> executed during recovery.\")));\n>\n> I think we can improve the error message so that we can tell users the\n> standby has been promoted during the wait. For example,\n>\n> errmsg(\"the standby was promoted during waiting for\n> recovery to be paused\")));\n>\n> ---\n> + /* test for recovery pause if user has requested the pause */\n> + if (((volatile XLogCtlData *) XLogCtl)->recoveryPause)\n> + recoveryPausesHere(false);\n> +\n> + now = GetCurrentTimestamp();\n> +\n>\n> Hmm, if the recovery pauses here, the wal receiver isn't launched even\n> when wal_retrieve_retry_interval has passed, which seems not good. 
I\n> think we want the recovery to be paused but want the wal receiver to\n> continue receiving WAL.\n>\n> And why do we need to set 'now' here?\n>\n> ---\n> /*\n> * Wait until shared recoveryPause flag is cleared.\n> *\n> * endOfRecovery is true if the recovery target is reached and\n> * the paused state starts at the end of recovery because of\n> * recovery_target_action=pause, and false otherwise.\n> *\n> * XXX Could also be done with shared latch, avoiding the pg_usleep loop.\n> * Probably not worth the trouble though. This state shouldn't be one that\n> * anyone cares about server power consumption in.\n> */\n> static void\n> recoveryPausesHere(bool endOfRecovery)\n>\n> We can improve the first sentence in the above function comment to\n> \"Wait until shared recoveryPause is set to RECOVERY_IN_PROGRESS\" or\n> something.\n>\n> ---\n> - PG_RETURN_BOOL(RecoveryIsPaused());\n> + if (!RecoveryPauseRequested())\n> + PG_RETURN_BOOL(false);\n> +\n> + /* loop until the recovery is actually paused */\n> + while(!RecoveryIsPaused())\n> + {\n> + pg_usleep(10000L); /* wait for 10 msec */\n> +\n> + /* meanwhile if recovery is resume requested then return false */\n> + if (!RecoveryPauseRequested())\n> + PG_RETURN_BOOL(false);\n> +\n> + CHECK_FOR_INTERRUPTS();\n> +\n> + /*\n> + * If recovery is not in progress anymore then report an error this\n> + * could happen if the standby is promoted while we were waiting for\n> + * recovery to get paused.\n> + */\n> + if (!RecoveryInProgress())\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"recovery is not in progress\"),\n> + errhint(\"Recovery control functions can only be\n> executed during recovery.\")));\n> + }\n> +\n> + PG_RETURN_BOOL(true);\n>\n> We have the same !RecoveryPauseRequested() check twice, how about the\n> following arrangement?\n>\n> for (;;)\n> {\n> if (!RecoveryPauseRequested())\n> PG_RETURN_BOOL(false);\n>\n> if (RecoveryIsPaused())\n> break;\n>\n> 
pg_usleep(10000L);\n>\n> CHECK_FOR_INTERRUPTS();\n>\n> if (!RecoveryInProgress())\n> ereport(...);\n> }\n>\n> PG_RETURN_BOOL(true);\n>\n> Regards,\n>\n\nOkay, we can do that. I will make these changes in the next patch.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 17 Jan 2021 13:48:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sun, Jan 17, 2021 at 1:48 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Jan 16, 2021 at 8:59 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jan 13, 2021 at 9:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > >\n> > > > > On Thu, 10 Dec 2020 11:25:23 +0530\n> > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > > > > > > although this setting may not be usual. In addition, some users may set\n> > > > > > > > recovery_min_apply_delay for a large. 
If such users call pg_is_wal_replay_paused,\n> > > > > > > > it could wait for a long time.\n> > > > > > > >\n> > > > > > > > At least, I think we need some descriptions on document to explain\n> > > > > > > > pg_is_wal_replay_paused could wait while a time.\n> > > > > > >\n> > > > > > > Ok\n> > > > > >\n> > > > > > Fixed this, added some comments in .sgml as well as in function header\n> > > > >\n> > > > > Thank you for fixing this.\n> > > > >\n> > > > > Also, is it better to fix the description of pg_wal_replay_pause from\n> > > > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> > > > > pg_is_wal_replay_paused?\n> > > >\n> > > > Okay\n> > > >\n> > > > >\n> > > > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > > > > > > control whether this waits for recovery to get paused or not? By setting its\n> > > > > > > > default value to true or false, users can use the old format for calling this\n> > > > > > > > and the backward compatibility can be maintained.\n> > > > > > >\n> > > > > > > So basically, if the wait_recovery_pause flag is false then we will\n> > > > > > > immediately return true if the pause is requested? I agree that it is\n> > > > > > > good to have an API to know whether the recovery pause is requested or\n> > > > > > > not but I am not sure is it good idea to make this API serve both the\n> > > > > > > purpose? Anyone else have any thoughts on this?\n> > > > > > >\n> > > > >\n> > > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > > > > this waits recovery to actually get paused. If we want to limit this API's\n> > > > > purpose only to return the pause state, it seems better to fix this to return\n> > > > > the actual state at the cost of lacking the backward compatibility. If we want\n> > > > > to know whether pause is requested, we may add a new API like\n> > > > > pg_is_wal_replay_paluse_requeseted(). 
Also, if we want to wait recovery to actually\n> > > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > > > >\n> > > > > However, this might be a bikeshedding. If anyone don't care that\n> > > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> > > >\n> > > > I don't think that it will be blocked ever, because\n> > > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > > > recovery process will not be stuck on waiting for the WAL.\n> > > >\n> > > > > > > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > > > > > > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n> > > > >\n> > > > > How about this fix? I think users may want to cancel pg_is_wal_replay_paused() during\n> > > > > this is blocking.\n> > > >\n> > > > Yeah, we can do this. I will send the updated patch after putting\n> > > > some more thought into these comments. Thanks again for the feedback.\n> > > >\n> > >\n> > > Please find the updated patch.\n> >\n> > I've looked at the patch. Here are review comments:\n> >\n> > + /* Recovery pause state */\n> > + RecoveryPauseState recoveryPause;\n> >\n> > Now that the value can have tri-state, how about renaming it to\n> > recoveryPauseState?\n>\n> This makes sense to me.\n>\n> > ---\n> > bool\n> > RecoveryIsPaused(void)\n> > +{\n> > + bool recoveryPause;\n> > +\n> > + SpinLockAcquire(&XLogCtl->info_lck);\n> > + recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) ?\n> > true : false;\n> > + SpinLockRelease(&XLogCtl->info_lck);\n> > +\n> > + return recoveryPause;\n> > +}\n> > +\n> > +bool\n> > +RecoveryPauseRequested(void)\n> > {\n> > bool recoveryPause;\n> >\n> > SpinLockAcquire(&XLogCtl->info_lck);\n> > - recoveryPause = XLogCtl->recoveryPause;\n> > + recoveryPause = (XLogCtl->recoveryPause !=\n> > RECOVERY_IN_PROGRESS) ? 
true : false;\n> > SpinLockRelease(&XLogCtl->info_lck);\n> >\n> > return recoveryPause;\n> > }\n> >\n> > We can write like recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED);\n>\n> In RecoveryPauseRequested, we just want to know whether the pause is\n> requested or not, even if the pause requested and not yet pause then\n> also we want to return true. So how\n> recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) will work?\n>\n> > Also, since these functions do the almost same thing, I think we can\n> > have a common function to get XLogCtl->recoveryPause, say\n> > GetRecoveryPauseState() or GetRecoveryPause(), and both\n> > RecoveryIsPaused() and RecoveryPauseRequested() use the returned\n> > value. What do you think?\n>\n> Yeah we can do that.\n>\n> > ---\n> > +static void\n> > +CheckAndSetRecoveryPause(void)\n> >\n> > Maybe we need to declare the prototype of this function like other\n> > functions in xlog.c.\n>\n> Okay\n>\n> > ---\n> > + /*\n> > + * If recovery is not in progress anymore then report an error this\n> > + * could happen if the standby is promoted while we were waiting for\n> > + * recovery to get paused.\n> > + */\n> > + if (!RecoveryInProgress())\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"recovery is not in progress\"),\n> > + errhint(\"Recovery control functions can only be\n> > executed during recovery.\")));\n> >\n> > I think we can improve the error message so that we can tell users the\n> > standby has been promoted during the wait. 
For example,\n> >\n> > errmsg(\"the standby was promoted during waiting for\n> > recovery to be paused\")));\n> >\n> > ---\n> > + /* test for recovery pause if user has requested the pause */\n> > + if (((volatile XLogCtlData *) XLogCtl)->recoveryPause)\n> > + recoveryPausesHere(false);\n> > +\n> > + now = GetCurrentTimestamp();\n> > +\n> >\n> > Hmm, if the recovery pauses here, the wal receiver isn't launched even\n> > when wal_retrieve_retry_interval has passed, which seems not good. I\n> > think we want the recovery to be paused but want the wal receiver to\n> > continue receiving WAL.\n> >\n> > And why do we need to set 'now' here?\n> >\n> > ---\n> > /*\n> > * Wait until shared recoveryPause flag is cleared.\n> > *\n> > * endOfRecovery is true if the recovery target is reached and\n> > * the paused state starts at the end of recovery because of\n> > * recovery_target_action=pause, and false otherwise.\n> > *\n> > * XXX Could also be done with shared latch, avoiding the pg_usleep loop.\n> > * Probably not worth the trouble though. 
This state shouldn't be one that\n> > * anyone cares about server power consumption in.\n> > */\n> > static void\n> > recoveryPausesHere(bool endOfRecovery)\n> >\n> > We can improve the first sentence in the above function comment to\n> > \"Wait until shared recoveryPause is set to RECOVERY_IN_PROGRESS\" or\n> > something.\n> >\n> > ---\n> > - PG_RETURN_BOOL(RecoveryIsPaused());\n> > + if (!RecoveryPauseRequested())\n> > + PG_RETURN_BOOL(false);\n> > +\n> > + /* loop until the recovery is actually paused */\n> > + while(!RecoveryIsPaused())\n> > + {\n> > + pg_usleep(10000L); /* wait for 10 msec */\n> > +\n> > + /* meanwhile if recovery is resume requested then return false */\n> > + if (!RecoveryPauseRequested())\n> > + PG_RETURN_BOOL(false);\n> > +\n> > + CHECK_FOR_INTERRUPTS();\n> > +\n> > + /*\n> > + * If recovery is not in progress anymore then report an error this\n> > + * could happen if the standby is promoted while we were waiting for\n> > + * recovery to get paused.\n> > + */\n> > + if (!RecoveryInProgress())\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"recovery is not in progress\"),\n> > + errhint(\"Recovery control functions can only be\n> > executed during recovery.\")));\n> > + }\n> > +\n> > + PG_RETURN_BOOL(true);\n> >\n> > We have the same !RecoveryPauseRequested() check twice, how about the\n> > following arrangement?\n> >\n> > for (;;)\n> > {\n> > if (!RecoveryPauseRequested())\n> > PG_RETURN_BOOL(false);\n> >\n> > if (RecoveryIsPaused())\n> > break;\n> >\n> > pg_usleep(10000L);\n> >\n> > CHECK_FOR_INTERRUPTS();\n> >\n> > if (!RecoveryInProgress())\n> > ereport(...);\n> > }\n> >\n> > PG_RETURN_BOOL(true);\n> >\n> > Regards,\n> >\n>\n> Okay, we can do that. I will make these changes in the next patch.\n>\n\nI have fixed the above agreed comments. 
Please have a look.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 18 Jan 2021 11:26:00 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sun, 17 Jan 2021 11:33:52 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Thu, Jan 14, 2021 at 6:49 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > On Wed, 13 Jan 2021 17:49:43 +0530\n> > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > >\n> > > > > On Thu, 10 Dec 2020 11:25:23 +0530\n> > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > > > > > > although this setting may not be usual. In addition, some users may set\n> > > > > > > > recovery_min_apply_delay for a large. 
If such users call pg_is_wal_replay_paused,\n> > > > > > > > it could wait for a long time.\n> > > > > > > >\n> > > > > > > > At least, I think we need some descriptions on document to explain\n> > > > > > > > pg_is_wal_replay_paused could wait while a time.\n> > > > > > >\n> > > > > > > Ok\n> > > > > >\n> > > > > > Fixed this, added some comments in .sgml as well as in function header\n> > > > >\n> > > > > Thank you for fixing this.\n> > > > >\n> > > > > Also, is it better to fix the description of pg_wal_replay_pause from\n> > > > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> > > > > pg_is_wal_replay_paused?\n> > > >\n> > > > Okay\n> > > >\n> > > > >\n> > > > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > > > > > > control whether this waits for recovery to get paused or not? By setting its\n> > > > > > > > default value to true or false, users can use the old format for calling this\n> > > > > > > > and the backward compatibility can be maintained.\n> > > > > > >\n> > > > > > > So basically, if the wait_recovery_pause flag is false then we will\n> > > > > > > immediately return true if the pause is requested? I agree that it is\n> > > > > > > good to have an API to know whether the recovery pause is requested or\n> > > > > > > not but I am not sure is it good idea to make this API serve both the\n> > > > > > > purpose? Anyone else have any thoughts on this?\n> > > > > > >\n> > > > >\n> > > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > > > > this waits recovery to actually get paused. If we want to limit this API's\n> > > > > purpose only to return the pause state, it seems better to fix this to return\n> > > > > the actual state at the cost of lacking the backward compatibility. If we want\n> > > > > to know whether pause is requested, we may add a new API like\n> > > > > pg_is_wal_replay_paluse_requeseted(). 
Also, if we want to wait recovery to actually\n> > > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > > > >\n> > > > > However, this might be a bikeshedding. If anyone don't care that\n> > > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> > > >\n> > > > I don't think that it will be blocked ever, because\n> > > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > > > recovery process will not be stuck on waiting for the WAL.\n> >\n> > Yes, there is no stuck on waiting for the WAL. However, it can be stuck during resolving\n> > a recovery conflict. The process could wait for max_standby_streaming_delay or\n> > max_standby_archive_delay at most before recovery get completely paused.\n> \n> Okay, I agree that it is possible so for handling this we have a\n> couple of options\n> 1. pg_is_wal_replay_paused(), interface will wait for recovery to\n> actually get paused, but user have an option to cancel that. So I\n> agree that there is currently no option to just know that recovery\n> pause is requested without waiting for its actually get paused if it\n> is requested. So one option is we can provide an another interface as\n> you mentioned pg_is_wal_replay_paluse_requeseted(), which can just\n> return the request status. I am not sure how useful it is.\n\nIf it is acceptable that pg_is_wal_replay_paused() makes users wait, \nI'm ok for the current interface. I don't feel the need of\npg_is_wal_replay_paluse_requeseted().\n\n> \n> 2. Pass an option to pg_is_wal_replay_paused whether to wait for\n> recovery to actually get paused or not.\n> \n> 3. Pass an option to pg_wal_replay_pause(), whether to wait for\n> recovery pause or just request and return.\n> \n> I like the option 1, any other opinion on this?\n> \n> > Also, it could wait for recovery_min_apply_delay if it has a valid value. 
It is possible\n> > that a user set this parameter to a large value, so it could wait for a long time. However,\n> > this will be avoided by calling recoveryPausesHere() or CheckAndSetRecoveryPause() in\n> > recoveryApplyDelay().\n> \n> Right\n\nIs there any reason not to do it?\n\n> \n> > > > > > > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > > > > > > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n> > > > >\n> > > > > How about this fix? I think users may want to cancel pg_is_wal_replay_paused() during\n> > > > > this is blocking.\n> > > >\n> > > > Yeah, we can do this. I will send the updated patch after putting\n> > > > some more thought into these comments. Thanks again for the feedback.\n> > > >\n> > >\n> > > Please find the updated patch.\n> >\n> > Thanks. I confirmed that I can cancel pg_is_wal_repaly_paused() during stuck.\n> \n> Thanks\n> \n> > Although it is a very trivial comment, I think that the new line before\n> > HandleStartupProcInterrupts() is unnecessary.\n> >\n> > @@ -6052,12 +6062,20 @@ recoveryPausesHere(bool endOfRecovery)\n> > (errmsg(\"recovery has paused\"),\n> > errhint(\"Execute pg_wal_replay_resume() to continue.\")));\n> >\n> > - while (RecoveryIsPaused())\n> > + while (RecoveryPauseRequested())\n> > {\n> > +\n> > HandleStartupProcInterrupts();\n> >\n> >\n> \n> I will fix in the next version.\n> \n> -- \n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 19 Jan 2021 11:41:18 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Tue, 19 Jan 2021 at 8:12 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Sun, 17 Jan 2021 11:33:52 +0530\n> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > On Thu, Jan 14, 2021 at 6:49 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > On Wed, 13 Jan 2021 17:49:43 +0530\n> > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > > On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > > > >\n> > > > > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp>\n> wrote:\n> > > > > >\n> > > > > > On Thu, 10 Dec 2020 11:25:23 +0530\n> > > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > > > > However, I wonder users don't expect\n> pg_is_wal_replay_paused to wait.\n> > > > > > > > > Especially, if max_standby_streaming_delay is -1, this\n> will be blocked forever,\n> > > > > > > > > although this setting may not be usual. In addition, some\n> users may set\n> > > > > > > > > recovery_min_apply_delay for a large. If such users call\n> pg_is_wal_replay_paused,\n> > > > > > > > > it could wait for a long time.\n> > > > > > > > >\n> > > > > > > > > At least, I think we need some descriptions on document to\n> explain\n> > > > > > > > > pg_is_wal_replay_paused could wait while a time.\n> > > > > > > >\n> > > > > > > > Ok\n> > > > > > >\n> > > > > > > Fixed this, added some comments in .sgml as well as in\n> function header\n> > > > > >\n> > > > > > Thank you for fixing this.\n> > > > > >\n> > > > > > Also, is it better to fix the description of pg_wal_replay_pause\n> from\n> > > > > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according\n> with\n> > > > > > pg_is_wal_replay_paused?\n> > > > >\n> > > > > Okay\n> > > > >\n> > > > > >\n> > > > > > > > > Also, how about adding a new boolean argument to\n> pg_is_wal_replay_paused to\n> > > > > > > > > control whether this waits for recovery to get paused or\n> not? 
By setting its\n> > > > > > > > > default value to true or false, users can use the old\n> format for calling this\n> > > > > > > > > and the backward compatibility can be maintained.\n> > > > > > > >\n> > > > > > > > So basically, if the wait_recovery_pause flag is false then\n> we will\n> > > > > > > > immediately return true if the pause is requested? I agree\n> that it is\n> > > > > > > > good to have an API to know whether the recovery pause is\n> requested or\n> > > > > > > > not but I am not sure is it good idea to make this API serve\n> both the\n> > > > > > > > purpose? Anyone else have any thoughts on this?\n> > > > > > > >\n> > > > > >\n> > > > > > I think the current pg_is_wal_replay_paused() already has\n> another purpose;\n> > > > > > this waits recovery to actually get paused. If we want to limit\n> this API's\n> > > > > > purpose only to return the pause state, it seems better to fix\n> this to return\n> > > > > > the actual state at the cost of lacking the backward\n> compatibility. If we want\n> > > > > > to know whether pause is requested, we may add a new API like\n> > > > > > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait\n> recovery to actually\n> > > > > > get paused, we may add an option to pg_wal_replay_pause() for\n> this purpose.\n> > > > > >\n> > > > > > However, this might be a bikeshedding. If anyone don't care that\n> > > > > > pg_is_wal_replay_paused() can make user wait for a long time, I\n> don't care either.\n> > > > >\n> > > > > I don't think that it will be blocked ever, because\n> > > > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > > > > recovery process will not be stuck on waiting for the WAL.\n> > >\n> > > Yes, there is no stuck on waiting for the WAL. However, it can be\n> stuck during resolving\n> > > a recovery conflict. 
The process could wait for\n> max_standby_streaming_delay or\n> > > max_standby_archive_delay at most before recovery get completely\n> paused.\n> >\n> > Okay, I agree that it is possible so for handling this we have a\n> > couple of options\n> > 1. pg_is_wal_replay_paused(), interface will wait for recovery to\n> > actually get paused, but user have an option to cancel that. So I\n> > agree that there is currently no option to just know that recovery\n> > pause is requested without waiting for its actually get paused if it\n> > is requested. So one option is we can provide an another interface as\n> > you mentioned pg_is_wal_replay_paluse_requeseted(), which can just\n> > return the request status. I am not sure how useful it is.\n>\n> If it is acceptable that pg_is_wal_replay_paused() makes users wait,\n> I'm ok for the current interface. I don't feel the need of\n> pg_is_wal_replay_paluse_requeseted().\n>\n> >\n> > 2. Pass an option to pg_is_wal_replay_paused whether to wait for\n> > recovery to actually get paused or not.\n> >\n> > 3. Pass an option to pg_wal_replay_pause(), whether to wait for\n> > recovery pause or just request and return.\n> >\n> > I like the option 1, any other opinion on this?\n> >\n> > > Also, it could wait for recovery_min_apply_delay if it has a valid\n> value. It is possible\n> > > that a user set this parameter to a large value, so it could wait for\n> a long time. However,\n> > > this will be avoided by calling recoveryPausesHere() or\n> CheckAndSetRecoveryPause() in\n> > > recoveryApplyDelay().\n> >\n> > Right\n>\n> Is there any reason not to do it?\n\n\n\nI think I missed that.. 
I will do in the next version\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Tue, 19 Jan 2021 08:34:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "At Tue, 19 Jan 2021 11:41:18 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> On Sun, 17 Jan 2021 11:33:52 +0530\n> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > > > > > this waits recovery to actually get paused. 
If we want to limit this API's\n> > > > > > purpose only to return the pause state, it seems better to fix this to return\n> > > > > > the actual state at the cost of lacking the backward compatibility. If we want\n> > > > > > to know whether pause is requested, we may add a new API like\n> > > > > > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n> > > > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > > > > >\n> > > > > > However, this might be a bikeshedding. If anyone don't care that\n> > > > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> > > > >\n> > > > > I don't think that it will be blocked ever, because\n> > > > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > > > > recovery process will not be stuck on waiting for the WAL.\n> > >\n> > > Yes, there is no stuck on waiting for the WAL. However, it can be stuck during resolving\n> > > a recovery conflict. The process could wait for max_standby_streaming_delay or\n> > > max_standby_archive_delay at most before recovery get completely paused.\n> > \n> > Okay, I agree that it is possible so for handling this we have a\n> > couple of options\n> > 1. pg_is_wal_replay_paused(), interface will wait for recovery to\n> > actually get paused, but user have an option to cancel that. So I\n> > agree that there is currently no option to just know that recovery\n> > pause is requested without waiting for its actually get paused if it\n> > is requested. So one option is we can provide an another interface as\n> > you mentioned pg_is_wal_replay_paluse_requeseted(), which can just\n> > return the request status. I am not sure how useful it is.\n> \n> If it is acceptable that pg_is_wal_replay_paused() makes users wait, \n> I'm ok for the current interface. 
I don't feel the need of\n> pg_is_wal_replay_paluse_requeseted().\n\nFWIW, the name \"pg_is_wal_replay_paused\" is suggesting \"to know\nwhether recovery is paused or not at present\" and it would be\nsurprising to see it to wait for the recovery actually paused by\ndefault.\n\nI think there's no functions to wait for some situation at least for\nnow. If we wanted to wait for some condition to make, we would loop\nover check-and-wait using plpgsql.\n\nIf you desire to wait to replication to pause by a function, I would\ndo that by adding a parameter to the function.\n\npg_is_wal_replay_paused(OPTIONAL bool wait_for_pause)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Jan 2021 14:00:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, Jan 19, 2021 at 10:30 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 19 Jan 2021 11:41:18 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in\n> > On Sun, 17 Jan 2021 11:33:52 +0530\n> > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > > >\n> > > > > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > > > > > > this waits recovery to actually get paused. If we want to limit this API's\n> > > > > > > purpose only to return the pause state, it seems better to fix this to return\n> > > > > > > the actual state at the cost of lacking the backward compatibility. If we want\n> > > > > > > to know whether pause is requested, we may add a new API like\n> > > > > > > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n> > > > > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > > > > > >\n> > > > > > > However, this might be a bikeshedding. 
If anyone don't care that\n> > > > > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> > > > > >\n> > > > > > I don't think that it will be blocked ever, because\n> > > > > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > > > > > recovery process will not be stuck on waiting for the WAL.\n> > > >\n> > > > Yes, there is no stuck on waiting for the WAL. However, it can be stuck during resolving\n> > > > a recovery conflict. The process could wait for max_standby_streaming_delay or\n> > > > max_standby_archive_delay at most before recovery get completely paused.\n> > >\n> > > Okay, I agree that it is possible so for handling this we have a\n> > > couple of options\n> > > 1. pg_is_wal_replay_paused(), interface will wait for recovery to\n> > > actually get paused, but user have an option to cancel that. So I\n> > > agree that there is currently no option to just know that recovery\n> > > pause is requested without waiting for its actually get paused if it\n> > > is requested. So one option is we can provide an another interface as\n> > > you mentioned pg_is_wal_replay_paluse_requeseted(), which can just\n> > > return the request status. I am not sure how useful it is.\n> >\n> > If it is acceptable that pg_is_wal_replay_paused() makes users wait,\n> > I'm ok for the current interface. I don't feel the need of\n> > pg_is_wal_replay_paluse_requeseted().\n>\n> FWIW, the name \"pg_is_wal_replay_paused\" is suggesting \"to know\n> whether recovery is paused or not at present\" and it would be\n> surprising to see it to wait for the recovery actually paused by\n> default.\n>\n> I think there's no functions to wait for some situation at least for\n> now. 
If we wanted to wait for some condition to make, we would loop\n> over check-and-wait using plpgsql.\n>\n> If you desire to wait to replication to pause by a function, I would\n> do that by adding a parameter to the function.\n>\n> pg_is_wal_replay_paused(OPTIONAL bool wait_for_pause)\n\nThis seems to be a fair point to me. So I will add an option to the\nAPI, and if that is passed true then we will wait for recovery to get\npaused.\notherwise, this will just return true if the pause is requested same\nas the current behavior.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Jan 2021 12:12:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, Jan 19, 2021 at 8:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, 19 Jan 2021 at 8:12 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>>\n>> On Sun, 17 Jan 2021 11:33:52 +0530\n>> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> > On Thu, Jan 14, 2021 at 6:49 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>> > >\n>> > > On Wed, 13 Jan 2021 17:49:43 +0530\n>> > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> > >\n>> > > > On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> > > > >\n>> > > > > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>> > > > > >\n>> > > > > > On Thu, 10 Dec 2020 11:25:23 +0530\n>> > > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> > > > > >\n>> > > > > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n>> > > > > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n>> > > > > > > > > although this setting may not be usual. In addition, some users may set\n>> > > > > > > > > recovery_min_apply_delay for a large. 
If such users call pg_is_wal_replay_paused,\n>> > > > > > > > > it could wait for a long time.\n>> > > > > > > > >\n>> > > > > > > > > At least, I think we need some descriptions on document to explain\n>> > > > > > > > > pg_is_wal_replay_paused could wait while a time.\n>> > > > > > > >\n>> > > > > > > > Ok\n>> > > > > > >\n>> > > > > > > Fixed this, added some comments in .sgml as well as in function header\n>> > > > > >\n>> > > > > > Thank you for fixing this.\n>> > > > > >\n>> > > > > > Also, is it better to fix the description of pg_wal_replay_pause from\n>> > > > > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n>> > > > > > pg_is_wal_replay_paused?\n>> > > > >\n>> > > > > Okay\n>> > > > >\n>> > > > > >\n>> > > > > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n>> > > > > > > > > control whether this waits for recovery to get paused or not? By setting its\n>> > > > > > > > > default value to true or false, users can use the old format for calling this\n>> > > > > > > > > and the backward compatibility can be maintained.\n>> > > > > > > >\n>> > > > > > > > So basically, if the wait_recovery_pause flag is false then we will\n>> > > > > > > > immediately return true if the pause is requested? I agree that it is\n>> > > > > > > > good to have an API to know whether the recovery pause is requested or\n>> > > > > > > > not but I am not sure is it good idea to make this API serve both the\n>> > > > > > > > purpose? Anyone else have any thoughts on this?\n>> > > > > > > >\n>> > > > > >\n>> > > > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n>> > > > > > this waits recovery to actually get paused. If we want to limit this API's\n>> > > > > > purpose only to return the pause state, it seems better to fix this to return\n>> > > > > > the actual state at the cost of lacking the backward compatibility. 
If we want\n>> > > > > > to know whether pause is requested, we may add a new API like\n>> > > > > > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n>> > > > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n>> > > > > >\n>> > > > > > However, this might be a bikeshedding. If anyone don't care that\n>> > > > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n>> > > > >\n>> > > > > I don't think that it will be blocked ever, because\n>> > > > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n>> > > > > recovery process will not be stuck on waiting for the WAL.\n>> > >\n>> > > Yes, there is no stuck on waiting for the WAL. However, it can be stuck during resolving\n>> > > a recovery conflict. The process could wait for max_standby_streaming_delay or\n>> > > max_standby_archive_delay at most before recovery get completely paused.\n>> >\n>> > Okay, I agree that it is possible so for handling this we have a\n>> > couple of options\n>> > 1. pg_is_wal_replay_paused(), interface will wait for recovery to\n>> > actually get paused, but user have an option to cancel that. So I\n>> > agree that there is currently no option to just know that recovery\n>> > pause is requested without waiting for its actually get paused if it\n>> > is requested. So one option is we can provide an another interface as\n>> > you mentioned pg_is_wal_replay_paluse_requeseted(), which can just\n>> > return the request status. I am not sure how useful it is.\n>>\n>> If it is acceptable that pg_is_wal_replay_paused() makes users wait,\n>> I'm ok for the current interface. I don't feel the need of\n>> pg_is_wal_replay_paluse_requeseted().\n>>\n>> >\n>> > 2. Pass an option to pg_is_wal_replay_paused whether to wait for\n>> > recovery to actually get paused or not.\n>> >\n>> > 3. 
Pass an option to pg_wal_replay_pause(), whether to wait for\n>> > recovery pause or just request and return.\n>> >\n>> > I like the option 1, any other opinion on this?\n>> >\n>> > > Also, it could wait for recovery_min_apply_delay if it has a valid value. It is possible\n>> > > that a user set this parameter to a large value, so it could wait for a long time. However,\n>> > > this will be avoided by calling recoveryPausesHere() or CheckAndSetRecoveryPause() in\n>> > > recoveryApplyDelay().\n>> >\n>> > Right\n>>\n>> Is there any reason not to do it?\n>\n>\n>\n> I think I missed that.. I will do in the next version\n>\n\nIn the last patch there were some local changes that I had not added to\nthe patch, and it was giving a compilation warning, so I have fixed that;\nalong with that, I have addressed this comment of yours as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 19 Jan 2021 21:32:31 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, Jan 19, 2021 at 9:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> In the last patch there were some local changes that I had not added to\n> the patch, and it was giving a compilation warning, so I have fixed that;\n> along with that, I have addressed this comment of yours as well.\n\nThanks for the patch. I took a look at the v5 patch, below are some\ncomments. Please ignore if I'm repeating any of the comments discussed\nupthread.\n\n[1] Can we also have a wait version for pg_wal_replay_pause that waits\nuntil recovery is actually paused right after setting the recovery\nstate to RECOVERY_PAUSE_REQUESTED? Something like this -\npg_wal_replay_pause_and_wait(wait boolean, wait_seconds integer\nDEFAULT 60) returns boolean. It waits until either default or provided\nwait_seconds and returns true if the recovery is paused within that\nwait_seconds otherwise false. 
If wait_seconds is 0 or -1, then it\nwaits until recovery is paused and returns true. One advantage of this\nfunction is that users don't need to call pg_is_wal_replay_paused().\nIMHO, the job of ensuring whether or not the recovery is actually\npaused, is better done by the one who requests\nit(pg_wal_replay_pause/pg_wal_replay_pause_and_wait).\n\n[2] Is it intentional that RecoveryPauseRequested() returns true even\nif XLogCtl->recoveryPauseState is either RECOVERY_PAUSE_REQUESTED or\nRECOVERY_PAUSED?\n\n[3] Can we change IsRecoveryPaused() instead of RecoveryIsPaused() and\nIsRecoveryPauseRequested() instead of RecoveryPauseRequested()? How\nabout having inline(because they have one line of code) functions like\nIsRecoveryPauseRequested(), IsRecoveryPaused() and\nIsRecoveryInProgress(), returning true when RECOVERY_PAUSE_REQUESTED,\nRECOVERY_PAUSED and RECOVERY_IN_PROGRESS respectively?\n\n[4] Can we have at least one line of comments for each of the new\nfunctions, I know the function names mean everything? And also for\nexisting SetRecoveryPause() and GetRecoveryPauseState()?\n\n[5] Typo, it's \"every time\" ---> + * this everytime.\n\n[6] Do we reach PG_RETURN_BOOL(true); at the end of\npg_is_wal_replay_paused()? If not, is it there for satisfying the\ncompiler?\n\n[7] In pg_is_wal_replay_paused(), do we need if\n(!RecoveryPauseRequested()) inside the for (;;)? If yes, can we add\ncomments about why we need it there?\n\n[8] Isn't it good to have\npgstat_report_wait_start(WAIT_EVENT_RECOVERY_PAUSE) and\npgstat_report_wait_end(), just before for (;;) and at the end\nrespectively? This will help users in knowing what they are waiting\nfor? 
Alternatively we can issue notice/warnings/write to server log\nwhile we are waiting in for (;;) for the recovery to get paused?\n\n[9] In pg_is_wal_replay_paused(), is the 10 msec sleep time chosen\nrandomly, or based on some analysis of the time it takes to get from the\nrequest state to the recovery-paused state, or something else?\n\n[10] errhint(\"The standby was promoted while waiting for recovery to\nbe paused.\"))); Can we know whether standby is actually promoted and\nthrow this error? Because the error \"recovery is not in progress\" here\nis being thrown by just relying on if (!RecoveryInProgress()). IIUC,\nusing pg_promote\n\n[11] Can we try to add tests for these functions in TAP? Currently, we\ndon't have any tests for pg_is_wal_replay_paused, pg_wal_replay_resume\nor pg_wal_replay_pause, but we have tests for pg_promote in timeline\nswitch.\n\n[12] Isn't it better to change RecoveryPauseState enum\nRECOVERY_IN_PROGRESS value to start from 1 instead of 0? Because when\nXLogCtl shared memory is initialized, I think recoveryPauseState can\nbe 0, so should it mean recovery in progress?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Jan 2021 15:29:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, Jan 21, 2021 at 3:29 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n\nThanks for reviewing, Bharath.\n\n> On Tue, Jan 19, 2021 at 9:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > In the last patch there were some local changes which I did not add to\n> > the patch and it was giving compilation warning so fixed that along\n> > with that I have addressed your this comment as well.\n>\n> Thanks for the patch. I took a look at the v5 patch, below are some\n> comments. 
Please ignore if I'm repeating any of the comments discussed\n> upthread.\n>\n> [1] Can we also have a wait version for pg_wal_replay_pause that waits\n> until recovery is actually paused right after setting the recovery\n> state to RECOVERY_PAUSE_REQUESTED? Something like this -\n> pg_wal_replay_pause_and_wait(wait boolean, wait_seconds integer\n> DEFAULT 60) returns boolean. It waits until either default or provided\n> wait_seconds and returns true if the recovery is paused within that\n> wait_seconds otherwise false. If wait_seconds is 0 or -1, then it\n> waits until recovery is paused and returns true. One advantage of this\n> function is that users don't need to call pg_is_wal_replay_paused().\n> IMHO, the job of ensuring whether or not the recovery is actually\n> paused, is better done by the one who requests\n> it(pg_wal_replay_pause/pg_wal_replay_pause_and_wait).\n\nI don't think we need wait/onwait version for all the APIs, IMHO it\nwould be enough for the user to know whether the recovery is actually\npaused or not\nand for that, we are changing pg_is_wal_replay_paused to wait for the\npause. However, in the next version in pg_is_wal_replay_paused I will\nprovide a flag so that the user can decide whether to wait for the\npause or just get the request status.\n\n> [2] Is it intentional that RecoveryPauseRequested() returns true even\n> if XLogCtl->recoveryPauseState is either RECOVERY_PAUSE_REQUESTED or\n> RECOVERY_PAUSED?\n\nYes this is intended\n\n> [3] Can we change IsRecoveryPaused() instead of RecoveryIsPaused() and\n> IsRecoveryPauseRequested() instead of RecoveryPauseRequested()? 
How\n> about having inline(because they have one line of code) functions like\n> IsRecoveryPauseRequested(), IsRecoveryPaused() and\n> IsRecoveryInProgress(), returning true when RECOVERY_PAUSE_REQUESTED,\n> RECOVERY_PAUSED and RECOVERY_IN_PROGRESS respectively?\n\nYeah, we can do that, I am not sure whether we need\nIsRecoveryInProgress function though.\n\n> [4] Can we have at least one line of comments for each of the new\n> functions, I know the function names mean everything? And also for\n> existing SetRecoveryPause() and GetRecoveryPauseState()?\n\nWill do that\n\n> [5] Typo, it's \"every time\" ---> + * this everytime.\n\nOk\n\n> [6] Do we reach PG_RETURN_BOOL(true); at the end of\n> pg_is_wal_replay_paused()? If not, is it there for satisfying the\n> compiler?\n\nYes\n\n> [7] In pg_is_wal_replay_paused(), do we need if\n> (!RecoveryPauseRequested()) inside the for (;;)? If yes, can we add\n> comments about why we need it there?\n\nYes, we need it if replay resumed during the loop, I will add the comments.\n\n> [8] Isn't it good to have\n> pgstat_report_wait_start(WAIT_EVENT_RECOVERY_PAUSE) and\n> pgstat_report_wait_end(), just before for (;;) and at the end\n> respectively? This will help users in knowing what they are waiting\n> for? Alternatively we can issue notice/warnings/write to server log\n> while we are waiting in for (;;) for the recovery to get paused?\n\nI think we can do that, let me think about this and get back to you.\n\n> [9] In pg_is_wal_replay_paused(), is 10 msec sleep time chosen\n> randomly or based on some analysis that the time it takes to get to\n> recovery paused state from request state or some other?\n\nI don't think we can identify that when actually recovery can get\npaused. 
Though in pg_wal_replay_pause() we are sending a signal to wake up\nall the places where we are waiting for WAL, and right after that we are\nchecking. But it also depends upon other configuration parameters like\nmax_standby_streaming_delay.\n\n> [10] errhint(\"The standby was promoted while waiting for recovery to\n> be paused.\"))); Can we know whether standby is actually promoted and\n> throw this error? Because the error \"recovery is not in progress\" here\n> is being thrown by just relying on if (!RecoveryInProgress()). IIUC,\n> using pg_promote\n\nBecause before checking this it has already checked\nRecoveryPauseRequested(), and if that is true then recovery was in\nprogress at some point and is not anymore, which can happen due to\npromotion. But I am fine with reverting to the old error that it cannot\nexecute if recovery is not in progress.\n\n> [11] Can we try to add tests for these functions in TAP? Currently, we\n> don't have any tests for pg_is_wal_replay_paused, pg_wal_replay_resume\n> or pg_wal_replay_pause, but we have tests for pg_promote in timeline\n> switch.\n\nI will work on this.\n\n> [12] Isn't it better to change RecoveryPauseState enum\n> RECOVERY_IN_PROGRESS value to start from 1 instead of 0? Because when\n> XLogCtl shared memory is initialized, I think recoveryPauseState can\n> be 0, so should it mean recovery in progress?\n\nI think this state is to track the pause status, either pause\nrequested or actually paused; we don't really want to track\nRECOVERY_IN_PROGRESS. Maybe we can change the name of that status to\nRECOVERY_PAUSE_NOT_REQUESTED or RECOVERY_PAUSE_NONE?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Jan 2021 17:23:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Tue, 19 Jan 2021 21:32:31 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Tue, Jan 19, 2021 at 8:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, 19 Jan 2021 at 8:12 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >>\n> >> On Sun, 17 Jan 2021 11:33:52 +0530\n> >> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>\n> >> > On Thu, Jan 14, 2021 at 6:49 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >> > >\n> >> > > On Wed, 13 Jan 2021 17:49:43 +0530\n> >> > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >> > >\n> >> > > > On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >> > > > >\n> >> > > > > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >> > > > > >\n> >> > > > > > On Thu, 10 Dec 2020 11:25:23 +0530\n> >> > > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >> > > > > >\n> >> > > > > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> >> > > > > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> >> > > > > > > > > although this setting may not be usual. In addition, some users may set\n> >> > > > > > > > > recovery_min_apply_delay for a large. 
If such users call pg_is_wal_replay_paused,\n> >> > > > > > > > > it could wait for a long time.\n> >> > > > > > > > >\n> >> > > > > > > > > At least, I think we need some descriptions on document to explain\n> >> > > > > > > > > pg_is_wal_replay_paused could wait while a time.\n> >> > > > > > > >\n> >> > > > > > > > Ok\n> >> > > > > > >\n> >> > > > > > > Fixed this, added some comments in .sgml as well as in function header\n> >> > > > > >\n> >> > > > > > Thank you for fixing this.\n> >> > > > > >\n> >> > > > > > Also, is it better to fix the description of pg_wal_replay_pause from\n> >> > > > > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> >> > > > > > pg_is_wal_replay_paused?\n> >> > > > >\n> >> > > > > Okay\n> >> > > > >\n> >> > > > > >\n> >> > > > > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> >> > > > > > > > > control whether this waits for recovery to get paused or not? By setting its\n> >> > > > > > > > > default value to true or false, users can use the old format for calling this\n> >> > > > > > > > > and the backward compatibility can be maintained.\n> >> > > > > > > >\n> >> > > > > > > > So basically, if the wait_recovery_pause flag is false then we will\n> >> > > > > > > > immediately return true if the pause is requested? I agree that it is\n> >> > > > > > > > good to have an API to know whether the recovery pause is requested or\n> >> > > > > > > > not but I am not sure is it good idea to make this API serve both the\n> >> > > > > > > > purpose? Anyone else have any thoughts on this?\n> >> > > > > > > >\n> >> > > > > >\n> >> > > > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> >> > > > > > this waits recovery to actually get paused. If we want to limit this API's\n> >> > > > > > purpose only to return the pause state, it seems better to fix this to return\n> >> > > > > > the actual state at the cost of lacking the backward compatibility. 
If we want\n> >> > > > > > to know whether pause is requested, we may add a new API like\n> >> > > > > > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n> >> > > > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> >> > > > > >\n> >> > > > > > However, this might be a bikeshedding. If anyone don't care that\n> >> > > > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> >> > > > >\n> >> > > > > I don't think that it will be blocked ever, because\n> >> > > > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> >> > > > > recovery process will not be stuck on waiting for the WAL.\n> >> > >\n> >> > > Yes, there is no stuck on waiting for the WAL. However, it can be stuck during resolving\n> >> > > a recovery conflict. The process could wait for max_standby_streaming_delay or\n> >> > > max_standby_archive_delay at most before recovery get completely paused.\n> >> >\n> >> > Okay, I agree that it is possible so for handling this we have a\n> >> > couple of options\n> >> > 1. pg_is_wal_replay_paused(), interface will wait for recovery to\n> >> > actually get paused, but user have an option to cancel that. So I\n> >> > agree that there is currently no option to just know that recovery\n> >> > pause is requested without waiting for its actually get paused if it\n> >> > is requested. So one option is we can provide an another interface as\n> >> > you mentioned pg_is_wal_replay_paluse_requeseted(), which can just\n> >> > return the request status. I am not sure how useful it is.\n> >>\n> >> If it is acceptable that pg_is_wal_replay_paused() makes users wait,\n> >> I'm ok for the current interface. I don't feel the need of\n> >> pg_is_wal_replay_paluse_requeseted().\n> >>\n> >> >\n> >> > 2. Pass an option to pg_is_wal_replay_paused whether to wait for\n> >> > recovery to actually get paused or not.\n> >> >\n> >> > 3. 
Pass an option to pg_wal_replay_pause(), whether to wait for\n> >> > recovery pause or just request and return.\n> >> >\n> >> > I like the option 1, any other opinion on this?\n> >> >\n> >> > > Also, it could wait for recovery_min_apply_delay if it has a valid value. It is possible\n> >> > > that a user set this parameter to a large value, so it could wait for a long time. However,\n> >> > > this will be avoided by calling recoveryPausesHere() or CheckAndSetRecoveryPause() in\n> >> > > recoveryApplyDelay().\n> >> >\n> >> > Right\n> >>\n> >> Is there any reason not to do it?\n> >\n> >\n> >\n> > I think I missed that.. I will do in the next version\n> >\n> \n> In the last patch there were some local changes which I did not add to\n> the patch and it was giving compilation warning so fixed that along\n> with that I have addressed your this comment as well.\n\nThank you for fixing this!\n\nI noticed that, after this fix, the following recoveryPausesHere() might\nbe unnecessary because this test and pause are already done in\nrecoveryApplyDelay(). What do you think about it?\n\n         if (recoveryApplyDelay(xlogreader))\n         {\n             /*\n              * We test for paused recovery again here. If user sets\n              * delayed apply, it may be because they expect to pause\n              * recovery in case of problems, so we must test again\n              * here otherwise pausing during the delay-wait wouldn't\n              * work.\n              */\n             if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState)\n                 recoveryPausesHere(false);\n         }\n\nRegards,\nYugo Nagata\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 21 Jan 2021 21:49:23 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Mon, Jan 18, 2021 at 9:42 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> If it is acceptable that pg_is_wal_replay_paused() makes users wait,\n> I'm ok for the current interface. 
I don't feel the need of\n> pg_is_wal_replay_paluse_requeseted().\n\nAnother idea could be that pg_is_wal_replay_paused() could be changed\nto text, and the string could be either 'paused' or 'pause requested'\nor 'not paused'. That way we'd be returning a direct representation of\nthe state we're keeping in memory. Some of the complexity in this\ndiscussion seems to come from trying to squeeze 3 possibilities into a\nBoolean.\n\nLet's also consider that we don't really know whether the client wants\nus to wait or not, and different clients may want different things, or\nmaybe not, but we don't really know at this point. If we provide an\ninterface that waits, and the client doesn't want to wait but just\nknow the current state, they don't necessarily have any great options.\nIf we provide an interface that doesn't wait, and the client wants to\nwait, it can poll until it gets the answer it wants. Polling can be\ninefficient, but anybody who is writing a tool that uses this should\nbe able to manage an algorithm with some reasonable back-off behavior\n(e.g. try after 10ms, 20ms, keep doubling, max of 5s, or something of\nthat sort), so I'm not sure there's actually any real problem in\npractice. So to me it seems more likely that an interface that is\nbased on waiting will cause difficulty for tool-writers than one that\ndoes not.\n\nOther people may feel differently, of course...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Jan 2021 15:48:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Fri, Jan 22, 2021 at 2:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 18, 2021 at 9:42 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > If it is acceptable that pg_is_wal_replay_paused() makes users wait,\n> > I'm ok for the current interface. 
I don't feel the need of\n> > pg_is_wal_replay_paluse_requeseted().\n>\n> Another idea could be that pg_is_wal_replay_paused() could be changed\n> to text, and the string could be either 'paused' or 'pause requested'\n> or 'not paused'. That way we'd be returning a direct representation of\n> the state we're keeping in memory. Some of the complexity in this\n> discussion seems to come from trying to squeeze 3 possibilities into a\n> Boolean.\n>\n> Let's also consider that we don't really know whether the client wants\n> us to wait or not, and different clients may want different things, or\n> maybe not, but we don't really know at this point. If we provide an\n> interface that waits, and the client doesn't want to wait but just\n> know the current state, they don't necessarily have any great options.\n> If we provide an interface that doesn't wait, and the client wants to\n> wait, it can poll until it gets the answer it wants. Polling can be\n> inefficient, but anybody who is writing a tool that uses this should\n> be able to manage an algorithm with some reasonable back-off behavior\n> (e.g. try after 10ms, 20ms, keep doubling, max of 5s, or something of\n> that sort), so I'm not sure there's actually any real problem in\n> practice. So to me it seems more likely that an interface that is\n> based on waiting will cause difficulty for tool-writers than one that\n> does not.\n>\n> Other people may feel differently, of course...\n\nI think this is the better way of handling this. So +1 from my side,\nI will send an updated patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 23 Jan 2021 09:56:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Thu, Jan 21, 2021 at 6:20 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> On Tue, 19 Jan 2021 21:32:31 +0530\n> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > On Tue, Jan 19, 2021 at 8:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, 19 Jan 2021 at 8:12 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >>\n> > >> On Sun, 17 Jan 2021 11:33:52 +0530\n> > >> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >>\n> > >> > On Thu, Jan 14, 2021 at 6:49 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >> > >\n> > >> > > On Wed, 13 Jan 2021 17:49:43 +0530\n> > >> > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >> > >\n> > >> > > > On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >> > > > >\n> > >> > > > > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >> > > > > >\n> > >> > > > > > On Thu, 10 Dec 2020 11:25:23 +0530\n> > >> > > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >> > > > > >\n> > >> > > > > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > >> > > > > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > >> > > > > > > > > although this setting may not be usual. In addition, some users may set\n> > >> > > > > > > > > recovery_min_apply_delay for a large. 
If such users call pg_is_wal_replay_paused,\n> > >> > > > > > > > > it could wait for a long time.\n> > >> > > > > > > > >\n> > >> > > > > > > > > At least, I think we need some descriptions on document to explain\n> > >> > > > > > > > > pg_is_wal_replay_paused could wait while a time.\n> > >> > > > > > > >\n> > >> > > > > > > > Ok\n> > >> > > > > > >\n> > >> > > > > > > Fixed this, added some comments in .sgml as well as in function header\n> > >> > > > > >\n> > >> > > > > > Thank you for fixing this.\n> > >> > > > > >\n> > >> > > > > > Also, is it better to fix the description of pg_wal_replay_pause from\n> > >> > > > > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> > >> > > > > > pg_is_wal_replay_paused?\n> > >> > > > >\n> > >> > > > > Okay\n> > >> > > > >\n> > >> > > > > >\n> > >> > > > > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > >> > > > > > > > > control whether this waits for recovery to get paused or not? By setting its\n> > >> > > > > > > > > default value to true or false, users can use the old format for calling this\n> > >> > > > > > > > > and the backward compatibility can be maintained.\n> > >> > > > > > > >\n> > >> > > > > > > > So basically, if the wait_recovery_pause flag is false then we will\n> > >> > > > > > > > immediately return true if the pause is requested? I agree that it is\n> > >> > > > > > > > good to have an API to know whether the recovery pause is requested or\n> > >> > > > > > > > not but I am not sure is it good idea to make this API serve both the\n> > >> > > > > > > > purpose? Anyone else have any thoughts on this?\n> > >> > > > > > > >\n> > >> > > > > >\n> > >> > > > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > >> > > > > > this waits recovery to actually get paused. 
If we want to limit this API's\n> > >> > > > > > purpose only to return the pause state, it seems better to fix this to return\n> > >> > > > > > the actual state at the cost of lacking the backward compatibility. If we want\n> > >> > > > > > to know whether pause is requested, we may add a new API like\n> > >> > > > > > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n> > >> > > > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > >> > > > > >\n> > >> > > > > > However, this might be a bikeshedding. If anyone don't care that\n> > >> > > > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> > >> > > > >\n> > >> > > > > I don't think that it will be blocked ever, because\n> > >> > > > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > >> > > > > recovery process will not be stuck on waiting for the WAL.\n> > >> > >\n> > >> > > Yes, there is no stuck on waiting for the WAL. However, it can be stuck during resolving\n> > >> > > a recovery conflict. The process could wait for max_standby_streaming_delay or\n> > >> > > max_standby_archive_delay at most before recovery get completely paused.\n> > >> >\n> > >> > Okay, I agree that it is possible so for handling this we have a\n> > >> > couple of options\n> > >> > 1. pg_is_wal_replay_paused(), interface will wait for recovery to\n> > >> > actually get paused, but user have an option to cancel that. So I\n> > >> > agree that there is currently no option to just know that recovery\n> > >> > pause is requested without waiting for its actually get paused if it\n> > >> > is requested. So one option is we can provide an another interface as\n> > >> > you mentioned pg_is_wal_replay_paluse_requeseted(), which can just\n> > >> > return the request status. 
I am not sure how useful it is.\n> > >>\n> > >> If it is acceptable that pg_is_wal_replay_paused() makes users wait,\n> > >> I'm ok for the current interface. I don't feel the need of\n> > >> pg_is_wal_replay_paluse_requeseted().\n> > >>\n> > >> >\n> > >> > 2. Pass an option to pg_is_wal_replay_paused whether to wait for\n> > >> > recovery to actually get paused or not.\n> > >> >\n> > >> > 3. Pass an option to pg_wal_replay_pause(), whether to wait for\n> > >> > recovery pause or just request and return.\n> > >> >\n> > >> > I like the option 1, any other opinion on this?\n> > >> >\n> > >> > > Also, it could wait for recovery_min_apply_delay if it has a valid value. It is possible\n> > >> > > that a user set this parameter to a large value, so it could wait for a long time. However,\n> > >> > > this will be avoided by calling recoveryPausesHere() or CheckAndSetRecoveryPause() in\n> > >> > > recoveryApplyDelay().\n> > >> >\n> > >> > Right\n> > >>\n> > >> Is there any reason not to do it?\n> > >\n> > >\n> > >\n> > > I think I missed that.. I will do in the next version\n> > >\n> >\n> > In the last patch there were some local changes which I did not add to\n> > the patch and it was giving compilation warning so fixed that along\n> > with that I have addressed your this comment as well.\n>\n> Thank you fixing this!\n>\n> I noticed that, after this fix, the following recoveryPausesHere() might\n> be unnecessary because this test and pause is already done in recoveryApplyDelay\n> What do you think about it?\n>\n> if (recoveryApplyDelay(xlogreader))\n> {\n> /*\n> * We test for paused recovery again here. If user sets\n> * delayed apply, it may be because they expect to pause\n> * recovery in case of problems, so we must test again\n> * here otherwise pausing during the delay-wait wouldn't\n> * work.\n> */\n> if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState)\n> recoveryPausesHere(false);\n> }\n\nYeah, a valid point. 
Thanks!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 23 Jan 2021 10:17:13 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sat, Jan 23, 2021 at 9:56 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Jan 22, 2021 at 2:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Jan 18, 2021 at 9:42 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > If it is acceptable that pg_is_wal_replay_paused() makes users wait,\n> > > I'm ok for the current interface. I don't feel the need of\n> > > pg_is_wal_replay_paluse_requeseted().\n> >\n> > Another idea could be that pg_is_wal_replay_paused() could be changed\n> > to text, and the string could be either 'paused' or 'pause requested'\n> > or 'not paused'. That way we'd be returning a direct representation of\n> > the state we're keeping in memory. Some of the complexity in this\n> > discussion seems to come from trying to squeeze 3 possibilities into a\n> > Boolean.\n> >\n> > Let's also consider that we don't really know whether the client wants\n> > us to wait or not, and different clients may want different things, or\n> > maybe not, but we don't really know at this point. If we provide an\n> > interface that waits, and the client doesn't want to wait but just\n> > know the current state, they don't necessarily have any great options.\n> > If we provide an interface that doesn't wait, and the client wants to\n> > wait, it can poll until it gets the answer it wants. Polling can be\n> > inefficient, but anybody who is writing a tool that uses this should\n> > be able to manage an algorithm with some reasonable back-off behavior\n> > (e.g. try after 10ms, 20ms, keep doubling, max of 5s, or something of\n> > that sort), so I'm not sure there's actually any real problem in\n> > practice. 
So to me it seems more likely that an interface that is\n> > based on waiting will cause difficulty for tool-writers than one that\n> > does not.\n> >\n> > Other people may feel differently, of course...\n>\n> I think this is the better way of handling this. So +1 from my side,\n> I will send an updated patch.\n\nPlease find the patch for the same. I haven't added a test case for\nthis yet. I mean we can write a test case to pause the recovery and\nget the status. But I am not sure that we can really write a reliable\ntest case for 'pause requested' and 'paused'.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 23 Jan 2021 13:36:12 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sat, Jan 23, 2021 at 1:36 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Please find the patch for the same. I haven't added a test case for\n> this yet. I mean we can write a test case to pause the recovery and\n> get the status. But I am not sure that we can really write a reliable\n> test case for 'pause requested' and 'paused'.\n\n+1 to just show the recovery pause state in the output of\npg_is_wal_replay_paused. But, should the function name\n\"pg_is_wal_replay_paused\" be something like\n\"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\nin a function, I expect a boolean output. Others may have better\nthoughts.\n\nIIUC the above change, ensuring the recovery is paused after it's\nrequested lies with the user. IMHO, the main problem we are trying to\nsolve is not met. 
Isn't it better if we have a new function(wait\nversion) along with the above change to pg_is_wal_replay_paused,\nsomething like \"pg_wal_replay_pause_and_wait\" returning true or false?\nThe functionality is pg_wal_replay_pause + wait until it's actually\npaused.\n\nThoughts?\n\nSome comments on the v6 patch:\n\n[1] How about\n+ * This function returns the current state of the recovery pause.\ninstead of\n+ * This api will return the current state of the recovery pause.\n\n[2] Typo - it's \"requested\" + * 'paused requested' - if pause is\nreqested but recovery is not yet paused\n\n[3] I think it's \"+ * 'pause requested'\" instead of \"+ * 'paused requested'\"\n\n[4] Isn't it good to have an example usage and output of the function\nin the documentaion?\n+ Returns recovery pause status, which is <literal>not\npaused</literal> if\n+ pause is not requested, <literal>pause requested</literal> if pause is\n+ requested but recovery is not yet paused and,\n<literal>paused</literal> if\n+ the recovery is actually paused.\n </para></entry>\n\n[5] Is it\n+ * Wait until shared recoveryPause state is set to RECOVERY_NOT_PAUSED.\ninstead of\n+ * Wait until shared recoveryPause is set to RECOVERY_NOT_PAUSED.\n\n[6] As I mentioned upthread, isn't it better to have\n\"IsRecoveryPaused(void)\" than \"RecoveryIsPaused(void)\"?\n\n[7] Can we have the function variable name \"recoveryPause\" as \"state\"\nor \"pauseState? Because that variable context is set by the enum name\nRecoveryPauseState and the function name.\n\n+SetRecoveryPause(RecoveryPauseState recoveryPause)\n\nHere as well, \"recoveryPauseState\" to \"state\"?\n+GetRecoveryPauseState(void)\n {\n- bool recoveryPause;\n+ RecoveryPauseState recoveryPauseState;\n\n[6] Function name RecoveryIsPaused and it's comment \"Check whether the\nrecovery pause is requested.\" doesn't seem to be matching. 
Seems like\nit returns true even when RECOVERY_PAUSE_REQUESTED or RECOVERY_PAUSED.\nShould it return true only when the state is RECOVERY_PAUSE_REQUESTED?\n\nInstead of \"while (RecoveryIsPaused())\", can't we change it to \"while\n(GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\" and remove the\nRecoveryIsPaused()?\n\n[7] Can we change the switch-case in pg_is_wal_replay_paused to\nsomething like below?\n\nDatum\npg_is_wal_replay_paused(PG_FUNCTION_ARGS)\n{\n+ char *state;\n+ /* get the recovery pause state */\n+ switch(GetRecoveryPauseState())\n+ {\n+ case RECOVERY_NOT_PAUSED:\n+ state = \"not paused\";\n+ case RECOVERY_PAUSE_REQUESTED:\n+ state = \"paused requested\";\n+ case RECOVERY_PAUSED:\n+ state = \"paused\";\n+ default:\n+ elog(ERROR, \"invalid recovery pause state\");\n+ }\n+\n+ PG_RETURN_TEXT_P(cstring_to_text(type));\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 23 Jan 2021 16:40:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sat, 23 Jan 2021 at 4:40 PM, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Sat, Jan 23, 2021 at 1:36 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Please find the patch for the same. I haven't added a test case for\n> > this yet. I mean we can write a test case to pause the recovery and\n> > get the status. But I am not sure that we can really write a reliable\n> > test case for 'pause requested' and 'paused'.\n>\n> +1 to just show the recovery pause state in the output of\n> pg_is_wal_replay_paused. But, should the function name\n> \"pg_is_wal_replay_paused\" be something like\n> \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> in a function, I expect a boolean output. 
Others may have better\n> thoughts.\n>\n> IIUC the above change, ensuring the recovery is paused after it's\n> requested lies with the user. IMHO, the main problem we are trying to\n> solve is not met\n\n\nBasically, earlier there was no way for the user to know whether\nrecovery is actually paused or not, because it was always returning true\nafter pause was requested.  Now, we will return whether pause is requested or\nrecovery is actually paused.  So a tool designer who wants to wait for recovery to get\npaused can have a loop and wait until the recovery state reaches\n'paused'.  That will give better control.\n\nI will check other comments and respond along with the patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 24 Jan 2021 07:17:47 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sat, Jan 23, 2021 at 4:40 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, Jan 23, 2021 at 1:36 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Please find the patch for the same.  I haven't added a test case for\n> > this yet.  I mean we can write a test case to pause the recovery and\n> > get the status.  But I am not sure that we can really write a reliable\n> > test case for 'pause requested' and 'paused'.\n>\n> +1 to just show the recovery pause state in the output of\n> pg_is_wal_replay_paused. But, should the function name\n> \"pg_is_wal_replay_paused\" be something like\n> \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> in a function, I expect a boolean output. Others may have better\n> thoughts.\n\nI am fine with the name change, but don't feel that it will be\ncompletely wrong if pg_is_wal_replay_paused returns a different state\nof the recovery pause.\nSo I would like to see what others think, and based on that we can decide.\n\n> IIUC the above change, ensuring the recovery is paused after it's\n> requested lies with the user. IMHO, the main problem we are trying to\n> solve is not met. 
Isn't it better if we have a new function(wait\n> version) along with the above change to pg_is_wal_replay_paused,\n> something like \"pg_wal_replay_pause_and_wait\" returning true or false?\n> The functionality is pg_wal_replay_pause + wait until it's actually\n> paused.\n>\n> Thoughts?\n\nAlready replied in the last mail.\n\n> Some comments on the v6 patch:\n>\n> [1] How about\n> + * This function returns the current state of the recovery pause.\n> instead of\n> + * This api will return the current state of the recovery pause.\n\nOkay\n\n> [2] Typo - it's \"requested\" + * 'paused requested' - if pause is\n> reqested but recovery is not yet paused\n>\n> [3] I think it's \"+ * 'pause requested'\" instead of \"+ * 'paused requested'\"\n\nWhich code does this refer to? Could you put the snippet from the patch?\nHowever, I have found there were 'paused requested' in two places so I\nhave fixed them.\n\n> [4] Isn't it good to have an example usage and output of the function\n> in the documentaion?\n> + Returns recovery pause status, which is <literal>not\n> paused</literal> if\n> + pause is not requested, <literal>pause requested</literal> if pause is\n> + requested but recovery is not yet paused and,\n> <literal>paused</literal> if\n> + the recovery is actually paused.\n> </para></entry>\n\nI will add.\n\n> [5] Is it\n> + * Wait until shared recoveryPause state is set to RECOVERY_NOT_PAUSED.\n> instead of\n> + * Wait until shared recoveryPause is set to RECOVERY_NOT_PAUSED.\n\nOk\n\n> [6] As I mentioned upthread, isn't it better to have\n> \"IsRecoveryPaused(void)\" than \"RecoveryIsPaused(void)\"?\n\nThat is an existing function so I think it's fine to keep the same name.\n\n> [7] Can we have the function variable name \"recoveryPause\" as \"state\"\n> or \"pauseState? 
Because that variable context is set by the enum name\n> RecoveryPauseState and the function name.\n>\n> +SetRecoveryPause(RecoveryPauseState recoveryPause)\n>\n> Here as well, \"recoveryPauseState\" to \"state\"?\n> +GetRecoveryPauseState(void)\n> {\n> - bool recoveryPause;\n> + RecoveryPauseState recoveryPauseState;\n\nI don't think it is required but while changing the patch I will see\nwhether to change or not.\n\n> [6] Function name RecoveryIsPaused and it's comment \"Check whether the\n> recovery pause is requested.\" doesn't seem to be matching. Seems like\n> it returns true even when RECOVERY_PAUSE_REQUESTED or RECOVERY_PAUSED.\n> Should it return true only when the state is RECOVERY_PAUSE_REQUESTED?\n\nCode is doing right, I will change the comments.\n\n> Instead of \"while (RecoveryIsPaused())\", can't we change it to \"while\n> (GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\" and remove the\n> RecoveryIsPaused()?\n\nI think it looks clean with the function\n\n> [7] Can we change the switch-case in pg_is_wal_replay_paused to\n> something like below?\n>\n> Datum\n> pg_is_wal_replay_paused(PG_FUNCTION_ARGS)\n> {\n> + char *state;\n> + /* get the recovery pause state */\n> + switch(GetRecoveryPauseState())\n> + {\n> + case RECOVERY_NOT_PAUSED:\n> + state = \"not paused\";\n> + case RECOVERY_PAUSE_REQUESTED:\n> + state = \"paused requested\";\n> + case RECOVERY_PAUSED:\n> + state = \"paused\";\n> + default:\n> + elog(ERROR, \"invalid recovery pause state\");\n> + }\n> +\n> + PG_RETURN_TEXT_P(cstring_to_text(type));\n\nWhy do you think it is better to use an extra variable?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 24 Jan 2021 11:29:30 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Sun, Jan 24, 2021 at 7:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Sat, 23 Jan 2021 at 4:40 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Sat, Jan 23, 2021 at 1:36 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> > Please find the patch for the same. I haven't added a test case for\n>> > this yet. I mean we can write a test case to pause the recovery and\n>> > get the status. But I am not sure that we can really write a reliable\n>> > test case for 'pause requested' and 'paused'.\n>>\n>> +1 to just show the recovery pause state in the output of\n>> pg_is_wal_replay_paused. But, should the function name\n>> \"pg_is_wal_replay_paused\" be something like\n>> \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n>> in a function, I expect a boolean output. Others may have better\n>> thoughts.\n>>\n>> IIUC the above change, ensuring the recovery is paused after it's\n>> requested lies with the user. IMHO, the main problem we are trying to\n>> solve is not met\n>\n>\n> Basically earlier their was no way for the user yo know whether the recovery is actually paused or not because it was always returning true after pause requested. Now, we will return whether pause requested or actually paused. So > for tool designer who want to wait for recovery to get paused can have a loop and wait until the recovery state reaches to paused. That will give a better control.\n\nI get it and I agree to have that change. My point was whether we can\nhave a new function pg_wal_replay_pause_and_wait that waits until\nrecovery is actually paused ((along with pg_is_wal_replay_paused\nreturning the actual state than a true/false) so that tool developers\ndon't need to have the waiting code outside, if at all they care about\nit? 
Others may have better thoughts than me.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 24 Jan 2021 12:16:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sun, Jan 24, 2021 at 11:29 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Some comments on the v6 patch:\n\n> > [2] Typo - it's \"requested\" + * 'paused requested' - if pause is\n> > reqested but recovery is not yet paused\n\nHere I meant the typo \"reqested\" in \"if pause is reqested but recovery\nis not yet paused\" statement from v6 patch.\n\n> > [3] I think it's \"+ * 'pause requested'\" instead of \"+ * 'paused requested'\"\n>\n> Which code does it refer, can give put the snippet from the patch.\n> However, I have found there were 'paused requested' in two places so I\n> have fixed.\n\nThanks.\n\n> > [6] As I mentioned upthread, isn't it better to have\n> > \"IsRecoveryPaused(void)\" than \"RecoveryIsPaused(void)\"?\n>\n> That is an existing function so I think it's fine to keep the same name.\n\nPersonally, I think the function RecoveryIsPaused itself is\nunnecessary with the new function GetRecoveryPauseState introduced in\nyour patch. IMHO, we can remove it. If not okay, then we are at it,\ncan we at least change the function name to be meaningful\n\"IsRecoveryPaused\"? Others may have better thoughts than me.\n\n> > [7] Can we have the function variable name \"recoveryPause\" as \"state\"\n> > or \"pauseState? 
Because that variable context is set by the enum name\n> > RecoveryPauseState and the function name.\n> >\n> > +SetRecoveryPause(RecoveryPauseState recoveryPause)\n> >\n> > Here as well, \"recoveryPauseState\" to \"state\"?\n> > +GetRecoveryPauseState(void)\n> > {\n> > - bool recoveryPause;\n> > + RecoveryPauseState recoveryPauseState;\n>\n> I don't think it is required but while changing the patch I will see\n> whether to change or not.\n\nIt will be good to change that. I personally don't like structure\nnames and variable names to be the same.\n\n> > [6] Function name RecoveryIsPaused and it's comment \"Check whether the\n> > recovery pause is requested.\" doesn't seem to be matching. Seems like\n> > it returns true even when RECOVERY_PAUSE_REQUESTED or RECOVERY_PAUSED.\n> > Should it return true only when the state is RECOVERY_PAUSE_REQUESTED?\n>\n> Code is doing right, I will change the comments.\n>\n> > Instead of \"while (RecoveryIsPaused())\", can't we change it to \"while\n> > (GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\" and remove the\n> > RecoveryIsPaused()?\n>\n> I think it looks clean with the function\n\nAs I said earlier, I see no use of RecoveryIsPaused() with the\nintroduction of the new function GetRecoveryPauseState(). 
Others may\nhave better thoughts than me.\n\n> > [7] Can we change the switch-case in pg_is_wal_replay_paused to\n> > something like below?\n> >\n> > Datum\n> > pg_is_wal_replay_paused(PG_FUNCTION_ARGS)\n> > {\n> > + char *state;\n> > + /* get the recovery pause state */\n> > + switch(GetRecoveryPauseState())\n> > + {\n> > + case RECOVERY_NOT_PAUSED:\n> > + state = \"not paused\";\n> > + case RECOVERY_PAUSE_REQUESTED:\n> > + state = \"paused requested\";\n> > + case RECOVERY_PAUSED:\n> > + state = \"paused\";\n> > + default:\n> > + elog(ERROR, \"invalid recovery pause state\");\n> > + }\n> > +\n> > + PG_RETURN_TEXT_P(cstring_to_text(type));\n>\n> Why do you think it is better to use an extra variable?\n\nI see no wrong in having PG_RETURN_TEXT_P and cstring_to_text 's in\nevery case statement. But, just to make sure the code looks cleaner, I\nsaid that we can have a local state variable and just one\nPG_RETURN_TEXT_P(cstring_to_text(state));. See some existing functions\nbrin_page_type, hash_page_type, json_typeof,\npg_stat_get_backend_activity, pg_stat_get_backend_wait_event_type,\npg_stat_get_backend_wait_event, get_command_type.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 24 Jan 2021 12:17:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sun, Jan 24, 2021 at 12:16 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sun, Jan 24, 2021 at 7:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > On Sat, 23 Jan 2021 at 4:40 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> On Sat, Jan 23, 2021 at 1:36 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >> > Please find the patch for the same. I haven't added a test case for\n> >> > this yet. I mean we can write a test case to pause the recovery and\n> >> > get the status. 
But I am not sure that we can really write a reliable\n> >> > test case for 'pause requested' and 'paused'.\n> >>\n> >> +1 to just show the recovery pause state in the output of\n> >> pg_is_wal_replay_paused. But, should the function name\n> >> \"pg_is_wal_replay_paused\" be something like\n> >> \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> >> in a function, I expect a boolean output. Others may have better\n> >> thoughts.\n> >>\n> >> IIUC the above change, ensuring the recovery is paused after it's\n> >> requested lies with the user. IMHO, the main problem we are trying to\n> >> solve is not met\n> >\n> >\n> > Basically earlier their was no way for the user yo know whether the recovery is actually paused or not because it was always returning true after pause requested. Now, we will return whether pause requested or actually paused. So > for tool designer who want to wait for recovery to get paused can have a loop and wait until the recovery state reaches to paused. That will give a better control.\n>\n> I get it and I agree to have that change. My point was whether we can\n> have a new function pg_wal_replay_pause_and_wait that waits until\n> recovery is actually paused ((along with pg_is_wal_replay_paused\n> returning the actual state than a true/false) so that tool developers\n> don't need to have the waiting code outside, if at all they care about\n> it? Others may have better thoughts than me.\n\nI think the previous patch was based on that idea where we thought\nthat we can pass an argument to pg_is_wal_replay_paused which can\ndecide whether to wait or return without the wait. I think this\nversion looks better to me where we give the status instead of\nwaiting. I am not sure whether we want another version of\npg_wal_replay_pause which will wait for actually it to get paused. 
I\nmean there is always a scope to include the functionality in the\ndatabase which can be achieved by the tool, but this patch was trying\nto solve the problem that there was no way to know the status so I\nthink returning the correct status should be the scope of this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 24 Jan 2021 14:26:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sun, Jan 17, 2021 at 5:19 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Jan 16, 2021 at 8:59 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jan 13, 2021 at 9:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > >\n> > > > > On Thu, 10 Dec 2020 11:25:23 +0530\n> > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > > > > > > although this setting may not be usual. In addition, some users may set\n> > > > > > > > recovery_min_apply_delay for a large. 
If such users call pg_is_wal_replay_paused,\n> > > > > > > > it could wait for a long time.\n> > > > > > > >\n> > > > > > > > At least, I think we need some descriptions on document to explain\n> > > > > > > > pg_is_wal_replay_paused could wait while a time.\n> > > > > > >\n> > > > > > > Ok\n> > > > > >\n> > > > > > Fixed this, added some comments in .sgml as well as in function header\n> > > > >\n> > > > > Thank you for fixing this.\n> > > > >\n> > > > > Also, is it better to fix the description of pg_wal_replay_pause from\n> > > > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> > > > > pg_is_wal_replay_paused?\n> > > >\n> > > > Okay\n> > > >\n> > > > >\n> > > > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > > > > > > control whether this waits for recovery to get paused or not? By setting its\n> > > > > > > > default value to true or false, users can use the old format for calling this\n> > > > > > > > and the backward compatibility can be maintained.\n> > > > > > >\n> > > > > > > So basically, if the wait_recovery_pause flag is false then we will\n> > > > > > > immediately return true if the pause is requested? I agree that it is\n> > > > > > > good to have an API to know whether the recovery pause is requested or\n> > > > > > > not but I am not sure is it good idea to make this API serve both the\n> > > > > > > purpose? Anyone else have any thoughts on this?\n> > > > > > >\n> > > > >\n> > > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > > > > this waits recovery to actually get paused. If we want to limit this API's\n> > > > > purpose only to return the pause state, it seems better to fix this to return\n> > > > > the actual state at the cost of lacking the backward compatibility. If we want\n> > > > > to know whether pause is requested, we may add a new API like\n> > > > > pg_is_wal_replay_paluse_requeseted(). 
Also, if we want to wait recovery to actually\n> > > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > > > >\n> > > > > However, this might be a bikeshedding. If anyone don't care that\n> > > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> > > >\n> > > > I don't think that it will be blocked ever, because\n> > > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > > > recovery process will not be stuck on waiting for the WAL.\n> > > >\n> > > > > > > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > > > > > > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n> > > > >\n> > > > > How about this fix? I think users may want to cancel pg_is_wal_replay_paused() during\n> > > > > this is blocking.\n> > > >\n> > > > Yeah, we can do this. I will send the updated patch after putting\n> > > > some more thought into these comments. Thanks again for the feedback.\n> > > >\n> > >\n> > > Please find the updated patch.\n> >\n> > I've looked at the patch. Here are review comments:\n> >\n> > + /* Recovery pause state */\n> > + RecoveryPauseState recoveryPause;\n> >\n> > Now that the value can have tri-state, how about renaming it to\n> > recoveryPauseState?\n>\n> This makes sense to me.\n>\n> > ---\n> > bool\n> > RecoveryIsPaused(void)\n> > +{\n> > + bool recoveryPause;\n> > +\n> > + SpinLockAcquire(&XLogCtl->info_lck);\n> > + recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) ?\n> > true : false;\n> > + SpinLockRelease(&XLogCtl->info_lck);\n> > +\n> > + return recoveryPause;\n> > +}\n> > +\n> > +bool\n> > +RecoveryPauseRequested(void)\n> > {\n> > bool recoveryPause;\n> >\n> > SpinLockAcquire(&XLogCtl->info_lck);\n> > - recoveryPause = XLogCtl->recoveryPause;\n> > + recoveryPause = (XLogCtl->recoveryPause !=\n> > RECOVERY_IN_PROGRESS) ? 
true : false;\n> > SpinLockRelease(&XLogCtl->info_lck);\n> >\n> > return recoveryPause;\n> > }\n> >\n> > We can write like recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED);\n>\n> In RecoveryPauseRequested, we just want to know whether the pause is\n> requested or not, even if the pause requested and not yet pause then\n> also we want to return true. So how\n> recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) will work?\n\nSorry for the late response.\n\nWhat I wanted to say is that the ternary operator is not necessary in\nthose cases.\n\nThe codes,\n\nrecoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) ? true : false;\nrecoveryPause = (XLogCtl->recoveryPause != RECOVERY_IN_PROGRESS) ? true : false;\n\nare equivalent with,\n\nrecoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED);\nrecoveryPause = (XLogCtl->recoveryPause != RECOVERY_IN_PROGRESS);\n\nrespectively.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 25 Jan 2021 09:41:34 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "At Sun, 24 Jan 2021 14:26:08 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Sun, Jan 24, 2021 at 12:16 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Sun, Jan 24, 2021 at 7:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > On Sat, 23 Jan 2021 at 4:40 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >>\n> > >> On Sat, Jan 23, 2021 at 1:36 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >> > Please find the patch for the same. I haven't added a test case for\n> > >> > this yet. I mean we can write a test case to pause the recovery and\n> > >> > get the status. 
But I am not sure that we can really write a reliable\n> > >> > test case for 'pause requested' and 'paused'.\n> > >>\n> > >> +1 to just show the recovery pause state in the output of\n> > >> pg_is_wal_replay_paused. But, should the function name\n> > >> \"pg_is_wal_replay_paused\" be something like\n> > >> \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > >> in a function, I expect a boolean output. Others may have better\n> > >> thoughts.\n> > >>\n> > >> IIUC the above change, ensuring the recovery is paused after it's\n> > >> requested lies with the user. IMHO, the main problem we are trying to\n> > >> solve is not met\n> > >\n> > >\n> > > Basically earlier their was no way for the user yo know whether the recovery is actually paused or not because it was always returning true after pause requested. Now, we will return whether pause requested or actually paused. So > for tool designer who want to wait for recovery to get paused can have a loop and wait until the recovery state reaches to paused. That will give a better control.\n> >\n> > I get it and I agree to have that change. My point was whether we can\n> > have a new function pg_wal_replay_pause_and_wait that waits until\n> > recovery is actually paused ((along with pg_is_wal_replay_paused\n> > returning the actual state than a true/false) so that tool developers\n> > don't need to have the waiting code outside, if at all they care about\n> > it? Others may have better thoughts than me.\n> \n> I think the previous patch was based on that idea where we thought\n> that we can pass an argument to pg_is_wal_replay_paused which can\n> decide whether to wait or return without the wait. I think this\n> version looks better to me where we give the status instead of\n> waiting. I am not sure whether we want another version of\n> pg_wal_replay_pause which will wait for actually it to get paused. 
I\n> mean there is always a scope to include the functionality in the\n> database which can be achieved by the tool, but this patch was trying\n> to solve the problem that there was no way to know the status so I\n> think returning the correct status should be the scope of this.\n\nI understand that the requirement here is that no record is replayed\nafter pg_wal_replay_pause() is returned, or pg_is_wal_replay_paused()\nreturns true, and delays taken while recovery don't delay the state\nchange. Those requirements are really synchronous.\n\nOn the other hand the machinery is designed to be asynchronous.\n\n>\t * Note that we intentionally don't take the info_lck spinlock\n>\t * here. We might therefore read a slightly stale value of\n>\t * the recoveryPause flag, but it can't be very stale (no\n>\t * worse than the last spinlock we did acquire). Since a\n>\t * pause request is a pretty asynchronous thing anyway,\n>\t * possibly responding to it one WAL record later than we\n>\t * otherwise would is a minor issue, so it doesn't seem worth\n>\t * adding another spinlock cycle to prevent that.\n\nAs a result, this patch tries to introduce several new checkpoints\nat some delaying points so that those waits can find a pause request in a\ntimely manner. I think we had better use locking (or atomics) for the\ninformation instead of such scattered checkpoints if we expect that\nmachinery to work in such a synchronous manner.\n\nThat would make the tri-state state variable and many checkpoints\nunnecessary. Maybe.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 25 Jan 2021 12:12:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Mon, Jan 25, 2021 at 6:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sun, Jan 17, 2021 at 5:19 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Sat, Jan 16, 2021 at 8:59 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 13, 2021 at 9:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jan 13, 2021 at 3:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Jan 13, 2021 at 3:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > > >\n> > > > > > On Thu, 10 Dec 2020 11:25:23 +0530\n> > > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > > > > However, I wonder users don't expect pg_is_wal_replay_paused to wait.\n> > > > > > > > > Especially, if max_standby_streaming_delay is -1, this will be blocked forever,\n> > > > > > > > > although this setting may not be usual. In addition, some users may set\n> > > > > > > > > recovery_min_apply_delay for a large. If such users call pg_is_wal_replay_paused,\n> > > > > > > > > it could wait for a long time.\n> > > > > > > > >\n> > > > > > > > > At least, I think we need some descriptions on document to explain\n> > > > > > > > > pg_is_wal_replay_paused could wait while a time.\n> > > > > > > >\n> > > > > > > > Ok\n> > > > > > >\n> > > > > > > Fixed this, added some comments in .sgml as well as in function header\n> > > > > >\n> > > > > > Thank you for fixing this.\n> > > > > >\n> > > > > > Also, is it better to fix the description of pg_wal_replay_pause from\n> > > > > > \"Pauses recovery.\" to \"Request to pause recovery.\" in according with\n> > > > > > pg_is_wal_replay_paused?\n> > > > >\n> > > > > Okay\n> > > > >\n> > > > > >\n> > > > > > > > > Also, how about adding a new boolean argument to pg_is_wal_replay_paused to\n> > > > > > > > > control whether this waits for recovery to get paused or not? 
By setting its\n> > > > > > > > > default value to true or false, users can use the old format for calling this\n> > > > > > > > > and the backward compatibility can be maintained.\n> > > > > > > >\n> > > > > > > > So basically, if the wait_recovery_pause flag is false then we will\n> > > > > > > > immediately return true if the pause is requested? I agree that it is\n> > > > > > > > good to have an API to know whether the recovery pause is requested or\n> > > > > > > > not but I am not sure is it good idea to make this API serve both the\n> > > > > > > > purpose? Anyone else have any thoughts on this?\n> > > > > > > >\n> > > > > >\n> > > > > > I think the current pg_is_wal_replay_paused() already has another purpose;\n> > > > > > this waits recovery to actually get paused. If we want to limit this API's\n> > > > > > purpose only to return the pause state, it seems better to fix this to return\n> > > > > > the actual state at the cost of lacking the backward compatibility. If we want\n> > > > > > to know whether pause is requested, we may add a new API like\n> > > > > > pg_is_wal_replay_paluse_requeseted(). Also, if we want to wait recovery to actually\n> > > > > > get paused, we may add an option to pg_wal_replay_pause() for this purpose.\n> > > > > >\n> > > > > > However, this might be a bikeshedding. If anyone don't care that\n> > > > > > pg_is_wal_replay_paused() can make user wait for a long time, I don't care either.\n> > > > >\n> > > > > I don't think that it will be blocked ever, because\n> > > > > pg_wal_replay_pause is sending the WakeupRecovery() which means the\n> > > > > recovery process will not be stuck on waiting for the WAL.\n> > > > >\n> > > > > > > > > As another comment, while pg_is_wal_replay_paused is blocking, I can not cancel\n> > > > > > > > > the query. I think CHECK_FOR_INTERRUPTS() is necessary in the waiting loop.\n> > > > > >\n> > > > > > How about this fix? 
I think users may want to cancel pg_is_wal_replay_paused() during\n> > > > > > this is blocking.\n> > > > >\n> > > > > Yeah, we can do this. I will send the updated patch after putting\n> > > > > some more thought into these comments. Thanks again for the feedback.\n> > > > >\n> > > >\n> > > > Please find the updated patch.\n> > >\n> > > I've looked at the patch. Here are review comments:\n> > >\n> > > + /* Recovery pause state */\n> > > + RecoveryPauseState recoveryPause;\n> > >\n> > > Now that the value can have tri-state, how about renaming it to\n> > > recoveryPauseState?\n> >\n> > This makes sense to me.\n> >\n> > > ---\n> > > bool\n> > > RecoveryIsPaused(void)\n> > > +{\n> > > + bool recoveryPause;\n> > > +\n> > > + SpinLockAcquire(&XLogCtl->info_lck);\n> > > + recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) ?\n> > > true : false;\n> > > + SpinLockRelease(&XLogCtl->info_lck);\n> > > +\n> > > + return recoveryPause;\n> > > +}\n> > > +\n> > > +bool\n> > > +RecoveryPauseRequested(void)\n> > > {\n> > > bool recoveryPause;\n> > >\n> > > SpinLockAcquire(&XLogCtl->info_lck);\n> > > - recoveryPause = XLogCtl->recoveryPause;\n> > > + recoveryPause = (XLogCtl->recoveryPause !=\n> > > RECOVERY_IN_PROGRESS) ? true : false;\n> > > SpinLockRelease(&XLogCtl->info_lck);\n> > >\n> > > return recoveryPause;\n> > > }\n> > >\n> > > We can write like recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED);\n> >\n> > In RecoveryPauseRequested, we just want to know whether the pause is\n> > requested or not, even if the pause requested and not yet pause then\n> > also we want to return true. So how\n> > recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) will work?\n>\n> Sorry for the late response.\n>\n> What I wanted to say is that the ternary operator is not necessary in\n> those cases.\n>\n> The codes,\n>\n> recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED) ? true : false;\n> recoveryPause = (XLogCtl->recoveryPause != RECOVERY_IN_PROGRESS) ? 
true : false;\n>\n> are equivalent with,\n>\n> recoveryPause = (XLogCtl->recoveryPause == RECOVERY_PAUSED);\n> recoveryPause = (XLogCtl->recoveryPause != RECOVERY_IN_PROGRESS);\n>\n> respectively.\n>\n\nYeah, absolutely correct. Will change.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 09:53:40 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Mon, Jan 25, 2021 at 8:42 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sun, 24 Jan 2021 14:26:08 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Sun, Jan 24, 2021 at 12:16 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Sun, Jan 24, 2021 at 7:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > On Sat, 23 Jan 2021 at 4:40 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >>\n> > > >> On Sat, Jan 23, 2021 at 1:36 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >> > Please find the patch for the same. I haven't added a test case for\n> > > >> > this yet. I mean we can write a test case to pause the recovery and\n> > > >> > get the status. But I am not sure that we can really write a reliable\n> > > >> > test case for 'pause requested' and 'paused'.\n> > > >>\n> > > >> +1 to just show the recovery pause state in the output of\n> > > >> pg_is_wal_replay_paused. But, should the function name\n> > > >> \"pg_is_wal_replay_paused\" be something like\n> > > >> \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > >> in a function, I expect a boolean output. Others may have better\n> > > >> thoughts.\n> > > >>\n> > > >> IIUC the above change, ensuring the recovery is paused after it's\n> > > >> requested lies with the user. 
IMHO, the main problem we are trying to\n> > > >> solve is not met\n> > > >\n> > > >\n> > > > Basically earlier their was no way for the user yo know whether the recovery is actually paused or not because it was always returning true after pause requested. Now, we will return whether pause requested or actually paused. So > for tool designer who want to wait for recovery to get paused can have a loop and wait until the recovery state reaches to paused. That will give a better control.\n> > >\n> > > I get it and I agree to have that change. My point was whether we can\n> > > have a new function pg_wal_replay_pause_and_wait that waits until\n> > > recovery is actually paused ((along with pg_is_wal_replay_paused\n> > > returning the actual state than a true/false) so that tool developers\n> > > don't need to have the waiting code outside, if at all they care about\n> > > it? Others may have better thoughts than me.\n> >\n> > I think the previous patch was based on that idea where we thought\n> > that we can pass an argument to pg_is_wal_replay_paused which can\n> > decide whether to wait or return without the wait. I think this\n> > version looks better to me where we give the status instead of\n> > waiting. I am not sure whether we want another version of\n> > pg_wal_replay_pause which will wait for actually it to get paused. I\n> > mean there is always a scope to include the functionality in the\n> > database which can be achieved by the tool, but this patch was trying\n> > to solve the problem that there was no way to know the status so I\n> > think returning the correct status should be the scope of this.\n>\n> I understand that the requirement here is that no record is replayed\n> after pg_wal_replay_pause() is returned, or pg_is_wal_replay_paused()\n> returns true, and delays taken while recovery don't delay the state\n> change. 
Those requirements are really synchronous.\n>\n> On the other hand the machinery is designed to be asynchronous.\n>\n> > * Note that we intentionally don't take the info_lck spinlock\n> > * here. We might therefore read a slightly stale value of\n> > * the recoveryPause flag, but it can't be very stale (no\n> > * worse than the last spinlock we did acquire). Since a\n> > * pause request is a pretty asynchronous thing anyway,\n> > * possibly responding to it one WAL record later than we\n> > * otherwise would is a minor issue, so it doesn't seem worth\n> > * adding another spinlock cycle to prevent that.\n>\n> As a result, this patch tries to introduce several new checkpoints\n> at some delaying points so that those waits can find a pause request in a\n> timely manner. I think we had better use locking (or atomics) for the\n> information instead of such scattered checkpoints if we expect that\n> machinery to work in such a synchronous manner.\n>\n> That would make the tri-state state variable and many checkpoints\n> unnecessary. Maybe.\n\nI don't think the intention was to make it synchronous. I think\nthe main intention was that pg_is_wal_replay_paused can return us the\ncorrect state; in short, the user can know whether any more WAL will\nbe replayed after pg_is_wal_replay_paused returns true or some other\nstate. I agree that along with that we have also introduced some\nextra checkpoints where the recovery process is waiting for WAL and\napply delay, and from pg_wal_replay_pause we had sent a signal to\nwake up the recovery process. So I am not sure whether it is worth adding the\nlock/atomic variable to make this synchronous. Any other thoughts on\nthis?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 10:05:19 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "At Mon, 25 Jan 2021 10:05:19 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Mon, Jan 25, 2021 at 8:42 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Sun, 24 Jan 2021 14:26:08 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > > On Sun, Jan 24, 2021 at 12:16 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > On Sun, Jan 24, 2021 at 7:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > On Sat, 23 Jan 2021 at 4:40 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > >>\n> > > > >> On Sat, Jan 23, 2021 at 1:36 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >> > Please find the patch for the same. I haven't added a test case for\n> > > > >> > this yet. I mean we can write a test case to pause the recovery and\n> > > > >> > get the status. But I am not sure that we can really write a reliable\n> > > > >> > test case for 'pause requested' and 'paused'.\n> > > > >>\n> > > > >> +1 to just show the recovery pause state in the output of\n> > > > >> pg_is_wal_replay_paused. But, should the function name\n> > > > >> \"pg_is_wal_replay_paused\" be something like\n> > > > >> \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > > >> in a function, I expect a boolean output. Others may have better\n> > > > >> thoughts.\n> > > > >>\n> > > > >> IIUC the above change, ensuring the recovery is paused after it's\n> > > > >> requested lies with the user. IMHO, the main problem we are trying to\n> > > > >> solve is not met\n> > > > >\n> > > > >\n> > > > > Basically earlier their was no way for the user yo know whether the recovery is actually paused or not because it was always returning true after pause requested. Now, we will return whether pause requested or actually paused. So > for tool designer who want to wait for recovery to get paused can have a loop and wait until the recovery state reaches to paused. 
That will give a better control.\n> > > >\n> > > > I get it and I agree to have that change. My point was whether we can\n> > > > have a new function pg_wal_replay_pause_and_wait that waits until\n> > > > recovery is actually paused ((along with pg_is_wal_replay_paused\n> > > > returning the actual state than a true/false) so that tool developers\n> > > > don't need to have the waiting code outside, if at all they care about\n> > > > it? Others may have better thoughts than me.\n> > >\n> > > I think the previous patch was based on that idea where we thought\n> > > that we can pass an argument to pg_is_wal_replay_paused which can\n> > > decide whether to wait or return without the wait. I think this\n> > > version looks better to me where we give the status instead of\n> > > waiting. I am not sure whether we want another version of\n> > > pg_wal_replay_pause which will wait for actually it to get paused. I\n> > > mean there is always a scope to include the functionality in the\n> > > database which can be achieved by the tool, but this patch was trying\n> > > to solve the problem that there was no way to know the status so I\n> > > think returning the correct status should be the scope of this.\n> >\n> > I understand that the requirement here is that no record is replayed\n> > after pg_wal_replay_pause() is returned, or pg_is_wal_replay_paused()\n> > returns true, and delays taken while recovery don't delay the state\n> > change. That requirements are really synchronous.\n> >\n> > On the other hand the machinery is designed to be asynchronous.\n> >\n> > > * Note that we intentionally don't take the info_lck spinlock\n> > > * here. We might therefore read a slightly stale value of\n> > > * the recoveryPause flag, but it can't be very stale (no\n> > > * worse than the last spinlock we did acquire). 
Since a\n> > > * pause request is a pretty asynchronous thing anyway,\n> > > * possibly responding to it one WAL record later than we\n> > > * otherwise would is a minor issue, so it doesn't seem worth\n> > > * adding another spinlock cycle to prevent that.\n> >\n> > As a result, this patch tries to introduce several new checkpoints\n> > at some delaying points so that those waits can find a pause request in a\n> > timely manner. I think we had better use locking (or atomics) for the\n> > information instead of such scattered checkpoints if we expect that\n> > machinery to work in such a synchronous manner.\n> >\n> > That would make the tri-state state variable and many checkpoints\n> > unnecessary. Maybe.\n> \n> I don't think the intention was to make it synchronous. I think\n> the main intention was that pg_is_wal_replay_paused can return us the\n> correct state; in short, the user can know whether any more WAL will\n> be replayed after pg_is_wal_replay_paused returns true or some other\n> state. I agree that along with that we have also introduced some\n\nI meant that kind of correctness in a time-series by using the word\n\"synchronous\". So it can be implemented both by adopting many\ncheckpoints and by just making the state-change synchronous.\n\n> extra checkpoints where the recovery process is waiting for WAL and\n> apply delay, and from pg_wal_replay_pause we had sent a signal to\n> wake up the recovery process. So I am not sure whether it is worth adding the\n\n> lock/atomic variable to make this synchronous. Any other thoughts on\n> this?\n\n+1\n\nThere's only one reader process (startup) and at most (in sane\nusage) one writer process (the caller to pg_wal_replay_pause) so the\nchance of conflicting is negligibly low. 
However, I'm not sure how\nmuch penalty non-conflicting atomic updates/reads impose on\nperformance.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 25 Jan 2021 14:23:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sun, Jan 24, 2021 at 12:17 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sun, Jan 24, 2021 at 11:29 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > Some comments on the v6 patch:\n>\n> > > [2] Typo - it's \"requested\" + * 'paused requested' - if pause is\n> > > reqested but recovery is not yet paused\n>\n> Here I meant the typo \"reqested\" in \"if pause is reqested but recovery\n> is not yet paused\" statement from v6 patch.\n\nOk\n\n> > > [3] I think it's \"+ * 'pause requested'\" instead of \"+ * 'paused requested'\"\n> >\n> > Which code does it refer to? Can you put the snippet from the patch?\n> > However, I have found 'paused requested' in two places so I\n> > have fixed them.\n>\n> Thanks.\n>\n> > > [6] As I mentioned upthread, isn't it better to have\n> > > \"IsRecoveryPaused(void)\" than \"RecoveryIsPaused(void)\"?\n> >\n> > That is an existing function so I think it's fine to keep the same name.\n>\n> Personally, I think the function RecoveryIsPaused itself is\n> unnecessary with the new function GetRecoveryPauseState introduced in\n> your patch. IMHO, we can remove it. If not okay, then while we are at it,\n> can we at least change the function name to be meaningful\n> \"IsRecoveryPaused\"? Others may have better thoughts than me.\n\nI have removed this function.\n\n> > > [7] Can we have the function variable name \"recoveryPause\" as \"state\"\n> > > or \"pauseState? 
Because that variable context is set by the enum name\n> > > RecoveryPauseState and the function name.\n> > >\n> > > +SetRecoveryPause(RecoveryPauseState recoveryPause)\n> > >\n> > > Here as well, \"recoveryPauseState\" to \"state\"?\n> > > +GetRecoveryPauseState(void)\n> > > {\n> > > - bool recoveryPause;\n> > > + RecoveryPauseState recoveryPauseState;\n> >\n> > I don't think it is required but while changing the patch I will see\n> > whether to change or not.\n>\n> It will be good to change that. I personally don't like structure\n> names and variable names to be the same.\n\nChanged to state\n\n> > > [6] Function name RecoveryIsPaused and it's comment \"Check whether the\n> > > recovery pause is requested.\" doesn't seem to be matching. Seems like\n> > > it returns true even when RECOVERY_PAUSE_REQUESTED or RECOVERY_PAUSED.\n> > > Should it return true only when the state is RECOVERY_PAUSE_REQUESTED?\n> >\n> > Code is doing right, I will change the comments.\n> >\n> > > Instead of \"while (RecoveryIsPaused())\", can't we change it to \"while\n> > > (GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\" and remove the\n> > > RecoveryIsPaused()?\n> >\n> > I think it looks clean with the function\n>\n> As I said earlier, I see no use of RecoveryIsPaused() with the\n> introduction of the new function GetRecoveryPauseState(). 
Others may\n> have better thoughts than me.\n>\n> > > [7] Can we change the switch-case in pg_is_wal_replay_paused to\n> > > something like below?\n> > >\n> > > Datum\n> > > pg_is_wal_replay_paused(PG_FUNCTION_ARGS)\n> > > {\n> > > + char *state;\n> > > + /* get the recovery pause state */\n> > > + switch(GetRecoveryPauseState())\n> > > + {\n> > > + case RECOVERY_NOT_PAUSED:\n> > > + state = \"not paused\";\n> > > + case RECOVERY_PAUSE_REQUESTED:\n> > > + state = \"paused requested\";\n> > > + case RECOVERY_PAUSED:\n> > > + state = \"paused\";\n> > > + default:\n> > > + elog(ERROR, \"invalid recovery pause state\");\n> > > + }\n> > > +\n> > > + PG_RETURN_TEXT_P(cstring_to_text(type));\n> >\n> > Why do you think it is better to use an extra variable?\n>\n> I see no wrong in having PG_RETURN_TEXT_P and cstring_to_text 's in\n> every case statement. But, just to make sure the code looks cleaner, I\n> said that we can have a local state variable and just one\n> PG_RETURN_TEXT_P(cstring_to_text(state));. See some existing functions\n> brin_page_type, hash_page_type, json_typeof,\n> pg_stat_get_backend_activity, pg_stat_get_backend_wait_event_type,\n> pg_stat_get_backend_wait_event, get_command_type.\n>\n\nI have changed as per other functions for consistency.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 25 Jan 2021 14:53:18 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Mon, Jan 25, 2021 at 2:53 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have changed as per other functions for consistency.\n\nThanks for the v7 patch. 
Here are some quick comments on it:\n\n[1] I think we need to change the return value from boolean to text in the\ndocumentation:\n <primary>pg_is_wal_replay_paused</primary>\n </indexterm>\n <function>pg_is_wal_replay_paused</function> ()\n <returnvalue>boolean</returnvalue>\n </para>\n\n[2] Do we intentionally ignore the return value of the below function? If\nyes, can we change the return type to void and change the function\ncomment? If we do care about the return value, shouldn't we use it?\n\nstatic bool recoveryApplyDelay(XLogReaderState *record);\n+ recoveryApplyDelay(xlogreader);\n\n[3] Although it's not necessary, I just thought it would be good to\nhave an example for the new output of pg_is_wal_replay_paused in the\ndocumentation, something like below for brin_page_type:\n\n<screen>\ntest=# SELECT brin_page_type(get_raw_page('brinidx', 0));\n brin_page_type\n----------------\n meta\n</screen>\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 19:10:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Mon, 25 Jan 2021 14:53:18 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n \n> I have changed as per other functions for consistency.\n\nThank you for updating the patch. Here are a few comments:\n\n\n(1)\n\n-\t\t\tSetRecoveryPause(true);\n+\t\t\tSetRecoveryPause(RECOVERY_PAUSE_REQUESTED);\n \n \t\t\tereport(LOG\n \t\t\t\t\t(errmsg(\"recovery has paused\"),\n \t\t\t\t\t errdetail(\"If recovery is unpaused, the server will shut down.\"),\n \t\t\t\t\t errhint(\"You can then restart the server after making the necessary configuration changes.\")));\n \n-\t\t\twhile (RecoveryIsPaused())\n+\t\t\twhile (GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\n \t\t\t{\n \t\t\t\tHandleStartupProcInterrupts();\n\nThis fix would be required for code added by the following commit. 
\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=15251c0a60be76eedee74ac0e94b433f9acca5af\n\nDue to this, the recovery could get paused after the configuration\nchange in the primary. However, after applying this patch,\npg_is_wal_replay_paused returns \"pause requested\" although it should\nreturn \"paused\".\n\nTo fix this, we must pass RECOVERY_PAUSED to SetRecoveryPause() instead\nof RECOVERY_PAUSE_REQUESTED. Or, we can call CheckAndSetRecoveryPause()\nin the loop like recoveryPausesHere(), but this seems redundant.\n\n\n(2)\n-\twhile (RecoveryIsPaused())\n+\twhile (GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\n \t{\n+\n \t\tHandleStartupProcInterrupts();\n\nThough it is trivial, I think the new line after the brace is unnecessary.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 26 Jan 2021 00:57:04 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> +1 to just show the recovery pause state in the output of\n> pg_is_wal_replay_paused. But, should the function name\n> \"pg_is_wal_replay_paused\" be something like\n> \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> in a function, I expect a boolean output. Others may have better\n> thoughts.\n\nMaybe we should leave the existing function pg_is_wal_replay_paused()\nalone and add a new one with the name you suggest that returns text.\nThat would create less burden for tool authors.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 12:00:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > +1 to just show the recovery pause state in the output of\n> > pg_is_wal_replay_paused. But, should the function name\n> > \"pg_is_wal_replay_paused\" be something like\n> > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > in a function, I expect a boolean output. Others may have better\n> > thoughts.\n>\n> Maybe we should leave the existing function pg_is_wal_replay_paused()\n> alone and add a new one with the name you suggest that returns text.\n> That would create less burden for tool authors.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 27 Jan 2021 16:20:02 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > +1 to just show the recovery pause state in the output of\n> > > pg_is_wal_replay_paused. But, should the function name\n> > > \"pg_is_wal_replay_paused\" be something like\n> > > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > in a function, I expect a boolean output. 
Others may have better\n> > > thoughts.\n> >\n> > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > alone and add a new one with the name you suggest that returns text.\n> > That would create less burden for tool authors.\n>\n> +1\n>\n\nYeah, we can do that, I will send an updated patch soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jan 2021 13:29:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, 27 Jan 2021 13:29:23 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > +1 to just show the recovery pause state in the output of\n> > > > pg_is_wal_replay_paused. But, should the function name\n> > > > \"pg_is_wal_replay_paused\" be something like\n> > > > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > > in a function, I expect a boolean output. Others may have better\n> > > > thoughts.\n> > >\n> > > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > > alone and add a new one with the name you suggest that returns text.\n> > > That would create less burden for tool authors.\n> >\n> > +1\n> >\n> \n> Yeah, we can do that, I will send an updated patch soon.\n\nThis means pg_is_wal_replay_paused is left without any change and this\nreturns whether pause is requested or not? 
If so, it seems good to modify\nthe documentation of this function in order to note that this could not\nreturn the actual pause state.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 27 Jan 2021 17:34:43 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Jan 27, 2021 at 2:06 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> On Wed, 27 Jan 2021 13:29:23 +0530\n> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > >\n> > > > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > +1 to just show the recovery pause state in the output of\n> > > > > pg_is_wal_replay_paused. But, should the function name\n> > > > > \"pg_is_wal_replay_paused\" be something like\n> > > > > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > > > in a function, I expect a boolean output. Others may have better\n> > > > > thoughts.\n> > > >\n> > > > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > > > alone and add a new one with the name you suggest that returns text.\n> > > > That would create less burden for tool authors.\n> > >\n> > > +1\n> > >\n> >\n> > Yeah, we can do that, I will send an updated patch soon.\n>\n> This means pg_is_wal_replay_paused is left without any change and this\n> returns whether pause is requested or not? If so, it seems good to modify\n> the documentation of this function in order to note that this could not\n> return the actual pause state.\n\nYes, we can say that it will return true if the replay pause is\nrequested. 
I am changing that in my new patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Jan 2021 14:28:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Jan 27, 2021 at 2:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jan 27, 2021 at 2:06 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > On Wed, 27 Jan 2021 13:29:23 +0530\n> > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > >\n> > > > > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > +1 to just show the recovery pause state in the output of\n> > > > > > pg_is_wal_replay_paused. But, should the function name\n> > > > > > \"pg_is_wal_replay_paused\" be something like\n> > > > > > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > > > > in a function, I expect a boolean output. Others may have better\n> > > > > > thoughts.\n> > > > >\n> > > > > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > > > > alone and add a new one with the name you suggest that returns text.\n> > > > > That would create less burden for tool authors.\n> > > >\n> > > > +1\n> > > >\n> > >\n> > > Yeah, we can do that, I will send an updated patch soon.\n> >\n> > This means pg_is_wal_replay_paused is left without any change and this\n> > returns whether pause is requested or not? If so, it seems good to modify\n> > the documentation of this function in order to note that this could not\n> > return the actual pause state.\n>\n> Yes, we can say that it will return true if the replay pause is\n> requested. 
I am changing that in my new patch.\n\nI have modified the patch, changes\n\n- I have added a new interface pg_get_wal_replay_pause_state to get\nthe pause request state\n- Now, we are not waiting for the recovery to actually get paused so I\nthink it doesn't make sense to put a lot of checkpoints to check the\npause requested so I have removed that check from the\nrecoveryApplyDelay but I think it better we still keep that check in\nthe WaitForWalToBecomeAvailable because it can wait forever before the\nnext wal get available.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 28 Jan 2021 09:55:42 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, 28 Jan 2021 09:55:42 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Wed, Jan 27, 2021 at 2:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jan 27, 2021 at 2:06 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > On Wed, 27 Jan 2021 13:29:23 +0530\n> > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > > On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > >\n> > > > > > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > > +1 to just show the recovery pause state in the output of\n> > > > > > > pg_is_wal_replay_paused. But, should the function name\n> > > > > > > \"pg_is_wal_replay_paused\" be something like\n> > > > > > > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > > > > > in a function, I expect a boolean output. 
Others may have better\n> > > > > > > thoughts.\n> > > > > >\n> > > > > > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > > > > > alone and add a new one with the name you suggest that returns text.\n> > > > > > That would create less burden for tool authors.\n> > > > >\n> > > > > +1\n> > > > >\n> > > >\n> > > > Yeah, we can do that, I will send an updated patch soon.\n> > >\n> > > This means pg_is_wal_replay_paused is left without any change and this\n> > > returns whether pause is requested or not? If so, it seems good to modify\n> > > the documentation of this function in order to note that this could not\n> > > return the actual pause state.\n> >\n> > Yes, we can say that it will return true if the replay pause is\n> > requested. I am changing that in my new patch.\n> \n> I have modified the patch, changes\n> \n> - I have added a new interface pg_get_wal_replay_pause_state to get\n> the pause request state\n> - Now, we are not waiting for the recovery to actually get paused so I\n> think it doesn't make sense to put a lot of checkpoints to check the\n> pause requested so I have removed that check from the\n> recoveryApplyDelay but I think it better we still keep that check in\n> the WaitForWalToBecomeAvailable because it can wait forever before the\n> next wal get available.\n\nI think basically the check in WaitForWalToBecomeAvailable is independent\nof the feature of pg_get_wal_replay_pause_state, that is, reporting the\nactual pause state. This function could just return 'pause requested' \nif a pause is requested during waiting for WAL.\n\nHowever, I agree the change to allow recovery to transit the state to\n'paused' during WAL waiting because 'paused' has more useful information\nfor users than 'pause requested'. Returning 'paused' lets users know\nclearly that no more WAL are applied until recovery is resumed. 
On the\nother hand, when 'pause requested' is returned, user can't say whether\nthe next WAL wiill be applied or not from this information.\n\nFor the same reason, I think it is also useful to call recoveryPausesHere\nin recoveryApplyDelay. \n\nIn addition, in RecoveryRequiresIntParameter, recovery should get paused\nif a parameter value has a problem. However, pg_get_wal_replay_pause_state\nwill return 'pause requested' in this case. So, I think, we should pass\nRECOVERY_PAUSED to SetRecoveryPause() instead of RECOVERY_PAUSE_REQUESTED,\nor call CheckAndSetRecoveryPause() in the loop like recoveryPausesHere().\n\nRegrads,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 29 Jan 2021 18:53:53 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Fri, Jan 29, 2021 at 3:25 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> On Thu, 28 Jan 2021 09:55:42 +0530\n> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > On Wed, Jan 27, 2021 at 2:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 27, 2021 at 2:06 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > >\n> > > > On Wed, 27 Jan 2021 13:29:23 +0530\n> > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > > On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > > > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > > > +1 to just show the recovery pause state in the output of\n> > > > > > > > pg_is_wal_replay_paused. But, should the function name\n> > > > > > > > \"pg_is_wal_replay_paused\" be something like\n> > > > > > > > \"pg_get_wal_replay_pause_state\" or some other? 
To me, when \"is\" exists\n> > > > > > > > in a function, I expect a boolean output. Others may have better\n> > > > > > > > thoughts.\n> > > > > > >\n> > > > > > > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > > > > > > alone and add a new one with the name you suggest that returns text.\n> > > > > > > That would create less burden for tool authors.\n> > > > > >\n> > > > > > +1\n> > > > > >\n> > > > >\n> > > > > Yeah, we can do that, I will send an updated patch soon.\n> > > >\n> > > > This means pg_is_wal_replay_paused is left without any change and this\n> > > > returns whether pause is requested or not? If so, it seems good to modify\n> > > > the documentation of this function in order to note that this could not\n> > > > return the actual pause state.\n> > >\n> > > Yes, we can say that it will return true if the replay pause is\n> > > requested. I am changing that in my new patch.\n> >\n> > I have modified the patch, changes\n> >\n> > - I have added a new interface pg_get_wal_replay_pause_state to get\n> > the pause request state\n> > - Now, we are not waiting for the recovery to actually get paused so I\n> > think it doesn't make sense to put a lot of checkpoints to check the\n> > pause requested so I have removed that check from the\n> > recoveryApplyDelay but I think it better we still keep that check in\n> > the WaitForWalToBecomeAvailable because it can wait forever before the\n> > next wal get available.\n>\n> I think basically the check in WaitForWalToBecomeAvailable is independent\n> of the feature of pg_get_wal_replay_pause_state, that is, reporting the\n> actual pause state. This function could just return 'pause requested'\n> if a pause is requested during waiting for WAL.\n>\n> However, I agree the change to allow recovery to transit the state to\n> 'paused' during WAL waiting because 'paused' has more useful information\n> for users than 'pause requested'. 
Returning 'paused' lets users know\n> clearly that no more WAL are applied until recovery is resumed. On the\n> other hand, when 'pause requested' is returned, user can't say whether\n> the next WAL wiill be applied or not from this information.\n>\n> For the same reason, I think it is also useful to call recoveryPausesHere\n> in recoveryApplyDelay.\n\nIMHO the WaitForWalToBecomeAvailable can wait until the next wal get\navailable so it can not be controlled by user so it is good to put a\ncheck for the recovery pause, however recoveryApplyDelay wait for the\napply delay which is configured by user and it is predictable value by\nthe user. I don't have much objection to putting that check in the\nrecoveryApplyDelay as well but I feel it is not necessary. Any other\nthoughts on this?\n\n> In addition, in RecoveryRequiresIntParameter, recovery should get paused\n> if a parameter value has a problem. However, pg_get_wal_replay_pause_state\n> will return 'pause requested' in this case. So, I think, we should pass\n> RECOVERY_PAUSED to SetRecoveryPause() instead of RECOVERY_PAUSE_REQUESTED,\n> or call CheckAndSetRecoveryPause() in the loop like recoveryPausesHere().\n\nYeah, absolutely right, it must pass RECOVERY_PAUSED. I will change\nthis, thanks for noticing this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jan 2021 16:33:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Fri, 29 Jan 2021 16:33:32 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Fri, Jan 29, 2021 at 3:25 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > On Thu, 28 Jan 2021 09:55:42 +0530\n> > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > On Wed, Jan 27, 2021 at 2:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jan 27, 2021 at 2:06 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > >\n> > > > > On Wed, 27 Jan 2021 13:29:23 +0530\n> > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > > On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > > > > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > > > > +1 to just show the recovery pause state in the output of\n> > > > > > > > > pg_is_wal_replay_paused. But, should the function name\n> > > > > > > > > \"pg_is_wal_replay_paused\" be something like\n> > > > > > > > > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > > > > > > > in a function, I expect a boolean output. Others may have better\n> > > > > > > > > thoughts.\n> > > > > > > >\n> > > > > > > > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > > > > > > > alone and add a new one with the name you suggest that returns text.\n> > > > > > > > That would create less burden for tool authors.\n> > > > > > >\n> > > > > > > +1\n> > > > > > >\n> > > > > >\n> > > > > > Yeah, we can do that, I will send an updated patch soon.\n> > > > >\n> > > > > This means pg_is_wal_replay_paused is left without any change and this\n> > > > > returns whether pause is requested or not? 
If so, it seems good to modify\n> > > > > the documentation of this function in order to note that this could not\n> > > > > return the actual pause state.\n> > > >\n> > > > Yes, we can say that it will return true if the replay pause is\n> > > > requested. I am changing that in my new patch.\n> > >\n> > > I have modified the patch, changes\n> > >\n> > > - I have added a new interface pg_get_wal_replay_pause_state to get\n> > > the pause request state\n> > > - Now, we are not waiting for the recovery to actually get paused so I\n> > > think it doesn't make sense to put a lot of checkpoints to check the\n> > > pause requested so I have removed that check from the\n> > > recoveryApplyDelay but I think it better we still keep that check in\n> > > the WaitForWalToBecomeAvailable because it can wait forever before the\n> > > next wal get available.\n> >\n> > I think basically the check in WaitForWalToBecomeAvailable is independent\n> > of the feature of pg_get_wal_replay_pause_state, that is, reporting the\n> > actual pause state. This function could just return 'pause requested'\n> > if a pause is requested during waiting for WAL.\n> >\n> > However, I agree the change to allow recovery to transit the state to\n> > 'paused' during WAL waiting because 'paused' has more useful information\n> > for users than 'pause requested'. Returning 'paused' lets users know\n> > clearly that no more WAL are applied until recovery is resumed. 
On the\n> > other hand, when 'pause requested' is returned, user can't say whether\n> > the next WAL will be applied or not from this information.\n> >\n> > For the same reason, I think it is also useful to call recoveryPausesHere\n> > in recoveryApplyDelay.\n>\n> IMHO the WaitForWalToBecomeAvailable can wait until the next wal get\n> available so it can not be controlled by user so it is good to put a\n> check for the recovery pause, however recoveryApplyDelay wait for the\n> apply delay which is configured by user and it is predictable value by\n> the user. I don't have much objection to putting that check in the\n> recoveryApplyDelay as well but I feel it is not necessary. Any other\n> thoughts on this?\n\nI'm not sure if the user can figure out easily that the reason why\npg_get_wal_replay_pause_state returns 'pause requested' is due to\nrecovery_min_apply_delay because it would need knowledge of the\ninternal mechanism of recovery. However, if there are no other\nopinions on it, I don't care that recoveryApplyDelay is left as is\nbecause such check and state transition is independent of the goal of\npg_get_wal_replay_pause_state itself as I mentioned above.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 29 Jan 2021 23:06:59 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Fri, Jan 29, 2021 at 4:33 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Jan 29, 2021 at 3:25 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > On Thu, 28 Jan 2021 09:55:42 +0530\n> > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > On Wed, Jan 27, 2021 at 2:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jan 27, 2021 at 2:06 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > >\n> > > > > On Wed, 27 Jan 2021 13:29:23 +0530\n> > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > > On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > > > > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > > > > +1 to just show the recovery pause state in the output of\n> > > > > > > > > pg_is_wal_replay_paused. But, should the function name\n> > > > > > > > > \"pg_is_wal_replay_paused\" be something like\n> > > > > > > > > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > > > > > > > in a function, I expect a boolean output. Others may have better\n> > > > > > > > > thoughts.\n> > > > > > > >\n> > > > > > > > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > > > > > > > alone and add a new one with the name you suggest that returns text.\n> > > > > > > > That would create less burden for tool authors.\n> > > > > > >\n> > > > > > > +1\n> > > > > > >\n> > > > > >\n> > > > > > Yeah, we can do that, I will send an updated patch soon.\n> > > > >\n> > > > > This means pg_is_wal_replay_paused is left without any change and this\n> > > > > returns whether pause is requested or not? 
If so, it seems good to modify\n> > > > > the documentation of this function in order to note that this could not\n> > > > > return the actual pause state.\n> > > >\n> > > > Yes, we can say that it will return true if the replay pause is\n> > > > requested. I am changing that in my new patch.\n> > >\n> > > I have modified the patch, changes\n> > >\n> > > - I have added a new interface pg_get_wal_replay_pause_state to get\n> > > the pause request state\n> > > - Now, we are not waiting for the recovery to actually get paused so I\n> > > think it doesn't make sense to put a lot of checkpoints to check the\n> > > pause requested so I have removed that check from the\n> > > recoveryApplyDelay but I think it better we still keep that check in\n> > > the WaitForWalToBecomeAvailable because it can wait forever before the\n> > > next wal get available.\n> >\n> > I think basically the check in WaitForWalToBecomeAvailable is independent\n> > of the feature of pg_get_wal_replay_pause_state, that is, reporting the\n> > actual pause state. This function could just return 'pause requested'\n> > if a pause is requested during waiting for WAL.\n> >\n> > However, I agree the change to allow recovery to transit the state to\n> > 'paused' during WAL waiting because 'paused' has more useful information\n> > for users than 'pause requested'. Returning 'paused' lets users know\n> > clearly that no more WAL are applied until recovery is resumed. 
On the\n> > other hand, when 'pause requested' is returned, user can't say whether\n> > the next WAL wiill be applied or not from this information.\n> >\n> > For the same reason, I think it is also useful to call recoveryPausesHere\n> > in recoveryApplyDelay.\n>\n> IMHO the WaitForWalToBecomeAvailable can wait until the next wal get\n> available so it can not be controlled by user so it is good to put a\n> check for the recovery pause, however recoveryApplyDelay wait for the\n> apply delay which is configured by user and it is predictable value by\n> the user. I don't have much objection to putting that check in the\n> recoveryApplyDelay as well but I feel it is not necessary. Any other\n> thoughts on this?\n>\n> > In addition, in RecoveryRequiresIntParameter, recovery should get paused\n> > if a parameter value has a problem. However, pg_get_wal_replay_pause_state\n> > will return 'pause requested' in this case. So, I think, we should pass\n> > RECOVERY_PAUSED to SetRecoveryPause() instead of RECOVERY_PAUSE_REQUESTED,\n> > or call CheckAndSetRecoveryPause() in the loop like recoveryPausesHere().\n>\n> Yeah, absolutely right, it must pass RECOVERY_PAUSED. I will change\n> this, thanks for noticing this.\n\nI have changed this in the new patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 31 Jan 2021 11:24:30 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "At Sun, 31 Jan 2021 11:24:30 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Fri, Jan 29, 2021 at 4:33 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Jan 29, 2021 at 3:25 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > On Thu, 28 Jan 2021 09:55:42 +0530\n> > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > > On Wed, Jan 27, 2021 at 2:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Jan 27, 2021 at 2:06 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > > >\n> > > > > > On Wed, 27 Jan 2021 13:29:23 +0530\n> > > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > > On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > > > > >\n> > > > > > > > > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > > > > > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > > > > > +1 to just show the recovery pause state in the output of\n> > > > > > > > > > pg_is_wal_replay_paused. But, should the function name\n> > > > > > > > > > \"pg_is_wal_replay_paused\" be something like\n> > > > > > > > > > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > > > > > > > > in a function, I expect a boolean output. 
Others may have better\n> > > > > > > > > > thoughts.\n> > > > > > > > >\n> > > > > > > > > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > > > > > > > > alone and add a new one with the name you suggest that returns text.\n> > > > > > > > > That would create less burden for tool authors.\n> > > > > > > >\n> > > > > > > > +1\n> > > > > > > >\n> > > > > > >\n> > > > > > > Yeah, we can do that, I will send an updated patch soon.\n> > > > > >\n> > > > > > This means pg_is_wal_replay_paused is left without any change and this\n> > > > > > returns whether pause is requested or not? If so, it seems good to modify\n> > > > > > the documentation of this function in order to note that this could not\n> > > > > > return the actual pause state.\n> > > > >\n> > > > > Yes, we can say that it will return true if the replay pause is\n> > > > > requested. I am changing that in my new patch.\n> > > >\n> > > > I have modified the patch, changes\n> > > >\n> > > > - I have added a new interface pg_get_wal_replay_pause_state to get\n> > > > the pause request state\n> > > > - Now, we are not waiting for the recovery to actually get paused so I\n> > > > think it doesn't make sense to put a lot of checkpoints to check the\n> > > > pause requested so I have removed that check from the\n> > > > recoveryApplyDelay but I think it better we still keep that check in\n> > > > the WaitForWalToBecomeAvailable because it can wait forever before the\n> > > > next wal get available.\n> > >\n> > > I think basically the check in WaitForWalToBecomeAvailable is independent\n> > > of the feature of pg_get_wal_replay_pause_state, that is, reporting the\n> > > actual pause state. 
This function could just return 'pause requested'\n> > > if a pause is requested during waiting for WAL.\n> > >\n> > > However, I agree the change to allow recovery to transit the state to\n> > > 'paused' during WAL waiting because 'paused' has more useful information\n> > > for users than 'pause requested'. Returning 'paused' lets users know\n> > > clearly that no more WAL are applied until recovery is resumed. On the\n> > > other hand, when 'pause requested' is returned, user can't say whether\n> > > the next WAL will be applied or not from this information.\n> > >\n> > > For the same reason, I think it is also useful to call recoveryPausesHere\n> > > in recoveryApplyDelay.\n> >\n> > IMHO the WaitForWalToBecomeAvailable can wait until the next wal get\n> > available so it can not be controlled by user so it is good to put a\n> > check for the recovery pause, however recoveryApplyDelay wait for the\n> > apply delay which is configured by user and it is predictable value by\n> > the user. I don't have much objection to putting that check in the\n> > recoveryApplyDelay as well but I feel it is not necessary. Any other\n> > thoughts on this?\n> >\n> > > In addition, in RecoveryRequiresIntParameter, recovery should get paused\n> > > if a parameter value has a problem. However, pg_get_wal_replay_pause_state\n> > > will return 'pause requested' in this case. So, I think, we should pass\n> > > RECOVERY_PAUSED to SetRecoveryPause() instead of RECOVERY_PAUSE_REQUESTED,\n> > > or call CheckAndSetRecoveryPause() in the loop like recoveryPausesHere().\n> >\n> > Yeah, absolutely right, it must pass RECOVERY_PAUSED. I will change\n> > this, thanks for noticing this.\n> \n> I have changed this in the new patch.\n\nIt seems to work well. The checkpoints seem to be placed properly.\n\n+SetRecoveryPause(RecoveryPauseState state)\n {\n+\tAssert(state >= RECOVERY_NOT_PAUSED && state <= RECOVERY_PAUSED);\n\nI'm not sure that state is worth a FATAL. 
Isn't it enough to just ERROR\nout like XLogFileRead?\n\nCheckAndSetRecoveryPause() has only one caller. I think it's better to\nwrite the code directly.\n\nI think the documentation of pg_wal_replay_pause needs to be a bit\nmore detailed about the difference between the two states \"pause\nrequested\" and \"paused\". Something like \"A request doesn't mean that\nrecovery stops right away. If you want a guarantee that recovery is\nactually paused, you need to check for the recovery pause state\nreturned by pg_wal_replay_pause_state(). Note that\npg_is_wal_repay_paused() returns whether a request is made.\"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 01 Feb 2021 15:29:21 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Mon, Feb 1, 2021 at 11:59 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sun, 31 Jan 2021 11:24:30 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Fri, Jan 29, 2021 at 4:33 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Fri, Jan 29, 2021 at 3:25 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > >\n> > > > On Thu, 28 Jan 2021 09:55:42 +0530\n> > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > > On Wed, Jan 27, 2021 at 2:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > On Wed, Jan 27, 2021 at 2:06 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > > > >\n> > > > > > > On Wed, 27 Jan 2021 13:29:23 +0530\n> > > > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > > >\n> > > > > > > > On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > > > >\n> > > > > > > > > On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > > > > > >\n> > > > > > > > > > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > > > > > > > > > 
<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > > > > > > +1 to just show the recovery pause state in the output of\n> > > > > > > > > > > pg_is_wal_replay_paused. But, should the function name\n> > > > > > > > > > > \"pg_is_wal_replay_paused\" be something like\n> > > > > > > > > > > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > > > > > > > > > in a function, I expect a boolean output. Others may have better\n> > > > > > > > > > > thoughts.\n> > > > > > > > > >\n> > > > > > > > > > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > > > > > > > > > alone and add a new one with the name you suggest that returns text.\n> > > > > > > > > > That would create less burden for tool authors.\n> > > > > > > > >\n> > > > > > > > > +1\n> > > > > > > > >\n> > > > > > > >\n> > > > > > > > Yeah, we can do that, I will send an updated patch soon.\n> > > > > > >\n> > > > > > > This means pg_is_wal_replay_paused is left without any change and this\n> > > > > > > returns whether pause is requested or not? If so, it seems good to modify\n> > > > > > > the documentation of this function in order to note that this could not\n> > > > > > > return the actual pause state.\n> > > > > >\n> > > > > > Yes, we can say that it will return true if the replay pause is\n> > > > > > requested. 
I am changing that in my new patch.\n> > > > >\n> > > > > I have modified the patch, changes\n> > > > >\n> > > > > - I have added a new interface pg_get_wal_replay_pause_state to get\n> > > > > the pause request state\n> > > > > - Now, we are not waiting for the recovery to actually get paused so I\n> > > > > think it doesn't make sense to put a lot of checkpoints to check the\n> > > > > pause requested so I have removed that check from the\n> > > > > recoveryApplyDelay but I think it better we still keep that check in\n> > > > > the WaitForWalToBecomeAvailable because it can wait forever before the\n> > > > > next wal get available.\n> > > >\n> > > > I think basically the check in WaitForWalToBecomeAvailable is independent\n> > > > of the feature of pg_get_wal_replay_pause_state, that is, reporting the\n> > > > actual pause state. This function could just return 'pause requested'\n> > > > if a pause is requested during waiting for WAL.\n> > > >\n> > > > However, I agree the change to allow recovery to transit the state to\n> > > > 'paused' during WAL waiting because 'paused' has more useful information\n> > > > for users than 'pause requested'. Returning 'paused' lets users know\n> > > > clearly that no more WAL are applied until recovery is resumed. On the\n> > > > other hand, when 'pause requested' is returned, user can't say whether\n> > > > the next WAL wiill be applied or not from this information.\n> > > >\n> > > > For the same reason, I think it is also useful to call recoveryPausesHere\n> > > > in recoveryApplyDelay.\n> > >\n> > > IMHO the WaitForWalToBecomeAvailable can wait until the next wal get\n> > > available so it can not be controlled by user so it is good to put a\n> > > check for the recovery pause, however recoveryApplyDelay wait for the\n> > > apply delay which is configured by user and it is predictable value by\n> > > the user. 
I don't have much objection to putting that check in the\n> > > recoveryApplyDelay as well but I feel it is not necessary. Any other\n> > > thoughts on this?\n> > >\n> > > > In addition, in RecoveryRequiresIntParameter, recovery should get paused\n> > > > if a parameter value has a problem. However, pg_get_wal_replay_pause_state\n> > > > will return 'pause requested' in this case. So, I think, we should pass\n> > > > RECOVERY_PAUSED to SetRecoveryPause() instead of RECOVERY_PAUSE_REQUESTED,\n> > > > or call CheckAndSetRecoveryPause() in the loop like recoveryPausesHere().\n> > >\n> > > Yeah, absolutely right, it must pass RECOVERY_PAUSED. I will change\n> > > this, thanks for noticing this.\n> >\n> > I have changed this in the new patch.\n>\n> It seems to work well. The checkpoints seems to be placed properly.\n\nOkay\n\n> +SetRecoveryPause(RecoveryPauseState state)\n> {\n> + Assert(state >= RECOVERY_NOT_PAUSED && state <= RECOVERY_PAUSED);\n>\n> I'm not sure that state worth FATAL. Isn't it enough to just ERROR\n> out like XLogFileRead?\n\nYeah, that makes sense to me.\n\n> CheckAndSetRecovery() has only one caller. I think it's better to\n> write the code directly.\n\nOkay, I will change.\n\n> I think the documentation of pg_wal_replay_pause needs to be a bit\n> more detailed about the difference between the two states \"pause\n> requested\" and \"paused\". Something like \"A request doesn't mean that\n> recovery stops right away. If you want a guarantee that recovery is\n> actually paused, you need to check for the recovery pause state\n> returned by pg_wal_replay_pause_state(). Note that\n> pg_is_wal_repay_paused() returns whether a request is made.\"\n\nThat seems like better idea, I will change.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Feb 2021 13:41:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Mon, Feb 1, 2021 at 1:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Feb 1, 2021 at 11:59 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Sun, 31 Jan 2021 11:24:30 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > > On Fri, Jan 29, 2021 at 4:33 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Fri, Jan 29, 2021 at 3:25 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > >\n> > > > > On Thu, 28 Jan 2021 09:55:42 +0530\n> > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > > On Wed, Jan 27, 2021 at 2:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Wed, Jan 27, 2021 at 2:06 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > > > > >\n> > > > > > > > On Wed, 27 Jan 2021 13:29:23 +0530\n> > > > > > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > > On Wed, Jan 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > > > > >\n> > > > > > > > > > On Tue, Jan 26, 2021 at 2:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > > > > > > > >\n> > > > > > > > > > > On Sat, Jan 23, 2021 at 6:10 AM Bharath Rupireddy\n> > > > > > > > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > > > > > > > +1 to just show the recovery pause state in the output of\n> > > > > > > > > > > > pg_is_wal_replay_paused. But, should the function name\n> > > > > > > > > > > > \"pg_is_wal_replay_paused\" be something like\n> > > > > > > > > > > > \"pg_get_wal_replay_pause_state\" or some other? To me, when \"is\" exists\n> > > > > > > > > > > > in a function, I expect a boolean output. 
Others may have better\n> > > > > > > > > > > > thoughts.\n> > > > > > > > > > >\n> > > > > > > > > > > Maybe we should leave the existing function pg_is_wal_replay_paused()\n> > > > > > > > > > > alone and add a new one with the name you suggest that returns text.\n> > > > > > > > > > > That would create less burden for tool authors.\n> > > > > > > > > >\n> > > > > > > > > > +1\n> > > > > > > > > >\n> > > > > > > > >\n> > > > > > > > > Yeah, we can do that, I will send an updated patch soon.\n> > > > > > > >\n> > > > > > > > This means pg_is_wal_replay_paused is left without any change and this\n> > > > > > > > returns whether pause is requested or not? If so, it seems good to modify\n> > > > > > > > the documentation of this function in order to note that this could not\n> > > > > > > > return the actual pause state.\n> > > > > > >\n> > > > > > > Yes, we can say that it will return true if the replay pause is\n> > > > > > > requested. I am changing that in my new patch.\n> > > > > >\n> > > > > > I have modified the patch, changes\n> > > > > >\n> > > > > > - I have added a new interface pg_get_wal_replay_pause_state to get\n> > > > > > the pause request state\n> > > > > > - Now, we are not waiting for the recovery to actually get paused so I\n> > > > > > think it doesn't make sense to put a lot of checkpoints to check the\n> > > > > > pause requested so I have removed that check from the\n> > > > > > recoveryApplyDelay but I think it better we still keep that check in\n> > > > > > the WaitForWalToBecomeAvailable because it can wait forever before the\n> > > > > > next wal get available.\n> > > > >\n> > > > > I think basically the check in WaitForWalToBecomeAvailable is independent\n> > > > > of the feature of pg_get_wal_replay_pause_state, that is, reporting the\n> > > > > actual pause state. 
This function could just return 'pause requested'\n> > > > > if a pause is requested during waiting for WAL.\n> > > > >\n> > > > > However, I agree the change to allow recovery to transit the state to\n> > > > > 'paused' during WAL waiting because 'paused' has more useful information\n> > > > > for users than 'pause requested'. Returning 'paused' lets users know\n> > > > > clearly that no more WAL are applied until recovery is resumed. On the\n> > > > > other hand, when 'pause requested' is returned, user can't say whether\n> > > > > the next WAL wiill be applied or not from this information.\n> > > > >\n> > > > > For the same reason, I think it is also useful to call recoveryPausesHere\n> > > > > in recoveryApplyDelay.\n> > > >\n> > > > IMHO the WaitForWalToBecomeAvailable can wait until the next wal get\n> > > > available so it can not be controlled by user so it is good to put a\n> > > > check for the recovery pause, however recoveryApplyDelay wait for the\n> > > > apply delay which is configured by user and it is predictable value by\n> > > > the user. I don't have much objection to putting that check in the\n> > > > recoveryApplyDelay as well but I feel it is not necessary. Any other\n> > > > thoughts on this?\n> > > >\n> > > > > In addition, in RecoveryRequiresIntParameter, recovery should get paused\n> > > > > if a parameter value has a problem. However, pg_get_wal_replay_pause_state\n> > > > > will return 'pause requested' in this case. So, I think, we should pass\n> > > > > RECOVERY_PAUSED to SetRecoveryPause() instead of RECOVERY_PAUSE_REQUESTED,\n> > > > > or call CheckAndSetRecoveryPause() in the loop like recoveryPausesHere().\n> > > >\n> > > > Yeah, absolutely right, it must pass RECOVERY_PAUSED. I will change\n> > > > this, thanks for noticing this.\n> > >\n> > > I have changed this in the new patch.\n> >\n> > It seems to work well. 
The checkpoints seems to be placed properly.\n>\n> Okay\n>\n> > +SetRecoveryPause(RecoveryPauseState state)\n> > {\n> > + Assert(state >= RECOVERY_NOT_PAUSED && state <= RECOVERY_PAUSED);\n> >\n> > I'm not sure that state worth FATAL. Isn't it enough to just ERROR\n> > out like XLogFileRead?\n>\n> Yeah, that makes sense to me.\n>\n> > CheckAndSetRecovery() has only one caller. I think it's better to\n> > write the code directly.\n>\n> Okay, I will change.\n>\n> > I think the documentation of pg_wal_replay_pause needs to be a bit\n> > more detailed about the difference between the two states \"pause\n> > requested\" and \"paused\". Something like \"A request doesn't mean that\n> > recovery stops right away. If you want a guarantee that recovery is\n> > actually paused, you need to check for the recovery pause state\n> > returned by pg_wal_replay_pause_state(). Note that\n> > pg_is_wal_repay_paused() returns whether a request is made.\"\n>\n> That seems like better idea, I will change.\n>\n\nPlease find an updated patch which addresses these comments.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 4 Feb 2021 10:28:02 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, Feb 4, 2021 at 10:28 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Please find an updated patch which addresses these comments.\n\nThanks for the patch. 
I tested the new function pg_get_wal_replay_pause_state:\n\npostgres=# select pg_get_wal_replay_pause_state();\n pg_get_wal_replay_pause_state\n-------------------------------\n not paused\npostgres=# select pg_wal_replay_pause();\n pg_wal_replay_pause\n---------------------\n\n(1 row)\n\nI can also see the \"pause requested\" state after I put a gdb\nbreakpoint in WaitForWALToBecomeAvailable in the standby startup\nprocess .\n\npostgres=# select pg_get_wal_replay_pause_state();\n pg_get_wal_replay_pause_state\n-------------------------------\n pause requested\n(1 row)\n\npostgres=# select pg_get_wal_replay_pause_state();\n pg_get_wal_replay_pause_state\n-------------------------------\n paused\n(1 row)\n\nMostly, the v10 patch looks good to me, except below minor comments:\n\n1) A typo in commit message - \"just check\" --> \"just checks\"\n\n2) How about\n+ Returns recovery pause state. The return values are <literal>not paused\ninstead of\n+ Returns recovery pause state, the return values are <literal>not paused\n\n3) I think it is 'get wal replay pause state', instead of { oid =>\n'1137', descr => 'get wal replay is pause state',\n\n4) can we just do this\n /*\n * If recovery pause is requested then set it paused. While we are in\n * the loop, user might resume and pause again so set this every time.\n */\n if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n RECOVERY_PAUSE_REQUESTED)\n SetRecoveryPause(RECOVERY_PAUSED);\ninstead of\n /*\n * If recovery pause is requested then set it paused. While we are in\n * the loop, user might resume and pause again so set this every time.\n */\n SpinLockAcquire(&XLogCtl->info_lck);\n if (XLogCtl->recoveryPauseState == RECOVERY_PAUSE_REQUESTED)\n XLogCtl->recoveryPauseState = RECOVERY_PAUSED;\n SpinLockRelease(&XLogCtl->info_lck);\n\nI think it's okay, since we take a spinlock anyways in\nGetRecoveryPauseState(). 
See the below comment and also a relevant\ncommit 6ba4ecbf477e0b25dd7bde1b0c4e07fc2da19348 on why it's not\nnecessary taking spinlock always:\n /*\n * Pause WAL replay, if requested by a hot-standby session via\n * SetRecoveryPause().\n *\n * Note that we intentionally don't take the info_lck spinlock\n * here. We might therefore read a slightly stale value of\n * the recoveryPause flag, but it can't be very stale (no\n * worse than the last spinlock we did acquire). Since a\n * pause request is a pretty asynchronous thing anyway,\n * possibly responding to it one WAL record later than we\n * otherwise would is a minor issue, so it doesn't seem worth\n * adding another spinlock cycle to prevent that.\n */\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 16:58:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, Feb 4, 2021 at 4:58 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Feb 4, 2021 at 10:28 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Please find an updated patch which addresses these comments.\n>\n> Thanks for the patch. 
I tested the new function pg_get_wal_replay_pause_state:\n>\n> postgres=# select pg_get_wal_replay_pause_state();\n> pg_get_wal_replay_pause_state\n> -------------------------------\n> not paused\n> postgres=# select pg_wal_replay_pause();\n> pg_wal_replay_pause\n> ---------------------\n>\n> (1 row)\n>\n> I can also see the \"pause requested\" state after I put a gdb\n> breakpoint in WaitForWALToBecomeAvailable in the standby startup\n> process .\n>\n> postgres=# select pg_get_wal_replay_pause_state();\n> pg_get_wal_replay_pause_state\n> -------------------------------\n> pause requested\n> (1 row)\n>\n> postgres=# select pg_get_wal_replay_pause_state();\n> pg_get_wal_replay_pause_state\n> -------------------------------\n> paused\n> (1 row)\n>\n> Mostly, the v10 patch looks good to me, except below minor comments:\n\nThanks for the testing.\n\n> 1) A typo in commit message - \"just check\" --> \"just checks\"\n>\n> 2) How about\n> + Returns recovery pause state. The return values are <literal>not paused\n> instead of\n> + Returns recovery pause state, the return values are <literal>not paused\n>\n> 3) I think it is 'get wal replay pause state', instead of { oid =>\n> '1137', descr => 'get wal replay is pause state',\n>\n> 4) can we just do this\n> /*\n> * If recovery pause is requested then set it paused. While we are in\n> * the loop, user might resume and pause again so set this every time.\n> */\n> if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n> RECOVERY_PAUSE_REQUESTED)\n> SetRecoveryPause(RECOVERY_PAUSED);\n> instead of\n> /*\n> * If recovery pause is requested then set it paused. 
While we are in\n> * the loop, user might resume and pause again so set this every time.\n> */\n> SpinLockAcquire(&XLogCtl->info_lck);\n> if (XLogCtl->recoveryPauseState == RECOVERY_PAUSE_REQUESTED)\n> XLogCtl->recoveryPauseState = RECOVERY_PAUSED;\n> SpinLockRelease(&XLogCtl->info_lck);\n>\n> I think it's okay, since we take a spinlock anyways in\n> GetRecoveryPauseState(). See the below comment and also a relevant\n> commit 6ba4ecbf477e0b25dd7bde1b0c4e07fc2da19348 on why it's not\n> necessary taking spinlock always:\n> /*\n> * Pause WAL replay, if requested by a hot-standby session via\n> * SetRecoveryPause().\n> *\n> * Note that we intentionally don't take the info_lck spinlock\n> * here. We might therefore read a slightly stale value of\n> * the recoveryPause flag, but it can't be very stale (no\n> * worse than the last spinlock we did acquire). Since a\n> * pause request is a pretty asynchronous thing anyway,\n> * possibly responding to it one WAL record later than we\n> * otherwise would is a minor issue, so it doesn't seem worth\n> * adding another spinlock cycle to prevent that.\n> */\n>\n\nI will work on these comments and send the updated patch soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 17:38:21 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, Feb 4, 2021 at 4:58 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Feb 4, 2021 at 10:28 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Please find an updated patch which addresses these comments.\n>\n> Thanks for the patch. 
I tested the new function pg_get_wal_replay_pause_state:\n>\n> postgres=# select pg_get_wal_replay_pause_state();\n> pg_get_wal_replay_pause_state\n> -------------------------------\n> not paused\n> postgres=# select pg_wal_replay_pause();\n> pg_wal_replay_pause\n> ---------------------\n>\n> (1 row)\n>\n> I can also see the \"pause requested\" state after I put a gdb\n> breakpoint in WaitForWALToBecomeAvailable in the standby startup\n> process .\n>\n> postgres=# select pg_get_wal_replay_pause_state();\n> pg_get_wal_replay_pause_state\n> -------------------------------\n> pause requested\n> (1 row)\n>\n> postgres=# select pg_get_wal_replay_pause_state();\n> pg_get_wal_replay_pause_state\n> -------------------------------\n> paused\n> (1 row)\n>\n> Mostly, the v10 patch looks good to me, except below minor comments:\n>\n> 1) A typo in commit message - \"just check\" --> \"just checks\"\n>\n> 2) How about\n> + Returns recovery pause state. The return values are <literal>not paused\n> instead of\n> + Returns recovery pause state, the return values are <literal>not paused\n>\n> 3) I think it is 'get wal replay pause state', instead of { oid =>\n> '1137', descr => 'get wal replay is pause state',\n>\n> 4) can we just do this\n> /*\n> * If recovery pause is requested then set it paused. While we are in\n> * the loop, user might resume and pause again so set this every time.\n> */\n> if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n> RECOVERY_PAUSE_REQUESTED)\n> SetRecoveryPause(RECOVERY_PAUSED);\n> instead of\n> /*\n> * If recovery pause is requested then set it paused. While we are in\n> * the loop, user might resume and pause again so set this every time.\n> */\n> SpinLockAcquire(&XLogCtl->info_lck);\n> if (XLogCtl->recoveryPauseState == RECOVERY_PAUSE_REQUESTED)\n> XLogCtl->recoveryPauseState = RECOVERY_PAUSED;\n> SpinLockRelease(&XLogCtl->info_lck);\n>\n> I think it's okay, since we take a spinlock anyways in\n> GetRecoveryPauseState(). 
See the below comment and also a relevant\n> commit 6ba4ecbf477e0b25dd7bde1b0c4e07fc2da19348 on why it's not\n> necessary taking spinlock always:\n> /*\n> * Pause WAL replay, if requested by a hot-standby session via\n> * SetRecoveryPause().\n> *\n> * Note that we intentionally don't take the info_lck spinlock\n> * here. We might therefore read a slightly stale value of\n> * the recoveryPause flag, but it can't be very stale (no\n> * worse than the last spinlock we did acquire). Since a\n> * pause request is a pretty asynchronous thing anyway,\n> * possibly responding to it one WAL record later than we\n> * otherwise would is a minor issue, so it doesn't seem worth\n> * adding another spinlock cycle to prevent that.\n> */\n\nHow can we do that? This is not a 1-byte flag, this is an enum, so I\ndon't think we can read the state atomically without a spin lock here.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 18:16:17 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
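
For illustration, the locked accessor pattern being argued for here can be sketched as a small standalone C model. The names mirror the thread (SetRecoveryPause, GetRecoveryPauseState), but the plain-flag lock and the ConfirmRecoveryPaused() helper are simplifications invented for this sketch; the real code guards the shared state with the XLogCtl->info_lck spinlock and this is not the actual xlog.c code.

```c
#include <assert.h>

/* Three-state pause machinery, modeled on the patch under discussion. */
typedef enum
{
	RECOVERY_NOT_PAUSED,		/* pause not requested */
	RECOVERY_PAUSE_REQUESTED,	/* user has requested a pause */
	RECOVERY_PAUSED				/* startup process has actually paused */
} RecoveryPauseState;

/* Plain flag standing in for info_lck; this sketch is single-threaded. */
static int info_lck;
static RecoveryPauseState recoveryPauseState = RECOVERY_NOT_PAUSED;

static void spin_acquire(int *lck) { assert(*lck == 0); *lck = 1; }
static void spin_release(int *lck) { assert(*lck == 1); *lck = 0; }

/* Every write of the multi-byte enum goes through the lock. */
void SetRecoveryPause(RecoveryPauseState state)
{
	assert(state >= RECOVERY_NOT_PAUSED && state <= RECOVERY_PAUSED);
	spin_acquire(&info_lck);
	recoveryPauseState = state;
	spin_release(&info_lck);
}

/* Reads take the lock too, since the enum is wider than one byte. */
RecoveryPauseState GetRecoveryPauseState(void)
{
	RecoveryPauseState state;

	spin_acquire(&info_lck);
	state = recoveryPauseState;
	spin_release(&info_lck);
	return state;
}

/* Promotion done by the startup process: only a pending request is
 * turned into "paused"; any other state is left alone. */
void ConfirmRecoveryPaused(void)
{
	spin_acquire(&info_lck);
	if (recoveryPauseState == RECOVERY_PAUSE_REQUESTED)
		recoveryPauseState = RECOVERY_PAUSED;
	spin_release(&info_lck);
}
```

Keeping every read and write under the lock is exactly Dilip's point: the enum is wider than one byte, so unprotected access could in principle be torn on some platforms.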
I tested the new function pg_get_wal_replay_pause_state:\n> >\n> > postgres=# select pg_get_wal_replay_pause_state();\n> > pg_get_wal_replay_pause_state\n> > -------------------------------\n> > not paused\n> > postgres=# select pg_wal_replay_pause();\n> > pg_wal_replay_pause\n> > ---------------------\n> >\n> > (1 row)\n> >\n> > I can also see the \"pause requested\" state after I put a gdb\n> > breakpoint in WaitForWALToBecomeAvailable in the standby startup\n> > process .\n> >\n> > postgres=# select pg_get_wal_replay_pause_state();\n> > pg_get_wal_replay_pause_state\n> > -------------------------------\n> > pause requested\n> > (1 row)\n> >\n> > postgres=# select pg_get_wal_replay_pause_state();\n> > pg_get_wal_replay_pause_state\n> > -------------------------------\n> > paused\n> > (1 row)\n> >\n> > Mostly, the v10 patch looks good to me, except below minor comments:\n> >\n> > 1) A typo in commit message - \"just check\" --> \"just checks\"\n> >\n> > 2) How about\n> > + Returns recovery pause state. The return values are <literal>not paused\n> > instead of\n> > + Returns recovery pause state, the return values are <literal>not paused\n> >\n> > 3) I think it is 'get wal replay pause state', instead of { oid =>\n> > '1137', descr => 'get wal replay is pause state',\n> >\n> > 4) can we just do this\n> > /*\n> > * If recovery pause is requested then set it paused. While we are in\n> > * the loop, user might resume and pause again so set this every time.\n> > */\n> > if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n> > RECOVERY_PAUSE_REQUESTED)\n> > SetRecoveryPause(RECOVERY_PAUSED);\n> > instead of\n> > /*\n> > * If recovery pause is requested then set it paused. 
While we are in\n> > * the loop, user might resume and pause again so set this every time.\n> > */\n> > SpinLockAcquire(&XLogCtl->info_lck);\n> > if (XLogCtl->recoveryPauseState == RECOVERY_PAUSE_REQUESTED)\n> > XLogCtl->recoveryPauseState = RECOVERY_PAUSED;\n> > SpinLockRelease(&XLogCtl->info_lck);\n> >\n> > I think it's okay, since we take a spinlock anyways in\n> > GetRecoveryPauseState(). See the below comment and also a relevant\n> > commit 6ba4ecbf477e0b25dd7bde1b0c4e07fc2da19348 on why it's not\n> > necessary taking spinlock always:\n> > /*\n> > * Pause WAL replay, if requested by a hot-standby session via\n> > * SetRecoveryPause().\n> > *\n> > * Note that we intentionally don't take the info_lck spinlock\n> > * here. We might therefore read a slightly stale value of\n> > * the recoveryPause flag, but it can't be very stale (no\n> > * worse than the last spinlock we did acquire). Since a\n> > * pause request is a pretty asynchronous thing anyway,\n> > * possibly responding to it one WAL record later than we\n> > * otherwise would is a minor issue, so it doesn't seem worth\n> > * adding another spinlock cycle to prevent that.\n> > */\n>\n> How can we do that this is not a 1 byte flag this is enum so I don't\n> think we can read any atomic state without a spin lock here.\n\nI have fixed the other comments and the updated patch is attached.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 4 Feb 2021 19:20:02 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, Feb 4, 2021 at 7:46 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> How can we do that this is not a 1 byte flag this is enum so I don't\n> think we can read any atomic state without a spin lock here.\n\nI think this discussion of atomics is confused. Let's talk about what\natomic reads and writes mean. 
Imagine that you have a 64-bit value\n0x0101010101010101. Somebody sets it to 0x0202020202020202. Imagine\nthat just as they are doing that, someone else reads the value and\ngets 0x0202020201010101, because half of the value has been updated\nand the other half has not yet been updated yet. This kind of thing\ncan actually happen on some platforms and what it means is that on\nthose platforms 8-byte reads and writes are not atomic. The idea of an\n\"atom\" is that it can't be split into pieces but these reads and\nwrites on some platforms are actually not \"atomic\" because they are\nsplit into two 4-byte pieces. But there's no such thing as a 1-byte\nread or write not being atomic. In theory you could imagine a computer\nwhere when you change 0x01 to 0x23 and read in the middle and see 0x21\nor 0x13 or something, but no actual computers behave that way, or at\nleast no mainstream ones that anybody cares about. So the idea that\nyou somehow need a lock to prevent this is just wrong.\n\nConcurrent programs also suffer from another problem which is\nreordering of operations, which can happen either as the program is\ncompiled or as the program is executed by the CPU. The CPU can see you\nset a->x = 1 and a->y = 2 and decide to update y first and then x even\nthough you wrote it the other way around in the program text. To\nprevent this, we have barrier operations; see README.barrier in the\nsource tree for a longer explanation. Atomic operations like\ncompare-and-exchange are also full barriers, so that they not only\nprevent the torn read/write problem described above, but also enforce\norder of operations more strictly.\n\nNow I don't know whether a lock is needed here or not. Maybe it is;\nperhaps for consistency with other code, perhaps because the lock\nacquire and release is serving the function of a barrier; or perhaps\nto guard against some other hazard. 
But saying that it's because\nreading or writing a 1-byte value might not be atomic does not sound\ncorrect.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 11:49:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, Feb 4, 2021 at 7:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > How can we do that this is not a 1 byte flag this is enum so I don't\n> > think we can read any atomic state without a spin lock here.\n>\n> I have fixed the other comments and the updated patch is attached.\n\nCan we just do like below so that we could use the existing\nSetRecoveryPause instead of setting the state outside?\n\n /* loop until recoveryPauseState is set to RECOVERY_NOT_PAUSED */\n while (1)\n {\n RecoveryPauseState state;\n\n state = GetRecoveryPauseState();\n\n if (state == RECOVERY_NOT_PAUSED)\n break;\n\n HandleStartupProcInterrupts();\n\n if (CheckForStandbyTrigger())\n return;\n pgstat_report_wait_start(WAIT_EVENT_RECOVERY_PAUSE);\n\n /*\n * If recovery pause is requested then set it paused. While we are in\n * the loop, user might resume and pause again so set this every time.\n */\n if (state == RECOVERY_PAUSE_REQUESTED)\n SetRecoveryPause(RECOVERY_PAUSED)\n\nAnd a typo - it's \"pg_is_wal_replay_paused\" not\n\"pg_is_wal_repay_paused\" +\n<function>pg_is_wal_repay_paused()</function> returns whether a\nrequest\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 06:22:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Thu, Feb 4, 2021 at 10:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Feb 4, 2021 at 7:46 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > How can we do that this is not a 1 byte flag this is enum so I don't\n> > think we can read any atomic state without a spin lock here.\n>\n> I think this discussion of atomics is confused. Let's talk about what\n> atomic reads and writes mean. Imagine that you have a 64-bit value\n> 0x0101010101010101. Somebody sets it to 0x0202020202020202. Imagine\n> that just as they are doing that, someone else reads the value and\n> gets 0x0202020201010101, because half of the value has been updated\n> and the other half has not yet been updated yet. This kind of thing\n> can actually happen on some platforms and what it means is that on\n> those platforms 8-byte reads and writes are not atomic. The idea of an\n> \"atom\" is that it can't be split into pieces but these reads and\n> writes on some platforms are actually not \"atomic\" because they are\n> split into two 4-byte pieces. But there's no such thing as a 1-byte\n> read or write not being atomic. In theory you could imagine a computer\n> where when you change 0x01 to 0x23 and read in the middle and see 0x21\n> or 0x13 or something, but no actual computers behave that way, or at\n> least no mainstream ones that anybody cares about. So the idea that\n> you somehow need a lock to prevent this is just wrong.\n>\n> Concurrent programs also suffer from another problem which is\n> reordering of operations, which can happen either as the program is\n> compiled or as the program is executed by the CPU. The CPU can see you\n> set a->x = 1 and a->y = 2 and decide to update y first and then x even\n> though you wrote it the other way around in the program text. To\n> prevent this, we have barrier operations; see README.barrier in the\n> source tree for a longer explanation. 
Atomic operations like\n> compare-and-exchange are also full barriers, so that they not only\n> prevent the torn read/write problem described above, but also enforce\n> order of operations more strictly.\n>\n> Now I don't know whether a lock is needed here or not. Maybe it is;\n> perhaps for consistency with other code, perhaps because the lock\n> acquire and release is serving the function of a barrier; or perhaps\n> to guard against some other hazard. But saying that it's because\n> reading or writing a 1-byte value might not be atomic does not sound\n> correct.\n\nI never said that reading/writing 1 byte is not atomic; of course\nit is. My point was that we can only guarantee that a 1-byte read/write\nis atomic, but this variable is not a bool or a 1-byte value: the enum\ncan take 32 bits on a 32-bit platform, so we cannot guarantee an\natomic read/write on every processor, and hence we need a lock.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 10:04:37 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Fri, Feb 5, 2021 at 6:22 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Feb 4, 2021 at 7:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > How can we do that this is not a 1 byte flag this is enum so I don't\n> > > think we can read any atomic state without a spin lock here.\n> >\n> > I have fixed the other comments and the updated patch is attached.\n>\n> Can we just do like below so that we could use the existing\n> SetRecoveryPause instead of setting the state outside?\n>\n> /* loop until recoveryPauseState is set to RECOVERY_NOT_PAUSED */\n> while (1)\n> {\n> RecoveryPauseState state;\n>\n> state = GetRecoveryPauseState();\n>\n> if (state == RECOVERY_NOT_PAUSED)\n> break;\n>\n> HandleStartupProcInterrupts();\n>\n> if (CheckForStandbyTrigger())\n> return;\n> pgstat_report_wait_start(WAIT_EVENT_RECOVERY_PAUSE);\n>\n> /*\n> * If recovery pause is requested then set it paused. While we are in\n> * the loop, user might resume and pause again so set this every time.\n> */\n> if (state == RECOVERY_PAUSE_REQUESTED)\n> SetRecoveryPause(RECOVERY_PAUSED)\n\nWe can not do that, basically, under one lock we need to check the\nstate and set it to pause. Because by the time you release the lock\nsomeone might set it to RECOVERY_NOT_PAUSED then you don't want to set\nit to RECOVERY_PAUSED.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 10:06:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Fri, Feb 5, 2021 at 10:06 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Feb 5, 2021 at 6:22 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Feb 4, 2021 at 7:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > How can we do that this is not a 1 byte flag this is enum so I don't\n> > > > think we can read any atomic state without a spin lock here.\n> > >\n> > > I have fixed the other comments and the updated patch is attached.\n> >\n> > Can we just do like below so that we could use the existing\n> > SetRecoveryPause instead of setting the state outside?\n> >\n> > /* loop until recoveryPauseState is set to RECOVERY_NOT_PAUSED */\n> > while (1)\n> > {\n> > RecoveryPauseState state;\n> >\n> > state = GetRecoveryPauseState();\n> >\n> > if (state == RECOVERY_NOT_PAUSED)\n> > break;\n> >\n> > HandleStartupProcInterrupts();\n> >\n> > if (CheckForStandbyTrigger())\n> > return;\n> > pgstat_report_wait_start(WAIT_EVENT_RECOVERY_PAUSE);\n> >\n> > /*\n> > * If recovery pause is requested then set it paused. While we are in\n> > * the loop, user might resume and pause again so set this every time.\n> > */\n> > if (state == RECOVERY_PAUSE_REQUESTED)\n> > SetRecoveryPause(RECOVERY_PAUSED)\n>\n> We can not do that, basically, under one lock we need to check the\n> state and set it to pause. Because by the time you release the lock\n> someone might set it to RECOVERY_NOT_PAUSED then you don't want to set\n> it to RECOVERY_PAUSED.\n\nGot it. Thanks.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 10:14:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Fri, Feb 5, 2021 at 10:14 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > We can not do that, basically, under one lock we need to check the\n> > state and set it to pause. Because by the time you release the lock\n> > someone might set it to RECOVERY_NOT_PAUSED then you don't want to set\n> > it to RECOVERY_PAUSED.\n>\n> Got it. Thanks.\n\nHi Dilip, I have one more question:\n\n+ /* test for recovery pause, if user has requested the pause */\n+ if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n+ RECOVERY_PAUSE_REQUESTED)\n+ recoveryPausesHere(false);\n+\n+ now = GetCurrentTimestamp();\n+\n\nDo we need now = GetCurrentTimestamp(); here? Because, I see that\nwhenever the variable now is used within the for loop in\nWaitForWALToBecomeAvailable, it's re-calculated anyways. It's being\nused within case XLOG_FROM_STREAM:\n\nAm I missing something?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 7 Feb 2021 18:44:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Sun, Feb 7, 2021 at 6:44 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Feb 5, 2021 at 10:14 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > We can not do that, basically, under one lock we need to check the\n> > > state and set it to pause. Because by the time you release the lock\n> > > someone might set it to RECOVERY_NOT_PAUSED then you don't want to set\n> > > it to RECOVERY_PAUSED.\n> >\n> > Got it. 
Thanks.\n>\n> Hi Dilip, I have one more question:\n>\n> + /* test for recovery pause, if user has requested the pause */\n> + if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n> + RECOVERY_PAUSE_REQUESTED)\n> + recoveryPausesHere(false);\n> +\n> + now = GetCurrentTimestamp();\n> +\n>\n> Do we need now = GetCurrentTimestamp(); here? Because, I see that\n> whenever the variable now is used within the for loop in\n> WaitForWALToBecomeAvailable, it's re-calculated anyways. It's being\n> used within case XLOG_FROM_STREAM:\n>\n> Am I missing something?\n\nYeah, I don't see any reason for doing this, maybe it got copy pasted\nby mistake. Thanks for observing this.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 7 Feb 2021 19:27:02 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "Hi,\n\nOn Sun, 7 Feb 2021 19:27:02 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Sun, Feb 7, 2021 at 6:44 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Feb 5, 2021 at 10:14 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > We can not do that, basically, under one lock we need to check the\n> > > > state and set it to pause. Because by the time you release the lock\n> > > > someone might set it to RECOVERY_NOT_PAUSED then you don't want to set\n> > > > it to RECOVERY_PAUSED.\n> > >\n> > > Got it. Thanks.\n> >\n> > Hi Dilip, I have one more question:\n> >\n> > + /* test for recovery pause, if user has requested the pause */\n> > + if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n> > + RECOVERY_PAUSE_REQUESTED)\n> > + recoveryPausesHere(false);\n> > +\n> > + now = GetCurrentTimestamp();\n> > +\n> >\n> > Do we need now = GetCurrentTimestamp(); here? 
Because, I see that\n> > whenever the variable now is used within the for loop in\n> > WaitForWALToBecomeAvailable, it's re-calculated anyways. It's being\n> > used within case XLOG_FROM_STREAM:\n> >\n> > Am I missing something?\n> \n> Yeah, I don't see any reason for doing this, maybe it got copy pasted\n> by mistake. Thanks for observing this.\n\nI also have a question:\n \n@@ -6270,14 +6291,14 @@ RecoveryRequiresIntParameter(const char *param_name, int currValue, int minValue\n \t\t\t\t\t\t\t currValue,\n \t\t\t\t\t\t\t minValue)));\n \n-\t\t\tSetRecoveryPause(true);\n+\t\t\tSetRecoveryPause(RECOVERY_PAUSED);\n \n \t\t\tereport(LOG,\n \t\t\t\t\t(errmsg(\"recovery has paused\"),\n \t\t\t\t\t errdetail(\"If recovery is unpaused, the server will shut down.\"),\n \t\t\t\t\t errhint(\"You can then restart the server after making the necessary configuration changes.\")));\n \n-\t\t\twhile (RecoveryIsPaused())\n+\t\t\twhile (GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\n \t\t\t{\n \t\t\t\tHandleStartupProcInterrupts();\n\n\n\nIf a user call pg_wal_replay_pause while waiting in RecoveryRequiresIntParameter,\nthe state become 'pause requested' and this never returns to 'paused'.\nShould we check recoveryPauseState in this loop as in recoveryPausesHere?\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 8 Feb 2021 10:06:46 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Mon, 8 Feb 2021 at 6:38 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> Hi,\n>\n> On Sun, 7 Feb 2021 19:27:02 +0530\n> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > On Sun, Feb 7, 2021 at 6:44 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Fri, Feb 5, 2021 at 10:14 AM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > We can not do that, basically, under one lock we need to check the\n> > > > > state and set it to pause. Because by the time you release the\n> lock\n> > > > > someone might set it to RECOVERY_NOT_PAUSED then you don't want to\n> set\n> > > > > it to RECOVERY_PAUSED.\n> > > >\n> > > > Got it. Thanks.\n> > >\n> > > Hi Dilip, I have one more question:\n> > >\n> > > + /* test for recovery pause, if user has requested the pause */\n> > > + if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n> > > + RECOVERY_PAUSE_REQUESTED)\n> > > + recoveryPausesHere(false);\n> > > +\n> > > + now = GetCurrentTimestamp();\n> > > +\n> > >\n> > > Do we need now = GetCurrentTimestamp(); here? Because, I see that\n> > > whenever the variable now is used within the for loop in\n> > > WaitForWALToBecomeAvailable, it's re-calculated anyways. It's being\n> > > used within case XLOG_FROM_STREAM:\n> > >\n> > > Am I missing something?\n> >\n> > Yeah, I don't see any reason for doing this, maybe it got copy pasted\n> > by mistake. 
Thanks for observing this.\n>\n> I also have a question:\n>\n> @@ -6270,14 +6291,14 @@ RecoveryRequiresIntParameter(const char\n> *param_name, int currValue, int minValue\n> currValue,\n> minValue)));\n>\n> - SetRecoveryPause(true);\n> + SetRecoveryPause(RECOVERY_PAUSED);\n>\n> ereport(LOG,\n> (errmsg(\"recovery has paused\"),\n> errdetail(\"If recovery is\n> unpaused, the server will shut down.\"),\n> errhint(\"You can then restart the\n> server after making the necessary configuration changes.\")));\n>\n> - while (RecoveryIsPaused())\n> + while (GetRecoveryPauseState() !=\n> RECOVERY_NOT_PAUSED)\n> {\n> HandleStartupProcInterrupts();\n>\n>\n>\n> If a user call pg_wal_replay_pause while waiting in\n> RecoveryRequiresIntParameter,\n> the state become 'pause requested' and this never returns to 'paused'.\n> Should we check recoveryPauseState in this loop as in\n\n\nI think the right fix should be that the state should never go from\n‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\ncare of that.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 8 Feb 2021 07:51:22 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Mon, 8 Feb 2021 07:51:22 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Mon, 8 Feb 2021 at 6:38 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> \n> > Hi,\n> >\n> > On Sun, 7 Feb 2021 19:27:02 +0530\n> > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > On Sun, Feb 7, 2021 at 6:44 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > On Fri, Feb 5, 2021 at 10:14 AM Bharath Rupireddy\n> > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > We can not do that, basically, under one lock we need to check the\n> > > > > > state and set it to pause. Because by the time you release the\n> > lock\n> > > > > > someone might set it to RECOVERY_NOT_PAUSED then you don't want to\n> > set\n> > > > > > it to RECOVERY_PAUSED.\n> > > > >\n> > > > > Got it. Thanks.\n> > > >\n> > > > Hi Dilip, I have one more question:\n> > > >\n> > > > + /* test for recovery pause, if user has requested the pause */\n> > > > + if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n> > > > + RECOVERY_PAUSE_REQUESTED)\n> > > > + recoveryPausesHere(false);\n> > > > +\n> > > > + now = GetCurrentTimestamp();\n> > > > +\n> > > >\n> > > > Do we need now = GetCurrentTimestamp(); here? Because, I see that\n> > > > whenever the variable now is used within the for loop in\n> > > > WaitForWALToBecomeAvailable, it's re-calculated anyways. It's being\n> > > > used within case XLOG_FROM_STREAM:\n> > > >\n> > > > Am I missing something?\n> > >\n> > > Yeah, I don't see any reason for doing this, maybe it got copy pasted\n> > > by mistake. 
Thanks for observing this.\n> >\n> > I also have a question:\n> >\n> > @@ -6270,14 +6291,14 @@ RecoveryRequiresIntParameter(const char\n> > *param_name, int currValue, int minValue\n> > currValue,\n> > minValue)));\n> >\n> > - SetRecoveryPause(true);\n> > + SetRecoveryPause(RECOVERY_PAUSED);\n> >\n> > ereport(LOG,\n> > (errmsg(\"recovery has paused\"),\n> > errdetail(\"If recovery is\n> > unpaused, the server will shut down.\"),\n> > errhint(\"You can then restart the\n> > server after making the necessary configuration changes.\")));\n> >\n> > - while (RecoveryIsPaused())\n> > + while (GetRecoveryPauseState() !=\n> > RECOVERY_NOT_PAUSED)\n> > {\n> > HandleStartupProcInterrupts();\n> >\n> >\n> >\n> > If a user call pg_wal_replay_pause while waiting in\n> > RecoveryRequiresIntParameter,\n> > the state become 'pause requested' and this never returns to 'paused'.\n> > Should we check recoveryPauseState in this loop as in\n> \n> \n> I think the right fix should be that the state should never go from\n> ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> care of that.\n\nIt makes sense to take care of this in pg_wal_replay_pause, but I wonder\nit can not handle the case that a user resume and pause again while a sleep.\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 8 Feb 2021 11:47:21 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Mon, Feb 8, 2021 at 8:18 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> On Mon, 8 Feb 2021 07:51:22 +0530\n> Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > On Mon, 8 Feb 2021 at 6:38 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > > Hi,\n> > >\n> > > On Sun, 7 Feb 2021 19:27:02 +0530\n> > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > > On Sun, Feb 7, 2021 at 6:44 PM Bharath Rupireddy\n> > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > >\n> > > > > On Fri, Feb 5, 2021 at 10:14 AM Bharath Rupireddy\n> > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > > We can not do that, basically, under one lock we need to check the\n> > > > > > > state and set it to pause. Because by the time you release the\n> > > lock\n> > > > > > > someone might set it to RECOVERY_NOT_PAUSED then you don't want to\n> > > set\n> > > > > > > it to RECOVERY_PAUSED.\n> > > > > >\n> > > > > > Got it. Thanks.\n> > > > >\n> > > > > Hi Dilip, I have one more question:\n> > > > >\n> > > > > + /* test for recovery pause, if user has requested the pause */\n> > > > > + if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n> > > > > + RECOVERY_PAUSE_REQUESTED)\n> > > > > + recoveryPausesHere(false);\n> > > > > +\n> > > > > + now = GetCurrentTimestamp();\n> > > > > +\n> > > > >\n> > > > > Do we need now = GetCurrentTimestamp(); here? Because, I see that\n> > > > > whenever the variable now is used within the for loop in\n> > > > > WaitForWALToBecomeAvailable, it's re-calculated anyways. It's being\n> > > > > used within case XLOG_FROM_STREAM:\n> > > > >\n> > > > > Am I missing something?\n> > > >\n> > > > Yeah, I don't see any reason for doing this, maybe it got copy pasted\n> > > > by mistake. 
Thanks for observing this.\n> > >\n> > > I also have a question:\n> > >\n> > > @@ -6270,14 +6291,14 @@ RecoveryRequiresIntParameter(const char\n> > > *param_name, int currValue, int minValue\n> > > currValue,\n> > > minValue)));\n> > >\n> > > - SetRecoveryPause(true);\n> > > + SetRecoveryPause(RECOVERY_PAUSED);\n> > >\n> > > ereport(LOG,\n> > > (errmsg(\"recovery has paused\"),\n> > > errdetail(\"If recovery is\n> > > unpaused, the server will shut down.\"),\n> > > errhint(\"You can then restart the\n> > > server after making the necessary configuration changes.\")));\n> > >\n> > > - while (RecoveryIsPaused())\n> > > + while (GetRecoveryPauseState() !=\n> > > RECOVERY_NOT_PAUSED)\n> > > {\n> > > HandleStartupProcInterrupts();\n> > >\n> > >\n> > >\n> > > If a user call pg_wal_replay_pause while waiting in\n> > > RecoveryRequiresIntParameter,\n> > > the state become 'pause requested' and this never returns to 'paused'.\n> > > Should we check recoveryPauseState in this loop as in\n> >\n> >\n> > I think the right fix should be that the state should never go from\n> > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> > care of that.\n>\n> It makes sense to take care of this in pg_wal_replay_pause, but I wonder\n> it can not handle the case that a user resume and pause again while a sleep.\n\nRight, we will have to check and set in the loop. But we should not\nallow the state to go from paused to pause requested irrespective of\nthis.\n\nI will make these changes and send the updated patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Feb 2021 09:35:00 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Mon, Feb 8, 2021 at 9:35 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > If a user call pg_wal_replay_pause while waiting in\n> > > > RecoveryRequiresIntParameter,\n> > > > the state become 'pause requested' and this never returns to 'paused'.\n> > > > Should we check recoveryPauseState in this loop as in\n> > >\n> > >\n> > > I think the right fix should be that the state should never go from\n> > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> > > care of that.\n> >\n> > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\n> > it can not handle the case that a user resume and pause again while a sleep.\n>\n> Right, we will have to check and set in the loop. But we should not\n> allow the state to go from paused to pause requested irrespective of\n> this.\n\nWe can think of a state machine with the states \"not paused\", \"pause\nrequested\", \"paused\". While we can go to \"not paused\" from any state,\nbut cannot go to \"pause requested\" from \"paused\".\n\nSo, will pg_wal_replay_pause throw an error or warning or silently\nreturn when it's called and the state is \"paused\" already? Maybe we\nshould add better commenting in pg_wal_replay_pause why we don't set\n\"pause requested\" when the state is already \"paused\".\n\nAnd also, if we are adding below code in the\nRecoveryRequiresIntParameter loop, it's better to make it a function,\nlike your earlier patch.\n\n /*\n * If recovery pause is requested then set it paused. 
While we are in\n * the loop, user might resume and pause again so set this every time.\n */\n SpinLockAcquire(&XLogCtl->info_lck);\n if (XLogCtl->recoveryPauseState == RECOVERY_PAUSE_REQUESTED)\n XLogCtl->recoveryPauseState = RECOVERY_PAUSED;\n SpinLockRelease(&XLogCtl->info_lck);\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Feb 2021 09:49:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Mon, Feb 8, 2021 at 9:49 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Feb 8, 2021 at 9:35 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > If a user call pg_wal_replay_pause while waiting in\n> > > > > RecoveryRequiresIntParameter,\n> > > > > the state become 'pause requested' and this never returns to 'paused'.\n> > > > > Should we check recoveryPauseState in this loop as in\n> > > >\n> > > >\n> > > > I think the right fix should be that the state should never go from\n> > > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> > > > care of that.\n> > >\n> > > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\n> > > it can not handle the case that a user resume and pause again while a sleep.\n> >\n> > Right, we will have to check and set in the loop. But we should not\n> > allow the state to go from paused to pause requested irrespective of\n> > this.\n>\n> We can think of a state machine with the states \"not paused\", \"pause\n> requested\", \"paused\". 
While we can go to \"not paused\" from any state,\n> but cannot go to \"pause requested\" from \"paused\".\n>\n> So, will pg_wal_replay_pause throw an error or warning or silently\n> return when it's called and the state is \"paused\" already?\n\nIt should just silently return, because pg_wal_replay_pause only\nclaims to request a pause; it does not mean that recovery cannot\nalready be paused when it is called.\n\n> Maybe we\n> should add better commenting in pg_wal_replay_pause why we don't set\n> \"pause requested\" when the state is already \"paused\".\n\n\n\n> And also, if we are adding below code in the\n> RecoveryRequiresIntParameter loop, it's better to make it a function,\n> like your earlier patch.\n>\n> /*\n> * If recovery pause is requested then set it paused. While we are in\n> * the loop, user might resume and pause again so set this every time.\n> */\n> SpinLockAcquire(&XLogCtl->info_lck);\n> if (XLogCtl->recoveryPauseState == RECOVERY_PAUSE_REQUESTED)\n> XLogCtl->recoveryPauseState = RECOVERY_PAUSED;\n> SpinLockRelease(&XLogCtl->info_lck);\n\nYes, it should go back to a function now, as in the older versions.\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Feb 2021 10:00:05 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Mon, 8 Feb 2021 09:35:00 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Mon, Feb 8, 2021 at 8:18 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > On Mon, 8 Feb 2021 07:51:22 +0530\n> > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > On Mon, 8 Feb 2021 at 6:38 AM, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > > Hi,\n> > > >\n> > > > On Sun, 7 Feb 2021 19:27:02 +0530\n> > > > Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > > On Sun, Feb 7, 2021 at 6:44 PM Bharath Rupireddy\n> > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > >\n> > > > > > On Fri, Feb 5, 2021 at 10:14 AM Bharath Rupireddy\n> > > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > > > > We can not do that, basically, under one lock we need to check the\n> > > > > > > > state and set it to pause. Because by the time you release the\n> > > > lock\n> > > > > > > > someone might set it to RECOVERY_NOT_PAUSED then you don't want to\n> > > > set\n> > > > > > > > it to RECOVERY_PAUSED.\n> > > > > > >\n> > > > > > > Got it. Thanks.\n> > > > > >\n> > > > > > Hi Dilip, I have one more question:\n> > > > > >\n> > > > > > + /* test for recovery pause, if user has requested the pause */\n> > > > > > + if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n> > > > > > + RECOVERY_PAUSE_REQUESTED)\n> > > > > > + recoveryPausesHere(false);\n> > > > > > +\n> > > > > > + now = GetCurrentTimestamp();\n> > > > > > +\n> > > > > >\n> > > > > > Do we need now = GetCurrentTimestamp(); here? Because, I see that\n> > > > > > whenever the variable now is used within the for loop in\n> > > > > > WaitForWALToBecomeAvailable, it's re-calculated anyways. It's being\n> > > > > > used within case XLOG_FROM_STREAM:\n> > > > > >\n> > > > > > Am I missing something?\n> > > > >\n> > > > > Yeah, I don't see any reason for doing this, maybe it got copy pasted\n> > > > > by mistake. 
Thanks for observing this.\n> > > >\n> > > > I also have a question:\n> > > >\n> > > > @@ -6270,14 +6291,14 @@ RecoveryRequiresIntParameter(const char\n> > > > *param_name, int currValue, int minValue\n> > > > currValue,\n> > > > minValue)));\n> > > >\n> > > > - SetRecoveryPause(true);\n> > > > + SetRecoveryPause(RECOVERY_PAUSED);\n> > > >\n> > > > ereport(LOG,\n> > > > (errmsg(\"recovery has paused\"),\n> > > > errdetail(\"If recovery is\n> > > > unpaused, the server will shut down.\"),\n> > > > errhint(\"You can then restart the\n> > > > server after making the necessary configuration changes.\")));\n> > > >\n> > > > - while (RecoveryIsPaused())\n> > > > + while (GetRecoveryPauseState() !=\n> > > > RECOVERY_NOT_PAUSED)\n> > > > {\n> > > > HandleStartupProcInterrupts();\n> > > >\n> > > >\n> > > >\n> > > > If a user call pg_wal_replay_pause while waiting in\n> > > > RecoveryRequiresIntParameter,\n> > > > the state become 'pause requested' and this never returns to 'paused'.\n> > > > Should we check recoveryPauseState in this loop as in\n> > >\n> > >\n> > > I think the right fix should be that the state should never go from\n> > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> > > care of that.\n> >\n> > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\n> > it can not handle the case that a user resume and pause again while a sleep.\n> \n> Right, we will have to check and set in the loop. But we should not\n> allow the state to go from paused to pause requested irrespective of\n> this.\n\nI agree with you.\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 8 Feb 2021 14:12:35 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "At Mon, 8 Feb 2021 14:12:35 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \r\n> > > > I think the right fix should be that the state should never go from\r\n> > > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\r\n> > > > care of that.\r\n> > >\r\n> > > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\r\n> > > it can not handle the case that a user resume and pause again while a sleep.\r\n> > \r\n> > Right, we will have to check and set in the loop. But we should not\r\n> > allow the state to go from paused to pause requested irrespective of\r\n> > this.\r\n> \r\n> I agree with you.\r\n\r\nIs there any actual harm if PAUSED returns to REQUESETED, assuming we\r\nimmediately change the state to PAUSE always we see REQUESTED in the\r\nwaiting loop, despite that we allow change the state from PAUSE to\r\nREQUESTED via NOT_PAUSED between two successive loop condition checks?\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Mon, 08 Feb 2021 17:32:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Mon, 08 Feb 2021 17:32:46 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Mon, 8 Feb 2021 14:12:35 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> > > > > I think the right fix should be that the state should never go from\n> > > > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> > > > > care of that.\n> > > >\n> > > > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\n> > > > it can not handle the case that a user resume and pause again while a sleep.\n> > > \n> > > Right, we will have to check and set in the loop. 
But we should not\n> > > allow the state to go from paused to pause requested irrespective of\n> > > this.\n> > \n> > I agree with you.\n> \n> Is there any actual harm if PAUSED returns to REQUESETED, assuming we\n> immediately change the state to PAUSE always we see REQUESTED in the\n> waiting loop, despite that we allow change the state from PAUSE to\n> REQUESTED via NOT_PAUSED between two successive loop condition checks?\n\nIf a user calls pg_wal_replay_pause while recovery is paused, users can\nobserve 'pause requested' during a sleep although the time window is short. \nIt seems a bit odd that pg_wal_replay_pause changes the state like this\nbecause this state means that recovery may not be 'paused'. \n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 8 Feb 2021 17:48:25 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Mon, Feb 8, 2021 at 2:19 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> On Mon, 08 Feb 2021 17:32:46 +0900 (JST)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> > At Mon, 8 Feb 2021 14:12:35 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in\n> > > > > > I think the right fix should be that the state should never go from\n> > > > > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> > > > > > care of that.\n> > > > >\n> > > > > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\n> > > > > it can not handle the case that a user resume and pause again while a sleep.\n> > > >\n> > > > Right, we will have to check and set in the loop. 
But we should not\n> > > > allow the state to go from paused to pause requested irrespective of\n> > > > this.\n> > >\n> > > I agree with you.\n> >\n> > Is there any actual harm if PAUSED returns to REQUESETED, assuming we\n> > immediately change the state to PAUSE always we see REQUESTED in the\n> > waiting loop, despite that we allow change the state from PAUSE to\n> > REQUESTED via NOT_PAUSED between two successive loop condition checks?\n>\n> If a user call pg_wal_replay_pause while recovery is paused, users can\n> observe 'pause requested' during a sleep alghough the time window is short.\n> It seems a bit odd that pg_wal_replay_pause changes the state like this\n> because This state meeans that recovery may not be 'paused'.\n\nYeah, this appears wrong that after 'paused' we go back to 'pause\nrequested'. the logical state transition should always be as below\n\nNOT PAUSED -> PAUSE REQUESTED or PAUSED (maybe we should always go to\nrequest and then paused but there is nothing wrong with going to\npaused)\nPAUSE REQUESTED -> NOT PAUSE or PAUSED (either cancel the request or get paused)\nPAUSED -> NOT PAUSED (from PAUSED we should not go to the\nPAUSE_REQUESTED without going to NOT PAUSED)\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Feb 2021 17:05:52 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "At Mon, 8 Feb 2021 17:05:52 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \r\n> On Mon, Feb 8, 2021 at 2:19 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\r\n> >\r\n> > On Mon, 08 Feb 2021 17:32:46 +0900 (JST)\r\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > > At Mon, 8 Feb 2021 14:12:35 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in\r\n> > > > > > > I think the right fix should be that the state should never go from\r\n> > > > > > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\r\n> > > > > > > care of that.\r\n> > > > > >\r\n> > > > > > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\r\n> > > > > > it can not handle the case that a user resume and pause again while a sleep.\r\n> > > > >\r\n> > > > > Right, we will have to check and set in the loop. But we should not\r\n> > > > > allow the state to go from paused to pause requested irrespective of\r\n> > > > > this.\r\n> > > >\r\n> > > > I agree with you.\r\n> > >\r\n> > > Is there any actual harm if PAUSED returns to REQUESETED, assuming we\r\n> > > immediately change the state to PAUSE always we see REQUESTED in the\r\n> > > waiting loop, despite that we allow change the state from PAUSE to\r\n> > > REQUESTED via NOT_PAUSED between two successive loop condition checks?\r\n> >\r\n> > If a user call pg_wal_replay_pause while recovery is paused, users can\r\n> > observe 'pause requested' during a sleep alghough the time window is short.\r\n> > It seems a bit odd that pg_wal_replay_pause changes the state like this\r\n> > because This state meeans that recovery may not be 'paused'.\r\n> \r\n> Yeah, this appears wrong that after 'paused' we go back to 'pause\r\n> requested'. 
the logical state transition should always be as below\r\n> \r\n> NOT PAUSED -> PAUSE REQUESTED or PAUSED (maybe we should always go to\r\n> request and then paused but there is nothing wrong with going to\r\n> paused)\r\n> PAUSE REQUESTED -> NOT PAUSE or PAUSED (either cancel the request or get paused)\r\n> PAUSED -> NOT PAUSED (from PAUSED we should not go to the\r\n> PAUSE_REQUESTED without going to NOT PAUSED)\r\n\r\nI didn't ask about the internal logical correctness, but asked about\r\n*actual harm* revealed to users. I don't see any actual harm in the\r\n\"wrong\" transition because:\r\n\r\n1. It is not wrong nor strange that the invoker of pg_wal_replay_pause\r\n   sees the state PAUSE_REQUESTED before it changes to PAUSED. Even if\r\n   the previous state was PAUSED, it is no business of the requestors.\r\n \r\n2. There is no harm on the recovery side since PAUSE_REQUESTED and PAUSED\r\n   are effectively the same state.\r\n\r\n3. After we inhibited the direct transition from\r\n   PAUSED->PAUSE_REQUESTED, effectively the same transition\r\n   PAUSED->NOT_PAUSED->PAUSE_REQUESTED is still allowed. The inhibition\r\n   of the former transition doesn't protect anything other than the seeming\r\n   correctness of the transition.\r\n\r\nIf we are going to introduce that complexity, I'd like to re-propose\r\nto introduce interlocking between the recovery side and the\r\npause-requestor side instead of introducing the intermediate state,\r\nwhich is the cause of the complexity.\r\n\r\nThe problem is due to the looseness of checking for pause requests in\r\nthe existing checkpoints, and the window after the last checkpoint\r\nuntil calling rm_redo().\r\n\r\nThe attached PoC patch adds:\r\n\r\n- A solid checkpoint just before calling rm_redo. 
It doesn't add a\r\n info_lck since the check is done in the existing lock section.\r\n\r\n- Interlocking between the above and SetRecoveryPause without adding a\r\n shared variable.\r\n (This is what I called \"synchronous\" before.)\r\n\r\nThere's a concern about pausing after updating\r\nXlogCtl->replayEndRecPtr but I don't see an issue yet..\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center", "msg_date": "Tue, 09 Feb 2021 10:58:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, 09 Feb 2021 10:58:04 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Mon, 8 Feb 2021 17:05:52 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> > On Mon, Feb 8, 2021 at 2:19 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > On Mon, 08 Feb 2021 17:32:46 +0900 (JST)\n> > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > > At Mon, 8 Feb 2021 14:12:35 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in\n> > > > > > > > I think the right fix should be that the state should never go from\n> > > > > > > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> > > > > > > > care of that.\n> > > > > > >\n> > > > > > > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\n> > > > > > > it can not handle the case that a user resume and pause again while a sleep.\n> > > > > >\n> > > > > > Right, we will have to check and set in the loop. 
But we should not\n> > > > > > allow the state to go from paused to pause requested irrespective of\n> > > > > > this.\n> > > > >\n> > > > > I agree with you.\n> > > >\n> > > > Is there any actual harm if PAUSED returns to REQUESETED, assuming we\n> > > > immediately change the state to PAUSE always we see REQUESTED in the\n> > > > waiting loop, despite that we allow change the state from PAUSE to\n> > > > REQUESTED via NOT_PAUSED between two successive loop condition checks?\n> > >\n> > > If a user call pg_wal_replay_pause while recovery is paused, users can\n> > > observe 'pause requested' during a sleep alghough the time window is short.\n> > > It seems a bit odd that pg_wal_replay_pause changes the state like this\n> > > because This state meeans that recovery may not be 'paused'.\n> > \n> > Yeah, this appears wrong that after 'paused' we go back to 'pause\n> > requested'. the logical state transition should always be as below\n> > \n> > NOT PAUSED -> PAUSE REQUESTED or PAUSED (maybe we should always go to\n> > request and then paused but there is nothing wrong with going to\n> > paused)\n> > PAUSE REQUESTED -> NOT PAUSE or PAUSED (either cancel the request or get paused)\n> > PAUSED -> NOT PAUSED (from PAUSED we should not go to the\n> > PAUSE_REQUESTED without going to NOT PAUSED)\n> \n> I didn't asked about the internal logical correctness, but asked about\n> *actual harm* revealed to users. I don't see any actual harm in the\n> \"wrong\" transition because:\n\nActually, the incorrect state transition is not so harmful except that\nusers can observe unnecessary state changes. However, I don't think any\nactual harm in prohibit the incorrect state transition. 
So, I think we\ncan do it.\n\n> If we are going to introduce that complexity, I'd like to re-propose\n> to introduce interlocking between the recovery side and the\n> pause-requestor side instead of introducing the intermediate state,\n> which is the cause of the complexity.\n> \n> The attached PoC patch adds:\n> \n> - A solid checkpoint just before calling rm_redo. It doesn't add a\n> info_lck since the check is done in the existing lock section.\n> \n> - Interlocking between the above and SetRecoveryPause without adding a\n> shared variable.\n> (This is what I called \"synchronous\" before.)\n\nI think waiting in pg_wal_replay_pause is a possible option, but this will\nalso introduce other complexity to the code, such as the possibility of waiting\nfor long or forever. For example, waiting in SetRecoveryPause as in your POC\npatch appears to make recovery stuck in RecoveryRequiresIntParameter.\n\nBy the way, attaching another patch to a thread without the original patch\nwill make the commitfest and cfbot apps confused...\n\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 9 Feb 2021 12:23:23 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Tue, Feb 9, 2021 at 7:28 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 8 Feb 2021 17:05:52 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Mon, Feb 8, 2021 at 2:19 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > On Mon, 08 Feb 2021 17:32:46 +0900 (JST)\n> > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > > At Mon, 8 Feb 2021 14:12:35 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in\n> > > > > > > > I think the right fix should be that the state should never go from\n> > > > > > > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> > > > > > > > care of that.\n> > > > > > >\n> > > > > > > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\n> > > > > > > it can not handle the case that a user resume and pause again while a sleep.\n> > > > > >\n> > > > > > Right, we will have to check and set in the loop. But we should not\n> > > > > > allow the state to go from paused to pause requested irrespective of\n> > > > > > this.\n> > > > >\n> > > > > I agree with you.\n> > > >\n> > > > Is there any actual harm if PAUSED returns to REQUESETED, assuming we\n> > > > immediately change the state to PAUSE always we see REQUESTED in the\n> > > > waiting loop, despite that we allow change the state from PAUSE to\n> > > > REQUESTED via NOT_PAUSED between two successive loop condition checks?\n> > >\n> > > If a user call pg_wal_replay_pause while recovery is paused, users can\n> > > observe 'pause requested' during a sleep alghough the time window is short.\n> > > It seems a bit odd that pg_wal_replay_pause changes the state like this\n> > > because This state meeans that recovery may not be 'paused'.\n> >\n> > Yeah, this appears wrong that after 'paused' we go back to 'pause\n> > requested'. 
the logical state transition should always be as below\n> >\n> > NOT PAUSED -> PAUSE REQUESTED or PAUSED (maybe we should always go to\n> > request and then paused but there is nothing wrong with going to\n> > paused)\n> > PAUSE REQUESTED -> NOT PAUSE or PAUSED (either cancel the request or get paused)\n> > PAUSED -> NOT PAUSED (from PAUSED we should not go to the\n> > PAUSE_REQUESTED without going to NOT PAUSED)\n>\n> I didn't asked about the internal logical correctness, but asked about\n> *actual harm* revealed to users. I don't see any actual harm in the\n> \"wrong\" transition because:\n>\n> 1. It is not wrong nor strange that the invoker of pg_wal_replay_pause\n> sees the state PAUSE_REQUESTED before it changes to PAUSED. Even if\n> the previous state was PAUSED, it is no business of the requestors.\n\nThe 'pg_wal_replay_pause' call requests to pause the recovery, so it is fine\nto first change the state to PAUSE_REQUESTED and then to PAUSED. But\nif the recovery is already paused then what is the point in bringing\nthe state back to PAUSE_REQUESTED? For example, suppose the tool wants\nto raise the pause request and wait until recovery is actually paused; if\nit was already paused and we bring it back to PAUSE_REQUESTED,\nit doesn't look correct and we might need to do an extra wait\ncycle in the tool until it reaches back to PAUSED. I don't think that\nis really the best design.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Feb 2021 09:43:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Tue, Feb 9, 2021 at 8:54 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> On Tue, 09 Feb 2021 10:58:04 +0900 (JST)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> > At Mon, 8 Feb 2021 17:05:52 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > > On Mon, Feb 8, 2021 at 2:19 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > >\n> > > > On Mon, 08 Feb 2021 17:32:46 +0900 (JST)\n> > > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > > At Mon, 8 Feb 2021 14:12:35 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in\n> > > > > > > > > I think the right fix should be that the state should never go from\n> > > > > > > > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> > > > > > > > > care of that.\n> > > > > > > >\n> > > > > > > > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\n> > > > > > > > it can not handle the case that a user resume and pause again while a sleep.\n> > > > > > >\n> > > > > > > Right, we will have to check and set in the loop. But we should not\n> > > > > > > allow the state to go from paused to pause requested irrespective of\n> > > > > > > this.\n> > > > > >\n> > > > > > I agree with you.\n> > > > >\n> > > > > Is there any actual harm if PAUSED returns to REQUESETED, assuming we\n> > > > > immediately change the state to PAUSE always we see REQUESTED in the\n> > > > > waiting loop, despite that we allow change the state from PAUSE to\n> > > > > REQUESTED via NOT_PAUSED between two successive loop condition checks?\n> > > >\n> > > > If a user call pg_wal_replay_pause while recovery is paused, users can\n> > > > observe 'pause requested' during a sleep alghough the time window is short.\n> > > > It seems a bit odd that pg_wal_replay_pause changes the state like this\n> > > > because This state meeans that recovery may not be 'paused'.\n> > >\n> > > Yeah, this appears wrong that after 'paused' we go back to 'pause\n> > > requested'. 
the logical state transition should always be as below\n> > >\n> > > NOT PAUSED -> PAUSE REQUESTED or PAUSED (maybe we should always go to\n> > > request and then paused but there is nothing wrong with going to\n> > > paused)\n> > > PAUSE REQUESTED -> NOT PAUSE or PAUSED (either cancel the request or get paused)\n> > > PAUSED -> NOT PAUSED (from PAUSED we should not go to the\n> > > PAUSE_REQUESTED without going to NOT PAUSED)\n> >\n> > I didn't asked about the internal logical correctness, but asked about\n> > *actual harm* revealed to users. I don't see any actual harm in the\n> > \"wrong\" transition because:\n>\n> Actually, the incorrect state transition is not so harmful except that\n> users can observe unnecessary state changes. However, I don't think any\n> actual harm in prohibit the incorrect state transition. So, I think we\n> can do it.\n>\n> > If we are going to introduce that complexity, I'd like to re-propose\n> > to introduce interlocking between the recovery side and the\n> > pause-requestor side instead of introducing the intermediate state,\n> > which is the cause of the complexity.\n> >\n> > The attached PoC patch adds:\n> >\n> > - A solid checkpoint just before calling rm_redo. It doesn't add a\n> > info_lck since the check is done in the existing lock section.\n> >\n> > - Interlocking between the above and SetRecoveryPause without adding a\n> > shared variable.\n> > (This is what I called \"synchronous\" before.)\n>\n> I think waiting in pg_wal_replay_pasue is a possible option, but this will\n> also introduce other complexity to codes such as possibility of waiting for\n> long or for ever. For example, waiting in SetRecoveryPause as in your POC\n> patch appears to make recovery stuck in RecoveryRequiresIntParameter.\n>\n\nI agree with this, I think we previously discussed these approaches\nwhere we can wait in pg_wal_replay_pasue() or\npg_is_wal_replay_pasued(). 
In fact, we had an older version where we\nput the wait in pg_is_wal_replay_paused(). But it appeared that doing\nso would add extra complexity; besides, instead of waiting in these\nAPIs, the wait logic can be implemented in the application code which\nis actually using these APIs, and IMHO that will give better control to\nthe users.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Feb 2021 09:47:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, Feb 9, 2021 at 9:48 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Feb 9, 2021 at 8:54 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > On Tue, 09 Feb 2021 10:58:04 +0900 (JST)\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > > At Mon, 8 Feb 2021 17:05:52 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > > > On Mon, Feb 8, 2021 at 2:19 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > > >\n> > > > > On Mon, 08 Feb 2021 17:32:46 +0900 (JST)\n> > > > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > > >\n> > > > > > At Mon, 8 Feb 2021 14:12:35 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in\n> > > > > > > > > > I think the right fix should be that the state should never go from\n> > > > > > > > > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\n> > > > > > > > > > care of that.\n> > > > > > > > >\n> > > > > > > > > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\n> > > > > > > > > it can not handle the case that a user resume and pause again while a sleep.\n> > > > > > > >\n> > > > > > > > Right, we will have to check and set in the loop. 
But we should not\n> > > > > > > > allow the state to go from paused to pause requested irrespective of\n> > > > > > > > this.\n> > > > > > >\n> > > > > > > I agree with you.\n> > > > > >\n> > > > > > Is there any actual harm if PAUSED returns to REQUESETED, assuming we\n> > > > > > immediately change the state to PAUSE always we see REQUESTED in the\n> > > > > > waiting loop, despite that we allow change the state from PAUSE to\n> > > > > > REQUESTED via NOT_PAUSED between two successive loop condition checks?\n> > > > >\n> > > > > If a user call pg_wal_replay_pause while recovery is paused, users can\n> > > > > observe 'pause requested' during a sleep alghough the time window is short.\n> > > > > It seems a bit odd that pg_wal_replay_pause changes the state like this\n> > > > > because This state meeans that recovery may not be 'paused'.\n> > > >\n> > > > Yeah, this appears wrong that after 'paused' we go back to 'pause\n> > > > requested'. the logical state transition should always be as below\n> > > >\n> > > > NOT PAUSED -> PAUSE REQUESTED or PAUSED (maybe we should always go to\n> > > > request and then paused but there is nothing wrong with going to\n> > > > paused)\n> > > > PAUSE REQUESTED -> NOT PAUSE or PAUSED (either cancel the request or get paused)\n> > > > PAUSED -> NOT PAUSED (from PAUSED we should not go to the\n> > > > PAUSE_REQUESTED without going to NOT PAUSED)\n> > >\n> > > I didn't asked about the internal logical correctness, but asked about\n> > > *actual harm* revealed to users. I don't see any actual harm in the\n> > > \"wrong\" transition because:\n> >\n> > Actually, the incorrect state transition is not so harmful except that\n> > users can observe unnecessary state changes. However, I don't think any\n> > actual harm in prohibit the incorrect state transition. 
So, I think we\n> > can do it.\n> >\n> > > If we are going to introduce that complexity, I'd like to re-propose\n> > > to introduce interlocking between the recovery side and the\n> > > pause-requestor side instead of introducing the intermediate state,\n> > > which is the cause of the complexity.\n> > >\n> > > The attached PoC patch adds:\n> > >\n> > > - A solid checkpoint just before calling rm_redo. It doesn't add a\n> > > info_lck since the check is done in the existing lock section.\n> > >\n> > > - Interlocking between the above and SetRecoveryPause without adding a\n> > > shared variable.\n> > > (This is what I called \"synchronous\" before.)\n> >\n> > I think waiting in pg_wal_replay_pasue is a possible option, but this will\n> > also introduce other complexity to codes such as possibility of waiting for\n> > long or for ever. For example, waiting in SetRecoveryPause as in your POC\n> > patch appears to make recovery stuck in RecoveryRequiresIntParameter.\n> >\n>\n> I agree with this, I think we previously discussed these approaches\n> where we can wait in pg_wal_replay_pasue() or\n> pg_is_wal_replay_pasued(). In fact, we had an older version where we\n> put the wait in pg_is_wal_replay_pasued(). But it appeared that doing\n> so will add extra complexity as well as instead of waiting in these\n> APIs the wait logic can be implemented in the application code which\n> is actually using these APIs and IMHO that will give better control to\n> the users.\n\nAnd also, having waiting logic in pg_wal_replay_pause() or\npg_is_wal_replay_paused() would require changes to the existing API, such as\na timeout so they do not wait infinitely.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Feb 2021 09:58:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "At Tue, 9 Feb 2021 12:23:23 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> On Tue, 09 Feb 2021 10:58:04 +0900 (JST)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > I didn't asked about the internal logical correctness, but asked about\n> > *actual harm* revealed to users. I don't see any actual harm in the\n> > \"wrong\" transition because:\n> \n> Actually, the incorrect state transition is not so harmful except that\n> users can observe unnecessary state changes. However, I don't think any\n> actual harm in prohibit the incorrect state transition. So, I think we\n> can do it.\n\nI don't say that we cannot do that. Just it is needeless.\n\n> > If we are going to introduce that complexity, I'd like to re-propose\n> > to introduce interlocking between the recovery side and the\n> > pause-requestor side instead of introducing the intermediate state,\n> > which is the cause of the complexity.\n> > \n> > The attached PoC patch adds:\n> > \n> > - A solid checkpoint just before calling rm_redo. It doesn't add a\n> > info_lck since the check is done in the existing lock section.\n> > \n> > - Interlocking between the above and SetRecoveryPause without adding a\n> > shared variable.\n> > (This is what I called \"synchronous\" before.)\n> \n> I think waiting in pg_wal_replay_pasue is a possible option, but this will\n> also introduce other complexity to codes such as possibility of waiting for\n> long or for ever. For example, waiting in SetRecoveryPause as in your POC\n> patch appears to make recovery stuck in RecoveryRequiresIntParameter.\n\nThat is easily avoidable CFI in the loop.\n\n> By the way, attaching other patch to a thread without the original patch\n> will make commitfest and cfbot APP confused...\n\nOops! Sorry for that. 
I forgot to append .txt or such to the file name.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 09 Feb 2021 14:17:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "At Tue, 9 Feb 2021 09:47:58 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \r\n> On Tue, Feb 9, 2021 at 8:54 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\r\n> >\r\n> > On Tue, 09 Feb 2021 10:58:04 +0900 (JST)\r\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > > At Mon, 8 Feb 2021 17:05:52 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\r\n> > > > On Mon, Feb 8, 2021 at 2:19 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\r\n> > > > >\r\n> > > > > On Mon, 08 Feb 2021 17:32:46 +0900 (JST)\r\n> > > > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n> > > > >\r\n> > > > > > At Mon, 8 Feb 2021 14:12:35 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in\r\n> > > > > > > > > > I think the right fix should be that the state should never go from\r\n> > > > > > > > > > ‘paused’ to ‘pause requested’ so I think pg_wal_replay_pause should take\r\n> > > > > > > > > > care of that.\r\n> > > > > > > > >\r\n> > > > > > > > > It makes sense to take care of this in pg_wal_replay_pause, but I wonder\r\n> > > > > > > > > it can not handle the case that a user resume and pause again while a sleep.\r\n> > > > > > > >\r\n> > > > > > > > Right, we will have to check and set in the loop. 
But we should not\r\n> > > > > > > > allow the state to go from paused to pause requested irrespective of\r\n> > > > > > > > this.\r\n> > > > > > >\r\n> > > > > > > I agree with you.\r\n> > > > > >\r\n> > > > > > Is there any actual harm if PAUSED returns to REQUESETED, assuming we\r\n> > > > > > immediately change the state to PAUSE always we see REQUESTED in the\r\n> > > > > > waiting loop, despite that we allow change the state from PAUSE to\r\n> > > > > > REQUESTED via NOT_PAUSED between two successive loop condition checks?\r\n> > > > >\r\n> > > > > If a user call pg_wal_replay_pause while recovery is paused, users can\r\n> > > > > observe 'pause requested' during a sleep alghough the time window is short.\r\n> > > > > It seems a bit odd that pg_wal_replay_pause changes the state like this\r\n> > > > > because This state meeans that recovery may not be 'paused'.\r\n> > > >\r\n> > > > Yeah, this appears wrong that after 'paused' we go back to 'pause\r\n> > > > requested'. the logical state transition should always be as below\r\n> > > >\r\n> > > > NOT PAUSED -> PAUSE REQUESTED or PAUSED (maybe we should always go to\r\n> > > > request and then paused but there is nothing wrong with going to\r\n> > > > paused)\r\n> > > > PAUSE REQUESTED -> NOT PAUSE or PAUSED (either cancel the request or get paused)\r\n> > > > PAUSED -> NOT PAUSED (from PAUSED we should not go to the\r\n> > > > PAUSE_REQUESTED without going to NOT PAUSED)\r\n> > >\r\n> > > I didn't asked about the internal logical correctness, but asked about\r\n> > > *actual harm* revealed to users. I don't see any actual harm in the\r\n> > > \"wrong\" transition because:\r\n> >\r\n> > Actually, the incorrect state transition is not so harmful except that\r\n> > users can observe unnecessary state changes. However, I don't think any\r\n> > actual harm in prohibit the incorrect state transition. 
So, I think we\r\n> > can do it.\r\n> >\r\n> > > If we are going to introduce that complexity, I'd like to re-propose\r\n> > > to introduce interlocking between the recovery side and the\r\n> > > pause-requestor side instead of introducing the intermediate state,\r\n> > > which is the cause of the complexity.\r\n> > >\r\n> > > The attached PoC patch adds:\r\n> > >\r\n> > > - A solid checkpoint just before calling rm_redo. It doesn't add a\r\n> > > info_lck since the check is done in the existing lock section.\r\n> > >\r\n> > > - Interlocking between the above and SetRecoveryPause without adding a\r\n> > > shared variable.\r\n> > > (This is what I called \"synchronous\" before.)\r\n> >\r\n> > I think waiting in pg_wal_replay_pasue is a possible option, but this will\r\n> > also introduce other complexity to codes such as possibility of waiting for\r\n> > long or for ever. For example, waiting in SetRecoveryPause as in your POC\r\n> > patch appears to make recovery stuck in RecoveryRequiresIntParameter.\r\n\r\nAh. Yes, startup process does not need to wait. That is a bug of the\r\npatch. No other callers don't cause the self dead lock.\r\n\r\n> I agree with this, I think we previously discussed these approaches\r\n> where we can wait in pg_wal_replay_pasue() or\r\n> pg_is_wal_replay_pasued(). In fact, we had an older version where we\r\n> put the wait in pg_is_wal_replay_pasued(). But it appeared that doing\r\n\r\nNote that the expected waiting period is while calling rmgr_redo(). 
If\r\nit is stuck for a long time, that suggests something's going wrong.\r\n\r\n> so will add extra complexity as well as instead of waiting in these\r\n> APIs the wait logic can be implemented in the application code which\r\n> is actually using these APIs and IMHO that will give better control to\r\n> the users.\r\n\r\nYeah, with the PoC pg_wal_replay_pause() can make a short wait as a\r\nside-effect, but the tri-state patch can also add a function that waits\r\nfor the state, which suffices.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 8e3b5df7dc..194a2f9998 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -6076,6 +6076,23 @@ void\n SetRecoveryPause(bool recoveryPause)\n {\n \tSpinLockAcquire(&XLogCtl->info_lck);\n+\n+\t/*\n+\t * Wait for the application of the current record to finish, so that\n+\t * no records will be applied after this function returns. We don't need to\n+\t * wait when ending a pause. 
Anyway we are requesting a recovery pause, we\n+\t * don't mind a possible slow down of recovery by the info_lck here.\n+\t * We don't need to wait in the startup process.\n+\t */\n+\twhile(InRecovery &&\n+\t\t recoveryPause && !XLogCtl->recoveryPause &&\n+\t\t XLogCtl->replayEndRecPtr != XLogCtl->lastReplayedEndRecPtr)\n+\t{\n+\t\tSpinLockRelease(&XLogCtl->info_lck);\n+\t\tCHECK_FOR_INTERRUPTS();\n+\t\tpg_usleep(10000L);\t\t/* 10 ms */\n+\t\tSpinLockAcquire(&XLogCtl->info_lck);\n+\t}\n \tXLogCtl->recoveryPause = recoveryPause;\n \tSpinLockRelease(&XLogCtl->info_lck);\n }\n@@ -7262,6 +7279,7 @@ StartupXLOG(void)\n \t\t\tdo\n \t\t\t{\n \t\t\t\tbool\t\tswitchedTLI = false;\n+\t\t\t\tbool\t\tpause_requested = false;\n \n #ifdef WAL_DEBUG\n \t\t\t\tif (XLOG_DEBUG ||\n@@ -7292,11 +7310,9 @@ StartupXLOG(void)\n \t\t\t\t * Note that we intentionally don't take the info_lck spinlock\n \t\t\t\t * here. We might therefore read a slightly stale value of\n \t\t\t\t * the recoveryPause flag, but it can't be very stale (no\n-\t\t\t\t * worse than the last spinlock we did acquire). Since a\n-\t\t\t\t * pause request is a pretty asynchronous thing anyway,\n-\t\t\t\t * possibly responding to it one WAL record later than we\n-\t\t\t\t * otherwise would is a minor issue, so it doesn't seem worth\n-\t\t\t\t * adding another spinlock cycle to prevent that.\n+\t\t\t\t * worse than the last spinlock we did acquire). We eventually\n+\t\t\t\t * make sure catching the pause request if any just before\n+\t\t\t\t * applying this record.\n \t\t\t\t */\n \t\t\t\tif (((volatile XLogCtlData *) XLogCtl)->recoveryPause)\n \t\t\t\t\trecoveryPausesHere(false);\n@@ -7385,12 +7401,19 @@ StartupXLOG(void)\n \t\t\t\t/*\n \t\t\t\t * Update shared replayEndRecPtr before replaying this record,\n \t\t\t\t * so that XLogFlush will update minRecoveryPoint correctly.\n+\t\t\t\t * Also we check for the correct value of the recoveryPause\n+\t\t\t\t * flag here not to have redo overrun during a pause. 
See\n+\t\t\t\t * SetRecoveryPuase() for details.\n \t\t\t\t */\n \t\t\t\tSpinLockAcquire(&XLogCtl->info_lck);\n \t\t\t\tXLogCtl->replayEndRecPtr = EndRecPtr;\n \t\t\t\tXLogCtl->replayEndTLI = ThisTimeLineID;\n+\t\t\t\tpause_requested = XLogCtl->recoveryPause;\n \t\t\t\tSpinLockRelease(&XLogCtl->info_lck);\n \n+\t\t\t\tif (pause_requested)\n+\t\t\t\t\trecoveryPausesHere(false);\n+\t\t\t\t\t\n \t\t\t\t/*\n \t\t\t\t * If we are attempting to enter Hot Standby mode, process\n \t\t\t\t * XIDs we see", "msg_date": "Tue, 09 Feb 2021 14:55:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "At Tue, 9 Feb 2021 09:58:30 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Feb 9, 2021 at 9:48 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Feb 9, 2021 at 8:54 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > >\n> > > On Tue, 09 Feb 2021 10:58:04 +0900 (JST)\n> > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > > If we are going to introduce that complexity, I'd like to re-propose\n> > > > to introduce interlocking between the recovery side and the\n> > > > pause-requestor side instead of introducing the intermediate state,\n> > > > which is the cause of the complexity.\n> > > >\n> > > > The attached PoC patch adds:\n> > > >\n> > > > - A solid checkpoint just before calling rm_redo. It doesn't add a\n> > > > info_lck since the check is done in the existing lock section.\n> > > >\n> > > > - Interlocking between the above and SetRecoveryPause without adding a\n> > > > shared variable.\n> > > > (This is what I called \"synchronous\" before.)\n> > >\n> > > I think waiting in pg_wal_replay_pasue is a possible option, but this will\n> > > also introduce other complexity to codes such as possibility of waiting for\n> > > long or for ever. 
For example, waiting in SetRecoveryPause as in your POC\n> > > patch appears to make recovery stuck in RecoveryRequiresIntParameter.\n> > >\n> >\n> > I agree with this, I think we previously discussed these approaches\n> > where we can wait in pg_wal_replay_pause() or\n> > pg_is_wal_replay_paused(). In fact, we had an older version where we\n> > put the wait in pg_is_wal_replay_paused(). But it appeared that doing\n> > so will add extra complexity as well as instead of waiting in these\n> > APIs the wait logic can be implemented in the application code which\n> > is actually using these APIs and IMHO that will give better control to\n> > the users.\n> \n> And also, having waiting logic in pg_wal_replay_pause() or\n> pg_is_wal_replay_paused() would require changes to the existing API, such as\n> a timeout to keep them from waiting infinitely.\n\nI don't understand that. pg_wal_replay_pause() is defined as \"pauses\nrecovery\", so it is the correct behavior to wait for the actual pause.\npg_is_wal_replay_paused() doesn't wait for anything at all.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 09 Feb 2021 15:00:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "Sorry, I made a mistake here.\r\n\r\nAt Tue, 09 Feb 2021 14:55:23 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \r\n> At Tue, 9 Feb 2021 09:47:58 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \r\n\r\n> > APIs the wait logic can be implemented in the application code which\r\n> > is actually using these APIs and IMHO that will give better control to\r\n> > the users.\r\n> \r\n> Yeah, with the PoC pg_wal_replay_pause() can make a short wait as a\r\n> side-effect, but the tri-state patch can also add a function that waits\r\n> for the state, which suffices.\r\n\r\nI said that it is surprising that pg_is_wal_replay_paused() waits for\r\nthe state change. 
But I didn't say that pg_wal_replay_pause()\nshouldn't wait for the actual pause.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 09 Feb 2021 15:05:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Tue, Feb 9, 2021 at 11:30 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 9 Feb 2021 09:58:30 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Tue, Feb 9, 2021 at 9:48 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Feb 9, 2021 at 8:54 AM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> > > >\n> > > > On Tue, 09 Feb 2021 10:58:04 +0900 (JST)\n> > > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > > > If we are going to introduce that complexity, I'd like to re-propose\n> > > > > to introduce interlocking between the recovery side and the\n> > > > > pause-requestor side instead of introducing the intermediate state,\n> > > > > which is the cause of the complexity.\n> > > > >\n> > > > > The attached PoC patch adds:\n> > > > >\n> > > > > - A solid checkpoint just before calling rm_redo. It doesn't add a\n> > > > > info_lck since the check is done in the existing lock section.\n> > > > >\n> > > > > - Interlocking between the above and SetRecoveryPause without adding a\n> > > > > shared variable.\n> > > > > (This is what I called \"synchronous\" before.)\n> > > >\n> > > > I think waiting in pg_wal_replay_pasue is a possible option, but this will\n> > > > also introduce other complexity to codes such as possibility of waiting for\n> > > > long or for ever. 
For example, waiting in SetRecoveryPause as in your POC\n> > > > patch appears to make recovery stuck in RecoveryRequiresIntParameter.\n> > > >\n> > >\n> > > I agree with this, I think we previously discussed these approaches\n> > > where we can wait in pg_wal_replay_pause() or\n> > > pg_is_wal_replay_paused(). In fact, we had an older version where we\n> > > put the wait in pg_is_wal_replay_paused(). But it appeared that doing\n> > > so will add extra complexity as well as instead of waiting in these\n> > > APIs the wait logic can be implemented in the application code which\n> > > is actually using these APIs and IMHO that will give better control to\n> > > the users.\n> >\n> > And also, having waiting logic in pg_wal_replay_pause() or\n> > pg_is_wal_replay_paused() would require changes to the existing API, such as\n> > a timeout to keep them from waiting infinitely.\n>\n> I don't understand that. pg_wal_replay_pause() is defined as \"pauses\nrecovery\", so it is the correct behavior to wait for the actual pause.\npg_is_wal_replay_paused() doesn't wait for anything at all.\n\nWhat I meant was that if we were to add waiting logic inside\npg_wal_replay_pause, we should also have a timeout with some default\nvalue, to avoid pg_wal_replay_pause waiting forever in the waiting\nloop. Within that timeout, if the recovery isn't paused,\npg_wal_replay_pause will probably return a warning and false (this\nrequires us to change the return value of the existing\npg_wal_replay_pause)?\n\nTo avoid changing the existing API and return type, a new function\npg_get_wal_replay_pause_state is introduced.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Feb 2021 12:27:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "At Tue, 9 Feb 2021 12:27:21 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> What I meant was that if we were to add waiting logic inside\n> pg_wal_replay_pause, we should also have a timeout with some default\n> value, to avoid pg_wal_replay_pause waiting forever in the waiting\n> loop. Within that timeout, if the recovery isn't paused,\n> pg_wal_replay_pause will probably return a warning and false (this\n> requires us to change the return value of the existing\n> pg_wal_replay_pause)?\n\nI thought that rm_redo finishes shortly unless any trouble\nhappens. But on second thought, I found that I forgot a case of a\nrecovery-conflict. So, as you pointed out, pg_wal_replay_pause() needs\na flag 'wait' to wait for a pause to be established. And the flag can be\nturned into \"timeout\".\n\n# And the previous version had another silly bug.\n\n> To avoid changing the existing API and return type, a new function\n> pg_get_wal_replay_pause_state is introduced.\n\nI was talking about IN parameters, not OUTs. IN parameters can be\noptional to accept existing usage. pg_wal_replay_pause() is changed\nthat way in the attached.\n\nIf all of you still disagree with my proposal, I withdraw it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 1ab31a9056..7eb93f74dd 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -25320,14 +25320,19 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());\n <indexterm>\n <primary>pg_wal_replay_pause</primary>\n </indexterm>\n- <function>pg_wal_replay_pause</function> ()\n+ <function>pg_wal_replay_pause</function> (\n+ <optional> <parameter>timeout</parameter> <type>integer</type>\n+ </optional> )\n <returnvalue>void</returnvalue>\n </para>\n <para>\n Pauses recovery. While recovery is paused, no further database\n changes are applied. If hot standby is active, all new queries will\n see the same consistent snapshot of the database, and no further query\n- conflicts will be generated until recovery is resumed. 
If hot standby is active, all new queries will\n see the same consistent snapshot of the database, and no further query\n- conflicts will be generated until recovery is resumed.\n+ conflicts will be generated until recovery is resumed. Zero or\n+ positive timeout value means the function errors out after that\n+ milliseconds elapsed before recovery is paused (default is -1, wait\n+ forever).\n </para>\n <para>\n This function is restricted to superusers by default, but other users\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 8e3b5df7dc..8fd614cded 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -6072,10 +6072,61 @@ RecoveryIsPaused(void)\n \treturn recoveryPause;\n }\n \n+/*\n+ * Pauses recovery.\n+ *\n+ * It is guaranteed that no WAL replay happens after this function returns. If\n+ * timeout is zero or positive, emits ERROR when the timeout is reached before\n+ * recovery is paused.\n+ */\n void\n-SetRecoveryPause(bool recoveryPause)\n+SetRecoveryPause(bool recoveryPause, int timeout)\n {\n+\tTimestampTz finish_time = 0;\n+\tTimestampTz now;\n+\tint\t\t sleep_ms;\n+\n \tSpinLockAcquire(&XLogCtl->info_lck);\n+\n+\t/* No need of timeout in the startup process */\n+\tAssert(!InRecovery || timeout < 0);\n+\n+\t/*\n+\t * Wait for the concurrent rm_redo() to finish, so that no records will be\n+\t * applied after this function returns. No need to wait while resuming.\n+\t * Anyway we are requesting a recovery pause, we don't mind a possible slow\n+\t * down of recovery by the info_lck here. 
We don't need to wait in the\n+\t * startup process since no concurrent rm_redo() runs.\n+\t */\n+\twhile (!InRecovery &&\n+\t\t recoveryPause && !XLogCtl->recoveryPause &&\n+\t\t XLogCtl->replayEndRecPtr != XLogCtl->lastReplayedEndRecPtr)\n+\t{\n+\t\tSpinLockRelease(&XLogCtl->info_lck);\n+\t\tnow = GetCurrentTimestamp();\n+\n+\t\tif (timeout >= 0)\n+\t\t{\n+\t\t\tif (timeout > 0 && finish_time == 0)\n+\t\t\t\tfinish_time = TimestampTzPlusMilliseconds(now, timeout);\n+\n+\t\t\tif (finish_time < now)\n+\t\t\t\tereport(ERROR,\n+\t\t\t\t\t\t(errcode(ERRCODE_SQL_STATEMENT_NOT_YET_COMPLETE),\n+\t\t\t\t\t\t errmsg(\"could not pause recovery: timed out\")));\n+\t\t}\n+\n+\t\tCHECK_FOR_INTERRUPTS();\n+\n+\t\tsleep_ms = 10;\t\t\t/* 10 ms */\n+\n+\t\t/* finish_time may be reached earlier than 10ms */\n+\t\tif (finish_time > 0)\n+\t\t\tsleep_ms = Min(sleep_ms, TimestampDifferenceMilliseconds(now, finish_time));\n+\n+\t\tpg_usleep(sleep_ms * 1000L);\n+\t\tSpinLockAcquire(&XLogCtl->info_lck);\n+\t}\n \tXLogCtl->recoveryPause = recoveryPause;\n \tSpinLockRelease(&XLogCtl->info_lck);\n }\n@@ -6270,7 +6321,7 @@ RecoveryRequiresIntParameter(const char *param_name, int currValue, int minValue\n \t\t\t\t\t\t\t currValue,\n \t\t\t\t\t\t\t minValue)));\n \n-\t\t\tSetRecoveryPause(true);\n+\t\t\tSetRecoveryPause(true, -1);\n \n \t\t\tereport(LOG,\n \t\t\t\t\t(errmsg(\"recovery has paused\"),\n@@ -7262,6 +7313,7 @@ StartupXLOG(void)\n \t\t\tdo\n \t\t\t{\n \t\t\t\tbool\t\tswitchedTLI = false;\n+\t\t\t\tbool\t\tpause_requested = false;\n \n #ifdef WAL_DEBUG\n \t\t\t\tif (XLOG_DEBUG ||\n@@ -7292,11 +7344,9 @@ StartupXLOG(void)\n \t\t\t\t * Note that we intentionally don't take the info_lck spinlock\n \t\t\t\t * here. We might therefore read a slightly stale value of\n \t\t\t\t * the recoveryPause flag, but it can't be very stale (no\n-\t\t\t\t * worse than the last spinlock we did acquire). 
Since a\n-\t\t\t\t * pause request is a pretty asynchronous thing anyway,\n-\t\t\t\t * possibly responding to it one WAL record later than we\n-\t\t\t\t * otherwise would is a minor issue, so it doesn't seem worth\n-\t\t\t\t * adding another spinlock cycle to prevent that.\n+\t\t\t\t * worse than the last spinlock we did acquire). We eventually\n+\t\t\t\t * make sure catching the pause request if any just before\n+\t\t\t\t * applying this record.\n \t\t\t\t */\n \t\t\t\tif (((volatile XLogCtlData *) XLogCtl)->recoveryPause)\n \t\t\t\t\trecoveryPausesHere(false);\n@@ -7385,12 +7435,19 @@ StartupXLOG(void)\n \t\t\t\t/*\n \t\t\t\t * Update shared replayEndRecPtr before replaying this record,\n \t\t\t\t * so that XLogFlush will update minRecoveryPoint correctly.\n+\t\t\t\t * Also we check for the correct value of the recoveryPause\n+\t\t\t\t * flag here not to have redo overrun during a pause. See\n+\t\t\t\t * SetRecoveryPuase() for details.\n \t\t\t\t */\n \t\t\t\tSpinLockAcquire(&XLogCtl->info_lck);\n \t\t\t\tXLogCtl->replayEndRecPtr = EndRecPtr;\n \t\t\t\tXLogCtl->replayEndTLI = ThisTimeLineID;\n+\t\t\t\tpause_requested = XLogCtl->recoveryPause;\n \t\t\t\tSpinLockRelease(&XLogCtl->info_lck);\n \n+\t\t\t\tif (pause_requested)\n+\t\t\t\t\trecoveryPausesHere(false);\n+\n \t\t\t\t/*\n \t\t\t\t * If we are attempting to enter Hot Standby mode, process\n \t\t\t\t * XIDs we see\n@@ -7497,7 +7554,7 @@ StartupXLOG(void)\n \t\t\t\t\t\tproc_exit(3);\n \n \t\t\t\t\tcase RECOVERY_TARGET_ACTION_PAUSE:\n-\t\t\t\t\t\tSetRecoveryPause(true);\n+\t\t\t\t\t\tSetRecoveryPause(true, -1);\n \t\t\t\t\t\trecoveryPausesHere(true);\n \n \t\t\t\t\t\t/* drop into promote */\ndiff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c\nindex 5e1aab319d..4c8c41e0bc 100644\n--- a/src/backend/access/transam/xlogfuncs.c\n+++ b/src/backend/access/transam/xlogfuncs.c\n@@ -538,7 +538,7 @@ pg_wal_replay_pause(PG_FUNCTION_ARGS)\n \t\t\t\t errhint(\"%s cannot be executed 
after promotion is triggered.\",\n \t\t\t\t\t\t \"pg_wal_replay_pause()\")));\n \n-\tSetRecoveryPause(true);\n+\tSetRecoveryPause(true, PG_GETARG_INT32(0));\n \n \tPG_RETURN_VOID();\n }\n@@ -565,7 +565,7 @@ pg_wal_replay_resume(PG_FUNCTION_ARGS)\n \t\t\t\t errhint(\"%s cannot be executed after promotion is triggered.\",\n \t\t\t\t\t\t \"pg_wal_replay_resume()\")));\n \n-\tSetRecoveryPause(false);\n+\tSetRecoveryPause(false, -1);\n \n \tPG_RETURN_VOID();\n }\ndiff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\nindex fa58afd9d7..e03f22f350 100644\n--- a/src/backend/catalog/system_views.sql\n+++ b/src/backend/catalog/system_views.sql\n@@ -1264,6 +1264,11 @@ CREATE OR REPLACE FUNCTION\n RETURNS boolean STRICT VOLATILE LANGUAGE INTERNAL AS 'pg_promote'\n PARALLEL SAFE;\n \n+CREATE OR REPLACE FUNCTION\n+ pg_wal_replay_pause(timeout int4 DEFAULT -1)\n+ RETURNS void VOLATILE LANGUAGE internal AS 'pg_wal_replay_pause'\n+ PARALLEL SAFE;\n+\n -- legacy definition for compatibility with 9.3\n CREATE OR REPLACE FUNCTION\n json_populate_record(base anyelement, from_json json, use_json_as_text boolean DEFAULT false)\n@@ -1473,7 +1478,7 @@ REVOKE EXECUTE ON FUNCTION pg_stop_backup() FROM public;\n REVOKE EXECUTE ON FUNCTION pg_stop_backup(boolean, boolean) FROM public;\n REVOKE EXECUTE ON FUNCTION pg_create_restore_point(text) FROM public;\n REVOKE EXECUTE ON FUNCTION pg_switch_wal() FROM public;\n-REVOKE EXECUTE ON FUNCTION pg_wal_replay_pause() FROM public;\n+REVOKE EXECUTE ON FUNCTION pg_wal_replay_pause(int) FROM public;\n REVOKE EXECUTE ON FUNCTION pg_wal_replay_resume() FROM public;\n REVOKE EXECUTE ON FUNCTION pg_rotate_logfile() FROM public;\n REVOKE EXECUTE ON FUNCTION pg_reload_conf() FROM public;\ndiff --git a/src/include/access/xlog.h b/src/include/access/xlog.h\nindex 75ec1073bd..397e206433 100644\n--- a/src/include/access/xlog.h\n+++ b/src/include/access/xlog.h\n@@ -311,7 +311,7 @@ extern XLogRecPtr GetXLogReplayRecPtr(TimeLineID 
*replayTLI);\n extern XLogRecPtr GetXLogInsertRecPtr(void);\n extern XLogRecPtr GetXLogWriteRecPtr(void);\n extern bool RecoveryIsPaused(void);\n-extern void SetRecoveryPause(bool recoveryPause);\n+extern void SetRecoveryPause(bool recoveryPause, int timeout);\n extern TimestampTz GetLatestXTime(void);\n extern TimestampTz GetCurrentChunkReplayStartTime(void);\n \ndiff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat\nindex 4e0c9be58c..a646721c3c 100644\n--- a/src/include/catalog/pg_proc.dat\n+++ b/src/include/catalog/pg_proc.dat\n@@ -6222,7 +6222,7 @@\n \n { oid => '3071', descr => 'pause wal replay',\n proname => 'pg_wal_replay_pause', provolatile => 'v', prorettype => 'void',\n- proargtypes => '', prosrc => 'pg_wal_replay_pause' },\n+ proargtypes => 'int4', prosrc => 'pg_wal_replay_pause' },\n { oid => '3072', descr => 'resume wal replay, if it was paused',\n proname => 'pg_wal_replay_resume', provolatile => 'v', prorettype => 'void',\n proargtypes => '', prosrc => 'pg_wal_replay_resume' },", "msg_date": "Wed, 10 Feb 2021 11:49:52 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Feb 10, 2021 at 8:19 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 9 Feb 2021 12:27:21 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > What I meant was that if we were to add waiting logic inside\n> > pg_wal_replay_pause, we should also have a timeout with some default\n> > value, to avoid pg_wal_replay_pause waiting forever in the waiting\n> > loop. Within that timeout, if the recovery isn't paused,\n> > pg_wal_replay_pause will return probably a warning and a false(this\n> > requires us to change the return value of the existing\n> > pg_wal_replay_pause)?\n>\n> I thought that rm_redo finishes shortly unless any trouble\n> happens. 
But on second thought, I found that I forgot a case of a\n> recovery-conflict. So as you pointed out, pg_wal_replay_pause() needs\n> a flag 'wait' to wait for a pause established. And the flag can be\n> turned into \"timeout\".\n>\n> # And the prevous verision had another silly bug.\n>\n> > To avoid changing the existing API and return type, a new function\n> > pg_get_wal_replay_pause_state is introduced.\n>\n> I mentioned about IN parameters, not OUTs. IN parameters can be\n> optional to accept existing usage. pg_wal_replay_pause() is changed\n> that way in the attached.\n>\n> If all of you still disagree with my proposal, I withdraw it.\n\nI don't find any problem with this approach as well, but I personally\nfeel that the other approach where we don't wait in any API and just\nreturn the recovery pause state is much simpler and more flexible. So\nI will make the pending changes in that patch and let's see what are\nthe other opinion and based on that we can conclude. Thanks for the\npatch.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Feb 2021 10:02:54 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Feb 10, 2021 at 10:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I don't find any problem with this approach as well, but I personally\n> feel that the other approach where we don't wait in any API and just\n> return the recovery pause state is much simpler and more flexible. So\n> I will make the pending changes in that patch and let's see what are\n> the other opinion and based on that we can conclude. Thanks for the\n> patch.\n\nHere is an updated version of the patch which fixes the last two open problems\n1. In RecoveryRequiresIntParameter set the recovery pause state in the\nloop so that if recovery resumed and pause requested again we can set\nto pause again.\n2. 
If the recovery state is already 'paused' then don't set it back to\nthe 'pause requested'.\n\nOne more point is that in 'pg_wal_replay_pause' even if we don't\nchange the state because it was already set to the 'paused' then also\nwe call the WakeupRecovery. But I don't think there is any problem\nwith that, if we think that this should be changed then we can make\nSetRecoveryPause return a bool such that if it doesn't do state change\nthen it returns false and in that case we can avoid calling\nWakeupRecovery, but I felt that is unnecessary. Any other thoughts on\nthis?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Feb 2021 20:38:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Feb 10, 2021 at 8:39 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Feb 10, 2021 at 10:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I don't find any problem with this approach as well, but I personally\n> > feel that the other approach where we don't wait in any API and just\n> > return the recovery pause state is much simpler and more flexible. So\n> > I will make the pending changes in that patch and let's see what are\n> > the other opinion and based on that we can conclude. Thanks for the\n> > patch.\n>\n> Here is an updated version of the patch which fixes the last two open problems\n> 1. In RecoveryRequiresIntParameter set the recovery pause state in the\n> loop so that if recovery resumed and pause requested again we can set\n> to pause again.\n> 2. If the recovery state is already 'paused' then don't set it back to\n> the 'pause requested'.\n>\n> One more point is that in 'pg_wal_replay_pause' even if we don't\n> change the state because it was already set to the 'paused' then also\n> we call the WakeupRecovery. 
But I don't think there is any problem\n> with that, if we think that this should be changed then we can make\n> SetRecoveryPause return a bool such that if it doesn't do state change\n> then it returns false and in that case we can avoid calling\n> WakeupRecovery, but I felt that is unnecessary. Any other thoughts on\n> this?\n\nIMO, that WakeupRecovery should not be a problem, because even now, if\nwe issue a simple select pg_reload_conf(); (without even changing any\nconfig parameter), WakeupRecovery gets called.\n\nThanks for the patch. I tested the new function and it works as\nexpected. I have no further comments on the v13 patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Feb 2021 15:20:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, Feb 11, 2021 at 3:20 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Feb 10, 2021 at 8:39 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Feb 10, 2021 at 10:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I don't find any problem with this approach as well, but I personally\n> > > feel that the other approach where we don't wait in any API and just\n> > > return the recovery pause state is much simpler and more flexible. So\n> > > I will make the pending changes in that patch and let's see what are\n> > > the other opinion and based on that we can conclude. Thanks for the\n> > > patch.\n> >\n> > Here is an updated version of the patch which fixes the last two open problems\n> > 1. In RecoveryRequiresIntParameter set the recovery pause state in the\n> > loop so that if recovery resumed and pause requested again we can set\n> > to pause again.\n> > 2. 
If the recovery state is already 'paused' then don't set it back to\n> > the 'pause requested'.\n> >\n> > One more point is that in 'pg_wal_replay_pause' even if we don't\n> > change the state because it was already set to the 'paused' then also\n> > we call the WakeupRecovery. But I don't think there is any problem\n> > with that, if we think that this should be changed then we can make\n> > SetRecoveryPause return a bool such that if it doesn't do state change\n> > then it returns false and in that case we can avoid calling\n> > WakeupRecovery, but I felt that is unnecessary. Any other thoughts on\n> > this?\n>\n> IMO, that WakeupRecovery should not be a problem, because even now, if\n> we issue a simple select pg_reload_conf(); (without even changing any\n> config parameter), WakeupRecovery gets called.\n>\n> Thanks for the patch. I tested the new function and it works as\n> expected. I have no further comments on the v13 patch.\n\nThanks for the review and testing.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Feb 2021 16:36:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, Feb 11, 2021 at 6:07 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Thanks for the patch. I tested the new function and it works as\n> > expected. 
I have no further comments on the v13 patch.\n>\n> Thanks for the review and testing.\n\nI don't see a whole lot wrong with this patch, but I think there are\nsome things that could make it a little clearer:\n\n- I suggest renaming CheckAndSetRecoveryPause() to ConfirmRecoveryPaused().\n\n- I suggest moving the definition of that function to just after\nSetRecoveryPause().\n\n- I suggest changing the argument to SetRecoveryPause() back to bool.\nIn the one place where you call SetRecoveryPause(RECOVERY_PAUSED),\njust call SetRecoveryPause(true) and ConfirmRecoveryPaused() back to\nback. This in turn means that the \"if\" statement in\nSetRecoveryPaused() can be rewritten as if (!recoveryPaused)\nXLogCtl->recoveryPauseState = RECOVERY_NOT_PAUSED else if\n(XLogCtl->recoveryPauseState == RECOVERY_NOT_PAUSED)\nXLogCtl->recoveryPauseState = RECOVERY_PAUSE_REQUESTED(). This is\nslightly less efficient, but I don't think it matters, and I think it\nwill be a lot more clear what's the job of SetRecoveryPause (say\nwhether we're trying to pause or not) and what's the job of\nConfirmRecoveryPaused (say whether we've succeeded in pausing).\n\n- Since the numeric values of RecoveryPauseState don't matter and the\nvalues are never visible to anything outside the server nor stored on\ndisk, I would be inclined to (a) not specify particular values in\nxlog.h and (b) remove the test-and-elog in SetRecoveryPause().\n\n- In the places where you say:\n\n- if (((volatile XLogCtlData *) XLogCtl)->recoveryPause)\n+ if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n+ RECOVERY_PAUSE_REQUESTED)\n\n...I would suggest instead testing for != RECOVERY_NOT_PAUSED. Perhaps\nwe don't think RECOVERY_PAUSED can happen here. 
But if somehow it did,\ncalling recoveryPausesHere() would be right.\n\nThere might be some more to say here, but those are things I notice on\na first read-through.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Feb 2021 16:56:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, 11 Feb 2021 16:36:55 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Thu, Feb 11, 2021 at 3:20 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Feb 10, 2021 at 8:39 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Feb 10, 2021 at 10:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > I don't find any problem with this approach as well, but I personally\n> > > > feel that the other approach where we don't wait in any API and just\n> > > > return the recovery pause state is much simpler and more flexible. So\n> > > > I will make the pending changes in that patch and let's see what are\n> > > > the other opinion and based on that we can conclude. Thanks for the\n> > > > patch.\n\nI don't think that we need to include the waiting approach in pg_get_wal_replay_pause_state\npatch. However, Horiguchi-san's patch may be useful for some users who want\npg_wal_replay_pause to wait until recovery gets paused instead of polling the\nstate from applications. So, I shink we could discuss this patch in another\nthread as another commitfest entry independent from pg_get_wal_replay_pause_state.\n\n> > > Here is an updated version of the patch which fixes the last two open problems\n> > > 1. In RecoveryRequiresIntParameter set the recovery pause state in the\n> > > loop so that if recovery resumed and pause requested again we can set\n> > > to pause again.\n> > > 2. 
If the recovery state is already 'paused' then don't set it back to\n> > > the 'pause requested'.\n> > >\n> > > One more point is that in 'pg_wal_replay_pause' even if we don't\n> > > change the state because it was already set to the 'paused' then also\n> > > we call the WakeupRecovery. But I don't think there is any problem\n> > > with that, if we think that this should be changed then we can make\n> > > SetRecoveryPause return a bool such that if it doesn't do state change\n> > > then it returns false and in that case we can avoid calling\n> > > WakeupRecovery, but I felt that is unnecessary. Any other thoughts on\n> > > this?\n> >\n> > IMO, that WakeupRecovery should not be a problem, because even now, if\n> > we issue a simple select pg_reload_conf(); (without even changing any\n> > config parameter), WakeupRecovery gets called.\n> >\n> > Thanks for the patch. I tested the new function and it works as\n> > expected. I have no further comments on the v13 patch.\n> \n> Thanks for the review and testing.\n\nI have no further comments on the v13 patch, too. Also, I agree with\nRobert Haas's suggestions.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 12 Feb 2021 13:33:32 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "At Fri, 12 Feb 2021 13:33:32 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> I don't think that we need to include the waiting approach in pg_get_wal_replay_pause_state\n> patch. However, Horiguchi-san's patch may be useful for some users who want\n> pg_wal_replay_pause to wait until recovery gets paused instead of polling the\n> state from applications. 
So, I shink we could discuss this patch in another\n> thread as another commitfest entry independent from pg_get_wal_replay_pause_state.\n\nSince what I'm proposing is not making pg_wal_replay_pause() to wait,\nand no one seems on my side, I withdraw the proposal.\n\n> I have no futher comments on the v13 patch, too. Also, I agree with\n> Robert Haas's suggestions.\n\nYeah, looks reasonable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 12 Feb 2021 15:24:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Fri, Feb 12, 2021 at 3:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Feb 11, 2021 at 6:07 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > Thanks for the patch. I tested the new function and it works as\n> > > expected. I have no further comments on the v13 patch.\n> >\n> > Thanks for the review and testing.\n>\n> I don't see a whole lot wrong with this patch, but I think there are\n> some things that could make it a little clearer:\n\nThanks for the review\n\n> - I suggest renaming CheckAndSetRecoveryPause() to ConfirmRecoveryPaused().\n\nYeah, that makes more sense, so changed.\n\n> - I suggest moving the definition of that function to just after\n> SetRecoveryPause().\n\nDone\n\n> - I suggest changing the argument to SetRecoveryPause() back to bool.\n> In the one place where you call SetRecoveryPause(RECOVERY_PAUSED),\n> just call SetRecoveryPause(true) and ConfirmRecoveryPaused() back to\n> back.\n\nYeah, done that way. I think only in one place we were doing\nSetRecoveryPause(RECOVERY_PAUSED), but after putting more thought I\nthink that was not required because right after setting that we are\nhaving the while loop under that we have to call\nConfirmRecoveryPaused. 
So I have changed that also as\nSetRecoveryPause(true) without immediate call of\nConfirmRecoveryPaused.\n\nThis in turn means that the \"if\" statement in\n> SetRecoveryPaused() can be rewritten as if (!recoveryPaused)\n> XLogCtl->recoveryPauseState = RECOVERY_NOT_PAUSED else if\n> (XLogCtl->recoveryPauseState == RECOVERY_NOT_PAUSED)\n> XLogCtl->recoveryPauseState = RECOVERY_PAUSE_REQUESTED(). This is\n> slightly less efficient, but I don't think it matters, and I think it\n> will be a lot more clear what's the job of SetRecoveryPause (say\n> whether we're trying to pause or not) and what's the job of\n> ConfirmRecoveryPaused (say whether we've succeeded in pausing).\n\nDone\n\n> - Since the numeric values of RecoveryPauseState don't matter and the\n> values are never visible to anything outside the server nor stored on\n> disk, I would be inclined to (a) not specify particular values in\n> xlog.h and (b) remove the test-and-elog in SetRecoveryPause().\n\nDone\n\n> - In the places where you say:\n>\n> - if (((volatile XLogCtlData *) XLogCtl)->recoveryPause)\n> + if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState ==\n> + RECOVERY_PAUSE_REQUESTED)\n>\n> ...I would suggest instead testing for != RECOVERY_NOT_PAUSED. Perhaps\n> we don't think RECOVERY_PAUSED can happen here. But if somehow it did,\n> calling recoveryPausesHere() would be right.\n\nDone\n\n> There might be some more to say here, but those are things I notice on\n> a first read-through.\n\nOkay.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 23 Feb 2021 12:03:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "At Tue, 23 Feb 2021 12:03:32 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Fri, Feb 12, 2021 at 3:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > There might be some more to say here, but those are things I notice on\n> > a first read-through.\n> \n> Okay.\n\nIt seems to me all the suggestions are addressed in this version.\n\n+ Request to pause recovery. A request doesn't mean that recovery stops\n+ right away. If you want a guarantee that recovery is actually paused,\n+ you need to check for the recovery pause state returned by\n+ <function>pg_get_wal_replay_pause_state()</function>. Note that\n+ <function>pg_is_wal_replay_paused()</function> returns whether a request\n+ is made. While recovery is paused, no further database changes are applied.\n\nThis looks like explaining the same thing twice. (\"A request doesn't\nmean..\" and \"While recovery is paused, ...\")\n\nHow about something like this?\n\nRequest to pause recovery. Server actually stops recovery at a\nconvenient time. This can take a few seconds after the request. If you\nneed to strictly guarantee that no further database change will occur,\nyou can check using pg_get_wal_replay_pause_state(). Note that\npg_is_wal_replay_paused() may return true before recovery actually\nstops.\n\n\nThe patch adds two loops with the following logic:\n\n while (GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\n {\n ...\n ConfirmRecoveryPaused();\n <wait>\n }\n\nAfter the renaming of the function, the following structure looks\nsimpler and more natural.\n\n while (ConfirmRecoveryPaused())\n {\n ...\n <wait>\n }\n\n\n+\t\t/* test for recovery pause, if user has requested the pause */\n+\t\tif (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState !=\n\nThe reason for the checkpoint is to move to \"paused\" state in a\nreasonable time. 
I think we need to mention that reason rather than\nwhat is done here.\n\n\n+\t/* get the recovery pause state */\n+\tswitch(GetRecoveryPauseState())\n+\t{\n+\t\tcase RECOVERY_NOT_PAUSED:\n+\t\t\tstate = \"not paused\";\n+\t\t\tbreak;\n...\n+\t\tdefault:\n+\t\t\telog(ERROR, \"invalid recovery pause state\");\n\nThis disables the static enum coverage check and it is not likely to\nhave a wrong value here, other than the case of shared memory\ncorruption. So we can remove the default case\nhere. pg_get_replication_slots() is going that direction and\napply_dispatch() is taking a slightly different way. Anyway I think\nthat we can take away the default case.\n\n\nregard.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 24 Feb 2021 16:09:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Feb 24, 2021 at 12:39 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 23 Feb 2021 12:03:32 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Fri, Feb 12, 2021 at 3:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > There might be some more to say here, but those are things I notice on\n> > > a first read-through.\n> >\n> > Okay.\n>\n> It seems to me all the suggestions are addressed in this version.\n>\n> + Request to pause recovery. A request doesn't mean that recovery stops\n> + right away. If you want a guarantee that recovery is actually paused,\n> + you need to check for the recovery pause state returned by\n> + <function>pg_get_wal_replay_pause_state()</function>. Note that\n> + <function>pg_is_wal_replay_paused()</function> returns whether a request\n> + is made. While recovery is paused, no further database changes are applied.\n>\n> This looks like explainig the same thing twice. 
(\"A request doesn't\n> mean..\" and \"While recovery is paused, ...\")\n>\n> How about something like this?\n>\n> Request to pause recovery. Server actually stops recovery at a\n> convenient time. This can take a few seconds after the request. If you\n> need to strictly guarantee that no further database change will occur,\n> you can check using pg_get_wal_replay_ause_state(). Note that\n> pg_is_wal_replay_paused() may return true before recovery actually\n> stops.\n\nI still think that for the user-facing documentation purpose the\ncurrent paragraph looks better.\n\n> The patch adds two loops whth the following logic:\n>\n> while (GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\n> {\n> ...\n> ConfirmRecoveryPaused();\n> <wait>\n> }\n>\n> After the renaming of the function, the following structure looks\n> simpler and more natural.\n>\n> while (ConfirmRecoveryPaused())\n> {\n> ...\n> <wait>\n> }\n\nSo do you mean that if the pause is requested ConfirmRecoveryPaused\nwill set it to paused and if it is not paused then it will return\nfalse? With the current function name, I don't think that will look\nclean maybe we should change the name to something like\nCheckAndConfirmRecoveryPaused? Or I am fine with the way it is now.\nAny other thoughts?\n\n>\n> + /* test for recovery pause, if user has requested the pause */\n> + if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState !=\n>\n> The reason for the checkpoint is to move to \"paused\" state in a\n> reasonable time. I think we need to mention that reason rather than\n> what is done here.\n\nI will do that.\n\n>\n> + /* get the recovery pause state */\n> + switch(GetRecoveryPauseState())\n> + {\n> + case RECOVERY_NOT_PAUSED:\n> + state = \"not paused\";\n> + break;\n> ...\n> + default:\n> + elog(ERROR, \"invalid recovery pause state\");\n>\n> This disables the static enum coverage check and it is not likely to\n> have a wrong value here, other than the case of shared memory\n> corruption. 
So we can remove the default case\n> here. pg_get_replication_slots() is going that direction and\n> apply_dispatch() is taking a slightly different way. Anyway I think\n> that we can take away the default case.\n\nSo do you think we should put an assert(0) in the default case?\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Feb 2021 13:15:27 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "At Wed, 24 Feb 2021 13:15:27 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Wed, Feb 24, 2021 at 12:39 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 23 Feb 2021 12:03:32 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > > On Fri, Feb 12, 2021 at 3:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > How about something like this?\n> >\n> > Request to pause recovery. Server actually stops recovery at a\n> > convenient time. This can take a few seconds after the request. If you\n> > need to strictly guarantee that no further database change will occur,\n> > you can check using pg_get_wal_replay_ause_state(). Note that\n> > pg_is_wal_replay_paused() may return true before recovery actually\n> > stops.\n> \n> I still think that for the user-facing documentation purpose the\n> current paragraph looks better.\n\nOk.\n\n> > The patch adds two loops whth the following logic:\n> >\n> > while (GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\n> > {\n> > ...\n> > ConfirmRecoveryPaused();\n> > <wait>\n> > }\n> >\n> > After the renaming of the function, the following structure looks\n> > simpler and more natural.\n> >\n> > while (ConfirmRecoveryPaused())\n> > {\n> > ...\n> > <wait>\n> > }\n> \n> So do you mean that if the pause is requested ConfirmRecoveryPaused\n> will set it to paused and if it is not paused then it will return\n> false? 
With the current function name, I don't think that will look\n> clean maybe we should change the name to something like\n> CheckAndConfirmRecoveryPaused? Or I am fine with the way it is now.\n> Any other thoughts?\n\nI should have took the meaning of \"confirm\" wrongly. I took that as\n\"somehow determine if the recovery is to be paused\". If that reading\nis completely wrong, I don't mind either re-chaging the function name\nor leaving all it alone.\n\n> >\n> > + /* test for recovery pause, if user has requested the pause */\n> > + if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState !=\n> >\n> > The reason for the checkpoint is to move to \"paused\" state in a\n> > reasonable time. I think we need to mention that reason rather than\n> > what is done here.\n> \n> I will do that.\n\nThanks.\n\n> >\n> > + /* get the recovery pause state */\n> > + switch(GetRecoveryPauseState())\n> > + {\n> > + case RECOVERY_NOT_PAUSED:\n> > + state = \"not paused\";\n> > + break;\n> > ...\n> > + default:\n> > + elog(ERROR, \"invalid recovery pause state\");\n> >\n> > This disables the static enum coverage check and it is not likely to\n> > have a wrong value here, other than the case of shared memory\n> > corruption. So we can remove the default case\n> > here. pg_get_replication_slots() is going that direction and\n> > apply_dispatch() is taking a slightly different way. Anyway I think\n> > that we can take away the default case.\n> \n> So do you think we should put an assert(0) in the default case?\n\nNo. Just removing the default in the switch. If the value comes from\nsome other source typically from disk or user-interraction, the\ndefault is necessary, but, in the first place if we have other than\nthe defined value there, it is a sign of something worse than\nERROR. 
If we care about that case, we *could* do the same thing with\napply_dispatch().\n\n switch (GetRecoveryPauseState())\n {\n case RECOVERY_NOT_PAUSED:\n\t return cstring_to_text(\"not paused\");\n ..\n }\n\n /* we shouldn't reach here */\n Assert (0);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 24 Feb 2021 17:56:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "At Wed, 24 Feb 2021 17:56:41 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 24 Feb 2021 13:15:27 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> > > After the renaming of the function, the following structure looks\n> > > simpler and more natural.\n> > >\n> > > while (ConfirmRecoveryPaused())\n> > > {\n> > > ...\n> > > <wait>\n> > > }\n> > \n> > So do you mean that if the pause is requested ConfirmRecoveryPaused\n> > will set it to paused and if it is not paused then it will return\n> > false? With the current function name, I don't think that will look\n> > clean maybe we should change the name to something like\n> > CheckAndConfirmRecoveryPaused? Or I am fine with the way it is now.\n> > Any other thoughts?\n> \n> I should have took the meaning of \"confirm\" wrongly. I took that as\n> \"somehow determine if the recovery is to be paused\". If that reading\n> is completely wrong, I don't mind either re-chaging the function name\n> or leaving all it alone.\n\nOuch. If we choose to re-rename it, it won't be \"CheckAnd...\".\nRecoveryIsPaused() is used for another meaning. Maybe\nRecoveryPauseTriggered() or such? (I'm not sure, sorry..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 24 Feb 2021 18:01:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Wed, Feb 24, 2021 at 2:26 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 24 Feb 2021 13:15:27 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Wed, Feb 24, 2021 at 12:39 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Tue, 23 Feb 2021 12:03:32 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > > > On Fri, Feb 12, 2021 at 3:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > How about something like this?\n> > >\n> > > Request to pause recovery. Server actually stops recovery at a\n> > > convenient time. This can take a few seconds after the request. If you\n> > > need to strictly guarantee that no further database change will occur,\n> > > you can check using pg_get_wal_replay_ause_state(). Note that\n> > > pg_is_wal_replay_paused() may return true before recovery actually\n> > > stops.\n> >\n> > I still think that for the user-facing documentation purpose the\n> > current paragraph looks better.\n>\n> Ok.\n>\n> > > The patch adds two loops whth the following logic:\n> > >\n> > > while (GetRecoveryPauseState() != RECOVERY_NOT_PAUSED)\n> > > {\n> > > ...\n> > > ConfirmRecoveryPaused();\n> > > <wait>\n> > > }\n> > >\n> > > After the renaming of the function, the following structure looks\n> > > simpler and more natural.\n> > >\n> > > while (ConfirmRecoveryPaused())\n> > > {\n> > > ...\n> > > <wait>\n> > > }\n> >\n> > So do you mean that if the pause is requested ConfirmRecoveryPaused\n> > will set it to paused and if it is not paused then it will return\n> > false? With the current function name, I don't think that will look\n> > clean maybe we should change the name to something like\n> > CheckAndConfirmRecoveryPaused? Or I am fine with the way it is now.\n> > Any other thoughts?\n>\n> I should have took the meaning of \"confirm\" wrongly. I took that as\n> \"somehow determine if the recovery is to be paused\". 
If that reading\n> is completely wrong, I don't mind either re-chaging the function name\n> or leaving all it alone.\n\nI am fine with leaving it the way it is unless someone feels that we\nshould change it.\n\n> > >\n> > > + /* test for recovery pause, if user has requested the pause */\n> > > + if (((volatile XLogCtlData *) XLogCtl)->recoveryPauseState !=\n> > >\n> > > The reason for the checkpoint is to move to \"paused\" state in a\n> > > reasonable time. I think we need to mention that reason rather than\n> > > what is done here.\n> >\n> > I will do that.\n>\n> Thanks.\n>\n> > >\n> > > + /* get the recovery pause state */\n> > > + switch(GetRecoveryPauseState())\n> > > + {\n> > > + case RECOVERY_NOT_PAUSED:\n> > > + state = \"not paused\";\n> > > + break;\n> > > ...\n> > > + default:\n> > > + elog(ERROR, \"invalid recovery pause state\");\n> > >\n> > > This disables the static enum coverage check and it is not likely to\n> > > have a wrong value here, other than the case of shared memory\n> > > corruption. So we can remove the default case\n> > > here. pg_get_replication_slots() is going that direction and\n> > > apply_dispatch() is taking a slightly different way. Anyway I think\n> > > that we can take away the default case.\n> >\n> > So do you think we should put an assert(0) in the default case?\n>\n> No. Just removing the default in the switch. If the value comes from\n> some other source typically from disk or user-interraction, the\n> default is necessary, but, in the first place if we have other than\n> the defined value there, it is a sign of something worse than\n> ERROR. 
If we care about that case, we *could* do the same thing with\n> apply_dispatch().\n>\n> switch (GetRecoveryPauseState())\n> {\n> case RECOVERY_NOT_PAUSED:\n> return cstring_to_text(\"not paused\");\n> ..\n> }\n>\n> /* we shouldn't reach here */\n> Assert (0);\n\nI think for such cases IMHO the preferred style for PostgreSQL is that\nwe add Assert(0) in the default case, at least it appeared to me that\nway.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Feb 2021 15:25:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Wed, Feb 24, 2021 at 3:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n\n> > > > The reason for the checkpoint is to move to \"paused\" state in a\n> > > > reasonable time. I think we need to mention that reason rather than\n> > > > what is done here.\n> > >\n> > > I will do that.\n\nI have fixed this.\n\n> > > >\n> > > > + /* get the recovery pause state */\n> > > > + switch(GetRecoveryPauseState())\n> > > > + {\n> > > > + case RECOVERY_NOT_PAUSED:\n> > > > + state = \"not paused\";\n> > > > + break;\n> > > > ...\n> > > > + default:\n> > > > + elog(ERROR, \"invalid recovery pause state\");\n\n>\n> I think for such cases IMHO the preferred style for PostgreSQL is that\n> we add Assert(0) in the default case, at least it appeared to me that\n> way.\n\nAdded an Assert(0) in default case.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 24 Feb 2021 20:21:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "At Wed, 24 Feb 2021 15:25:57 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Wed, Feb 24, 2021 at 2:26 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > I should have took the meaning of \"confirm\" wrongly. 
I took that as\n> > \"somehow determine if the recovery is to be paused\". If that reading\n> > is completely wrong, I don't mind either re-chaging the function name\n> > or leaving all it alone.\n> \n> I am fine with leaving it the way it is unless someone feels that we\n> should change it.\n\nUnderstood. I don't stick to the change.\n\n> > > > This disables the static enum coverage check and it is not likely to\n> > > > have a wrong value here, other than the case of shared memory\n> > > > corruption. So we can remove the default case\n> > > > here. pg_get_replication_slots() is going that direction and\n> > > > apply_dispatch() is taking a slightly different way. Anyway I think\n> > > > that we can take away the default case.\n> > >\n> > > So do you think we should put an assert(0) in the default case?\n> >\n> > No. Just removing the default in the switch. If the value comes from\n> > some other source typically from disk or user-interraction, the\n> > default is necessary, but, in the first place if we have other than\n> > the defined value there, it is a sign of something worse than\n> > ERROR. If we care about that case, we *could* do the same thing with\n> > apply_dispatch().\n> >\n> > switch (GetRecoveryPauseState())\n> > {\n> > case RECOVERY_NOT_PAUSED:\n> > return cstring_to_text(\"not paused\");\n> > ..\n> > }\n> >\n> > /* we shouldn't reach here */\n> > Assert (0);\n> \n> I think for such cases IMHO the preferred style for PostgreSQL is that\n> we add Assert(0) in the default case, at least it appeared to me that\n> way.\n\nRecently we have mildly changed to the direction to utilize the\ncompiler warning about enum coverage in switch struct. (Maybe we need\nanother compiler option that enables that check for switch'es with the\ndefault case, though.) In that light, the direction is a switch\nwithout the default case then Assert if none of the cases is stepped\non. This is what apply_dispatch does. 
Slightly different version of\nthe same would be the following. This is more natural than the above.\n\n statestr = NULL;\n switch(state)\n {\n case RECOVERY_NOT_PAUSED:\n statestr = \"not paused\";\n break;\n ...\n }\n \n Assert (statestr != NULL);\n return cstring_to_text(statestr);\n\nIf the enum had many (more than ten or so?) values and it didn't seem\nstable I'd push that a bit strongly, but it actually consists of only\nthree values and not likely to get further values. 
So I don't insist\n> on the style so strongly here.\n>\n\nChanged as per the suggestion.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 25 Feb 2021 09:49:15 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "At Thu, 25 Feb 2021 09:49:15 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Thu, Feb 25, 2021 at 6:52 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> \n> > Recently we have mildly changed to the direction to utilize the\n> > compiler warning about enum coverage in switch struct. (Maybe we need\n> > another compiler option that enables that check for switch'es with the\n> > default case, though.) In that light, the direction is a switch\n> > without the default case then Assert if none of the cases is stepped\n> > on. This is what apply_dispatch does. Slightly different version of\n> > the same would be the following. This is more natural than the above.\n> >\n> > statestr = NULL;\n> > swtich(state)\n> > {\n> > case RECOVERY_NOT_PAUSED:\n> > statestr = \"not paused\";\n> > break;\n> > ...\n> > }\n> >\n> > Assert (statestr != NULL);\n> > return cstring_to_text(statestr);\n> >\n> > If the enum had many (more than ten or so?) values and it didn't seem\n> > stable I push that a bit strongly but it actually consists of only\n> > three values and not likely to get further values. 
So I don't insist\n> > on the style so strongly here.\n> >\n> \n> Changed as per the suggestion.\n\nThanks for your patience and sorry for having annoyed you.\n\nThe latest version applies (almost) cleanly to the current master and\nworks fine.\nI don't have further comment on this.\n\nI'll wait for a day before marking this RfC in case anyone have\nfurther comments.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 25 Feb 2021 16:12:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Thu, Feb 25, 2021 at 12:42 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Thanks for your patience and sorry for having annoyed you.\n\nThank you very much for your review and inputs.\n\n> The latest version applies (almost) cleanly to the current master and\n> works fine.\n> I don't have further comment on this.\n>\n> I'll wait for a day before marking this RfC in case anyone have\n> further comments.\n\nOkay.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Feb 2021 13:22:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "At Thu, 25 Feb 2021 13:22:53 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Thu, Feb 25, 2021 at 12:42 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > The latest version applies (almost) cleanly to the current master and\n> > works fine.\n> > I don't have further comment on this.\n> >\n> > I'll wait for a day before marking this RfC in case anyone have\n> > further comments.\n> \n> Okay.\n\nHearing nothing, done that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 26 Feb 2021 17:03:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Fri, Feb 26, 2021 at 1:33 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 25 Feb 2021 13:22:53 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Thu, Feb 25, 2021 at 12:42 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > The latest version applies (almost) cleanly to the current master and\n> > > works fine.\n> > > I don't have further comment on this.\n> > >\n> > > I'll wait for a day before marking this RfC in case anyone have\n> > > further comments.\n> >\n> > Okay.\n>\n> Hearing nothing, done that.\n\nThanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Mar 2021 10:37:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" 
}, { "msg_contents": "On Mon, Mar 1, 2021 at 12:08 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > I'll wait for a day before marking this RfC in case anyone have\n> > > > further comments.\n> > >\n> > > Okay.\n> >\n> > Hearing nothing, done that.\n>\n> Thanks.\n\nCommitted with minor cosmetic changes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Mar 2021 15:34:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is Recovery actually paused?" }, { "msg_contents": "On Fri, 12 Mar 2021 at 2:04 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Mar 1, 2021 at 12:08 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > I'll wait for a day before marking this RfC in case anyone have\n> > > > > further comments.\n> > > >\n> > > > Okay.\n> > >\n> > > Hearing nothing, done that.\n> >\n> > Thanks.\n>\n> Committed with minor cosmetic changes.\n\n\nThanks Robert.\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Fri, 12 Mar 2021 at 2:04 AM, Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Mar 1, 2021 at 12:08 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > I'll wait for a day before marking this RfC in case anyone have\n> > > > further comments.\n> > >\n> > > Okay.\n> >\n> > Hearing nothing, done that.\n>\n> Thanks.\n\nCommitted with minor cosmetic changes.Thanks Robert.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 12 Mar 2021 06:21:59 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is Recovery actually paused?" } ]
[ { "msg_contents": "Hi hackers,\r\nI write a path for soupport parallel distinct, union and aggregate using batch sort.\r\nsteps:\r\n 1. generate hash value for group clauses values, and using mod hash value save to batch\r\n 2. end of outer plan, wait all other workers finish write to batch\r\n 3. echo worker get a unique batch number, call tuplesort_performsort() function finish this batch sort\r\n 4. return row for this batch\r\n 5. if not end of all batchs, got step 3\r\n\r\nBatchSort paln make sure same tuple(group clause) return in same range, so Unique(or GroupAggregate) plan can work.\r\n\r\npath 2 for parallel aggregate, this is a simple use\r\nbut regress failed for partitionwise aggregation difference plan\r\nfrom GatherMerge->Sort->Append->...\r\nto Sort->Gahter->Append->...\r\nI have no idea how to modify it.\r\n\r\nSame idea I writed a batch shared tuple store for HashAgg in our PG version, I will send patch for PG14 when I finish it.\r\n\r\n\r\nThe following is a description in Chinese\r\n英语不好,所以这里写点中文,希望上面写的不对的地方请大家帮忙纠正一下。\r\nBatchSort的工作原理\r\n 1. 先按group clause计算出hash值,并按取模的值放入不同的批次\r\n 2. 当下层plan返回所有的行后,等待所有其它的工作进程结束\r\n 3. 每一个工作进程索取一个唯一的一个批次, 并调用tuplesort_performsort()函数完成最终排序\r\n 4. 返回本批次的所有行\r\n 5. 
如果所有的批次没有读完,则返回第3步\r\nBatchSort plan能保证相同的数据(按分给表达式)在同一个周期内返回,所以几个去重和分组相关的plan可以正常工作。\r\n第2个补丁是支持并行分组的,只做一次分组,而不是并行进程做每一次分组后,主进程再进行二次分组。\r\n这个补丁导致了regress测试中的partitionwise aggregation失败,原来的执行计划有所变更。\r\n补丁只写了一个简单的使用BatchSort plan的方法,可能还需要添加其它用法。\r\n\r\n用同样的思想我写了一个使用shared tuple store的HashAgg在我们的AntDB版本中(最新版本暂未开源),适配完PG14版本后我会发出来。\r\n打个广告:欢迎关注我们亚信公司基于PG的分布式数据库产品AntDB,开源地址 https://github.com/ADBSQL/AntDB\r\n\r\n\r\nbucoo@sohu.com", "msg_date": "Mon, 19 Oct 2020 22:42:57 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "parallel distinct union and aggregate support patch" }, { "msg_contents": "On Tue, Oct 20, 2020 at 3:49 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> I write a path for soupport parallel distinct, union and aggregate using batch sort.\n> steps:\n> 1. generate hash value for group clauses values, and using mod hash value save to batch\n> 2. end of outer plan, wait all other workers finish write to batch\n> 3. echo worker get a unique batch number, call tuplesort_performsort() function finish this batch sort\n> 4. return row for this batch\n> 5. if not end of all batchs, got step 3\n>\n> BatchSort paln make sure same tuple(group clause) return in same range, so Unique(or GroupAggregate) plan can work.\n\nHi!\n\nInteresting work! In the past a few people have speculated about a\nParallel Repartition operator that could partition tuples a bit like\nthis, so that each process gets a different set of partitions. Here\nyou combine that with a sort. By doing both things in one node, you\navoid a lot of overheads (writing into a tuplestore once in the\nrepartitioning node, and then once again in the sort node, with tuples\nbeing copied one-by-one between the two nodes).\n\nIf I understood correctly, the tuples emitted by Parallel Batch Sort\nin each process are ordered by (hash(key, ...) 
% npartitions, key,\n...), but the path is claiming to be ordered by (key, ...), no?\nThat's enough for Unique and Aggregate to give the correct answer,\nbecause they really only require equal keys to be consecutive (and in\nthe same process), but maybe some other plan could break?\n\n\n", "msg_date": "Wed, 21 Oct 2020 04:27:46 +0000", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Mon, Oct 19, 2020 at 8:19 PM bucoo@sohu.com <bucoo@sohu.com> wrote:\n>\n> Hi hackers,\n> I write a path for soupport parallel distinct, union and aggregate using batch sort.\n> steps:\n> 1. generate hash value for group clauses values, and using mod hash value save to batch\n> 2. end of outer plan, wait all other workers finish write to batch\n> 3. echo worker get a unique batch number, call tuplesort_performsort() function finish this batch sort\n> 4. return row for this batch\n> 5. if not end of all batchs, got step 3\n>\n> BatchSort paln make sure same tuple(group clause) return in same range, so Unique(or GroupAggregate) plan can work.\n\nInteresting idea. So IIUC, whenever a worker is scanning the tuple it\nwill directly put it into the respective batch(shared tuple store),\nbased on the hash on the grouping column, and once all the workers are\ndone preparing the batches then each worker will pick those batches one\nby one, perform sort and finish the aggregation. I think there is a\nscope of improvement that instead of directly putting the tuple to the\nbatch what if the worker does the partial aggregations and then it\nplaces the partially aggregated rows in the shared tuple store based\non the hash value and then the worker can pick the batch by batch. By\ndoing this way, we can avoid doing large sorts. 
And then this\napproach can also be used with the hash aggregate, I mean the\npartially aggregated data by the hash aggregate can be put into the\nrespective batch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Oct 2020 14:38:03 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "> If I understood correctly, the tuples emitted by Parallel Batch Sort\r\n> in each process are ordered by (hash(key, ...) % npartitions, key,\r\n> ...), but the path is claiming to be ordered by (key, ...), no?\r\n> That's enough for Unique and Aggregate to give the correct answer,\r\n> because they really only require equal keys to be consecutive (and in\r\n> the same process), but maybe some other plan could break?\r\n\r\nThe path is not claiming to be ordered by (key, ...); the path saves the PathKey(s) in BatchSortPath::batchkeys, not Path::pathkeys.\r\nI don't understand \"but maybe some other plan could break\"; do you mean some other path using this path? No, BatchSortPath is only for some special paths (Unique, GroupAgg ...).\r\n\r\n\r\n\r\nbucoo@sohu.com\r\n \r\nFrom: Thomas Munro\r\nDate: 2020-10-21 12:27\r\nTo: bucoo@sohu.com\r\nCC: pgsql-hackers\r\nSubject: Re: parallel distinct union and aggregate support patch\r\nOn Tue, Oct 20, 2020 at 3:49 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\r\n> I write a path for soupport parallel distinct, union and aggregate using batch sort.\r\n> steps:\r\n> 1. generate hash value for group clauses values, and using mod hash value save to batch\r\n> 2. end of outer plan, wait all other workers finish write to batch\r\n> 3. echo worker get a unique batch number, call tuplesort_performsort() function finish this batch sort\r\n> 4. return row for this batch\r\n> 5. 
if not end of all batchs, got step 3\r\n>\r\n> BatchSort paln make sure same tuple(group clause) return in same range, so Unique(or GroupAggregate) plan can work.\r\n \r\nHi!\r\n \r\nInteresting work! In the past a few people have speculated about a\r\nParallel Repartition operator that could partition tuples a bit like\r\nthis, so that each process gets a different set of partitions. Here\r\nyou combine that with a sort. By doing both things in one node, you\r\navoid a lot of overheads (writing into a tuplestore once in the\r\nrepartitioning node, and then once again in the sort node, with tuples\r\nbeing copied one-by-one between the two nodes).\r\n \r\nIf I understood correctly, the tuples emitted by Parallel Batch Sort\r\nin each process are ordered by (hash(key, ...) % npartitions, key,\r\n...), but the path is claiming to be ordered by (key, ...), no?\r\nThat's enough for Unique and Aggregate to give the correct answer,\r\nbecause they really only require equal keys to be consecutive (and in\r\nthe same process), but maybe some other plan could break?\r\n", "msg_date": "Fri, 23 Oct 2020 11:20:31 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "> Interesting idea. 
So IIUC, whenever a worker is scanning the tuple it\r\n> will directly put it into the respective batch(shared tuple store),\r\n> based on the hash on grouping column and once all the workers are\r\n> doing preparing the batch then each worker will pick those baches one\r\n> by one, perform sort and finish the aggregation. I think there is a\r\n> scope of improvement that instead of directly putting the tuple to the\r\n> batch what if the worker does the partial aggregations and then it\r\n> places the partially aggregated rows in the shared tuple store based\r\n> on the hash value and then the worker can pick the batch by batch. By\r\n> doing this way, we can avoid doing large sorts. And then this\r\n> approach can also be used with the hash aggregate, I mean the\r\n> partially aggregated data by the hash aggregate can be put into the\r\n> respective batch.\r\n\r\nGood idea. Batch sort suitable for large aggregate result rows,\r\nin large aggregate result using partial aggregation maybe out of memory,\r\nand all aggregate functions must support partial(using batch sort this is unnecessary).\r\n\r\nActually i written a batch hash store for hash aggregate(for pg11) like this idea,\r\nbut not write partial aggregations to shared tuple store, it's write origin tuple and hash value\r\nto shared tuple store, But it's not support parallel grouping sets.\r\nI'am trying to write parallel hash aggregate support using batch shared tuple store for PG14,\r\nand need support parallel grouping sets hash aggregate.\r\n", "msg_date": "Fri, 23 Oct 2020 14:28:42 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Fri, Oct 23, 2020 at 11:58 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\n>\n> > Interesting idea. So IIUC, whenever a worker is scanning the tuple it\n> > will directly put it into the respective batch(shared tuple store),\n> > based on the hash on grouping column and once all the workers are\n> > doing preparing the batch then each worker will pick those baches one\n> > by one, perform sort and finish the aggregation. 
I think there is a\n> > scope of improvement that instead of directly putting the tuple to the\n> > batch what if the worker does the partial aggregations and then it\n> > places the partially aggregated rows in the shared tuple store based\n> > on the hash value and then the worker can pick the batch by batch. By\n> > doing this way, we can avoid doing large sorts. And then this\n> > approach can also be used with the hash aggregate, I mean the\n> > partially aggregated data by the hash aggregate can be put into the\n> > respective batch.\n>\n> Good idea. Batch sort suitable for large aggregate result rows,\n> in large aggregate result using partial aggregation maybe out of memory,\n> and all aggregate functions must support partial(using batch sort this is unnecessary).\n>\n> Actually i written a batch hash store for hash aggregate(for pg11) like this idea,\n> but not write partial aggregations to shared tuple store, it's write origin tuple and hash value\n> to shared tuple store, But it's not support parallel grouping sets.\n> I'am trying to write parallel hash aggregate support using batch shared tuple store for PG14,\n> and need support parallel grouping sets hash aggregate.\n\nI was trying to look into this patch to understand the logic in more\ndetail. 
Actually, there are no comments at all so it's really hard to\nunderstand what the code is trying to do.\n\nI was reading the below functions, which is the main entry point for\nthe batch sort.\n\n+static TupleTableSlot *ExecBatchSortPrepare(PlanState *pstate)\n+{\n...\n+ for (;;)\n+ {\n...\n+ tuplesort_puttupleslot(state->batches[hash%node->numBatches], slot);\n+ }\n+\n+ for (i=node->numBatches;i>0;)\n+ tuplesort_performsort(state->batches[--i]);\n+build_already_done_:\n+ if (parallel)\n+ {\n+ for (i=node->numBatches;i>0;)\n+ {\n+ --i;\n+ if (state->batches[i])\n+ {\n+ tuplesort_end(state->batches[i]);\n+ state->batches[i] = NULL;\n+ }\n+ }\n\nI did not understand this part, that once each worker has performed\ntheir local batch-wise sort why we are clearing the batches? I mean\nindividual workers have their own batches so eventually they are supposed\nto get merged. Can you explain this part and also it will be better\nif you can add the comments.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Oct 2020 15:27:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Thu, Oct 22, 2020 at 5:08 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Interesting idea. So IIUC, whenever a worker is scanning the tuple it\n> will directly put it into the respective batch(shared tuple store),\n> based on the hash on grouping column and once all the workers are\n> doing preparing the batch then each worker will pick those baches one\n> by one, perform sort and finish the aggregation. I think there is a\n> scope of improvement that instead of directly putting the tuple to the\n> batch what if the worker does the partial aggregations and then it\n> places the partially aggregated rows in the shared tuple store based\n> on the hash value and then the worker can pick the batch by batch. 
By\n> doing this way, we can avoid doing large sorts. And then this\n> approach can also be used with the hash aggregate, I mean the\n> partially aggregated data by the hash aggregate can be put into the\n> respective batch.\n\nI am not sure if this would be a win if the typical group size is\nsmall and the transition state has to be serialized/deserialized.\nPossibly we need multiple strategies, but I guess we'd have to test\nperformance to be sure.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 27 Oct 2020 08:12:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Tue, Oct 27, 2020 at 3:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Oct 23, 2020 at 11:58 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> >\n> > > Interesting idea. So IIUC, whenever a worker is scanning the tuple it\n> > > will directly put it into the respective batch(shared tuple store),\n> > > based on the hash on grouping column and once all the workers are\n> > > doing preparing the batch then each worker will pick those baches one\n> > > by one, perform sort and finish the aggregation. I think there is a\n> > > scope of improvement that instead of directly putting the tuple to the\n> > > batch what if the worker does the partial aggregations and then it\n> > > places the partially aggregated rows in the shared tuple store based\n> > > on the hash value and then the worker can pick the batch by batch. By\n> > > doing this way, we can avoid doing large sorts. And then this\n> > > approach can also be used with the hash aggregate, I mean the\n> > > partially aggregated data by the hash aggregate can be put into the\n> > > respective batch.\n> >\n> > Good idea. 
Batch sort suitable for large aggregate result rows,\n> > in large aggregate result using partial aggregation maybe out of memory,\n> > and all aggregate functions must support partial(using batch sort this is unnecessary).\n> >\n> > Actually i written a batch hash store for hash aggregate(for pg11) like this idea,\n> > but not write partial aggregations to shared tuple store, it's write origin tuple and hash value\n> > to shared tuple store, But it's not support parallel grouping sets.\n> > I'am trying to write parallel hash aggregate support using batch shared tuple store for PG14,\n> > and need support parallel grouping sets hash aggregate.\n>\n> I was trying to look into this patch to understand the logic in more\n> detail. Actually, there are no comments at all so it's really hard to\n> understand what the code is trying to do.\n>\n> I was reading the below functions, which is the main entry point for\n> the batch sort.\n>\n> +static TupleTableSlot *ExecBatchSortPrepare(PlanState *pstate)\n> +{\n> ...\n> + for (;;)\n> + {\n> ...\n> + tuplesort_puttupleslot(state->batches[hash%node->numBatches], slot);\n> + }\n> +\n> + for (i=node->numBatches;i>0;)\n> + tuplesort_performsort(state->batches[--i]);\n> +build_already_done_:\n> + if (parallel)\n> + {\n> + for (i=node->numBatches;i>0;)\n> + {\n> + --i;\n> + if (state->batches[i])\n> + {\n> + tuplesort_end(state->batches[i]);\n> + state->batches[i] = NULL;\n> + }\n> + }\n>\n> I did not understand this part, that once each worker has performed\n> their local batch-wise sort why we are clearing the baches? I mean\n> individual workers have their on batches so eventually they supposed\n> to get merged. Can you explain this part and also it will be better\n> if you can add the comments.\n\nI think I got this, IIUC, each worker is initializing the shared\nsort and performing the batch-wise sorting and we will wait on a\nbarrier so that all the workers can finish with their sorting. 
Once\nthat is done the workers will coordinate and pick the batch by batch\nand perform the final merge for the batch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Oct 2020 19:52:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Tue, Oct 27, 2020 at 5:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Oct 22, 2020 at 5:08 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Interesting idea. So IIUC, whenever a worker is scanning the tuple it\n> > will directly put it into the respective batch(shared tuple store),\n> > based on the hash on grouping column and once all the workers are\n> > doing preparing the batch then each worker will pick those baches one\n> > by one, perform sort and finish the aggregation. I think there is a\n> > scope of improvement that instead of directly putting the tuple to the\n> > batch what if the worker does the partial aggregations and then it\n> > places the partially aggregated rows in the shared tuple store based\n> > on the hash value and then the worker can pick the batch by batch. By\n> > doing this way, we can avoid doing large sorts. 
And then this\n> > approach can also be used with the hash aggregate, I mean the\n> > partially aggregated data by the hash aggregate can be put into the\n> > respective batch.\n>\n> I am not sure if this would be a win if the typical group size is\n> small and the transition state has to be serialized/deserialized.\n> Possibly we need multiple strategies, but I guess we'd have to test\n> performance to be sure.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Oct 2020 19:53:31 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "> On Tue, Oct 27, 2020 at 3:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> >\r\n> > On Fri, Oct 23, 2020 at 11:58 AM bucoo@sohu.com <bucoo@sohu.com> wrote:\r\n> > >\r\n> > > > Interesting idea. So IIUC, whenever a worker is scanning the tuple it\r\n> > > > will directly put it into the respective batch(shared tuple store),\r\n> > > > based on the hash on grouping column and once all the workers are\r\n> > > > doing preparing the batch then each worker will pick those baches one\r\n> > > > by one, perform sort and finish the aggregation. I think there is a\r\n> > > > scope of improvement that instead of directly putting the tuple to the\r\n> > > > batch what if the worker does the partial aggregations and then it\r\n> > > > places the partially aggregated rows in the shared tuple store based\r\n> > > > on the hash value and then the worker can pick the batch by batch. By\r\n> > > > doing this way, we can avoid doing large sorts. And then this\r\n> > > > approach can also be used with the hash aggregate, I mean the\r\n> > > > partially aggregated data by the hash aggregate can be put into the\r\n> > > > respective batch.\r\n> > >\r\n> > > Good idea. 
Batch sort suitable for large aggregate result rows,\r\n> > > in large aggregate result using partial aggregation maybe out of memory,\r\n> > > and all aggregate functions must support partial(using batch sort this is unnecessary).\r\n> > >\r\n> > > Actually i written a batch hash store for hash aggregate(for pg11) like this idea,\r\n> > > but not write partial aggregations to shared tuple store, it's write origin tuple and hash value\r\n> > > to shared tuple store, But it's not support parallel grouping sets.\r\n> > > I'am trying to write parallel hash aggregate support using batch shared tuple store for PG14,\r\n> > > and need support parallel grouping sets hash aggregate.\r\n> >\r\n> > I was trying to look into this patch to understand the logic in more\r\n> > detail. Actually, there are no comments at all so it's really hard to\r\n> > understand what the code is trying to do.\r\n> >\r\n> > I was reading the below functions, which is the main entry point for\r\n> > the batch sort.\r\n> >\r\n> > +static TupleTableSlot *ExecBatchSortPrepare(PlanState *pstate)\r\n> > +{\r\n> > ...\r\n> > + for (;;)\r\n> > + {\r\n> > ...\r\n> > + tuplesort_puttupleslot(state->batches[hash%node->numBatches], slot);\r\n> > + }\r\n> > +\r\n> > + for (i=node->numBatches;i>0;)\r\n> > + tuplesort_performsort(state->batches[--i]);\r\n> > +build_already_done_:\r\n> > + if (parallel)\r\n> > + {\r\n> > + for (i=node->numBatches;i>0;)\r\n> > + {\r\n> > + --i;\r\n> > + if (state->batches[i])\r\n> > + {\r\n> > + tuplesort_end(state->batches[i]);\r\n> > + state->batches[i] = NULL;\r\n> > + }\r\n> > + }\r\n> >\r\n> > I did not understand this part, that once each worker has performed\r\n> > their local batch-wise sort why we are clearing the baches? I mean\r\n> > individual workers have their on batches so eventually they supposed\r\n> > to get merged. 
> > Can you explain this part and also it will be better\r\n> > if you can add the comments.\r\n> \r\n> I think I got this, IIUC, each worker is initializing the shared\r\n> short and performing the batch-wise sorting and we will wait on a\r\n> barrier so that all the workers can finish with their sorting. Once\r\n> that is done the workers will coordinate and pick the batch by batch\r\n> and perform the final merge for the batch.\r\n\r\nYes, it is. Each worker opens the shared sort as \"worker\" (nodeBatchSort.c:134);\r\nafter all workers finish performing, each picks one batch and opens it as \"leader\" (nodeBatchSort.c:54).\r\n", "msg_date": "Wed, 28 Oct 2020 09:58:53 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "Hi\r\nHere is patch for parallel distinct union aggregate and grouping sets support using batch hash agg.\r\nPlease review.\r\n\r\nhow to use:\r\nset enable_batch_hashagg = on\r\n\r\nhow to work:\r\nlike batch sort, but not sort each batch, just save hash value in each rows\r\n\r\nunfinished work:\r\nnot support rescan yet. welcome to add. Actually I don't really understand how rescan works in parallel mode.\r\n\r\nother:\r\npatch 1 base on branch master(80f8eb79e24d9b7963eaf17ce846667e2c6b6e6f)\r\npatch 1 and 2 see https://www.postgresql.org/message-id/2020101922424962544053%40sohu.com \r\npatch 3:\r\n extpand shared tuple store and add batch store module.\r\n By the way, use atomic operations instead LWLock for shared tuple store get next read page.\r\npatch 4:\r\n using batch hash agg support parallels\r\n\r\n \r\nFrom: bucoo@sohu.com\r\nDate: 2020-10-19 22:42\r\nTo: pgsql-hackers\r\nSubject: parallel distinct union and aggregate support patch\r\nHi hackers,\r\nI write a path for soupport parallel distinct, union and aggregate using batch sort.\r\nsteps:\r\n 1. generate hash value for group clauses values, and using mod hash value save to batch\r\n 2. end of outer plan, wait all other workers finish write to batch\r\n 3. echo worker get a unique batch number, call tuplesort_performsort() function finish this batch sort\r\n 4. return row for this batch\r\n 5. if not end of all batchs, got step 3\r\n\r\nBatchSort paln make sure same tuple(group clause) return in same range, so Unique(or GroupAggregate) plan can work.\r\n\r\npath 2 for parallel aggregate, this is a simple use\r\nbut regress failed for partitionwise aggregation difference plan\r\nfrom GatherMerge->Sort->Append->...\r\nto Sort->Gahter->Append->...\r\nI have no idea how to modify it.\r\n\r\nSame idea I writed a batch shared tuple store for HashAgg in our PG version, I will send patch for PG14 when I finish it.\r\n\r\n\r\nThe following is a description originally written in Chinese (translated to English):\r\nMy English is not good, so here is some Chinese; please help correct anything wrong above.\r\nHow BatchSort works:\r\n 1. compute a hash value from the group clause values and put the row into a batch based on the hash modulo\r\n 2. after the lower plan has returned all rows, wait for all the other worker processes to finish\r\n 3. each worker process claims a unique batch and calls tuplesort_performsort() to complete the final sort\r\n 4. return all rows of this batch\r\n 5. if not all batches have been read, go back to step 3\r\nThe BatchSort plan guarantees that the same data (by the grouping expressions) is returned within the same cycle, so the deduplication and grouping related plans can work correctly.\r\nThe second patch supports parallel grouping: grouping is done only once, rather than each parallel worker grouping and the leader process grouping a second time.\r\nThis patch makes the partitionwise aggregation test in regress fail; the original execution plan has changed.\r\nThe patch only implements one simple way of using the BatchSort plan; other usages may need to be added.\r\n\r\nWith the same idea I wrote a HashAgg using a shared tuple store in our AntDB version (the latest version is not yet open source); I will post it after adapting it to PG14.\r\nA small advertisement: welcome to follow AsiaInfo's PG-based distributed database product AntDB, open source at https://github.com/ADBSQL/AntDB\r\n\r\n\r\nbucoo@sohu.com", "msg_date": "Wed, 28 Oct 2020 17:37:40 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "Hi,\n\nOn Wed, Oct 28, 2020 at 05:37:40PM +0800, bucoo@sohu.com wrote:\n>Hi\n>Here is patch for parallel distinct union aggregate and grouping sets support using batch hash agg.\n>Please review.\n>\n>how to use:\n>set enable_batch_hashagg = on\n>\n>how to work:\n>like batch sort, but not sort each batch, just save hash value in each rows\n>\n>unfinished work:\n>not support rescan yet. welcome to add. 
Actually I don't really understand how rescan works in parallel mode.\n>\n>other:\n>patch 1 base on branch master(80f8eb79e24d9b7963eaf17ce846667e2c6b6e6f)\n>patch 1 and 2 see https://www.postgresql.org/message-id/2020101922424962544053%40sohu.com\n>patch 3:\n> extpand shared tuple store and add batch store module.\n> By the way, use atomic operations instead LWLock for shared tuple store get next read page.\n>patch 4:\n> using batch hash agg support parallels\n>\n\nThanks for the patch!\n\nTwo generic comments:\n\n1) It's better to always include the whole patch series - including the\nparts that have not changed. Otherwise people have to scavenge the\nthread and search for all the pieces, which may be a source of issues.\nAlso, it confuses the patch tester [1] which tries to apply patches from\na single message, so it will fail for this one.\n\n2) I suggest you try to describe the goal of these patches, using some\nexample queries, explain output etc. Right now the reviewers have to\nreverse engineer the patches and deduce what the intention was, which\nmay be causing unnecessary confusion etc. If this was my patch, I'd try\nto create a couple examples (CREATE TABLE + SELECT + EXPLAIN) showing\nhow the patch changes the query plan, showing speedup etc.\n\n\nI'd like to do a review and some testing, and this would make it much\neasier for me.\n\n\nkind regards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 28 Oct 2020 13:31:12 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "> 1) It's better to always include the whole patch series - including the\r\n> parts that have not changed. 
Otherwise people have to scavenge the\r\n> thread and search for all the pieces, which may be a source of issues.\r\n> Also, it confuses the patch tester [1] which tries to apply patches from\r\n> a single message, so it will fail for this one.\r\n Patches 3 and 4 do not rely on 1 and 2 in code.\r\n But it will fail when you apply patches 3 and 4 directly, because\r\n they were written on top of 1 and 2.\r\n I can generate a new single patch if you need.\r\n\r\n> 2) I suggest you try to describe the goal of these patches, using some\r\n> example queries, explain output etc. Right now the reviewers have to\r\n> reverse engineer the patches and deduce what the intention was, which\r\n> may be causing unnecessary confusion etc. If this was my patch, I'd try\r\n> to create a couple examples (CREATE TABLE + SELECT + EXPLAIN) showing\r\n> how the patch changes the query plan, showing speedup etc.\r\n I wrote some example queries into regress, including \"unique\", \"union\",\r\n \"group by\" and \"group by grouping sets\".\r\n Here are my tests; they are not in regress:\r\n```sql\r\nbegin;\r\ncreate table gtest(id integer, txt text);\r\ninsert into gtest select t1.id,'txt'||t1.id from (select generate_series(1,1000*1000) id) t1,(select generate_series(1,10) id) t2;\r\nanalyze gtest;\r\ncommit;\r\nset jit = off;\r\n\\timing on\r\n```\r\nnormal aggregate times\r\n```\r\nset enable_batch_hashagg = off;\r\nexplain (costs off,analyze,verbose)\r\nselect sum(id),txt from gtest group by txt;\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------------------\r\n Finalize GroupAggregate (actual time=6469.279..8947.024 rows=1000000 loops=1)\r\n Output: sum(id), txt\r\n Group Key: gtest.txt\r\n -> Gather Merge (actual time=6469.245..8165.930 rows=1000058 loops=1)\r\n Output: txt, (PARTIAL sum(id))\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Sort (actual time=6356.471..7133.832 rows=333353 loops=3)\r\n Output: txt, 
(PARTIAL sum(id))\r\n Sort Key: gtest.txt\r\n Sort Method: external merge Disk: 11608kB\r\n Worker 0: actual time=6447.665..7349.431 rows=317512 loops=1\r\n Sort Method: external merge Disk: 10576kB\r\n Worker 1: actual time=6302.882..7061.157 rows=333301 loops=1\r\n Sort Method: external merge Disk: 11112kB\r\n -> Partial HashAggregate (actual time=2591.487..4430.437 rows=333353 loops=3)\r\n Output: txt, PARTIAL sum(id)\r\n Group Key: gtest.txt\r\n Batches: 17 Memory Usage: 4241kB Disk Usage: 113152kB\r\n Worker 0: actual time=2584.345..4486.407 rows=317512 loops=1\r\n Batches: 17 Memory Usage: 4241kB Disk Usage: 101392kB\r\n Worker 1: actual time=2584.369..4393.244 rows=333301 loops=1\r\n Batches: 17 Memory Usage: 4241kB Disk Usage: 112832kB\r\n -> Parallel Seq Scan on public.gtest (actual time=0.691..603.990 rows=3333333 loops=3)\r\n Output: id, txt\r\n Worker 0: actual time=0.104..607.146 rows=3174970 loops=1\r\n Worker 1: actual time=0.100..603.951 rows=3332785 loops=1\r\n Planning Time: 0.226 ms\r\n Execution Time: 9021.058 ms\r\n(29 rows)\r\n\r\nTime: 9022.251 ms (00:09.022)\r\n\r\nset enable_batch_hashagg = on;\r\nexplain (costs off,analyze,verbose)\r\nselect sum(id),txt from gtest group by txt;\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------\r\n Gather (actual time=3116.666..5740.826 rows=1000000 loops=1)\r\n Output: (sum(id)), txt\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Parallel BatchHashAggregate (actual time=3103.181..5464.948 rows=333333 loops=3)\r\n Output: sum(id), txt\r\n Group Key: gtest.txt\r\n Worker 0: actual time=3094.550..5486.992 rows=326082 loops=1\r\n Worker 1: actual time=3099.562..5480.111 rows=324729 loops=1\r\n -> Parallel Seq Scan on public.gtest (actual time=0.791..656.601 rows=3333333 loops=3)\r\n Output: id, txt\r\n Worker 0: actual time=0.080..646.053 rows=3057680 loops=1\r\n Worker 1: actual time=0.070..662.754 rows=3034370 loops=1\r\n Planning Time: 
0.243 ms\r\n Execution Time: 5788.981 ms\r\n(15 rows)\r\n\r\nTime: 5790.143 ms (00:05.790)\r\n```\r\n\r\ngrouping sets times\r\n```\r\nset enable_batch_hashagg = off;\r\nexplain (costs off,analyze,verbose)\r\nselect sum(id),txt from gtest group by grouping sets(id,txt,());\r\n QUERY PLAN\r\n------------------------------------------------------------------------------------------\r\n GroupAggregate (actual time=9454.707..38921.885 rows=2000001 loops=1)\r\n Output: sum(id), txt, id\r\n Group Key: gtest.id\r\n Group Key: ()\r\n Sort Key: gtest.txt\r\n Group Key: gtest.txt\r\n -> Sort (actual time=9454.679..11804.071 rows=10000000 loops=1)\r\n Output: txt, id\r\n Sort Key: gtest.id\r\n Sort Method: external merge Disk: 254056kB\r\n -> Seq Scan on public.gtest (actual time=2.250..2419.031 rows=10000000 loops=1)\r\n Output: txt, id\r\n Planning Time: 0.230 ms\r\n Execution Time: 39203.883 ms\r\n(14 rows)\r\n\r\nTime: 39205.339 ms (00:39.205)\r\n\r\nset enable_batch_hashagg = on;\r\nexplain (costs off,analyze,verbose)\r\nselect sum(id),txt from gtest group by grouping sets(id,txt,());\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------\r\n Gather (actual time=5931.776..14353.957 rows=2000001 loops=1)\r\n Output: (sum(id)), txt, id\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Parallel BatchHashAggregate (actual time=5920.963..13897.852 rows=666667 loops=3)\r\n Output: sum(id), txt, id\r\n Group Key: gtest.id\r\n Group Key: ()\r\n Group Key: gtest.txt\r\n Worker 0: actual time=5916.370..14062.461 rows=513810 loops=1\r\n Worker 1: actual time=5916.037..13932.847 rows=775901 loops=1\r\n -> Parallel Seq Scan on public.gtest (actual time=0.399..688.273 rows=3333333 loops=3)\r\n Output: id, txt\r\n Worker 0: actual time=0.052..690.955 rows=3349990 loops=1\r\n Worker 1: actual time=0.050..691.595 rows=3297070 loops=1\r\n Planning Time: 0.157 ms\r\n Execution Time: 14598.416 ms\r\n(17 rows)\r\n\r\nTime: 
14599.437 ms (00:14.599)\r\n```\r\n", "msg_date": "Thu, 29 Oct 2020 15:23:25 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Thu, Oct 29, 2020 at 12:53 PM bucoo@sohu.com <bucoo@sohu.com> wrote:\n>\n> > 1) It's better to always include the whole patch series - including the\n> > parts that have not changed. 
Otherwise people have to scavenge the\n> > thread and search for all the pieces, which may be a source of issues.\n> > Also, it confuses the patch tester [1] which tries to apply patches from\n> > a single message, so it will fail for this one.\n> Pathes 3 and 4 do not rely on 1 and 2 in code.\n> But, it will fail when you apply the apatches 3 and 4 directly, because\n> they are written after 1 and 2.\n> I can generate a new single patch if you need.\n>\n> > 2) I suggest you try to describe the goal of these patches, using some\n> > example queries, explain output etc. Right now the reviewers have to\n> > reverse engineer the patches and deduce what the intention was, which\n> > may be causing unnecessary confusion etc. If this was my patch, I'd try\n> > to create a couple examples (CREATE TABLE + SELECT + EXPLAIN) showing\n> > how the patch changes the query plan, showing speedup etc.\n> I written some example queries in to regress, include \"unique\" \"union\"\n> \"group by\" and \"group by grouping sets\".\n> here is my tests, they are not in regress\n> ```sql\n> begin;\n> create table gtest(id integer, txt text);\n> insert into gtest select t1.id,'txt'||t1.id from (select generate_series(1,1000*1000) id) t1,(select generate_series(1,10) id) t2;\n> analyze gtest;\n> commit;\n> set jit = off;\n> \\timing on\n> ```\n> normal aggregate times\n> ```\n> set enable_batch_hashagg = off;\n> explain (costs off,analyze,verbose)\n> select sum(id),txt from gtest group by txt;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------\n> Finalize GroupAggregate (actual time=6469.279..8947.024 rows=1000000 loops=1)\n> Output: sum(id), txt\n> Group Key: gtest.txt\n> -> Gather Merge (actual time=6469.245..8165.930 rows=1000058 loops=1)\n> Output: txt, (PARTIAL sum(id))\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Sort (actual time=6356.471..7133.832 rows=333353 loops=3)\n> Output: txt, (PARTIAL 
sum(id))\n> Sort Key: gtest.txt\n> Sort Method: external merge Disk: 11608kB\n> Worker 0: actual time=6447.665..7349.431 rows=317512 loops=1\n> Sort Method: external merge Disk: 10576kB\n> Worker 1: actual time=6302.882..7061.157 rows=333301 loops=1\n> Sort Method: external merge Disk: 11112kB\n> -> Partial HashAggregate (actual time=2591.487..4430.437 rows=333353 loops=3)\n> Output: txt, PARTIAL sum(id)\n> Group Key: gtest.txt\n> Batches: 17 Memory Usage: 4241kB Disk Usage: 113152kB\n> Worker 0: actual time=2584.345..4486.407 rows=317512 loops=1\n> Batches: 17 Memory Usage: 4241kB Disk Usage: 101392kB\n> Worker 1: actual time=2584.369..4393.244 rows=333301 loops=1\n> Batches: 17 Memory Usage: 4241kB Disk Usage: 112832kB\n> -> Parallel Seq Scan on public.gtest (actual time=0.691..603.990 rows=3333333 loops=3)\n> Output: id, txt\n> Worker 0: actual time=0.104..607.146 rows=3174970 loops=1\n> Worker 1: actual time=0.100..603.951 rows=3332785 loops=1\n> Planning Time: 0.226 ms\n> Execution Time: 9021.058 ms\n> (29 rows)\n>\n> Time: 9022.251 ms (00:09.022)\n>\n> set enable_batch_hashagg = on;\n> explain (costs off,analyze,verbose)\n> select sum(id),txt from gtest group by txt;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------\n> Gather (actual time=3116.666..5740.826 rows=1000000 loops=1)\n> Output: (sum(id)), txt\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Parallel BatchHashAggregate (actual time=3103.181..5464.948 rows=333333 loops=3)\n> Output: sum(id), txt\n> Group Key: gtest.txt\n> Worker 0: actual time=3094.550..5486.992 rows=326082 loops=1\n> Worker 1: actual time=3099.562..5480.111 rows=324729 loops=1\n> -> Parallel Seq Scan on public.gtest (actual time=0.791..656.601 rows=3333333 loops=3)\n> Output: id, txt\n> Worker 0: actual time=0.080..646.053 rows=3057680 loops=1\n> Worker 1: actual time=0.070..662.754 rows=3034370 loops=1\n> Planning Time: 0.243 ms\n> Execution Time: 5788.981 ms\n> 
(15 rows)\n>\n> Time: 5790.143 ms (00:05.790)\n> ```\n>\n> grouping sets times\n> ```\n> set enable_batch_hashagg = off;\n> explain (costs off,analyze,verbose)\n> select sum(id),txt from gtest group by grouping sets(id,txt,());\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------\n> GroupAggregate (actual time=9454.707..38921.885 rows=2000001 loops=1)\n> Output: sum(id), txt, id\n> Group Key: gtest.id\n> Group Key: ()\n> Sort Key: gtest.txt\n> Group Key: gtest.txt\n> -> Sort (actual time=9454.679..11804.071 rows=10000000 loops=1)\n> Output: txt, id\n> Sort Key: gtest.id\n> Sort Method: external merge Disk: 254056kB\n> -> Seq Scan on public.gtest (actual time=2.250..2419.031 rows=10000000 loops=1)\n> Output: txt, id\n> Planning Time: 0.230 ms\n> Execution Time: 39203.883 ms\n> (14 rows)\n>\n> Time: 39205.339 ms (00:39.205)\n>\n> set enable_batch_hashagg = on;\n> explain (costs off,analyze,verbose)\n> select sum(id),txt from gtest group by grouping sets(id,txt,());\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------\n> Gather (actual time=5931.776..14353.957 rows=2000001 loops=1)\n> Output: (sum(id)), txt, id\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Parallel BatchHashAggregate (actual time=5920.963..13897.852 rows=666667 loops=3)\n> Output: sum(id), txt, id\n> Group Key: gtest.id\n> Group Key: ()\n> Group Key: gtest.txt\n> Worker 0: actual time=5916.370..14062.461 rows=513810 loops=1\n> Worker 1: actual time=5916.037..13932.847 rows=775901 loops=1\n> -> Parallel Seq Scan on public.gtest (actual time=0.399..688.273 rows=3333333 loops=3)\n> Output: id, txt\n> Worker 0: actual time=0.052..690.955 rows=3349990 loops=1\n> Worker 1: actual time=0.050..691.595 rows=3297070 loops=1\n> Planning Time: 0.157 ms\n> Execution Time: 14598.416 ms\n> (17 rows)\n>\n> Time: 14599.437 ms (00:14.599)\n> ```\n\nI have done some performance testing with TPCH to 
see the impact on\nthe different query plan, I could see there are a lot of plan changes\nacross various queries but out of those, there are few queries where\nthese patches gave noticeable gain query13 and query17 (I have\nattached the plan for these 2 queries).\n\nTest details:\n----------------\nTPCH scale factor 50 (database size 112GB)\nwork_mem 20GB, shared buffers: 20GB max_parallel_workers_per_gather=4\n\nMachine information:\nArchitecture: x86_64\nCPU(s): 56\nThread(s) per core: 2\nCore(s) per socket: 14\nSocket(s): 2\nNUMA node(s): 2\nModel name: Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz\n\nObservation:\nIn the TPCH test, I have noticed that the major gain we are getting in\nthis patch is because we are able to use the parallelism where we were\nnot able to use due to the limitation of the parallel aggregate.\nBasically, for computing final aggregated results we need to break the\nparallelism because the worker is only performing the partial\naggregate and after that, we had to gather all the partially\naggregated results and do the finalize aggregate. Now, with this\npatch, since we are batching the results we are able to compute the\nfinal aggregate within the workers itself and that enables us to get\nthe parallelism in more cases.\n\nExample:\nIf we observe the output of plan 13(13.explain_head.out), the subquery\nis performing the aggregate and the outer query is doing the grouping\non the aggregated value of the subquery, due to this we are not\nselecting the parallelism in the head because in the inner aggregation\nthe number of groups is huge and if we select the parallelism we need\nto transfer a lot of tuple through the tuple queue and we will also\nhave to serialize/deserialize those many transition values. And the\nouter query needs the final aggregated results from the inner query so\nwe can not select the parallelism. 
Now with the batch\naggregate(13.explain_patch.out), we are able to compute the finalize\naggregation within the workers itself and that enabled us to continue\nthe parallelism till the top node. The execution time for this query\nis now reduced to 57sec from 238sec which is 4X faster.\n\nI will perform some more tests with different scale factors and\nanalyze the behavior of this.\n\n\n\n\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 3 Nov 2020 18:06:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Tue, Nov 3, 2020 at 6:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Oct 29, 2020 at 12:53 PM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> >\n> > > 1) It's better to always include the whole patch series - including the\n> > > parts that have not changed. Otherwise people have to scavenge the\n> > > thread and search for all the pieces, which may be a source of issues.\n> > > Also, it confuses the patch tester [1] which tries to apply patches from\n> > > a single message, so it will fail for this one.\n> > Pathes 3 and 4 do not rely on 1 and 2 in code.\n> > But, it will fail when you apply the apatches 3 and 4 directly, because\n> > they are written after 1 and 2.\n> > I can generate a new single patch if you need.\n> >\n> > > 2) I suggest you try to describe the goal of these patches, using some\n> > > example queries, explain output etc. Right now the reviewers have to\n> > > reverse engineer the patches and deduce what the intention was, which\n> > > may be causing unnecessary confusion etc. 
If this was my patch, I'd try\n> > > to create a couple examples (CREATE TABLE + SELECT + EXPLAIN) showing\n> > > how the patch changes the query plan, showing speedup etc.\n> > I written some example queries in to regress, include \"unique\" \"union\"\n> > \"group by\" and \"group by grouping sets\".\n> > here is my tests, they are not in regress\n> > ```sql\n> > begin;\n> > create table gtest(id integer, txt text);\n> > insert into gtest select t1.id,'txt'||t1.id from (select generate_series(1,1000*1000) id) t1,(select generate_series(1,10) id) t2;\n> > analyze gtest;\n> > commit;\n> > set jit = off;\n> > \\timing on\n> > ```\n> > normal aggregate times\n> > ```\n> > set enable_batch_hashagg = off;\n> > explain (costs off,analyze,verbose)\n> > select sum(id),txt from gtest group by txt;\n> > QUERY PLAN\n> > -------------------------------------------------------------------------------------------------------------\n> > Finalize GroupAggregate (actual time=6469.279..8947.024 rows=1000000 loops=1)\n> > Output: sum(id), txt\n> > Group Key: gtest.txt\n> > -> Gather Merge (actual time=6469.245..8165.930 rows=1000058 loops=1)\n> > Output: txt, (PARTIAL sum(id))\n> > Workers Planned: 2\n> > Workers Launched: 2\n> > -> Sort (actual time=6356.471..7133.832 rows=333353 loops=3)\n> > Output: txt, (PARTIAL sum(id))\n> > Sort Key: gtest.txt\n> > Sort Method: external merge Disk: 11608kB\n> > Worker 0: actual time=6447.665..7349.431 rows=317512 loops=1\n> > Sort Method: external merge Disk: 10576kB\n> > Worker 1: actual time=6302.882..7061.157 rows=333301 loops=1\n> > Sort Method: external merge Disk: 11112kB\n> > -> Partial HashAggregate (actual time=2591.487..4430.437 rows=333353 loops=3)\n> > Output: txt, PARTIAL sum(id)\n> > Group Key: gtest.txt\n> > Batches: 17 Memory Usage: 4241kB Disk Usage: 113152kB\n> > Worker 0: actual time=2584.345..4486.407 rows=317512 loops=1\n> > Batches: 17 Memory Usage: 4241kB Disk Usage: 101392kB\n> > Worker 1: actual time=2584.369..4393.244 
rows=333301 loops=1\n> > Batches: 17 Memory Usage: 4241kB Disk Usage: 112832kB\n> > -> Parallel Seq Scan on public.gtest (actual time=0.691..603.990 rows=3333333 loops=3)\n> > Output: id, txt\n> > Worker 0: actual time=0.104..607.146 rows=3174970 loops=1\n> > Worker 1: actual time=0.100..603.951 rows=3332785 loops=1\n> > Planning Time: 0.226 ms\n> > Execution Time: 9021.058 ms\n> > (29 rows)\n> >\n> > Time: 9022.251 ms (00:09.022)\n> >\n> > set enable_batch_hashagg = on;\n> > explain (costs off,analyze,verbose)\n> > select sum(id),txt from gtest group by txt;\n> > QUERY PLAN\n> > -------------------------------------------------------------------------------------------------\n> > Gather (actual time=3116.666..5740.826 rows=1000000 loops=1)\n> > Output: (sum(id)), txt\n> > Workers Planned: 2\n> > Workers Launched: 2\n> > -> Parallel BatchHashAggregate (actual time=3103.181..5464.948 rows=333333 loops=3)\n> > Output: sum(id), txt\n> > Group Key: gtest.txt\n> > Worker 0: actual time=3094.550..5486.992 rows=326082 loops=1\n> > Worker 1: actual time=3099.562..5480.111 rows=324729 loops=1\n> > -> Parallel Seq Scan on public.gtest (actual time=0.791..656.601 rows=3333333 loops=3)\n> > Output: id, txt\n> > Worker 0: actual time=0.080..646.053 rows=3057680 loops=1\n> > Worker 1: actual time=0.070..662.754 rows=3034370 loops=1\n> > Planning Time: 0.243 ms\n> > Execution Time: 5788.981 ms\n> > (15 rows)\n> >\n> > Time: 5790.143 ms (00:05.790)\n> > ```\n> >\n> > grouping sets times\n> > ```\n> > set enable_batch_hashagg = off;\n> > explain (costs off,analyze,verbose)\n> > select sum(id),txt from gtest group by grouping sets(id,txt,());\n> > QUERY PLAN\n> > ------------------------------------------------------------------------------------------\n> > GroupAggregate (actual time=9454.707..38921.885 rows=2000001 loops=1)\n> > Output: sum(id), txt, id\n> > Group Key: gtest.id\n> > Group Key: ()\n> > Sort Key: gtest.txt\n> > Group Key: gtest.txt\n> > -> Sort (actual 
time=9454.679..11804.071 rows=10000000 loops=1)\n> > Output: txt, id\n> > Sort Key: gtest.id\n> > Sort Method: external merge Disk: 254056kB\n> > -> Seq Scan on public.gtest (actual time=2.250..2419.031 rows=10000000 loops=1)\n> > Output: txt, id\n> > Planning Time: 0.230 ms\n> > Execution Time: 39203.883 ms\n> > (14 rows)\n> >\n> > Time: 39205.339 ms (00:39.205)\n> >\n> > set enable_batch_hashagg = on;\n> > explain (costs off,analyze,verbose)\n> > select sum(id),txt from gtest group by grouping sets(id,txt,());\n> > QUERY PLAN\n> > -------------------------------------------------------------------------------------------------\n> > Gather (actual time=5931.776..14353.957 rows=2000001 loops=1)\n> > Output: (sum(id)), txt, id\n> > Workers Planned: 2\n> > Workers Launched: 2\n> > -> Parallel BatchHashAggregate (actual time=5920.963..13897.852 rows=666667 loops=3)\n> > Output: sum(id), txt, id\n> > Group Key: gtest.id\n> > Group Key: ()\n> > Group Key: gtest.txt\n> > Worker 0: actual time=5916.370..14062.461 rows=513810 loops=1\n> > Worker 1: actual time=5916.037..13932.847 rows=775901 loops=1\n> > -> Parallel Seq Scan on public.gtest (actual time=0.399..688.273 rows=3333333 loops=3)\n> > Output: id, txt\n> > Worker 0: actual time=0.052..690.955 rows=3349990 loops=1\n> > Worker 1: actual time=0.050..691.595 rows=3297070 loops=1\n> > Planning Time: 0.157 ms\n> > Execution Time: 14598.416 ms\n> > (17 rows)\n> >\n> > Time: 14599.437 ms (00:14.599)\n> > ```\n>\n> I have done some performance testing with TPCH to see the impact on\n> the different query plan, I could see there are a lot of plan changes\n> across various queries but out of those, there are few queries where\n> these patches gave noticeable gain query13 and query17 (I have\n> attached the plan for these 2 queries).\n>\n> Test details:\n> ----------------\n> TPCH scale factor 50 (database size 112GB)\n> work_mem 20GB, shared buffers: 20GB max_parallel_workers_per_gather=4\n>\n> Machine information:\n> 
Architecture: x86_64\n> CPU(s): 56\n> Thread(s) per core: 2\n> Core(s) per socket: 14\n> Socket(s): 2\n> NUMA node(s): 2\n> Model name: Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz\n>\n> Observation:\n> In the TPCH test, I have noticed that the major gain we are getting in\n> this patch is because we are able to use the parallelism where we were\n> not able to use due to the limitation of the parallel aggregate.\n> Basically, for computing final aggregated results we need to break the\n> parallelism because the worker is only performing the partial\n> aggregate and after that, we had to gather all the partially\n> aggregated results and do the finalize aggregate. Now, with this\n> patch, since we are batching the results we are able to compute the\n> final aggregate within the workers itself and that enables us to get\n> the parallelism in more cases.\n>\n> Example:\n> If we observe the output of plan 13(13.explain_head.out), the subquery\n> is performing the aggregate and the outer query is doing the grouping\n> on the aggregated value of the subquery, due to this we are not\n> selecting the parallelism in the head because in the inner aggregation\n> the number of groups is huge and if we select the parallelism we need\n> to transfer a lot of tuple through the tuple queue and we will also\n> have to serialize/deserialize those many transition values. And the\n> outer query needs the final aggregated results from the inner query so\n> we can not select the parallelism. Now with the batch\n> aggregate(13.explain_patch.out), we are able to compute the finalize\n> aggregation within the workers itself and that enabled us to continue\n> the parallelism till the top node. 
The execution time for this query\n> is now reduced to 57sec from 238sec which is 4X faster.\n>\n> I will perform some more tests with different scale factors and\n> analyze the behavior of this.\n\nI have started reviewing these patches; I have a couple of review comments.\n\nSome general comments to make the code more readable:\n\n1. Comments are missing in the patch; there are not even function\nheader comments to explain the overall idea of the function.\n I think adding comments will make it easier to review the patch.\n\n2. The code is not written as per the Postgres coding guidelines; the\ncommon problems observed with the patch are:\n a) There should be an empty line after the variable declaration section\n b) In the function definition, the function return type and the\nfunction name should not be on the same line\n\nChange\n\n+static bool ExecNextParallelBatchSort(BatchSortState *state)\n{\n}\nto\nstatic bool\nExecNextParallelBatchSort(BatchSortState *state)\n{\n}\n\nc) When typecasting a variable the spacing is not used properly and\nuniformly; you can refer to other code and fix it.\n\n*Specific comments to patch 0001*\n\n1.\n+#define BATCH_SORT_MAX_BATCHES 512\n\nDid you decide this number based on some experiment or is there some\nanalysis behind selecting this number?\n\n2.\n+BatchSortState* ExecInitBatchSort(BatchSort *node, EState *estate, int eflags)\n+{\n+ BatchSortState *state;\n+ TypeCacheEntry *typentry;\n....\n+ for (i=0;i<node->numGroupCols;++i)\n+ {\n...\n+ InitFunctionCallInfoData(*fcinfo, flinfo, 1, attr->attcollation, NULL, NULL);\n+ fcinfo->args[0].isnull = false;\n+ state->groupFuns = lappend(state->groupFuns, fcinfo);\n+ }\n\n From the variable naming, it appeared like the batch sort is dependent\nupon the grouping node. 
I think instead of using the name\nnumGroupCols and groupFuns we need to use names that are more relevant\nto the batch sort something like numSortKey.\n\n3.\n+ if (eflags & (EXEC_FLAG_REWIND | EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))\n+ {\n+ /* for now, we only using in group aggregate */\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"not support execute flag(s) %d for group sort\", eflags)));\n+ }\n\nInstead of ereport, you should just put an Assert for the unsupported\nflag or elog.\n\n4.\n+ state = makeNode(BatchSortState);\n+ state->ps.plan = (Plan*) node;\n+ state->ps.state = estate;\n+ state->ps.ExecProcNode = ExecBatchSortPrepare;\n\nI think the main executor entry function should be named ExecBatchSort\ninstead of ExecBatchSortPrepare, it will look more consistent with the\nother executor machinery.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 8 Nov 2020 11:54:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Sun, Nov 8, 2020 at 11:54 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Nov 3, 2020 at 6:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Oct 29, 2020 at 12:53 PM bucoo@sohu.com <bucoo@sohu.com> wrote:\n> > >\n> > > > 1) It's better to always include the whole patch series - including the\n> > > > parts that have not changed. 
Otherwise people have to scavenge the\n> > > > thread and search for all the pieces, which may be a source of issues.\n> > > > Also, it confuses the patch tester [1] which tries to apply patches from\n> > > > a single message, so it will fail for this one.\n> > > Pathes 3 and 4 do not rely on 1 and 2 in code.\n> > > But, it will fail when you apply the apatches 3 and 4 directly, because\n> > > they are written after 1 and 2.\n> > > I can generate a new single patch if you need.\n> > >\n> > > > 2) I suggest you try to describe the goal of these patches, using some\n> > > > example queries, explain output etc. Right now the reviewers have to\n> > > > reverse engineer the patches and deduce what the intention was, which\n> > > > may be causing unnecessary confusion etc. If this was my patch, I'd try\n> > > > to create a couple examples (CREATE TABLE + SELECT + EXPLAIN) showing\n> > > > how the patch changes the query plan, showing speedup etc.\n> > > I written some example queries in to regress, include \"unique\" \"union\"\n> > > \"group by\" and \"group by grouping sets\".\n> > > here is my tests, they are not in regress\n> > > ```sql\n> > > begin;\n> > > create table gtest(id integer, txt text);\n> > > insert into gtest select t1.id,'txt'||t1.id from (select generate_series(1,1000*1000) id) t1,(select generate_series(1,10) id) t2;\n> > > analyze gtest;\n> > > commit;\n> > > set jit = off;\n> > > \\timing on\n> > > ```\n> > > normal aggregate times\n> > > ```\n> > > set enable_batch_hashagg = off;\n> > > explain (costs off,analyze,verbose)\n> > > select sum(id),txt from gtest group by txt;\n> > > QUERY PLAN\n> > > -------------------------------------------------------------------------------------------------------------\n> > > Finalize GroupAggregate (actual time=6469.279..8947.024 rows=1000000 loops=1)\n> > > Output: sum(id), txt\n> > > Group Key: gtest.txt\n> > > -> Gather Merge (actual time=6469.245..8165.930 rows=1000058 loops=1)\n> > > Output: txt, (PARTIAL 
sum(id))\n> > > Workers Planned: 2\n> > > Workers Launched: 2\n> > > -> Sort (actual time=6356.471..7133.832 rows=333353 loops=3)\n> > > Output: txt, (PARTIAL sum(id))\n> > > Sort Key: gtest.txt\n> > > Sort Method: external merge Disk: 11608kB\n> > > Worker 0: actual time=6447.665..7349.431 rows=317512 loops=1\n> > > Sort Method: external merge Disk: 10576kB\n> > > Worker 1: actual time=6302.882..7061.157 rows=333301 loops=1\n> > > Sort Method: external merge Disk: 11112kB\n> > > -> Partial HashAggregate (actual time=2591.487..4430.437 rows=333353 loops=3)\n> > > Output: txt, PARTIAL sum(id)\n> > > Group Key: gtest.txt\n> > > Batches: 17 Memory Usage: 4241kB Disk Usage: 113152kB\n> > > Worker 0: actual time=2584.345..4486.407 rows=317512 loops=1\n> > > Batches: 17 Memory Usage: 4241kB Disk Usage: 101392kB\n> > > Worker 1: actual time=2584.369..4393.244 rows=333301 loops=1\n> > > Batches: 17 Memory Usage: 4241kB Disk Usage: 112832kB\n> > > -> Parallel Seq Scan on public.gtest (actual time=0.691..603.990 rows=3333333 loops=3)\n> > > Output: id, txt\n> > > Worker 0: actual time=0.104..607.146 rows=3174970 loops=1\n> > > Worker 1: actual time=0.100..603.951 rows=3332785 loops=1\n> > > Planning Time: 0.226 ms\n> > > Execution Time: 9021.058 ms\n> > > (29 rows)\n> > >\n> > > Time: 9022.251 ms (00:09.022)\n> > >\n> > > set enable_batch_hashagg = on;\n> > > explain (costs off,analyze,verbose)\n> > > select sum(id),txt from gtest group by txt;\n> > > QUERY PLAN\n> > > -------------------------------------------------------------------------------------------------\n> > > Gather (actual time=3116.666..5740.826 rows=1000000 loops=1)\n> > > Output: (sum(id)), txt\n> > > Workers Planned: 2\n> > > Workers Launched: 2\n> > > -> Parallel BatchHashAggregate (actual time=3103.181..5464.948 rows=333333 loops=3)\n> > > Output: sum(id), txt\n> > > Group Key: gtest.txt\n> > > Worker 0: actual time=3094.550..5486.992 rows=326082 loops=1\n> > > Worker 1: actual time=3099.562..5480.111 
rows=324729 loops=1\n> > > -> Parallel Seq Scan on public.gtest (actual time=0.791..656.601 rows=3333333 loops=3)\n> > > Output: id, txt\n> > > Worker 0: actual time=0.080..646.053 rows=3057680 loops=1\n> > > Worker 1: actual time=0.070..662.754 rows=3034370 loops=1\n> > > Planning Time: 0.243 ms\n> > > Execution Time: 5788.981 ms\n> > > (15 rows)\n> > >\n> > > Time: 5790.143 ms (00:05.790)\n> > > ```\n> > >\n> > > grouping sets times\n> > > ```\n> > > set enable_batch_hashagg = off;\n> > > explain (costs off,analyze,verbose)\n> > > select sum(id),txt from gtest group by grouping sets(id,txt,());\n> > > QUERY PLAN\n> > > ------------------------------------------------------------------------------------------\n> > > GroupAggregate (actual time=9454.707..38921.885 rows=2000001 loops=1)\n> > > Output: sum(id), txt, id\n> > > Group Key: gtest.id\n> > > Group Key: ()\n> > > Sort Key: gtest.txt\n> > > Group Key: gtest.txt\n> > > -> Sort (actual time=9454.679..11804.071 rows=10000000 loops=1)\n> > > Output: txt, id\n> > > Sort Key: gtest.id\n> > > Sort Method: external merge Disk: 254056kB\n> > > -> Seq Scan on public.gtest (actual time=2.250..2419.031 rows=10000000 loops=1)\n> > > Output: txt, id\n> > > Planning Time: 0.230 ms\n> > > Execution Time: 39203.883 ms\n> > > (14 rows)\n> > >\n> > > Time: 39205.339 ms (00:39.205)\n> > >\n> > > set enable_batch_hashagg = on;\n> > > explain (costs off,analyze,verbose)\n> > > select sum(id),txt from gtest group by grouping sets(id,txt,());\n> > > QUERY PLAN\n> > > -------------------------------------------------------------------------------------------------\n> > > Gather (actual time=5931.776..14353.957 rows=2000001 loops=1)\n> > > Output: (sum(id)), txt, id\n> > > Workers Planned: 2\n> > > Workers Launched: 2\n> > > -> Parallel BatchHashAggregate (actual time=5920.963..13897.852 rows=666667 loops=3)\n> > > Output: sum(id), txt, id\n> > > Group Key: gtest.id\n> > > Group Key: ()\n> > > Group Key: gtest.txt\n> > > Worker 0: 
actual time=5916.370..14062.461 rows=513810 loops=1\n> > > Worker 1: actual time=5916.037..13932.847 rows=775901 loops=1\n> > > -> Parallel Seq Scan on public.gtest (actual time=0.399..688.273 rows=3333333 loops=3)\n> > > Output: id, txt\n> > > Worker 0: actual time=0.052..690.955 rows=3349990 loops=1\n> > > Worker 1: actual time=0.050..691.595 rows=3297070 loops=1\n> > > Planning Time: 0.157 ms\n> > > Execution Time: 14598.416 ms\n> > > (17 rows)\n> > >\n> > > Time: 14599.437 ms (00:14.599)\n> > > ```\n> >\n> > I have done some performance testing with TPCH to see the impact on\n> > the different query plan, I could see there are a lot of plan changes\n> > across various queries but out of those, there are few queries where\n> > these patches gave noticeable gain query13 and query17 (I have\n> > attached the plan for these 2 queries).\n> >\n> > Test details:\n> > ----------------\n> > TPCH scale factor 50 (database size 112GB)\n> > work_mem 20GB, shared buffers: 20GB max_parallel_workers_per_gather=4\n> >\n> > Machine information:\n> > Architecture: x86_64\n> > CPU(s): 56\n> > Thread(s) per core: 2\n> > Core(s) per socket: 14\n> > Socket(s): 2\n> > NUMA node(s): 2\n> > Model name: Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz\n> >\n> > Observation:\n> > In the TPCH test, I have noticed that the major gain we are getting in\n> > this patch is because we are able to use the parallelism where we were\n> > not able to use due to the limitation of the parallel aggregate.\n> > Basically, for computing final aggregated results we need to break the\n> > parallelism because the worker is only performing the partial\n> > aggregate and after that, we had to gather all the partially\n> > aggregated results and do the finalize aggregate. 
Now, with this\n> > patch, since we are batching the results we are able to compute the\n> > final aggregate within the workers itself and that enables us to get\n> > the parallelism in more cases.\n> >\n> > Example:\n> > If we observe the output of plan 13(13.explain_head.out), the subquery\n> > is performing the aggregate and the outer query is doing the grouping\n> > on the aggregated value of the subquery, due to this we are not\n> > selecting the parallelism in the head because in the inner aggregation\n> > the number of groups is huge and if we select the parallelism we need\n> > to transfer a lot of tuple through the tuple queue and we will also\n> > have to serialize/deserialize those many transition values. And the\n> > outer query needs the final aggregated results from the inner query so\n> > we can not select the parallelism. Now with the batch\n> > aggregate(13.explain_patch.out), we are able to compute the finalize\n> > aggregation within the workers itself and that enabled us to continue\n> > the parallelism till the top node. The execution time for this query\n> > is now reduced to 57sec from 238sec which is 4X faster.\n> >\n> > I will perform some more tests with different scale factors and\n> > analyze the behavior of this.\n>\n> I have started reviewing these patches, I have a couple of review comments.\n>\n> Some general comment to make code more readable\n>\n> 1. Comments are missing in the patch, even there are no function\n> header comments to explain the overall idea about the function.\n> I think adding comments will make it easier to review the patch.\n>\n> 2. 
Code is not written as per the Postgres coding guideline, the\n> common problems observed with the patch are\n> a) There should be an empty line after the variable declaration section\n> b) In the function definition, the function return type and the\n> function name should not be in the same line\n>\n> Change\n>\n> +static bool ExecNextParallelBatchSort(BatchSortState *state)\n> {\n> }\n> to\n> static bool\n> ExecNextParallelBatchSort(BatchSortState *state)\n> {\n> }\n>\n> c) While typecasting the variable the spacing is not used properly and\n> uniformly, you can refer to other code and fix it.\n>\n> *Specific comments to patch 0001*\n>\n> 1.\n> +#define BATCH_SORT_MAX_BATCHES 512\n>\n> Did you decide this number based on some experiment or is there some\n> analysis behind selecting this number?\n>\n> 2.\n> +BatchSortState* ExecInitBatchSort(BatchSort *node, EState *estate, int eflags)\n> +{\n> + BatchSortState *state;\n> + TypeCacheEntry *typentry;\n> ....\n> + for (i=0;i<node->numGroupCols;++i)\n> + {\n> ...\n> + InitFunctionCallInfoData(*fcinfo, flinfo, 1, attr->attcollation, NULL, NULL);\n> + fcinfo->args[0].isnull = false;\n> + state->groupFuns = lappend(state->groupFuns, fcinfo);\n> + }\n>\n> From the variable naming, it appeared like the batch sort is dependent\n> upon the grouping node. 
I think instead of using the name\n> numGroupCols and groupFuns we need to use names that are more relevant\n> to the batch sort something like numSortKey.\n>\n> 3.\n> + if (eflags & (EXEC_FLAG_REWIND | EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))\n> + {\n> + /* for now, we only using in group aggregate */\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"not support execute flag(s) %d for group sort\", eflags)));\n> + }\n>\n> Instead of ereport, you should just put an Assert for the unsupported\n> flag or elog.\n>\n> 4.\n> + state = makeNode(BatchSortState);\n> + state->ps.plan = (Plan*) node;\n> + state->ps.state = estate;\n> + state->ps.ExecProcNode = ExecBatchSortPrepare;\n>\n> I think the main executor entry function should be named ExecBatchSort\n> instead of ExecBatchSortPrepare, it will look more consistent with the\n> other executor machinery.\n\n1.\n+void cost_batchsort(Path *path, PlannerInfo *root,\n+ List *batchkeys, Cost input_cost,\n+ double tuples, int width,\n+ Cost comparison_cost, int sort_mem,\n+ uint32 numGroupCols, uint32 numBatches)\n+{\n+ Cost startup_cost = input_cost;\n+ Cost run_cost = 0;\n+ double input_bytes = relation_byte_size(tuples, width);\n+ double batch_bytes = input_bytes / numBatches;\n+ double batch_tuples = tuples / numBatches;\n+ long sort_mem_bytes = sort_mem * 1024L;\n+\n+ if (sort_mem_bytes < (64*1024))\n+ sort_mem_bytes = (64*1024);\n+\n+ if (!enable_batch_sort)\n+ startup_cost += disable_cost;\n\nYou don't need to write a duplicate function for this, you can reuse\nthe cost_tuplesort function with some minor changes.\n\n\n2. I have one more suggestion, currently, the batches are picked by\nworkers dynamically and the benefit of that is the work distribution\nis quite flexible. But one downside I see with this approach is that\nif we want to make this parallelism to the upper node for example\nmerge join, therein we can imagine the merge join with both side nodes\nas BatchSort. 
But the problem is if the worker picks the batch\ndynamically then the worker need to pick the same batch on both sides\nso for that the right side node should be aware of what batch got\npicked on the left side node so for doing that we might have to\nintroduce a different join node say BatchWiseMergeJoin. Whereas if we\nmake the batches as per the worker number then each sort node can be\nprocessed independently without knowing what is happening on the other\nside.\n\n3. I have also done some performance tests especially with the small\ngroup size, basically, the cases where parallel aggregate is not\npicked due to the small group size, and with the new patch the\nparallel aggregate is possible now.\n\nSetup: I have used TPCH database with S.F 50 and executed an\naggregation query on the ORDER table\n\nNumber of rows in order table: 75000000\nTotal table size: 18 GB\n\nWork_mem: 10GB\n\npostgres=# explain (analyze, verbose) select sum(o_totalprice) from\norders group by o_custkey;\n\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=2506201.00..2570706.04 rows=5160403 width=40)\n(actual time=94002.681..98733.002 rows=4999889 loops=1)\n Output: sum(o_totalprice), o_custkey\n Group Key: orders.o_custkey\n Batches: 1 Memory Usage: 2228241kB\n -> Seq Scan on public.orders (cost=0.00..2131201.00 rows=75000000\nwidth=16) (actual time=0.042..12930.981 rows=75000000 loops=1)\n Output: o_orderkey, o_custkey, o_orderstatus, o_totalprice,\no_orderdate, o_orderpriority, o_clerk, o_shippriority, o_comment\n Planning Time: 0.317 ms\n Execution Time: 99230.242 ms\n\n\npostgres=# set enable_batch_sort=on;\nSET\npostgres=# explain (analyze, verbose) select sum(o_totalprice) from\norders group by o_custkey;\n\n QUERY 
PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------\n---------\n Gather (cost=1616576.00..1761358.55 rows=40316 width=40) (actual\ntime=18516.549..28811.164 rows=4999889 loops=1)\n Output: (sum(o_totalprice)), o_custkey\n Workers Planned: 4\n Workers Launched: 4\n -> GroupAggregate (cost=1615576.00..1756326.99 rows=10079\nwidth=40) (actual time=18506.051..28131.650 rows=999978 loops=5)\n Output: sum(o_totalprice), o_custkey\n Group Key: orders.o_custkey\n Worker 0: actual time=18502.746..28406.868 rows=995092 loops=1\n Worker 1: actual time=18502.339..28518.559 rows=1114511 loops=1\n Worker 2: actual time=18503.233..28461.975 rows=985574 loops=1\n Worker 3: actual time=18506.026..28409.130 rows=1005414 loops=1\n -> Parallel BatchSort (cost=1615576.00..1662451.00\nrows=18750000 width=16) (actual time=18505.982..21839.567\nrows=15000000 loops=5)\n Output: o_custkey, o_totalprice\n Sort Key: orders.o_custkey\n batches: 512\n Worker 0: actual time=18502.666..21945.442 rows=14925544 loops=1\n Worker 1: actual time=18502.270..21979.350 rows=16714443 loops=1\n Worker 2: actual time=18503.144..21933.151 rows=14784292 loops=1\n Worker 3: actual time=18505.950..21943.312 rows=15081559 loops=1\n -> Parallel Seq Scan on public.orders\n(cost=0.00..1568701.00 rows=18750000 width=16) (actual\ntime=0.082..4662.390 rows=15000000\nloops=5)\n Output: o_custkey, o_totalprice\n Worker 0: actual time=0.079..4720.424\nrows=15012981 loops=1\n Worker 1: actual time=0.083..4710.919\nrows=15675399 loops=1\n Worker 2: actual time=0.082..4663.096\nrows=14558663 loops=1\n Worker 3: actual time=0.104..4625.940\nrows=14496910 loops=1\n Planning Time: 0.281 ms\n Execution Time: 29504.248 ms\n\n\npostgres=# set enable_batch_hashagg =on;\npostgres=# set enable_batch_sort=off;\npostgres=# explain (analyze, verbose) select sum(o_totalprice) from\norders group by o_custkey;\n\nQUERY 
PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------\n---\n Gather (cost=1755004.00..2287170.56 rows=5160403 width=40) (actual\ntime=12935.338..27064.962 rows=4999889 loops=1)\n Output: (sum(o_totalprice)), o_custkey\n Workers Planned: 4\n Workers Launched: 4\n -> Parallel BatchHashAggregate (cost=1754004.00..1770130.26\nrows=1290101 width=40) (actual time=12987.830..24726.348 rows=999978\nloops=5)\n Output: sum(o_totalprice), o_custkey\n Group Key: orders.o_custkey\n Worker 0: actual time=13013.228..25078.902 rows=999277 loops=1\n Worker 1: actual time=12917.375..25456.751 rows=1100607 loops=1\n Worker 2: actual time=13041.088..24022.445 rows=900562 loops=1\n Worker 3: actual time=13032.732..25230.101 rows=1001386 loops=1\n -> Parallel Seq Scan on public.orders\n(cost=0.00..1568701.00 rows=18750000 width=16) (actual\ntime=0.059..2764.881 rows=15000000 loops=\n5)\n Output: o_orderkey, o_custkey, o_orderstatus,\no_totalprice, o_orderdate, o_orderpriority, o_clerk, o_shippriority,\no_comment\n Worker 0: actual time=0.056..2754.621 rows=14924063 loops=1\n Worker 1: actual time=0.063..2815.688 rows=16241825 loops=1\n Worker 2: actual time=0.067..2750.927 rows=14064529 loops=1\n Worker 3: actual time=0.055..2753.620 rows=14699841 loops=1\n Planning Time: 0.209 ms\n Execution Time: 27728.363 ms\n(19 rows)\n\n\nI think both parallel batch-wise grouping aggregate and the batch-wise\nhash aggregate are giving very huge improvement when the typical group\nsize is small.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Nov 2020 16:53:48 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "I also had a quick look at the patch and the comments made so far. Summary:\n\n1. 
The performance results are promising.\n\n2. The code needs comments.\n\nRegarding the design:\n\nThomas Munro mentioned the idea of a \"Parallel Repartition\" node that \nwould redistribute tuples like this. As I understand it, the difference \nis that this BatchSort implementation collects all tuples in a tuplesort \nor a tuplestore, while a Parallel Repartition node would just \nredistribute the tuples to the workers, without buffering. The receiving \nworker could put the tuples into a tuplestore or sort if needed.\n\nI think a non-buffering Repartition node would be simpler, and thus \nbetter. In these patches, you have a BatchSort node, and batchstore, but \na simple Parallel Repartition node could do both. For example, to \nimplement distinct:\n\nGather\n- > Unique\n -> Sort\n -> Parallel Redistribute\n -> Parallel Seq Scan\n\nAnd a Hash Agg would look like this:\n\nGather\n- > Hash Agg\n -> Parallel Redistribute\n -> Parallel Seq Scan\n\n\nI'm marking this as Waiting on Author in the commitfest.\n\n- Heikki\n\n\n", "msg_date": "Fri, 27 Nov 2020 17:55:25 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Fri, Nov 27, 2020 at 10:55 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I think a non-buffering Repartition node would be simpler, and thus\n> better. In these patches, you have a BatchSort node, and batchstore, but\n> a simple Parallel Repartition node could do both. For example, to\n> implement distinct:\n>\n> Gather\n> - > Unique\n> -> Sort\n> -> Parallel Redistribute\n> -> Parallel Seq Scan\n>\n> And a Hash Agg would look like this:\n>\n> Gather\n> - > Hash Agg\n> -> Parallel Redistribute\n> -> Parallel Seq Scan\n>\n> I'm marking this as Waiting on Author in the commitfest.\n\nI'm also intrigued by the parallel redistribute operator -- it seems\nlike it might be more flexible than this approach. 
However, I'm\nconcerned that there may be deadlock risks. If there is no buffer, or\na fixed-size buffer, the buffer might be full, and a process trying to\njam tuples into the parallel redistribute would have to wait. Now if A\ncan wait for B and at the same time B can wait for A, deadlock will\nensue. In a naive implementation, this could happen with a single\nparallel redistribute operator: worker 1 is trying to send a tuple to\nworker 2, which can't receive it because it's busy sending a tuple to\nworker 1. That could probably be fixed by arranging for workers to try\nto receive data whenever they block in the middle of sending\ndata. However, in general there can be multiple nodes that cause\nwaiting in the tree: any number of Parallel Redistribute nodes, plus a\nGather, plus maybe other stuff. The cheap way out of that problem is\nto use a buffer that can grow arbitrarily large, but that's not\nterribly satisfying either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 28 Nov 2020 18:16:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Fri, Nov 27, 2020 at 9:25 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> I also had a quick look at the patch and the comments made so far. Summary:\n>\n> 1. The performance results are promising.\n>\n> 2. The code needs comments.\n>\n> Regarding the design:\n>\n> Thomas Munro mentioned the idea of a \"Parallel Repartition\" node that\n> would redistribute tuples like this. 
As I understand it, the difference\n> is that this BatchSort implementation collects all tuples in a tuplesort\n> or a tuplestore, while a Parallel Repartition node would just\n> redistribute the tuples to the workers, without buffering.\n\nI think the advantage of the \"Parallel BatchSort\" is that it give\nflexibility to pick the batches dynamically by the worker after the\nrepartition. OTOH if we distribute batches directly based on the\nworker number the advantage is that the operator will be quite\nflexible, e.g. if we want to implement the merge join we can just\nplace the \"Parallel Repartition\" node above both side of the scan node\nand we will simply get the batch wise merge join because each worker\nknows their batch. Whereas if we allow workers to dynamically pick\nthe batch the right side node needs to know which batch to pick\nbecause it is dynamically picked, I mean it is not as difficult\nbecause it is the same worker but it seems less flexible.\n\n The receiving\n> worker could put the tuples to a tuplestore or sort if needed.\n\nIf we are using it without buffering then the sending worker can\ndirectly put the tuple into the respective sort/tuplestore node.\n\n> I think a non-buffering Reparttion node would be simpler, and thus\n> better. In these patches, you have a BatchSort node, and batchstore, but\n> a simple Parallel Repartition node could do both. For example, to\n> implement distinct:\n>\n> Gather\n> - > Unique\n> -> Sort\n> -> Parallel Redistribute\n> -> Parallel Seq Scan\n>\n> And a Hash Agg would look like this:\n>\n> Gather\n> - > Hash Agg\n> -> Parallel Redistribute\n> -> Parallel Seq Scan\n>\n>\n> I'm marking this as Waiting on Author in the commitfest.\n\nI agree that the simple parallel redistribute/repartition node will be\nflexible and could do both, but I see one problem. 
Basically, if we\nuse the common operator then first the Parallel Redistribute operator\nwill use the tuplestore for redistributing the data as per the worker\nand then each worker might use the disk again to sort their respective\ndata. Instead of that while redistributing the data itself we can use\nthe parallel sort so that each worker gets their respective batch in\nform of sorted tapes.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 29 Nov 2020 11:53:18 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "> 1.\r\n> +#define BATCH_SORT_MAX_BATCHES 512\r\n> \r\n> Did you decide this number based on some experiment or is there some\r\n> analysis behind selecting this number?\r\nWhen there are too few batches, if a certain process works too slowly, it will cause unbalanced load.\r\nWhen there are too many batches, FD will open and close files frequently.\r\n\r\n> 2.\r\n> +BatchSortState* ExecInitBatchSort(BatchSort *node, EState *estate, int eflags)\r\n> +{\r\n> + BatchSortState *state;\r\n> + TypeCacheEntry *typentry;\r\n> ....\r\n> + for (i=0;i<node->numGroupCols;++i)\r\n> + {\r\n> ...\r\n> + InitFunctionCallInfoData(*fcinfo, flinfo, 1, attr->attcollation, NULL, NULL);\r\n> + fcinfo->args[0].isnull = false;\r\n> + state->groupFuns = lappend(state->groupFuns, fcinfo);\r\n> + }\r\n> \r\n> From the variable naming, it appeared like the batch sort is dependent\r\n> upon the grouping node. 
I think instead of using the name\r\n> numGroupCols and groupFuns we need to use names that are more relevant\r\n> to the batch sort something like numSortKey.\r\nNot all data types support both sorting and hashing calculations, such as user-defined data types.\r\nWe do not need all columns to support hash calculation when we batch, so I used two variables.\r\n\r\n> 3.\r\n> + if (eflags & (EXEC_FLAG_REWIND | EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))\r\n> + {\r\n> + /* for now, we only using in group aggregate */\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\r\n> + errmsg(\"not support execute flag(s) %d for group sort\", eflags)));\r\n> + }\r\n> \r\n> Instead of ereport, you should just put an Assert for the unsupported\r\n> flag or elog.\r\nIn fact, this is an unfinished feature, BatchSort should also support these features, welcome to supplement.\r\n\r\n> 4.\r\n> + state = makeNode(BatchSortState);\r\n> + state->ps.plan = (Plan*) node;\r\n> + state->ps.state = estate;\r\n> + state->ps.ExecProcNode = ExecBatchSortPrepare;\r\n> \r\n> I think the main executor entry function should be named ExecBatchSort\r\n> instead of ExecBatchSortPrepare, it will look more consistent with the\r\n> other executor machinery.\r\nThe job of the ExecBatchSortPrepare function is to preprocess the data (batch and pre-sort),\r\nand when its work ends, it will call \"ExecSetExecProcNode(pstate, ExecBatchSort)\" to return the data to the ExecBatchSort function.\r\nThere is another advantage of dividing into two functions, \r\nIt is not necessary to judge whether tuplesort is now available every time the function is processed to improve the subtle performance.\r\nAnd I think this code is clearer.\r\n\r\n\n\n> 1.> +#define BATCH_SORT_MAX_BATCHES 512>  > Did you decide this number based on some experiment or is there some> analysis behind selecting this number?When there are too few batches, if a certain process works too slowly, it will cause unbalanced load.When there 
are too many batches, FD will open and close files frequently.> 2.> +BatchSortState* ExecInitBatchSort(BatchSort *node, EState *estate, int eflags)> +{> + BatchSortState *state;> + TypeCacheEntry *typentry;> ....> + for (i=0;i<node->numGroupCols;++i)> + {> ...> + InitFunctionCallInfoData(*fcinfo, flinfo, 1, attr->attcollation, NULL, NULL);> + fcinfo->args[0].isnull = false;> + state->groupFuns = lappend(state->groupFuns, fcinfo);> + }>  > From the variable naming, it appeared like the batch sort is dependent> upon the grouping node.  I think instead of using the name> numGroupCols and groupFuns we need to use names that are more relevant> to the batch sort something like numSortKey.Not all data types support both sorting and hashing calculations, such as user-defined data types.We do not need all columns to support hash calculation when we batch, so I used two variables.> 3.> + if (eflags & (EXEC_FLAG_REWIND | EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))> + {> + /* for now, we only using in group aggregate */> + ereport(ERROR,> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),> + errmsg(\"not support execute flag(s) %d for group sort\", eflags)));> + }>  > Instead of ereport, you should just put an Assert for the unsupported> flag or elog.In fact, this is an unfinished feature, BatchSort should also support these features, welcome to supplement.> 4.> + state = makeNode(BatchSortState);> + state->ps.plan = (Plan*) node;> + state->ps.state = estate;> + state->ps.ExecProcNode = ExecBatchSortPrepare;>  > I think the main executor entry function should be named ExecBatchSort> instead of ExecBatchSortPrepare, it will look more consistent with the> other executor machinery.The job of the ExecBatchSortPrepare function is to preprocess the data (batch and pre-sort),and when its work ends, it will call \"ExecSetExecProcNode(pstate, ExecBatchSort)\" to return the data to the ExecBatchSort function.There is another advantage of dividing into two functions, It is not necessary to judge 
whether tuplesort is now available every time the function is processed to improve the subtle performance.And I think this code is clearer.", "msg_date": "Tue, 1 Dec 2020 01:27:11 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "Now, I rewrite batch hashagg and sort, add some comment and combin too patches. base on master 2ad78a87f018260d4474eee63187e1cc73c9b976.\r\nThey are support rescan and change GUC enable_batch_hashagg/enable_batch_sort to max_hashagg_batches/max_sort_batch, default value is \"0\"(mean is disable).\r\nThe \"max_hashagg_batches\" in grouping sets each chain using this value, maybe we need a better algorithm.\r\nDo not set \"max_sort_batch\" too large, because each tuplesort's work memory is \"work_mem/max_sort_batch\".\r\n\r\nNext step I want use batch sort add parallel merge join(thinks Dilip Kumar) and except/intersect support after this patch commit, welcome to discuss.\r\n\r\nSome test result:\r\nhash group by: 17,974.797 ms -> 10,137.909 ms\r\nsort group by: 117,475.380 ms -> 34,830.489 ms\r\ngrouping sets: 91,915.597 ms -> 24,585.103 ms\r\nunion: 95,765.297 ms -> 21,416.414 ms\r\n\r\n---------------------------test details-------------------------------\r\nMachine information:\r\nArchitecture: x86_64\r\nCPU(s): 88\r\nThread(s) per core: 2\r\nCore(s) per socket: 22\r\nSocket(s): 2\r\nNUMA node(s): 2\r\nModel name: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz\r\n\r\nprepare data:\r\nbegin;\r\ncreate table gtest(id integer, txt text);\r\ninsert into gtest select t1.id,'txt'||t1.id from (select generate_series(1,10*1000*1000) id) t1,(select generate_series(1,10) id) t2;\r\nanalyze gtest;\r\ncommit;\r\nset max_parallel_workers_per_gather=8;\r\nset work_mem = '100MB';\r\n\r\nhash aggregate:\r\nexplain (verbose,costs off,analyze)\r\nselect sum(id),txt from gtest group by txt;\r\n QUERY PLAN 
\r\n---------------------------------------------------------------------------------------------------------\r\n Finalize HashAggregate (actual time=10832.805..17403.671 rows=10000000 loops=1)\r\n Output: sum(id), txt\r\n Group Key: gtest.txt\r\n Batches: 29 Memory Usage: 102489kB Disk Usage: 404696kB\r\n -> Gather (actual time=4389.345..7227.279 rows=10000058 loops=1)\r\n Output: txt, (PARTIAL sum(id))\r\n Workers Planned: 6\r\n Workers Launched: 6\r\n -> Partial HashAggregate (actual time=4353.147..5992.183 rows=1428580 loops=7)\r\n Output: txt, PARTIAL sum(id)\r\n Group Key: gtest.txt\r\n Batches: 5 Memory Usage: 110641kB Disk Usage: 238424kB\r\n Worker 0: actual time=4347.155..5954.088 rows=1398608 loops=1\r\n Batches: 5 Memory Usage: 114737kB Disk Usage: 203928kB\r\n Worker 1: actual time=4347.061..6209.121 rows=1443046 loops=1\r\n Batches: 5 Memory Usage: 114737kB Disk Usage: 224384kB\r\n Worker 2: actual time=4347.175..5882.065 rows=1408238 loops=1\r\n Batches: 5 Memory Usage: 110641kB Disk Usage: 216360kB\r\n Worker 3: actual time=4347.193..6015.830 rows=1477568 loops=1\r\n Batches: 5 Memory Usage: 110641kB Disk Usage: 240824kB\r\n Worker 4: actual time=4347.210..5950.730 rows=1404288 loops=1\r\n Batches: 5 Memory Usage: 110641kB Disk Usage: 214872kB\r\n Worker 5: actual time=4347.482..6064.460 rows=1439454 loops=1\r\n Batches: 5 Memory Usage: 110641kB Disk Usage: 239400kB\r\n -> Parallel Seq Scan on public.gtest (actual time=0.051..1216.378 rows=14285714 loops=7)\r\n Output: id, txt\r\n Worker 0: actual time=0.048..1219.133 rows=13986000 loops=1\r\n Worker 1: actual time=0.047..1214.860 rows=14430370 loops=1\r\n Worker 2: actual time=0.051..1222.124 rows=14082300 loops=1\r\n Worker 3: actual time=0.061..1213.851 rows=14775580 loops=1\r\n Worker 4: actual time=0.073..1216.712 rows=14042795 loops=1\r\n Worker 5: actual time=0.049..1210.870 rows=14394480 loops=1\r\n Planning Time: 0.673 ms\r\n Execution Time: 17974.797 ms\r\nbatch hash aggregate:\r\nset 
max_hashagg_batches = 100;\r\nexplain (verbose,costs off,analyze)\r\nselect sum(id),txt from gtest group by txt;\r\n QUERY PLAN \r\n---------------------------------------------------------------------------------------------------\r\n Gather (actual time=5050.110..9757.292 rows=10000000 loops=1)\r\n Output: (sum(id)), txt\r\n Workers Planned: 6\r\n Workers Launched: 6\r\n -> Parallel BatchHashAggregate (actual time=5032.178..7810.979 rows=1428571 loops=7)\r\n Output: sum(id), txt\r\n Group Key: gtest.txt\r\n Worker 0: actual time=5016.488..7694.715 rows=1399958 loops=1\r\n Worker 1: actual time=5021.651..7942.628 rows=1501753 loops=1\r\n Worker 2: actual time=5018.327..7944.842 rows=1400176 loops=1\r\n Worker 3: actual time=5082.977..7973.635 rows=1400818 loops=1\r\n Worker 4: actual time=5019.229..7847.522 rows=1499952 loops=1\r\n Worker 5: actual time=5017.086..7667.116 rows=1398470 loops=1\r\n -> Parallel Seq Scan on public.gtest (actual time=0.055..1378.237 rows=14285714 loops=7)\r\n Output: id, txt\r\n Worker 0: actual time=0.057..1349.870 rows=14533515 loops=1\r\n Worker 1: actual time=0.052..1376.305 rows=13847620 loops=1\r\n Worker 2: actual time=0.068..1382.226 rows=13836705 loops=1\r\n Worker 3: actual time=0.071..1405.669 rows=13856130 loops=1\r\n Worker 4: actual time=0.055..1406.186 rows=14677345 loops=1\r\n Worker 5: actual time=0.045..1351.142 rows=15344825 loops=1\r\n Planning Time: 0.250 ms\r\n Execution Time: 10137.909 ms\r\n\r\nsort aggregate:\r\nset enable_hashagg = off;\r\nset max_hashagg_batches = 0;\r\nexplain (verbose,costs off,analyze)\r\nselect sum(id),txt from gtest group by txt;\r\n QUERY PLAN \r\n----------------------------------------------------------------------------------------------------------------\r\n Finalize GroupAggregate (actual time=10370.559..116494.922 rows=10000000 loops=1)\r\n Output: sum(id), txt\r\n Group Key: gtest.txt\r\n -> Gather Merge (actual time=10370.487..112470.148 rows=10000059 loops=1)\r\n Output: txt, 
(PARTIAL sum(id))\r\n Workers Planned: 6\r\n Workers Launched: 6\r\n -> Partial GroupAggregate (actual time=8608.563..24526.716 rows=1428580 loops=7)\r\n Output: txt, PARTIAL sum(id)\r\n Group Key: gtest.txt\r\n Worker 0: actual time=8283.755..18641.475 rows=887626 loops=1\r\n Worker 1: actual time=8303.984..26206.673 rows=1536832 loops=1\r\n Worker 2: actual time=8290.611..28110.145 rows=1676544 loops=1\r\n Worker 3: actual time=10347.326..29912.135 rows=1783536 loops=1\r\n Worker 4: actual time=8329.604..20262.795 rows=980352 loops=1\r\n Worker 5: actual time=8322.877..27957.446 rows=1758958 loops=1\r\n -> Sort (actual time=8608.501..21752.009 rows=14285714 loops=7)\r\n Output: txt, id\r\n Sort Key: gtest.txt\r\n Sort Method: external merge Disk: 349760kB\r\n Worker 0: actual time=8283.648..16831.068 rows=8876115 loops=1\r\n Sort Method: external merge Disk: 225832kB\r\n Worker 1: actual time=8303.927..23053.078 rows=15368320 loops=1\r\n Sort Method: external merge Disk: 391008kB\r\n Worker 2: actual time=8290.556..24735.395 rows=16765440 loops=1\r\n Sort Method: external merge Disk: 426552kB\r\n Worker 3: actual time=10347.264..26438.333 rows=17835210 loops=1\r\n Sort Method: external merge Disk: 453768kB\r\n Worker 4: actual time=8329.534..18248.302 rows=9803520 loops=1\r\n Sort Method: external merge Disk: 249408kB\r\n Worker 5: actual time=8322.827..24480.383 rows=17589430 loops=1\r\n Sort Method: external merge Disk: 447520kB\r\n -> Parallel Seq Scan on public.gtest (actual time=51.618..1530.850 rows=14285714 loops=7)\r\n Output: txt, id\r\n Worker 0: actual time=49.907..1001.606 rows=8876115 loops=1\r\n Worker 1: actual time=51.011..1665.980 rows=15368320 loops=1\r\n Worker 2: actual time=50.087..1812.426 rows=16765440 loops=1\r\n Worker 3: actual time=51.010..1828.299 rows=17835210 loops=1\r\n Worker 4: actual time=42.614..1077.896 rows=9803520 loops=1\r\n Worker 5: actual time=51.010..1790.012 rows=17589430 loops=1\r\n Planning Time: 0.119 ms\r\n 
Execution Time: 117475.380 ms\r\nbatch sort aggregate:\r\nset max_sort_batches = 21;\r\nexplain (verbose,costs off,analyze)\r\nselect sum(id),txt from gtest group by txt;\r\n QUERY PLAN \r\n----------------------------------------------------------------------------------------------------------\r\n Gather (actual time=18699.622..34438.083 rows=10000000 loops=1)\r\n Output: (sum(id)), txt\r\n Workers Planned: 6\r\n Workers Launched: 6\r\n -> GroupAggregate (actual time=18671.875..31121.607 rows=1428571 loops=7)\r\n Output: sum(id), txt\r\n Group Key: gtest.txt\r\n Worker 0: actual time=18669.038..30913.680 rows=1427622 loops=1\r\n Worker 1: actual time=18674.356..31045.516 rows=1430448 loops=1\r\n Worker 2: actual time=18677.565..31375.340 rows=1427636 loops=1\r\n Worker 3: actual time=18667.879..31359.458 rows=1427935 loops=1\r\n Worker 4: actual time=18669.760..31263.414 rows=1430220 loops=1\r\n Worker 5: actual time=18645.428..30813.141 rows=1427411 loops=1\r\n -> Parallel BatchSort (actual time=18671.796..29348.606 rows=14285714 loops=7)\r\n Output: txt, id\r\n Sort Key: gtest.txt\r\n batches: 21\r\n Worker 0: actual time=18668.856..29172.519 rows=14276220 loops=1\r\n Worker 1: actual time=18674.287..29280.794 rows=14304480 loops=1\r\n Worker 2: actual time=18677.501..29569.974 rows=14276360 loops=1\r\n Worker 3: actual time=18667.801..29558.286 rows=14279350 loops=1\r\n Worker 4: actual time=18669.689..29468.636 rows=14302200 loops=1\r\n Worker 5: actual time=18645.367..29076.665 rows=14274110 loops=1\r\n -> Parallel Seq Scan on public.gtest (actual time=50.164..1893.727 rows=14285714 loops=7)\r\n Output: txt, id\r\n Worker 0: actual time=50.058..1818.959 rows=13953440 loops=1\r\n Worker 1: actual time=50.974..1723.268 rows=13066735 loops=1\r\n Worker 2: actual time=48.050..1855.469 rows=13985175 loops=1\r\n Worker 3: actual time=49.640..1791.897 rows=12673240 loops=1\r\n Worker 4: actual time=48.027..1932.927 rows=14586880 loops=1\r\n Worker 5: actual 
time=51.151..2094.981 rows=16360290 loops=1\r\n Planning Time: 0.160 ms\r\n Execution Time: 34830.489 ms\r\n\r\nnormal grouping sets:\r\nset enable_hashagg = on;\r\nset max_sort_batches = 0;\r\nset max_hashagg_batches = 0;\r\nexplain (costs off,verbose,analyze)\r\nselect sum(id),txt from gtest group by grouping sets(id,txt,());\r\n QUERY PLAN \r\n----------------------------------------------------------------------------------------------------------\r\n MixedAggregate (actual time=4563.123..90348.608 rows=20000001 loops=1)\r\n Output: sum(id), txt, id\r\n Hash Key: gtest.txt\r\n Group Key: gtest.id\r\n Group Key: ()\r\n Batches: 29 Memory Usage: 114737kB Disk Usage: 3241968kB\r\n -> Gather Merge (actual time=4563.070..39429.593 rows=100000000 loops=1)\r\n Output: txt, id\r\n Workers Planned: 6\r\n Workers Launched: 6\r\n -> Sort (actual time=4493.638..7532.910 rows=14285714 loops=7)\r\n Output: txt, id\r\n Sort Key: gtest.id\r\n Sort Method: external merge Disk: 353080kB\r\n Worker 0: actual time=4474.665..7853.595 rows=14327510 loops=1\r\n Sort Method: external merge Disk: 364528kB\r\n Worker 1: actual time=4492.273..7796.141 rows=14613250 loops=1\r\n Sort Method: external merge Disk: 371776kB\r\n Worker 2: actual time=4472.937..7626.318 rows=14339905 loops=1\r\n Sort Method: external merge Disk: 364840kB\r\n Worker 3: actual time=4480.141..7730.419 rows=14406135 loops=1\r\n Sort Method: external merge Disk: 366528kB\r\n Worker 4: actual time=4490.723..7581.102 rows=13971200 loops=1\r\n Sort Method: external merge Disk: 355096kB\r\n Worker 5: actual time=4482.204..7894.434 rows=14464410 loops=1\r\n Sort Method: external merge Disk: 368008kB\r\n -> Parallel Seq Scan on public.gtest (actual time=27.040..1514.516 rows=14285714 loops=7)\r\n Output: txt, id\r\n Worker 0: actual time=23.111..1514.219 rows=14327510 loops=1\r\n Worker 1: actual time=22.696..1528.771 rows=14613250 loops=1\r\n Worker 2: actual time=23.119..1519.190 rows=14339905 loops=1\r\n Worker 3: 
actual time=22.705..1525.183 rows=14406135 loops=1\r\n Worker 4: actual time=23.134..1509.694 rows=13971200 loops=1\r\n Worker 5: actual time=23.652..1516.585 rows=14464410 loops=1\r\n Planning Time: 0.162 ms\r\n Execution Time: 91915.597 ms\r\n\r\nbatch grouping sets:\r\nset max_hashagg_batches = 100;\r\nexplain (costs off,verbose,analyze)\r\nselect sum(id),txt from gtest group by grouping sets(id,txt,());\r\n QUERY PLAN \r\n---------------------------------------------------------------------------------------------------\r\n Gather (actual time=9082.581..23203.803 rows=20000001 loops=1)\r\n Output: (sum(id)), txt, id\r\n Workers Planned: 6\r\n Workers Launched: 6\r\n -> Parallel BatchHashAggregate (actual time=9040.895..15911.190 rows=2857143 loops=7)\r\n Output: sum(id), txt, id\r\n Group Key: gtest.id\r\n Group Key: ()\r\n Group Key: gtest.txt\r\n Worker 0: actual time=9031.714..15499.292 rows=3101124 loops=1\r\n Worker 1: actual time=9038.217..15403.655 rows=3100997 loops=1\r\n Worker 2: actual time=9030.557..15157.267 rows=3103320 loops=1\r\n Worker 3: actual time=9034.391..15537.851 rows=3100505 loops=1\r\n Worker 4: actual time=9037.079..19823.359 rows=1400191 loops=1\r\n Worker 5: actual time=9032.359..15012.338 rows=3097137 loops=1\r\n -> Parallel Seq Scan on public.gtest (actual time=0.052..1506.109 rows=14285714 loops=7)\r\n Output: id, txt\r\n Worker 0: actual time=0.058..1521.705 rows=13759375 loops=1\r\n Worker 1: actual time=0.054..1514.218 rows=13758635 loops=1\r\n Worker 2: actual time=0.062..1531.244 rows=14456270 loops=1\r\n Worker 3: actual time=0.050..1506.569 rows=14451930 loops=1\r\n Worker 4: actual time=0.053..1495.908 rows=15411240 loops=1\r\n Worker 5: actual time=0.055..1503.382 rows=14988885 loops=1\r\n Planning Time: 0.160 ms\r\n Execution Time: 24585.103 ms\r\n\r\nnormal union:\r\nset max_hashagg_batches = 0;\r\nset max_sort_batches = 0;\r\nexplain (verbose,costs false,analyze)\r\nselect * from gtest union select * from gtest;\r\n 
QUERY PLAN \r\n---------------------------------------------------------------------------------------------------------\r\n Unique (actual time=53939.294..94666.573 rows=10000000 loops=1)\r\n Output: gtest.id, gtest.txt\r\n -> Sort (actual time=53939.292..76581.157 rows=200000000 loops=1)\r\n Output: gtest.id, gtest.txt\r\n Sort Key: gtest.id, gtest.txt\r\n Sort Method: external merge Disk: 4871024kB\r\n -> Append (actual time=0.020..25832.476 rows=200000000 loops=1)\r\n -> Seq Scan on public.gtest (actual time=0.019..7074.113 rows=100000000 loops=1)\r\n Output: gtest.id, gtest.txt\r\n -> Seq Scan on public.gtest gtest_1 (actual time=0.006..7067.898 rows=100000000 loops=1)\r\n Output: gtest_1.id, gtest_1.txt\r\n Planning Time: 0.152 ms\r\n Execution Time: 95765.297 ms\r\n\r\nbatch hash aggregate union:\r\nset max_hashagg_batches = 100;\r\nexplain (verbose,costs false,analyze)\r\nselect * from gtest union select * from gtest;\r\n QUERY PLAN \r\n-----------------------------------------------------------------------------------------------------------------\r\n Gather (actual time=11623.986..21021.317 rows=10000000 loops=1)\r\n Output: gtest.id, gtest.txt\r\n Workers Planned: 6\r\n Workers Launched: 6\r\n -> Parallel BatchHashAggregate (actual time=11636.753..16584.067 rows=1428571 loops=7)\r\n Output: gtest.id, gtest.txt\r\n Group Key: gtest.id, gtest.txt\r\n Worker 0: actual time=11631.225..16846.376 rows=1500587 loops=1\r\n Worker 1: actual time=11553.019..16233.006 rows=1397874 loops=1\r\n Worker 2: actual time=11581.523..16807.962 rows=1499049 loops=1\r\n Worker 3: actual time=11593.865..16416.381 rows=1399579 loops=1\r\n Worker 4: actual time=11772.115..16783.605 rows=1400961 loops=1\r\n Worker 5: actual time=11702.415..16571.841 rows=1400943 loops=1\r\n -> Parallel Append (actual time=0.047..4339.450 rows=28571429 loops=7)\r\n Worker 0: actual time=0.062..4396.130 rows=28591565 loops=1\r\n Worker 1: actual time=0.053..4383.983 rows=29536360 loops=1\r\n Worker 
2: actual time=0.045..4305.253 rows=28282900 loops=1\r\n Worker 3: actual time=0.053..4295.805 rows=28409625 loops=1\r\n Worker 4: actual time=0.061..4314.450 rows=28363645 loops=1\r\n Worker 5: actual time=0.015..4311.121 rows=29163585 loops=1\r\n -> Parallel Seq Scan on public.gtest (actual time=0.030..1201.563 rows=14285714 loops=7)\r\n Output: gtest.id, gtest.txt\r\n Worker 0: actual time=0.019..281.903 rows=3277090 loops=1\r\n Worker 1: actual time=0.050..2473.135 rows=29536360 loops=1\r\n Worker 2: actual time=0.021..273.766 rows=3252955 loops=1\r\n Worker 3: actual time=0.018..285.911 rows=3185145 loops=1\r\n Worker 4: actual time=0.058..2387.626 rows=28363645 loops=1\r\n Worker 5: actual time=0.013..2432.342 rows=29163585 loops=1\r\n -> Parallel Seq Scan on public.gtest gtest_1 (actual time=0.048..2140.373 rows=25000000 loops=4)\r\n Output: gtest_1.id, gtest_1.txt\r\n Worker 0: actual time=0.059..2173.690 rows=25314475 loops=1\r\n Worker 2: actual time=0.043..2114.314 rows=25029945 loops=1\r\n Worker 3: actual time=0.050..2142.670 rows=25224480 loops=1\r\n Planning Time: 0.137 ms\r\n Execution Time: 21416.414 ms\r\n\r\n\r\nbucoo@sohu.com", "msg_date": "Mon, 25 Jan 2021 22:14:40 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On 1/25/21 9:14 AM, bucoo@sohu.com wrote:\n> Now, I rewrite batch hashagg and sort, add some comment and combin too \n> patches. 
base on master 2ad78a87f018260d4474eee63187e1cc73c9b976.\n> They are support rescan and change GUC \n> enable_batch_hashagg/enable_batch_sort to \n> max_hashagg_batches/max_sort_batch, default value is \"0\"(mean is disable).\n> The \"max_hashagg_batches\" in grouping sets each chain using this value, \n> maybe we need a better algorithm.\n> Do not set \"max_sort_batch\" too large, because each tuplesort's work \n> memory is \"work_mem/max_sort_batch\".\n> \n> Next step I want use batch sort add parallel merge join(thinks Dilip \n> Kumar) and except/intersect support after this patch commit, welcome to \n> discuss.\n\nThis patch has not gotten any review in the last two CFs and is unlikely \nto be committed for PG14 so I have moved it to the 2021-07 CF. A rebase \nis also required so marked Waiting for Author.\n\nI can see this is a work in progress, but you may want to consider the \nseveral suggestions that an unbuffered approach might be better.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Mon, 29 Mar 2021 09:36:30 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "> This patch has not gotten any review in the last two CFs and is unlikely\r\n> to be committed for PG14 so I have moved it to the 2021-07 CF. A rebase\r\n> is also required so marked Waiting for Author.\r\n> \r\n> I can see this is a work in progress, but you may want to consider the\r\n> several suggestions that an unbuffered approach might be better.\r\n\r\nI have written a plan with similar functions. It is known that the following two situations do not work well.\r\n1. 
Under \"Parallel Append\" plan\r\n  Gather\r\n  -> Parallel Append\r\n      -> Agg\r\n          -> Parallel Redistribute(1)\r\n              -> ...\r\n      -> Agg\r\n          -> Parallel Redistribute(2)\r\n              -> ...\r\n  when parallel worker 1 execute \"Parallel Redistribute(1)\" and worker execute \"Parallel Redistribute(2)\",\r\n  both \"Parallel Redistribute\" plan can not send tuples to other worker(both worker are stuck),\r\n  because outher worker's memory buffer run out soon.\r\n\r\n2. Under \"Nestloop\" plan\r\n  Gather\r\n  -> Nestloop(1)\r\n      -> Nestloop(2)\r\n          -> Parallel Redistribute\r\n              -> ...\r\n          -> IndexScan\r\n      -> Agg\r\n  At some point might be the case: parallel worker 1 executing Agg and \"Parallel Redistribute\" plan's memory buffer is full,\r\n  worker 2 executing \"Parallel Redistribute\" and it waiting worker 1 eat \"Parallel Redistribute\" plan's memory buffer,\r\n  it's stuck.\r\n\r\n\r\n\r\n\r\nbucoo@sohu.com\r\n\r\n", "msg_date": "Tue, 30 Mar 2021 17:32:45 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "On Tue, 30 Mar 2021 at 22:33, bucoo@sohu.com <bucoo@sohu.com> wrote:\n> I have written a plan with similar functions, It is known that the following two situations do not work well.\n\nI read through this thread and also wondered about a Parallel\nPartition type operator. It also seems to me that if it could be done\nthis way then you could just plug in existing nodes to get Sorting and\nAggregation rather than having to modify existing nodes to get them to\ndo what you need.\n\n From what I've seen looking over the thread, a few people suggested\nthis and I didn't see anywhere where you responded to them about the\nidea. Just so you're aware, contributing to PostgreSQL is not a case\nof throwing code at a wall and seeing which parts stick. You need to\ninteract and respond to people reviewing your work. This is especially\ntrue for the people who actually have the authority to merge any of\nyour work with the main code repo.\n\nIt seems to me you might be getting off to a bad start and you might\nnot be aware of this process. So I hope this email will help put you\non track.\n\nSome of the people that you've not properly responded to include:\n\nThomas Munro: PostgreSQL committer. Wrote Parallel Hash Join.\nRobert Hass: PostgreSQL committer. Wrote much of the original parallel\nquery code\nHeikki Linnakangas: PostgreSQL committer. Worked on many parts of the\nplanner and executor. 
Also works for the company that develops\nGreenplum, a massively parallel processing RDBMS, based on Postgres.\n\nYou might find other information in [1].\n\nIf I wanted to do what you want to do, I think those 3 people might be\nsome of the last people I'd pick to ignore questions from! :-)\n\nAlso, I'd say also copying in Tom Lane randomly when he's not shown\nany interest in the patch here is likely not a good way of making\nforward progress. You might find that it might bubble up on his radar\nif you start constructively interacting with the people who have\nquestioned your design. I'd say that should be your next step.\n\nThe probability of anyone merging any of your code without properly\ndiscussing the design with the appropriate people are either very\nclose to zero or actually zero.\n\nI hope this email helps you get on track.\n\nDavid\n\n[1] https://www.postgresql.org/community/contributors/\n\n\n", "msg_date": "Tue, 6 Jul 2021 14:48:47 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "Sorry, this email was marked spam by sohu, so I didn't notice it, and last few months I work hard for merge PostgreSQL 14 to our cluster version(github.com/ADBSQL/AntDB).\r\n\r\nI have an idea how to make \"Parallel Redistribute\" work, even under \"Parallel Append\" and \"Nestloop\". 
but \"grouping sets\" can not work in parallel mode using \"Parallel Redistribute\".\r\nWait days please, path coming soon.\r\n\r\n\r\n \r\nFrom: David Rowley\r\nDate: 2021-07-06 10:48\r\nTo: bucoo@sohu.com\r\nCC: David Steele; pgsql-hackers; tgl; Dilip Kumar; Thomas Munro; Tomas Vondra; hlinnaka; robertmhaas; pgsql\r\nSubject: Re: Re: parallel distinct union and aggregate support patch\r\nOn Tue, 30 Mar 2021 at 22:33, bucoo@sohu.com <bucoo@sohu.com> wrote:\r\n> I have written a plan with similar functions, It is known that the following two situations do not work well.\r\n \r\nI read through this thread and also wondered about a Parallel\r\nPartition type operator. It also seems to me that if it could be done\r\nthis way then you could just plug in existing nodes to get Sorting and\r\nAggregation rather than having to modify existing nodes to get them to\r\ndo what you need.\r\n \r\nFrom what I've seen looking over the thread, a few people suggested\r\nthis and I didn't see anywhere where you responded to them about the\r\nidea. Just so you're aware, contributing to PostgreSQL is not a case\r\nof throwing code at a wall and seeing which parts stick. You need to\r\ninteract and respond to people reviewing your work. This is especially\r\ntrue for the people who actually have the authority to merge any of\r\nyour work with the main code repo.\r\n \r\nIt seems to me you might be getting off to a bad start and you might\r\nnot be aware of this process. So I hope this email will help put you\r\non track.\r\n \r\nSome of the people that you've not properly responded to include:\r\n \r\nThomas Munro: PostgreSQL committer. Wrote Parallel Hash Join.\r\nRobert Hass: PostgreSQL committer. Wrote much of the original parallel\r\nquery code\r\nHeikki Linnakangas: PostgreSQL committer. Worked on many parts of the\r\nplanner and executor. 
Also works for the company that develops\r\nGreenplum, a massively parallel processing RDBMS, based on Postgres.\r\n \r\nYou might find other information in [1].\r\n \r\nIf I wanted to do what you want to do, I think those 3 people might be\r\nsome of the last people I'd pick to ignore questions from! :-)\r\n \r\nAlso, I'd say also copying in Tom Lane randomly when he's not shown\r\nany interest in the patch here is likely not a good way of making\r\nforward progress. You might find that it might bubble up on his radar\r\nif you start constructively interacting with the people who have\r\nquestioned your design. I'd say that should be your next step.\r\n \r\nThe probability of anyone merging any of your code without properly\r\ndiscussing the design with the appropriate people are either very\r\nclose to zero or actually zero.\r\n \r\nI hope this email helps you get on track.\r\n \r\nDavid\r\n \r\n[1] https://www.postgresql.org/community/contributors/\r\n\n\nSorry, this email was marked spam by sohu, so I didn't notice it, and last few months I work hard for merge PostgreSQL 14 to our cluster version(github.com/ADBSQL/AntDB).I have an idea how to make \"Parallel Redistribute\" work, even under \"Parallel Append\" and \"Nestloop\". but \"grouping sets\" can not work in parallel mode using \"Parallel Redistribute\".Wait days please, path coming soon.\n\n From: David RowleyDate: 2021-07-06 10:48To: bucoo@sohu.comCC: David Steele; pgsql-hackers; tgl; Dilip Kumar; Thomas Munro; Tomas Vondra; hlinnaka; robertmhaas; pgsqlSubject: Re: Re: parallel distinct union and aggregate support patchOn Tue, 30 Mar 2021 at 22:33, bucoo@sohu.com <bucoo@sohu.com> wrote:\n> I have written a plan with similar functions, It is known that the following two situations do not work well.\n \nI read through this thread and also wondered about a Parallel\nPartition type operator.  
It also seems to me that if it could be done\nthis way then you could just plug in existing nodes to get Sorting and\nAggregation rather than having to modify existing nodes to get them to\ndo what you need.\n \nFrom what I've seen looking over the thread, a few people suggested\nthis and I didn't see anywhere where you responded to them about the\nidea.  Just so you're aware, contributing to PostgreSQL is not a case\nof throwing code at a wall and seeing which parts stick.  You need to\ninteract and respond to people reviewing your work. This is especially\ntrue for the people who actually have the authority to merge any of\nyour work with the main code repo.\n \nIt seems to me you might be getting off to a bad start and you might\nnot be aware of this process. So I hope this email will help put you\non track.\n \nSome of the people that you've not properly responded to include:\n \nThomas Munro:  PostgreSQL committer. Wrote Parallel Hash Join.\nRobert Hass: PostgreSQL committer. Wrote much of the original parallel\nquery code\nHeikki Linnakangas: PostgreSQL committer. Worked on many parts of the\nplanner and executor. Also works for the company that develops\nGreenplum, a massively parallel processing RDBMS, based on Postgres.\n \nYou might find other information in [1].\n \nIf I wanted to do what you want to do, I think those 3 people might be\nsome of the last people I'd pick to ignore questions from! :-)\n \nAlso, I'd say also copying in Tom Lane randomly when he's not shown\nany interest in the patch here is likely not a good way of making\nforward progress.  You might find that it might bubble up on his radar\nif you start constructively interacting with the people who have\nquestioned your design.  
I'd say that should be your next step.\n \nThe probability of anyone merging any of your code without properly\ndiscussing the design with the appropriate people are either very\nclose to zero or actually zero.\n \nI hope this email helps you get on track.\n \nDavid\n \n[1] https://www.postgresql.org/community/contributors/", "msg_date": "Wed, 21 Jul 2021 15:36:14 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "That are busy days, sorry patchs too later.\r\nHere is an unbuffered plan Redistribute for parallel aggregate/distinct/union, \r\nlike this(when new GUC param redistribute_query_size large then 0):\r\n Gather\r\n -> Finalize HashAggregate\r\n -> Parallel Redistribute\r\n -> Partial HashAggregate\r\n -> Parallel Seq Scan on test\r\n0001-xxx.patch:\r\nFix cost_subqueryscan() get wrong parallel cost, it always same as none parallel path.\r\nIf not apply this patch parallel union always can't be choose.\r\n\r\nHow Redistribute work:\r\nEach have N*MQ + 1*SharedTuplestore, N is parallel workers number(include leader).\r\n1. Alloc shared memory for Redistribute(using plan parallel worker number).\r\n2. Leader worker after all parallel workers launched change \"final_worker_num\" to launched workers number.\r\n3. Each worker try to get a unique part number. part number count is \"final_worker_num\".\r\n4. If get a invalid part number return null tuple.\r\n5. Try read tuple from MQ, if get a tuple then return it, else goto next step.\r\n6-0. Get tuple from outer, if get a tuple compute mod as \"hash value % final_worker_num\", else goto step 7.\r\n6-1. If mod equal our part number then return this tuple.\r\n6-2. Use mod get part's MQ and try write tuple to the MQ, if write success got step 6-0.\r\n6-3. Write tuple to part's SharedTuplestore.\r\n7. 
Read tuple from MQ, if get a tuple then return it, else close all opend MQ and goto next step.\r\n8. Read tuple from SharedTuplestore, if get a tuple then return it, else close it and goto next step.\r\n9. Try get next unique part number, if get an invalid part number then return null tuple, else goto step 7.\r\n\r\nIn step \"6-2\" we can't use shm_mq_send() function, because it maybe write partial data,\r\nif this happend we must write remaining data to this MQ, so we must wait other worker read some date from this MQ.\r\nHowever, we do't want to wait(this may cause all worker to wait for each other).\r\nSo, I write a new function named shm_mq_send_once(). It like shm_mq_send, but return would block immediately when\r\nno space for write data and \"do not write any data\" to MQ.\r\nThis will cause a problem, when MQ ring size small then tuple size, it never write to MQ(write to SharedTuplestore).\r\nSo it's best to make sure that MQ has enough space for tuple(change GUC param \"redistribute_query_size\").\r\n\r\nExecute comparison\r\nprepare data:\r\nbegin;\r\ncreate table gtest(id integer, txt text);\r\ninsert into gtest select t1.id,'txt'||t1.id from (select generate_series(1,10*1000*1000) id) t1,(select generate_series(1,10) id) t2;\r\nanalyze gtest;\r\ncommit;\r\nset max_parallel_workers_per_gather=8;\r\nset work_mem = '256MB';\r\n\r\nhash aggregate\r\nexplain (verbose,analyze,costs off)\r\nselect sum(id),txt from gtest group by txt;\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------\r\n Finalize HashAggregate (actual time=11733.519..19075.309 rows=10000000 loops=1)\r\n Output: sum(id), txt\r\n Group Key: gtest.txt\r\n Batches: 21 Memory Usage: 262201kB Disk Usage: 359808kB\r\n -> Gather (actual time=5540.052..8029.550 rows=10000056 loops=1)\r\n Output: txt, (PARTIAL sum(id))\r\n Workers Planned: 6\r\n Workers Launched: 6\r\n -> Partial HashAggregate (actual time=5534.690..5914.643 
rows=1428579 loops=7)\r\n Output: txt, PARTIAL sum(id)\r\n Group Key: gtest.txt\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 0: actual time=5533.956..5913.461 rows=1443740 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 1: actual time=5533.552..5913.595 rows=1400439 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 2: actual time=5533.553..5913.357 rows=1451759 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 3: actual time=5533.834..5907.952 rows=1379830 loops=1\r\n Batches: 1 Memory Usage: 180241kB\r\n Worker 4: actual time=5533.782..5912.408 rows=1428060 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 5: actual time=5534.271..5910.458 rows=1426987 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n -> Parallel Seq Scan on public.gtest (actual time=0.022..1523.231 rows=14285714 loops=7)\r\n Output: id, txt\r\n Worker 0: actual time=0.032..1487.403 rows=14437315 loops=1\r\n Worker 1: actual time=0.016..1635.675 rows=14004315 loops=1\r\n Worker 2: actual time=0.015..1482.005 rows=14517505 loops=1\r\n Worker 3: actual time=0.017..1664.469 rows=13798225 loops=1\r\n Worker 4: actual time=0.018..1471.233 rows=14280520 loops=1\r\n Worker 5: actual time=0.030..1463.973 rows=14269790 loops=1\r\n Planning Time: 0.075 ms\r\n Execution Time: 19575.976 ms\r\n\r\nparallel hash aggregate\r\nset redistribute_query_size = '256kB';\r\nexplain (verbose,analyze,costs off)\r\nselect sum(id),txt from gtest group by txt;\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------\r\n Gather (actual time=9710.061..11372.560 rows=10000000 loops=1)\r\n Output: (sum(id)), txt\r\n Workers Planned: 6\r\n Workers Launched: 6\r\n -> Finalize HashAggregate (actual time=9703.098..10082.575 rows=1428571 loops=7)\r\n Output: sum(id), txt\r\n Group Key: gtest.txt\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 0: actual time=9701.365..10077.995 rows=1428857 loops=1\r\n Batches: 1 Memory Usage: 
188433kB\r\n Worker 1: actual time=9701.415..10095.876 rows=1430065 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 2: actual time=9701.315..10077.635 rows=1425811 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 3: actual time=9703.047..10088.985 rows=1427745 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 4: actual time=9703.166..10077.937 rows=1431644 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 5: actual time=9701.809..10076.922 rows=1426156 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n -> Parallel Redistribute (actual time=5593.440..9036.392 rows=1428579 loops=7)\r\n Output: txt, (PARTIAL sum(id))\r\n Hash Key: gtest.txt\r\n Parts: 1 Disk Usage: 0kB Disk Rows: 0\r\n Worker 0: actual time=5591.812..9036.394 rows=1428865 loops=1\r\n Parts: 1 Disk Usage: 0kB Disk Rows: 0\r\n Worker 1: actual time=5591.773..9002.576 rows=1430072 loops=1\r\n Parts: 1 Disk Usage: 0kB Disk Rows: 0\r\n Worker 2: actual time=5591.774..9039.341 rows=1425817 loops=1\r\n Parts: 1 Disk Usage: 0kB Disk Rows: 0\r\n Worker 3: actual time=5593.635..9040.148 rows=1427753 loops=1\r\n Parts: 1 Disk Usage: 0kB Disk Rows: 0\r\n Worker 4: actual time=5593.565..9044.528 rows=1431652 loops=1\r\n Parts: 1 Disk Usage: 0kB Disk Rows: 0\r\n Worker 5: actual time=5592.220..9043.953 rows=1426167 loops=1\r\n Parts: 1 Disk Usage: 0kB Disk Rows: 0\r\n -> Partial HashAggregate (actual time=5566.237..5990.671 rows=1428579 loops=7)\r\n Output: txt, PARTIAL sum(id)\r\n Group Key: gtest.txt\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 0: actual time=5565.941..5997.635 rows=1449687 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 1: actual time=5565.930..6073.977 rows=1400013 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 2: actual time=5565.945..5975.454 rows=1446727 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 3: actual time=5567.673..5981.978 rows=1396379 loops=1\r\n Batches: 1 Memory Usage: 180241kB\r\n Worker 4: actual 
time=5567.622..5972.500 rows=1415832 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n Worker 5: actual time=5566.148..5962.503 rows=1415665 loops=1\r\n Batches: 1 Memory Usage: 188433kB\r\n -> Parallel Seq Scan on public.gtest (actual time=0.022..1520.647 rows=14285714 loops=7)\r\n Output: id, txt\r\n Worker 0: actual time=0.021..1476.653 rows=14496785 loops=1\r\n Worker 1: actual time=0.020..1519.023 rows=14000060 loops=1\r\n Worker 2: actual time=0.020..1476.707 rows=14467185 loops=1\r\n Worker 3: actual time=0.019..1654.088 rows=13963715 loops=1\r\n Worker 4: actual time=0.027..1527.803 rows=14158235 loops=1\r\n Worker 5: actual time=0.030..1514.247 rows=14156570 loops=1\r\n Planning Time: 0.080 ms\r\n Execution Time: 11830.773 ms", "msg_date": "Wed, 15 Sep 2021 18:34:01 +0800", "msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>", "msg_from_op": true, "msg_subject": "Re: Re: parallel distinct union and aggregate support patch" }, { "msg_contents": "> On 29 Mar 2021, at 15:36, David Steele <david@pgmasters.net> wrote:\n\n> A rebase is also required so marked Waiting for Author.\n\nMany months on and this patch still needs a rebase to apply, and the thread has\nstalled. I'm marking this Returned with Feedback. Please feel free to open a\nnew entry if you return to this patch.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 8 Nov 2021 23:19:33 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: parallel distinct union and aggregate support patch" } ]
[ { "msg_contents": "Hackers,\n\nPlease find access/xlog_internal.h refactored in the attached patch series. This header is included from many places, including external tools. It is aesthetically displeasing to have something called \"internal\" used from so many places, especially when many of those places do not deal directly with the internal workings of xlog. But it is even worse that multiple files include this header for no reason.\n\nA small portion of access/xlog_internal.h defines the RmgrData struct, and in support of this struct the header includes a number of other headers. Files that include access/xlog_internal.h indirectly include these other headers, which most do not need. (Only 3 out of 41 files involved actually need that portion of the header.) For third-party tools which deal with backup, restore, or replication matters, including xlog_internal.h is necessary to get macros for calculating xlog file names, but doing so also indirectly pulls in other headers, increasing the risk of unwanted symbol collisions. Some colleagues and I ran into this exact problem in a C++ program that uses both xlog_internal.h and the Boost C++ library.\n\n\n0001 - Removes gratuitous inclusion of access/xlog_internal.h from *.c files in core that are currently including it without need. The following files are so modified:\n\n src/backend/access/transam/rmgr.c\n src/backend/postmaster/bgwriter.c\n src/backend/replication/logical/decode.c\n src/backend/replication/logical/logical.c\n src/backend/replication/logical/logicalfuncs.c\n src/backend/replication/logical/worker.c\n src/bin/pg_basebackup/pg_recvlogical.c\n src/bin/pg_rewind/timeline.c\n src/bin/pg_waldump/rmgrdesc.c\n\n0002 - Moves RmgrData from access/xlog_internal.h into a new file access/rmgr_internal.h. I clearly did not waste time thinking of a clever file name. Bikeshedding welcome. Most files which currently include xlog_internal.h do not need the definition of RmgrData. 
As it stands now, inclusion of xlog_internal.h indirectly includes the following headers:\n\n <fcntl.h>\n \"access/rmgr.h\"\n \"access/rmgrlist.h\"\n \"access/transam.h\"\n \"access/xlogdefs.h\"\n \"access/xlogreader.h\"\n \"access/xlogrecord.h\"\n \"catalog/catversion.h\"\n \"common/relpath.h\"\n \"datatype/timestamp.h\"\n \"lib/stringinfo.h\"\n \"pgtime.h\"\n \"port/pg_bswap.h\"\n \"port/pg_crc32c.h\"\n \"storage/backendid.h\"\n \"storage/block.h\"\n \"storage/relfilenode.h\"\n\nAfter refactoring, the inclusion of xlog_internal.h includes indirectly only these headers:\n\n <fcntl.h>\n \"access/xlogdefs.h\"\n \"datatype/timestamp.h\"\n \"pgtime.h\"\n\nand only these files need to be altered to include the new rmgr_internal.h header:\n\n src/backend/access/transam/rmgr.c\n src/backend/access/transam/xlog.c\n src/backend/utils/misc/guc.c\n\nThoughts?\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 19 Oct 2020 18:29:27 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Reduce the dependence on access/xlog_internal.h" }, { "msg_contents": "Hi,\n\nOn 2020-10-19 18:29:27 -0700, Mark Dilger wrote:\n> Please find access/xlog_internal.h refactored in the attached patch\n> series. This header is included from many places, including external\n> tools. It is aesthetically displeasing to have something called\n> \"internal\" used from so many places, especially when many of those\n> places do not deal directly with the internal workings of xlog. But\n> it is even worse that multiple files include this header for no\n> reason.\n\n\n> 0002 - Moves RmgrData from access/xlog_internal.h into a new file access/rmgr_internal.h. I clearly did not waste time thinking of a clever file name. Bikeshedding welcome. Most files which currently include xlog_internal.h do not need the definition of RmgrData. 
As it stands now, inclusion of xlog_internal.h indirectly includes the following headers:\n> \n> After refactoring, the inclusion of xlog_internal.h includes indirectly only these headers:\n> \n> and only these files need to be altered to include the new rmgr_internal.h header:\n> \n> src/backend/access/transam/rmgr.c\n> src/backend/access/transam/xlog.c\n> src/backend/utils/misc/guc.c\n> \n> Thoughts?\n\nIt's not clear why the correct direction here is to make\nxlog_internal.h less \"low level\" by moving things into headers like\nrmgr_internal.h, rather than moving the widely used parts of\nxlog_internal.h elsewhere.\n\n\n\n\n> A small portion of access/xlog_internal.h defines the RmgrData struct,\n> and in support of this struct the header includes a number of other\n> headers. Files that include access/xlog_internal.h indirectly include\n> these other headers, which most do not need. (Only 3 out of 41 files\n> involved actually need that portion of the header.) For third-party\n> tools which deal with backup, restore, or replication matters,\n> including xlog_internal.h is necessary to get macros for calculating\n> xlog file names, but doing so also indirectly pulls in other headers,\n> increasing the risk of unwanted symbol collisions. Some colleagues\n> and I ran into this exact problem in a C++ program that uses both\n> xlog_internal.h and the Boost C++ library.\n\nIt seems better to me to just use forward declarations for StringInfo\nand XLogReaderState (and just generally use them more aggressively). 
We\ndon't need the functions for dealing with those datatypes here.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 19 Oct 2020 19:05:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Reduce the dependence on access/xlog_internal.h" }, { "msg_contents": "> On Oct 19, 2020, at 7:05 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2020-10-19 18:29:27 -0700, Mark Dilger wrote:\n>> Please find access/xlog_internal.h refactored in the attached patch\n>> series. This header is included from many places, including external\n>> tools. It is aesthetically displeasing to have something called\n>> \"internal\" used from so many places, especially when many of those\n>> places do not deal directly with the internal workings of xlog. But\n>> it is even worse that multiple files include this header for no\n>> reason.\n> \n> \n>> 0002 - Moves RmgrData from access/xlog_internal.h into a new file access/rmgr_internal.h. I clearly did not waste time thinking of a clever file name. Bikeshedding welcome. Most files which currently include xlog_internal.h do not need the definition of RmgrData. 
As it stands now, inclusion of xlog_internal.h indirectly includes the following headers:\n>> \n>> After refactoring, the inclusion of xlog_internal.h includes indirectly only these headers:\n>> \n>> and only these files need to be altered to include the new rmgr_internal.h header:\n>> \n>> src/backend/access/transam/rmgr.c\n>> src/backend/access/transam/xlog.c\n>> src/backend/utils/misc/guc.c\n>> \n>> Thoughts?\n> \n> It's not clear why the correct direction here is to make\n> xlog_internal.h less \"low level\" by moving things into headers like\n> rmgr_internal.h, rather than moving the widely used parts of\n> xlog_internal.h elsewhere.\n\nThanks for reviewing!\n\n>> A small portion of access/xlog_internal.h defines the RmgrData struct,\n>> and in support of this struct the header includes a number of other\n>> headers. Files that include access/xlog_internal.h indirectly include\n>> these other headers, which most do not need. (Only 3 out of 41 files\n>> involved actually need that portion of the header.) For third-party\n>> tools which deal with backup, restore, or replication matters,\n>> including xlog_internal.h is necessary to get macros for calculating\n>> xlog file names, but doing so also indirectly pulls in other headers,\n>> increasing the risk of unwanted symbol collisions. Some colleagues\n>> and I ran into this exact problem in a C++ program that uses both\n>> xlog_internal.h and the Boost C++ library.\n> \n> It seems better to me to just use forward declarations for StringInfo\n> and XLogReaderState (and just generally use them more aggressively). We\n> don't need the functions for dealing with those datatypes here.\n\nYeah, those are good points. 
Please find attached version 2 of the patch set which preserves the cleanup of the first version's 0001 patch, and introduces two new patches, 0002 and 0003:\n\n0002 - Moves commonly used stuff from xlog_internal.h into other headers\n\n0003 - Uses forward declarations for StringInfo and XLogReaderState so as to not need to include lib/stringinfo.h nor access/xlogreader.h from access/xlog_internal.h\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 20 Oct 2020 19:25:11 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Reduce the dependence on access/xlog_internal.h" } ]
[ { "msg_contents": "Hi,\n\nI think we need to add some statistics to pg_stat_wal view.\n\nAlthough there are some parameter related WAL,\nthere are few statistics for tuning them.\n\nI think it's better to provide the following statistics.\nPlease let me know your comments.\n\n```\npostgres=# SELECT * from pg_stat_wal;\n-[ RECORD 1 ]-------+------------------------------\nwal_records | 2000224\nwal_fpi | 47\nwal_bytes | 248216337\nwal_buffers_full | 20954\nwal_init_file | 8\nwal_write_backend | 20960\nwal_write_walwriter | 46\nwal_write_time | 51\nwal_sync_backend | 7\nwal_sync_walwriter | 8\nwal_sync_time | 0\nstats_reset | 2020-10-20 11:04:51.307771+09\n```\n\n1. Basic statistics of WAL activity\n\n- wal_records: Total number of WAL records generated\n- wal_fpi: Total number of WAL full page images generated\n- wal_bytes: Total amount of WAL bytes generated\n\nTo understand DB's performance, first, we will check the performance\ntrends for the entire database instance.\nFor example, if the number of wal_fpi becomes higher, users may tune\n\"wal_compression\", \"checkpoint_timeout\" and so on.\n\nAlthough users can check the above statistics via EXPLAIN, auto_explain,\nautovacuum and pg_stat_statements now,\nif users want to see the performance trends for the entire database,\nthey must recalculate the statistics.\n\nI think it is useful to add the sum of the basic statistics.\n\n\n2. WAL segment file creation\n\n- wal_init_file: Total number of WAL segment files created.\n\nTo create a new WAL file may have an impact on the performance of\na write-heavy workload generating lots of WAL. If this number is \nreported high,\nto reduce the number of this initialization, we can tune WAL-related \nparameters\nso that more \"recycled\" WAL files can be held.\n\n\n\n3. 
Number of times WAL is flushed\n\n- wal_write_backend : Total number of times WAL data was written to disk by \nbackends\n- wal_write_walwriter : Total number of times WAL data was written to disk by \nwalwriter\n- wal_sync_backend : Total number of times WAL data was synced to disk by \nbackends\n- wal_sync_walwriter : Total number of times WAL data was synced to disk by \nwalwriter\n\nI think it's useful for tuning \"synchronous_commit\" and \"commit_delay\" \nfor query executions.\nIf the number of WAL flushes is high, users can know \n\"synchronous_commit\" is useful for the workload.\n\nAlso, it's useful for tuning \"wal_writer_delay\" and \n\"wal_writer_flush_after\" for the wal writer.\nIf the number is high, users can change these parameters for performance.\n\n\n4. Wait time when WAL is flushed\n\n- wal_write_time : Total amount of time spent writing \nWAL data to disk by backends \nand walwriter, in milliseconds\n (if track_io_timing is enabled, \notherwise zero)\n- wal_sync_time : Total amount of time spent syncing \nWAL data to disk by backends \nand walwriter, in milliseconds\n (if track_io_timing is enabled, \notherwise zero)\n\nIf the time becomes much higher, users can detect the possibility of \ndisk failure.\nSince users can see how much of the query execution time is spent \nflushing WAL,\nit may lead to query tuning and so on.\n\n\nBest Regards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 20 Oct 2020 11:31:11 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Add statistics to pg_stat_wal view for wal related parameter tuning" }, { "msg_contents": "On Tue, Oct 20, 2020 at 8:01 AM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> I think we need to add some statistics to pg_stat_wal view.\n>\n> Although there are some parameter related WAL,\n> there are few statistics for tuning them.\n>\n> I think it's better to provide 
the following statistics.\n> Please let me know your comments.\n>\n> ```\n> postgres=# SELECT * from pg_stat_wal;\n> -[ RECORD 1 ]-------+------------------------------\n> wal_records | 2000224\n> wal_fpi | 47\n> wal_bytes | 248216337\n> wal_buffers_full | 20954\n> wal_init_file | 8\n> wal_write_backend | 20960\n> wal_write_walwriter | 46\n> wal_write_time | 51\n> wal_sync_backend | 7\n> wal_sync_walwriter | 8\n> wal_sync_time | 0\n> stats_reset | 2020-10-20 11:04:51.307771+09\n> ```\n>\n> 1. Basic statistics of WAL activity\n>\n> - wal_records: Total number of WAL records generated\n> - wal_fpi: Total number of WAL full page images generated\n> - wal_bytes: Total amount of WAL bytes generated\n>\n> To understand DB's performance, first, we will check the performance\n> trends for the entire database instance.\n> For example, if the number of wal_fpi becomes higher, users may tune\n> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>\n> Although users can check the above statistics via EXPLAIN, auto_explain,\n> autovacuum and pg_stat_statements now,\n> if users want to see the performance trends for the entire database,\n> they must recalculate the statistics.\n>\n\nHere, do you mean to say 'entire cluster' instead of 'entire database'\nbecause it seems these stats are getting collected for the entire\ncluster?\n\n> I think it is useful to add the sum of the basic statistics.\n>\n\nThere is an argument that it is better to view these stats at the\nstatement-level so that one can know which statements are causing most\nWAL and then try to rate-limit them if required in the application and\nanyway they can get the aggregate of all the WAL if they want. We have\nadded these stats in PG-13, so do we have any evidence that the\nalready added stats don't provide enough information? 
I understand\nthat you are trying to display the accumulated stats here which if\nrequired users/DBA need to compute with the currently provided stats.\nOTOH, sometimes adding more ways to do some things causes difficulty\nfor users to understand and learn.\n\nI see that we also need to add extra code to capture these stats (some\nof which is in performance-critical path especially in\nXLogInsertRecord) which again makes me a bit uncomfortable. It might\nbe that it is all fine as it is very important to collect these stats\nat cluster-level in spite that the same information can be gathered at\nstatement-level to help customers but I don't see a very strong case\nfor that in your proposal.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 20 Oct 2020 09:16:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-10-20 12:46, Amit Kapila wrote:\n> On Tue, Oct 20, 2020 at 8:01 AM Masahiro Ikeda \n> <ikedamsh@oss.nttdata.com> wrote:\n>> \n>> Hi,\n>> \n>> I think we need to add some statistics to pg_stat_wal view.\n>> \n>> Although there are some parameter related WAL,\n>> there are few statistics for tuning them.\n>> \n>> I think it's better to provide the following statistics.\n>> Please let me know your comments.\n>> \n>> ```\n>> postgres=# SELECT * from pg_stat_wal;\n>> -[ RECORD 1 ]-------+------------------------------\n>> wal_records | 2000224\n>> wal_fpi | 47\n>> wal_bytes | 248216337\n>> wal_buffers_full | 20954\n>> wal_init_file | 8\n>> wal_write_backend | 20960\n>> wal_write_walwriter | 46\n>> wal_write_time | 51\n>> wal_sync_backend | 7\n>> wal_sync_walwriter | 8\n>> wal_sync_time | 0\n>> stats_reset | 2020-10-20 11:04:51.307771+09\n>> ```\n>> \n>> 1. 
Basic statistics of WAL activity\n>> \n>> - wal_records: Total number of WAL records generated\n>> - wal_fpi: Total number of WAL full page images generated\n>> - wal_bytes: Total amount of WAL bytes generated\n>> \n>> To understand DB's performance, first, we will check the performance\n>> trends for the entire database instance.\n>> For example, if the number of wal_fpi becomes higher, users may tune\n>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>> \n>> Although users can check the above statistics via EXPLAIN, \n>> auto_explain,\n>> autovacuum and pg_stat_statements now,\n>> if users want to see the performance trends for the entire database,\n>> they must recalculate the statistics.\n>> \n> \n> Here, do you mean to say 'entire cluster' instead of 'entire database'\n> because it seems these stats are getting collected for the entire\n> cluster?\n\nThanks for your comments.\nYes, I wanted to say 'entire cluster'.\n\n>> I think it is useful to add the sum of the basic statistics.\n>> \n> \n> There is an argument that it is better to view these stats at the\n> statement-level so that one can know which statements are causing most\n> WAL and then try to rate-limit them if required in the application and\n> anyway they can get the aggregate of all the WAL if they want. We have\n> added these stats in PG-13, so do we have any evidence that the\n> already added stats don't provide enough information? I understand\n> that you are trying to display the accumulated stats here which if\n> required users/DBA need to compute with the currently provided stats.\n> OTOH, sometimes adding more ways to do some things causes difficulty\n> for users to understand and learn.\n\nI agreed that the statement-level stat is important and I understood \nthat we can\nknow the aggregated WAL stats of pg_stat_statement view and autovacuum's \nlog.\nBut now, WAL stats generated by autovacuum can be output to logs and it \nis not\neasy to aggregate them. 
Since WAL writes impacts for the entire cluster, \nI thought\nit's natural to provide accumulated value.\n\n> I see that we also need to add extra code to capture these stats (some\n> of which is in performance-critical path especially in\n> XLogInsertRecord) which again makes me a bit uncomfortable. It might\n> be that it is all fine as it is very important to collect these stats\n> at cluster-level in spite that the same information can be gathered at\n> statement-level to help customers but I don't see a very strong case\n> for that in your proposal.\n\nAlso about performance, I thought there are few impacts because it\nincrements stats in memory. If I can implement to reuse pgWalUsage's\nvalue which already collects these stats, there is no impact in \nXLogInsertRecord.\nFor example, how about pg_stat_wal() calculates the accumulated\nvalue of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's value?\n\nRegards\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 20 Oct 2020 16:11:29 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On Tue, Oct 20, 2020 at 12:41 PM Masahiro Ikeda\n<ikedamsh@oss.nttdata.com> wrote:\n>\n> On 2020-10-20 12:46, Amit Kapila wrote:\n> > On Tue, Oct 20, 2020 at 8:01 AM Masahiro Ikeda\n> >> 1. 
Basic statistics of WAL activity\n> >>\n> >> - wal_records: Total number of WAL records generated\n> >> - wal_fpi: Total number of WAL full page images generated\n> >> - wal_bytes: Total amount of WAL bytes generated\n> >>\n> >> To understand DB's performance, first, we will check the performance\n> >> trends for the entire database instance.\n> >> For example, if the number of wal_fpi becomes higher, users may tune\n> >> \"wal_compression\", \"checkpoint_timeout\" and so on.\n> >>\n> >> Although users can check the above statistics via EXPLAIN,\n> >> auto_explain,\n> >> autovacuum and pg_stat_statements now,\n> >> if users want to see the performance trends for the entire database,\n> >> they must recalculate the statistics.\n> >>\n> >\n> > Here, do you mean to say 'entire cluster' instead of 'entire database'\n> > because it seems these stats are getting collected for the entire\n> > cluster?\n>\n> Thanks for your comments.\n> Yes, I wanted to say 'entire cluster'.\n>\n> >> I think it is useful to add the sum of the basic statistics.\n> >>\n> >\n> > There is an argument that it is better to view these stats at the\n> > statement-level so that one can know which statements are causing most\n> > WAL and then try to rate-limit them if required in the application and\n> > anyway they can get the aggregate of all the WAL if they want. We have\n> > added these stats in PG-13, so do we have any evidence that the\n> > already added stats don't provide enough information? 
I understand\n> > that you are trying to display the accumulated stats here which if\n> > required users/DBA need to compute with the currently provided stats.\n> > OTOH, sometimes adding more ways to do some things causes difficulty\n> > for users to understand and learn.\n>\n> I agreed that the statement-level stat is important and I understood\n> that we can\n> know the aggregated WAL stats of pg_stat_statement view and autovacuum's\n> log.\n> But now, WAL stats generated by autovacuum can be output to logs and it\n> is not\n> easy to aggregate them. Since WAL writes impacts for the entire cluster,\n> I thought\n> it's natural to provide accumulated value.\n>\n\nI think it is other way i.e if we would have accumulated stats then it\nmakes sense to provide those at statement-level because one would like\nto know the exact cause of more WAL activity. Say it is due to an\nautovacuum or due to the particular set of statements then it would\neasier for users to do something about it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Oct 2020 10:11:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "I think it's really a more convenient way to collect wal usage information,\r\nwith it we can query when I want. Several points on my side:\r\n\r\n1. It will be nice If you provide a chance to reset the information in WalStats,\r\nso that we can reset it without restart the database.\r\n\r\n2. I think 'wal_write_backend' is mean wal write times happen in\r\nbackends. The describe in document is not so clear, suggest rewrite it.\r\n\r\n3. I do not think it's a correct describe in document for 'wal_buffers_full'.\r\n\r\n4. Quite strange to collect twice in XLogInsertRecord() for xl_tot_len,\r\nm_wal_records, m_wal_fpi.\r\n\r\n5. 
I notice some code in the issue_xlog_fsync() function to collect sync info;\r\na standby may call issue_xlog_fsync() too, and we do not want\r\nto collect this info there. I think this needs some change to avoid running on the standby\r\nside. \r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n", "msg_date": "Wed, 21 Oct 2020 14:54:48 +0800", "msg_from": "\"lchch1990@sina.cn\" <lchch1990@sina.cn>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote in \n> On 2020-10-20 12:46, Amit Kapila wrote:\n> > I see that we also need to add extra code to capture these stats (some\n> > of which is in performance-critical path especially in\n> > XLogInsertRecord) which again makes me a bit uncomfortable. 
It might\n> > be that it is all fine as it is very important to collect these stats\n> > at cluster-level in spite that the same information can be gathered at\n> > statement-level to help customers but I don't see a very strong case\n> > for that in your proposal.\n\nWe should avoid that duplication as possible even if the both number\nare important.\n\n> Also about performance, I thought there are few impacts because it\n> increments stats in memory. If I can implement to reuse pgWalUsage's\n> value which already collects these stats, there is no impact in\n> XLogInsertRecord.\n> For example, how about pg_stat_wal() calculates the accumulated\n> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n> value?\n\nI don't think that works, but it would work that pgstat_send_wal()\ntakes the difference of that values between two successive calls.\n\nWalUsage prevWalUsage;\n...\npgstat_send_wal()\n{\n..\n /* fill in some values using pgWalUsage */\n WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;\n WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n...\n pgstat_send(&WalStats, sizeof(WalStats));\n\n /* remember the current numbers */\n prevWalUsage = pgWalUsage; \n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 21 Oct 2020 18:03:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-10-21 13:41, Amit Kapila wrote:\n> On Tue, Oct 20, 2020 at 12:41 PM Masahiro Ikeda\n> <ikedamsh@oss.nttdata.com> wrote:\n>> \n>> On 2020-10-20 12:46, Amit Kapila wrote:\n>> > On Tue, Oct 20, 2020 at 8:01 AM Masahiro Ikeda\n>> >> 1. 
Basic statistics of WAL activity\n>> >>\n>> >> - wal_records: Total number of WAL records generated\n>> >> - wal_fpi: Total number of WAL full page images generated\n>> >> - wal_bytes: Total amount of WAL bytes generated\n>> >>\n>> >> To understand DB's performance, first, we will check the performance\n>> >> trends for the entire database instance.\n>> >> For example, if the number of wal_fpi becomes higher, users may tune\n>> >> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>> >>\n>> >> Although users can check the above statistics via EXPLAIN,\n>> >> auto_explain,\n>> >> autovacuum and pg_stat_statements now,\n>> >> if users want to see the performance trends for the entire database,\n>> >> they must recalculate the statistics.\n>> >>\n>> >\n>> > Here, do you mean to say 'entire cluster' instead of 'entire database'\n>> > because it seems these stats are getting collected for the entire\n>> > cluster?\n>> \n>> Thanks for your comments.\n>> Yes, I wanted to say 'entire cluster'.\n>> \n>> >> I think it is useful to add the sum of the basic statistics.\n>> >>\n>> >\n>> > There is an argument that it is better to view these stats at the\n>> > statement-level so that one can know which statements are causing most\n>> > WAL and then try to rate-limit them if required in the application and\n>> > anyway they can get the aggregate of all the WAL if they want. We have\n>> > added these stats in PG-13, so do we have any evidence that the\n>> > already added stats don't provide enough information? 
I understand\n>> > that you are trying to display the accumulated stats here which if\n>> > required users/DBA need to compute with the currently provided stats.\n>> > OTOH, sometimes adding more ways to do some things causes difficulty\n>> > for users to understand and learn.\n>> \n>> I agreed that the statement-level stat is important and I understood\n>> that we can\n>> know the aggregated WAL stats of pg_stat_statement view and \n>> autovacuum's\n>> log.\n>> But now, WAL stats generated by autovacuum can be output to logs and \n>> it\n>> is not\n>> easy to aggregate them. Since WAL writes impacts for the entire \n>> cluster,\n>> I thought\n>> it's natural to provide accumulated value.\n>> \n> \n> I think it is other way i.e if we would have accumulated stats then it\n> makes sense to provide those at statement-level because one would like\n> to know the exact cause of more WAL activity. Say it is due to an\n> autovacuum or due to the particular set of statements then it would\n> easier for users to do something about it.\n\nOK, I'll remove them.\nDo you have any comments for other statistics?\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 22 Oct 2020 10:09:21 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-10-21 15:54, lchch1990@sina.cn wrote:\n> I think it's really a more convenient way to collect wal usage\n> information,\n> with it we can query when I want. Several points on my side:\n\nThanks for your comments.\n\n\n> 1. It will be nice If you provide a chance to reset the information in\n> WalStats,\n> so that we can reset it without restart the database.\n\nI agree.\n\nNow, pg_stat_wal supports resetting all information in WalStats\nusing the pg_stat_reset_shared('wal') function.\n\nIsn't it enough?\n\n\n> 2. I think 'wal_write_backend' is mean wal write times happen in\n> backends. 
The describe in document is not so clear, suggest rewrite\n> it.\n\nOK, I'll rewrite it to "Total number of times backends write WAL data to \nthe disk".\n\n\n> 3. I do not think it's a correct describe in document for\n> 'wal_buffers_full'.\n\nWhere should I rewrite the description? If my understanding is not \ncorrect, please let me know.\n\n\n> 4. Quite strange to collect twice in XLogInsertRecord() for\n> xl_tot_len,\n> m_wal_records, m_wal_fpi.\n\nYes, Amit-san pointed that out to me too.\nI'll remove them from pg_stat_wal since pg_stat_statements and the vacuum \nlog\nalready show the related statistics, and there is a comment that it's enough.\n\nAnyway, if you need the accumulated statistics above in pg_stat_wal,\nplease let me know.\n\n\n> 5. I notice some code in issue_xlog_fsync() function to collect sync\n> info,\n> a standby may call the issue_xlog_fsync() too, which do not want to\n> to collect this info. I think this need some change avoid run by\n> standby\n> side.\n\nThanks, I will fix it.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 22 Oct 2020 10:34:28 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n> <ikedamsh@oss.nttdata.com> wrote in\n>> On 2020-10-20 12:46, Amit Kapila wrote:\n>> > I see that we also need to add extra code to capture these stats (some\n>> > of which is in performance-critical path especially in\n>> > XLogInsertRecord) which again makes me a bit uncomfortable. 
It might\n>> > be that it is all fine as it is very important to collect these stats\n>> > at cluster-level in spite that the same information can be gathered at\n>> > statement-level to help customers but I don't see a very strong case\n>> > for that in your proposal.\n> \n> We should avoid that duplication as possible even if the both number\n> are important.\n> \n>> Also about performance, I thought there are few impacts because it\n>> increments stats in memory. If I can implement to reuse pgWalUsage's\n>> value which already collects these stats, there is no impact in\n>> XLogInsertRecord.\n>> For example, how about pg_stat_wal() calculates the accumulated\n>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>> value?\n> \n> I don't think that works, but it would work that pgstat_send_wal()\n> takes the difference of that values between two successive calls.\n> \n> WalUsage prevWalUsage;\n> ...\n> pgstat_send_wal()\n> {\n> ..\n> /* fill in some values using pgWalUsage */\n> WalStats.m_wal_bytes = pgWalUsage.wal_bytes - \n> prevWalUsage.wal_bytes;\n> WalStats.m_wal_records = pgWalUsage.wal_records - \n> prevWalUsage.wal_records;\n> WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi - \n> prevWalUsage.wal_fpi;\n> ...\n> pgstat_send(&WalStats, sizeof(WalStats));\n> \n> /* remember the current numbers */\n> prevWalUsage = pgWalUsage;\n\nThanks for your advice. 
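As a self-contained sanity check of this delta-reporting pattern, here is a minimal sketch (the struct and helper below are simplified stand-ins for the backend's WalUsage and WalUsageAccumDiff(), so the names are illustrative, not the actual implementation):

```c
#include <assert.h>
#include <string.h>

/*
 * Simplified stand-ins for PostgreSQL's WalUsage counters and
 * WalUsageAccumDiff(); the real definitions live in the backend.
 */
typedef struct WalUsage
{
	long		wal_records;	/* # of WAL records produced */
	long		wal_fpi;		/* # of WAL full page images produced */
	long		wal_bytes;		/* size of WAL produced in bytes */
} WalUsage;

static WalUsage pgWalUsage;		/* running totals, bumped at WAL insert */
static WalUsage prevWalUsage;	/* snapshot taken at the previous report */

/* Add (added - sub) into dst, field by field. */
static void
wal_usage_accum_diff(WalUsage *dst, const WalUsage *added, const WalUsage *sub)
{
	dst->wal_records += added->wal_records - sub->wal_records;
	dst->wal_fpi += added->wal_fpi - sub->wal_fpi;
	dst->wal_bytes += added->wal_bytes - sub->wal_bytes;
}

/* Model of pgstat_send_wal(): report only what happened since last call. */
static WalUsage
report_wal_delta(void)
{
	WalUsage	delta;

	memset(&delta, 0, sizeof(delta));
	wal_usage_accum_diff(&delta, &pgWalUsage, &prevWalUsage);

	/* Remember the current totals for the next call. */
	prevWalUsage = pgWalUsage;
	return delta;
}
```

The first call reports the full totals; each later call reports only what was added since the previous one, so nothing extra needs to be counted in XLogInsertRecord().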
This code can avoid the performance impact of \ncritical code.\n\nBy the way, what do you think to add these statistics to the pg_stat_wal \nview?\nI thought to remove the above statistics because there is advice that \nPG13's features,\nfor example, pg_stat_statement view, vacuum log, and so on can cover \nuse-cases.\n\nregards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 22 Oct 2020 10:44:53 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "At Thu, 22 Oct 2020 10:44:53 +0900, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote in \n> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n> > At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n> > <ikedamsh@oss.nttdata.com> wrote in\n> >> On 2020-10-20 12:46, Amit Kapila wrote:\n> >> > I see that we also need to add extra code to capture these stats (some\n> >> > of which is in performance-critical path especially in\n> >> > XLogInsertRecord) which again makes me a bit uncomfortable. It might\n> >> > be that it is all fine as it is very important to collect these stats\n> >> > at cluster-level in spite that the same information can be gathered at\n> >> > statement-level to help customers but I don't see a very strong case\n> >> > for that in your proposal.\n> > We should avoid that duplication as possible even if the both number\n> > are important.\n> > \n> >> Also about performance, I thought there are few impacts because it\n> >> increments stats in memory. 
If I can implement to reuse pgWalUsage's\n> >> value which already collects these stats, there is no impact in\n> >> XLogInsertRecord.\n> >> For example, how about pg_stat_wal() calculates the accumulated\n> >> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n> >> value?\n> > I don't think that works, but it would work that pgstat_send_wal()\n> > takes the difference of that values between two successive calls.\n> > WalUsage prevWalUsage;\n> > ...\n> > pgstat_send_wal()\n> > {\n> > ..\n> > /* fill in some values using pgWalUsage */\n> > WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;\n> > WalStats.m_wal_records = pgWalUsage.wal_records -\n> > prevWalUsage.wal_records;\n> > WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n> > ...\n> > pgstat_send(&WalStats, sizeof(WalStats));\n> > /* remember the current numbers */\n> > prevWalUsage = pgWalUsage;\n> \n> Thanks for your advice. This code can avoid the performance impact of\n> critical code.\n> \n> By the way, what do you think to add these statistics to the\n> pg_stat_wal view?\n> I thought to remove the above statistics because there is advice that\n> PG13's features,\n> for example, pg_stat_statement view, vacuum log, and so on can cover\n> use-cases.\n\n\nAt Thu, 22 Oct 2020 10:09:21 +0900, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote in \n> >> I agreed that the statement-level stat is important and I understood\n> >> that we can\n> >> know the aggregated WAL stats of pg_stat_statement view and\n> >> autovacuum's\n> >> log.\n> >> But now, WAL stats generated by autovacuum can be output to logs and\n> >> it\n> >> is not\n> >> easy to aggregate them. 
Since WAL writes impacts for the entire\n> >> cluster,\n> >> I thought\n> >> it's natural to provide accumulated value.\n> >> \n> > I think it is other way i.e if we would have accumulated stats then it\n> > makes sense to provide those at statement-level because one would like\n> > to know the exact cause of more WAL activity. Say it is due to an\n> > autovacuum or due to the particular set of statements then it would\n> > easier for users to do something about it.\n> \n> OK, I'll remove them.\n> Do you have any comments for other statistics?\n\nThat discussion comes from the fact that the code adds duplicate code\nin a hot path. If that extra cost doesn't exist, we are free to\nadd the accumulated values in pg_stat_wal. I think they are useful for\nstats-collecting tools as far as we can do that without such an extra\ncost.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Oct 2020 13:54:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "Hi,\n\nThanks for your comments and advice. I updated the patch.\n\nOn 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n> <ikedamsh@oss.nttdata.com> wrote in\n>> On 2020-10-20 12:46, Amit Kapila wrote:\n>> > I see that we also need to add extra code to capture these stats (some\n>> > of which is in performance-critical path especially in\n>> > XLogInsertRecord) which again makes me a bit uncomfortable. 
It might\n>> > be that it is all fine as it is very important to collect these stats\n>> > at cluster-level in spite that the same information can be gathered at\n>> > statement-level to help customers but I don't see a very strong case\n>> > for that in your proposal.\n> \n> We should avoid that duplication as possible even if the both number\n> are important.\n> \n>> Also about performance, I thought there are few impacts because it\n>> increments stats in memory. If I can implement to reuse pgWalUsage's\n>> value which already collects these stats, there is no impact in\n>> XLogInsertRecord.\n>> For example, how about pg_stat_wal() calculates the accumulated\n>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>> value?\n> \n> I don't think that works, but it would work that pgstat_send_wal()\n> takes the difference of that values between two successive calls.\n> \n> WalUsage prevWalUsage;\n> ...\n> pgstat_send_wal()\n> {\n> ..\n> /* fill in some values using pgWalUsage */\n> WalStats.m_wal_bytes = pgWalUsage.wal_bytes - \n> prevWalUsage.wal_bytes;\n> WalStats.m_wal_records = pgWalUsage.wal_records - \n> prevWalUsage.wal_records;\n> WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi - \n> prevWalUsage.wal_fpi;\n> ...\n> pgstat_send(&WalStats, sizeof(WalStats));\n> \n> /* remember the current numbers */\n> prevWalUsage = pgWalUsage;\n\nThanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\nwhich is already defined and eliminates the extra overhead.\n\n> 5. I notice some code in issue_xlog_fsync() function to collect sync \n> info,\n> a standby may call the issue_xlog_fsync() too, which do not want to\n> to collect this info. 
I think this need some change avoid run by\n> standby side.\n\nIIUC, issue_xlog_fsync is called by the wal receiver on the standby side.\nBut it doesn't send collected statistics to the stats collector.\nSo, I think it's not necessary to change the code to avoid collecting \nthe stats on the standby side.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Thu, 29 Oct 2020 17:03:56 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "\n\nOn 2020/10/29 17:03, Masahiro Ikeda wrote:\n> Hi,\n> \n> Thanks for your comments and advice. I updated the patch.\n> \n> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>> <ikedamsh@oss.nttdata.com> wrote in\n>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>> > I see that we also need to add extra code to capture these stats (some\n>>> > of which is in performance-critical path especially in\n>>> > XLogInsertRecord) which again makes me a bit uncomfortable. It might\n>>> > be that it is all fine as it is very important to collect these stats\n>>> > at cluster-level in spite that the same information can be gathered at\n>>> > statement-level to help customers but I don't see a very strong case\n>>> > for that in your proposal.\n>>\n>> We should avoid that duplication as possible even if the both number\n>> are important.\n>>\n>>> Also about performance, I thought there are few impacts because it\n>>> increments stats in memory. 
If I can implement to reuse pgWalUsage's\n>>> value which already collects these stats, there is no impact in\n>>> XLogInsertRecord.\n>>> For example, how about pg_stat_wal() calculates the accumulated\n>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>> value?\n>>\n>> I don't think that works, but it would work that pgstat_send_wal()\n>> takes the difference of that values between two successive calls.\n>>\n>> WalUsage prevWalUsage;\n>> ...\n>> pgstat_send_wal()\n>> {\n>> ..\n>>    /* fill in some values using pgWalUsage */\n>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - prevWalUsage.wal_bytes;\n>>    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - prevWalUsage.wal_fpi;\n>> ...\n>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>\n>>    /* remember the current numbers */\n>>    prevWalUsage = pgWalUsage;\n> \n> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n> which is already defined and eliminates the extra overhead.\n\n+	/* fill in some values using pgWalUsage */\n+	WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;\n+	WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n+	WalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n\nIt's better to use WalUsageAccumDiff() here?\n\nprevWalUsage needs to be initialized with pgWalUsage?\n\n+				if (AmWalWriterProcess()){\n+					WalStats.m_wal_write_walwriter++;\n+				}\n+				else\n+				{\n+					WalStats.m_wal_write_backend++;\n+				}\n\nI think that it's better not to separate m_wal_write_xxx into two for\nwalwriter and other processes. Instead, we can use one m_wal_write_xxx\ncounter and make pgstat_send_wal() send also the process type to\nthe stats collector. Then the stats collector can accumulate the counters\nper process type if necessary. 
If we adopt this approach, we can easily\nextend pg_stat_wal so that any fields can be reported per process type.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 30 Oct 2020 11:50:59 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "\n\nOn 2020/10/20 11:31, Masahiro Ikeda wrote:\n> Hi,\n> \n> I think we need to add some statistics to pg_stat_wal view.\n> \n> Although there are some parameter related WAL,\n> there are few statistics for tuning them.\n> \n> I think it's better to provide the following statistics.\n> Please let me know your comments.\n> \n> ```\n> postgres=# SELECT * from pg_stat_wal;\n> -[ RECORD 1 ]-------+------------------------------\n> wal_records         | 2000224\n> wal_fpi             | 47\n> wal_bytes           | 248216337\n> wal_buffers_full    | 20954\n> wal_init_file       | 8\n> wal_write_backend   | 20960\n> wal_write_walwriter | 46\n> wal_write_time      | 51\n> wal_sync_backend    | 7\n> wal_sync_walwriter  | 8\n> wal_sync_time       | 0\n> stats_reset         | 2020-10-20 11:04:51.307771+09\n> ```\n> \n> 1. 
Basic statistics of WAL activity\n> \n> - wal_records: Total number of WAL records generated\n> - wal_fpi: Total number of WAL full page images generated\n> - wal_bytes: Total amount of WAL bytes generated\n> \n> To understand DB's performance, first, we will check the performance\n> trends for the entire database instance.\n> For example, if the number of wal_fpi becomes higher, users may tune\n> \"wal_compression\", \"checkpoint_timeout\" and so on.\n> \n> Although users can check the above statistics via EXPLAIN, auto_explain,\n> autovacuum and pg_stat_statements now,\n> if users want to see the performance trends  for the entire database,\n> they must recalculate the statistics.\n> \n> I think it is useful to add the sum of the basic statistics.\n> \n> \n> 2.  WAL segment file creation\n> \n> - wal_init_file: Total number of WAL segment files created.\n> \n> To create a new WAL file may have an impact on the performance of\n> a write-heavy workload generating lots of WAL. If this number is reported high,\n> to reduce the number of this initialization, we can tune WAL-related parameters\n> so that more \"recycled\" WAL files can be held.\n> \n> \n> \n> 3. Number of when WAL is flushed\n> \n> - wal_write_backend : Total number of WAL data written to the disk by backends\n> - wal_write_walwriter : Total number of WAL data written to the disk by walwriter\n> - wal_sync_backend : Total number of WAL data synced to the disk by backends\n> - wal_sync_walwriter : Total number of WAL data synced to the disk by walwrite\n> \n> I think it's useful for tuning \"synchronous_commit\" and \"commit_delay\" for query executions.\n> If the number of WAL is flushed is high, users can know \"synchronous_commit\" is useful for the workload.\n\nI just wonder how useful these counters are. Even without these counters,\nwe already know synchronous_commit=off is likely to cause the better\nperformance (but has the risk of data loss). 
So ISTM that these counters are\nnot so useful when tuning synchronous_commit.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 30 Oct 2020 12:00:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-10-30 11:50, Fujii Masao wrote:\n> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>> Hi,\n>> \n>> Thanks for your comments and advice. I updated the patch.\n>> \n>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>> > I see that we also need to add extra code to capture these stats (some\n>>>> > of which is in performance-critical path especially in\n>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. It might\n>>>> > be that it is all fine as it is very important to collect these stats\n>>>> > at cluster-level in spite that the same information can be gathered at\n>>>> > statement-level to help customers but I don't see a very strong case\n>>>> > for that in your proposal.\n>>> \n>>> We should avoid that duplication as possible even if the both number\n>>> are important.\n>>> \n>>>> Also about performance, I thought there are few impacts because it\n>>>> increments stats in memory. 
If I can implement to reuse pgWalUsage's\n>>>> value which already collects these stats, there is no impact in\n>>>> XLogInsertRecord.\n>>>> For example, how about pg_stat_wal() calculates the accumulated\n>>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>>> value?\n>>> \n>>> I don't think that works, but it would work that pgstat_send_wal()\n>>> takes the difference of that values between two successive calls.\n>>> \n>>> WalUsage prevWalUsage;\n>>> ...\n>>> pgstat_send_wal()\n>>> {\n>>> ..\n>>>    /* fill in some values using pgWalUsage */\n>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - \n>>> prevWalUsage.wal_bytes;\n>>>    WalStats.m_wal_records = pgWalUsage.wal_records - \n>>> prevWalUsage.wal_records;\n>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - \n>>> prevWalUsage.wal_fpi;\n>>> ...\n>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>> \n>>>    /* remember the current numbers */\n>>>    prevWalUsage = pgWalUsage;\n>> \n>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>> which is already defined and eliminates the extra overhead.\n> \n> +\t/* fill in some values using pgWalUsage */\n> +\tWalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;\n> +\tWalStats.m_wal_records = pgWalUsage.wal_records - \n> prevWalUsage.wal_records;\n> +\tWalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n> \n> It's better to use WalUsageAccumDiff() here?\n\nYes, thanks. I fixed it.\n\n> prevWalUsage needs to be initialized with pgWalUsage?\n> \n> +\t\t\t\tif (AmWalWriterProcess()){\n> +\t\t\t\t\tWalStats.m_wal_write_walwriter++;\n> +\t\t\t\t}\n> +\t\t\t\telse\n> +\t\t\t\t{\n> +\t\t\t\t\tWalStats.m_wal_write_backend++;\n> +\t\t\t\t}\n> \n> I think that it's better not to separate m_wal_write_xxx into two for\n> walwriter and other processes. Instead, we can use one m_wal_write_xxx\n> counter and make pgstat_send_wal() send also the process type to\n> the stats collector. 
Then the stats collector can accumulate the \n> counters\n> per process type if necessary. If we adopt this approach, we can easily\n> extend pg_stat_wal so that any fields can be reported per process type.\n\nI'll remove the above source code because these counters are not useful.\n\n\nOn 2020-10-30 12:00, Fujii Masao wrote:\n> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>> Hi,\n>> \n>> I think we need to add some statistics to pg_stat_wal view.\n>> \n>> Although there are some parameter related WAL,\n>> there are few statistics for tuning them.\n>> \n>> I think it's better to provide the following statistics.\n>> Please let me know your comments.\n>> \n>> ```\n>> postgres=# SELECT * from pg_stat_wal;\n>> -[ RECORD 1 ]-------+------------------------------\n>> wal_records         | 2000224\n>> wal_fpi             | 47\n>> wal_bytes           | 248216337\n>> wal_buffers_full    | 20954\n>> wal_init_file       | 8\n>> wal_write_backend   | 20960\n>> wal_write_walwriter | 46\n>> wal_write_time      | 51\n>> wal_sync_backend    | 7\n>> wal_sync_walwriter  | 8\n>> wal_sync_time       | 0\n>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>> ```\n>> \n>> 1. Basic statistics of WAL activity\n>> \n>> - wal_records: Total number of WAL records generated\n>> - wal_fpi: Total number of WAL full page images generated\n>> - wal_bytes: Total amount of WAL bytes generated\n>> \n>> To understand DB's performance, first, we will check the performance\n>> trends for the entire database instance.\n>> For example, if the number of wal_fpi becomes higher, users may tune\n>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>> \n>> Although users can check the above statistics via EXPLAIN, \n>> auto_explain,\n>> autovacuum and pg_stat_statements now,\n>> if users want to see the performance trends  for the entire database,\n>> they must recalculate the statistics.\n>> \n>> I think it is useful to add the sum of the basic statistics.\n>> \n>> \n>> 2.  
WAL segment file creation\n>> \n>> - wal_init_file: Total number of WAL segment files created.\n>> \n>> To create a new WAL file may have an impact on the performance of\n>> a write-heavy workload generating lots of WAL. If this number is \n>> reported high,\n>> to reduce the number of this initialization, we can tune WAL-related \n>> parameters\n>> so that more \"recycled\" WAL files can be held.\n>> \n>> \n>> \n>> 3. Number of when WAL is flushed\n>> \n>> - wal_write_backend : Total number of WAL data written to the disk by \n>> backends\n>> - wal_write_walwriter : Total number of WAL data written to the disk \n>> by walwriter\n>> - wal_sync_backend : Total number of WAL data synced to the disk by \n>> backends\n>> - wal_sync_walwriter : Total number of WAL data synced to the disk by \n>> walwrite\n>> \n>> I think it's useful for tuning \"synchronous_commit\" and \"commit_delay\" \n>> for query executions.\n>> If the number of WAL is flushed is high, users can know \n>> \"synchronous_commit\" is useful for the workload.\n> \n> I just wonder how useful these counters are. Even without these \n> counters,\n> we already know synchronous_commit=off is likely to cause the better\n> performance (but has the risk of data loss). 
So ISTM that these \n> counters are\n> not so useful when tuning synchronous_commit.\n\nThanks, my understanding was wrong.\nI agree with your comments.\n\nI merged the statistics of *_backend and *_walwriter.\nI think the sum of them is useful to calculate the average per \nwrite/sync time.\nFor example, the per-write time equals wal_write_time / wal_write.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Fri, 06 Nov 2020 10:25:07 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "\n\nOn 2020/11/06 10:25, Masahiro Ikeda wrote:\n> On 2020-10-30 11:50, Fujii Masao wrote:\n>> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>>> Hi,\n>>>\n>>> Thanks for your comments and advice. I updated the patch.\n>>>\n>>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>>> > I see that we also need to add extra code to capture these stats (some\n>>>>> > of which is in performance-critical path especially in\n>>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. It might\n>>>>> > be that it is all fine as it is very important to collect these stats\n>>>>> > at cluster-level in spite that the same information can be gathered at\n>>>>> > statement-level to help customers but I don't see a very strong case\n>>>>> > for that in your proposal.\n>>>>\n>>>> We should avoid that duplication as possible even if the both number\n>>>> are important.\n>>>>\n>>>>> Also about performance, I thought there are few impacts because it\n>>>>> increments stats in memory. 
If I can implement to reuse pgWalUsage's\n>>>>> value which already collects these stats, there is no impact in\n>>>>> XLogInsertRecord.\n>>>>> For example, how about pg_stat_wal() calculates the accumulated\n>>>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>>>> value?\n>>>>\n>>>> I don't think that works, but it would work that pgstat_send_wal()\n>>>> takes the difference of that values between two successive calls.\n>>>>\n>>>> WalUsage prevWalUsage;\n>>>> ...\n>>>> pgstat_send_wal()\n>>>> {\n>>>> ..\n>>>>    /* fill in some values using pgWalUsage */\n>>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - prevWalUsage.wal_bytes;\n>>>>    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - prevWalUsage.wal_fpi;\n>>>> ...\n>>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>>>\n>>>>    /* remember the current numbers */\n>>>>    prevWalUsage = pgWalUsage;\n>>>\n>>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>>> which is already defined and eliminates the extra overhead.\n>>\n>> +    /* fill in some values using pgWalUsage */\n>> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;\n>> +    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n>>\n>> It's better to use WalUsageAccumDiff() here?\n> \n> Yes, thanks. I fixed it.\n> \n>> prevWalUsage needs to be initialized with pgWalUsage?\n>>\n>> +                if (AmWalWriterProcess()){\n>> +                    WalStats.m_wal_write_walwriter++;\n>> +                }\n>> +                else\n>> +                {\n>> +                    WalStats.m_wal_write_backend++;\n>> +                }\n>>\n>> I think that it's better not to separate m_wal_write_xxx into two for\n>> walwriter and other processes. 
Instead, we can use one m_wal_write_xxx\n>> counter and make pgstat_send_wal() send also the process type to\n>> the stats collector. Then the stats collector can accumulate the counters\n>> per process type if necessary. If we adopt this approach, we can easily\n>> extend pg_stat_wal so that any fields can be reported per process type.\n> \n> I'll remove the above source code because these counters are not useful.\n> \n> \n> On 2020-10-30 12:00, Fujii Masao wrote:\n>> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>>> Hi,\n>>>\n>>> I think we need to add some statistics to pg_stat_wal view.\n>>>\n>>> Although there are some parameter related WAL,\n>>> there are few statistics for tuning them.\n>>>\n>>> I think it's better to provide the following statistics.\n>>> Please let me know your comments.\n>>>\n>>> ```\n>>> postgres=# SELECT * from pg_stat_wal;\n>>> -[ RECORD 1 ]-------+------------------------------\n>>> wal_records         | 2000224\n>>> wal_fpi             | 47\n>>> wal_bytes           | 248216337\n>>> wal_buffers_full    | 20954\n>>> wal_init_file       | 8\n>>> wal_write_backend   | 20960\n>>> wal_write_walwriter | 46\n>>> wal_write_time      | 51\n>>> wal_sync_backend    | 7\n>>> wal_sync_walwriter  | 8\n>>> wal_sync_time       | 0\n>>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>>> ```\n>>>\n>>> 1. 
Basic statistics of WAL activity\n>>>\n>>> - wal_records: Total number of WAL records generated\n>>> - wal_fpi: Total number of WAL full page images generated\n>>> - wal_bytes: Total amount of WAL bytes generated\n>>>\n>>> To understand DB's performance, first, we will check the performance\n>>> trends for the entire database instance.\n>>> For example, if the number of wal_fpi becomes higher, users may tune\n>>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>>\n>>> Although users can check the above statistics via EXPLAIN, auto_explain,\n>>> autovacuum and pg_stat_statements now,\n>>> if users want to see the performance trends  for the entire database,\n>>> they must recalculate the statistics.\n>>>\n>>> I think it is useful to add the sum of the basic statistics.\n>>>\n>>>\n>>> 2.  WAL segment file creation\n>>>\n>>> - wal_init_file: Total number of WAL segment files created.\n>>>\n>>> To create a new WAL file may have an impact on the performance of\n>>> a write-heavy workload generating lots of WAL. If this number is reported high,\n>>> to reduce the number of this initialization, we can tune WAL-related parameters\n>>> so that more \"recycled\" WAL files can be held.\n>>>\n>>>\n>>>\n>>> 3. Number of when WAL is flushed\n>>>\n>>> - wal_write_backend : Total number of WAL data written to the disk by backends\n>>> - wal_write_walwriter : Total number of WAL data written to the disk by walwriter\n>>> - wal_sync_backend : Total number of WAL data synced to the disk by backends\n>>> - wal_sync_walwriter : Total number of WAL data synced to the disk by walwrite\n>>>\n>>> I think it's useful for tuning \"synchronous_commit\" and \"commit_delay\" for query executions.\n>>> If the number of WAL is flushed is high, users can know \"synchronous_commit\" is useful for the workload.\n>>\n>> I just wonder how useful these counters are. 
Even without these counters,\n>> we already know synchronous_commit=off is likely to cause the better\n>> performance (but has the risk of data loss). So ISTM that these counters are\n>> not so useful when tuning synchronous_commit.\n> \n> Thanks, my understanding was wrong.\n> I agreed that your comments.\n> \n> I merged the statistics of *_backend and *_walwriter.\n> I think the sum of them is useful to calculate the average per write/sync time.\n> For example, per write time is equals wal_write_time / wal_write.\n\nUnderstood.\n\nThanks for updating the patch!\n\npatching file src/include/catalog/pg_proc.dat\nHunk #1 FAILED at 5491.\n1 out of 1 hunk FAILED -- saving rejects to file src/include/catalog/pg_proc.dat.rej\n\nI got this failure when applying the patch. Could you update the patch?\n\n\n- Number of times WAL data was written to the disk because WAL buffers got full\n+ Total number of times WAL data written to the disk because WAL buffers got full\n\nIsn't \"was\" necessary between \"data\" and \"written\"?\n\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>wal_bytes</structfield> <type>bigint</type>\n\nShouldn't the type of wal_bytes be numeric because the total number of\nWAL bytes can exceed the range of bigint? I think that the type of\npg_stat_statements.wal_bytes is also numeric for the same reason.\n\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>wal_write_time</structfield> <type>bigint</type>\n\nShouldn't the type of wal_xxx_time be double precision,\nlike pg_stat_database.blk_write_time?\n\n\nEven when fsync is set to off or wal_sync_method is set to open_sync,\nwal_sync is incremented. 
Isn't this behavior confusing?\n\n\n+ Total amount of time that has been spent in the portion of\n+ WAL data was written to disk by backend and walwriter, in milliseconds\n+ (if <xref linkend=\"guc-track-io-timing\"/> is enabled, otherwise zero)\n\nWith the patch, track_io_timing controls both IO for data files and\nWAL files. But we may want to track only either of them. So it's better\nto extend track_io_timing so that we can specify the tracking target\nin the parameter? For example, we can make track_io_timing accept\ndata, wal and all. Or we should introduce new GUC for WAL, e.g.,\ntrack_wal_io_timing? Thought?\n\nI'm afraid that \"by backend and walwriter\" part can make us think\nincorrectly that WAL writes by other processes like autovacuum\nare not tracked.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 12 Nov 2020 14:58:59 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "\n\nOn 2020/11/12 14:58, Fujii Masao wrote:\n> \n> \n> On 2020/11/06 10:25, Masahiro Ikeda wrote:\n>> On 2020-10-30 11:50, Fujii Masao wrote:\n>>> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>>>> Hi,\n>>>>\n>>>> Thanks for your comments and advice. I updated the patch.\n>>>>\n>>>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>>>> > I see that we also need to add extra code to capture these stats (some\n>>>>>> > of which is in performance-critical path especially in\n>>>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. 
It might\n>>>>>> > be that it is all fine as it is very important to collect these stats\n>>>>>> > at cluster-level in spite that the same information can be gathered at\n>>>>>> > statement-level to help customers but I don't see a very strong case\n>>>>>> > for that in your proposal.\n>>>>>\n>>>>> We should avoid that duplication as possible even if the both number\n>>>>> are important.\n>>>>>\n>>>>>> Also about performance, I thought there are few impacts because it\n>>>>>> increments stats in memory. If I can implement to reuse pgWalUsage's\n>>>>>> value which already collects these stats, there is no impact in\n>>>>>> XLogInsertRecord.\n>>>>>> For example, how about pg_stat_wal() calculates the accumulated\n>>>>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>>>>> value?\n>>>>>\n>>>>> I don't think that works, but it would work that pgstat_send_wal()\n>>>>> takes the difference of that values between two successive calls.\n>>>>>\n>>>>> WalUsage prevWalUsage;\n>>>>> ...\n>>>>> pgstat_send_wal()\n>>>>> {\n>>>>> ..\n>>>>>    /* fill in some values using pgWalUsage */\n>>>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - prevWalUsage.wal_bytes;\n>>>>>    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - prevWalUsage.wal_fpi;\n>>>>> ...\n>>>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>>>>\n>>>>>    /* remember the current numbers */\n>>>>>    prevWalUsage = pgWalUsage;\n>>>>\n>>>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>>>> which is already defined and eliminates the extra overhead.\n>>>\n>>> +    /* fill in some values using pgWalUsage */\n>>> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;\n>>> +    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n>>>\n>>> It's better to use 
WalUsageAccumDiff() here?\n>>\n>> Yes, thanks. I fixed it.\n>>\n>>> prevWalUsage needs to be initialized with pgWalUsage?\n>>>\n>>> +                if (AmWalWriterProcess()){\n>>> +                    WalStats.m_wal_write_walwriter++;\n>>> +                }\n>>> +                else\n>>> +                {\n>>> +                    WalStats.m_wal_write_backend++;\n>>> +                }\n>>>\n>>> I think that it's better not to separate m_wal_write_xxx into two for\n>>> walwriter and other processes. Instead, we can use one m_wal_write_xxx\n>>> counter and make pgstat_send_wal() send also the process type to\n>>> the stats collector. Then the stats collector can accumulate the counters\n>>> per process type if necessary. If we adopt this approach, we can easily\n>>> extend pg_stat_wal so that any fields can be reported per process type.\n>>\n>> I'll remove the above source code because these counters are not useful.\n>>\n>>\n>> On 2020-10-30 12:00, Fujii Masao wrote:\n>>> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>>>> Hi,\n>>>>\n>>>> I think we need to add some statistics to pg_stat_wal view.\n>>>>\n>>>> Although there are some parameter related WAL,\n>>>> there are few statistics for tuning them.\n>>>>\n>>>> I think it's better to provide the following statistics.\n>>>> Please let me know your comments.\n>>>>\n>>>> ```\n>>>> postgres=# SELECT * from pg_stat_wal;\n>>>> -[ RECORD 1 ]-------+------------------------------\n>>>> wal_records         | 2000224\n>>>> wal_fpi             | 47\n>>>> wal_bytes           | 248216337\n>>>> wal_buffers_full    | 20954\n>>>> wal_init_file       | 8\n>>>> wal_write_backend   | 20960\n>>>> wal_write_walwriter | 46\n>>>> wal_write_time      | 51\n>>>> wal_sync_backend    | 7\n>>>> wal_sync_walwriter  | 8\n>>>> wal_sync_time       | 0\n>>>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>>>> ```\n>>>>\n>>>> 1. 
Basic statistics of WAL activity\n>>>>\n>>>> - wal_records: Total number of WAL records generated\n>>>> - wal_fpi: Total number of WAL full page images generated\n>>>> - wal_bytes: Total amount of WAL bytes generated\n>>>>\n>>>> To understand DB's performance, first, we will check the performance\n>>>> trends for the entire database instance.\n>>>> For example, if the number of wal_fpi becomes higher, users may tune\n>>>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>>>\n>>>> Although users can check the above statistics via EXPLAIN, auto_explain,\n>>>> autovacuum and pg_stat_statements now,\n>>>> if users want to see the performance trends  for the entire database,\n>>>> they must recalculate the statistics.\n>>>>\n>>>> I think it is useful to add the sum of the basic statistics.\n>>>>\n>>>>\n>>>> 2.  WAL segment file creation\n>>>>\n>>>> - wal_init_file: Total number of WAL segment files created.\n>>>>\n>>>> To create a new WAL file may have an impact on the performance of\n>>>> a write-heavy workload generating lots of WAL. If this number is reported high,\n>>>> to reduce the number of this initialization, we can tune WAL-related parameters\n>>>> so that more \"recycled\" WAL files can be held.\n>>>>\n>>>>\n>>>>\n>>>> 3. Number of when WAL is flushed\n>>>>\n>>>> - wal_write_backend : Total number of WAL data written to the disk by backends\n>>>> - wal_write_walwriter : Total number of WAL data written to the disk by walwriter\n>>>> - wal_sync_backend : Total number of WAL data synced to the disk by backends\n>>>> - wal_sync_walwriter : Total number of WAL data synced to the disk by walwrite\n>>>>\n>>>> I think it's useful for tuning \"synchronous_commit\" and \"commit_delay\" for query executions.\n>>>> If the number of WAL is flushed is high, users can know \"synchronous_commit\" is useful for the workload.\n>>>\n>>> I just wonder how useful these counters are. 
Even without these counters,\n>>> we already know synchronous_commit=off is likely to cause the better\n>>> performance (but has the risk of data loss). So ISTM that these counters are\n>>> not so useful when tuning synchronous_commit.\n>>\n>> Thanks, my understanding was wrong.\n>> I agreed that your comments.\n>>\n>> I merged the statistics of *_backend and *_walwriter.\n>> I think the sum of them is useful to calculate the average per write/sync time.\n>> For example, per write time is equals wal_write_time / wal_write.\n> \n> Understood.\n> \n> Thanks for updating the patch!\n> \n> patching file src/include/catalog/pg_proc.dat\n> Hunk #1 FAILED at 5491.\n> 1 out of 1 hunk FAILED -- saving rejects to file src/include/catalog/pg_proc.dat.rej\n> \n> I got this failure when applying the patch. Could you update the patch?\n> \n> \n> -       Number of times WAL data was written to the disk because WAL buffers got full\n> +       Total number of times WAL data written to the disk because WAL buffers got full\n> \n> Isn't \"was\" necessary between \"data\" and \"written\"?\n> \n> \n> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> +       <structfield>wal_bytes</structfield> <type>bigint</type>\n> \n> Shouldn't the type of wal_bytes be numeric because the total number of\n> WAL bytes can exceed the range of bigint? I think that the type of\n> pg_stat_statements.wal_bytes is also numeric for the same reason.\n> \n> \n> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> +       <structfield>wal_write_time</structfield> <type>bigint</type>\n> \n> Shouldn't the type of wal_xxx_time be double precision,\n> like pg_stat_database.blk_write_time?\n> \n> \n> Even when fsync is set to off or wal_sync_method is set to open_sync,\n> wal_sync is incremented. 
Isn't this behavior confusing?\n> \n> \n> +       Total amount of time that has been spent in the portion of\n> +       WAL data was written to disk by backend and walwriter, in \n> milliseconds\n> +       (if <xref linkend=\"guc-track-io-timing\"/> is enabled, otherwise zero)\n> \n> With the patch, track_io_timing controls both IO for data files and\n> WAL files. But we may want to track only either of them. So it's better\n> to extend track_io_timing so that we can specify the tracking target\n> in the parameter? For example, we can make track_io_timing accept\n> data, wal and all. Or we should introduce new GUC for WAL, e.g.,\n> track_wal_io_timing? Thought?\n> \n> I'm afraid that \"by backend and walwriter\" part can make us think\n> incorrectly that WAL writes by other processes like autovacuum\n> are not tracked.\n\n pgstat_send_wal(void)\n {\n+\t/* fill in some values using pgWalUsage */\n+\tWalUsage walusage;\n+\tmemset(&walusage, 0, sizeof(WalUsage));\n+\tWalUsageAccumDiff(&walusage, &pgWalUsage, &prevWalUsage);\n\nAt the first call to pgstat_send_wal(), prevWalUsage has not been set to\nthe previous value of pgWalUsage. So the calculation result of\nWalUsageAccumDiff() can be incorrect. To address this issue,\nprevWalUsage should be set to pgWalUsage or both should be initialized\nwith 0 at the beginning of the process, for example?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 12 Nov 2020 16:27:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": ">Now, pg_stat_wal supports reset all information in WalStats\r\n>using pg_stat_reset_shared('wal') function.\r\n>Isn't it enough?\r\nYes, it's OK; sorry, I missed this information.\r\n\r\n\r\n>> 3. 
I do not think it's a correct description in the document for\r\n>> 'wal_buffers_full'.\r\n \r\n>Where should I rewrite the description? If my understanding is not\r\n>correct, please let me know.\r\nSorry, I did not describe it clearly: I could not understand the meaning of this\r\ncolumn after reading the description in the document.\r\nI have now read the source code of the WAL writer and found that 'wal_buffers_full'\r\nis incremented when a backend has to wait for a WAL buffer that is occupied by another\r\nWAL page, so the backend flushes the old page in that buffer (once it can).\r\nSo I think the original description in the document misses the point; we could describe it as\r\n'Total number of times WAL data was written to the disk because a backend had to evict a WAL buffer\r\nfor an advanced WAL page'.\r\n\r\nSorry if my understanding is wrong.", "msg_date": "Fri, 13 Nov 2020 11:32:23 +0800", "msg_from": "\"lchch1990@sina.cn\" <lchch1990@sina.cn>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-11-12 16:27, Fujii Masao wrote:\n> On 2020/11/12 14:58, Fujii Masao wrote:\n>> \n>> \n>> On 2020/11/06 10:25, Masahiro Ikeda wrote:\n>>> On 2020-10-30 11:50, Fujii Masao wrote:\n>>>> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>>>>> Hi,\n>>>>> \n>>>>> Thanks for your comments and advice. I updated the patch.\n>>>>> \n>>>>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>>>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>>>>> > I see that we also need to add extra code to capture these stats (some\n>>>>>>> > of which is in performance-critical path especially in\n>>>>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. 
It might\n>>>>>>> > be that it is all fine as it is very important to collect these stats\n>>>>>>> > at cluster-level in spite that the same information can be gathered at\n>>>>>>> > statement-level to help customers but I don't see a very strong case\n>>>>>>> > for that in your proposal.\n>>>>>> \n>>>>>> We should avoid that duplication as possible even if the both \n>>>>>> number\n>>>>>> are important.\n>>>>>> \n>>>>>>> Also about performance, I thought there are few impacts because \n>>>>>>> it\n>>>>>>> increments stats in memory. If I can implement to reuse \n>>>>>>> pgWalUsage's\n>>>>>>> value which already collects these stats, there is no impact in\n>>>>>>> XLogInsertRecord.\n>>>>>>> For example, how about pg_stat_wal() calculates the accumulated\n>>>>>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>>>>>> value?\n>>>>>> \n>>>>>> I don't think that works, but it would work that pgstat_send_wal()\n>>>>>> takes the difference of that values between two successive calls.\n>>>>>> \n>>>>>> WalUsage prevWalUsage;\n>>>>>> ...\n>>>>>> pgstat_send_wal()\n>>>>>> {\n>>>>>> ..\n>>>>>>    /* fill in some values using pgWalUsage */\n>>>>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - \n>>>>>> prevWalUsage.wal_bytes;\n>>>>>>    WalStats.m_wal_records = pgWalUsage.wal_records - \n>>>>>> prevWalUsage.wal_records;\n>>>>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - \n>>>>>> prevWalUsage.wal_fpi;\n>>>>>> ...\n>>>>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>>>>> \n>>>>>>    /* remember the current numbers */\n>>>>>>    prevWalUsage = pgWalUsage;\n>>>>> \n>>>>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>>>>> which is already defined and eliminates the extra overhead.\n>>>> \n>>>> +    /* fill in some values using pgWalUsage */\n>>>> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes - \n>>>> prevWalUsage.wal_bytes;\n>>>> +    WalStats.m_wal_records = pgWalUsage.wal_records - \n>>>> 
prevWalUsage.wal_records;\n>>>> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n>>>> \n>>>> It's better to use WalUsageAccumDiff() here?\n>>> \n>>> Yes, thanks. I fixed it.\n>>> \n>>>> prevWalUsage needs to be initialized with pgWalUsage?\n>>>> \n>>>> +                if (AmWalWriterProcess()){\n>>>> +                    WalStats.m_wal_write_walwriter++;\n>>>> +                }\n>>>> +                else\n>>>> +                {\n>>>> +                    WalStats.m_wal_write_backend++;\n>>>> +                }\n>>>> \n>>>> I think that it's better not to separate m_wal_write_xxx into two \n>>>> for\n>>>> walwriter and other processes. Instead, we can use one \n>>>> m_wal_write_xxx\n>>>> counter and make pgstat_send_wal() send also the process type to\n>>>> the stats collector. Then the stats collector can accumulate the \n>>>> counters\n>>>> per process type if necessary. If we adopt this approach, we can \n>>>> easily\n>>>> extend pg_stat_wal so that any fields can be reported per process \n>>>> type.\n>>> \n>>> I'll remove the above source code because these counters are not \n>>> useful.\n>>> \n>>> \n>>> On 2020-10-30 12:00, Fujii Masao wrote:\n>>>> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>>>>> Hi,\n>>>>> \n>>>>> I think we need to add some statistics to pg_stat_wal view.\n>>>>> \n>>>>> Although there are some parameter related WAL,\n>>>>> there are few statistics for tuning them.\n>>>>> \n>>>>> I think it's better to provide the following statistics.\n>>>>> Please let me know your comments.\n>>>>> \n>>>>> ```\n>>>>> postgres=# SELECT * from pg_stat_wal;\n>>>>> -[ RECORD 1 ]-------+------------------------------\n>>>>> wal_records         | 2000224\n>>>>> wal_fpi             | 47\n>>>>> wal_bytes           | 248216337\n>>>>> wal_buffers_full    | 20954\n>>>>> wal_init_file       | 8\n>>>>> wal_write_backend   | 20960\n>>>>> wal_write_walwriter | 46\n>>>>> wal_write_time      | 51\n>>>>> wal_sync_backend    | 7\n>>>>> 
wal_sync_walwriter  | 8\n>>>>> wal_sync_time       | 0\n>>>>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>>>>> ```\n>>>>> \n>>>>> 1. Basic statistics of WAL activity\n>>>>> \n>>>>> - wal_records: Total number of WAL records generated\n>>>>> - wal_fpi: Total number of WAL full page images generated\n>>>>> - wal_bytes: Total amount of WAL bytes generated\n>>>>> \n>>>>> To understand DB's performance, first, we will check the \n>>>>> performance\n>>>>> trends for the entire database instance.\n>>>>> For example, if the number of wal_fpi becomes higher, users may \n>>>>> tune\n>>>>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>>>> \n>>>>> Although users can check the above statistics via EXPLAIN, \n>>>>> auto_explain,\n>>>>> autovacuum and pg_stat_statements now,\n>>>>> if users want to see the performance trends  for the entire \n>>>>> database,\n>>>>> they must recalculate the statistics.\n>>>>> \n>>>>> I think it is useful to add the sum of the basic statistics.\n>>>>> \n>>>>> \n>>>>> 2.  WAL segment file creation\n>>>>> \n>>>>> - wal_init_file: Total number of WAL segment files created.\n>>>>> \n>>>>> To create a new WAL file may have an impact on the performance of\n>>>>> a write-heavy workload generating lots of WAL. If this number is \n>>>>> reported high,\n>>>>> to reduce the number of this initialization, we can tune \n>>>>> WAL-related parameters\n>>>>> so that more \"recycled\" WAL files can be held.\n>>>>> \n>>>>> \n>>>>> \n>>>>> 3. 
Number of when WAL is flushed\n>>>>> \n>>>>> - wal_write_backend : Total number of WAL data written to the disk \n>>>>> by backends\n>>>>> - wal_write_walwriter : Total number of WAL data written to the \n>>>>> disk by walwriter\n>>>>> - wal_sync_backend : Total number of WAL data synced to the disk by \n>>>>> backends\n>>>>> - wal_sync_walwriter : Total number of WAL data synced to the disk \n>>>>> by walwrite\n>>>>> \n>>>>> I think it's useful for tuning \"synchronous_commit\" and \n>>>>> \"commit_delay\" for query executions.\n>>>>> If the number of WAL is flushed is high, users can know \n>>>>> \"synchronous_commit\" is useful for the workload.\n>>>> \n>>>> I just wonder how useful these counters are. Even without these \n>>>> counters,\n>>>> we already know synchronous_commit=off is likely to cause the better\n>>>> performance (but has the risk of data loss). So ISTM that these \n>>>> counters are\n>>>> not so useful when tuning synchronous_commit.\n>>> \n>>> Thanks, my understanding was wrong.\n>>> I agreed that your comments.\n>>> \n>>> I merged the statistics of *_backend and *_walwriter.\n>>> I think the sum of them is useful to calculate the average per \n>>> write/sync time.\n>>> For example, per write time is equals wal_write_time / wal_write.\n>> \n>> Understood.\n>> \n>> Thanks for updating the patch!\n>> \n>> patching file src/include/catalog/pg_proc.dat\n>> Hunk #1 FAILED at 5491.\n>> 1 out of 1 hunk FAILED -- saving rejects to file \n>> src/include/catalog/pg_proc.dat.rej\n>> \n>> I got this failure when applying the patch. 
Could you update the \n>> patch?\n>> \n>> \n>> -       Number of times WAL data was written to the disk because WAL \n>> buffers got full\n>> +       Total number of times WAL data written to the disk because WAL \n>> buffers got full\n>> \n>> Isn't \"was\" necessary between \"data\" and \"written\"?\n>> \n>> \n>> +      <entry role=\"catalog_table_entry\"><para \n>> role=\"column_definition\">\n>> +       <structfield>wal_bytes</structfield> <type>bigint</type>\n>> \n>> Shouldn't the type of wal_bytes be numeric because the total number of\n>> WAL bytes can exceed the range of bigint? I think that the type of\n>> pg_stat_statements.wal_bytes is also numeric for the same reason.\n>> \n>> \n>> +      <entry role=\"catalog_table_entry\"><para \n>> role=\"column_definition\">\n>> +       <structfield>wal_write_time</structfield> <type>bigint</type>\n>> \n>> Shouldn't the type of wal_xxx_time be double precision,\n>> like pg_stat_database.blk_write_time?\n>> \n>> \n>> Even when fsync is set to off or wal_sync_method is set to open_sync,\n>> wal_sync is incremented. Isn't this behavior confusing?\n>> \n>> \n>> +       Total amount of time that has been spent in the portion of\n>> +       WAL data was written to disk by backend and walwriter, in \n>> milliseconds\n>> +       (if <xref linkend=\"guc-track-io-timing\"/> is enabled, \n>> otherwise zero)\n>> \n>> With the patch, track_io_timing controls both IO for data files and\n>> WAL files. But we may want to track only either of them. So it's \n>> better\n>> to extend track_io_timing so that we can specify the tracking target\n>> in the parameter? For example, we can make track_io_timing accept\n>> data, wal and all. Or we should introduce new GUC for WAL, e.g.,\n>> track_wal_io_timing? 
Thought?\n>> \n>> I'm afraid that \"by backend and walwriter\" part can make us think\n>> incorrectly that WAL writes by other processes like autovacuum\n>> are not tracked.\n> \n> pgstat_send_wal(void)\n> {\n> +\t/* fill in some values using pgWalUsage */\n> +\tWalUsage walusage;\n> +\tmemset(&walusage, 0, sizeof(WalUsage));\n> +\tWalUsageAccumDiff(&walusage, &pgWalUsage, &prevWalUsage);\n> \n> At the first call to pgstat_send_wal(), prevWalUsage has not been set \n> to\n> the previous value of pgWalUsage. So the calculation result of\n> WalUsageAccumDiff() can be incorrect. To address this issue,\n> prevWalUsage should be set to pgWalUsage or both should be initialized\n> with 0 at the beginning of the process, for example?\n\nI forgot to handle it, thanks.\nAlthough I initialized it in pgstat_initialize(),\nif there is a better way, please let me know.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 16 Nov 2020 16:33:05 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-11-12 14:58, Fujii Masao wrote:\n> On 2020/11/06 10:25, Masahiro Ikeda wrote:\n>> On 2020-10-30 11:50, Fujii Masao wrote:\n>>> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>>>> Hi,\n>>>> \n>>>> Thanks for your comments and advice. I updated the patch.\n>>>> \n>>>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>>>> > I see that we also need to add extra code to capture these stats (some\n>>>>>> > of which is in performance-critical path especially in\n>>>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. 
It might\n>>>>>> > be that it is all fine as it is very important to collect these stats\n>>>>>> > at cluster-level in spite that the same information can be gathered at\n>>>>>> > statement-level to help customers but I don't see a very strong case\n>>>>>> > for that in your proposal.\n>>>>> \n>>>>> We should avoid that duplication as possible even if the both \n>>>>> number\n>>>>> are important.\n>>>>> \n>>>>>> Also about performance, I thought there are few impacts because it\n>>>>>> increments stats in memory. If I can implement to reuse \n>>>>>> pgWalUsage's\n>>>>>> value which already collects these stats, there is no impact in\n>>>>>> XLogInsertRecord.\n>>>>>> For example, how about pg_stat_wal() calculates the accumulated\n>>>>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>>>>> value?\n>>>>> \n>>>>> I don't think that works, but it would work that pgstat_send_wal()\n>>>>> takes the difference of that values between two successive calls.\n>>>>> \n>>>>> WalUsage prevWalUsage;\n>>>>> ...\n>>>>> pgstat_send_wal()\n>>>>> {\n>>>>> ..\n>>>>>    /* fill in some values using pgWalUsage */\n>>>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - \n>>>>> prevWalUsage.wal_bytes;\n>>>>>    WalStats.m_wal_records = pgWalUsage.wal_records - \n>>>>> prevWalUsage.wal_records;\n>>>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - \n>>>>> prevWalUsage.wal_fpi;\n>>>>> ...\n>>>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>>>> \n>>>>>    /* remember the current numbers */\n>>>>>    prevWalUsage = pgWalUsage;\n>>>> \n>>>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>>>> which is already defined and eliminates the extra overhead.\n>>> \n>>> +    /* fill in some values using pgWalUsage */\n>>> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes - \n>>> prevWalUsage.wal_bytes;\n>>> +    WalStats.m_wal_records = pgWalUsage.wal_records - \n>>> prevWalUsage.wal_records;\n>>> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi - 
prevWalUsage.wal_fpi;\n>>> \n>>> It's better to use WalUsageAccumDiff() here?\n>> \n>> Yes, thanks. I fixed it.\n>> \n>>> prevWalUsage needs to be initialized with pgWalUsage?\n>>> \n>>> +                if (AmWalWriterProcess()){\n>>> +                    WalStats.m_wal_write_walwriter++;\n>>> +                }\n>>> +                else\n>>> +                {\n>>> +                    WalStats.m_wal_write_backend++;\n>>> +                }\n>>> \n>>> I think that it's better not to separate m_wal_write_xxx into two for\n>>> walwriter and other processes. Instead, we can use one \n>>> m_wal_write_xxx\n>>> counter and make pgstat_send_wal() send also the process type to\n>>> the stats collector. Then the stats collector can accumulate the \n>>> counters\n>>> per process type if necessary. If we adopt this approach, we can \n>>> easily\n>>> extend pg_stat_wal so that any fields can be reported per process \n>>> type.\n>> \n>> I'll remove the above source code because these counters are not \n>> useful.\n>> \n>> \n>> On 2020-10-30 12:00, Fujii Masao wrote:\n>>> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>>>> Hi,\n>>>> \n>>>> I think we need to add some statistics to pg_stat_wal view.\n>>>> \n>>>> Although there are some parameter related WAL,\n>>>> there are few statistics for tuning them.\n>>>> \n>>>> I think it's better to provide the following statistics.\n>>>> Please let me know your comments.\n>>>> \n>>>> ```\n>>>> postgres=# SELECT * from pg_stat_wal;\n>>>> -[ RECORD 1 ]-------+------------------------------\n>>>> wal_records         | 2000224\n>>>> wal_fpi             | 47\n>>>> wal_bytes           | 248216337\n>>>> wal_buffers_full    | 20954\n>>>> wal_init_file       | 8\n>>>> wal_write_backend   | 20960\n>>>> wal_write_walwriter | 46\n>>>> wal_write_time      | 51\n>>>> wal_sync_backend    | 7\n>>>> wal_sync_walwriter  | 8\n>>>> wal_sync_time       | 0\n>>>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>>>> ```\n>>>> \n>>>> 1. 
Basic statistics of WAL activity\n>>>> \n>>>> - wal_records: Total number of WAL records generated\n>>>> - wal_fpi: Total number of WAL full page images generated\n>>>> - wal_bytes: Total amount of WAL bytes generated\n>>>> \n>>>> To understand DB's performance, first, we will check the performance\n>>>> trends for the entire database instance.\n>>>> For example, if the number of wal_fpi becomes higher, users may tune\n>>>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>>> \n>>>> Although users can check the above statistics via EXPLAIN, \n>>>> auto_explain,\n>>>> autovacuum and pg_stat_statements now,\n>>>> if users want to see the performance trends  for the entire \n>>>> database,\n>>>> they must recalculate the statistics.\n>>>> \n>>>> I think it is useful to add the sum of the basic statistics.\n>>>> \n>>>> \n>>>> 2.  WAL segment file creation\n>>>> \n>>>> - wal_init_file: Total number of WAL segment files created.\n>>>> \n>>>> To create a new WAL file may have an impact on the performance of\n>>>> a write-heavy workload generating lots of WAL. If this number is \n>>>> reported high,\n>>>> to reduce the number of this initialization, we can tune WAL-related \n>>>> parameters\n>>>> so that more \"recycled\" WAL files can be held.\n>>>> \n>>>> \n>>>> \n>>>> 3. Number of when WAL is flushed\n>>>> \n>>>> - wal_write_backend : Total number of WAL data written to the disk \n>>>> by backends\n>>>> - wal_write_walwriter : Total number of WAL data written to the disk \n>>>> by walwriter\n>>>> - wal_sync_backend : Total number of WAL data synced to the disk by \n>>>> backends\n>>>> - wal_sync_walwriter : Total number of WAL data synced to the disk \n>>>> by walwrite\n>>>> \n>>>> I think it's useful for tuning \"synchronous_commit\" and \n>>>> \"commit_delay\" for query executions.\n>>>> If the number of WAL is flushed is high, users can know \n>>>> \"synchronous_commit\" is useful for the workload.\n>>> \n>>> I just wonder how useful these counters are. 
Even without these \n>>> counters,\n>>> we already know synchronous_commit=off is likely to cause the better\n>>> performance (but has the risk of data loss). So ISTM that these \n>>> counters are\n>>> not so useful when tuning synchronous_commit.\n>> \n>> Thanks, my understanding was wrong.\n>> I agreed that your comments.\n>> \n>> I merged the statistics of *_backend and *_walwriter.\n>> I think the sum of them is useful to calculate the average per \n>> write/sync time.\n>> For example, per write time is equals wal_write_time / wal_write.\n> \n> Understood.\n> \n> Thanks for updating the patch!\n\nThanks for your comments.\n\n> patching file src/include/catalog/pg_proc.dat\n> Hunk #1 FAILED at 5491.\n> 1 out of 1 hunk FAILED -- saving rejects to file\n> src/include/catalog/pg_proc.dat.rej\n> \n> I got this failure when applying the patch. Could you update the patch?\n\nThanks, I updated the patch.\n\n> - Number of times WAL data was written to the disk because WAL\n> buffers got full\n> + Total number of times WAL data written to the disk because WAL\n> buffers got full\n> \n> Isn't \"was\" necessary between \"data\" and \"written\"?\n\nYes, I fixed it.\n\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>wal_bytes</structfield> <type>bigint</type>\n> \n> Shouldn't the type of wal_bytes be numeric because the total number of\n> WAL bytes can exceed the range of bigint? 
I think that the type of\n> pg_stat_statements.wal_bytes is also numeric for the same reason.\n\nThanks, I fixed it.\n\nSince I cast the type of wal_bytes from PgStat_Counter to uint64,\nI changed the type of PgStat_MsgWal and PgStat_WalStats too.\n\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>wal_write_time</structfield> <type>bigint</type>\n> \n> Shouldn't the type of wal_xxx_time be double precision,\n> like pg_stat_database.blk_write_time?\n\nThanks, I changed it.\n\n> Even when fsync is set to off or wal_sync_method is set to open_sync,\n> wal_sync is incremented. Isn't this behavior confusing?\n> \n> \n> + Total amount of time that has been spent in the portion of\n> + WAL data was written to disk by backend and walwriter, in \n> milliseconds\n> + (if <xref linkend=\"guc-track-io-timing\"/> is enabled, otherwise \n> zero)\n> \n> With the patch, track_io_timing controls both IO for data files and\n> WAL files. But we may want to track only either of them. So it's better\n> to extend track_io_timing so that we can specify the tracking target\n> in the parameter? For example, we can make track_io_timing accept\n> data, wal and all. Or we should introduce new GUC for WAL, e.g.,\n> track_wal_io_timing? 
Thought?\n\nOK, I introduced the new GUC \"track_wal_io_timing\".\n\n> I'm afraid that \"by backend and walwriter\" part can make us thinkg\n> incorrectly that WAL writes by other processes like autovacuum\n> are not tracked.\n\nSorry, I removed \"by backend and walwriter\".\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Mon, 16 Nov 2020 16:35:23 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-11-13 12:32, lchch1990@sina.cn wrote:\n>> Now, pg_stat_wal supports reset all informantion in WalStats\n>> using pg_stat_reset_shared('wal') function.\n>> Isn't it enough?\n> Yes it ok, sorry I miss this infomation.\n\nOK.\n\n>>> 3. I do not think it's a correct describe in document for\n>>> 'wal_buffers_full'.\n> \n>> Where should I rewrite the description? If my understanding is not\n>> correct, please let me know.\n> Sorry I have not described it clearly, because I can not understand\n> the meaning of this\n> column after I read the describe in document.\n> And now I read the source code of walwrite and found the place where\n> 'wal_buffers_full'\n> added is for a backend to wait a wal buffer which is occupied by other\n> wal page, so the\n> backend flush the old page in the wal buffer(after wait it can).\n> So i think the origin decribe in document is not so in point, we can\n> describe it such as\n> 'Total number of times WAL data written to the disk because a backend\n> yelled a wal buffer\n> for an advanced wal page.\n>\n> Sorry if my understand is wrong.\n\nThanks for your comments.\n\nYour understanding is almost the same as mine.\nIt describes the case where not only backends but also other background \nprocesses initialize a new WAL page while the WAL buffer's space is \nalready used and there is no space left.\n\n> 'Total number of times WAL data written to the disk because a backend\n> yelled a wal buffer for an 
advanced wal page'\n\nThanks for your suggestion.\nI was worried that users may be confused about how to use \"wal_buffers_full\" \nand how to tune the parameters.\n\nI thought the reason why the WAL buffer has no space is\nimportant for users to tune the wal_buffers parameter.\n\nHow about the following comments?\n\n'Total number of times WAL data was written to the disk because WAL \nbuffers got full\n when initializing a new WAL page'\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 16 Nov 2020 18:24:10 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "\n\nOn 2020/11/16 16:35, Masahiro Ikeda wrote:\n> On 2020-11-12 14:58, Fujii Masao wrote:\n>> On 2020/11/06 10:25, Masahiro Ikeda wrote:\n>>> On 2020-10-30 11:50, Fujii Masao wrote:\n>>>> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>>>>> Hi,\n>>>>>\n>>>>> Thanks for your comments and advice. I updated the patch.\n>>>>>\n>>>>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>>>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>>>>> > I see that we also need to add extra code to capture these stats (some\n>>>>>>> > of which is in performance-critical path especially in\n>>>>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. It might\n>>>>>>> > be that it is all fine as it is very important to collect these stats\n>>>>>>> > at cluster-level in spite that the same information can be gathered at\n>>>>>>> > statement-level to help customers but I don't see a very strong case\n>>>>>>> > for that in your proposal.\n>>>>>>\n>>>>>> We should avoid that duplication as possible even if the both number\n>>>>>> are important.\n>>>>>>\n>>>>>>> Also about performance, I thought there are few impacts because it\n>>>>>>> increments stats in memory. 
If I can implement to reuse pgWalUsage's\n>>>>>>> value which already collects these stats, there is no impact in\n>>>>>>> XLogInsertRecord.\n>>>>>>> For example, how about pg_stat_wal() calculates the accumulated\n>>>>>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>>>>>> value?\n>>>>>>\n>>>>>> I don't think that works, but it would work that pgstat_send_wal()\n>>>>>> takes the difference of that values between two successive calls.\n>>>>>>\n>>>>>> WalUsage prevWalUsage;\n>>>>>> ...\n>>>>>> pgstat_send_wal()\n>>>>>> {\n>>>>>> ..\n>>>>>>    /* fill in some values using pgWalUsage */\n>>>>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - prevWalUsage.wal_bytes;\n>>>>>>    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>>>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - prevWalUsage.wal_fpi;\n>>>>>> ...\n>>>>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>>>>>\n>>>>>>    /* remember the current numbers */\n>>>>>>    prevWalUsage = pgWalUsage;\n>>>>>\n>>>>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>>>>> which is already defined and eliminates the extra overhead.\n>>>>\n>>>> +    /* fill in some values using pgWalUsage */\n>>>> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;\n>>>> +    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>>> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n>>>>\n>>>> It's better to use WalUsageAccumDiff() here?\n>>>\n>>> Yes, thanks. 
I fixed it.\n>>>\n>>>> prevWalUsage needs to be initialized with pgWalUsage?\n>>>>\n>>>> +                if (AmWalWriterProcess()){\n>>>> +                    WalStats.m_wal_write_walwriter++;\n>>>> +                }\n>>>> +                else\n>>>> +                {\n>>>> +                    WalStats.m_wal_write_backend++;\n>>>> +                }\n>>>>\n>>>> I think that it's better not to separate m_wal_write_xxx into two for\n>>>> walwriter and other processes. Instead, we can use one m_wal_write_xxx\n>>>> counter and make pgstat_send_wal() send also the process type to\n>>>> the stats collector. Then the stats collector can accumulate the counters\n>>>> per process type if necessary. If we adopt this approach, we can easily\n>>>> extend pg_stat_wal so that any fields can be reported per process type.\n>>>\n>>> I'll remove the above source code because these counters are not useful.\n>>>\n>>>\n>>> On 2020-10-30 12:00, Fujii Masao wrote:\n>>>> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>>>>> Hi,\n>>>>>\n>>>>> I think we need to add some statistics to pg_stat_wal view.\n>>>>>\n>>>>> Although there are some parameter related WAL,\n>>>>> there are few statistics for tuning them.\n>>>>>\n>>>>> I think it's better to provide the following statistics.\n>>>>> Please let me know your comments.\n>>>>>\n>>>>> ```\n>>>>> postgres=# SELECT * from pg_stat_wal;\n>>>>> -[ RECORD 1 ]-------+------------------------------\n>>>>> wal_records         | 2000224\n>>>>> wal_fpi             | 47\n>>>>> wal_bytes           | 248216337\n>>>>> wal_buffers_full    | 20954\n>>>>> wal_init_file       | 8\n>>>>> wal_write_backend   | 20960\n>>>>> wal_write_walwriter | 46\n>>>>> wal_write_time      | 51\n>>>>> wal_sync_backend    | 7\n>>>>> wal_sync_walwriter  | 8\n>>>>> wal_sync_time       | 0\n>>>>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>>>>> ```\n>>>>>\n>>>>> 1. 
Basic statistics of WAL activity\n>>>>>\n>>>>> - wal_records: Total number of WAL records generated\n>>>>> - wal_fpi: Total number of WAL full page images generated\n>>>>> - wal_bytes: Total amount of WAL bytes generated\n>>>>>\n>>>>> To understand DB's performance, first, we will check the performance\n>>>>> trends for the entire database instance.\n>>>>> For example, if the number of wal_fpi becomes higher, users may tune\n>>>>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>>>>\n>>>>> Although users can check the above statistics via EXPLAIN, auto_explain,\n>>>>> autovacuum and pg_stat_statements now,\n>>>>> if users want to see the performance trends  for the entire database,\n>>>>> they must recalculate the statistics.\n>>>>>\n>>>>> I think it is useful to add the sum of the basic statistics.\n>>>>>\n>>>>>\n>>>>> 2.  WAL segment file creation\n>>>>>\n>>>>> - wal_init_file: Total number of WAL segment files created.\n>>>>>\n>>>>> To create a new WAL file may have an impact on the performance of\n>>>>> a write-heavy workload generating lots of WAL. If this number is reported high,\n>>>>> to reduce the number of this initialization, we can tune WAL-related parameters\n>>>>> so that more \"recycled\" WAL files can be held.\n>>>>>\n>>>>>\n>>>>>\n>>>>> 3. Number of when WAL is flushed\n>>>>>\n>>>>> - wal_write_backend : Total number of WAL data written to the disk by backends\n>>>>> - wal_write_walwriter : Total number of WAL data written to the disk by walwriter\n>>>>> - wal_sync_backend : Total number of WAL data synced to the disk by backends\n>>>>> - wal_sync_walwriter : Total number of WAL data synced to the disk by walwrite\n>>>>>\n>>>>> I think it's useful for tuning \"synchronous_commit\" and \"commit_delay\" for query executions.\n>>>>> If the number of WAL is flushed is high, users can know \"synchronous_commit\" is useful for the workload.\n>>>>\n>>>> I just wonder how useful these counters are. 
Even without these counters,\n>>>> we already know synchronous_commit=off is likely to cause the better\n>>>> performance (but has the risk of data loss). So ISTM that these counters are\n>>>> not so useful when tuning synchronous_commit.\n>>>\n>>> Thanks, my understanding was wrong.\n>>> I agreed that your comments.\n>>>\n>>> I merged the statistics of *_backend and *_walwriter.\n>>> I think the sum of them is useful to calculate the average per write/sync time.\n>>> For example, per write time is equals wal_write_time / wal_write.\n>>\n>> Understood.\n>>\n>> Thanks for updating the patch!\n> \n> Thanks for your comments.\n> \n>> patching file src/include/catalog/pg_proc.dat\n>> Hunk #1 FAILED at 5491.\n>> 1 out of 1 hunk FAILED -- saving rejects to file\n>> src/include/catalog/pg_proc.dat.rej\n>>\n>> I got this failure when applying the patch. Could you update the patch?\n> \n> Thanks, I updated the patch.\n> \n>> -       Number of times WAL data was written to the disk because WAL\n>> buffers got full\n>> +       Total number of times WAL data written to the disk because WAL\n>> buffers got full\n>>\n>> Isn't \"was\" necessary between \"data\" and \"written\"?\n> \n> Yes, I fixed it.\n> \n>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>> +       <structfield>wal_bytes</structfield> <type>bigint</type>\n>>\n>> Shouldn't the type of wal_bytes be numeric because the total number of\n>> WAL bytes can exceed the range of bigint? 
I think that the type of\n>> pg_stat_statements.wal_bytes is also numeric for the same reason.\n> \n> Thanks, I fixed it.\n> \n> Since I cast the type of wal_bytes from PgStat_Counter to uint64,\n> I changed the type of PgStat_MsgWal and PgStat_WalStats too.\n> \n>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>> +       <structfield>wal_write_time</structfield> <type>bigint</type>\n>>\n>> Shouldn't the type of wal_xxx_time be double precision,\n>> like pg_stat_database.blk_write_time?\n> \n> Thanks, I changed it.\n> \n>> Even when fsync is set to off or wal_sync_method is set to open_sync,\n>> wal_sync is incremented. Isn't this behavior confusing?\n\nWhat do you think about this comment?\n\nI found that we discussed track-WAL-IO-timing feature at the past discussion\nabout the similar feature [1]. But the feature was droppped from the proposal\npatch because there was the performance concern. So probably we need to\nrevisit the past discussion and benchmark the performance. Thought?\n\nIf track-WAL-IO-timing feature may cause performance regression,\nit might be an idea to extract wal_records, wal_fpi and wal_bytes parts\nfrom the patch and commit it at first.\n\n[1]\nhttps://postgr.es/m/CAJrrPGc6APFUGYNcPe4qcNxpL8gXKYv1KST+vwJcFtCSCEySnA@mail.gmail.com\n\n\n>>\n>>\n>> +       Total amount of time that has been spent in the portion of\n>> +       WAL data was written to disk by backend and walwriter, in milliseconds\n>> +       (if <xref linkend=\"guc-track-io-timing\"/> is enabled, otherwise zero)\n>>\n>> With the patch, track_io_timing controls both IO for data files and\n>> WAL files. But we may want to track only either of them. So it's better\n>> to extend track_io_timing so that we can specify the tracking target\n>> in the parameter? For example, we can make track_io_timing accept\n>> data, wal and all. Or we should introduce new GUC for WAL, e.g.,\n>> track_wal_io_timing? 
Thought?\n> \n> OK, I introduced the new GUC \"track_wal_io_timing\".\n> \n>> I'm afraid that \"by backend and walwriter\" part can make us thinkg\n>> incorrectly that WAL writes by other processes like autovacuum\n>> are not tracked.\n> \n> Sorry, I removed \"by backend and walwriter\".\n\nThanks for updating the patch!\n\n+WalUsage prevWalUsage;\n\nISTM that we can declare this as static variable because\nit's used only in pgstat.c.\n\n+\tmemset(&walusage, 0, sizeof(WalUsage));\n+\tWalUsageAccumDiff(&walusage, &pgWalUsage, &prevWalUsage);\n\nThis memset seems unnecessary.\n\n \t/* We assume this initializes to zeroes */\n \tstatic const PgStat_MsgWal all_zeroes;\n\nThis declaration of the variable should be placed around\nthe top of pgstat_send_wal().\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 17 Nov 2020 11:46:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "\n\nOn 2020/11/16 18:24, Masahiro Ikeda wrote:\n> On 2020-11-13 12:32, lchch1990@sina.cn wrote:\n>>> Now, pg_stat_wal supports reset all informantion in WalStats\n>>> using pg_stat_reset_shared('wal') function.\n>>> Isn't it enough?\n>> Yes it ok, sorry I miss this infomation.\n> \n> OK.\n> \n>>>> 3. I do not think it's a correct describe in document for\n>>>> 'wal_buffers_full'.\n>>\n>>> Where should I rewrite the description? 
If my understanding is not\n>>> correct, please let me know.\n>> Sorry I have not described it clearly, because I can not understand\n>> the meaning of this\n>> column after I read the describe in document.\n>> And now I read the source code of walwrite and found the place where\n>> 'wal_buffers_full'\n>> added is for a backend to wait a wal buffer which is occupied by other\n>> wal page, so the\n>> backend flush the old page in the wal buffer(after wait it can).\n>> So i think the origin decribe in document is not so in point, we can\n>> describe it such as\n>> 'Total number of times WAL data written to the disk because a backend\n>> yelled a wal buffer\n>> for an advanced wal page.\n>>\n>> Sorry if my understand is wrong.\n> \n> Thanks for your comments.\n> \n> You're understanding is almost the same as mine.\n> It describes when not only backends but also other backgrounds initialize a new wal page,\n> wal buffer's space is already used and there is no space.\n> \n>> 'Total number of times WAL data written to the disk because a backend\n>> yelled a wal buffer for an advanced wal page'\n> \n> Thanks for your suggestion.\n> I wondered that users may confuse about how to use \"wal_buffers_full\" and how to tune parameters.\n> \n> I thought the reason which wal buffer has no space is\n> important for users to tune the wal_buffers parameter.\n> \n> How about the following comments?\n> \n> 'Total number of times WAL data was written to the disk because WAL buffers got full\n>  when to initialize a new WAL page'\n\nOr what about the following?\n\nTotal number of times WAL data was written to the disk, to claim the buffer page to insert new WAL data when the WAL buffers got filled up with unwritten WAL data.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 17 Nov 2020 11:53:45 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, 
"msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": ">> Thanks for your comments.\r\n>>\r\n>> You're understanding is almost the same as mine.\r\n>> It describes when not only backends but also other backgrounds initialize a new wal page,\r\n>> wal buffer's space is already used and there is no space.\r\n>>\r\n>>> 'Total number of times WAL data written to the disk because a backend\r\n>>> yelled a wal buffer for an advanced wal page'\r\n>>\r\n>> Thanks for your suggestion.\r\n>> I wondered that users may confuse about how to use \"wal_buffers_full\" and how to tune parameters.\r\n>>\r\n>> I thought the reason which wal buffer has no space is\r\n>> important for users to tune the wal_buffers parameter.\r\n>>\r\n>> How about the following comments?\r\n>>\r\n>> 'Total number of times WAL data was written to the disk because WAL buffers got full\r\n>> when to initialize a new WAL page'\r\n>Or what about the following?\r\n>Total number of times WAL data was written to the disk, to claim the buffer page to insert new\r\n>WAL data when the WAL buffers got filled up with unwritten WAL data.\r\nAs I understand it, we cannot say 'full' because every WAL page is mapped to a specific WAL buffer slot.\r\nWhen a WAL page needs to be written but its buffer slot is occupied by another WAL page, it has to\r\nwait until the WAL buffer slot is released. 
So i think we should say it 'occupied' not 'full'.\r\n\r\nMaybe:\r\nTotal number of times WAL data was written to the disk, to claim the buffer page to insert new\r\nWAL data when the special WAL buffer occupied by other page.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\r\n
", "msg_date": "Tue, 17 Nov 2020 11:53:37 +0800", "msg_from": "\"lchch1990@sina.cn\" <lchch1990@sina.cn>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-11-17 11:46, Fujii Masao wrote:\n> On 2020/11/16 16:35, Masahiro Ikeda wrote:\n>> On 2020-11-12 14:58, Fujii Masao wrote:\n>>> On 2020/11/06 10:25, Masahiro Ikeda wrote:\n>>>> On 2020-10-30 11:50, Fujii Masao wrote:\n>>>>> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>>>>>> Hi,\n>>>>>> \n>>>>>> Thanks for your comments and advice. I updated the patch.\n>>>>>> \n>>>>>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>>>>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>>>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>>>>>> > I see that we also need to add extra code to capture these stats (some\n>>>>>>>> > of which is in performance-critical path especially in\n>>>>>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. It might\n>>>>>>>> > be that it is all fine as it is very important to collect these stats\n>>>>>>>> > at cluster-level in spite that the same information can be gathered at\n>>>>>>>> > statement-level to help customers but I don't see a very strong case\n>>>>>>>> > for that in your proposal.\n>>>>>>> \n>>>>>>> We should avoid that duplication as possible even if the both \n>>>>>>> number\n>>>>>>> are important.\n>>>>>>> \n>>>>>>>> Also about performance, I thought there are few impacts because \n>>>>>>>> it\n>>>>>>>> increments stats in memory. 
If I can implement to reuse \n>>>>>>>> pgWalUsage's\n>>>>>>>> value which already collects these stats, there is no impact in\n>>>>>>>> XLogInsertRecord.\n>>>>>>>> For example, how about pg_stat_wal() calculates the accumulated\n>>>>>>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>>>>>>> value?\n>>>>>>> \n>>>>>>> I don't think that works, but it would work that \n>>>>>>> pgstat_send_wal()\n>>>>>>> takes the difference of that values between two successive calls.\n>>>>>>> \n>>>>>>> WalUsage prevWalUsage;\n>>>>>>> ...\n>>>>>>> pgstat_send_wal()\n>>>>>>> {\n>>>>>>> ..\n>>>>>>>    /* fill in some values using pgWalUsage */\n>>>>>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - \n>>>>>>> prevWalUsage.wal_bytes;\n>>>>>>>    WalStats.m_wal_records = pgWalUsage.wal_records - \n>>>>>>> prevWalUsage.wal_records;\n>>>>>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - \n>>>>>>> prevWalUsage.wal_fpi;\n>>>>>>> ...\n>>>>>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>>>>>> \n>>>>>>>    /* remember the current numbers */\n>>>>>>>    prevWalUsage = pgWalUsage;\n>>>>>> \n>>>>>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>>>>>> which is already defined and eliminates the extra overhead.\n>>>>> \n>>>>> +    /* fill in some values using pgWalUsage */\n>>>>> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes - \n>>>>> prevWalUsage.wal_bytes;\n>>>>> +    WalStats.m_wal_records = pgWalUsage.wal_records - \n>>>>> prevWalUsage.wal_records;\n>>>>> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi - \n>>>>> prevWalUsage.wal_fpi;\n>>>>> \n>>>>> It's better to use WalUsageAccumDiff() here?\n>>>> \n>>>> Yes, thanks. 
I fixed it.\n>>>> \n>>>>> prevWalUsage needs to be initialized with pgWalUsage?\n>>>>> \n>>>>> +                if (AmWalWriterProcess()){\n>>>>> +                    WalStats.m_wal_write_walwriter++;\n>>>>> +                }\n>>>>> +                else\n>>>>> +                {\n>>>>> +                    WalStats.m_wal_write_backend++;\n>>>>> +                }\n>>>>> \n>>>>> I think that it's better not to separate m_wal_write_xxx into two \n>>>>> for\n>>>>> walwriter and other processes. Instead, we can use one \n>>>>> m_wal_write_xxx\n>>>>> counter and make pgstat_send_wal() send also the process type to\n>>>>> the stats collector. Then the stats collector can accumulate the \n>>>>> counters\n>>>>> per process type if necessary. If we adopt this approach, we can \n>>>>> easily\n>>>>> extend pg_stat_wal so that any fields can be reported per process \n>>>>> type.\n>>>> \n>>>> I'll remove the above source code because these counters are not \n>>>> useful.\n>>>> \n>>>> \n>>>> On 2020-10-30 12:00, Fujii Masao wrote:\n>>>>> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>>>>>> Hi,\n>>>>>> \n>>>>>> I think we need to add some statistics to pg_stat_wal view.\n>>>>>> \n>>>>>> Although there are some parameter related WAL,\n>>>>>> there are few statistics for tuning them.\n>>>>>> \n>>>>>> I think it's better to provide the following statistics.\n>>>>>> Please let me know your comments.\n>>>>>> \n>>>>>> ```\n>>>>>> postgres=# SELECT * from pg_stat_wal;\n>>>>>> -[ RECORD 1 ]-------+------------------------------\n>>>>>> wal_records         | 2000224\n>>>>>> wal_fpi             | 47\n>>>>>> wal_bytes           | 248216337\n>>>>>> wal_buffers_full    | 20954\n>>>>>> wal_init_file       | 8\n>>>>>> wal_write_backend   | 20960\n>>>>>> wal_write_walwriter | 46\n>>>>>> wal_write_time      | 51\n>>>>>> wal_sync_backend    | 7\n>>>>>> wal_sync_walwriter  | 8\n>>>>>> wal_sync_time       | 0\n>>>>>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>>>>>> ```\n>>>>>> \n>>>>>> 1. 
Basic statistics of WAL activity\n>>>>>> \n>>>>>> - wal_records: Total number of WAL records generated\n>>>>>> - wal_fpi: Total number of WAL full page images generated\n>>>>>> - wal_bytes: Total amount of WAL bytes generated\n>>>>>> \n>>>>>> To understand DB's performance, first, we will check the \n>>>>>> performance\n>>>>>> trends for the entire database instance.\n>>>>>> For example, if the number of wal_fpi becomes higher, users may \n>>>>>> tune\n>>>>>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>>>>> \n>>>>>> Although users can check the above statistics via EXPLAIN, \n>>>>>> auto_explain,\n>>>>>> autovacuum and pg_stat_statements now,\n>>>>>> if users want to see the performance trends  for the entire \n>>>>>> database,\n>>>>>> they must recalculate the statistics.\n>>>>>> \n>>>>>> I think it is useful to add the sum of the basic statistics.\n>>>>>> \n>>>>>> \n>>>>>> 2.  WAL segment file creation\n>>>>>> \n>>>>>> - wal_init_file: Total number of WAL segment files created.\n>>>>>> \n>>>>>> To create a new WAL file may have an impact on the performance of\n>>>>>> a write-heavy workload generating lots of WAL. If this number is \n>>>>>> reported high,\n>>>>>> to reduce the number of this initialization, we can tune \n>>>>>> WAL-related parameters\n>>>>>> so that more \"recycled\" WAL files can be held.\n>>>>>> \n>>>>>> \n>>>>>> \n>>>>>> 3. 
Number of when WAL is flushed\n>>>>>> \n>>>>>> - wal_write_backend : Total number of WAL data written to the disk \n>>>>>> by backends\n>>>>>> - wal_write_walwriter : Total number of WAL data written to the \n>>>>>> disk by walwriter\n>>>>>> - wal_sync_backend : Total number of WAL data synced to the disk \n>>>>>> by backends\n>>>>>> - wal_sync_walwriter : Total number of WAL data synced to the disk \n>>>>>> by walwrite\n>>>>>> \n>>>>>> I think it's useful for tuning \"synchronous_commit\" and \n>>>>>> \"commit_delay\" for query executions.\n>>>>>> If the number of WAL is flushed is high, users can know \n>>>>>> \"synchronous_commit\" is useful for the workload.\n>>>>> \n>>>>> I just wonder how useful these counters are. Even without these \n>>>>> counters,\n>>>>> we already know synchronous_commit=off is likely to cause the \n>>>>> better\n>>>>> performance (but has the risk of data loss). So ISTM that these \n>>>>> counters are\n>>>>> not so useful when tuning synchronous_commit.\n>>>> \n>>>> Thanks, my understanding was wrong.\n>>>> I agreed that your comments.\n>>>> \n>>>> I merged the statistics of *_backend and *_walwriter.\n>>>> I think the sum of them is useful to calculate the average per \n>>>> write/sync time.\n>>>> For example, per write time is equals wal_write_time / wal_write.\n>>> \n>>> Understood.\n>>> \n>>> Thanks for updating the patch!\n>> \n>> Thanks for your comments.\n>> \n>>> patching file src/include/catalog/pg_proc.dat\n>>> Hunk #1 FAILED at 5491.\n>>> 1 out of 1 hunk FAILED -- saving rejects to file\n>>> src/include/catalog/pg_proc.dat.rej\n>>> \n>>> I got this failure when applying the patch. 
Could you update the \n>>> patch?\n>> \n>> Thanks, I updated the patch.\n>> \n>>> -       Number of times WAL data was written to the disk because WAL\n>>> buffers got full\n>>> +       Total number of times WAL data written to the disk because \n>>> WAL\n>>> buffers got full\n>>> \n>>> Isn't \"was\" necessary between \"data\" and \"written\"?\n>> \n>> Yes, I fixed it.\n>> \n>>> +      <entry role=\"catalog_table_entry\"><para \n>>> role=\"column_definition\">\n>>> +       <structfield>wal_bytes</structfield> <type>bigint</type>\n>>> \n>>> Shouldn't the type of wal_bytes be numeric because the total number \n>>> of\n>>> WAL bytes can exceed the range of bigint? I think that the type of\n>>> pg_stat_statements.wal_bytes is also numeric for the same reason.\n>> \n>> Thanks, I fixed it.\n>> \n>> Since I cast the type of wal_bytes from PgStat_Counter to uint64,\n>> I changed the type of PgStat_MsgWal and PgStat_WalStats too.\n>> \n>>> +      <entry role=\"catalog_table_entry\"><para \n>>> role=\"column_definition\">\n>>> +       <structfield>wal_write_time</structfield> <type>bigint</type>\n>>> \n>>> Shouldn't the type of wal_xxx_time be double precision,\n>>> like pg_stat_database.blk_write_time?\n>> \n>> Thanks, I changed it.\n>> \n>>> Even when fsync is set to off or wal_sync_method is set to open_sync,\n>>> wal_sync is incremented. Isn't this behavior confusing?\n> \n> What do you think about this comment?\n\nSorry, I'll change to increment wal_sync and wal_sync_time only\nif a specific fsync method is called.\n\n> I found that we discussed track-WAL-IO-timing feature at the past \n> discussion\n> about the similar feature [1]. But the feature was droppped from the \n> proposal\n> patch because there was the performance concern. So probably we need to\n> revisit the past discussion and benchmark the performance. 
Thought?\n> \n> If track-WAL-IO-timing feature may cause performance regression,\n> it might be an idea to extract wal_records, wal_fpi and wal_bytes parts\n> from the patch and commit it at first.\n> \n> [1]\n> https://postgr.es/m/CAJrrPGc6APFUGYNcPe4qcNxpL8gXKYv1KST+vwJcFtCSCEySnA@mail.gmail.com\n\nThanks, I'll check the thread.\nI agree to add basic statistics at first and I attached the patch.\n\n>>> \n>>> \n>>> +       Total amount of time that has been spent in the portion of\n>>> +       WAL data was written to disk by backend and walwriter, in \n>>> milliseconds\n>>> +       (if <xref linkend=\"guc-track-io-timing\"/> is enabled, \n>>> otherwise zero)\n>>> \n>>> With the patch, track_io_timing controls both IO for data files and\n>>> WAL files. But we may want to track only either of them. So it's \n>>> better\n>>> to extend track_io_timing so that we can specify the tracking target\n>>> in the parameter? For example, we can make track_io_timing accept\n>>> data, wal and all. Or we should introduce new GUC for WAL, e.g.,\n>>> track_wal_io_timing? 
Thought?\n>> \n>> OK, I introduced the new GUC \"track_wal_io_timing\".\n>> \n>>> I'm afraid that \"by backend and walwriter\" part can make us thinkg\n>>> incorrectly that WAL writes by other processes like autovacuum\n>>> are not tracked.\n>> \n>> Sorry, I removed \"by backend and walwriter\".\n> \n> Thanks for updating the patch!\n> \n> +WalUsage prevWalUsage;\n> \n> ISTM that we can declare this as static variable because\n> it's used only in pgstat.c.\n\nThanks, I fixed it.\n\n> +\tmemset(&walusage, 0, sizeof(WalUsage));\n> +\tWalUsageAccumDiff(&walusage, &pgWalUsage, &prevWalUsage);\n> \n> This memset seems unnecessary.\n\nI couldn't understand why this memset is unnecessary.\nSince WalUsageAccumDiff not only calculates the difference but also adds \nthe value,\nI thought walusage needs to be initialized.\n\n\n> \t/* We assume this initializes to zeroes */\n> \tstatic const PgStat_MsgWal all_zeroes;\n> \n> This declaration of the variable should be placed around\n> the top of pgstat_send_wal().\n\nSorry, I fixed it.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Thu, 19 Nov 2020 16:31:09 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-11-17 12:53, lchch1990@sina.cn wrote:\n>>> Thanks for your comments.\n>>> \n>>> You're understanding is almost the same as mine.\n>>> It describes when not only backends but also other backgrounds\n> initialize a new wal page,\n>>> wal buffer's space is already used and there is no space.\n>>> \n>>>> 'Total number of times WAL data written to the disk because a\n> backend\n>>>> yelled a wal buffer for an advanced wal page'\n>>> \n>>> Thanks for your suggestion.\n>>> I wondered that users may confuse about how to use\n> \"wal_buffers_full\" and how to tune parameters.\n>>> \n>>> I thought the reason which wal buffer has no space is\n>>> important for 
users to tune the wal_buffers parameter.\n>>> \n>>> How about the following comments?\n>>> \n>>> 'Total number of times WAL data was written to the disk because WAL\n> buffers got full\n>>> when to initialize a new WAL page'\n>> Or what about the following?\n>> Total number of times WAL data was written to the disk, to claim the\n> buffer page to insert new\n>> WAL data when the WAL buffers got filled up with unwritten WAL data.\n> As my understand we can not say 'full' because every wal page mapped a\n> special wal buffer slot.\n> When a wal page need to be write, but the buffer slot was occupied by\n> other wal page. It need to\n> wait the wal buffer slot released. So i think we should say it\n> 'occupied' not 'full'.\n> \n> Maybe:\n> Total number of times WAL data was written to the disk, to claim the\n> buffer page to insert new\n> WAL data when the special WAL buffer occupied by other page.\n\nOK, I will change the above sentence since there are some sentences\nlike \"space occupied by\", \"disk blocks occupied\", and so on in the \ndocuments.\n\nDo we need to change the column name from \"wal_buffers_full\"\nto another name like \"wal_buffers_all_occupied\"?\n\nRegards\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 19 Nov 2020 17:03:20 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "\n\nOn 2020/11/19 16:31, Masahiro Ikeda wrote:\n> On 2020-11-17 11:46, Fujii Masao wrote:\n>> On 2020/11/16 16:35, Masahiro Ikeda wrote:\n>>> On 2020-11-12 14:58, Fujii Masao wrote:\n>>>> On 2020/11/06 10:25, Masahiro Ikeda wrote:\n>>>>> On 2020-10-30 11:50, Fujii Masao wrote:\n>>>>>> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>>>>>>> Hi,\n>>>>>>>\n>>>>>>> Thanks for your comments and advice. 
I updated the patch.\n>>>>>>>\n>>>>>>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>>>>>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>>>>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>>>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>>>>>>> > I see that we also need to add extra code to capture these stats (some\n>>>>>>>>> > of which is in performance-critical path especially in\n>>>>>>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. It might\n>>>>>>>>> > be that it is all fine as it is very important to collect these stats\n>>>>>>>>> > at cluster-level in spite that the same information can be gathered at\n>>>>>>>>> > statement-level to help customers but I don't see a very strong case\n>>>>>>>>> > for that in your proposal.\n>>>>>>>>\n>>>>>>>> We should avoid that duplication as possible even if the both number\n>>>>>>>> are important.\n>>>>>>>>\n>>>>>>>>> Also about performance, I thought there are few impacts because it\n>>>>>>>>> increments stats in memory. 
If I can implement to reuse pgWalUsage's\n>>>>>>>>> value which already collects these stats, there is no impact in\n>>>>>>>>> XLogInsertRecord.\n>>>>>>>>> For example, how about pg_stat_wal() calculates the accumulated\n>>>>>>>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>>>>>>>> value?\n>>>>>>>>\n>>>>>>>> I don't think that works, but it would work that pgstat_send_wal()\n>>>>>>>> takes the difference of that values between two successive calls.\n>>>>>>>>\n>>>>>>>> WalUsage prevWalUsage;\n>>>>>>>> ...\n>>>>>>>> pgstat_send_wal()\n>>>>>>>> {\n>>>>>>>> ..\n>>>>>>>>    /* fill in some values using pgWalUsage */\n>>>>>>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - prevWalUsage.wal_bytes;\n>>>>>>>>    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>>>>>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - prevWalUsage.wal_fpi;\n>>>>>>>> ...\n>>>>>>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>>>>>>>\n>>>>>>>>    /* remember the current numbers */\n>>>>>>>>    prevWalUsage = pgWalUsage;\n>>>>>>>\n>>>>>>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>>>>>>> which is already defined and eliminates the extra overhead.\n>>>>>>\n>>>>>> +    /* fill in some values using pgWalUsage */\n>>>>>> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;\n>>>>>> +    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>>>>> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n>>>>>>\n>>>>>> It's better to use WalUsageAccumDiff() here?\n>>>>>\n>>>>> Yes, thanks. 
I fixed it.\n>>>>>\n>>>>>> prevWalUsage needs to be initialized with pgWalUsage?\n>>>>>>\n>>>>>> +                if (AmWalWriterProcess()){\n>>>>>> +                    WalStats.m_wal_write_walwriter++;\n>>>>>> +                }\n>>>>>> +                else\n>>>>>> +                {\n>>>>>> +                    WalStats.m_wal_write_backend++;\n>>>>>> +                }\n>>>>>>\n>>>>>> I think that it's better not to separate m_wal_write_xxx into two for\n>>>>>> walwriter and other processes. Instead, we can use one m_wal_write_xxx\n>>>>>> counter and make pgstat_send_wal() send also the process type to\n>>>>>> the stats collector. Then the stats collector can accumulate the counters\n>>>>>> per process type if necessary. If we adopt this approach, we can easily\n>>>>>> extend pg_stat_wal so that any fields can be reported per process type.\n>>>>>\n>>>>> I'll remove the above source code because these counters are not useful.\n>>>>>\n>>>>>\n>>>>> On 2020-10-30 12:00, Fujii Masao wrote:\n>>>>>> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>>>>>>> Hi,\n>>>>>>>\n>>>>>>> I think we need to add some statistics to pg_stat_wal view.\n>>>>>>>\n>>>>>>> Although there are some parameter related WAL,\n>>>>>>> there are few statistics for tuning them.\n>>>>>>>\n>>>>>>> I think it's better to provide the following statistics.\n>>>>>>> Please let me know your comments.\n>>>>>>>\n>>>>>>> ```\n>>>>>>> postgres=# SELECT * from pg_stat_wal;\n>>>>>>> -[ RECORD 1 ]-------+------------------------------\n>>>>>>> wal_records         | 2000224\n>>>>>>> wal_fpi             | 47\n>>>>>>> wal_bytes           | 248216337\n>>>>>>> wal_buffers_full    | 20954\n>>>>>>> wal_init_file       | 8\n>>>>>>> wal_write_backend   | 20960\n>>>>>>> wal_write_walwriter | 46\n>>>>>>> wal_write_time      | 51\n>>>>>>> wal_sync_backend    | 7\n>>>>>>> wal_sync_walwriter  | 8\n>>>>>>> wal_sync_time       | 0\n>>>>>>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>>>>>>> ```\n>>>>>>>\n>>>>>>> 1. 
Basic statistics of WAL activity\n>>>>>>>\n>>>>>>> - wal_records: Total number of WAL records generated\n>>>>>>> - wal_fpi: Total number of WAL full page images generated\n>>>>>>> - wal_bytes: Total amount of WAL bytes generated\n>>>>>>>\n>>>>>>> To understand DB's performance, first, we will check the performance\n>>>>>>> trends for the entire database instance.\n>>>>>>> For example, if the number of wal_fpi becomes higher, users may tune\n>>>>>>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>>>>>>\n>>>>>>> Although users can check the above statistics via EXPLAIN, auto_explain,\n>>>>>>> autovacuum and pg_stat_statements now,\n>>>>>>> if users want to see the performance trends  for the entire database,\n>>>>>>> they must recalculate the statistics.\n>>>>>>>\n>>>>>>> I think it is useful to add the sum of the basic statistics.\n>>>>>>>\n>>>>>>>\n>>>>>>> 2.  WAL segment file creation\n>>>>>>>\n>>>>>>> - wal_init_file: Total number of WAL segment files created.\n>>>>>>>\n>>>>>>> To create a new WAL file may have an impact on the performance of\n>>>>>>> a write-heavy workload generating lots of WAL. If this number is reported high,\n>>>>>>> to reduce the number of this initialization, we can tune WAL-related parameters\n>>>>>>> so that more \"recycled\" WAL files can be held.\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> 3. 
Number of when WAL is flushed\n>>>>>>>\n>>>>>>> - wal_write_backend : Total number of WAL data written to the disk by backends\n>>>>>>> - wal_write_walwriter : Total number of WAL data written to the disk by walwriter\n>>>>>>> - wal_sync_backend : Total number of WAL data synced to the disk by backends\n>>>>>>> - wal_sync_walwriter : Total number of WAL data synced to the disk by walwrite\n>>>>>>>\n>>>>>>> I think it's useful for tuning \"synchronous_commit\" and \"commit_delay\" for query executions.\n>>>>>>> If the number of WAL is flushed is high, users can know \"synchronous_commit\" is useful for the workload.\n>>>>>>\n>>>>>> I just wonder how useful these counters are. Even without these counters,\n>>>>>> we already know synchronous_commit=off is likely to cause the better\n>>>>>> performance (but has the risk of data loss). So ISTM that these counters are\n>>>>>> not so useful when tuning synchronous_commit.\n>>>>>\n>>>>> Thanks, my understanding was wrong.\n>>>>> I agreed that your comments.\n>>>>>\n>>>>> I merged the statistics of *_backend and *_walwriter.\n>>>>> I think the sum of them is useful to calculate the average per write/sync time.\n>>>>> For example, per write time is equals wal_write_time / wal_write.\n>>>>\n>>>> Understood.\n>>>>\n>>>> Thanks for updating the patch!\n>>>\n>>> Thanks for your comments.\n>>>\n>>>> patching file src/include/catalog/pg_proc.dat\n>>>> Hunk #1 FAILED at 5491.\n>>>> 1 out of 1 hunk FAILED -- saving rejects to file\n>>>> src/include/catalog/pg_proc.dat.rej\n>>>>\n>>>> I got this failure when applying the patch. 
Could you update the patch?\n>>>\n>>> Thanks, I updated the patch.\n>>>\n>>>> -       Number of times WAL data was written to the disk because WAL\n>>>> buffers got full\n>>>> +       Total number of times WAL data written to the disk because WAL\n>>>> buffers got full\n>>>>\n>>>> Isn't \"was\" necessary between \"data\" and \"written\"?\n>>>\n>>> Yes, I fixed it.\n>>>\n>>>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>>>> +       <structfield>wal_bytes</structfield> <type>bigint</type>\n>>>>\n>>>> Shouldn't the type of wal_bytes be numeric because the total number of\n>>>> WAL bytes can exceed the range of bigint? I think that the type of\n>>>> pg_stat_statements.wal_bytes is also numeric for the same reason.\n>>>\n>>> Thanks, I fixed it.\n>>>\n>>> Since I cast the type of wal_bytes from PgStat_Counter to uint64,\n>>> I changed the type of PgStat_MsgWal and PgStat_WalStats too.\n>>>\n>>>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>>>> +       <structfield>wal_write_time</structfield> <type>bigint</type>\n>>>>\n>>>> Shouldn't the type of wal_xxx_time be double precision,\n>>>> like pg_stat_database.blk_write_time?\n>>>\n>>> Thanks, I changed it.\n>>>\n>>>> Even when fsync is set to off or wal_sync_method is set to open_sync,\n>>>> wal_sync is incremented. Isn't this behavior confusing?\n>>\n>> What do you think about this comment?\n> \n> Sorry, I'll change to increment wal_sync and wal_sync_time only\n> if a specific fsync method is called.\n> \n>> I found that we discussed track-WAL-IO-timing feature at the past discussion\n>> about the similar feature [1]. But the feature was droppped from the proposal\n>> patch because there was the performance concern. So probably we need to\n>> revisit the past discussion and benchmark the performance. 
Thought?\n>>\n>> If track-WAL-IO-timing feature may cause performance regression,\n>> it might be an idea to extract wal_records, wal_fpi and wal_bytes parts\n>> from the patch and commit it at first.\n>>\n>> [1]\n>> https://postgr.es/m/CAJrrPGc6APFUGYNcPe4qcNxpL8gXKYv1KST+vwJcFtCSCEySnA@mail.gmail.com\n> \n> Thanks, I'll check the thread.\n> I agree to add basic statistics at first and I attached the patch.\n\nThanks!\n\n+\t\t/* Send WAL statistics */\n+\t\tpgstat_send_wal();\n\nThis is not necessary because walwriter generates no WAL data?\n\n> \n>>>>\n>>>>\n>>>> +       Total amount of time that has been spent in the portion of\n>>>> +       WAL data was written to disk by backend and walwriter, in milliseconds\n>>>> +       (if <xref linkend=\"guc-track-io-timing\"/> is enabled, otherwise zero)\n>>>>\n>>>> With the patch, track_io_timing controls both IO for data files and\n>>>> WAL files. But we may want to track only either of them. So it's better\n>>>> to extend track_io_timing so that we can specify the tracking target\n>>>> in the parameter? For example, we can make track_io_timing accept\n>>>> data, wal and all. Or we should introduce new GUC for WAL, e.g.,\n>>>> track_wal_io_timing? 
Thought?\n>>>\n>>> OK, I introduced the new GUC \"track_wal_io_timing\".\n>>>\n>>>> I'm afraid that \"by backend and walwriter\" part can make us thinkg\n>>>> incorrectly that WAL writes by other processes like autovacuum\n>>>> are not tracked.\n>>>\n>>> Sorry, I removed \"by backend and walwriter\".\n>>\n>> Thanks for updating the patch!\n>>\n>> +WalUsage prevWalUsage;\n>>\n>> ISTM that we can declare this as static variable because\n>> it's used only in pgstat.c.\n> \n> Thanks, I fixed it.\n> \n>> +    memset(&walusage, 0, sizeof(WalUsage));\n>> +    WalUsageAccumDiff(&walusage, &pgWalUsage, &prevWalUsage);\n>>\n>> This memset seems unnecessary.\n> \n> I couldn't understand why this memset is unnecessary.\n> Since WalUsageAccumDiff not only calculates the difference but also adds the value,\n> I thought walusage needs to be initialized.\n\nYes, you're right! Sorry for noise...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 25 Nov 2020 20:19:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-11-25 20:19, Fujii Masao wrote:\n> On 2020/11/19 16:31, Masahiro Ikeda wrote:\n>> On 2020-11-17 11:46, Fujii Masao wrote:\n>>> On 2020/11/16 16:35, Masahiro Ikeda wrote:\n>>>> On 2020-11-12 14:58, Fujii Masao wrote:\n>>>>> On 2020/11/06 10:25, Masahiro Ikeda wrote:\n>>>>>> On 2020-10-30 11:50, Fujii Masao wrote:\n>>>>>>> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>>>>>>>> Hi,\n>>>>>>>> \n>>>>>>>> Thanks for your comments and advice. 
I updated the patch.\n>>>>>>>> \n>>>>>>>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>>>>>>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>>>>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>>>>>>>> > I see that we also need to add extra code to capture these stats (some\n>>>>>>>>>> > of which is in performance-critical path especially in\n>>>>>>>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. It might\n>>>>>>>>>> > be that it is all fine as it is very important to collect these stats\n>>>>>>>>>> > at cluster-level in spite that the same information can be gathered at\n>>>>>>>>>> > statement-level to help customers but I don't see a very strong case\n>>>>>>>>>> > for that in your proposal.\n>>>>>>>>> \n>>>>>>>>> We should avoid that duplication as possible even if the both \n>>>>>>>>> number\n>>>>>>>>> are important.\n>>>>>>>>> \n>>>>>>>>>> Also about performance, I thought there are few impacts \n>>>>>>>>>> because it\n>>>>>>>>>> increments stats in memory. 
If I can implement to reuse \n>>>>>>>>>> pgWalUsage's\n>>>>>>>>>> value which already collects these stats, there is no impact \n>>>>>>>>>> in\n>>>>>>>>>> XLogInsertRecord.\n>>>>>>>>>> For example, how about pg_stat_wal() calculates the \n>>>>>>>>>> accumulated\n>>>>>>>>>> value of wal_records, wal_fpi, and wal_bytes to use \n>>>>>>>>>> pgWalUsage's\n>>>>>>>>>> value?\n>>>>>>>>> \n>>>>>>>>> I don't think that works, but it would work that \n>>>>>>>>> pgstat_send_wal()\n>>>>>>>>> takes the difference of that values between two successive \n>>>>>>>>> calls.\n>>>>>>>>> \n>>>>>>>>> WalUsage prevWalUsage;\n>>>>>>>>> ...\n>>>>>>>>> pgstat_send_wal()\n>>>>>>>>> {\n>>>>>>>>> ..\n>>>>>>>>>    /* fill in some values using pgWalUsage */\n>>>>>>>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - \n>>>>>>>>> prevWalUsage.wal_bytes;\n>>>>>>>>>    WalStats.m_wal_records = pgWalUsage.wal_records - \n>>>>>>>>> prevWalUsage.wal_records;\n>>>>>>>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - \n>>>>>>>>> prevWalUsage.wal_fpi;\n>>>>>>>>> ...\n>>>>>>>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>>>>>>>> \n>>>>>>>>>    /* remember the current numbers */\n>>>>>>>>>    prevWalUsage = pgWalUsage;\n>>>>>>>> \n>>>>>>>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>>>>>>>> which is already defined and eliminates the extra overhead.\n>>>>>>> \n>>>>>>> +    /* fill in some values using pgWalUsage */\n>>>>>>> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes - \n>>>>>>> prevWalUsage.wal_bytes;\n>>>>>>> +    WalStats.m_wal_records = pgWalUsage.wal_records - \n>>>>>>> prevWalUsage.wal_records;\n>>>>>>> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi - \n>>>>>>> prevWalUsage.wal_fpi;\n>>>>>>> \n>>>>>>> It's better to use WalUsageAccumDiff() here?\n>>>>>> \n>>>>>> Yes, thanks. 
I fixed it.\n>>>>>> \n>>>>>>> prevWalUsage needs to be initialized with pgWalUsage?\n>>>>>>> \n>>>>>>> +                if (AmWalWriterProcess()){\n>>>>>>> +                    WalStats.m_wal_write_walwriter++;\n>>>>>>> +                }\n>>>>>>> +                else\n>>>>>>> +                {\n>>>>>>> +                    WalStats.m_wal_write_backend++;\n>>>>>>> +                }\n>>>>>>> \n>>>>>>> I think that it's better not to separate m_wal_write_xxx into two \n>>>>>>> for\n>>>>>>> walwriter and other processes. Instead, we can use one \n>>>>>>> m_wal_write_xxx\n>>>>>>> counter and make pgstat_send_wal() send also the process type to\n>>>>>>> the stats collector. Then the stats collector can accumulate the \n>>>>>>> counters\n>>>>>>> per process type if necessary. If we adopt this approach, we can \n>>>>>>> easily\n>>>>>>> extend pg_stat_wal so that any fields can be reported per process \n>>>>>>> type.\n>>>>>> \n>>>>>> I'll remove the above source code because these counters are not \n>>>>>> useful.\n>>>>>> \n>>>>>> \n>>>>>> On 2020-10-30 12:00, Fujii Masao wrote:\n>>>>>>> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>>>>>>>> Hi,\n>>>>>>>> \n>>>>>>>> I think we need to add some statistics to pg_stat_wal view.\n>>>>>>>> \n>>>>>>>> Although there are some parameter related WAL,\n>>>>>>>> there are few statistics for tuning them.\n>>>>>>>> \n>>>>>>>> I think it's better to provide the following statistics.\n>>>>>>>> Please let me know your comments.\n>>>>>>>> \n>>>>>>>> ```\n>>>>>>>> postgres=# SELECT * from pg_stat_wal;\n>>>>>>>> -[ RECORD 1 ]-------+------------------------------\n>>>>>>>> wal_records         | 2000224\n>>>>>>>> wal_fpi             | 47\n>>>>>>>> wal_bytes           | 248216337\n>>>>>>>> wal_buffers_full    | 20954\n>>>>>>>> wal_init_file       | 8\n>>>>>>>> wal_write_backend   | 20960\n>>>>>>>> wal_write_walwriter | 46\n>>>>>>>> wal_write_time      | 51\n>>>>>>>> wal_sync_backend    | 7\n>>>>>>>> wal_sync_walwriter  | 8\n>>>>>>>> 
wal_sync_time       | 0\n>>>>>>>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>>>>>>>> ```\n>>>>>>>> \n>>>>>>>> 1. Basic statistics of WAL activity\n>>>>>>>> \n>>>>>>>> - wal_records: Total number of WAL records generated\n>>>>>>>> - wal_fpi: Total number of WAL full page images generated\n>>>>>>>> - wal_bytes: Total amount of WAL bytes generated\n>>>>>>>> \n>>>>>>>> To understand DB's performance, first, we will check the \n>>>>>>>> performance\n>>>>>>>> trends for the entire database instance.\n>>>>>>>> For example, if the number of wal_fpi becomes higher, users may \n>>>>>>>> tune\n>>>>>>>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>>>>>>> \n>>>>>>>> Although users can check the above statistics via EXPLAIN, \n>>>>>>>> auto_explain,\n>>>>>>>> autovacuum and pg_stat_statements now,\n>>>>>>>> if users want to see the performance trends  for the entire \n>>>>>>>> database,\n>>>>>>>> they must recalculate the statistics.\n>>>>>>>> \n>>>>>>>> I think it is useful to add the sum of the basic statistics.\n>>>>>>>> \n>>>>>>>> \n>>>>>>>> 2.  WAL segment file creation\n>>>>>>>> \n>>>>>>>> - wal_init_file: Total number of WAL segment files created.\n>>>>>>>> \n>>>>>>>> To create a new WAL file may have an impact on the performance \n>>>>>>>> of\n>>>>>>>> a write-heavy workload generating lots of WAL. If this number is \n>>>>>>>> reported high,\n>>>>>>>> to reduce the number of this initialization, we can tune \n>>>>>>>> WAL-related parameters\n>>>>>>>> so that more \"recycled\" WAL files can be held.\n>>>>>>>> \n>>>>>>>> \n>>>>>>>> \n>>>>>>>> 3. 
Number of when WAL is flushed\n>>>>>>>> \n>>>>>>>> - wal_write_backend : Total number of WAL data written to the \n>>>>>>>> disk by backends\n>>>>>>>> - wal_write_walwriter : Total number of WAL data written to the \n>>>>>>>> disk by walwriter\n>>>>>>>> - wal_sync_backend : Total number of WAL data synced to the disk \n>>>>>>>> by backends\n>>>>>>>> - wal_sync_walwriter : Total number of WAL data synced to the \n>>>>>>>> disk by walwrite\n>>>>>>>> \n>>>>>>>> I think it's useful for tuning \"synchronous_commit\" and \n>>>>>>>> \"commit_delay\" for query executions.\n>>>>>>>> If the number of WAL is flushed is high, users can know \n>>>>>>>> \"synchronous_commit\" is useful for the workload.\n>>>>>>> \n>>>>>>> I just wonder how useful these counters are. Even without these \n>>>>>>> counters,\n>>>>>>> we already know synchronous_commit=off is likely to cause the \n>>>>>>> better\n>>>>>>> performance (but has the risk of data loss). So ISTM that these \n>>>>>>> counters are\n>>>>>>> not so useful when tuning synchronous_commit.\n>>>>>> \n>>>>>> Thanks, my understanding was wrong.\n>>>>>> I agreed that your comments.\n>>>>>> \n>>>>>> I merged the statistics of *_backend and *_walwriter.\n>>>>>> I think the sum of them is useful to calculate the average per \n>>>>>> write/sync time.\n>>>>>> For example, per write time is equals wal_write_time / wal_write.\n>>>>> \n>>>>> Understood.\n>>>>> \n>>>>> Thanks for updating the patch!\n>>>> \n>>>> Thanks for your comments.\n>>>> \n>>>>> patching file src/include/catalog/pg_proc.dat\n>>>>> Hunk #1 FAILED at 5491.\n>>>>> 1 out of 1 hunk FAILED -- saving rejects to file\n>>>>> src/include/catalog/pg_proc.dat.rej\n>>>>> \n>>>>> I got this failure when applying the patch. 
Could you update the \n>>>>> patch?\n>>>> \n>>>> Thanks, I updated the patch.\n>>>> \n>>>>> -       Number of times WAL data was written to the disk because \n>>>>> WAL\n>>>>> buffers got full\n>>>>> +       Total number of times WAL data written to the disk because \n>>>>> WAL\n>>>>> buffers got full\n>>>>> \n>>>>> Isn't \"was\" necessary between \"data\" and \"written\"?\n>>>> \n>>>> Yes, I fixed it.\n>>>> \n>>>>> +      <entry role=\"catalog_table_entry\"><para \n>>>>> role=\"column_definition\">\n>>>>> +       <structfield>wal_bytes</structfield> <type>bigint</type>\n>>>>> \n>>>>> Shouldn't the type of wal_bytes be numeric because the total number \n>>>>> of\n>>>>> WAL bytes can exceed the range of bigint? I think that the type of\n>>>>> pg_stat_statements.wal_bytes is also numeric for the same reason.\n>>>> \n>>>> Thanks, I fixed it.\n>>>> \n>>>> Since I cast the type of wal_bytes from PgStat_Counter to uint64,\n>>>> I changed the type of PgStat_MsgWal and PgStat_WalStats too.\n>>>> \n>>>>> +      <entry role=\"catalog_table_entry\"><para \n>>>>> role=\"column_definition\">\n>>>>> +       <structfield>wal_write_time</structfield> \n>>>>> <type>bigint</type>\n>>>>> \n>>>>> Shouldn't the type of wal_xxx_time be double precision,\n>>>>> like pg_stat_database.blk_write_time?\n>>>> \n>>>> Thanks, I changed it.\n>>>> \n>>>>> Even when fsync is set to off or wal_sync_method is set to \n>>>>> open_sync,\n>>>>> wal_sync is incremented. Isn't this behavior confusing?\n>>> \n>>> What do you think about this comment?\n>> \n>> Sorry, I'll change to increment wal_sync and wal_sync_time only\n>> if a specific fsync method is called.\n>> \n>>> I found that we discussed track-WAL-IO-timing feature at the past \n>>> discussion\n>>> about the similar feature [1]. But the feature was droppped from the \n>>> proposal\n>>> patch because there was the performance concern. So probably we need \n>>> to\n>>> revisit the past discussion and benchmark the performance. 
Thought?\n>>> \n>>> If track-WAL-IO-timing feature may cause performance regression,\n>>> it might be an idea to extract wal_records, wal_fpi and wal_bytes \n>>> parts\n>>> from the patch and commit it at first.\n>>> \n>>> [1]\n>>> https://postgr.es/m/CAJrrPGc6APFUGYNcPe4qcNxpL8gXKYv1KST+vwJcFtCSCEySnA@mail.gmail.com\n>> \n>> Thanks, I'll check the thread.\n>> I agree to add basic statistics at first and I attached the patch.\n> \n> Thanks!\n> \n> +\t\t/* Send WAL statistics */\n> +\t\tpgstat_send_wal();\n> \n> This is not necessary because walwriter generates no WAL data?\n\nNo, it's not necessary.\nThanks. I fixed it.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Thu, 26 Nov 2020 16:07:37 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020/11/26 16:07, Masahiro Ikeda wrote:\n> On 2020-11-25 20:19, Fujii Masao wrote:\n>> On 2020/11/19 16:31, Masahiro Ikeda wrote:\n>>> On 2020-11-17 11:46, Fujii Masao wrote:\n>>>> On 2020/11/16 16:35, Masahiro Ikeda wrote:\n>>>>> On 2020-11-12 14:58, Fujii Masao wrote:\n>>>>>> On 2020/11/06 10:25, Masahiro Ikeda wrote:\n>>>>>>> On 2020-10-30 11:50, Fujii Masao wrote:\n>>>>>>>> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>>>>>>>>> Hi,\n>>>>>>>>>\n>>>>>>>>> Thanks for your comments and advice. I updated the patch.\n>>>>>>>>>\n>>>>>>>>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>>>>>>>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>>>>>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>>>>>>>>> > I see that we also need to add extra code to capture these stats (some\n>>>>>>>>>>> > of which is in performance-critical path especially in\n>>>>>>>>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. 
It might\n>>>>>>>>>>> > be that it is all fine as it is very important to collect these stats\n>>>>>>>>>>> > at cluster-level in spite that the same information can be gathered at\n>>>>>>>>>>> > statement-level to help customers but I don't see a very strong case\n>>>>>>>>>>> > for that in your proposal.\n>>>>>>>>>>\n>>>>>>>>>> We should avoid that duplication as possible even if the both number\n>>>>>>>>>> are important.\n>>>>>>>>>>\n>>>>>>>>>>> Also about performance, I thought there are few impacts because it\n>>>>>>>>>>> increments stats in memory. If I can implement to reuse pgWalUsage's\n>>>>>>>>>>> value which already collects these stats, there is no impact in\n>>>>>>>>>>> XLogInsertRecord.\n>>>>>>>>>>> For example, how about pg_stat_wal() calculates the accumulated\n>>>>>>>>>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>>>>>>>>>> value?\n>>>>>>>>>>\n>>>>>>>>>> I don't think that works, but it would work that pgstat_send_wal()\n>>>>>>>>>> takes the difference of that values between two successive calls.\n>>>>>>>>>>\n>>>>>>>>>> WalUsage prevWalUsage;\n>>>>>>>>>> ...\n>>>>>>>>>> pgstat_send_wal()\n>>>>>>>>>> {\n>>>>>>>>>> ..\n>>>>>>>>>>    /* fill in some values using pgWalUsage */\n>>>>>>>>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - prevWalUsage.wal_bytes;\n>>>>>>>>>>    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>>>>>>>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - prevWalUsage.wal_fpi;\n>>>>>>>>>> ...\n>>>>>>>>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>>>>>>>>>\n>>>>>>>>>>    /* remember the current numbers */\n>>>>>>>>>>    prevWalUsage = pgWalUsage;\n>>>>>>>>>\n>>>>>>>>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>>>>>>>>> which is already defined and eliminates the extra overhead.\n>>>>>>>>\n>>>>>>>> +    /* fill in some values using pgWalUsage */\n>>>>>>>> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;\n>>>>>>>> +  
  WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>>>>>>> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n>>>>>>>>\n>>>>>>>> It's better to use WalUsageAccumDiff() here?\n>>>>>>>\n>>>>>>> Yes, thanks. I fixed it.\n>>>>>>>\n>>>>>>>> prevWalUsage needs to be initialized with pgWalUsage?\n>>>>>>>>\n>>>>>>>> +                if (AmWalWriterProcess()){\n>>>>>>>> +                    WalStats.m_wal_write_walwriter++;\n>>>>>>>> +                }\n>>>>>>>> +                else\n>>>>>>>> +                {\n>>>>>>>> +                    WalStats.m_wal_write_backend++;\n>>>>>>>> +                }\n>>>>>>>>\n>>>>>>>> I think that it's better not to separate m_wal_write_xxx into two for\n>>>>>>>> walwriter and other processes. Instead, we can use one m_wal_write_xxx\n>>>>>>>> counter and make pgstat_send_wal() send also the process type to\n>>>>>>>> the stats collector. Then the stats collector can accumulate the counters\n>>>>>>>> per process type if necessary. 
If we adopt this approach, we can easily\n>>>>>>>> extend pg_stat_wal so that any fields can be reported per process type.\n>>>>>>>\n>>>>>>> I'll remove the above source code because these counters are not useful.\n>>>>>>>\n>>>>>>>\n>>>>>>> On 2020-10-30 12:00, Fujii Masao wrote:\n>>>>>>>> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>>>>>>>>> Hi,\n>>>>>>>>>\n>>>>>>>>> I think we need to add some statistics to pg_stat_wal view.\n>>>>>>>>>\n>>>>>>>>> Although there are some parameter related WAL,\n>>>>>>>>> there are few statistics for tuning them.\n>>>>>>>>>\n>>>>>>>>> I think it's better to provide the following statistics.\n>>>>>>>>> Please let me know your comments.\n>>>>>>>>>\n>>>>>>>>> ```\n>>>>>>>>> postgres=# SELECT * from pg_stat_wal;\n>>>>>>>>> -[ RECORD 1 ]-------+------------------------------\n>>>>>>>>> wal_records         | 2000224\n>>>>>>>>> wal_fpi             | 47\n>>>>>>>>> wal_bytes           | 248216337\n>>>>>>>>> wal_buffers_full    | 20954\n>>>>>>>>> wal_init_file       | 8\n>>>>>>>>> wal_write_backend   | 20960\n>>>>>>>>> wal_write_walwriter | 46\n>>>>>>>>> wal_write_time      | 51\n>>>>>>>>> wal_sync_backend    | 7\n>>>>>>>>> wal_sync_walwriter  | 8\n>>>>>>>>> wal_sync_time       | 0\n>>>>>>>>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>>>>>>>>> ```\n>>>>>>>>>\n>>>>>>>>> 1. 
Basic statistics of WAL activity\n>>>>>>>>>\n>>>>>>>>> - wal_records: Total number of WAL records generated\n>>>>>>>>> - wal_fpi: Total number of WAL full page images generated\n>>>>>>>>> - wal_bytes: Total amount of WAL bytes generated\n>>>>>>>>>\n>>>>>>>>> To understand DB's performance, first, we will check the performance\n>>>>>>>>> trends for the entire database instance.\n>>>>>>>>> For example, if the number of wal_fpi becomes higher, users may tune\n>>>>>>>>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>>>>>>>>\n>>>>>>>>> Although users can check the above statistics via EXPLAIN, auto_explain,\n>>>>>>>>> autovacuum and pg_stat_statements now,\n>>>>>>>>> if users want to see the performance trends  for the entire database,\n>>>>>>>>> they must recalculate the statistics.\n>>>>>>>>>\n>>>>>>>>> I think it is useful to add the sum of the basic statistics.\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> 2.  WAL segment file creation\n>>>>>>>>>\n>>>>>>>>> - wal_init_file: Total number of WAL segment files created.\n>>>>>>>>>\n>>>>>>>>> To create a new WAL file may have an impact on the performance of\n>>>>>>>>> a write-heavy workload generating lots of WAL. If this number is reported high,\n>>>>>>>>> to reduce the number of this initialization, we can tune WAL-related parameters\n>>>>>>>>> so that more \"recycled\" WAL files can be held.\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> 3. 
Number of when WAL is flushed\n>>>>>>>>>\n>>>>>>>>> - wal_write_backend : Total number of WAL data written to the disk by backends\n>>>>>>>>> - wal_write_walwriter : Total number of WAL data written to the disk by walwriter\n>>>>>>>>> - wal_sync_backend : Total number of WAL data synced to the disk by backends\n>>>>>>>>> - wal_sync_walwriter : Total number of WAL data synced to the disk by walwrite\n>>>>>>>>>\n>>>>>>>>> I think it's useful for tuning \"synchronous_commit\" and \"commit_delay\" for query executions.\n>>>>>>>>> If the number of WAL is flushed is high, users can know \"synchronous_commit\" is useful for the workload.\n>>>>>>>>\n>>>>>>>> I just wonder how useful these counters are. Even without these counters,\n>>>>>>>> we already know synchronous_commit=off is likely to cause the better\n>>>>>>>> performance (but has the risk of data loss). So ISTM that these counters are\n>>>>>>>> not so useful when tuning synchronous_commit.\n>>>>>>>\n>>>>>>> Thanks, my understanding was wrong.\n>>>>>>> I agreed that your comments.\n>>>>>>>\n>>>>>>> I merged the statistics of *_backend and *_walwriter.\n>>>>>>> I think the sum of them is useful to calculate the average per write/sync time.\n>>>>>>> For example, per write time is equals wal_write_time / wal_write.\n>>>>>>\n>>>>>> Understood.\n>>>>>>\n>>>>>> Thanks for updating the patch!\n>>>>>\n>>>>> Thanks for your comments.\n>>>>>\n>>>>>> patching file src/include/catalog/pg_proc.dat\n>>>>>> Hunk #1 FAILED at 5491.\n>>>>>> 1 out of 1 hunk FAILED -- saving rejects to file\n>>>>>> src/include/catalog/pg_proc.dat.rej\n>>>>>>\n>>>>>> I got this failure when applying the patch. 
Could you update the patch?\n>>>>>\n>>>>> Thanks, I updated the patch.\n>>>>>\n>>>>>> -       Number of times WAL data was written to the disk because WAL\n>>>>>> buffers got full\n>>>>>> +       Total number of times WAL data written to the disk because WAL\n>>>>>> buffers got full\n>>>>>>\n>>>>>> Isn't \"was\" necessary between \"data\" and \"written\"?\n>>>>>\n>>>>> Yes, I fixed it.\n>>>>>\n>>>>>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>>>>>> +       <structfield>wal_bytes</structfield> <type>bigint</type>\n>>>>>>\n>>>>>> Shouldn't the type of wal_bytes be numeric because the total number of\n>>>>>> WAL bytes can exceed the range of bigint? I think that the type of\n>>>>>> pg_stat_statements.wal_bytes is also numeric for the same reason.\n>>>>>\n>>>>> Thanks, I fixed it.\n>>>>>\n>>>>> Since I cast the type of wal_bytes from PgStat_Counter to uint64,\n>>>>> I changed the type of PgStat_MsgWal and PgStat_WalStats too.\n>>>>>\n>>>>>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>>>>>> +       <structfield>wal_write_time</structfield> <type>bigint</type>\n>>>>>>\n>>>>>> Shouldn't the type of wal_xxx_time be double precision,\n>>>>>> like pg_stat_database.blk_write_time?\n>>>>>\n>>>>> Thanks, I changed it.\n>>>>>\n>>>>>> Even when fsync is set to off or wal_sync_method is set to open_sync,\n>>>>>> wal_sync is incremented. Isn't this behavior confusing?\n>>>>\n>>>> What do you think about this comment?\n>>>\n>>> Sorry, I'll change to increment wal_sync and wal_sync_time only\n>>> if a specific fsync method is called.\n>>>\n>>>> I found that we discussed track-WAL-IO-timing feature at the past discussion\n>>>> about the similar feature [1]. But the feature was droppped from the proposal\n>>>> patch because there was the performance concern. So probably we need to\n>>>> revisit the past discussion and benchmark the performance. 
Thought?\n>>>>\n>>>> If track-WAL-IO-timing feature may cause performance regression,\n>>>> it might be an idea to extract wal_records, wal_fpi and wal_bytes parts\n>>>> from the patch and commit it at first.\n>>>>\n>>>> [1]\n>>>> https://postgr.es/m/CAJrrPGc6APFUGYNcPe4qcNxpL8gXKYv1KST+vwJcFtCSCEySnA@mail.gmail.com\n>>>\n>>> Thanks, I'll check the thread.\n>>> I agree to add basic statistics at first and I attached the patch.\n>>\n>> Thanks!\n>>\n>> +        /* Send WAL statistics */\n>> +        pgstat_send_wal();\n>>\n>> This is not necessary because walwriter generates no WAL data?\n> \n> No, it's not necessary.\n> Thanks. I fixed it.\n\nThanks for updating the patch! I applied cosmetic changes to it.\nFor example, I added more comments. Patch attached.\nBarring no objection, I will commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 1 Dec 2020 14:01:41 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "\n\nOn 2020/12/01 14:01, Fujii Masao wrote:\n> \n> \n> On 2020/11/26 16:07, Masahiro Ikeda wrote:\n>> On 2020-11-25 20:19, Fujii Masao wrote:\n>>> On 2020/11/19 16:31, Masahiro Ikeda wrote:\n>>>> On 2020-11-17 11:46, Fujii Masao wrote:\n>>>>> On 2020/11/16 16:35, Masahiro Ikeda wrote:\n>>>>>> On 2020-11-12 14:58, Fujii Masao wrote:\n>>>>>>> On 2020/11/06 10:25, Masahiro Ikeda wrote:\n>>>>>>>> On 2020-10-30 11:50, Fujii Masao wrote:\n>>>>>>>>> On 2020/10/29 17:03, Masahiro Ikeda wrote:\n>>>>>>>>>> Hi,\n>>>>>>>>>>\n>>>>>>>>>> Thanks for your comments and advice. 
I updated the patch.\n>>>>>>>>>>\n>>>>>>>>>> On 2020-10-21 18:03, Kyotaro Horiguchi wrote:\n>>>>>>>>>>> At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda\n>>>>>>>>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>>>>>>>>> On 2020-10-20 12:46, Amit Kapila wrote:\n>>>>>>>>>>>> > I see that we also need to add extra code to capture these stats (some\n>>>>>>>>>>>> > of which is in performance-critical path especially in\n>>>>>>>>>>>> > XLogInsertRecord) which again makes me a bit uncomfortable. It might\n>>>>>>>>>>>> > be that it is all fine as it is very important to collect these stats\n>>>>>>>>>>>> > at cluster-level in spite that the same information can be gathered at\n>>>>>>>>>>>> > statement-level to help customers but I don't see a very strong case\n>>>>>>>>>>>> > for that in your proposal.\n>>>>>>>>>>>\n>>>>>>>>>>> We should avoid that duplication as possible even if the both number\n>>>>>>>>>>> are important.\n>>>>>>>>>>>\n>>>>>>>>>>>> Also about performance, I thought there are few impacts because it\n>>>>>>>>>>>> increments stats in memory. 
If I can implement to reuse pgWalUsage's\n>>>>>>>>>>>> value which already collects these stats, there is no impact in\n>>>>>>>>>>>> XLogInsertRecord.\n>>>>>>>>>>>> For example, how about pg_stat_wal() calculates the accumulated\n>>>>>>>>>>>> value of wal_records, wal_fpi, and wal_bytes to use pgWalUsage's\n>>>>>>>>>>>> value?\n>>>>>>>>>>>\n>>>>>>>>>>> I don't think that works, but it would work that pgstat_send_wal()\n>>>>>>>>>>> takes the difference of that values between two successive calls.\n>>>>>>>>>>>\n>>>>>>>>>>> WalUsage prevWalUsage;\n>>>>>>>>>>> ...\n>>>>>>>>>>> pgstat_send_wal()\n>>>>>>>>>>> {\n>>>>>>>>>>> ..\n>>>>>>>>>>>    /* fill in some values using pgWalUsage */\n>>>>>>>>>>>    WalStats.m_wal_bytes   = pgWalUsage.wal_bytes   - prevWalUsage.wal_bytes;\n>>>>>>>>>>>    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>>>>>>>>>>    WalStats.m_wal_wal_fpi = pgWalUsage.wal_fpi     - prevWalUsage.wal_fpi;\n>>>>>>>>>>> ...\n>>>>>>>>>>>    pgstat_send(&WalStats, sizeof(WalStats));\n>>>>>>>>>>>\n>>>>>>>>>>>    /* remember the current numbers */\n>>>>>>>>>>>    prevWalUsage = pgWalUsage;\n>>>>>>>>>>\n>>>>>>>>>> Thanks for Horiguchi-san's advice, I changed to reuse pgWalUsage\n>>>>>>>>>> which is already defined and eliminates the extra overhead.\n>>>>>>>>>\n>>>>>>>>> +    /* fill in some values using pgWalUsage */\n>>>>>>>>> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;\n>>>>>>>>> +    WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;\n>>>>>>>>> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;\n>>>>>>>>>\n>>>>>>>>> It's better to use WalUsageAccumDiff() here?\n>>>>>>>>\n>>>>>>>> Yes, thanks. 
I fixed it.\n>>>>>>>>\n>>>>>>>>> prevWalUsage needs to be initialized with pgWalUsage?\n>>>>>>>>>\n>>>>>>>>> +                if (AmWalWriterProcess()){\n>>>>>>>>> +                    WalStats.m_wal_write_walwriter++;\n>>>>>>>>> +                }\n>>>>>>>>> +                else\n>>>>>>>>> +                {\n>>>>>>>>> +                    WalStats.m_wal_write_backend++;\n>>>>>>>>> +                }\n>>>>>>>>>\n>>>>>>>>> I think that it's better not to separate m_wal_write_xxx into two for\n>>>>>>>>> walwriter and other processes. Instead, we can use one m_wal_write_xxx\n>>>>>>>>> counter and make pgstat_send_wal() send also the process type to\n>>>>>>>>> the stats collector. Then the stats collector can accumulate the counters\n>>>>>>>>> per process type if necessary. If we adopt this approach, we can easily\n>>>>>>>>> extend pg_stat_wal so that any fields can be reported per process type.\n>>>>>>>>\n>>>>>>>> I'll remove the above source code because these counters are not useful.\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On 2020-10-30 12:00, Fujii Masao wrote:\n>>>>>>>>> On 2020/10/20 11:31, Masahiro Ikeda wrote:\n>>>>>>>>>> Hi,\n>>>>>>>>>>\n>>>>>>>>>> I think we need to add some statistics to pg_stat_wal view.\n>>>>>>>>>>\n>>>>>>>>>> Although there are some parameter related WAL,\n>>>>>>>>>> there are few statistics for tuning them.\n>>>>>>>>>>\n>>>>>>>>>> I think it's better to provide the following statistics.\n>>>>>>>>>> Please let me know your comments.\n>>>>>>>>>>\n>>>>>>>>>> ```\n>>>>>>>>>> postgres=# SELECT * from pg_stat_wal;\n>>>>>>>>>> -[ RECORD 1 ]-------+------------------------------\n>>>>>>>>>> wal_records         | 2000224\n>>>>>>>>>> wal_fpi             | 47\n>>>>>>>>>> wal_bytes           | 248216337\n>>>>>>>>>> wal_buffers_full    | 20954\n>>>>>>>>>> wal_init_file       | 8\n>>>>>>>>>> wal_write_backend   | 20960\n>>>>>>>>>> wal_write_walwriter | 46\n>>>>>>>>>> wal_write_time      | 51\n>>>>>>>>>> wal_sync_backend    | 7\n>>>>>>>>>> wal_sync_walwriter  
| 8\n>>>>>>>>>> wal_sync_time       | 0\n>>>>>>>>>> stats_reset         | 2020-10-20 11:04:51.307771+09\n>>>>>>>>>> ```\n>>>>>>>>>>\n>>>>>>>>>> 1. Basic statistics of WAL activity\n>>>>>>>>>>\n>>>>>>>>>> - wal_records: Total number of WAL records generated\n>>>>>>>>>> - wal_fpi: Total number of WAL full page images generated\n>>>>>>>>>> - wal_bytes: Total amount of WAL bytes generated\n>>>>>>>>>>\n>>>>>>>>>> To understand DB's performance, first, we will check the performance\n>>>>>>>>>> trends for the entire database instance.\n>>>>>>>>>> For example, if the number of wal_fpi becomes higher, users may tune\n>>>>>>>>>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>>>>>>>>>\n>>>>>>>>>> Although users can check the above statistics via EXPLAIN, auto_explain,\n>>>>>>>>>> autovacuum and pg_stat_statements now,\n>>>>>>>>>> if users want to see the performance trends  for the entire database,\n>>>>>>>>>> they must recalculate the statistics.\n>>>>>>>>>>\n>>>>>>>>>> I think it is useful to add the sum of the basic statistics.\n>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> 2.  WAL segment file creation\n>>>>>>>>>>\n>>>>>>>>>> - wal_init_file: Total number of WAL segment files created.\n>>>>>>>>>>\n>>>>>>>>>> To create a new WAL file may have an impact on the performance of\n>>>>>>>>>> a write-heavy workload generating lots of WAL. If this number is reported high,\n>>>>>>>>>> to reduce the number of this initialization, we can tune WAL-related parameters\n>>>>>>>>>> so that more \"recycled\" WAL files can be held.\n>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> 3. 
Number of when WAL is flushed\n>>>>>>>>>>\n>>>>>>>>>> - wal_write_backend : Total number of WAL data written to the disk by backends\n>>>>>>>>>> - wal_write_walwriter : Total number of WAL data written to the disk by walwriter\n>>>>>>>>>> - wal_sync_backend : Total number of WAL data synced to the disk by backends\n>>>>>>>>>> - wal_sync_walwriter : Total number of WAL data synced to the disk by walwrite\n>>>>>>>>>>\n>>>>>>>>>> I think it's useful for tuning \"synchronous_commit\" and \"commit_delay\" for query executions.\n>>>>>>>>>> If the number of WAL is flushed is high, users can know \"synchronous_commit\" is useful for the workload.\n>>>>>>>>>\n>>>>>>>>> I just wonder how useful these counters are. Even without these counters,\n>>>>>>>>> we already know synchronous_commit=off is likely to cause the better\n>>>>>>>>> performance (but has the risk of data loss). So ISTM that these counters are\n>>>>>>>>> not so useful when tuning synchronous_commit.\n>>>>>>>>\n>>>>>>>> Thanks, my understanding was wrong.\n>>>>>>>> I agreed that your comments.\n>>>>>>>>\n>>>>>>>> I merged the statistics of *_backend and *_walwriter.\n>>>>>>>> I think the sum of them is useful to calculate the average per write/sync time.\n>>>>>>>> For example, per write time is equals wal_write_time / wal_write.\n>>>>>>>\n>>>>>>> Understood.\n>>>>>>>\n>>>>>>> Thanks for updating the patch!\n>>>>>>\n>>>>>> Thanks for your comments.\n>>>>>>\n>>>>>>> patching file src/include/catalog/pg_proc.dat\n>>>>>>> Hunk #1 FAILED at 5491.\n>>>>>>> 1 out of 1 hunk FAILED -- saving rejects to file\n>>>>>>> src/include/catalog/pg_proc.dat.rej\n>>>>>>>\n>>>>>>> I got this failure when applying the patch. 
Could you update the patch?\n>>>>>>\n>>>>>> Thanks, I updated the patch.\n>>>>>>\n>>>>>>> -       Number of times WAL data was written to the disk because WAL\n>>>>>>> buffers got full\n>>>>>>> +       Total number of times WAL data written to the disk because WAL\n>>>>>>> buffers got full\n>>>>>>>\n>>>>>>> Isn't \"was\" necessary between \"data\" and \"written\"?\n>>>>>>\n>>>>>> Yes, I fixed it.\n>>>>>>\n>>>>>>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>>>>>>> +       <structfield>wal_bytes</structfield> <type>bigint</type>\n>>>>>>>\n>>>>>>> Shouldn't the type of wal_bytes be numeric because the total number of\n>>>>>>> WAL bytes can exceed the range of bigint? I think that the type of\n>>>>>>> pg_stat_statements.wal_bytes is also numeric for the same reason.\n>>>>>>\n>>>>>> Thanks, I fixed it.\n>>>>>>\n>>>>>> Since I cast the type of wal_bytes from PgStat_Counter to uint64,\n>>>>>> I changed the type of PgStat_MsgWal and PgStat_WalStats too.\n>>>>>>\n>>>>>>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>>>>>>> +       <structfield>wal_write_time</structfield> <type>bigint</type>\n>>>>>>>\n>>>>>>> Shouldn't the type of wal_xxx_time be double precision,\n>>>>>>> like pg_stat_database.blk_write_time?\n>>>>>>\n>>>>>> Thanks, I changed it.\n>>>>>>\n>>>>>>> Even when fsync is set to off or wal_sync_method is set to open_sync,\n>>>>>>> wal_sync is incremented. Isn't this behavior confusing?\n>>>>>\n>>>>> What do you think about this comment?\n>>>>\n>>>> Sorry, I'll change to increment wal_sync and wal_sync_time only\n>>>> if a specific fsync method is called.\n>>>>\n>>>>> I found that we discussed track-WAL-IO-timing feature at the past discussion\n>>>>> about the similar feature [1]. But the feature was droppped from the proposal\n>>>>> patch because there was the performance concern. So probably we need to\n>>>>> revisit the past discussion and benchmark the performance. 
Thought?\n>>>>>\n>>>>> If track-WAL-IO-timing feature may cause performance regression,\n>>>>> it might be an idea to extract wal_records, wal_fpi and wal_bytes parts\n>>>>> from the patch and commit it at first.\n>>>>>\n>>>>> [1]\n>>>>> https://postgr.es/m/CAJrrPGc6APFUGYNcPe4qcNxpL8gXKYv1KST+vwJcFtCSCEySnA@mail.gmail.com\n>>>>\n>>>> Thanks, I'll check the thread.\n>>>> I agree to add basic statistics at first and I attached the patch.\n>>>\n>>> Thanks!\n>>>\n>>> +        /* Send WAL statistics */\n>>> +        pgstat_send_wal();\n>>>\n>>> This is not necessary because walwriter generates no WAL data?\n>>\n>> No, it's not necessary.\n>> Thanks. I fixed it.\n> \n> Thanks for updating the patch! I applied cosmetic changes to it.\n> For example, I added more comments. Patch attached.\n> Barring no objection, I will commit this patch.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 2 Dec 2020 13:52:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "Hi,\n\nOn 2020-12-02 13:52:43 +0900, Fujii Masao wrote:\n> Pushed. Thanks!\n\nWhy are wal_records/fpi long, instead of uint64?\n\tlong\t\twal_records;\t/* # of WAL records produced */\n\tlong\t\twal_fpi;\t\t/* # of WAL full page images produced */\n\tuint64\t\twal_bytes;\t\t/* size of WAL records produced */\n\nlong is only 4 byte e.g. on windows, and it is entirely possible to wrap\na 4 byte record counter. It's also somewhat weird that wal_bytes is\nunsigned, but the others are signed?\n\nThis is made doubly weird because on the SQL level you chose to make\nwal_records, wal_fpi bigint. 
And wal_bytes numeric?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Dec 2020 13:16:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "Hi,\n\nOn 2020-12-21 13:16:50 -0800, Andres Freund wrote:\n> On 2020-12-02 13:52:43 +0900, Fujii Masao wrote:\n> > Pushed. Thanks!\n>\n> Why are wal_records/fpi long, instead of uint64?\n> \tlong\t\twal_records;\t/* # of WAL records produced */\n> \tlong\t\twal_fpi;\t\t/* # of WAL full page images produced */\n> \tuint64\t\twal_bytes;\t\t/* size of WAL records produced */\n>\n> long is only 4 byte e.g. on windows, and it is entirely possible to wrap\n> a 4 byte record counter. It's also somewhat weird that wal_bytes is\n> unsigned, but the others are signed?\n>\n> This is made doubly weird because on the SQL level you chose to make\n> wal_records, wal_fpi bigint. And wal_bytes numeric?\n\nSome more things:\n- There's both PgStat_MsgWal WalStats; and static PgStat_WalStats walStats;\n that seems *WAY* too confusing. And the former imo shouldn't be\n global.\n- AdvanceXLInsertBuffer() does WalStats.m_wal_buffers_full, but as far\n as I can tell there's nothing actually sending that?\n\n- Andres\n\n\n", "msg_date": "Mon, 21 Dec 2020 16:39:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "Thanks for your comments.\n\nOn 2020-12-22 09:39, Andres Freund wrote:\n> Hi,\n> \n> On 2020-12-21 13:16:50 -0800, Andres Freund wrote:\n>> On 2020-12-02 13:52:43 +0900, Fujii Masao wrote:\n>> > Pushed. 
Thanks!\n>> \n>> Why are wal_records/fpi long, instead of uint64?\n>> \tlong\t\twal_records;\t/* # of WAL records produced */\n>> \tlong\t\twal_fpi;\t\t/* # of WAL full page images produced */\n>> \tuint64\t\twal_bytes;\t\t/* size of WAL records produced */\n>> \n>> long is only 4 byte e.g. on windows, and it is entirely possible to \n>> wrap\n>> a 4 byte record counter. It's also somewhat weird that wal_bytes is\n>> unsigned, but the others are signed?\n>> \n>> This is made doubly weird because on the SQL level you chose to make\n>> wal_records, wal_fpi bigint. And wal_bytes numeric?\n\nI'm sorry I don't know the reason.\n\nThe following thread is related to the patch and the type of wal_bytes\nis changed from long to uint64 because XLogRecPtr is uint64.\nhttps://www.postgresql.org/message-id/flat/20200402144438.GF64485%40nol#1f0127c98df430104c63426fdc285c20\n\nI assumed that the reason why the type of wal_records/fpi is long\nis BufferUsage have the members (i.e, shared_blks_hit) of the same \ntypes.\n\nSo, I think it's better if to change the type of wal_records/fpi from \nlong to uint64,\nto change the types of BufferUsage's members too.\n\n\n> Some more things:\n> - There's both PgStat_MsgWal WalStats; and static PgStat_WalStats \n> walStats;\n> that seems *WAY* too confusing. 
And the former imo shouldn't be\n> global.\n\nSorry for the confusing name.\nI referenced the following variable name.\n\n static PgStat_MsgSLRU SLRUStats[SLRU_NUM_ELEMENTS];\n static PgStat_SLRUStats slruStats[SLRU_NUM_ELEMENTS];\n\nHow about to change from \"PgStat_MsgWal WalStats\"\nto \"PgStat_MsgWal MsgWalStats\"?\n\nIs it better to change the following name too?\n \"PgStat_MsgBgWriter BgWriterStats;\"\n \"static PgStat_MsgSLRU SLRUStats[SLRU_NUM_ELEMENTS];\"\n\nSince PgStat_MsgWal is called in xlog.c and pgstat.c,\nI thought it's should be global.\n\n> - AdvanceXLInsertBuffer() does WalStats.m_wal_buffers_full, but as far\n> as I can tell there's nothing actually sending that?\n\nIIUC, when pgstat_send_wal() is called by backends and so on,\nit is sent to the statistic collector and it is exposed via pg_stat_wal \nview.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 22 Dec 2020 11:16:43 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" }, { "msg_contents": "On 2020-12-22 11:16, Masahiro Ikeda wrote:\n> Thanks for your comments.\n> \n> On 2020-12-22 09:39, Andres Freund wrote:\n>> Hi,\n>> \n>> On 2020-12-21 13:16:50 -0800, Andres Freund wrote:\n>>> On 2020-12-02 13:52:43 +0900, Fujii Masao wrote:\n>>> > Pushed. Thanks!\n>>> \n>>> Why are wal_records/fpi long, instead of uint64?\n>>> \tlong\t\twal_records;\t/* # of WAL records produced */\n>>> \tlong\t\twal_fpi;\t\t/* # of WAL full page images produced */\n>>> \tuint64\t\twal_bytes;\t\t/* size of WAL records produced */\n>>> \n>>> long is only 4 byte e.g. on windows, and it is entirely possible to \n>>> wrap\n>>> a 4 byte record counter. It's also somewhat weird that wal_bytes is\n>>> unsigned, but the others are signed?\n>>> \n>>> This is made doubly weird because on the SQL level you chose to make\n>>> wal_records, wal_fpi bigint. 
And wal_bytes numeric?\n> \n> I'm sorry I don't know the reason.\n> \n> The following thread is related to the patch and the type of wal_bytes\n> is changed from long to uint64 because XLogRecPtr is uint64.\n> https://www.postgresql.org/message-id/flat/20200402144438.GF64485%40nol#1f0127c98df430104c63426fdc285c20\n> \n> I assumed that the reason why the type of wal_records/fpi is long\n> is BufferUsage have the members (i.e, shared_blks_hit) of the same \n> types.\n> \n> So, I think it's better if to change the type of wal_records/fpi from\n> long to uint64,\n> to change the types of BufferUsage's members too.\n\nI've done a little more research so I'll share the results.\n\nIIUC, theoretically this could lead to undercounting the statistics,\nbut actually, it doesn't happen.\n\nThe above \"wal_records\", \"wal_fpi\" are accumulation values and when \nWalUsageAccumDiff()\nis called, we can know how many wals are generated for specific \nexecutions,\nfor example, planning/executing a query, processing a utility command, \nand vacuuming one relation.\n\nThe following variable has accumulated \"wal_records\" and \"wal_fpi\" per \nprocess.\n\n```\ntypedef struct WalUsage\n{\n\tlong\t\twal_records;\t/* # of WAL records produced */\n\tlong\t\twal_fpi;\t\t/* # of WAL full page images produced */\n\tuint64\t\twal_bytes;\t\t/* size of WAL records produced */\n} WalUsage;\n\nWalUsage\tpgWalUsage;\n```\n\nAlthough this may overflow, it doesn't affect calculating the \ndifference\nof wal usage between some execution points. If more than 2 \nbillion wal\nrecords were generated per execution, 4 bytes would not be enough and collected statistics \nwould be\nlost, but I don't think that happens.\n\n\nIn addition, \"wal_records\" and \"wal_fpi\" values sent by processes are\naccumulated in the statistics collector and their types are \nPgStat_Counter (int64).\n\n```\ntypedef struct PgStat_WalStats\n{\n\tPgStat_Counter wal_records;\n\tPgStat_Counter wal_fpi;\n\tuint64\t\twal_bytes;\n\tPgStat_Counter wal_buffers_full;\n\tTimestampTz stat_reset_timestamp;\n} PgStat_WalStats;\n```\n\n\n>> Some more things:\n>> - There's both PgStat_MsgWal WalStats; and static PgStat_WalStats \n>> walStats;\n>> that seems *WAY* too confusing. And the former imo shouldn't be\n>> global.\n> \n> Sorry for the confusing name.\n> I referenced the following variable name.\n> \n> static PgStat_MsgSLRU SLRUStats[SLRU_NUM_ELEMENTS];\n> static PgStat_SLRUStats slruStats[SLRU_NUM_ELEMENTS];\n> \n> How about to change from \"PgStat_MsgWal WalStats\"\n> to \"PgStat_MsgWal MsgWalStats\"?\n> \n> Is it better to change the following name too?\n> \"PgStat_MsgBgWriter BgWriterStats;\"\n> \"static PgStat_MsgSLRU SLRUStats[SLRU_NUM_ELEMENTS];\"\n> \n> Since PgStat_MsgWal is called in xlog.c and pgstat.c,\n> I thought it's should be global.\n\nI made an attached patch to rename the above variable names.\nWhat do you think?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Wed, 20 Jan 2021 12:48:27 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add statistics to pg_stat_wal view for wal related parameter\n tuning" } ]
[ { "msg_contents": "While working on another patch, I figured adding a \nselect_common_typmod() to go along with select_common_type() and \nselect_common_collation() would be handy. Typmods were previously \ncombined using hand-coded logic in several places, and not at all in \nother places. The logic in select_common_typmod() isn't very exciting, \nbut it makes the code more compact and readable in a few locations, and \nin the future we can perhaps do more complicated things if desired.\n\nThere might have been a tiny bug in transformValuesClause() because \nwhile consolidating the typmods it does not take into account whether \nthe types are actually the same (as more correctly done in \ntransformSetOperationTree() and buildMergedJoinVar()).\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 20 Oct 2020 10:58:18 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "select_common_typmod" }, { "msg_contents": "On 20/10/2020 11:58, Peter Eisentraut wrote:\n> While working on another patch, I figured adding a\n> select_common_typmod() to go along with select_common_type() and\n> select_common_collation() would be handy. Typmods were previously\n> combined using hand-coded logic in several places, and not at all in\n> other places. The logic in select_common_typmod() isn't very exciting,\n> but it makes the code more compact and readable in a few locations, and\n> in the future we can perhaps do more complicated things if desired.\n\nMakes sense.\n\n> There might have been a tiny bug in transformValuesClause() because\n> while consolidating the typmods it does not take into account whether\n> the types are actually the same (as more correctly done in\n> transformSetOperationTree() and buildMergedJoinVar()).\n\nHmm, it seems so, but I could not come up with a test case to reach that \ncodepath. 
I think you'd need to create two types in the same \ntypcategory, but with different and incompatible typmod formats.\n\nThe patch also adds typmod resolution for hypothetical ordered-set \naggregate arguments. I couldn't come up with a test case that would \ntickle that codepath either, but it seems like a sensible change. You \nmight want to mention it in the commit message though.\n\n- Heikki\n\n\n", "msg_date": "Mon, 26 Oct 2020 11:05:11 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: select_common_typmod" }, { "msg_contents": "On 2020-10-26 10:05, Heikki Linnakangas wrote:\n>> There might have been a tiny bug in transformValuesClause() because\n>> while consolidating the typmods it does not take into account whether\n>> the types are actually the same (as more correctly done in\n>> transformSetOperationTree() and buildMergedJoinVar()).\n> \n> Hmm, it seems so, but I could not come up with a test case to reach that\n> codepath. I think you'd need to create two types in the same\n> typcategory, but with different and incompatible typmod formats.\n\nYeah, something like that.\n\n> The patch also adds typmod resolution for hypothetical ordered-set\n> aggregate arguments. I couldn't come up with a test case that would\n> tickle that codepath either, but it seems like a sensible change. You\n> might want to mention it in the commit message though.\n\nOK, committed with that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 27 Oct 2020 18:12:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: select_common_typmod" } ]
[ { "msg_contents": "Hello,\nAfter restoring 'directory' backup with pg_restore (PostgreSQL 10.6) I've\ngot a message:\npg_restore: [directory archiver] could not close data file: Success\npg_restore: [parallel archiver] a worker process died unexpectedly\nIn this thread:\nhttps://www.postgresql.org/message-id/CAFcNs%2Bos5ExGvXMBrBBzzuJJamoHt5-zdJdxX39nkVG0KUxwsw%40mail.gmail.com\nthere is only one answer. I'm interested: is it normal behavior of\npg_restore, and was the backup restored normally or not?\n\nRegards, Andrii", "msg_date": "Tue, 20 Oct 2020 13:48:25 +0300", "msg_from": "Andrii Tkach <and7ua@gmail.com>", "msg_from_op": true, "msg_subject": "Error in pg_restore (could not close data file: Success)" }, { "msg_contents": "At Tue, 20 Oct 2020 13:48:25 +0300, Andrii Tkach <and7ua@gmail.com> wrote in \n> Hello,\n> After restoring 'directory' backup with pg_restore (PostgreSQL 10.6) I've\n> got a message:\n> pg_restore: [directory archiver] could not close data file: Success\n> pg_restore: [parallel archiver] a worker process died unexpectedly\n> In this thread:\n> https://www.postgresql.org/message-id/CAFcNs%2Bos5ExGvXMBrBBzzuJJamoHt5-zdJdxX39nkVG0KUxwsw%40mail.gmail.com\n> there is only one answer. I'm interested: is it normal behavior of\n> pg_restore, and was the backup restored normally or not?\n\nThat would be a broken compressed file, maybe caused by disk full.\n\nThis reminded me of a thread. 
The issue above seems to be the same\nwith this:\n\nhttps://www.postgresql.org/message-id/flat/20200416.181945.759179589924840062.horikyota.ntt%40gmail.com#ed85c5fda64873c45811be4e3027a2ea\n\nMe> Hmm. Sounds reasonable. I'm going to do that. Thanks!\n\nBut somehow that haven't happened, I'll come up with a new version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 21 Oct 2020 13:45:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error in pg_restore (could not close data file: Success)" }, { "msg_contents": "At Wed, 21 Oct 2020 13:45:15 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> https://www.postgresql.org/message-id/flat/20200416.181945.759179589924840062.horikyota.ntt%40gmail.com#ed85c5fda64873c45811be4e3027a2ea\n> \n> Me> Hmm. Sounds reasonable. I'm going to do that. Thanks!\n> \n> But somehow that haven't happened, I'll come up with a new version.\n\npg_restore shows the following error instead of \"Success\" for broken\ncompressed file.\n\npg_restore: error: could not close data file \"d/3149.dat\": zlib error: error reading or writing compressed file\n\n\n0001:\n\ncfclose() calls fatal() instead of returning the result to the callers\non error, which isobviously okay for all existing callers that are\nhandling errors from the function. Other callers ignored the returned\nvalue but we should fatal() on error of the function.\n\nAt least for me, gzerror doesn't return a message (specifically,\nreturns NULL) after gzclose failure so currently cfclose shows its own\nmessages for erros of gzclose(). Am I missing something?\n\n0002:\n\ncfread has the same code with get_cfp_error() and cfgetc uses\nsterror() after gzgetc(). It would be suitable for a separate patch,\nbut 0002 fixes those bugs. 
I changed _EndBlob() to show the cause of\nan error.\n\nDid not do in this patch:\n\nWe could do further small refactoring to remove temporary variables in\npg_backup_directory.c for _StartData(), InitArchiveFmt_Directory,\n_LoadBlobs(), _StartBlobs() and _CloseArchive(), but I left them as is\nfor the ease of back-patching.\n\nNow that we have the file name in the context variable so we could\nshow the file name in all error messages, but that change was large\nand there's a part where that change is a bit more complex so I didn't\ndo that.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 21 Oct 2020 15:20:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error in pg_restore (could not close data file: Success)" }, { "msg_contents": "Maybe it would be better to commit this patches to mainstream, but I don't\nrealy know.\n\nср, 21 окт. 2020 г. в 09:20, Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n\n> At Wed, 21 Oct 2020 13:45:15 +0900 (JST), Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote in\n> >\n> https://www.postgresql.org/message-id/flat/20200416.181945.759179589924840062.horikyota.ntt%40gmail.com#ed85c5fda64873c45811be4e3027a2ea\n> >\n> > Me> Hmm. Sounds reasonable. I'm going to do that. Thanks!\n> >\n> > But somehow that haven't happened, I'll come up with a new version.\n>\n> pg_restore shows the following error instead of \"Success\" for broken\n> compressed file.\n>\n> pg_restore: error: could not close data file \"d/3149.dat\": zlib error:\n> error reading or writing compressed file\n>\n>\n> 0001:\n>\n> cfclose() calls fatal() instead of returning the result to the callers\n> on error, which isobviously okay for all existing callers that are\n> handling errors from the function. 
Other callers ignored the returned\n> value but we should fatal() on error of the function.\n>\n> At least for me, gzerror doesn't return a message (specifically,\n> returns NULL) after gzclose failure so currently cfclose shows its own\n> messages for erros of gzclose(). Am I missing something?\n>\n> 0002:\n>\n> cfread has the same code with get_cfp_error() and cfgetc uses\n> sterror() after gzgetc(). It would be suitable for a separate patch,\n> but 0002 fixes those bugs. I changed _EndBlob() to show the cause of\n> an error.\n>\n> Did not do in this patch:\n>\n> We could do further small refactoring to remove temporary variables in\n> pg_backup_directory.c for _StartData(), InitArchiveFmt_Directory,\n> _LoadBlobs(), _StartBlobs() and _CloseArchive(), but I left them as is\n> for the ease of back-patching.\n>\n> Now that we have the file name in the context variable so we could\n> show the file name in all error messages, but that change was large\n> and there's a part where that change is a bit more complex so I didn't\n> do that.\n>\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>", "msg_date": "Fri, 23 Oct 2020 11:43:55 +0300", "msg_from": "Andrii Tkach <and7ua@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Error in pg_restore (could not close data file: Success)" } ]
[ { "msg_contents": "I noticed a few days ago that method backup() in PostgresNode uses\npg_basebackup without specifying a checkpoint mode -- and the default is\na spread checkpoint, which may cause any tests that use that to take\nslightly longer than the bare minimum.\n\nI propose to make it use a fast checkpoint, as per the attached.\n\n-- \nÁlvaro Herrera", "msg_date": "Tue, 20 Oct 2020 12:01:00 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "PostgresNode::backup uses spread checkpoint?" }, { "msg_contents": "On 10/20/20 11:01 AM, Alvaro Herrera wrote:\n> I noticed a few days ago that method backup() in PostgresNode uses\n> pg_basebackup without specifying a checkpoint mode -- and the default is\n> a spread checkpoint, which may cause any tests that use that to take\n> slightly longer than the bare minimum.\n> \n> I propose to make it use a fast checkpoint, as per the attached.\n\n+1.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 20 Oct 2020 11:13:34 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: PostgresNode::backup uses spread checkpoint?" 
}, { "msg_contents": "On Tue, Oct 20, 2020 at 11:13:34AM -0400, David Steele wrote:\n> On 10/20/20 11:01 AM, Alvaro Herrera wrote:\n>> I noticed a few days ago that method backup() in PostgresNode uses\n>> pg_basebackup without specifying a checkpoint mode -- and the default is\n>> a spread checkpoint, which may cause any tests that use that to take\n>> slightly longer than the bare minimum.\n>> \n>> I propose to make it use a fast checkpoint, as per the attached.\n> \n> +1.\n\n+1.\n\n- $self->host, '-p', $self->port, '--no-sync');\n+ $self->host, '-p', $self->port, '-cfast', '--no-sync');\n\nSome nits: I would recommend to use the long option name, and list\nthe option name and its value as two separate arguments of the\ncommand.\n--\nMichael", "msg_date": "Wed, 21 Oct 2020 07:55:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PostgresNode::backup uses spread checkpoint?" }, { "msg_contents": "On Wed, Oct 21, 2020 at 07:55:18AM +0900, Michael Paquier wrote:\n> Some nits: I would recommend to use the long option name, and list\n> the option name and its value as two separate arguments of the\n> command.\n\nFor the archives: this got applied as of 831611b.\n--\nMichael", "msg_date": "Thu, 22 Oct 2020 10:53:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PostgresNode::backup uses spread checkpoint?" } ]
[ { "msg_contents": "A recent user complaint [1] led me to investigate what ECPG does with\nembedded quotes (that is, quotes-meant-to-be-data) in SQL identifiers\nand strings. AFAICS, it gets it wrong. For example, if you write\nthe literal 'abc''def' in an EXEC SQL command, that will come out the\nother end as 'abc'def', triggering a syntax error in the backend.\nLikewise, \"abc\"\"def\" is reduced to \"abc\"def\" which is wrong syntax.\n\nIt looks to me like a sufficient fix is just to keep these quote\nsequences as-is within a converted string, so that the attached\nappears to fix it. I added some documentation too, since there\ndoesn't seem to be anything there now explaining how it's supposed\nto work.\n\nI doubt this is safely back-patchable, since anybody who's working\naround the existing misbehavior (as I see sql/dyntest.pgc is doing)\nwould not appreciate it changing under them in a minor release.\nBut I think we can fix it in v14.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CA%2B4qtLct1L%3DgUordX4c_AdctJ%2BvZBsebYYLBk18LX8dLHthktg%40mail.gmail.com", "msg_date": "Tue, 20 Oct 2020 15:46:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "ECPG gets embedded quotes wrong" }, { "msg_contents": "I wrote:\n> It looks to me like a sufficient fix is just to keep these quote\n> sequences as-is within a converted string, so that the attached\n> appears to fix it.\n\nPoking at this further, I noticed that there's a semi-related bug\nthat this patch changes the behavior for, without fixing it exactly.\nThat has to do with use of a string literal as \"execstring\" in ECPG's\nPREPARE ... FROM and EXECUTE IMMEDIATE commands. Right now, it\nappears that there is simply no way to write a double quote as part\nof the SQL command in this context. 
The EXECUTE IMMEDIATE docs say\nthat such a literal is a \"C string\", so one would figure that \\\"\n(backslash-double quote) is the way, but that just produces syntax\nerrors. The reason is that ECPG's lexer is in SQL mode at this point\nso it thinks the double-quoted string is a SQL quoted identifier, in\nwhich backslash isn't special so the double quote terminates the\nidentifier. Ooops. Knowing this, you might try writing two double\nquotes, but that doesn't work either, because the <xd>{xddouble}\nlexer rule converts that to one double quote, and you end up with\nan unterminated literal in the translated C code rather than in the\nECPG input.\n\nMy patch above modifies this to the extent that two double quotes\ncome out as two double quotes in the translated C code, but that\njust results in nothing at all, since the C compiler sees adjacent\nstring literals, which the C standard commands it to concatenate.\nThen you probably get a mysterious syntax error from the backend\nbecause it thinks your intended-to-be SQL quoted identifier isn't\nquoted. However, this is the behavior a C programmer would expect\nfor adjacent double quotes in a literal, so maybe people wouldn't\nsee it as mysterious.\n\nAnyway, what to do?\n\n1. Nothing, except document that you can't put a double quote into\nthe C string literal in these commands.\n\n2. Make two-double-quotes work to produce a data double quote,\nwhich I think could be done fairly easily with some post-processing\nin the execstring production. However, this doesn't have much to\nrecommend it other than being easily implementable. C programmers\nwould not think it's natural, and the fact that backslash sequences\nother than \\\" would work as a C programmer expects doesn't help.\n\n3. Find a way to lex the literal per C rules, as the EXECUTE IMMEDIATE\ndocs clearly imply we should. (The PREPARE docs are silent on the\npoint AFAICS.) 
Unfortunately, this seems darn near impossible unless\nwe want to make IMMEDIATE (more) reserved. Since it's currently\nunreserved, the grammar can't tell which flavor of EXEC SQL EXECUTE ...\nit's dealing with until it looks ahead past the name-or-IMMEDIATE token,\nso that it must lex the literal (if any) too soon. I tried putting in a\nmid-rule action to switch the lexer back to C mode but failed because of\nthat ambiguity. Maybe we could make it work with a bunch of refactoring,\nbut it would be ugly and subtle code, in both the grammar and lexer.\n\nOn the whole I'm inclined to go with #1. There's a reason why nobody has\ncomplained about this in twenty years, which is that the syntaxes with\na string literal are completely useless. There's no point in writing\nEXEC SQL EXECUTE IMMEDIATE \"SQL-statement\" when you can just write\nEXEC SQL SQL-statement, and similarly for PREPARE. (The other variant\nthat takes the string from a C variable is useful, but that one doesn't\nhave any weird quoting problem.) So I can't see expending the effort\nfor #3, and I don't feel like adding and documenting the wart of #2.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Oct 2020 20:35:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ECPG gets embedded quotes wrong" }, { "msg_contents": "I wrote:\n> Poking at this further, I noticed that there's a semi-related bug\n> that this patch changes the behavior for, without fixing it exactly.\n> That has to do with use of a string literal as \"execstring\" in ECPG's\n> PREPARE ... FROM and EXECUTE IMMEDIATE commands. Right now, it\n> appears that there is simply no way to write a double quote as part\n> of the SQL command in this context.\n\nIn the other thread, 1250kv pointed out that you can use an octal\nescape (\\042) to get a quote mark. 
That's pretty grotty, but it\ndoes work in existing ECPG releases as well as with this patch.\n\nSo now I think the best answer for this part is just to document that\nworkaround. Given the lack of complaints up to now, it's definitely not\nworth the amount of trouble that'd be needed to have a cleaner solution.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 13:34:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ECPG gets embedded quotes wrong" } ]
[ { "msg_contents": "Hi,\n\nI was running “make installcheck” with the following settings:\n\nSET geqo_threshold=2;\nSET geqo_generations=1000;\nSETT geqo_pool_size=1000;\nSET enable_partitionwise_join to true;\n\nAnd, realized that “partition_join” test crashed. It is reproducible for both 12.3 and 13.0 (I’ve not tested further).\n\nMinimal steps to reproduce:\n\nSET geqo_threshold=2;\nSET geqo_generations=1000;\nSET geqo_pool_size=1000;\nSET enable_partitionwise_join to true;\n\nCREATE TABLE prt1 (a int, b int, c varchar) PARTITION BY RANGE(a);\nCREATE TABLE prt1_p1 PARTITION OF prt1 FOR VALUES FROM (0) TO (250);\nCREATE TABLE prt2 (a int, b int, c varchar) PARTITION BY RANGE(b);\nCREATE TABLE prt2_p1 PARTITION OF prt2 FOR VALUES FROM (0) TO (250);\n\nEXPLAIN (COSTS OFF)\nSELECT t1.a,\n ss.t2a,\n ss.t2c\nFROM prt1 t1\nLEFT JOIN LATERAL\n (SELECT t2.a AS t2a,\n t3.a AS t3a,\n t2.b t2b,\n t2.c t2c,\n least(t1.a, t2.a, t3.b)\n FROM prt1 t2\n JOIN prt2 t3 ON (t2.a = t3.b)) ss ON t1.c = ss.t2c\nWHERE (t1.b + coalesce(ss.t2b, 0)) = 0\nORDER BY t1.a;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\nTime: 4.966 ms\n@:-!>\n\n\n\nTop of the backtrace on PG 13.0:\n\n(lldb) bt\n* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x1700000100)\n * frame #0: 0x0000000108b255c0 postgres`bms_is_subset(a=0x0000001700000100, b=0x00007fac37834db8) at bitmapset.c:327:13\n frame #1: 0x0000000108b65b55 postgres`generate_join_implied_equalities_normal(root=0x00007fac37815640, ec=0x00007fac3781d2c0, join_relids=0x00007fac37834db8, outer_relids=0x00007fac3781a9f8, inner_relids=0x00007fac37087608) at equivclass.c:1324:8\n frame #2: 0x0000000108b659a9 postgres`generate_join_implied_equalities(root=0x00007fac37815640, join_relids=0x00007fac37834db8, outer_relids=0x00007fac3781a9f8, inner_rel=0x00007fac370873f0) at equivclass.c:1197:14\n frame #3: 0x0000000108ba71a3 postgres`build_joinrel_restrictlist(root=<unavailable>, joinrel=0x00007fac37834ba0, outer_rel=0x00007fac37802f10, inner_rel=0x00007fac370873f0) at relnode.c:1079:8\n frame #4: 0x0000000108ba6fe0 postgres`build_join_rel(root=0x00007fac37815640, joinrelids=0x00007fac370873c8, outer_rel=0x00007fac37802f10, inner_rel=0x00007fac370873f0, sjinfo=0x00007fac3781c540, restrictlist_ptr=0x00007ffee72c9668) at relnode.c:709:17\n frame #5: 0x0000000108b6e552 postgres`make_join_rel(root=0x00007fac37815640, rel1=0x00007fac37802f10, rel2=0x00007fac370873f0) at joinrels.c:746:12\n frame #6: 0x0000000108b58d68 postgres`merge_clump(root=0x00007fac37815640, clumps=0x00007fac37087348, new_clump=0x00007fac37087320, num_gene=3, force=<unavailable>) at geqo_eval.c:260:14\n frame #7: 0x0000000108b58bee postgres`gimme_tree(root=<unavailable>, tour=0x00007fac378248c8, num_gene=<unavailable>) at geqo_eval.c:199:12\n frame #8: 0x0000000108b58ab9 postgres`geqo_eval(root=0x00007fac37815640, tour=0x00007fac378248c8, num_gene=3) at geqo_eval.c:102:12\n frame #9: 0x0000000108b592b8 postgres`random_init_pool(root=0x00007fac37815640, pool=0x00007fac37824828) at geqo_pool.c:109:25\n frame 
#10: 0x0000000108b58fb7 postgres`geqo(root=0x00007fac37815640, number_of_rels=<unavailable>, initial_rels=<unavailable>) at geqo_main.c:114:2\n frame #11: 0x0000000108b5988f postgres`make_one_rel(root=0x00007fac37815640, joinlist=0x00007fac3781cf08) at allpaths.c:227:8\n frame #12: 0x0000000108b7f187 postgres`query_planner(root=0x00007fac37815640, qp_callback=<unavailable>, qp_extra=0x00007ffee7\n….\n\nThanks,\nOnder", "msg_date": "Wed, 21 Oct 2020 10:49:36 +0000", "msg_from": "Onder Kalaci <onderk@microsoft.com>", "msg_from_op": true, "msg_subject": "Combination of geqo and enable_partitionwise_join leads to crashes in\n the regression tests" }, { "msg_contents": "Hi,\n\nI think this is already discussed here: https://www.postgresql.org/message-id/flat/CAExHW5tgiLsYC_CLcaKHFFc8H56C0s9mCu_0OpahGxn%3DhUi_Pg%40mail.gmail.com#db54218ab7bb9e1484cdcc52abf2d324\n\nSorry for missing that thread before sending the mail.\n\n\nFrom: Onder Kalaci <onderk@microsoft.com>\nDate: Wednesday, 21 October 2020 12:49\nTo: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nSubject: Combination of geqo and enable_partitionwise_join leads to crashes in the regression tests\nHi,\n\nI was running “make installcheck” 
It is reproducible for both 12.3 and 13.0 (I’ve not tested further).\n\nMinimal steps to reproduce:\n\nSET geqo_threshold=2;\nSET geqo_generations=1000;\nSET geqo_pool_size=1000;\nSET enable_partitionwise_join to true;\n\nCREATE TABLE prt1 (a int, b int, c varchar) PARTITION BY RANGE(a);\nCREATE TABLE prt1_p1 PARTITION OF prt1 FOR VALUES FROM (0) TO (250);\nCREATE TABLE prt2 (a int, b int, c varchar) PARTITION BY RANGE(b);\nCREATE TABLE prt2_p1 PARTITION OF prt2 FOR VALUES FROM (0) TO (250);\n\nEXPLAIN (COSTS OFF)\nSELECT t1.a,\n ss.t2a,\n ss.t2c\nFROM prt1 t1\nLEFT JOIN LATERAL\n (SELECT t2.a AS t2a,\n t3.a AS t3a,\n t2.b t2b,\n t2.c t2c,\n least(t1.a, t2.a, t3.b)\n FROM prt1 t2\n JOIN prt2 t3 ON (t2.a = t3.b)) ss ON t1.c = ss.t2c\nWHERE (t1.b + coalesce(ss.t2b, 0)) = 0\nORDER BY t1.a;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nTime: 4.966 ms\n@:-!>\n\n\n\nTop of the backtrace on PG 13.0:\n\n(lldb) bt\n* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x1700000100)\n * frame #0: 0x0000000108b255c0 postgres`bms_is_subset(a=0x0000001700000100, b=0x00007fac37834db8) at bitmapset.c:327:13\n frame #1: 0x0000000108b65b55 postgres`generate_join_implied_equalities_normal(root=0x00007fac37815640, ec=0x00007fac3781d2c0, join_relids=0x00007fac37834db8, outer_relids=0x00007fac3781a9f8, inner_relids=0x00007fac37087608) at equivclass.c:1324:8\n frame #2: 0x0000000108b659a9 postgres`generate_join_implied_equalities(root=0x00007fac37815640, join_relids=0x00007fac37834db8, outer_relids=0x00007fac3781a9f8, inner_rel=0x00007fac370873f0) at equivclass.c:1197:14\n frame #3: 0x0000000108ba71a3 postgres`build_joinrel_restrictlist(root=<unavailable>, joinrel=0x00007fac37834ba0, outer_rel=0x00007fac37802f10, inner_rel=0x00007fac370873f0) at relnode.c:1079:8\n frame #4: 
0x0000000108ba6fe0 postgres`build_join_rel(root=0x00007fac37815640, joinrelids=0x00007fac370873c8, outer_rel=0x00007fac37802f10, inner_rel=0x00007fac370873f0, sjinfo=0x00007fac3781c540, restrictlist_ptr=0x00007ffee72c9668) at relnode.c:709:17\n frame #5: 0x0000000108b6e552 postgres`make_join_rel(root=0x00007fac37815640, rel1=0x00007fac37802f10, rel2=0x00007fac370873f0) at joinrels.c:746:12\n frame #6: 0x0000000108b58d68 postgres`merge_clump(root=0x00007fac37815640, clumps=0x00007fac37087348, new_clump=0x00007fac37087320, num_gene=3, force=<unavailable>) at geqo_eval.c:260:14\n frame #7: 0x0000000108b58bee postgres`gimme_tree(root=<unavailable>, tour=0x00007fac378248c8, num_gene=<unavailable>) at geqo_eval.c:199:12\n frame #8: 0x0000000108b58ab9 postgres`geqo_eval(root=0x00007fac37815640, tour=0x00007fac378248c8, num_gene=3) at geqo_eval.c:102:12\n frame #9: 0x0000000108b592b8 postgres`random_init_pool(root=0x00007fac37815640, pool=0x00007fac37824828) at geqo_pool.c:109:25\n frame #10: 0x0000000108b58fb7 postgres`geqo(root=0x00007fac37815640, number_of_rels=<unavailable>, initial_rels=<unavailable>) at geqo_main.c:114:2\n frame #11: 0x0000000108b5988f postgres`make_one_rel(root=0x00007fac37815640, joinlist=0x00007fac3781cf08) at allpaths.c:227:8\n frame #12: 0x0000000108b7f187 postgres`query_planner(root=0x00007fac37815640, qp_callback=<unavailable>, qp_extra=0x00007ffee7\n….\n\nThanks,\nOnder", "msg_date": "Wed, 21 Oct 2020 10:54:48 +0000", "msg_from": "Onder Kalaci <onderk@microsoft.com>", "msg_from_op": true, "msg_subject": "Re: Combination of geqo and enable_partitionwise_join leads to\n crashes in the regression tests" } ]
[ { "msg_contents": "Hi,\n\nCurrently pg_terminate_backend() sends SIGTERM to the backend process but\ndoesn't ensure its exit. There are chances that backends are still\nrunning (even after pg_terminate_backend() is called) until the interrupts\nare processed (using ProcessInterrupts()). This could cause problems\nespecially in testing, for instance in an SQL file right after\npg_terminate_backend(), if any test case depends on the backend's\nnon-existence[1], but the backend is not terminated. As discussed in [1],\nwe have wait_pid() (see regress.c and sql/dblink.sql), but it's not usable\nacross the system. In [1], we thought it would be better to have functions\nensuring the backend's exit on similar lines to pg_terminate_backend().\n\nI propose to have two functions:\n\n1. pg_terminate_backend_and_wait() -- which sends SIGTERM to the backend\nand waits until its exit.\n2. pg_wait_backend() -- which waits for a given backend process. Note that\nthis function has to be used carefully after pg_terminate_backend(); if\nused on a backend that's not terminated, it simply keeps waiting in a loop.\n\nAttaching a WIP patch herewith.\n\nThoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/f31cc4da-a7ea-677f-cf64-a2f9db854bf5%40oss.nttdata.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 21 Oct 2020 18:32:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "A new function to wait for the backend exit after termination" }, { "msg_contents": "On Wed, Oct 21, 2020 at 3:02 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Hi,\n>\n> Currently pg_terminate_backend() sends SIGTERM to the backend process but\n> doesn't ensure its exit. There are chances that backends are still\n> running (even after pg_terminate_backend() is called) until the interrupts\n> are processed (using ProcessInterrupts()). 
This could cause problems\n> especially in testing, for instance in an SQL file right after\n> pg_terminate_backend(), if any test case depends on the backend's\n> non-existence[1], but the backend is not terminated. As discussed in [1],\n> we have wait_pid() (see regress.c and sql/dblink.sql), but it's not usable\n> across the system. In [1], we thought it would be better to have functions\n> ensuring the backend's exit on similar lines to pg_terminate_backend().\n>\n> I propose to have two functions:\n>\n> 1. pg_terminate_backend_and_wait() -- which sends SIGTERM to the backend\n> and waits until its exit.\n>\n\nI think it would be nicer to have a pg_terminate_backend(pid, wait=false),\nso a function with a second parameter which defaults to the current\nbehaviour of not waiting. And it might be a good idea to also give it a\ntimeout parameter?\n\n\n> 2. pg_wait_backend() -- which waits for a given backend process. Note that\n> this function has to be used carefully after pg_terminate_backend(); if\n> used on a backend that's not terminated, it simply keeps waiting in a loop.\n>\n\nIt seems this one also very much would need a timeout value.\n\nAnd surely we should show some sort of wait event when it's waiting.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n 
Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 21 Oct 2020 15:13:36 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Wed, Oct 21, 2020 at 6:13 AM Magnus Hagander <magnus@hagander.net> wrote:\n\n> On Wed, Oct 21, 2020 at 3:02 PM Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n>> Hi,\n>>\n>> Currently pg_terminate_backend() sends SIGTERM to the backend process\n>> but doesn't ensure its exit. There are chances that backends are still\n>> running (even after pg_terminate_backend() is called) until the interrupts\n>> are processed (using ProcessInterrupts()). This could cause problems\n>> especially in testing, for instance in an SQL file right after\n>> pg_terminate_backend(), if any test case depends on the backend's\n>> non-existence[1], but the backend is not terminated. 
As discussed in [1],\n>> we have wait_pid() (see regress.c and sql/dblink.sql), but it's not usable\n>> across the system. In [1], we thought it would be better to have functions\n>> ensuring the backend's exit on similar lines to pg_terminate_backend().\n>>\n>> I propose to have two functions:\n>>\n>> 1. pg_terminate_backend_and_wait() -- which sends SIGTERM to the backend\n>> and waits until its exit.\n>>\n>\n> I think it would be nicer to have a pg_terminate_backend(pid, wait=false),\n> so a function with a second parameter which defaults to the current\n> behaviour of not waiting. And it might be a good idea to also give it a\n> timeout parameter?\n>\n\nAgreed on the overload, and the timeouts make sense too - with the caller\ndeciding whether a timeout results in a failure or a false return value.\n\n\n>\n>> 2. pg_wait_backend() -- which waits for a given backend process. Note\n>> that this function has to be used carefully after pg_terminate_backend();\n>> if used on a backend that's not terminated, it simply keeps waiting in a loop.\n>>\n>\n> It seems this one also very much would need a timeout value.\n>\n>\nIs there a requirement for waiting to be superuser only? You are not\naffecting any session but your own during the waiting period.\n\nI could imagine, in theory at least, wanting to wait for a backend to go\nidle as well as for it disappearing. 
Scope creep in terms of this patch's\ngoal but worth at least considering now.\n\nDavid J.", "msg_date": "Wed, 21 Oct 2020 07:30:45 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "Thanks for the feedback.\n\nOn Wed, Oct 21, 2020 at 6:43 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n>> Currently pg_terminate_backend() sends SIGTERM to the backend process but doesn't ensure its exit. There are chances that backends are still running (even after pg_terminate_backend() is called) until the interrupts are processed (using ProcessInterrupts()). This could cause problems especially in testing, for instance in an SQL file right after pg_terminate_backend(), if any test case depends on the backend's non-existence[1], but the backend is not terminated. As discussed in [1], we have wait_pid() (see regress.c and sql/dblink.sql), but it's not usable across the system. In [1], we thought it would be better to have functions ensuring the backend's exit on similar lines to pg_terminate_backend().\n>>\n>> I propose to have two functions:\n>>\n>> 1. pg_terminate_backend_and_wait() -- which sends SIGTERM to the backend and waits until its exit.\n>\n> I think it would be nicer to have a pg_terminate_backend(pid, wait=false), so a function with a second parameter which defaults to the current behaviour of not waiting. And it might be a good idea to also give it a timeout parameter?\n>\n\n+1 to have pg_terminate_backend(pid, wait=false, timeout), timeout in\nmilliseconds only valid if wait = true.\n\n>\n>> 2. pg_wait_backend() -- which waits for a given backend process. 
Note that this function has to be used carefully after pg_terminate_backend(), if used on a backend that's not ternmited it simply keeps waiting in a loop.\n>\n> It seems this one also very much would need a timeout value.\n>\n> And surely we should show some sort of wait event when it's waiting.\n>\n\nYes for this function too we can have a timeout value.\npg_wait_backend(pid, timeout), timeout in milliseconds.\n\nI think we can use WaitLatch with the given timeout and with a new\nwait event type WAIT_EVENT_BACKEND_SHUTDOWN instead of pg_usleep for\nachieving the given timeout mechanism. With WaitLatch we would also\nget the waiting event in stats. Thoughts?\n\n rc = WaitLatch(MyLatch,\n WL_LATCH_SET | WL_POSTMASTER_DEATH, timeout,\n WAIT_EVENT_BACKEND_SHUTDOWN);\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Oct 2020 07:50:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On 2020-10-21 15:13:36 +0200, Magnus Hagander wrote:\n> It seems this one also very much would need a timeout value.\n\nI'm not really against that, but I wonder if we just end up\nreimplementing statement timeout...\n\n\n", "msg_date": "Wed, 21 Oct 2020 19:35:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "Thanks for the feedback.\n\nOn Wed, Oct 21, 2020 at 8:01 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Wed, Oct 21, 2020 at 6:13 AM Magnus Hagander <magnus@hagander.net> wrote:\n>>\n>> I think it would be nicer to have a pg_terminate_backend(pid, wait=false), so a function with a second parameter which defaults to the current behaviour of not waiting. 
And it might be a good idea to also give it a timeout parameter?\n>\n> Agreed on the overload, and the timeouts make sense too - with the caller deciding whether a timeout results in a failure or a false return value.\n>\n\nIf the backend is terminated within the user specified timeout then\nthe function returns true, otherwise false.\n\n>\n>>> 2. pg_wait_backend() -- which waits for a given backend process. Note that this function has to be used carefully after pg_terminate_backend(), if used on a backend that's not ternmited it simply keeps waiting in a loop.\n>>\n>> It seems this one also very much would need a timeout value.\n>\n> Is there a requirement for waiting to be superuser only? You are not affecting any session but your own during the waiting period.\n>\n\nIIUC, in the same patch instead of returning an error in case of\nnon-superusers, do we need to wait for user provided timeout\nmilliseconds until the current user becomes superuser and then throw\nerror if still non-superuser, and proceed further if superuser?\n\nDo we need to have a new function that waits until a current\nnon-superuser in a session becomes superuser?\n\nSomething else?\n\n>\n> I could imagine, in theory at least, wanting to wait for a backend to go idle as well as for it disappearing. 
Scope creep in terms of this patch's goal but worth at least considering now.\n>\n\nIIUC, do we need a new option, something like pg_wait_backend(pid,\ntimeout, waituntil) where \"waituntil\" if specified \"idle\" waits until\nthe given backend goes to idle mode, or \"termination\" waits until\ntermination?\n\nIf my understanding is wrong, could you please explain more?\n\n\n", "msg_date": "Thu, 22 Oct 2020 08:16:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Wednesday, October 21, 2020, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Thanks for the feedback.\n>\n> On Wed, Oct 21, 2020 at 8:01 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Wed, Oct 21, 2020 at 6:13 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >>\n> >> I think it would be nicer to have a pg_terminate_backend(pid,\n> wait=false), so a function with a second parameter which defaults to the\n> current behaviour of not waiting. And it might be a good idea to also give\n> it a timeout parameter?\n> >\n> > Agreed on the overload, and the timeouts make sense too - with the\n> caller deciding whether a timeout results in a failure or a false return\n> value.\n> >\n>\n> If the backend is terminated within the user specified timeout then\n> the function returns true, otherwise false.\n\n\nI’m suggesting an option for the second case to fail instead of returning\nfalse.\n\n\n> >\n> >>> 2. pg_wait_backend() -- which waits for a given backend process. Note\n> that this function has to be used carefully after pg_terminate_backend(),\n> if used on a backend that's not ternmited it simply keeps waiting in a loop.\n> >>\n> >> It seems this one also very much would need a timeout value.\n> >\n> > Is there a requirement for waiting to be superuser only? 
You are not\n> affecting any session but your own during the waiting period.\n> >\n>\n> IIUC, in the same patch instead of returning an error in case of\n> non-superusers, do we need to wait for user provided timeout\n> milliseconds until the current user becomes superuser and then throw\n> error if still non-superuser, and proceed further if superuser?\n>\n> Do we need to have a new function that waits until a current\n> non-superuser in a session becomes superuser?\n>\n> Something else?\n\n\nNot sure how that would even be possible mid-statement. I was suggesting\nremoving the superuser check altogether and letting any user execute “wait”.\n\n\n> >\n> > I could imagine, in theory at least, wanting to wait for a backend to go\n> idle as well as for it disappearing. Scope creep in terms of this patch's\n> goal but worth at least considering now.\n> >\n>\n> IIUC, do we need a new option, something like pg_wait_backend(pid,\n> timeout, waituntil) where \"waituntil\" if specified \"idle\" waits until\n> the given backend goes to idle mode, or \"termination\" waits until\n> termination?\n>\n> If my understanding is wrong, could you please explain more?\n>\n\nYes, this describes what I was thinking.\n\nDavid J.", "msg_date": "Wed, 21 Oct 2020 20:09:35 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Thu, Oct 22, 2020 at 8:39 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n>> If the backend is terminated within the user specified timeout then\n>> the function returns true, otherwise false.\n>\n> I’m suggesting an option for the second case to fail instead of returning false.\n>\n\nThat seems fine.\n\n>\n>> >\n>> > I could imagine, in theory at least, wanting to wait for a backend to go idle as well as for it disappearing. 
Scope creep in terms of this patch's goal but worth at least considering now.\n>> >\n>>\n>> IIUC, do we need a new option, something like pg_wait_backend(pid,\n>> timeout, waituntil) where \"waituntil\" if specified \"idle\" waits until\n>> the given backend goes to idle mode, or \"termination\" waits until\n>> termination?\n>>\n>> If my understanding is wrong, could you please explain more?\n>\n>\n> Yes, this describes what i was thinking.\n>\n\n+1.\n\nI will implement these functionality and post a new patch soon.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Oct 2020 09:42:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Wed, Oct 21, 2020 at 6:43 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> I think it would be nicer to have a pg_terminate_backend(pid, wait=false), so a function with a second parameter which defaults to the current behaviour of not waiting. And it might be a good idea to also give it a timeout parameter?\n>\n\nDone.\n\n>\n>> 2. pg_wait_backend() -- which waits for a given backend process. Note that this function has to be used carefully after pg_terminate_backend(), if used on a backend that's not ternmited it simply keeps waiting in a loop.\n>\n> It seems this one also very much would need a timeout value.\n>\n\nDone.\n\n>\n> And surely we should show some sort of wait event when it's waiting.\n>\n\nAdded two wait events.\n\n>\n>> If the backend is terminated within the user specified timeout then\n>> the function returns true, otherwise false.\n>\n> I’m suggesting an option for the second case to fail instead of returning false.\n>\n\nDone.\n\n>\n> > I could imagine, in theory at least, wanting to wait for a backend to go idle as well as for it disappearing. 
Scope creep in terms of this patch's goal but worth at least considering now.\n>\n> IIUC, do we need a new option, something like pg_wait_backend(pid,\n> timeout, waituntil) where \"waituntil\" if specified \"idle\" waits until\n> the given backend goes to idle mode, or \"termination\" waits until\n> termination?\n>\n\nDone.\n\nAttaching a v2 patch herewith.\n\nThoughts and feedback are welcome.\n\nBelow things are still pending, which I plan to work on soon:\n\n1. More testing and addition of test cases into the regression test suite.\n2. Addition of the new function information into the docs.\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 28 Oct 2020 17:20:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "\n\nOn 2020/10/28 20:50, Bharath Rupireddy wrote:\n> On Wed, Oct 21, 2020 at 6:43 PM Magnus Hagander <magnus@hagander.net> wrote:\n>>\n>> I think it would be nicer to have a pg_terminate_backend(pid, wait=false), so a function with a second parameter which defaults to the current behaviour of not waiting. And it might be a good idea to also give it a timeout parameter?\n>>\n> \n> Done.\n> \n>>\n>>> 2. pg_wait_backend() -- which waits for a given backend process. 
Note that this function has to be used carefully after pg_terminate_backend(), if used on a backend that's not ternmited it simply keeps waiting in a loop.\n>>\n>> It seems this one also very much would need a timeout value.\n>>\n> \n> Done.\n> \n>>\n>> And surely we should show some sort of wait event when it's waiting.\n>>\n> \n> Added two wait events.\n> \n>>\n>>> If the backend is terminated within the user specified timeout then\n>>> the function returns true, otherwise false.\n>>\n>> I’m suggesting an option for the second case to fail instead of returning false.\n>>\n> \n> Done.\n\nI prefer that false is returned when the timeout happens,\nlike pg_promote() does.\n\n> \n>>\n>>> I could imagine, in theory at least, wanting to wait for a backend to go idle as well as for it disappearing. Scope creep in terms of this patch's goal but worth at least considering now.\n>>\n>> IIUC, do we need a new option, something like pg_wait_backend(pid,\n>> timeout, waituntil) where \"waituntil\" if specified \"idle\" waits until\n>> the given backend goes to idle mode, or \"termination\" waits until\n>> termination?\n\nIsn't this wait-for-idle mode fragile? Because there is no guarantee\nthat the backend is still in idle state when pg_wait_backend(idle) returns.\n\n>>\n> \n> Done.\n> \n> Attaching a v2 patch herewith.\n> \n> Thoughts and feedback are welcome.\n\nThanks for the patch!\n\nWhen the specified timeout is negative, the following error is thrown *after*\nSIGTERM is signaled to the target backend. This seems strange to me.\nThe timeout value should be verified at the beginning of the function, instead.\n\n ERROR: timeout cannot be negative\n\n\npg_terminate_backend(xxx, false) failed with the following error. 
I think\nit's more helpful if the function can work even without the timeout value.\nThat is, what about redefining the function in src/backend/catalog/system_views.sql\nand specifying the DEFAULT values for the arguments \"wait\" and \"timeout\"?\nThe similar function \"pg_promote\" would be good reference to you.\n\n ERROR: function pg_terminate_backend(integer, boolean) does not exist at character 8\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 28 Oct 2020 22:11:42 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "Thanks for the comments.\n\nOn Wed, Oct 28, 2020 at 6:41 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> I prefer that false is returned when the timeout happens,\n> like pg_promote() does.\n>\n\nEarlier it was suggested to error out on timeout. Since users can not\nguess on time it takes to terminate or become idle, throwing error\nseems to be odd on timeout. And also in case if the given pid is not a\nbackend pid, we are throwing a warning and returning false but not\nerror. Similarly we can return false on timeout, if required a\nwarning. Thoughts?\n\n>\n> >> IIUC, do we need a new option, something like pg_wait_backend(pid,\n> >> timeout, waituntil) where \"waituntil\" if specified \"idle\" waits until\n> >> the given backend goes to idle mode, or \"termination\" waits until\n> >> termination?\n>\n> Isn't this wait-for-idle mode fragile? Because there is no guarantee\n> that the backend is still in idle state when pg_wait_backend(idle) returns.\n>\n\nYeah this can happen. By the time pg_wait_backend returns we could\nhave the idle state of the backend changed. Looks like this is also a\nproblem with the existing pgstat_get_backend_current_activity()\nfunction. 
There we have a comment saying below and the function\nreturns a pointer to the current activity string. Maybe we could have\nsimilar comments about the usage in the document?\n\n * It is the caller's responsibility to invoke this only for backends whose\n * state is expected to remain stable while the result is in use.\n\nDoes this problem exist even if we use pg_stat_activity()?\n\n>\n> When the specified timeout is negative, the following error is thrown *after*\n> SIGTERM is signaled to the target backend. This seems strange to me.\n> The timeout value should be verified at the beginning of the function, instead.\n>\n> ERROR: timeout cannot be negative\n>\n\nOkay. I will change that.\n\n>\n> pg_terminate_backend(xxx, false) failed with the following error. I think\n> it's more helpful if the function can work even without the timeout value.\n> That is, what about redefining the function in src/backend/catalog/system_views.sql\n> and specifying the DEFAULT values for the arguments \"wait\" and \"timeout\"?\n> The similar function \"pg_promote\" would be good reference to you.\n>\n> ERROR: function pg_terminate_backend(integer, boolean) does not exist at character 8\n>\n\nYeah. This seems good. I will have false as default value for the wait\nparameter. 
I have defined the timeout to be in milliseconds, then how\nabout having a default value of 100 milliseconds?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Oct 2020 19:19:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Wed, Oct 28, 2020 at 6:50 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Thanks for the comments.\n>\n> On Wed, Oct 28, 2020 at 6:41 PM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> >\n> > I prefer that false is returned when the timeout happens,\n> > like pg_promote() does.\n> >\n>\n> Earlier it was suggested to error out on timeout.\n\n\nFor consideration. I'll give a point for being consistent with other\nexisting functions, and it wouldn't be hard to extend should we want to add\nthe option later, so while the more flexible API seems better on its face\nlimiting ourselves to boolean false isn't a big deal to me; especially as\nI've yet to write code that would make use of this feature.\n\nSince users can not\n> guess on time it takes to terminate or become idle, throwing error\n> seems to be odd on timeout.\n\n\nI don't see how the one follows from the other.\n\nAnd also in case if the given pid is not a\n> backend pid, we are throwing a warning and returning false but not\n> error.\n\nSimilarly we can return false on timeout, if required a\n> warning. 
Thoughts?\n>\n\nIMO, if there are multiple ways to return false then all of them should\nemit a notice or warning describing which of the false conditions was hit.\n\n\n> >\n> > >> IIUC, do we need a new option, something like pg_wait_backend(pid,\n> > >> timeout, waituntil) where \"waituntil\" if specified \"idle\" waits until\n> > >> the given backend goes to idle mode, or \"termination\" waits until\n> > >> termination?\n> >\n> > Isn't this wait-for-idle mode fragile? Because there is no guarantee\n> > that the backend is still in idle state when pg_wait_backend(idle)\n> returns.\n> >\n>\n>\nI was thinking this would be useful for orchestration. However, as you\nsay, it's a pretty fragile method. I withdraw the suggestion. What I would\nreplace it with is a pg_wait_for_notify(payload_test) function that allows\nan SQL user to plug itself into the listen/notify feature and pause the\nsession until a notification arrives. The session it is coordinating with\nwould simply notify just before ending its script/transaction.\n\nDavid J.", "msg_date": "Wed, 28 Oct 2020 07:21:09 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Wed, Oct 28, 2020 at 7:51 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n>> And also in case if the given pid is not a\n>> backend pid, we are throwing a warning and returning false but not\n>> error.\n>>\n>> Similarly we can return false on timeout, if required a\n>> warning. 
Thoughts?\n>\n> IMO, if there are multiple ways to return false then all of them should emit a notice or warning describing which of the false conditions was hit.\n>\n\nCurrently there are two possibilities in pg_teriminate_backend where a\nwarning is thrown and false is returned. 1. when the process with a\ngiven pid is not a backend 2. when we can not send the SIGTERM to the\ngiven backend.\n\nI will add another case to throw the warning and return false when\ntimeout occurs.\n\n>>\n>> > >> IIUC, do we need a new option, something like pg_wait_backend(pid,\n>> > >> timeout, waituntil) where \"waituntil\" if specified \"idle\" waits until\n>> > >> the given backend goes to idle mode, or \"termination\" waits until\n>> > >> termination?\n>> >\n>> > Isn't this wait-for-idle mode fragile? Because there is no guarantee\n>> > that the backend is still in idle state when pg_wait_backend(idle) returns.\n>>\n> I was thinking this would be useful for orchestration. However, as you say, its a pretty fragile method. I withdraw the suggestion.\n>\n\nSo, pg_wait_backend(pid, timeout) waits until the backend with a given\npid is terminated?\n\n>\n>What I would replace it with is a pg_wait_for_notify(payload_test) function that allows an SQL user to plug itself into the listen/notify feature and pause the session until a notification arrives. The session it is coordinating with would >simply notify just before ending its script/transaction.\n>\n\nWhy does one session need to listen and wait until another session\nnotifies? If my understanding is wrong, could you please elaborate on\nthe above point, the usage and the use case?\n\n>\n>For consideration. 
I'll give a point for being consistent with other existing functions, and it wouldn't be hard to extend should we want to add the option later, so while the more flexible API seems better on its face limiting ourselves to >boolean false isn't a big deal to me; especially as I've yet to write code that would make use of this feature.\n>\n\nI see that this pg_wait_backend(pid, timeout) functionality can be\nright away used in two places, one in dblink.sql where wait_pid is\nbeing used, second in postgres_fdw.sql where\nterminate_backend_and_wait() is being used. However we can make these\nchanges as part of another patch set after the proposed two new\nfunctions are finalized and reviewed.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 29 Oct 2020 10:43:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Wed, Oct 28, 2020 at 10:14 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Wed, Oct 28, 2020 at 7:51 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n>\n> > I was thinking this would be useful for orchestration. However, as you\n> say, its a pretty fragile method. I withdraw the suggestion.\n> >\n>\n> So, pg_wait_backend(pid, timeout) waits until the backend with a given\n> pid is terminated?\n>\n>\nYes. The original proposal.\n\n> >\n> >What I would replace it with is a pg_wait_for_notify(payload_test)\n> function that allows an SQL user to plug itself into the listen/notify\n> feature and pause the session until a notification arrives. The session it\n> is coordinating with would >simply notify just before ending its\n> script/transaction.\n> >\n>\n> Why does one session need to listen and wait until another session\n> notifies? 
If my understanding is wrong, could you please elaborate on\n> the above point, the usage and the use case?\n>\n\nTheory, but I imagine writing an isolation test like test script where the\ntwo sessions wait for notifications instead of sleep for random amounts of\ntime.\n\nMore generally, psql is very powerful but doesn't allow scripting to plug\ninto pub/sub. I don't have a concrete use case for why it should but the\ncapability doesn't seem far-fetched.\n\nI'm not saying this is something that is needed, rather it would seem more\nuseful than wait_for_idle.\n\nDavid J.", "msg_date": "Wed, 28 Oct 2020 22:21:00 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Wed, Oct 28, 2020 at 6:41 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> I prefer that false is returned when the timeout happens,\n> like pg_promote() does.\n>\n\nDone.\n\n>\n> When the specified timeout is negative, the following error is thrown *after*\n> SIGTERM is signaled to the target backend. This seems strange to me.\n> The timeout value should be verified at the beginning of the function, instead.\n>\n> ERROR: timeout cannot be negative\n>\n\nI'm not throwing error for this case, instead a warning and returning\nfalse. This is to keep it consistent with other cases such as the\ngiven pid is not a backend pid.\n\nAttaching the v3 patch. I tried to address the review comments\nreceived so far and added documentation. I tested the patch locally\nhere. 
I saw that we don't have any test cases for existing\npg_terminate_backend(), do we need to add test cases into regression\nsuites for these two new functions?\n\nPlease review the v3 patch and let me know comments.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 31 Oct 2020 16:28:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI have tested the patch against current master branch (commit:6742e14959a3033d946ab3d67f5ce4c99367d332)\r\nBoth functions work without a problem and as expected.\r\nJust a tiny comment/suggestion.\r\nspecifying a -ve timeout in pg_terminate_backed rightly throws an error, \r\nI am not sure if it would be right or a wrong approach but I guess we can ignore -ve\r\ntimeout in pg_terminate_backend function when wait (second argument) is false.\r\n\r\ne.g. 
pg_terminate_backend(12320, false,-1); -- ignore -1 timout since wait is false\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 30 Nov 2020 14:39:31 +0000", "msg_from": "Muhammad Usama <m.usama@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Mon, Nov 30, 2020 at 8:10 PM Muhammad Usama <m.usama@gmail.com> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n>\n> I have tested the patch against current master branch (commit:6742e14959a3033d946ab3d67f5ce4c99367d332)\n> Both functions work without a problem and as expected.\n>\n\nThanks!\n\n>\n> Just a tiny comment/suggestion.\n> specifying a -ve timeout in pg_terminate_backed rightly throws an error,\n> I am not sure if it would be right or a wrong approach but I guess we can ignore -ve\n> timeout in pg_terminate_backend function when wait (second argument) is false.\n>\n> e.g. pg_terminate_backend(12320, false,-1); -- ignore -1 timout since wait is false\n>\n\nIMO, that's not a good idea. I see it this way, for any function first\nthe input args have to be validated. If okay, then follows the use of\nthose args and the main functionality. 
I can also see pg_promote(),\nwhich first does the input timeout validation throwing error if it is\n<= 0.\n\nWe can retain the existing behaviour.\n\n>\n> The new status of this patch is: Ready for Committer\n>\n\nThanks!\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Dec 2020 14:00:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Wed, Dec 2, 2020 at 1:30 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Mon, Nov 30, 2020 at 8:10 PM Muhammad Usama <m.usama@gmail.com> wrote:\n> >\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, passed\n> > Spec compliant: tested, passed\n> > Documentation: not tested\n> >\n> > I have tested the patch against current master branch\n> (commit:6742e14959a3033d946ab3d67f5ce4c99367d332)\n> > Both functions work without a problem and as expected.\n> >\n>\n> Thanks!\n>\n> >\n> > Just a tiny comment/suggestion.\n> > specifying a -ve timeout in pg_terminate_backed rightly throws an error,\n> > I am not sure if it would be right or a wrong approach but I guess we\n> can ignore -ve\n> > timeout in pg_terminate_backend function when wait (second argument) is\n> false.\n> >\n> > e.g. pg_terminate_backend(12320, false,-1); -- ignore -1 timout since\n> wait is false\n> >\n>\n> IMO, that's not a good idea. I see it this way, for any function first\n> the input args have to be validated. If okay, then follows the use of\n> those args and the main functionality. 
I can also see pg_promote(),\n> which first does the input timeout validation throwing error if it is\n> <= 0.\n>\n> We can retain the existing behaviour.\n>\n\nAgreed!\n\nThanks\nBest regards\nMuhammad Usama\n\n\n>\n> >\n> > The new status of this patch is: Ready for Committer\n> >\n>\n> Thanks!\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n", "msg_date": "Wed, 2 Dec 2020 18:46:43 +0500", "msg_from": "Muhammad Usama <m.usama@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "Hi\r\n\r\nI take a look into the patch, and here some comments.\r\n\r\n1.\r\n+\r\n+\tereport(WARNING,\r\n+\t\t\t(errmsg(\"could not wait for the termination of the backend with PID %d within %ld milliseconds\",\r\n+\t\t\t\t\tpid, timeout)));\r\n+\r\n\r\nThe code use %ld to print int64 type.\r\nHow about use INT64_FORMAT, which looks more appropriate. \r\n\r\n2.\r\n+\tif (timeout <= 0)\r\n+\t{\r\n+\t\tereport(WARNING,\r\n+\t\t\t\t(errmsg(\"timeout cannot be negative or zero: %ld\", timeout)));\r\n+\t\tPG_RETURN_BOOL(r);\r\n+\t}\r\n\r\nThe same as 1.\r\n\r\n3.\r\n+pg_terminate_backend_and_wait(PG_FUNCTION_ARGS)\r\n+{\r\n+\tint \tpid = PG_GETARG_DATUM(0);\r\n\r\n+pg_wait_backend(PG_FUNCTION_ARGS)\r\n+{\r\n+\tint\t\tpid = PG_GETARG_INT32(0);\r\n\r\nThe code use different macro to get pid,\r\nHow about use PG_GETARG_INT32(0) for each one.\r\n\r\n\r\nI changed the status to 'wait on anthor'.\r\nThe others of the patch LGTM, \r\nI think it can be changed to Ready for Committer again, when this comment is confirmed.\r\n\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\r\n\r\n\r\n\n\n", "msg_date": "Thu, 3 Dec 2020 01:54:33 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A new function to wait for the backend exit after termination" }, { "msg_contents": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com> writes:\n> +\tereport(WARNING,\n> +\t\t\t(errmsg(\"could not wait for 
the termination of the backend with PID %d within %ld milliseconds\",\n> +\t\t\t\t\tpid, timeout)));\n\n> The code use %ld to print int64 type.\n> How about use INT64_FORMAT, which looks more appropriate. \n\nThis is a translatable message, so INT64_FORMAT is no good -- we need\nsomething that is the same across platforms. The current project standard\nfor this problem is to use \"%lld\" and explicitly cast the argument to long\nlong int to match that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Dec 2020 21:01:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "> > +\tereport(WARNING,\n> > +\t\t\t(errmsg(\"could not wait for the termination of the\n> backend with PID %d within %ld milliseconds\",\n> > +\t\t\t\t\tpid, timeout)));\n> \n> > The code use %ld to print int64 type.\n> > How about use INT64_FORMAT, which looks more appropriate.\n> \n> This is a translatable message, so INT64_FORMAT is no good -- we need\n> something that is the same across platforms. 
The current project standard\n> for this problem is to use \"%lld\" and explicitly cast the argument to long\n> long int to match that.\n\nThank you for pointing out that,\nAnd sorry for did not think of it.\n\nYes, we can use %lld, (long long int) timeout.\n\nBest regards,\nhouzj\n\n\n\n\n", "msg_date": "Thu, 3 Dec 2020 02:21:51 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A new function to wait for the backend exit after termination" }, { "msg_contents": "> \r\n> I changed the status to 'wait on anthor'.\r\n> The others of the patch LGTM,\r\n> I think it can be changed to Ready for Committer again, when this comment\r\n> is confirmed.\r\n> \r\n\r\nI am Sorry I forgot a possible typo comment.\r\n\r\n+{ oid => '16386', descr => 'terminate a backend process and wait for it\\'s exit or until timeout occurs'\r\n\r\nDoes the following change looks better?\r\n\r\nWait for it\\'s exit => Wait for its exit\r\n\r\nBest regards,\r\nhouzj\r\n\n\n", "msg_date": "Thu, 3 Dec 2020 03:32:46 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A new function to wait for the backend exit after termination" }, { "msg_contents": "Thanks for the review.\n\nOn Thu, Dec 3, 2020 at 7:24 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> 1.\n> +\n> + ereport(WARNING,\n> + (errmsg(\"could not wait for the termination of the backend with PID %d within %ld milliseconds\",\n> + pid, timeout)));\n> +\n>\n> The code use %ld to print int64 type.\n> How about use INT64_FORMAT, which looks more appropriate.\n>\n\nChanged it to use %lld and typecasting timeout to (long long int) as\nsuggested by Tom.\n\n>\n> 2.\n> + if (timeout <= 0)\n> + {\n> + ereport(WARNING,\n> + (errmsg(\"timeout cannot be negative or zero: %ld\", timeout)));\n> + PG_RETURN_BOOL(r);\n> + }\n>\n> The same as 1.\n>\n\nChanged.\n\n>\n> 3.\n> +pg_terminate_backend_and_wait(PG_FUNCTION_ARGS)\n> +{\n> + int 
pid = PG_GETARG_DATUM(0);\n>\n> +pg_wait_backend(PG_FUNCTION_ARGS)\n> +{\n> + int pid = PG_GETARG_INT32(0);\n>\n> The code use different macro to get pid,\n> How about use PG_GETARG_INT32(0) for each one.\n>\n\nChanged.\n\n> I am Sorry I forgot a possible typo comment.\n>\n> +{ oid => '16386', descr => 'terminate a backend process and wait for it\\'s exit or until timeout occurs'\n>\n> Does the following change looks better?\n>\n> Wait for it\\'s exit => Wait for its exit\n>\n\nChanged.\n\n>\n> I changed the status to 'wait on anthor'.\n> The others of the patch LGTM,\n> I think it can be changed to Ready for Committer again, when this comment is confirmed.\n>\n\nAttaching v4 patch. Please have a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 3 Dec 2020 09:26:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "Hi,\r\n\r\n- however only superusers can terminate superuser backends.\r\n+ however only superusers can terminate superuser backends. When no\r\n+ <parameter>wait</parameter> and <parameter>timeout</parameter> are\r\n+ provided, only SIGTERM is sent to the backend with the given process\r\n+ ID and <literal>false</literal> is returned immediately. 
But the\r\n\r\nI test the case when no wait and timeout are provided.\r\nTrue is returned as the following which seems different from the doc.\r\n\r\npostgres=# select pg_terminate_backend(pid);\r\n pg_terminate_backend \r\n----------------------\r\n t\r\n(1 row)\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\n\n", "msg_date": "Fri, 4 Dec 2020 03:13:42 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Fri, Dec 4, 2020 at 8:44 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com>\nwrote:\n>\n> - however only superusers can terminate superuser backends.\n> + however only superusers can terminate superuser backends. When no\n> + <parameter>wait</parameter> and <parameter>timeout</parameter>\nare\n> + provided, only SIGTERM is sent to the backend with the given\nprocess\n> + ID and <literal>false</literal> is returned immediately. But the\n>\n> I test the case when no wait and timeout are provided.\n> True is returned as the following which seems different from the doc.\n>\n> postgres=# select pg_terminate_backend(pid);\n> pg_terminate_backend\n> ----------------------\n> t\n> (1 row)\n>\n\nThanks for pointing that out. I reworded that statement. Attaching v5\npatch. 
Please have a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 4 Dec 2020 11:59:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "Hi,\r\n\r\nWhen test pg_terminate_backend_and_wait with parallel query.\r\nI noticed that the function is not defined as parallel safe.\r\n\r\nI am not very familiar with the standard about whether a function should be parallel safe.\r\nBut I found the following function are all defined as parallel safe:\r\n\r\npg_promote\r\npg_terminate_backend(integer)\r\npg_sleep*\r\n\r\nIs there a reason why pg_terminate_backend_and_wait are not parallel safe ?\r\n(I'm sorry if I miss something in previous mails.)\r\n\r\nBest regards,\r\nhouzj\r\n\n\n", "msg_date": "Fri, 4 Dec 2020 08:31:40 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Fri, Dec 4, 2020 at 2:02 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> Hi,\n>\n> When test pg_terminate_backend_and_wait with parallel query.\n> I noticed that the function is not defined as parallel safe.\n>\n> I am not very familiar with the standard about whether a function should be parallel safe.\n> But I found the following function are all defined as parallel safe:\n>\n> pg_promote\n> pg_terminate_backend(integer)\n> pg_sleep*\n>\n> Is there a reason why pg_terminate_backend_and_wait are not parallel safe ?\n> (I'm sorry if I miss something in previous mails.)\n>\n\nI'm not quite sure of a use case where existing pg_terminate_backend()\nor for that matter the new pg_terminate_backend_and_wait() and\npg_wait_backend() will ever get used from parallel workers. 
Having\nsaid that, I marked the new functions as parallel safe to keep it the\nway it is with existing pg_terminate_backend().\n\npostgres=# select proparallel, proname, prosrc from pg_proc where\nproname IN ('pg_wait_backend', 'pg_terminate_backend');\n proparallel | proname | prosrc\n-------------+----------------------+-------------------------------\n s | pg_terminate_backend | pg_terminate_backend\n s | pg_wait_backend | pg_wait_backend\n s | pg_terminate_backend | pg_terminate_backend_and_wait\n(3 rows)\n\nAttaching v6 patch. Please have a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 4 Dec 2020 14:43:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nThanks for the new patch, the patch LGTM and works as expected\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 04 Dec 2020 09:29:01 +0000", "msg_from": "hou zhijie <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Fri, Dec 4, 2020 at 10:13 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Dec 4, 2020 at 2:02 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> >\n> > Hi,\n> >\n> > When test pg_terminate_backend_and_wait with parallel query.\n> > I noticed that the function is not defined as parallel safe.\n> >\n> > I am not very familiar with the standard about whether a function should be parallel safe.\n> > But I found the following function are all defined as parallel safe:\n> >\n> > pg_promote\n> > 
pg_terminate_backend(integer)\n> > pg_sleep*\n> >\n> > Is there a reason why pg_terminate_backend_and_wait are not parallel safe ?\n> > (I'm sorry if I miss something in previous mails.)\n> >\n>\n> I'm not quite sure of a use case where existing pg_terminate_backend()\n> or for that matter the new pg_terminate_backend_and_wait() and\n> pg_wait_backend() will ever get used from parallel workers. Having\n> said that, I marked the new functions as parallel safe to keep it the\n> way it is with existing pg_terminate_backend().\n>\n> postgres=# select proparallel, proname, prosrc from pg_proc where\n> proname IN ('pg_wait_backend', 'pg_terminate_backend');\n> proparallel | proname | prosrc\n> -------------+----------------------+-------------------------------\n> s | pg_terminate_backend | pg_terminate_backend\n> s | pg_wait_backend | pg_wait_backend\n> s | pg_terminate_backend | pg_terminate_backend_and_wait\n> (3 rows)\n>\n> Attaching v6 patch. Please have a look.\n\nTaking another look at this patch. Here are a few more comments:\n\nFor pg_terminate_backend, wouldn't it be easier to just create one\nfunction that has a default for wait and a default for timeout?\nInstead of having one version that takes one argument, and another\nversion that takes 3? Seems that would also simplify the\nimplementation by not having to set things up and call indirectly?\n\npg_wait_backend() \"checks the existence of the session\", and \"returns\ntrue on success\". It's unclear from that what's considered a success.\nAlso, technically, it only checks for the existence of the backend and\nnot the session inside, I think?\n\nBut also the fact is that it returns true when the backend is *gone*,\nwhich I think is a very strange definition of \"success\". In fact,\nisn't pg_wait_backend() is a pretty bad name for a function that does\nthis? Maybe pg_wait_for_backend_termination()? 
(the internal function\nhas a name that more matches what it does, but the SQL function does\nnot)\n\nWhy is the for(;;) loop in pg_wait_until_termination not a do {}\nwhile(remainingtime > 0)?\n\nThe wait event needs to be added to the list in the documentation.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sat, 6 Mar 2021 18:06:38 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Sat, Mar 6, 2021 at 10:36 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> > Attaching v6 patch. Please have a look.\n>\n> Taking another look at this patch. Here are a few more comments:\n\nThanks for the review comments.\n\n> For pg_terminate_backend, wouldn't it be easier to just create one\n> function that has a default for wait and a default for timeout?\n> Instead of having one version that takes one argument, and another\n> version that takes 3? Seems that would also simplify the\n> implementation by not having to set things up and call indirectly?\n\nDone.\n\n> pg_wait_backend() \"checks the existence of the session\", and \"returns\n> true on success\". It's unclear from that what's considered a success.\n> Also, technically, it only checks for the existence of the backend and\n> not the session inside, I think?\n> But also the fact is that it returns true when the backend is *gone*,\n> which I think is a very strange definition of \"success\".\n\nYes, it only checks the existence of the backend process. Changed the\nphrasing a bit to make things clear.\n\n> In fact, isn't pg_wait_backend() is a pretty bad name for a function that does\n> this? Maybe pg_wait_for_backend_termination()? 
(the internal function\n> has a name that more matches what it does, but the SQL function does\n> not)\n\npg_wait_for_backend_termination LGTM, so changed pg_wait_backend to that name.\n\n> Why is the for(;;) loop in pg_wait_until_termination not a do {}\n> while(remainingtime > 0)?\n\nDone.\n\n> The wait event needs to be added to the list in the documentation.\n\nAdded to monitoring.sgml's IPC wait event type.\n\nAttaching v7 patch for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 7 Mar 2021 14:39:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Sun, Mar 7, 2021 at 2:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Attaching v7 patch for further review.\n\nAttaching v8 patch after rebasing on to the latest master.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 15 Mar 2021 08:57:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "\n\nOn 2021/03/15 12:27, Bharath Rupireddy wrote:\n> On Sun, Mar 7, 2021 at 2:39 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Attaching v7 patch for further review.\n> \n> Attaching v8 patch after rebasing on to the latest master.\n\nThanks for rebasing the patch!\n\n- WAIT_EVENT_XACT_GROUP_UPDATE\n+ WAIT_EVENT_XACT_GROUP_UPDATE,\n+ WAIT_EVENT_BACKEND_TERMINATION\n\nThese should be listed in alphabetical order.\n\nIn pg_wait_until_termination's do-while loop, ResetLatch() should be called. Otherwise, it would enter busy-loop after any signal arrives. 
Because the latch is kept set and WaitLatch() always exits immediately in that case.\n\n+\t/*\n+\t * Wait in steps of waittime milliseconds until this function exits or\n+\t * timeout.\n+\t */\n+\tint64\twaittime = 10;\n\n10 ms per cycle seems too frequent?\n\n+\t\t\tereport(WARNING,\n+\t\t\t\t\t(errmsg(\"timeout cannot be negative or zero: %lld\",\n+\t\t\t\t\t\t\t(long long int) timeout)));\n+\n+\t\t\tresult = false;\n\nIMO the parameter should be verified before doing the actual thing.\n\nWhy is WARNING thrown in this case? Isn't it better to throw ERROR like pg_promote() does?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 15 Mar 2021 14:08:15 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Mon, Mar 15, 2021 at 10:38 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/03/15 12:27, Bharath Rupireddy wrote:\n> > On Sun, Mar 7, 2021 at 2:39 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> Attaching v7 patch for further review.\n> >\n> > Attaching v8 patch after rebasing on to the latest master.\n>\n> Thanks for rebasing the patch!\n\nThanks for reviewing.\n\n> - WAIT_EVENT_XACT_GROUP_UPDATE\n> + WAIT_EVENT_XACT_GROUP_UPDATE,\n> + WAIT_EVENT_BACKEND_TERMINATION\n>\n> These should be listed in alphabetical order.\n\nDone.\n\n> In pg_wait_until_termination's do-while loop, ResetLatch() should be called. Otherwise, it would enter busy-loop after any signal arrives. 
Because the latch is kept set and WaitLatch() always exits immediately in that case.\n\nDone.\n\n> + /*\n> + * Wait in steps of waittime milliseconds until this function exits or\n> + * timeout.\n> + */\n> + int64 waittime = 10;\n>\n> 10 ms per cycle seems too frequent?\n\nIncreased it to 100msec.\n\n> + ereport(WARNING,\n> + (errmsg(\"timeout cannot be negative or zero: %lld\",\n> + (long long int) timeout)));\n> +\n> + result = false;\n>\n> IMO the parameter should be verified before doing the actual thing.\n\nDone.\n\n> Why is WARNING thrown in this case? Isn't it better to throw ERROR like pg_promote() does?\n\nDone.\n\nAttaching v9 patch for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 16 Mar 2021 15:08:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Tue, Mar 16, 2021 at 10:38 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Mar 15, 2021 at 10:38 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n> > On 2021/03/15 12:27, Bharath Rupireddy wrote:\n> > > On Sun, Mar 7, 2021 at 2:39 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >> Attaching v7 patch for further review.\n> > >\n> > > Attaching v8 patch after rebasing on to the latest master.\n> >\n> > Thanks for rebasing the patch!\n>\n> Thanks for reviewing.\n>\n> > - WAIT_EVENT_XACT_GROUP_UPDATE\n> > + WAIT_EVENT_XACT_GROUP_UPDATE,\n> > + WAIT_EVENT_BACKEND_TERMINATION\n> >\n> > These should be listed in alphabetical order.\n>\n> Done.\n>\n> > In pg_wait_until_termination's do-while loop, ResetLatch() should be called. Otherwise, it would enter busy-loop after any signal arrives. 
Because the latch is kept set and WaitLatch() always exits immediately in that case.\n>\n> Done.\n>\n> > + /*\n> > + * Wait in steps of waittime milliseconds until this function exits or\n> > + * timeout.\n> > + */\n> > + int64 waittime = 10;\n> >\n> > 10 ms per cycle seems too frequent?\n>\n> Increased it to 100msec.\n>\n> > + ereport(WARNING,\n> > + (errmsg(\"timeout cannot be negative or zero: %lld\",\n> > + (long long int) timeout)));\n> > +\n> > + result = false;\n> >\n> > IMO the parameter should be verified before doing the actual thing.\n>\n> Done.\n>\n> > Why is WARNING thrown in this case? Isn't it better to throw ERROR like pg_promote() does?\n>\n> Done.\n>\n> Attaching v9 patch for further review.\n\nAlmost there :)\n\n\nDoes it really make sense that pg_wait_for_backend_termination()\ndefaults to waiting *100 milliseconds*, and then logs a warning? That\nseems extremely short if I'm explicitly asking it to wait.\n\nI'd argue that 100ms is too short for pg_terminate_backend() as well,\nbut I think it's a bit more reasonable there.\n\nWait events should be in alphabetical order in pgstat_get_wait_ipc()\nas well, not just in the header (which was adjusted per Fujii's\ncomment)\n\n\n+ (errmsg(\"could not wait for the termination of\nthe backend with PID %d within %lld milliseconds\",\n\nThat's not true though? The wait succeeded, it just timed out? 
Isn't\nitm ore like \"backend with PID %d did not terminate within %lld\nmilliseconds\"?\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 16 Mar 2021 17:18:31 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Tue, Mar 16, 2021 at 9:48 PM Magnus Hagander <magnus@hagander.net> wrote:\n> Does it really make sense that pg_wait_for_backend_termination()\n> defaults to waiting *100 milliseconds*, and then logs a warning? That\n> seems extremely short if I'm explicitly asking it to wait.\n\nI increased the default wait timeout to 5seconds.\n\n> Wait events should be in alphabetical order in pgstat_get_wait_ipc()\n> as well, not just in the header (which was adjusted per Fujii's\n> comment)\n\nDone.\n\n>\n> + (errmsg(\"could not wait for the termination of\n> the backend with PID %d within %lld milliseconds\",\n>\n> That's not true though? The wait succeeded, it just timed out? Isn't\n> itm ore like \"backend with PID %d did not terminate within %lld\n> milliseconds\"?\n\nLooks better. 
Done.\n\nAttaching v10 patch for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Mar 2021 07:01:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "At Wed, 17 Mar 2021 07:01:39 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Attaching v10 patch for further review.\n\nThe time-out mechanism doesn't count remainingtime as expected,\nconcretely it does the following.\n\ndo {\n kill();\n WaitLatch(WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH, waittime);\n ResetLatch(MyLatch);\n remainingtime -= waittime;\n} while (remainingtime > 0);\n\nSo, the WaitLatch doesn't consume as much time as the set waittime in\ncase of latch set. remainingtime reduces faster than the real at the\niteration.\n\nIt wouldn't happen actually but I concern about PID recycling. We can\nmake sure to get rid of the fear by checking for our BEENTRY instead\nof PID. However, it seems to me that some additional function is\nneeded in pgstat.c so that we can check the realtime value of\nPgBackendStatus, which might be too much.\n\n\n+\t/* If asked to wait, check whether the timeout value is valid or not. */\n+\tif (wait && pid != MyProcPid)\n+\t{\n+\t\ttimeout = PG_GETARG_INT64(2);\n+\n+\t\tif (timeout <= 0)\n+\t\t\tereport(ERROR,\n+\t\t\t\t\t(errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+\t\t\t\t\t errmsg(\"\\\"timeout\\\" must not be negative or zero\")));\n\nThis means that pg_terminate_backend accepts negative timeouts when\nterminating myself, which looks odd.\n\nIs there any reason to reject 0 as timeout?\n\n+\t * Wait only if requested and the termination is successful. 
Self\n+\t * termination is allowed but waiting is not.\n+\t */\n+\tif (wait && pid != MyProcPid && result)\n+\t\tresult = pg_wait_until_termination(pid, timeout);\n\nWhy don't we wait for myself to be terminated? There's no guarantee\nthat myself will be terminated without failure. (I agree that that is\nnot so useful, but I think there's no reason not to do so.)\n\n\nThe first suggested signature for pg_terminate_backend() with timeout\nwas pg_terminate_backend(pid, timeout). The current signature (pid,\nwait?, timeout) looks redundant. Maybe the reason for rejecting 0\nastimeout is pg_terminate_backend(pid, true, 0) looks odd but it we\ncan wait forever in that case (as other features does). On the other\nhand pg_terminate_backend(pid, false, 100) is apparently odd but this\npatch doesn't seem to reject it. If there's no considerable reason\nfor the current signature, I would suggest that:\n\npg_terminate_backend(pid, timeout), where it waits forever if timeout\nis zero and waits for the timeout if positive. Negative values are not\naccepted.\n\nThat being said, I didn't find the disucssion about allowing default\ntimeout value by separating the boolean, if it is the consensus on\nthis thread, sorry for the noise.\n\n\n+\t\t\t\tereport(WARNING,\n+\t\t\t\t\t\t(errmsg(\"could not check the existence of the backend with PID %d: %m\",\n+\t\t\t\t\t\t\t\tpid)));\n+\t\t\t\treturn false;\n\nI think this is worth ERROR. 
We can avoid this handling if we look\ninto PgBackendEntry instead.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 17 Mar 2021 11:58:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Wed, Mar 17, 2021 at 8:28 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 17 Mar 2021 07:01:39 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > Attaching v10 patch for further review.\n>\n> The time-out mechanism doesn't count remainingtime as expected,\n> concretely it does the following.\n>\n> do {\n> kill();\n> WaitLatch(WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH, waittime);\n> ResetLatch(MyLatch);\n> remainingtime -= waittime;\n> } while (remainingtime > 0);\n>\n> So, the WaitLatch doesn't consume as much time as the set waittime in\n> case of latch set. remainingtime reduces faster than the real at the\n> iteration.\n\nWaitLatch can exit without waiting for the waittime duration whenever\nthe MyLatch is set (SetLatch). Now the question is how frequently\nSetLatch can get called in a backend? For instance, if we keep calling\npg_reload_conf in any of the backends in the cluster, then the\nSetLatch will be called and the timeout in pg_wait_until_termination\nwill be reached fastly. I see that this problem can also exist in\ncase of pg_promote function. Similarly it may exist in other places\nwhere we have WaitLatch for timeouts.\n\nIMO, the frequency of SetLatch calls may not be that much in real time\nscenarios. If at all, the latch gets set too frequently, then the\nterminate and wait functions might timeout earlier. But is it a\ncritical problem to worry about? (IMHO, it's not that critical) If\nyes, we might as well need to fix it (I don't know how?) 
in other\ncritical areas like pg_promote?\n\n> It wouldn't happen actually but I concern about PID recycling. We can\n> make sure to get rid of the fear by checking for our BEENTRY instead\n> of PID. However, it seems to me that some additional function is\n> needed in pgstat.c so that we can check the realtime value of\n> PgBackendStatus, which might be too much.\n\nThe aim of the wait logic is to ensure that the process is gone from\nthe system processes that is why using kill(), not it's entries are\ngone from the shared memory.\n\n> + /* If asked to wait, check whether the timeout value is valid or not. */\n> + if (wait && pid != MyProcPid)\n> + {\n> + timeout = PG_GETARG_INT64(2);\n> +\n> + if (timeout <= 0)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> + errmsg(\"\\\"timeout\\\" must not be negative or zero\")));\n>\n> This means that pg_terminate_backend accepts negative timeouts when\n> terminating myself, which looks odd.\n\nI will change this.\n\n> Is there any reason to reject 0 as timeout?\n\nActually, timeout 0 should mean that \"don't wait\" and we can error out\non negative values. Thoughts?\n\n> + * Wait only if requested and the termination is successful. Self\n> + * termination is allowed but waiting is not.\n> + */\n> + if (wait && pid != MyProcPid && result)\n> + result = pg_wait_until_termination(pid, timeout);\n>\n> Why don't we wait for myself to be terminated? There's no guarantee\n> that myself will be terminated without failure. (I agree that that is\n> not so useful, but I think there's no reason not to do so.)\n\nWe could programmatically allow it to wait in case of self termination\nand it doesn't make any difference to the user, they would see\n\"Terminating connection due to administrator command\" FATAL error. I\ncan remove pid != MyProcPid.\n\n> The first suggested signature for pg_terminate_backend() with timeout\n> was pg_terminate_backend(pid, timeout). 
The current signature (pid,\n> wait?, timeout) looks redundant. Maybe the reason for rejecting 0\n> astimeout is pg_terminate_backend(pid, true, 0) looks odd but it we\n> can wait forever in that case (as other features does). On the other\n> hand pg_terminate_backend(pid, false, 100) is apparently odd but this\n> patch doesn't seem to reject it. If there's no considerable reason\n> for the current signature, I would suggest that:\n>\n> pg_terminate_backend(pid, timeout), where it waits forever if timeout\n> is zero and waits for the timeout if positive. Negative values are not\n> accepted.\n\nSo, as stated above, how about a timeout 0 (which is default) telling\n\"don't wait\", negative error out, a positive milliseconds value\nindicating that we should wait after termination?\n\nAnd for pg_wait_for_backend_termination timeout 0 or negative, we error out?\n\nIMO, the above semantics are better than timeout 0 meaning \"wait\nforever\". Thoughts?\n\n> + ereport(WARNING,\n> + (errmsg(\"could not check the existence of the backend with PID %d: %m\",\n> + pid)));\n> + return false;\n>\n> I think this is worth ERROR. We can avoid this handling if we look\n> into PgBackendEntry instead.\n\nI will change it to ERROR.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Mar 2021 14:33:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "Hi,\nw.r.t. 
WaitLatch(), if its return value is WL_TIMEOUT, we know the\nspecified timeout has elapsed.\nIt seems WaitLatch() can be enhanced to also return the actual duration of\nthe wait.\nThis way, the caller can utilize the duration directly.\n\nAs for other places where WaitLatch() is called, similar change can be\napplied on a per-case basis (with separate patches, not under this topic).\n\nCheers\n\nOn Wed, Mar 17, 2021 at 2:04 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Wed, Mar 17, 2021 at 8:28 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 17 Mar 2021 07:01:39 +0530, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote in\n> > > Attaching v10 patch for further review.\n> >\n> > The time-out mechanism doesn't count remainingtime as expected,\n> > concretely it does the following.\n> >\n> > do {\n> > kill();\n> > WaitLatch(WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH, waittime);\n> > ResetLatch(MyLatch);\n> > remainingtime -= waittime;\n> > } while (remainingtime > 0);\n> >\n> > So, the WaitLatch doesn't consume as much time as the set waittime in\n> > case of latch set. remainingtime reduces faster than the real at the\n> > iteration.\n>\n> WaitLatch can exit without waiting for the waittime duration whenever\n> the MyLatch is set (SetLatch). Now the question is how frequently\n> SetLatch can get called in a backend? For instance, if we keep calling\n> pg_reload_conf in any of the backends in the cluster, then the\n> SetLatch will be called and the timeout in pg_wait_until_termination\n> will be reached fastly. I see that this problem can also exist in\n> case of pg_promote function. Similarly it may exist in other places\n> where we have WaitLatch for timeouts.\n>\n> IMO, the frequency of SetLatch calls may not be that much in real time\n> scenarios. If at all, the latch gets set too frequently, then the\n> terminate and wait functions might timeout earlier. 
But is it a\n> critical problem to worry about? (IMHO, it's not that critical) If\n> yes, we might as well need to fix it (I don't know how?) in other\n> critical areas like pg_promote?\n>\n> > It wouldn't happen actually but I concern about PID recycling. We can\n> > make sure to get rid of the fear by checking for our BEENTRY instead\n> > of PID. However, it seems to me that some additional function is\n> > needed in pgstat.c so that we can check the realtime value of\n> > PgBackendStatus, which might be too much.\n>\n> The aim of the wait logic is to ensure that the process is gone from\n> the system processes that is why using kill(), not it's entries are\n> gone from the shared memory.\n>\n> > + /* If asked to wait, check whether the timeout value is valid or\n> not. */\n> > + if (wait && pid != MyProcPid)\n> > + {\n> > + timeout = PG_GETARG_INT64(2);\n> > +\n> > + if (timeout <= 0)\n> > + ereport(ERROR,\n> > +\n> (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> > + errmsg(\"\\\"timeout\\\" must not be\n> negative or zero\")));\n> >\n> > This means that pg_terminate_backend accepts negative timeouts when\n> > terminating myself, which looks odd.\n>\n> I will change this.\n>\n> > Is there any reason to reject 0 as timeout?\n>\n> Actually, timeout 0 should mean that \"don't wait\" and we can error out\n> on negative values. Thoughts?\n>\n> > + * Wait only if requested and the termination is successful. Self\n> > + * termination is allowed but waiting is not.\n> > + */\n> > + if (wait && pid != MyProcPid && result)\n> > + result = pg_wait_until_termination(pid, timeout);\n> >\n> > Why don't we wait for myself to be terminated? There's no guarantee\n> > that myself will be terminated without failure. 
(I agree that that is\n> > not so useful, but I think there's no reason not to do so.)\n>\n> We could programmatically allow it to wait in case of self termination\n> and it doesn't make any difference to the user, they would see\n> \"Terminating connection due to administrator command\" FATAL error. I\n> can remove pid != MyProcPid.\n>\n> > The first suggested signature for pg_terminate_backend() with timeout\n> > was pg_terminate_backend(pid, timeout). The current signature (pid,\n> > wait?, timeout) looks redundant. Maybe the reason for rejecting 0\n> > astimeout is pg_terminate_backend(pid, true, 0) looks odd but it we\n> > can wait forever in that case (as other features does). On the other\n> > hand pg_terminate_backend(pid, false, 100) is apparently odd but this\n> > patch doesn't seem to reject it. If there's no considerable reason\n> > for the current signature, I would suggest that:\n> >\n> > pg_terminate_backend(pid, timeout), where it waits forever if timeout\n> > is zero and waits for the timeout if positive. Negative values are not\n> > accepted.\n>\n> So, as stated above, how about a timeout 0 (which is default) telling\n> \"don't wait\", negative error out, a positive milliseconds value\n> indicating that we should wait after termination?\n>\n> And for pg_wait_for_backend_termination timeout 0 or negative, we error\n> out?\n>\n> IMO, the above semantics are better than timeout 0 meaning \"wait\n> forever\". Thoughts?\n>\n> > + ereport(WARNING,\n> > + (errmsg(\"could not check\n> the existence of the backend with PID %d: %m\",\n> > + pid)));\n> > + return false;\n> >\n> > I think this is worth ERROR. We can avoid this handling if we look\n> > into PgBackendEntry instead.\n>\n> I will change it to ERROR.\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n\nHi,w.r.t. 
WaitLatch(), if its return value is WL_TIMEOUT, we know the specified timeout has elapsed.It seems WaitLatch() can be enhanced to also return the actual duration of the wait.This way, the caller can utilize the duration directly.As for other places where WaitLatch() is called, similar change can be applied on a per-case basis (with separate patches, not under this topic).CheersOn Wed, Mar 17, 2021 at 2:04 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Wed, Mar 17, 2021 at 8:28 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 17 Mar 2021 07:01:39 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > Attaching v10 patch for further review.\n>\n> The time-out mechanism doesn't count remainingtime as expected,\n> concretely it does the following.\n>\n> do {\n>   kill();\n>   WaitLatch(WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH, waittime);\n>   ResetLatch(MyLatch);\n>   remainingtime -= waittime;\n> } while (remainingtime > 0);\n>\n> So, the WaitLatch doesn't consume as much time as the set waittime in\n> case of latch set. remainingtime reduces faster than the real at the\n> iteration.\n\nWaitLatch can exit without waiting for the waittime duration whenever\nthe MyLatch is set (SetLatch). Now the question is how frequently\nSetLatch can get called in a backend? For instance, if we keep calling\npg_reload_conf in any of the backends in the cluster, then the\nSetLatch will be called and the timeout in pg_wait_until_termination\nwill be reached fastly.  I see that this problem can also exist in\ncase of pg_promote function. Similarly it may exist in other places\nwhere we have WaitLatch for timeouts.\n\nIMO, the frequency of SetLatch calls may not be that much in real time\nscenarios. If at all, the latch gets set too frequently, then the\nterminate and wait functions might timeout earlier. But is it a\ncritical problem to worry about? 
(IMHO, it's not that critical) If\nyes, we might as well need to fix it (I don't know how?) in other\ncritical areas like pg_promote?\n\n> It wouldn't happen actually but I concern about PID recycling. We can\n> make sure to get rid of the fear by checking for our BEENTRY instead\n> of PID.  However, it seems to me that some additional function is\n> needed in pgstat.c so that we can check the realtime value of\n> PgBackendStatus, which might be too much.\n\nThe aim of the wait logic is to ensure that the process is gone from\nthe system processes that is why using kill(), not it's entries are\ngone from the shared memory.\n\n> +       /* If asked to wait, check whether the timeout value is valid or not. */\n> +       if (wait && pid != MyProcPid)\n> +       {\n> +               timeout = PG_GETARG_INT64(2);\n> +\n> +               if (timeout <= 0)\n> +                       ereport(ERROR,\n> +                                       (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> +                                        errmsg(\"\\\"timeout\\\" must not be negative or zero\")));\n>\n> This means that pg_terminate_backend accepts negative timeouts when\n> terminating myself, which looks odd.\n\nI will change this.\n\n> Is there any reason to reject 0 as timeout?\n\nActually, timeout 0 should mean that \"don't wait\" and we can error out\non negative values. Thoughts?\n\n> +        * Wait only if requested and the termination is successful. Self\n> +        * termination is allowed but waiting is not.\n> +        */\n> +       if (wait && pid != MyProcPid && result)\n> +               result = pg_wait_until_termination(pid, timeout);\n>\n> Why don't we wait for myself to be terminated?  There's no guarantee\n> that myself will be terminated without failure.  
(I agree that that is\n> not so useful, but I think there's no reason not to do so.)\n\nWe could programmatically allow it to wait in case of self termination\nand it doesn't make any difference to the user, they would see\n\"Terminating connection due to administrator command\" FATAL error. I\ncan remove pid != MyProcPid.\n\n> The first suggested signature for pg_terminate_backend() with timeout\n> was pg_terminate_backend(pid, timeout).  The current signature (pid,\n> wait?, timeout) looks redundant.  Maybe the reason for rejecting 0\n> astimeout is pg_terminate_backend(pid, true, 0) looks odd but it we\n> can wait forever in that case (as other features does).  On the other\n> hand pg_terminate_backend(pid, false, 100) is apparently odd but this\n> patch doesn't seem to reject it.  If there's no considerable reason\n> for the current signature, I would suggest that:\n>\n> pg_terminate_backend(pid, timeout), where it waits forever if timeout\n> is zero and waits for the timeout if positive. Negative values are not\n> accepted.\n\nSo, as stated above, how about a timeout 0 (which is default) telling\n\"don't wait\", negative error out, a positive milliseconds value\nindicating that we should wait after termination?\n\nAnd for pg_wait_for_backend_termination timeout 0 or negative, we error out?\n\nIMO, the above semantics are better than timeout 0 meaning \"wait\nforever\". Thoughts?\n\n> +                               ereport(WARNING,\n> +                                               (errmsg(\"could not check the existence of the backend with PID %d: %m\",\n> +                                                               pid)));\n> +                               return false;\n>\n> I think this is worth ERROR. 
We can avoid this handling if we look\n> into PgBackendEntry instead.\n\nI will change it to ERROR.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Mar 2021 02:19:23 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "\n\nOn 2021/03/17 11:58, Kyotaro Horiguchi wrote:\n> The first suggested signature for pg_terminate_backend() with timeout\n> was pg_terminate_backend(pid, timeout). The current signature (pid,\n> wait?, timeout) looks redundant. Maybe the reason for rejecting 0\n> astimeout is pg_terminate_backend(pid, true, 0) looks odd but it we\n> can wait forever in that case (as other features does).\n\nI'm afraid that \"waiting forever\" can cause something like deadlock situation,\nas follows. We have no mechanism to detect this for now.\n\n1. backend 1 took the lock on the relation A.\n2. backend 2 took the lock on the relation B.\n3. backend 1 tries to take the lock on the relation B and is waiting for\n the lock to be released.\n4. backend 2 accidentally executes pg_wait_for_backend_termination() with\n the pid of backend 1, and then is waiting for backend 1 to be terminated.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 18 Mar 2021 16:16:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Thu, Mar 18, 2021 at 12:46 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/03/17 11:58, Kyotaro Horiguchi wrote:\n> > The first suggested signature for pg_terminate_backend() with timeout\n> > was pg_terminate_backend(pid, timeout). The current signature (pid,\n> > wait?, timeout) looks redundant. 
Maybe the reason for rejecting 0\n> > astimeout is pg_terminate_backend(pid, true, 0) looks odd but it we\n> > can wait forever in that case (as other features does).\n>\n> I'm afraid that \"waiting forever\" can cause something like deadlock situation,\n> as follows. We have no mechanism to detect this for now.\n>\n> 1. backend 1 took the lock on the relation A.\n> 2. backend 2 took the lock on the relation B.\n> 3. backend 1 tries to take the lock on the relation B and is waiting for\n> the lock to be released.\n> 4. backend 2 accidentally executes pg_wait_for_backend_termination() with\n> the pid of backend 1, and then is waiting for backend 1 to be terminated.\n\nYeah this can happen.\n\nSo, as stated upthread, how about a timeout 0 (which is default)\ntelling \"don't wait\", erroring out on negative value and when\nspecified a positive milliseconds value, then wait for that amount of\ntime. With this semantics, we can remove the wait flag for\npg_terminate_backend(pid, 0). Thoughts?\n\nAnd for pg_wait_for_backend_termination timeout 0 or negative, we\nerror out. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Mar 2021 13:11:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Thu, Mar 18, 2021 at 1:11 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Mar 18, 2021 at 12:46 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n> > On 2021/03/17 11:58, Kyotaro Horiguchi wrote:\n> > > The first suggested signature for pg_terminate_backend() with timeout\n> > > was pg_terminate_backend(pid, timeout). The current signature (pid,\n> > > wait?, timeout) looks redundant. 
Maybe the reason for rejecting 0\n> > > astimeout is pg_terminate_backend(pid, true, 0) looks odd but it we\n> > > can wait forever in that case (as other features does).\n> >\n> > I'm afraid that \"waiting forever\" can cause something like deadlock situation,\n> > as follows. We have no mechanism to detect this for now.\n> >\n> > 1. backend 1 took the lock on the relation A.\n> > 2. backend 2 took the lock on the relation B.\n> > 3. backend 1 tries to take the lock on the relation B and is waiting for\n> > the lock to be released.\n> > 4. backend 2 accidentally executes pg_wait_for_backend_termination() with\n> > the pid of backend 1, and then is waiting for backend 1 to be terminated.\n>\n> Yeah this can happen.\n>\n> So, as stated upthread, how about a timeout 0 (which is default)\n> telling \"don't wait\", erroring out on negative value and when\n> specified a positive milliseconds value, then wait for that amount of\n> time. With this semantics, we can remove the wait flag for\n> pg_terminate_backend(pid, 0). Thoughts?\n>\n> And for pg_wait_for_backend_termination timeout 0 or negative, we\n> error out. Thoughts?\n\nAttaching v11 patch that removed the wait boolean flag in the\npg_terminate_backend and timeout 0 indicates \"no wait\", negative value\n\"errors out\", positive value \"waits for those many milliseconds\". Also\naddressed other review comments that I received upthread. 
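The v11 convention just summarized — zero means terminate without waiting, negative is rejected, positive waits that many milliseconds — can be pinned down in a tiny sketch. `classify_timeout` and `TimeoutAction` are illustrative names invented here, not code from the patch.

```c
#include <assert.h>

/*
 * Sketch of the v11 timeout convention for pg_terminate_backend(pid, timeout):
 *   < 0  -> rejected (ereport(ERROR) in the real function)
 *   == 0 -> terminate only, do not wait
 *   > 0  -> terminate, then wait up to that many milliseconds
 * These names are illustrative only.
 */
typedef enum TimeoutAction
{
    TIMEOUT_INVALID,
    TIMEOUT_NO_WAIT,
    TIMEOUT_WAIT
} TimeoutAction;

static TimeoutAction classify_timeout(long long timeout_ms)
{
    if (timeout_ms < 0)
        return TIMEOUT_INVALID;
    if (timeout_ms == 0)
        return TIMEOUT_NO_WAIT;
    return TIMEOUT_WAIT;
}
```

Keeping the classification in one place makes the difference from the earlier "zero waits forever" proposal explicit: under these semantics no caller can accidentally block without bound, which also sidesteps the deadlock scenario raised earlier in the thread.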
Please\nreview v11 further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 19 Mar 2021 11:37:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Fri, Mar 19, 2021 at 11:37 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Attaching v11 patch that removed the wait boolean flag in the\n> pg_terminate_backend and timeout 0 indicates \"no wait\", negative value\n> \"errors out\", positive value \"waits for those many milliseconds\". Also\n> addressed other review comments that I received upthread. Please\n> review v11 further.\n\nAttaching v12 patch after rebasing onto the latest master.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 5 Apr 2021 08:51:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Mon, Apr 5, 2021 at 5:21 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Mar 19, 2021 at 11:37 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Attaching v11 patch that removed the wait boolean flag in the\n> > pg_terminate_backend and timeout 0 indicates \"no wait\", negative value\n> > \"errors out\", positive value \"waits for those many milliseconds\". Also\n> > addressed other review comments that I received upthread. Please\n> > review v11 further.\n>\n> Attaching v12 patch after rebasing onto the latest master.\n\nI've applied this patch with some minor changes.\n\nI rewrote some parts of the documentation to make it more focused on\nthe end user rather than the implementation. 
I also made a small\nsimplification in pg_terminate_backend() which removes the \"wait\"\nvariable (seems like a bit of a leftover since the time when it was a\nseparate argument). And picked a correct oid for the function (oids\n8000-9999 should be used for new patches, 16386 is in the user area of\noids)\n\nThanks!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 8 Apr 2021 11:41:17 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Thu, Apr 08, 2021 at 11:41:17AM +0200, Magnus Hagander wrote:\n> I've applied this patch with some minor changes.\n\nI wondered if the new pg_wait_for_backend_termination() could replace\nregress.c:wait_pid(). I think it can't, because the new function requires the\nbackend to still be present in the procarray:\n\n\tproc = BackendPidGetProc(pid);\n\n\tif (proc == NULL)\n\t{\n\t\tereport(WARNING,\n\t\t\t\t(errmsg(\"PID %d is not a PostgreSQL server process\", pid)));\n\n\t\tPG_RETURN_BOOL(false);\n\t}\n\n\tPG_RETURN_BOOL(pg_wait_until_termination(pid, timeout));\n\nIf a backend has left the procarray but not yet left the kernel process table,\nthis function will issue the warning and not wait for the final exit. Given\nthat limitation, is pg_wait_for_backend_termination() useful enough? If\nwaiting for procarray departure is enough, should pg_wait_until_termination()\ncheck BackendPidGetProc(pid) instead of kill(0, pid), so it can return\nearlier? 
I can see the value of adding the pg_terminate_backend() timeout\nargument, in any case.\n\n\n", "msg_date": "Mon, 31 May 2021 20:48:58 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Tue, Jun 1, 2021 at 9:19 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Thu, Apr 08, 2021 at 11:41:17AM +0200, Magnus Hagander wrote:\n> > I've applied this patch with some minor changes.\n\nThanks for taking a look at this function.\n\n> I wondered if the new pg_wait_for_backend_termination() could replace\n> regress.c:wait_pid().\n\nI was earlier thinking of replacing the wait_pid() with the new\nfunction but arrived at a similar conclusion as yours.\n\n> I think it can't, because the new function requires the\n> backend to still be present in the procarray:\n>\n> proc = BackendPidGetProc(pid);\n>\n> if (proc == NULL)\n> {\n> ereport(WARNING,\n> (errmsg(\"PID %d is not a PostgreSQL server process\", pid)));\n>\n> PG_RETURN_BOOL(false);\n> }\n>\n> PG_RETURN_BOOL(pg_wait_until_termination(pid, timeout));\n>\n> If a backend has left the procarray but not yet left the kernel process table,\n> this function will issue the warning and not wait for the final exit.\n\nYes, if the backend is not in procarray but still in the kernel\nprocess table, it emits a warning \"PID %d is not a PostgreSQL server\nprocess\" and returns false.\n\n> Given that limitation, is pg_wait_for_backend_termination() useful enough? If\n> waiting for procarray departure is enough, should pg_wait_until_termination()\n> check BackendPidGetProc(pid) instead of kill(0, pid), so it can return\n> earlier?\n\nWe can just remove BackendPidGetProc(pid) in\npg_wait_for_backend_termination. With this change, we can get rid of\nthe wait_pid() from regress.c. But, my concern is that the\npg_wait_for_backend_termination() can also check non-postgres server\nprocess pid. Is this okay? 
In that case, this function becomes a\ngeneric(OS level function) rather than a postgres server specific\nfunction. I'm not sure if all agree to that. Thoughts?\n\n> I can see the value of adding the pg_terminate_backend() timeout\n> argument, in any case.\n\nTrue. We can leave pg_terminate_backend() as is.\n\nWith Regards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 1 Jun 2021 13:25:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Tue, Jun 01, 2021 at 01:25:24PM +0530, Bharath Rupireddy wrote:\n> On Tue, Jun 1, 2021 at 9:19 AM Noah Misch <noah@leadboat.com> wrote:\n> > Given that limitation, is pg_wait_for_backend_termination() useful enough? If\n> > waiting for procarray departure is enough, should pg_wait_until_termination()\n> > check BackendPidGetProc(pid) instead of kill(0, pid), so it can return\n> > earlier?\n> \n> We can just remove BackendPidGetProc(pid) in\n> pg_wait_for_backend_termination. With this change, we can get rid of\n> the wait_pid() from regress.c. But, my concern is that the\n> pg_wait_for_backend_termination() can also check non-postgres server\n> process pid. Is this okay?\n\nIt may or may not be okay. I would not feel good about it.\n\n> In that case, this function becomes a\n> generic(OS level function) rather than a postgres server specific\n> function. I'm not sure if all agree to that. Thoughts?\n\nMy preference is to remove pg_wait_for_backend_termination(). 
The use case\nthat prompted this thread used pg_terminate_backend(pid, 180000); it doesn't\nneed pg_wait_for_backend_termination().\n\n\n", "msg_date": "Fri, 4 Jun 2021 18:32:36 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Sat, Jun 5, 2021 at 7:02 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Tue, Jun 01, 2021 at 01:25:24PM +0530, Bharath Rupireddy wrote:\n> > On Tue, Jun 1, 2021 at 9:19 AM Noah Misch <noah@leadboat.com> wrote:\n> > > Given that limitation, is pg_wait_for_backend_termination() useful enough? If\n> > > waiting for procarray departure is enough, should pg_wait_until_termination()\n> > > check BackendPidGetProc(pid) instead of kill(0, pid), so it can return\n> > > earlier?\n> >\n> > We can just remove BackendPidGetProc(pid) in\n> > pg_wait_for_backend_termination. With this change, we can get rid of\n> > the wait_pid() from regress.c. But, my concern is that the\n> > pg_wait_for_backend_termination() can also check non-postgres server\n> > process pid. Is this okay?\n>\n> It may or may not be okay. I would not feel good about it.\n>\n> > In that case, this function becomes a\n> > generic(OS level function) rather than a postgres server specific\n> > function. I'm not sure if all agree to that. Thoughts?\n>\n> My preference is to remove pg_wait_for_backend_termination(). The use case\n> that prompted this thread used pg_terminate_backend(pid, 180000); it doesn't\n> need pg_wait_for_backend_termination().\n\nI was earlier thinking that the function\npg_wait_for_backend_termination() will be useful:\n1) If the user wants to pg_terminate_backend(<<pid>>); and\npg_wait_for_backend_termination(<<pid>>, <<timeout>>); separately. It\nseems like the proc array entry will be removed as part of SIGTERM\nprocessing (see [1]) and the BackendPidGetProc will return NULL. 
So,\nit's not useful here.\n2) If the user wants to pg_wait_for_backend_termination(<<pid>>,\n<<timeout>>);, thinking that some event might cause the backend to be\nterminated within the <<timeout>>. So, it's still useful here.\n\n[1]\n(gdb) bt\n#0 ProcArrayRemove (proc=0x55b27f26356c\n<CleanupInvalidationState+278>, latestXid=32764)\n at procarray.c:526\n#1 0x000055b27f281c9d in RemoveProcFromArray (code=1, arg=0) at proc.c:812\n#2 0x000055b27f2542ce in shmem_exit (code=1) at ipc.c:272\n#3 0x000055b27f2540d5 in proc_exit_prepare (code=1) at ipc.c:194\n#4 0x000055b27f254022 in proc_exit (code=1) at ipc.c:107\n#5 0x000055b27f449479 in errfinish (filename=0x55b27f61cd65\n\"postgres.c\", lineno=3191,\n funcname=0x55b27f61e770 <__func__.40727> \"ProcessInterrupts\") at elog.c:666\n#6 0x000055b27f29097e in ProcessInterrupts () at postgres.c:3191\n#7 0x000055b27f28cbf0 in ProcessClientReadInterrupt (blocked=true) at\npostgres.c:499\n\nWith Regards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 5 Jun 2021 12:06:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Sat, Jun 05, 2021 at 12:06:46PM +0530, Bharath Rupireddy wrote:\n> On Sat, Jun 5, 2021 at 7:02 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Tue, Jun 01, 2021 at 01:25:24PM +0530, Bharath Rupireddy wrote:\n> > > On Tue, Jun 1, 2021 at 9:19 AM Noah Misch <noah@leadboat.com> wrote:\n> > > > Given that limitation, is pg_wait_for_backend_termination() useful enough? If\n> > > > waiting for procarray departure is enough, should pg_wait_until_termination()\n> > > > check BackendPidGetProc(pid) instead of kill(0, pid), so it can return\n> > > > earlier?\n> > >\n> > > We can just remove BackendPidGetProc(pid) in\n> > > pg_wait_for_backend_termination. With this change, we can get rid of\n> > > the wait_pid() from regress.c. 
But, my concern is that the\n> > > pg_wait_for_backend_termination() can also check non-postgres server\n> > > process pid. Is this okay?\n> >\n> > It may or may not be okay. I would not feel good about it.\n> >\n> > > In that case, this function becomes a\n> > > generic(OS level function) rather than a postgres server specific\n> > > function. I'm not sure if all agree to that. Thoughts?\n> >\n> > My preference is to remove pg_wait_for_backend_termination(). The use case\n> > that prompted this thread used pg_terminate_backend(pid, 180000); it doesn't\n> > need pg_wait_for_backend_termination().\n> \n> I was earlier thinking that the function\n> pg_wait_for_backend_termination() will be useful:\n> 1) If the user wants to pg_terminate_backend(<<pid>>); and\n> pg_wait_for_backend_termination(<<pid>>, <<timeout>>); separately. It\n> seems like the proc array entry will be removed as part of SIGTERM\n> processing (see [1]) and the BackendPidGetProc will return NULL. So,\n> it's not useful here.\n> 2) If the user wants to pg_wait_for_backend_termination(<<pid>>,\n> <<timeout>>);, thinking that some event might cause the backend to be\n> terminated within the <<timeout>>. So, it's still useful here.\n\nThat is factual. That pg_wait_for_backend_termination() appears to be useful\nfor (1) but isn't useful for (1) reduces its value. I think it reduces the\nvalue slightly below zero. 
Relevant to that, if a user doesn't care about the\ndistinction between \"backend has left the procarray\" and \"backend's PID has\nleft the kernel process table\", that user can poll pg_stat_activity to achieve\nthe same level of certainty that pg_wait_for_backend_termination() offers.\n\n\n", "msg_date": "Sat, 5 Jun 2021 12:08:01 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Sat, Jun 05, 2021 at 12:08:01PM -0700, Noah Misch wrote:\n> > > My preference is to remove pg_wait_for_backend_termination(). The use case\n> > > that prompted this thread used pg_terminate_backend(pid, 180000); it doesn't\n> > > need pg_wait_for_backend_termination().\n\nIs this an Opened Issue ?\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 11 Jun 2021 20:54:08 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Fri, Jun 11, 2021 at 08:54:08PM -0500, Justin Pryzby wrote:\n> On Sat, Jun 05, 2021 at 12:08:01PM -0700, Noah Misch wrote:\n> > > > My preference is to remove pg_wait_for_backend_termination(). The use case\n> > > > that prompted this thread used pg_terminate_backend(pid, 180000); it doesn't\n> > > > need pg_wait_for_backend_termination().\n> \n> Is this an Opened Issue ?\n\nAn Open Item? Not really, since there's no objective defect. 
Nonetheless,\nthe attached is what I'd like to use.", "msg_date": "Fri, 11 Jun 2021 21:37:50 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Fri, Jun 11, 2021 at 09:37:50PM -0700, Noah Misch wrote:\n> On Fri, Jun 11, 2021 at 08:54:08PM -0500, Justin Pryzby wrote:\n> > On Sat, Jun 05, 2021 at 12:08:01PM -0700, Noah Misch wrote:\n> > > > > My preference is to remove pg_wait_for_backend_termination(). The use case\n> > > > > that prompted this thread used pg_terminate_backend(pid, 180000); it doesn't\n> > > > > need pg_wait_for_backend_termination().\n> > \n> > Is this an Opened Issue ?\n> \n> An Open Item? Not really, since there's no objective defect. Nonetheless,\n> the attached is what I'd like to use.\n\nI think of this as a list of stuff to avoid forgetting that needs to be\naddressed or settled before the release.\n\nIf the value of the new function is marginal, it may be good to remove it, else\nwe're committed to supporting it.\n\nEven if it's not removed, the descriptions should be cleaned up.\n\n| src/include/catalog/pg_proc.dat- descr => 'terminate a backend process and if timeout is specified, wait for its exit or until timeout occurs',\n=> I think doesn't need to change or mention the optional timeout at all\n\n| src/include/catalog/pg_proc.dat-{ oid => '2137', descr => 'wait for a backend process exit or timeout occurs',\n=> should just say \"wait for a backend process to exit\". 
The timeout has a default.\n\n\n", "msg_date": "Sat, 12 Jun 2021 00:12:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Sat, Jun 12, 2021 at 12:12:12AM -0500, Justin Pryzby wrote:\n> Even if it's not removed, the descriptions should be cleaned up.\n> \n> | src/include/catalog/pg_proc.dat- descr => 'terminate a backend process and if timeout is specified, wait for its exit or until timeout occurs',\n> => I think doesn't need to change or mention the optional timeout at all\n\nAgreed, these strings generally give less detail. I can revert that to the\nv13 wording, 'terminate a server process'.\n\n\n", "msg_date": "Sat, 12 Jun 2021 08:21:39 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Sat, Jun 12, 2021 at 08:21:39AM -0700, Noah Misch wrote:\n> On Sat, Jun 12, 2021 at 12:12:12AM -0500, Justin Pryzby wrote:\n> > Even if it's not removed, the descriptions should be cleaned up.\n> > \n> > | src/include/catalog/pg_proc.dat- descr => 'terminate a backend process and if timeout is specified, wait for its exit or until timeout occurs',\n> > => I think doesn't need to change or mention the optional timeout at all\n> \n> Agreed, these strings generally give less detail. 
I can revert that to the\n> v13 wording, 'terminate a server process'.\n\nMaybe you'd also update the release notes.\n\nI suggest some edits from the remaining parts of the original patch.\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex fbc80c1403..b7383bc8aa 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -24998,7 +24998,7 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n milliseconds) and greater than zero, the function waits until the\n process is actually terminated or until the given time has passed. If\n the process is terminated, the function\n- returns <literal>true</literal>. On timeout a warning is emitted and\n+ returns <literal>true</literal>. On timeout, a warning is emitted and\n <literal>false</literal> is returned.\n </para></entry>\n </row>\ndiff --git a/src/backend/storage/ipc/signalfuncs.c b/src/backend/storage/ipc/signalfuncs.c\nindex 837699481c..f12c417854 100644\n--- a/src/backend/storage/ipc/signalfuncs.c\n+++ b/src/backend/storage/ipc/signalfuncs.c\n@@ -187,12 +187,12 @@ pg_wait_until_termination(int pid, int64 timeout)\n }\n \n /*\n- * Signal to terminate a backend process. This is allowed if you are a member\n- * of the role whose process is being terminated. If timeout input argument is\n- * 0 (which is default), then this function just signals the backend and\n- * doesn't wait. Otherwise it waits until given the timeout milliseconds or no\n- * process has the given PID and returns true. On timeout, a warning is emitted\n- * and false is returned.\n+ * Send a signal to terminate a backend process. This is allowed if you are a\n+ * member of the role whose process is being terminated. 
If the timeout input\n+ * argument is 0, then this function just signals the backend and returns true.\n+ * If timeout is nonzero, then it waits until no process has the given PID; if\n+ * the process ends within the timeout, true is returned, and if the timeout is\n+ * exceeded, a warning is emitted and false is returned.\n *\n * Note that only superusers can signal superuser-owned processes.\n */\n@@ -201,7 +201,7 @@ pg_terminate_backend(PG_FUNCTION_ARGS)\n {\n \tint\t\t\tpid;\n \tint\t\t\tr;\n-\tint\t\t\ttimeout;\n+\tint\t\t\ttimeout; /* milliseconds */\n \n \tpid = PG_GETARG_INT32(0);\n \ttimeout = PG_GETARG_INT64(1);\n\n\n", "msg_date": "Sat, 12 Jun 2021 13:27:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Sat, Jun 12, 2021 at 10:07 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Fri, Jun 11, 2021 at 08:54:08PM -0500, Justin Pryzby wrote:\n> > On Sat, Jun 05, 2021 at 12:08:01PM -0700, Noah Misch wrote:\n> > > > > My preference is to remove pg_wait_for_backend_termination(). The use case\n> > > > > that prompted this thread used pg_terminate_backend(pid, 180000); it doesn't\n> > > > > need pg_wait_for_backend_termination().\n> >\n> > Is this an Opened Issue ?\n>\n> An Open Item? Not really, since there's no objective defect. Nonetheless,\n> the attached is what I'd like to use.\n\nThanks. +1 to remove the pg_wait_for_backend_termination function. The\npatch basically looks good to me. I'm attaching an updated patch. I\ncorrected a minor typo in the commit message, took docs and code\ncomment changes suggested by Justin Pryzby.\n\nPlease note that release notes still have the headline \"Add functions\nto wait for backend termination\" of the original commit that added the\npg_wait_for_backend_termination. 
With the removal of it, I'm not quite\nsure if we retain the commit message or tweak it to something like\n\"Add optional timeout parameter to pg_terminate_backend\".\n<!--\nAuthor: Magnus Hagander <magnus@hagander.net>\n2021-04-08 [aaf043257] Add functions to wait for backend termination\n-->\n\nWith Regards,\nBharath Rupireddy.", "msg_date": "Mon, 14 Jun 2021 19:34:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Sat, Jun 12, 2021 at 01:27:43PM -0500, Justin Pryzby wrote:\n> On Sat, Jun 12, 2021 at 08:21:39AM -0700, Noah Misch wrote:\n> > On Sat, Jun 12, 2021 at 12:12:12AM -0500, Justin Pryzby wrote:\n> > > Even if it's not removed, the descriptions should be cleaned up.\n> > > \n> > > | src/include/catalog/pg_proc.dat- descr => 'terminate a backend process and if timeout is specified, wait for its exit or until timeout occurs',\n> > > => I think doesn't need to change or mention the optional timeout at all\n> > \n> > Agreed, these strings generally give less detail. I can revert that to the\n> > v13 wording, 'terminate a server process'.\n> \n> Maybe you'd also update the release notes.\n\nWhat part of the notes did you expect to change that the patch did not change?\n\n> I suggest some edits from the remaining parts of the original patch.\n\nThese look good. I will push them when I push the other part.\n\n\n", "msg_date": "Mon, 14 Jun 2021 09:40:27 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Mon, Jun 14, 2021 at 09:40:27AM -0700, Noah Misch wrote:\n> > > Agreed, these strings generally give less detail. 
I can revert that to the\n> > > v13 wording, 'terminate a server process'.\n\n...\n\n> > Maybe you'd also update the release notes.\n> \n> What part of the notes did you expect to change that the patch did not change?\n\nSorry, I didn't notice that your patch already adjusted the v14 notes.\n\nNote that Bharath also corrected your commit message to say \"unable *to*\", and\nrevert the verbose pg_proc.dat descr change.\n\nThanks,\n-- \nJustin\n\n\n", "msg_date": "Mon, 14 Jun 2021 11:46:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" }, { "msg_contents": "On Mon, Jun 14, 2021 at 07:34:59PM +0530, Bharath Rupireddy wrote:\n> Thanks. +1 to remove the pg_wait_for_backend_termination function. The\n> patch basically looks good to me. I'm attaching an updated patch. I\n> corrected a minor typo in the commit message, took docs and code\n> comment changes suggested by Justin Pryzby.\n\nPushed as two commits. I used your log message typo fix. Here were the diffs\nin your v2 and not in an earlier patch:\n\n> -+ Add an optional wait parameter to <link\n> ++ Add an optional timeout parameter to <link\n\nI used this.\n\n> -+\tint\t\t\ttimeout; /* milliseconds */\n> ++\tint\t\t\ttimeout; /* milliseconds */\n\npgindent chooses a third option, so I ran pgindent instead of using this.\n\n> Please note that release notes still have the headline \"Add functions\n> to wait for backend termination\" of the original commit that added the\n> pg_wait_for_backend_termination. 
With the removal of it, I'm not quite\nsure if we retain the commit message or tweak it to something like\n\"Add optional timeout parameter to pg_terminate_backend\".\n<!--\nAuthor: Magnus Hagander <magnus@hagander.net>\n2021-04-08 [aaf043257] Add functions to wait for backend termination\n-->\n\nThat part is supposed to mirror \"git log --pretty=format:%s\", no matter what\nhappens later. The next set of release note updates might add my latest\ncommit (5f1df62) to this SGML comment, on another line.\n\n\n", "msg_date": "Mon, 14 Jun 2021 17:40:40 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: A new function to wait for the backend exit after termination" } ]
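The thread above discusses pg_wait_until_termination(), which repeatedly probes the target PID with kill(0, pid) until the process disappears or the timeout elapses. Below is a minimal user-space sketch of that polling pattern in Python. It is illustrative only, not the PostgreSQL implementation: the helper name, the 10 ms poll interval, and the use of a reaped child process for the demonstration are all assumptions of this example.

```python
import os
import subprocess
import threading
import time

def wait_until_termination(pid, timeout_ms):
    """Poll until no process with `pid` exists, or until `timeout_ms` elapses.

    Mirrors the loop discussed in the thread: signal 0 checks for process
    existence without delivering anything. The name and the 10 ms poll
    interval are illustrative choices, not taken from the PostgreSQL source.
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)          # signal 0: existence check only
        except ProcessLookupError:
            return True              # PID has left the kernel process table
        time.sleep(0.01)
    return False                     # timed out; caller would emit a warning

# Demonstration: terminate a short-lived child and wait for it to go away.
child = subprocess.Popen(["sleep", "60"])
threading.Thread(target=child.wait, daemon=True).start()  # reap child on exit
child.terminate()                    # SIGTERM, analogous to pg_terminate_backend
ok = wait_until_termination(child.pid, 5000)
print(ok)
```

A caller in the style of pg_terminate_backend(pid, timeout) would send SIGTERM first and then enter a loop like this, warning when the wait returns false, as the documentation excerpt quoted earlier in the thread describes.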
[ { "msg_contents": "Hackers,\n\nRe-sending from -docs [1] with attachment in order to add to commitfest.\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/159981394174.31338.7014519396749859167%40wrigleys.postgresql.org", "msg_date": "Wed, 21 Oct 2020 07:58:48 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "[patch] [doc] Add SELECT clause literals to queries section headers" }, { "msg_contents": ">\n> Hackers,\n>\n> Re-sending from -docs [1] with attachment in order to add to commitfest.\n>\n> David J.\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/159981394174.31338.7014519396749859167%40wrigleys.postgresql.org\n>\n\nedit: attaching the patch", "msg_date": "Wed, 21 Oct 2020 08:04:21 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [patch] [doc] Add SELECT clause literals to queries section\n headers" }, { "msg_contents": "On 21/10/2020 18:04, David G. Johnston wrote:\n>> Hackers,\n>>\n>> Re-sending from -docs [1] with attachment in order to add to commitfest.\n>>\n>> David J.\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/flat/159981394174.31338.7014519396749859167%40wrigleys.postgresql.org\n> \n> edit: attaching the patch\n\nSeems like a good idea. Applied, thanks.\n\n- Heikki\n\n\n", "msg_date": "Mon, 2 Nov 2020 12:57:58 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Add SELECT clause literals to queries section\n headers" } ]
[ { "msg_contents": "Hackers,\n\nOver in -docs [1], where I attached the wrong patch anyway, the poster\nneeded some clarity regarding view updating. A minor documentation patch\nis attached providing just that.\n\nDavid J.\n\n[1] https://www.postgresql.org/message-id/20200303174248.GB5019%40panix.com", "msg_date": "Wed, 21 Oct 2020 08:14:09 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "[patch] [doc] Introduce view updating options more succinctly" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\nI wonder, why this patch didn't get a review during the CF.\r\nThis minor improvement looks good to me, so I mark it Ready for Committer.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 30 Nov 2020 20:21:19 +0000", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Introduce view updating options more succinctly" }, { "msg_contents": "On Mon, Nov 30, 2020 at 9:22 PM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n>\n> I wonder, why this patch didn't get a review during the CF.\n> This minor improvement looks good to me, so I mark it Ready for Committer.\n\nAgreed, and how it managed to pass multiple CFs without getting applied :)\n\nI've applied it now. Thanks, David!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sat, 6 Mar 2021 17:40:37 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Introduce view updating options more succinctly" } ]
[ { "msg_contents": "Hackers,\n\nBug # 16519 [1] is another report of confusion regarding trying to use\nparameters in improper locations - specifically the SET ROLE command within\npl/pgsql. I'm re-attaching the doc patch and am adding it to the\ncommitfest.\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/message-id/16519-9ef04828d058a319%40postgresql.org", "msg_date": "Wed, 21 Oct 2020 08:21:52 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "[patch] [doc] Minor variable related cleanup and rewording of plpgsql\n docs" }, { "msg_contents": "čt 26. 11. 2020 v 6:41 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> Hackers,\n>\n> Bug # 16519 [1] is another report of confusion regarding trying to use\n> parameters in improper locations - specifically the SET ROLE command within\n> pl/pgsql. I'm re-attaching the doc patch and am adding it to the\n> commitfest.\n>\n\nI checked this patch, and I think so it is correct - my comments are just\nabout enhancing by some examples\n\nMaybe for following sentence the some examples can be practical\n\n+ If the SQL command being executed is incapable of returning a result\n+ it does not accept query parameters.\n </para>\n\n+ it does not accept query parameters (usually DDL commands like CREATE\nTABLE, DROP TABLE, ALTER ... ).\n\n+ Query parameters will only be substituted in places where\nsyntactically allowed\n+ (in particular, identifiers can never be replaced with a query\nparameter.)\n+ As an extreme case, consider this example of poor programming style:\n\nIn this case, I miss the more precious specification of identifiers\n\n+ (in particular, SQL identifiers (like schema, table, column names) can\nnever be replaced with a query parameter.)\n\nRegards\n\nPavel\n\n\n\n> David J.\n>\n> [1]\n> https://www.postgresql.org/message-id/16519-9ef04828d058a319%40postgresql.org\n>\n>
", "msg_date": "Thu, 26 Nov 2020 08:48:48 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "On Thu, Nov 26, 2020 at 12:49 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> čt 26. 11. 2020 v 6:41 odesílatel David G. Johnston <\n> david.g.johnston@gmail.com> napsal:\n>\n>> Hackers,\n>>\n>> Bug # 16519 [1] is another report of confusion regarding trying to use\n>> parameters in improper locations - specifically the SET ROLE command within\n>> pl/pgsql. 
I'm re-attaching the doc patch and am adding it to the\n>> commitfest.\n>>\n>\n> I checked this patch, and I think so it is correct - my comments are just\n> about enhancing by some examples\n>\n>>\n>>\nThank you for the review.\n\nv2 attached.\n\nI added examples in the two places you noted.\n\nUpon re-reading, I decided that opening up the section by including\neverything then fitting in parameters with an exception for utility\ncommands (without previously/otherwise identifying them) forced some\nundesirable verbosity. Instead, I opened up with the utility commands as\nthe main body of non-result returning commands and then moved onto\ndelete/insert/update non-returning cases when the subsequent paragraph\nregarding parameters can then refer to the second class (by way of\nexcluding the first class). This seems to flow better, IMO.\n\nDavid J.", "msg_date": "Sun, 29 Nov 2020 20:24:02 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "po 30. 11. 2020 v 4:24 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> On Thu, Nov 26, 2020 at 12:49 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> čt 26. 11. 2020 v 6:41 odesílatel David G. Johnston <\n>> david.g.johnston@gmail.com> napsal:\n>>\n>>> Hackers,\n>>>\n>>> Bug # 16519 [1] is another report of confusion regarding trying to use\n>>> parameters in improper locations - specifically the SET ROLE command within\n>>> pl/pgsql. 
I'm re-attaching the doc patch and am adding it to the\n>>> commitfest.\n>>>\n>>\n>> I checked this patch, and I think so it is correct - my comments are just\n>> about enhancing by some examples\n>>\n>>>\n>>>\n> Thank you for the review.\n>\n> v2 attached.\n>\n> I added examples in the two places you noted.\n>\n> Upon re-reading, I decided that opening up the section by including\n> everything then fitting in parameters with an exception for utility\n> commands (without previously/otherwise identifying them) forced some\n> undesirable verbosity. Instead, I opened up with the utility commands as\n> the main body of non-result returning commands and then moved onto\n> delete/insert/update non-returning cases when the subsequent paragraph\n> regarding parameters can then refer to the second class (by way of\n> excluding the first class). This seems to flow better, IMO.\n>\n\nI have no objections, but maybe these pages are a little bit unclear\ngenerally, because the core of the problem is not described.\n\nPersonally I miss a description of the reason why variables cannot be used\n- the description \"variables cannot be used in statements without result\"\nis true, but it is not core information.\n\nThe important fact is usage of an execution plan or not. The statements\nwith an execution plan can be parametrized (DML - INSERT, UPDATE, DELETE),\nand SELECT. The statements without execution plans should not be\nparametrized. The only execution via execution plan executor allows\nparametrization. DDL statements are executed via utility execution, and\ntheir parameterization is not supported.\n\nRegards\n\nPavel\n\n\n\n\n> David J.\n>\n>\n
", "msg_date": "Mon, 30 Nov 2020 08:51:15 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "On Mon, Nov 30, 2020 at 12:51 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> po 30. 11. 2020 v 4:24 odesílatel David G. Johnston <\n> david.g.johnston@gmail.com> napsal:\n>\n>> On Thu, Nov 26, 2020 at 12:49 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>>\n>>>\n>>> čt 26. 11. 2020 v 6:41 odesílatel David G. Johnston <\n>>> david.g.johnston@gmail.com> napsal:\n>>>\n>>>> Hackers,\n>>>>\n>>>> Bug # 16519 [1] is another report of confusion regarding trying to use\n>>>> parameters in improper locations - specifically the SET ROLE command within\n>>>> pl/pgsql. I'm re-attaching the doc patch and am adding it to the\n>>>> commitfest.\n>>>>\n>>>\n>>> I checked this patch, and I think so it is correct - my comments are\n>>> just about enhancing by some examples\n>>>\n>>>>\n>>>>\n>> Thank you for the review.\n>>\n>> v2 attached.\n>>\n>> I added examples in the two places you noted.\n>>\n>> Upon re-reading, I decided that opening up the section by including\n>> everything then fitting in parameters with an exception for utility\n>> commands (without previously/otherwise identifying them) forced some\n>> undesirable verbosity. Instead, I opened up with the utility commands as\n>> the main body of non-result returning commands and then moved onto\n>> delete/insert/update non-returning cases when the subsequent paragraph\n>> regarding parameters can then refer to the second class (by way of\n>> excluding the first class). 
This seems to flow better, IMO.\n>>\n>\n> I have no objections, but maybe these pages are a little bit unclear\n> generally, because the core of the problem is not described.\n>\n>\n\n> Personally I miss a description of the reason why variables cannot be used\n> - the description \"variables cannot be used in statements without result\"\n> is true, but it is not core information.\n>\n\nIn the section \"executing commands that don't return results\" it does seem\nlike core information...but I get your point.\n\n\n> The important fact is usage of an execution plan or not.\n>\n\nThis is already mentioned in the linked-to section:\n\n\"Variable substitution currently works only in SELECT, INSERT, UPDATE, and\nDELETE commands, because the main SQL engine allows query parameters only\nin these commands. To use a non-constant name or value in other statement\ntypes (generically called utility statements), you must construct the\nutility statement as a string and EXECUTE it.\"\n\nI didn't feel the need to repeat that material in full in the \"no results\"\nsection. I left that pointing out the \"results\" dynamic there would be\nuseful since the original wording seemed to forget about the presence of\nutility commands altogether which was confusing for that section.\n\nDavid J.\n
", "msg_date": "Mon, 30 Nov 2020 08:06:28 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "po 30. 11. 2020 v 16:06 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> On Mon, Nov 30, 2020 at 12:51 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> po 30. 11. 2020 v 4:24 odesílatel David G. Johnston <\n>> david.g.johnston@gmail.com> napsal:\n>>\n>>> On Thu, Nov 26, 2020 at 12:49 AM Pavel Stehule <pavel.stehule@gmail.com>\n>>> wrote:\n>>>\n>>>>\n>>>>\n>>>> čt 26. 11. 2020 v 6:41 odesílatel David G. Johnston <\n>>>> david.g.johnston@gmail.com> napsal:\n>>>>\n>>>>> Hackers,\n>>>>>\n>>>>> Bug # 16519 [1] is another report of confusion regarding trying to use\n>>>>> parameters in improper locations - specifically the SET ROLE command within\n>>>>> pl/pgsql. I'm re-attaching the doc patch and am adding it to the\n>>>>> commitfest.\n>>>>>\n>>>>\n>>>> I checked this patch, and I think so it is correct - my comments are\n>>>> just about enhancing by some examples\n>>>>\n>>>>>\n>>>>>\n>>> Thank you for the review.\n>>>\n>>> v2 attached.\n>>>\n>>> I added examples in the two places you noted.\n>>>\n>>> Upon re-reading, I decided that opening up the section by including\n>>> everything then fitting in parameters with an exception for utility\n>>> commands (without previously/otherwise identifying them) forced some\n>>> undesirable verbosity. 
Instead, I opened up with the utility commands as\n>>> the main body of non-result returning commands and then moved onto\n>>> delete/insert/update non-returning cases when the subsequent paragraph\n>>> regarding parameters can then refer to the second class (by way of\n>>> excluding the first class). This seems to flow better, IMO.\n>>>\n>>\n>> I have no objections, but maybe these pages are a little bit unclear\n>> generally, because the core of the problem is not described.\n>>\n>>\n>\n>> Personally I miss a description of the reason why variables cannot be\n>> used - the description \"variables cannot be used in statements without\n>> result\" is true, but it is not core information.\n>>\n>\n> In the section \"executing commands that don't return results\" it does seem\n> like core information...but I get your point.\n>\n>\n>> The important fact is usage of an execution plan or not.\n>>\n>\n> This is already mentioned in the linked-to section:\n>\n> \"Variable substitution currently works only in SELECT, INSERT, UPDATE, and\n> DELETE commands, because the main SQL engine allows query parameters only\n> in these commands. To use a non-constant name or value in other statement\n> types (generically called utility statements), you must construct the\n> utility statement as a string and EXECUTE it.\"\n>\n> I didn't feel the need to repeat that material in full in the \"no results\"\n> section. I left that pointing out the \"results\" dynamic there would be\n> useful since the original wording seemed to forget about the presence of\n> utility commands altogether which was confusing for that section.\n>\n\nok\n\nPavel\n\n\n> David J.\n>\n>\n
", "msg_date": "Mon, 30 Nov 2020 16:37:02 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "On 11/30/20 10:37 AM, Pavel Stehule wrote:\n> po 30. 11. 2020 v 16:06 odesílatel David G. Johnston \n> \n> ok\nThis patch looks reasonable to me overall.\n\nA few comments:\n\n1) PL/SQL seems to be used in a few places where I believe PL/pgSQL is \nmeant. This was pre-existing but now seems like a good opportunity to \nfix it, unless I am misunderstanding.\n\n2) I think:\n\n+ makes the command behave like <command>SELECT</command>, which is \ndescribed\n\nflows a little better as:\n\n+ makes the command behave like <command>SELECT</command>, as described\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 9 Mar 2021 12:03:31 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "út 9. 3. 2021 v 18:03 odesílatel David Steele <david@pgmasters.net> napsal:\n\n> On 11/30/20 10:37 AM, Pavel Stehule wrote:\n> > po 30. 11. 2020 v 16:06 odesílatel David G. Johnston\n> >\n> > ok\n> This patch looks reasonable to me overall.\n>\n> A few comments:\n>\n> 1) PL/SQL seems to be used in a few places where I believe PL/pgSQL is\n> meant. 
This was pre-existing but now seems like a good opportunity to\n> fix it, unless I am misunderstanding.\n>\n\n+1\n\n\n> 2) I think:\n>\n> + makes the command behave like <command>SELECT</command>, which is\n> described\n>\n> flows a little better as:\n>\n> + makes the command behave like <command>SELECT</command>, as\n> described\n>\n\nI am not native speaker, so I am not able to evaluate it.\n\nRegards\n\nPavel\n\n\n> Regards,\n> --\n> -David\n> david@pgmasters.net\n>\n", "msg_date": "Tue, 9 Mar 2021 18:44:46 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> 1) PL/SQL seems to be used in a few places where I believe PL/pgSQL is\n> meant. 
This was pre-existing but now seems like a good opportunity to \n> fix it, unless I am misunderstanding.\n\nPL/SQL is Oracle's function language, which PL/pgSQL is modeled on.\nAt least some of the mentions of PL/SQL are probably intentional,\nso you'll have to look closely not just search-and-replace.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Mar 2021 13:08:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "On Tue, Mar 9, 2021 at 10:45 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> út 9. 3. 2021 v 18:03 odesílatel David Steele <david@pgmasters.net>\n> napsal:\n>\n>> On 11/30/20 10:37 AM, Pavel Stehule wrote:\n>> > po 30. 11. 2020 v 16:06 odesílatel David G. Johnston\n>> >\n>> > ok\n>> This patch looks reasonable to me overall.\n>>\n>> A few comments:\n>>\n>> 1) PL/SQL seems to be used in a few places where I believe PL/pgSQL is\n>> meant. This was pre-existing but now seems like a good opportunity to\n>> fix it, unless I am misunderstanding.\n>>\n>\n> +1\n>\n\nI vaguely recall looking for this back in October and not finding anything\nthat needed fixing in the area I was working in. The ready-for-commit can\nstand without further investigation. Feel free to look for and fix\noversights of this nature if you feel they exist.\n\n\n>\n>> 2) I think:\n>>\n>> + makes the command behave like <command>SELECT</command>, which is\n>> described\n>>\n>> flows a little better as:\n>>\n>> + makes the command behave like <command>SELECT</command>, as\n>> described\n>>\n>\n> I am not native speaker, so I am not able to evaluate it.\n>\n\n\"which is described\" is perfectly valid. 
I don't know that \"as described\"\nis materially better from a flow perspective (I agree it reads a tiny bit\nbetter) but either seems to adequately communicate the intended point so I\nwouldn't gripe if it was changed during commit.\n\nI intend to leave the patch as-is though since as written it is\ncommittable, this second comment is just style and the first is scope creep.\n\nDavid J.\n", "msg_date": "Tue, 9 Mar 2021 13:05:17 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "On 3/9/21 1:08 PM, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> 1) PL/SQL seems to be used in a few places where I believe PL/pgSQL is\n>> meant. This was pre-existing but now seems like a good opportunity to\n>> fix it, unless I am misunderstanding.\n> \n> PL/SQL is Oracle's function language, which PL/pgSQL is modeled on.\n> At least some of the mentions of PL/SQL are probably intentional,\n> so you'll have to look closely not just search-and-replace.\n\nAh, yes. That's what I get for just reading the patch and not looking at \nthe larger context.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 9 Mar 2021 19:28:47 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "I looked over the v2 patch. Parts of it seem like improvements but\nother parts definitely don't. In particular, I thought you introduced\na great deal of confusion in 43.5.2 (Executing a Command with No Result).\nThe statement that you can write a non-result-returning SQL command as-is\nis true in general, and ought not be confused with the question of whether\nyou can insert variable values into it. Also, starting with a spongy\ndefinition of \"utility command\" and then contrasting with that does not\nseem to me to add clarity.\n\nI attach a v3 that I like better, although there's room to disagree\nabout that. I've always felt that the separation between 43.5.2 and\n43.5.3 was rather artificial --- it's okay I guess for describing\nhow to handle command output, but we end up with considerable\nduplication when it comes to describing how to insert values into a\ncommand. 
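The contrast under discussion, a statement the planner can parameterize versus a utility command that must be built as a string and run with EXECUTE, can be sketched in PL/pgSQL as follows (the function, table, and role names here are hypothetical, not taken from the patch):

```plpgsql
CREATE OR REPLACE FUNCTION set_owner_and_role(p_role text) RETURNS void AS $$
BEGIN
    -- Optimizable statement: the main SQL engine accepts query parameters
    -- here, so the PL/pgSQL variable can be written directly and is
    -- substituted as a parameter.
    UPDATE accounts SET owner = p_role WHERE owner IS NULL;

    -- Utility statement: SET ROLE goes through utility processing, which
    -- accepts no query parameters, so writing "SET ROLE p_role" fails.
    -- Build the command as a string instead, quoting the identifier
    -- with format()'s %I, and EXECUTE it.
    EXECUTE format('SET ROLE %I', p_role);
END;
$$ LANGUAGE plpgsql;
```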
It's tempting to try re-splitting it to separate optimizable\nfrom non-optimizable statements; but maybe that'd just end with\ndifferent duplication.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 12 Mar 2021 15:08:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "Hi\n\npá 12. 3. 2021 v 21:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> I looked over the v2 patch. Parts of it seem like improvements but\n> other parts definitely don't. In particular, I thought you introduced\n> a great deal of confusion in 43.5.2 (Executing a Command with No Result).\n> The statement that you can write a non-result-returning SQL command as-is\n> is true in general, and ought not be confused with the question of whether\n> you can insert variable values into it. Also, starting with a spongy\n> definition of \"utility command\" and then contrasting with that does not\n> seem to me to add clarity.\n>\n> I attach a v3 that I like better, although there's room to disagree\n> about that. I've always felt that the separation between 43.5.2 and\n> 43.5.3 was rather artificial --- it's okay I guess for describing\n> how to handle command output, but we end up with considerable\n> duplication when it comes to describing how to insert values into a\n> command. It's tempting to try re-splitting it to separate optimizable\n> from non-optimizable statements; but maybe that'd just end with\n> different duplication.\n>\n\nI am not sure if people can understand the \"optimizable command\" term. More\ncommon categories are DML, DDL and SELECT. Maybe it is easier to say. DDL\nstatements don't support parametrizations, and then the variables cannot be\nused there.\n\n\n\n> regards, tom lane\n>\n>\n
", "msg_date": "Fri, 12 Mar 2021 21:17:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> pá 12. 3. 2021 v 21:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> I attach a v3 that I like better, although there's room to disagree\n>> about that.\n\n> I am not sure if people can understand the \"optimizable command\" term. More\n> common categories are DML, DDL and SELECT. Maybe it is easier to say. 
DDL\n> statements don't support parametrizations, and then the variables cannot be\n> used there.\n\nYeah, but DML/DDL is a pretty squishy separation as well, besides\nwhich it'd mislead people for cases such as CREATE TABLE AS SELECT.\n(Admittedly, I didn't mention that in my version either, but if you\nthink in terms of whether the optimizer will be applied then you\nwill draw the right conclusion.)\n\nMaybe there's no way out but to specifically list the statement types\nwe can insert query parameters in.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Mar 2021 15:36:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "pá 12. 3. 2021 v 21:36 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > pá 12. 3. 2021 v 21:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> I attach a v3 that I like better, although there's room to disagree\n> >> about that.\n>\n> > I am not sure if people can understand the \"optimizable command\" term.\n> More\n> > common categories are DML, DDL and SELECT. Maybe it is easier to say. DDL\n> > statements don't support parametrizations, and then the variables cannot\n> be\n> > used there.\n>\n> Yeah, but DML/DDL is a pretty squishy separation as well, besides\n> which it'd mislead people for cases such as CREATE TABLE AS SELECT.\n> (Admittedly, I didn't mention that in my version either, but if you\n> think in terms of whether the optimizer will be applied then you\n> will draw the right conclusion.)\n>\n\nCan it be better to use word planner instead of optimizer? 
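The test Tom describes, whether the optimizer will be applied, is also visible at the plain SQL level through PREPARE, which accepts parameters only for plannable statements (a sketch; the table name is hypothetical):

```sql
-- A plannable statement can be prepared with parameters:
PREPARE upd(text) AS
    UPDATE accounts SET owner = $1 WHERE owner IS NULL;

-- A utility statement cannot; this is rejected outright:
-- PREPARE sr(text) AS SET ROLE $1;   -- ERROR: syntax error
```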
An optimization\nis too common a word, and unfortunately a lot of people have no idea what\noptimization in SQL means.\n\nIt can be pretty hard, because the people that have problems here don't\nknow what is a plan or what is an optimization.\n\n\n> Maybe there's no way out but to specifically list the statement types\n> we can insert query parameters in.\n>\n\ncan be\n\n\n> regards, tom lane\n>\n
", "msg_date": "Fri, 12 Mar 2021 21:48:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "On Fri, Mar 12, 2021 at 1:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > pá 12. 3. 2021 v 21:08 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> I attach a v3 that I like better, although there's room to disagree\n> >> about that.\n>\n> > I am not sure if people can understand the \"optimizable command\" term.\n> More\n> > common categories are DML, DDL and SELECT. Maybe it is easier to say. DDL\n> > statements don't support parametrizations, and then the variables cannot\n> be\n> > used there.\n>\n> Yeah, but DML/DDL is a pretty squishy separation as well, besides\n> which it'd mislead people for cases such as CREATE TABLE AS SELECT.\n> (Admittedly, I didn't mention that in my version either, but if you\n> think in terms of whether the optimizer will be applied then you\n> will draw the right conclusion.)\n>\n\nRelated to an earlier email though, \"CREATE TABLE AS SELECT\" gets optimized\nbut \"COPY (SELECT) TO\" doesn't...\n\nDML/DDL has the merit of being chapters 5 and 6 in the documentation (with\n7 being SELECT).\n\nI do agree that the delineation of \"returns records or not\" is not ideal\nhere. SELECT, then INSERT/UPDATE/DELETE (due to their shared RETURNING\ndynamic), then \"DML commands\", then \"DMS exceptions\" (these last two\nideally leveraging the conceptual work noted above). That said, I do not\nthink this is such a big issue as to warrant that much of a rewrite. 
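The shared RETURNING dynamic mentioned here can be sketched as follows (the items table is hypothetical): with RETURNING, an INSERT produces a row result that INTO can capture, just as a SELECT would.

```plpgsql
DO $$
DECLARE
    new_id integer;
BEGIN
    -- Without RETURNING this INSERT produces no result rows; with
    -- RETURNING it behaves like a SELECT, so INTO can capture the value.
    INSERT INTO items (name) VALUES ('widget')
    RETURNING id INTO new_id;
    RAISE NOTICE 'inserted row %', new_id;
END
$$;
```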
But\nin lieu of that, and based upon responses given on the mailing lists,\n\"utility commands\" seems preferable to optimizable commands. Defining,\neither by name or by behavior, what utility commands are is needed though,\nideally outside of this chapter. Then a paragraph in the \"no result\"\nsection should list explicitly those utility commands that are an\nexception, since they have an attached SELECT statement that does get\noptimized.\n\nMaybe in Chapter 4, with some forward references, some of this can be\ncovered and the exceptions to the rule (like CREATE TABLE AS) can be\nmentioned.\n\nTo address your point about \"utility commands\", lacking an external\ndefinition to link to, I would change it to be \"everything except\nINSERT/UPDATE/DELETE, which are described below, as well as EXPLAIN and\nSELECT which are described in the next section\". From there I like my\nproposed flow into INSERT/UPDATE/DELETE w/o RETURNING, then from there the\nRETURNING pointing forward to these being SELECT-like in behavior.\n\nAdding a note about using EXECUTE works for me.\n\nCalling EXPLAIN a utility command seems incorrect given that it behaves\njust like a query. If it quacks like a duck...\n\nWhat other row set returning commands are you considering as being utility?\n\n\n> Maybe there's no way out but to specifically list the statement types\n> we can insert query parameters in.\n>\n\nIn the following I'm confused as to why \"column reference\" is specified\nsince those are not substituted:\n\n\"Parameters will only be substituted in places where a parameter or\ncolumn reference is syntactically allowed.\"\n\nI'm not married to my explicit calling out of identifiers not being\nsubstitutable but that does tend to be what people try to do.\n\nI'm good with the Pl/SQL wording proposal.\n\nDavid J.", "msg_date": "Fri, 12 Mar 2021 14:30:41 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> I do agree that the delineation of \"returns records or not\" is not ideal\n> here. SELECT, then INSERT/UPDATE/DELETE (due to their shared RETURNING\n> dynamic), then \"DML commands\", then \"DMS exceptions\" (these last two\n> ideally leveraging the conceptual work noted above). 
That said, I do not\n> think this is such a big issue as to warrant that much of a rewrite.\n\nI took a stab at doing that, just to see what it might look like.\nI thought it comes out pretty well, really -- see what you think.\n\n(This still uses the terminology \"optimizable statement\", but I'm open\nto replacing that with something else.)\n\n> In the following I'm confused as to why \"column reference\" is specified\n> since those are not substituted:\n> \"Parameters will only be substituted in places where a parameter or\n> column reference is syntactically allowed.\"\n\nThe meaning of \"column reference\" there is, I think, a reference to\na column of a table being read by a query. In the counterexample\nof \"INSERT INTO mytable (col) ...\", \"col\" cannot be replaced by a\ndata value. But in \"INSERT INTO mytable (col) SELECT foo FROM bar\",\n\"foo\" is a candidate for replacement, even though it's likely meant\nas a reference to bar.foo.\n\n> I'm not married to my explicit calling out of identifiers not being\n> substitutable but that does tend to be what people try to do.\n\nThe problem I had with it was that it didn't help clarify this\ndistinction. I'm certainly open to changes that do clarify that.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 12 Mar 2021 16:40:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" }, { "msg_contents": "I wrote:\n> I took a stab at doing that, just to see what it might look like.\n> I thought it comes out pretty well, really -- see what you think.\n\nHearing nothing further, I pushed that after another round of\ncopy-editing. 
There's still plenty of time to revise it if\nanybody has further comments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Mar 2021 13:11:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Minor variable related cleanup and rewording of\n plpgsql docs" } ]
[ { "msg_contents": "Hackers,\n\nMoving this over from -general [1]\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKFQuwaM1K%3DprJNwKnoaC2AyDFn-7OvtCpmQ23bcVe5Z%3DLKA3Q%40mail.gmail.com", "msg_date": "Wed, 21 Oct 2020 08:29:26 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "[patch] [doc] Clarify temporary table name shadowing in CREATE TABLE" }, { "msg_contents": "On Wed, Oct 21, 2020 at 5:29 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> Hackers,\n>\n> Moving this over from -general [1]\n\n\nApplied, thanks!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 2 Nov 2020 15:02:03 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [patch] [doc] Clarify temporary table name shadowing in CREATE\n TABLE" } ]
[ { "msg_contents": "While working on commit 85c54287a, I noticed a few things I did not\nmuch care for in do_connect(). These don't quite seem to rise to\nthe level of back-patchable bugs, but they're still not great:\n\n* The initial stanza that complains about\n\n\tif (!o_conn && (!dbname || !user || !host || !port))\n\nseems woefully obsolete. In the first place, it's pretty silly\nto equate a \"complete connection specification\" with having just\nthose four values; the whole point of 85c54287a and predecessors\nis that other settings such as sslmode may be just as important.\nIn the second place, this fails to consider the possibility that\nwe only have a connstring parameter --- which may nonetheless\nprovide all the required settings. And in the third place,\nthis clearly wasn't revisited when we added explicit control of\nwhether or not we're supposed to re-use parameters from the old\nconnection. It's very silly to insist on having an o_conn if we're\ngoing to ignore it anyway.\n\nI think the reason we've not had complaints about this is that the\nsituation normally doesn't arise in interactive sessions (since we\nwon't release the old connection voluntarily), while scripts are\nlikely not designed to cope with connection losses anyway. These\nfacts militate against spending a whole lot of effort on a fix,\nbut still we ought to reduce the silliness factor. What I propose\nis to complain if we have no o_conn *and* we are asked to re-use\nparameters from it. Otherwise, it's fine.\n\n* I really don't like the bit about silently ignoring user, host,\nand port parameters if we see that the first parameter is a connstring.\nThat's as user-unfriendly as can be. It should be a syntax error\nto specify both; the documentation certainly implies that it is.\n\n* The old-style-syntax code path understands that it should re-use\nthe old password (if any) when the user, host, and port settings\nhaven't changed. 
The connstring code path was too lazy to make\nthat work, but now that we're deconstructing the connstring there's\nvery little excuse for not having it act the same way.\n\nThe attached patch fixes these things and documents the password\nbehavior, which for some reason went unmentioned before. Along\nthe way I simplified the mechanism for re-using a password a bit;\nthere's no reason to treat it so much differently from re-using\nother parameters.\n\nAny objections?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 21 Oct 2020 18:59:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Mop-up around psql's \\connect behavior" }, { "msg_contents": "On 10/21/20 18:59, Tom Lane wrote:\n\n> I think the reason we've not had complaints about this is that the\n> situation normally doesn't arise in interactive sessions (since we\n> won't release the old connection voluntarily), while scripts are\n> likely not designed to cope with connection losses anyway. These\n> facts militate against spending a whole lot of effort on a fix,\n> but still we ought to reduce the silliness factor. What I propose\n> is to complain if we have no o_conn *and* we are asked to re-use\n> parameters from it. Otherwise, it's fine.\n\nI've been getting around it just by saying\n\n \\c \"connstring\" . . .\n\nwhich works. It gives me a tiny thrill every time I do it, like I'm\ngetting away with something. Which is why I haven't been complaining.\n\nI suppose I wouldn't complain if it were fixed, either.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 21 Oct 2020 19:04:49 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Mop-up around psql's \\connect behavior" }, { "msg_contents": "At Wed, 21 Oct 2020 18:59:04 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> While working on commit 85c54287a, I noticed a few things I did not\n> much care for in do_connect(). 
These don't quite seem to rise to\n> the level of back-patchable bugs, but they're still not great:\n> \n> * The initial stanza that complains about\n> \n> \tif (!o_conn && (!dbname || !user || !host || !port))\n> \n> seems woefully obsolete. In the first place, it's pretty silly\n> to equate a \"complete connection specification\" with having just\n> those four values; the whole point of 85c54287a and predecessors\n> is that other settings such as sslmode may be just as important.\n> In the second place, this fails to consider the possibility that\n> we only have a connstring parameter --- which may nonetheless\n> provide all the required settings. And in the third place,\n> this clearly wasn't revisited when we added explicit control of\n> whether or not we're supposed to re-use parameters from the old\n> connection. It's very silly to insist on having an o_conn if we're\n> going to ignore it anyway.\n\nSounds reasonable.\n\n> I think the reason we've not had complaints about this is that the\n> situation normally doesn't arise in interactive sessions (since we\n> won't release the old connection voluntarily), while scripts are\n> likely not designed to cope with connection losses anyway. These\n> facts militate against spending a whole lot of effort on a fix,\n> but still we ought to reduce the silliness factor. What I propose\n> is to complain if we have no o_conn *and* we are asked to re-use\n> parameters from it. Otherwise, it's fine.\n\nThe reason I haven't complained about this is I don't reconnect by \\c\nafter involuntary disconnection. (That is, C-d then psql again:p) But\nonce it got on my mind, it might be strange that just \\c or \\c\n-reuse-previous=y doesn't reconnect a broken session. 
It might be\nbetter we reuse the previous connection parameter even if the\nconnection has been lost, but this would be another issue.\n\n> * I really don't like the bit about silently ignoring user, host,\n> and port parameters if we see that the first parameter is a connstring.\n> That's as user-unfriendly as can be. It should be a syntax error\n> to specify both; the documentation certainly implies that it is.\n\n+1\n\n> * The old-style-syntax code path understands that it should re-use\n> the old password (if any) when the user, host, and port settings\n> haven't changed. The connstring code path was too lazy to make\n> that work, but now that we're deconstructing the connstring there's\n> very little excuse for not having it act the same way.\n\n+1 (I thought sslmode might affect but that is wrong since cert\nauthentication cannot be turned off from command line.)\n\n> The attached patch fixes these things and documents the password\n> behavior, which for some reason went unmentioned before. Along\n> the way I simplified the mechanism for re-using a password a bit;\n> there's no reason to treat it so much differently from re-using\n> other parameters.\n\nLooks fine.\n\n> Any objections?\n\nNope from me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Oct 2020 12:05:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Mop-up around psql's \\connect behavior" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Wed, 21 Oct 2020 18:59:04 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> ... What I propose\n>> is to complain if we have no o_conn *and* we are asked to re-use\n>> parameters from it. Otherwise, it's fine.\n\n> The reason I haven't complained about this is I don't reconnect by \\c\n> after involuntary disconnection. 
(That is, C-d then psql again:p)\n\nYeah, me too.\n\n> But once it got on my mind, it might be strange that just \\c or \\c\n> -reuse-previous=y doesn't reconnect a broken session. It might be\n> better we reuse the previous connection parameter even if the\n> connection has been lost, but this would be another issue.\n\nI did actually look into saving the active connection's PQconninfo\nimmediately at connection establishment and then referring to it in any\nsubsequent \\connect. Then things could work the same even if the original\nconnection had failed meanwhile. But there are technical details that\nmake that a lot harder than it seems on the surface --- mainly, that the\nway do_connect works now requires that it have a copy of the PQconninfo\ndata that it can scribble on, and that won't do if we need the saved\nPQconninfo to persist when a \\connect attempt fails. That could be dealt\nwith with enough new code, but I didn't think it was worth the trouble.\n(Note that we developers face the server-crashed scenario a whole lot more\noften than normal people ;-), so we probably overrate how useful it'd be\nto be able to reconnect in that case.)\n\n>> * The old-style-syntax code path understands that it should re-use\n>> the old password (if any) when the user, host, and port settings\n>> haven't changed. The connstring code path was too lazy to make\n>> that work, but now that we're deconstructing the connstring there's\n>> very little excuse for not having it act the same way.\n\n> +1 (I thought sslmode might affect but that is wrong since cert\n> authentication cannot be turned off from command line.)\n\nYeah. That could affect whether the server asks for a password at\nall, but you have to really stretch to think of cases where it could\nresult in needing a *different* password. 
An example perhaps is\nwhere pg_hba.conf is configured to do, say, LDAP auth on SSL connections\nand normal password auth on non-SSL, and the LDAP server has a different\npassword than what is in pg_authid. But that seems like something nobody\ncould want. Also notice that unlike the previous code, with my patch\ndo_connect will always (barring --no-password) give you an opportunity\nto interactively supply a password, even if we initially reused an\nold password and it didn't work. So it seems like this will cope\neven if you do have a setup as wacko as that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 00:34:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mop-up around psql's \\connect behavior" }, { "msg_contents": "At Thu, 22 Oct 2020 00:34:20 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Wed, 21 Oct 2020 18:59:04 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > But once it got on my mind, it might be strange that just \\c or \\c\n> > -reuse-previous=y doesn't reconnect a broken session. It might be\n> > better we reuse the previous connection parameter even if the\n> > connection has been lost, but this would be another issue.\n> \n> I did actually look into saving the active connection's PQconninfo\n> immediately at connection establishment and then referring to it in any\n> subsequent \\connect. Then things could work the same even if the original\n> connection had failed meanwhile. But there are technical details that\n> make that a lot harder than it seems on the surface --- mainly, that the\n> way do_connect works now requires that it have a copy of the PQconninfo\n> data that it can scribble on, and that won't do if we need the saved\n> PQconninfo to persist when a \\connect attempt fails. 
That could be dealt\n> with with enough new code, but I didn't think it was worth the trouble.\n\nAgreed.\n\n> (Note that we developers face the server-crashed scenario a whole lot more\n> often than normal people ;-), so we probably overrate how useful it'd be\n> to be able to reconnect in that case.)\n\nAgreed^2.\n\n> >> * The old-style-syntax code path understands that it should re-use\n> >> the old password (if any) when the user, host, and port settings\n> >> haven't changed. The connstring code path was too lazy to make\n> >> that work, but now that we're deconstructing the connstring there's\n> >> very little excuse for not having it act the same way.\n> \n> > +1 (I thought sslmode might affect but that is wrong since cert\n> > authenticaion cannot be turned off from command line.)\n> \n> Yeah. That could affect whether the server asks for a password at\n> all, but you have to really stretch to think of cases where it could\n> result in needing a *different* password. An example perhaps is\n> where pg_hba.conf is configured to do, say, LDAP auth on SSL connections\n> and normal password auth on non-SSL, and the LDAP server has a different\n> password than what is in pg_authid. But that seems like something nobody\n> could want. Also notice that unlike the previous code, with my patch\n> do_connect will always (barring --no-password) give you an opportunity\n> to interactively supply a password, even if we initially reused an\n> old password and it didn't work. So it seems like this will cope\n> even if you do have a setup as wacko as that.\n\nI thought of that scenarios and conclused as the same. 
Sounds\nreasonable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Oct 2020 14:26:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Mop-up around psql's \\connect behavior" }, { "msg_contents": "I wrote:\n> I did actually look into saving the active connection's PQconninfo\n> immediately at connection establishment and then referring to it in any\n> subsequent \\connect. Then things could work the same even if the original\n> connection had failed meanwhile. But there are technical details that\n> make that a lot harder than it seems on the surface --- mainly, that the\n> way do_connect works now requires that it have a copy of the PQconninfo\n> data that it can scribble on, and that won't do if we need the saved\n> PQconninfo to persist when a \\connect attempt fails. That could be dealt\n> with with enough new code, but I didn't think it was worth the trouble.\n\nActually ... I'd no sooner pushed that patch than I realized that there\n*is* an easy, if rather grotty, way to deal with this. We can just not\nissue PQfinish on the old/dead connection until we've successfully made\na new one. PQconninfo doesn't care if the connection is in BAD state.\n\nTo avoid introducing weird behaviors, we can't keep the logically-dead\nconnection in pset.db, but introducing a separate variable to hold such\na connection doesn't seem too awful. So that leads me to the attached\npatch, which is able to reconnect even if we lost the connection:\n\nregression=# select 1;\n ?column? \n----------\n 1\n(1 row)\n\n-- in another window, stop the server, then:\n\nregression=# select 1;\nFATAL: terminating connection due to administrator command\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n\n--- now restart the server, and:\n\n!?> \\c\nYou are now connected to database \"regression\" as user \"postgres\" via socket in \"/tmp\" at port \"5432\".\n\nI would not have wanted to accept a patch that did it the other way,\nbecause it would have been a mess, but this seems small enough to\nbe worth doing. The only real objection I can see is that it could\nhold a server connection open when the user thinks there is none;\nbut that could only happen in a non-interactive script, and it does\nnot seem like a big problem in that case. We could alternatively\nnot stash the \"dead\" connection after a non-interactive \\connect\nfailure, but I doubt that's better.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 22 Oct 2020 15:23:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mop-up around psql's \\connect behavior" }, { "msg_contents": "At Thu, 22 Oct 2020 15:23:04 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> I wrote:\n> > I did actually look into saving the active connection's PQconninfo\n> > immediately at connection establishment and then referring to it in any\n> > subsequent \\connect. Then things could work the same even if the original\n> > connection had failed meanwhile. But there are technical details that\n> > make that a lot harder than it seems on the surface --- mainly, that the\n> > way do_connect works now requires that it have a copy of the PQconninfo\n> > data that it can scribble on, and that won't do if we need the saved\n> > PQconninfo to persist when a \\connect attempt fails. That could be dealt\n> > with with enough new code, but I didn't think it was worth the trouble.\n> \n> Actually ... I'd no sooner pushed that patch than I realized that there\n> *is* an easy, if rather grotty, way to deal with this. We can just not\n> issue PQfinish on the old/dead connection until we've successfully made\n> a new one. 
PQconninfo doesn't care if the connection is in BAD state.\n> \n> To avoid introducing weird behaviors, we can't keep the logically-dead\n> connection in pset.db, but introducing a separate variable to hold such\n> a connection doesn't seem too awful. So that leads me to the attached\n> patch, which is able to reconnect even if we lost the connection:\n\nSounds reasonable.\n\n> regression=# select 1;\n> ?column? \n> ----------\n> 1\n> (1 row)\n> \n> -- in another window, stop the server, then:\n> \n> regression=# select 1;\n> FATAL: terminating connection due to administrator command\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> \n> --- now restart the server, and:\n> \n> !?> \\c\n> You are now connected to database \"regression\" as user \"postgres\" via socket in \"/tmp\" at port \"5432\".\n\nLooks good to me. I'm very happy with the result.\n\n> I would not have wanted to accept a patch that did it the other way,\n> because it would have been a mess, but this seems small enough to\n> be worth doing. The only real objection I can see is that it could\n> hold a server connection open when the user thinks there is none;\n> but that could only happen in a non-interactive script, and it does\n> not seem like a big problem in that case. We could alternatively\n> not stash the \"dead\" connection after a non-interactive \\connect\n> failure, but I doubt that's better.\n\nAgreed. 
Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Oct 2020 14:28:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Mop-up around psql's \\connect behavior" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Thu, 22 Oct 2020 15:23:04 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> ... The only real objection I can see is that it could\n>> hold a server connection open when the user thinks there is none;\n>> but that could only happen in a non-interactive script, and it does\n>> not seem like a big problem in that case. We could alternatively\n>> not stash the \"dead\" connection after a non-interactive \\connect\n>> failure, but I doubt that's better.\n\n> Agreed. Thanks!\n\nAfter further thought I decided we *must* do it as per my \"alternative\"\nidea. Consider a script containing\n\t\\c db1 user1 live_server\n\t\\c db2 user2 dead_server\n\t\\c db3\nThe script would be expecting to connect to db3 at dead_server, but\nif we re-use parameters from the first connection then it might\nsuccessfully connect to db3 at live_server. This'd defeat the goal\nof not letting a script accidentally execute commands against the\nwrong database.\n\nSo we have to not save the connection after a failed script \\connect.\nHowever, it seems OK to save after a connection loss whether we're\nin a script or not; that is,\n\n\t\\c db1 user1 server1\n\t...\n\t(connection dies here)\n\t... 
--- these commands will fail\n\t\\c db2\n\nThe script will be expecting the second \\c to re-use parameters\nfrom the first one, and that will still work as expected.\n\nI went ahead and pushed it after adjusting that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 17:12:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mop-up around psql's \\connect behavior" }, { "msg_contents": "At Fri, 23 Oct 2020 17:12:44 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Thu, 22 Oct 2020 15:23:04 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> >> ... The only real objection I can see is that it could\n> >> hold a server connection open when the user thinks there is none;\n> >> but that could only happen in a non-interactive script, and it does\n> >> not seem like a big problem in that case. We could alternatively\n> >> not stash the \"dead\" connection after a non-interactive \\connect\n> >> failure, but I doubt that's better.\n> \n> > Agreed. Thanks!\n> \n> After further thought I decided we *must* do it as per my \"alternative\"\n> idea. Consider a script containing\n> \t\\c db1 user1 live_server\n> \t\\c db2 user2 dead_server\n> \t\\c db3\n> The script would be expecting to connect to db3 at dead_server, but\n> if we re-use parameters from the first connection then it might\n> successfully connect to db3 at live_server. This'd defeat the goal\n> of not letting a script accidentally execute commands against the\n> wrong database.\n\nHmm. True.\n\n> So we have to not save the connection after a failed script \\connect.\n\nYes, we shouldn't save a connection parameters that haven't made a\nconnection.\n\n> However, it seems OK to save after a connection loss whether we're\n> in a script or not; that is,\n> \n> \t\\c db1 user1 server1\n> \t...\n> \t(connection dies here)\n> \t... 
--- these commands will fail\n> \t\\c db2\n> \n> The script will be expecting the second \\c to re-use parameters\n> from the first one, and that will still work as expected.\n\nAgreed.\n\n> I went ahead and pushed it after adjusting that.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 26 Oct 2020 09:51:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Mop-up around psql's \\connect behavior" } ]
[ { "msg_contents": "Background:\nWe have an installer for our application as part of which we are planning to include archive\npostgresql-13.0-1-windows-x64-binaries.zip which will be extracted along with the installation of our application.\n\nWhen the archive is extracted the folder's permission will belong to the current user who is installing our application (who will belong to the administrators group).\nThis is contradictory to the approach followed by enterprise db installer where a separate user \"postgres\" will own the folders and the process created.\nWe want to simplify the installation approach as many times we are hitting permission issues while using the separate \"postgres\" user created.\nThat's why we want to make the current user own the postgres and services.\n\nNote: This is not about the \"postgres\" database user account.\n\nFollowing are the queries with regards to the approach:\n\n 1. Will having an administrator user own the postgres installation and data directory cause any side effects?\n 2. The postgres process also will be owned by a user of the administrator group. 
Will it cause any issues from security perspective?\n\nRegards,\nJoel", "msg_date": "Thu, 22 Oct 2020 04:46:04 +0000", "msg_from": "\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com>", "msg_from_op": true, "msg_subject": "User accounts on windows" } ]
[ { "msg_contents": "As discussed in [0], here are patches to move the system catalog toast \ntable and index declarations from catalog/toasting.h and \ncatalog/indexing.h to the respective parent tables' catalog/pg_*.h \nfiles. I think it's clearly better to have everything together like this.\n\nThe original reason for having it split was that the old genbki system \nproduced the output in the order of the catalog files it read, so all \nthe toasting and indexing stuff needed to come separately. But this is \nno longer the case.\n\nThe resulting postgres.bki file has some ordering differences *within* \nthe toast and index groups, but these should not be significant. (It's \nbasically done in the order of the parent catalogs now rather than \nwhatever the old file order was.)\n\nIn this patch set, I moved the DECLARE_* lines as is. In the discussion \n[0] some ideas were floated for altering or tweaking these things, but I \nsuggest that can be undertaken as a separate patch set.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/20201006201549.em2meighuapttl7n%40alap3.anarazel.de\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 22 Oct 2020 12:21:11 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Move catalog toast table and index declarations" }, { "msg_contents": "On Thu, Oct 22, 2020 at 6:21 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n> [v1]\n\nHi Peter,\n\nThis part created a syntax error:\n\n--- a/src/include/catalog/unused_oids\n+++ b/src/include/catalog/unused_oids\n@@ -28,7 +28,7 @@ chdir $FindBin::RealBin or die \"could not cd to\n$FindBin::RealBin: $!\\n\";\n use lib \"$FindBin::RealBin/../../backend/catalog/\";\n use Catalog;\n\n-my @input_files = (glob(\"pg_*.h\"), qw(indexing.h));\n+my @input_files = (glob(\"pg_*.h\");\n\nStyle: In genbki.h, \"extern int no_such_variable\" 
is now out of place.\nAlso, the old comments like \"The macro definitions are just to keep the C\ncompiler from spitting up.\" are now redundant in their new setting.\n\nThe rest looks good to me. unused_oids (once fixed), duplicate_oids, and\nrenumber_oids.pl seem to work fine.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company
", "msg_date": "Sat, 24 Oct 2020 09:23:56 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Move catalog toast table and index declarations" }, { "msg_contents": "On 2020-10-24 15:23, John Naylor wrote:\n> This part created a syntax error:\n> \n> --- a/src/include/catalog/unused_oids\n> +++ b/src/include/catalog/unused_oids\n> @@ -28,7 +28,7 @@ chdir $FindBin::RealBin or die \"could not cd to \n> $FindBin::RealBin: $!\\n\";\n>  use lib \"$FindBin::RealBin/../../backend/catalog/\";\n>  use Catalog;\n> \n> -my @input_files = (glob(\"pg_*.h\"), qw(indexing.h));\n> +my @input_files = (glob(\"pg_*.h\");\n\nOK, fixing that.\n\n> Style: In genbki.h, \"extern int no_such_variable\" is now out of place. \n> Also, the old comments like \"The macro definitions are just to keep the \n> C compiler from spitting up.\" are now redundant in their new setting.\n\nHmm, I don't really see what's wrong there. 
Do you mean the macro \ndefinitions should be different, or the comments are wrong, or something \nelse?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 27 Oct 2020 12:42:58 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Move catalog toast table and index declarations" }, { "msg_contents": "On Tue, Oct 27, 2020 at 7:43 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-10-24 15:23, John Naylor wrote:\n> > Style: In genbki.h, \"extern int no_such_variable\" is now out of place.\n> > Also, the old comments like \"The macro definitions are just to keep the\n> > C compiler from spitting up.\" are now redundant in their new setting.\n>\n> Hmm, I don't really see what's wrong there. Do you mean the macro\n> definitions should be different, or the comments are wrong, or something\n> else?\n>\n\nThere's nothing wrong; it's just a minor point of consistency. For the\nfirst part, I mean defined symbols in this file that are invisible to the C\ncompiler are written\n\n#define SOMETHING()\n\nIf some are written\n\n#define SOMETHING() extern int no_such_variable\n\nI imagine some future reader will wonder why there's a difference.\n\nAs for the comments, the entire file is for things meant for scripts to\nread, but have to be put in macro form to be invisible to the compiler. 
The\nheader comment has\n\n\"genbki.h defines CATALOG(), BKI_BOOTSTRAP and related macros\n * so that the catalog header files can be read by the C compiler.\"\n\nI'm just saying we don't need to carry over the comments I mentioned from\nthe toasting/indexing headers that were specially for those macros.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company
", "msg_date": "Tue, 27 Oct 2020 08:12:11 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Move catalog toast table and index declarations" }, { "msg_contents": "On 2020-10-27 13:12, John Naylor wrote:\n> There's nothing wrong; it's just a minor point of consistency. For the \n> first part, I mean defined symbols in this file that are invisible to \n> the C compiler are written\n> \n> #define SOMETHING()\n> \n> If some are written\n> \n> #define SOMETHING() extern int no_such_variable\n> \n> I imagine some future reader will wonder why there's a difference.\n\nThe difference is that CATALOG() is followed in actual use by something like\n\n     { ... } FormData_pg_attribute;\n\nso it becomes a valid C statement. For DECLARE_INDEX() etc., we need to \ndo something else to make it valid. I guess this could be explained in \nmore detail (as I'm attempting in this email), but this isn't materially \nchanged by this patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Nov 2020 09:24:08 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Move catalog toast table and index declarations" }, { "msg_contents": "On Thu, Nov 5, 2020 at 4:24 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-10-27 13:12, John Naylor wrote:\n> > There's nothing wrong; it's just a minor point of consistency. 
For the\n> > first part, I mean defined symbols in this file that are invisible to\n> > the C compiler are written\n> >\n> > #define SOMETHING()\n> >\n> > If some are written\n> >\n> > #define SOMETHING() extern int no_such_variable\n> >\n> > I imagine some future reader will wonder why there's a difference.\n>\n> The difference is that CATALOG() is followed in actual use by something\n> like\n>\n>      { ... } FormData_pg_attribute;\n>\n> so it becomes a valid C statement. For DECLARE_INDEX() etc., we need to\n> do something else to make it valid. I guess this could be explained in\n> more detail (as I'm attempting in this email), but this isn't materially\n> changed by this patch.\n>\n\nI think we're talking past eachother. Here's a concrete example:\n\n#define BKI_ROWTYPE_OID(oid,oidmacro)\n#define DECLARE_TOAST(name,toastoid,indexoid) extern int no_such_variable\n\nI understand these to be functionally equivalent as far as what the C\ncompiler sees. If not, I'd be curious to know what the difference is. I was\nthinking this is just a random style difference, and if so, they should be\nthe same now that they're in the same file together:\n\n#define BKI_ROWTYPE_OID(oid,oidmacro)\n#define DECLARE_TOAST(name,toastoid,indexoid)\n\nAnd yes, this doesn't materially change the patch, it's just nitpicking :-)\n. Materially, I believe it's fine.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company
", "msg_date": "Thu, 5 Nov 2020 07:59:03 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Move catalog toast table and index declarations" }, { "msg_contents": "On 2020-11-05 12:59, John Naylor wrote:\n> I think we're talking past eachother. 
Here's a concrete example:\n> \n> #define BKI_ROWTYPE_OID(oid,oidmacro)\n> #define DECLARE_TOAST(name,toastoid,indexoid) extern int no_such_variable\n> \n> I understand these to be functionally equivalent as far as what the C \n> compiler sees.\n\nThe issue is that you can't have a bare semicolon at the top level of a \nC compilation unit, at least on some compilers. So doing\n\n#define FOO(stuff) /*empty*/\n\nand then\n\nFOO(123);\n\nwon't work. You need to fill the definition of FOO with some stuff to \nmake it valid.\n\nBKI_ROWTYPE_OID on the other hand is not used at the top level like \nthis, so it can be defined to empty.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Nov 2020 19:20:06 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Move catalog toast table and index declarations" }, { "msg_contents": "On Thu, Nov 5, 2020 at 2:20 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-11-05 12:59, John Naylor wrote:\n> > I think we're talking past eachother. Here's a concrete example:\n> >\n> > #define BKI_ROWTYPE_OID(oid,oidmacro)\n> > #define DECLARE_TOAST(name,toastoid,indexoid) extern int no_such_variable\n> >\n> > I understand these to be functionally equivalent as far as what the C\n> > compiler sees.\n>\n> The issue is that you can't have a bare semicolon at the top level of a\n> C compilation unit, at least on some compilers. So doing\n>\n> #define FOO(stuff) /*empty*/\n>\n> and then\n>\n> FOO(123);\n>\n> won't work. 
You need to fill the definition of FOO with some stuff to\n> make it valid.\n>\n> BKI_ROWTYPE_OID on the other hand is not used at the top level like\n> this, so it can be defined to empty.\n>\n\nThank you for the explanation.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 6 Nov 2020 09:00:52 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Move catalog toast table and index declarations" }, { "msg_contents": "On 2020-11-05 12:59, John Naylor wrote:\n> And yes, this doesn't materially change the patch, it's just nitpicking \n> :-) . Materially, I believe it's fine.\n\nOK, committed.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Sat, 7 Nov 2020 12:33:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Move catalog toast table and index declarations" } ]
[ { "msg_contents": "Hi\n\n From time to time I find myself in a situation where it would be very useful to\nbe able to programatically determine whether a particular library is included in\n\"shared_preload_libraries\", which accepts a comma-separated list of values.\n\nUnfortunately it's not as simple as splitting the list on the commas, as while\nthat will *usually* work, the following is also valid:\n\n shared_preload_libraries = 'foo,bar,\"baz ,\"'\n\nand reliably splitting it up into its constituent parts would mean re-inventing\na wheel (and worse possibly introducing some regular expressions into the\nprocess, cf. https://xkcd.com/1171/ ).\n\nNow, while it's highly unlikely someone will go to the trouble of creating a\nlibrary name with commas and spaces in it, \"highly unlikely\" is not the same as\n\"will definitely never ever happen\". So it would be very handy to be able to use\nthe same function PostgreSQL uses internally (\"SplitDirectoriesString()\") to\nproduce the guaranteed same result.\n\nAttached patch provides a new function \"pg_setting_value_split()\" which does\nexactly this, i.e. called with a string containing such a list, it calls\n\"SplitDirectoriesString()\" and returns the result as a set of text, e.g.:\n\n postgres# SELECT setting FROM pg_setting_value_split('foo,bar,\"baz ,\"');\n\n setting\n ---------\n foo\n bar\n baz ,\n (3 rows)\n\nthough a more likely use would be:\n\n SELECT setting FROM\npg_setting_value_split(current_setting('shared_preload_libraries'));\n\nOther GUCs this applies to:\n\n - local_preload_libraries\n - session_preload_libraries\n - unix_socket_directories\n\nI will add this to the next CF.\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Fri, 23 Oct 2020 09:53:29 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "proposal: function pg_setting_value_split() to parse\n shared_preload_libraries etc." 
}, { "msg_contents": "2020年10月23日(金) 9:53 Ian Lawrence Barwick <barwick@gmail.com>:\n>\n> Hi\n>\n> From time to time I find myself in a situation where it would be very useful to\n> be able to programatically determine whether a particular library is included in\n> \"shared_preload_libraries\", which accepts a comma-separated list of values.\n>\n> Unfortunately it's not as simple as splitting the list on the commas, as while\n> that will *usually* work, the following is also valid:\n>\n> shared_preload_libraries = 'foo,bar,\"baz ,\"'\n>\n> and reliably splitting it up into its constituent parts would mean re-inventing\n> a wheel (and worse possibly introducing some regular expressions into the\n> process, cf. https://xkcd.com/1171/ ).\n>\n> Now, while it's highly unlikely someone will go to the trouble of creating a\n> library name with commas and spaces in it, \"highly unlikely\" is not the same as\n> \"will definitely never ever happen\". So it would be very handy to be able to use\n> the same function PostgreSQL uses internally (\"SplitDirectoriesString()\") to\n> produce the guaranteed same result.\n>\n> Attached patch provides a new function \"pg_setting_value_split()\" which does\n> exactly this, i.e. 
called with a string containing such a list, it calls\n> \"SplitDirectoriesString()\" and returns the result as a set of text, e.g.:\n>\n> postgres# SELECT setting FROM pg_setting_value_split('foo,bar,\"baz ,\"');\n>\n> setting\n> ---------\n> foo\n> bar\n> baz ,\n> (3 rows)\n>\n> though a more likely use would be:\n>\n> SELECT setting FROM\n> pg_setting_value_split(current_setting('shared_preload_libraries'));\n>\n> Other GUCs this applies to:\n>\n> - local_preload_libraries\n> - session_preload_libraries\n> - unix_socket_directories\n\nHaving just submitted this, I realised I'm focussing on the GUCs which call\n\"SplitDirectoriesString()\" (as my specific uses case is for\n\"shared_preload_libraries\")\nbut the patch does not consider the other GUC_LIST_INPUT settings, which\ncall \"SplitIdentifierString()\", so as is, it might produce unexpected\nresults for those.\n\nI'll rethink and submit an updated version.\n\n\nRegards\n\nIan Barwick\n\n\n", "msg_date": "Fri, 23 Oct 2020 11:06:08 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: function pg_setting_value_split() to parse\n shared_preload_libraries etc." }, { "msg_contents": "On 23.10.2020 05:06, Ian Lawrence Barwick wrote:\n> Having just submitted this, I realised I'm focussing on the GUCs which call\n> \"SplitDirectoriesString()\" (as my specific uses case is for\n> \"shared_preload_libraries\")\n> but the patch does not consider the other GUC_LIST_INPUT settings, which\n> call \"SplitIdentifierString()\", so as is, it might produce unexpected\n> results for those.\n>\n> I'll rethink and submit an updated version.\n\nStatus update for a commitfest entry.\n\nThis entry was \"Waiting on author\" during this CF. 
As I see, the patch \nneeds more work before review, so I changed it to \"Withdrawn\".\nFeel free to resubmit an updated version to a future commitfest.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Sun, 29 Nov 2020 22:24:13 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: proposal: function pg_setting_value_split() to parse\n shared_preload_libraries etc." }, { "msg_contents": "On Mon, 30 Nov 2020 at 03:24, Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n>\n> On 23.10.2020 05:06, Ian Lawrence Barwick wrote:\n> > Having just submitted this, I realised I'm focussing on the GUCs which call\n> > \"SplitDirectoriesString()\" (as my specific uses case is for\n> > \"shared_preload_libraries\")\n> > but the patch does not consider the other GUC_LIST_INPUT settings, which\n> > call \"SplitIdentifierString()\", so as is, it might produce unexpected\n> > results for those.\n> >\n> > I'll rethink and submit an updated version.\n>\n> Status update for a commitfest entry.\n>\n> This entry was \"Waiting on author\" during this CF. As I see, the patch\n> needs more work before review, so I changed it to \"Withdrawn\".\n> Feel free to resubmit an updated version to a future commitfest.\n\n\nFWIW I was looking for this functionality just the other day, for\nparsing synchronous_standby_names . So I'd definitely welcome it.\n\n\n", "msg_date": "Mon, 30 Nov 2020 13:53:14 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: function pg_setting_value_split() to parse\n shared_preload_libraries etc." 
}, { "msg_contents": "2020年11月30日(月) 14:53 Craig Ringer <craig.ringer@enterprisedb.com>:\n>\n> On Mon, 30 Nov 2020 at 03:24, Anastasia Lubennikova\n> <a.lubennikova@postgrespro.ru> wrote:\n> >\n> > On 23.10.2020 05:06, Ian Lawrence Barwick wrote:\n> > > Having just submitted this, I realised I'm focussing on the GUCs which call\n> > > \"SplitDirectoriesString()\" (as my specific uses case is for\n> > > \"shared_preload_libraries\")\n> > > but the patch does not consider the other GUC_LIST_INPUT settings, which\n> > > call \"SplitIdentifierString()\", so as is, it might produce unexpected\n> > > results for those.\n> > >\n> > > I'll rethink and submit an updated version.\n> >\n> > Status update for a commitfest entry.\n> >\n> > This entry was \"Waiting on author\" during this CF. As I see, the patch\n> > needs more work before review, so I changed it to \"Withdrawn\".\n> > Feel free to resubmit an updated version to a future commitfest.\n>\n>\n> FWIW I was looking for this functionality just the other day, for\n> parsing synchronous_standby_names . So I'd definitely welcome it.\n\nThanks, useful to know someone else has a use-case for this I'll\nresubmit for the next CF.\n\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Dec 2020 22:36:16 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: function pg_setting_value_split() to parse\n shared_preload_libraries etc." } ]
[ { "msg_contents": "Currently when people want to review a patch, they have to download / apply\n/\nmaintain the branch manually. Would it be helpful that the reviewer can\njust\ngit fetch a remote branch where all the things have been done already. I\nknow\nthat such cost saving is small, but it is a startup cost, so personally I\nthink it is\na good place to improve. Since we already maintain such remote git repo at\ncbfot,\nso can we just expose such URL for each item in commitfest then things\nwould be done?\n\nAt the bottom of cfbot, we have words \"Please send feedback to thomas\nmunro\",\nso I added Tomas in the cc list.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 23 Oct 2020 09:31:57 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Would it be helpful for share the patch merge result from cfbot" }, { "msg_contents": "On Fri, Oct 23, 2020 at 2:32 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Currently when people want to review a patch, they have to download / apply /\n> maintain the branch manually. Would it be helpful that the reviewer can just\n> git fetch a remote branch where all the things have been done already. I know\n> that such cost saving is small, but it is a startup cost, so personally I think it is\n> a good place to improve. 
Since we already maintain such remote git repo at cbfot,\n> so can we just expose such URL for each item in commitfest then things would be done?\n>\n> At the bottom of cfbot, we have words \"Please send feedback to thomas munro\",\n> so I added Tomas in the cc list.\n\nHi Andy,\n\nTry this:\n\ngit remote add cfbot https://github.com/postgresql-cfbot/postgresql.git\ngit fetch cfbot commitfest/30/2785\ngit checkout commitfest/30/2785\n\n\n", "msg_date": "Fri, 23 Oct 2020 14:51:25 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Would it be helpful for share the patch merge result from cfbot" }, { "msg_contents": "On Fri, Oct 23, 2020 at 2:51 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Oct 23, 2020 at 2:32 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > Currently when people want to review a patch, they have to download / apply /\n> > maintain the branch manually. Would it be helpful that the reviewer can just\n> > git fetch a remote branch where all the things have been done already. I know\n> > that such cost saving is small, but it is a startup cost, so personally I think it is\n> > a good place to improve. 
Since we already maintain such remote git repo at cbfot,\n> > so can we just expose such URL for each item in commitfest then things would be done?\n> >\n> > At the bottom of cfbot, we have words \"Please send feedback to thomas munro\",\n> > so I added Tomas in the cc list.\n>\n> Hi Andy,\n>\n> Try this:\n>\n> git remote add cfbot https://github.com/postgresql-cfbot/postgresql.git\n> git fetch cfbot commitfest/30/2785\n> git checkout commitfest/30/2785\n\nAlso, you might like this way of grabbing and applying all the patches\nfrom an archives link and applying them:\n\n$ cat ~/bin/fetch-all-patches.sh\n#!/bin/sh\nfor P in ` curl -s $1 | grep \"\\.patch\" | sed 's|^ *<a\nhref=\"|https://www.postgresql.org|;s|\".*||' ` ; do\n echo $P\n curl -s -O $P\ndone\n$ ~/bin/fetch-all-patches.sh\n'https://www.postgresql.org/message-id/20200718201532.GV23581@telsasoft.com'\nhttps://www.postgresql.org/message-id/attachment/112541/v21-0001-Document-historic-behavior-of-links-to-directori.patch\nhttps://www.postgresql.org/message-id/attachment/112542/v21-0002-pg_stat_file-and-pg_ls_dir_-to-use-lstat.patch\nhttps://www.postgresql.org/message-id/attachment/112543/v21-0003-Add-tests-on-pg_ls_dir-before-changing-it.patch\nhttps://www.postgresql.org/message-id/attachment/112544/v21-0004-Add-pg_ls_dir_metadata-to-list-a-dir-with-file-m.patch\nhttps://www.postgresql.org/message-id/attachment/112545/v21-0005-pg_ls_tmpdir-to-show-directories-and-isdir-argum.patch\nhttps://www.postgresql.org/message-id/attachment/112546/v21-0006-pg_ls_-dir-to-show-directories-and-isdir-column.patch\nhttps://www.postgresql.org/message-id/attachment/112547/v21-0007-Add-pg_ls_dir_recurse-to-show-dir-recursively.patch\nhttps://www.postgresql.org/message-id/attachment/112548/v21-0008-pg_ls_logdir-to-ignore-error-if-initial-top-dir-.patch\nhttps://www.postgresql.org/message-id/attachment/112549/v21-0009-pg_ls_-dir-to-return-all-the-metadata-from-pg_st.patch\nhttps://www.postgresql.org/message-id/attachment/112550/
v21-0010-pg_ls_-to-show-file-type-and-show-special-files.patch\n$ git am v21-*.patch\n\n\n", "msg_date": "Fri, 23 Oct 2020 14:57:42 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Would it be helpful for share the patch merge result from cfbot" }, { "msg_contents": "On Fri, Oct 23, 2020 at 9:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Fri, Oct 23, 2020 at 2:51 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > On Fri, Oct 23, 2020 at 2:32 PM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> > > Currently when people want to review a patch, they have to download /\n> apply /\n> > > maintain the branch manually. Would it be helpful that the reviewer\n> can just\n> > > git fetch a remote branch where all the things have been done already.\n> I know\n> > > that such cost saving is small, but it is a startup cost, so\n> personally I think it is\n> > > a good place to improve. Since we already maintain such remote git\n> repo at cbfot,\n> > > so can we just expose such URL for each item in commitfest then things\n> would be done?\n> > >\n> > > At the bottom of cfbot, we have words \"Please send feedback to thomas\n> munro\",\n> > > so I added Tomas in the cc list.\n> >\n> > Hi Andy,\n> >\n> > Try this:\n> >\n> > git remote add cfbot https://github.com/postgresql-cfbot/postgresql.git\n> > git fetch cfbot commitfest/30/2785\n> > git checkout commitfest/30/2785\n>\n> Also, you might like this way of grabbing and applying all the patches\n> from an archives link and applying them:\n>\n> $ cat ~/bin/fetch-all-patches.sh\n> #!/bin/sh\n> for P in ` curl -s $1 | grep \"\\.patch\" | sed 's|^ *<a\n> href=\"|https://www.postgresql.org|;s|\".*||' ` ; do\n> echo $P\n> curl -s -O $P\n> done\n> $ ~/bin/fetch-all-patches.sh\n> '\n> https://www.postgresql.org/message-id/20200718201532.GV23581@telsasoft.com\n> '\n>\n> 
https://www.postgresql.org/message-id/attachment/112541/v21-0001-Document-historic-behavior-of-links-to-directori.patch\n>\n> https://www.postgresql.org/message-id/attachment/112542/v21-0002-pg_stat_file-and-pg_ls_dir_-to-use-lstat.patch\n>\n> https://www.postgresql.org/message-id/attachment/112543/v21-0003-Add-tests-on-pg_ls_dir-before-changing-it.patch\n>\n> https://www.postgresql.org/message-id/attachment/112544/v21-0004-Add-pg_ls_dir_metadata-to-list-a-dir-with-file-m.patch\n>\n> https://www.postgresql.org/message-id/attachment/112545/v21-0005-pg_ls_tmpdir-to-show-directories-and-isdir-argum.patch\n>\n> https://www.postgresql.org/message-id/attachment/112546/v21-0006-pg_ls_-dir-to-show-directories-and-isdir-column.patch\n>\n> https://www.postgresql.org/message-id/attachment/112547/v21-0007-Add-pg_ls_dir_recurse-to-show-dir-recursively.patch\n>\n> https://www.postgresql.org/message-id/attachment/112548/v21-0008-pg_ls_logdir-to-ignore-error-if-initial-top-dir-.patch\n>\n> https://www.postgresql.org/message-id/attachment/112549/v21-0009-pg_ls_-dir-to-return-all-the-metadata-from-pg_st.patch\n>\n> https://www.postgresql.org/message-id/attachment/112550/v21-0010-pg_ls_-to-show-file-type-and-show-special-files.patch\n> $ git am v21-*.patch\n>\n\n\nThis is exactly what I want and more than that. Thank you Thomas!\n\n-- \nBest Regards\nAndy Fan\n\nOn Fri, Oct 23, 2020 at 9:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:On Fri, Oct 23, 2020 at 2:51 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Oct 23, 2020 at 2:32 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > Currently when people want to review a patch, they have to download / apply /\n> > maintain the branch manually.  Would it be helpful that the reviewer can just\n> > git fetch a remote branch where all the things have been done already. I know\n> > that such cost saving is small, but it is a startup cost, so personally I think it is\n> > a good place to improve. 
Since we already maintain such remote git repo at cbfot,\n> > so can we just expose such URL for each item in commitfest then things would be done?\n> >\n> > At the bottom of cfbot, we have words \"Please send feedback to thomas munro\",\n> > so I added Tomas in the cc list.\n>\n> Hi Andy,\n>\n> Try this:\n>\n> git remote add cfbot https://github.com/postgresql-cfbot/postgresql.git\n> git fetch cfbot commitfest/30/2785\n> git checkout commitfest/30/2785\n\nAlso, you might like this way of grabbing and applying all the patches\nfrom an archives link and applying them:\n\n$ cat ~/bin/fetch-all-patches.sh\n#!/bin/sh\nfor P in ` curl -s $1 | grep \"\\.patch\" | sed 's|^ *<a\nhref=\"|https://www.postgresql.org|;s|\".*||' ` ; do\n  echo $P\n  curl -s -O $P\ndone\n$ ~/bin/fetch-all-patches.sh\n'https://www.postgresql.org/message-id/20200718201532.GV23581@telsasoft.com'\nhttps://www.postgresql.org/message-id/attachment/112541/v21-0001-Document-historic-behavior-of-links-to-directori.patch\nhttps://www.postgresql.org/message-id/attachment/112542/v21-0002-pg_stat_file-and-pg_ls_dir_-to-use-lstat.patch\nhttps://www.postgresql.org/message-id/attachment/112543/v21-0003-Add-tests-on-pg_ls_dir-before-changing-it.patch\nhttps://www.postgresql.org/message-id/attachment/112544/v21-0004-Add-pg_ls_dir_metadata-to-list-a-dir-with-file-m.patch\nhttps://www.postgresql.org/message-id/attachment/112545/v21-0005-pg_ls_tmpdir-to-show-directories-and-isdir-argum.patch\nhttps://www.postgresql.org/message-id/attachment/112546/v21-0006-pg_ls_-dir-to-show-directories-and-isdir-column.patch\nhttps://www.postgresql.org/message-id/attachment/112547/v21-0007-Add-pg_ls_dir_recurse-to-show-dir-recursively.patch\nhttps://www.postgresql.org/message-id/attachment/112548/v21-0008-pg_ls_logdir-to-ignore-error-if-initial-top-dir-.patch\nhttps://www.postgresql.org/message-id/attachment/112549/v21-0009-pg_ls_-dir-to-return-all-the-metadata-from-pg_st.patch\nhttps://www.postgresql.org/message-id/attachment/11255
0/v21-0010-pg_ls_-to-show-file-type-and-show-special-files.patch\n$ git am v21-*.patch\nThis is exactly what I want and more than that.  Thank you Thomas!\n\n--\nBest Regards\nAndy Fan", "msg_date": "Fri, 23 Oct 2020 10:01:14 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Would it be helpful for share the patch merge result from cfbot" }, { "msg_contents": "On Fri, Oct 23, 2020 at 09:31:57AM +0800, Andy Fan wrote:\n> Currently when people want to review a patch, they have to download / apply\n> / maintain the branch manually. Would it be helpful that the reviewer can\n> just git fetch a remote branch where all the things have been done already. I\n> know that such cost saving is small, but it is a startup cost, so personally I\n> think it is a good place to improve. Since we already maintain such remote git repo at\n> cfbot, so can we just expose such URL for each item in commitfest then things\n> would be done?\n> \n> At the bottom of cfbot, we have words \"Please send feedback to thomas\n> munro\", so I added Thomas in the cc list.\n\nIt seems to me that this problem is not completely related to the CF\nbot, no? Automated testing and fetching from a mirror repository\nthat's automated to fetch patches from the mailing list and apply them\non some custom branches looks like something entirely independent to\nme. Saying that, having something like that may be nice to have,\nassuming that we also find a way to track in this repo patches that\nare not able to apply correctly, meaning that we could forcibly apply\nthe patches with the conflicts included in what's committed in the\ntest branch. 
In terms of my own experience as CFM, I don't think that\nthis would be really helpful for the CF manager, but for reviewers or\nnewcomers, that could help in getting involved into the CF.\n--\nMichael", "msg_date": "Fri, 23 Oct 2020 11:05:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Would it be helpful for share the patch merge result from cfbot" }, { "msg_contents": "On Fri, Oct 23, 2020 at 11:05:34AM +0900, Michael Paquier wrote:\n> It seems to me that this problem is not completely related to the CF\n> bot, no? Automated testing and fetching from a mirror repository\n> that's automated to fetch patches from the mailing list and apply them\n> on some custom branches looks like something entirely independent to\n> me. Saying that, having something like that may be nice to have,\n> assuming that we also find a way to track in this repo patches that\n> are not able to apply correctly, meaning that we could forcibly apply\n> the patches with the conflicts included in what's committed in the\n> test branch. 
In terms of my own experience as CFM, I don't think that\n> this would be really helpful for the CF manager, but for reviewers or\n> newcomers, that could help in getting involved into the CF.\n\nPlease forget that, I just noticed Thomas' replies.\n\n/me hides\n--\nMichael", "msg_date": "Fri, 23 Oct 2020 11:10:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Would it be helpful for share the patch merge result from cfbot" }, { "msg_contents": "At Fri, 23 Oct 2020 10:01:14 +0800, Andy Fan <zhihui.fan1213@gmail.com> wrote in \n> On Fri, Oct 23, 2020 at 9:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> > > Try this:\n> > >\n> > > git remote add cfbot https://github.com/postgresql-cfbot/postgresql.git\n> > > git fetch cfbot commitfest/30/2785\n> > > git checkout commitfest/30/2785\n> >\n> > Also, you might like this way of grabbing and applying all the patches\n> > from an archives link and applying them:\n> >\n> > $ cat ~/bin/fetch-all-patches.sh\n> > #!/bin/sh\n> > for P in ` curl -s $1 | grep \"\\.patch\" | sed 's|^ *<a\n> > href=\"|https://www.postgresql.org|;s|\".*||' ` ; do\n> > echo $P\n> > curl -s -O $P\n> > done\n> > $ ~/bin/fetch-all-patches.sh\n> > '\n> > https://www.postgresql.org/message-id/20200718201532.GV23581@telsasoft.com\n> > '\n...\n> \n> \n> This is exactly what I want and more than that. Thank you Thomas!\n\nThat's awfully useful. Thanks for sharing!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Oct 2020 15:24:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Would it be helpful for share the patch merge result from cfbot" } ]
[ { "msg_contents": "Hi\n\nNot that I've ever had to do this (or would want to do it on a production\nsystem), but this error message seems incorrect:\n\n postgres=# ALTER SYSTEM SET unix_socket_directories =\n'/tmp/sock1','/tmp/sock2';\n ERROR: SET unix_socket_directories takes only one argument\n\nTrivial patch attached.\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Fri, 23 Oct 2020 11:34:06 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "\"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "On Fri, Oct 23, 2020 at 11:34:06AM +0900, Ian Lawrence Barwick wrote:\n> Not that I've ever had to do this (or would want to do it on a production\n> system), but this error message seems incorrect:\n> \n> postgres=# ALTER SYSTEM SET unix_socket_directories =\n> '/tmp/sock1','/tmp/sock2';\n> ERROR: SET unix_socket_directories takes only one argument\n> \n> Trivial patch attached.\n\nI have never seen that case, but I think that you are right. Still,\nthat's not the end of it, see by yourself what the following command\ngenerates with only your patch, which is fancy:\nALTER SYSTEM SET unix_socket_directories = '/tmp/sock1','/tmp/, sock2';\n\nWe need an extra GUC_LIST_QUOTE on top of what you are proposing.\n--\nMichael", "msg_date": "Fri, 23 Oct 2020 12:12:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" 
}, { "msg_contents": "2020年10月23日(金) 12:12 Michael Paquier <michael@paquier.xyz>:\n>\n> On Fri, Oct 23, 2020 at 11:34:06AM +0900, Ian Lawrence Barwick wrote:\n> > Not that I've ever had to do this (or would want to do it on a production\n> > system), but this error message seems incorrect:\n> >\n> > postgres=# ALTER SYSTEM SET unix_socket_directories =\n> > '/tmp/sock1','/tmp/sock2';\n> > ERROR: SET unix_socket_directories takes only one argument\n> >\n> > Trivial patch attached.\n>\n> I have never seen that case, but I think that you are right. Still,\n> that's not the end of it, see by yourself what the following command\n> generates with only your patch, which is fancy:\n> ALTER SYSTEM SET unix_socket_directories = '/tmp/sock1','/tmp/, sock2';\n>\n> We need an extra GUC_LIST_QUOTE on top of what you are proposing.\n\nAh yes, good point.\n\nUpdated version attached.\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Fri, 23 Oct 2020 12:23:28 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "On Fri, Oct 23, 2020 at 12:23:28PM +0900, Ian Lawrence Barwick wrote:\n> Updated version attached.\n\nLGTM. Looking at c9b0cbe and the relevant thread it looks like this\npoint was not really covered, so my guess is that this was just\nforgotten:\nhttps://www.postgresql.org/message-id/4FCF6040.5030408@redhat.com\n\nI'll look again at that in the next couple of days and double-check\nthe relevant areas of the code, just in case. It is Friday afternoon\nhere, and I suspect that my mind is missing something obvious.\n--\nMichael", "msg_date": "Fri, 23 Oct 2020 12:31:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" 
}, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I'll look again at that in the next couple of days and double-check\n> the relevant areas of the code, just in case. It is Friday afternoon\n> here, and I suspect that my mind is missing something obvious.\n\nIndeed. The patch fails to update pg_dump.c's\nvariable_is_guc_list_quote(), which exposes the real problem here:\nchanging an existing variable's GUC_LIST_QUOTE property is an API break.\n\nGetting pg_dump to cope with such a situation would be a research project.\nThe easy part of it would be to make variable_is_guc_list_quote() be\nversion-aware; the hard part would be figuring out what to emit so that\nSET clauses will load correctly regardless of which PG version they will\nbe loaded into.\n\nI suspect you're right that this variable should have been marked as a\nlist to start with, but I'm afraid changing it at this point would be\nway more trouble than it's worth.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 23:56:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "2020年10月23日(金) 12:56 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Michael Paquier <michael@paquier.xyz> writes:\n> > I'll look again at that in the next couple of days and double-check\n> > the relevant areas of the code, just in case. It is Friday afternoon\n> > here, and I suspect that my mind is missing something obvious.\n>\n> Indeed. 
The patch fails to update pg_dump.c's\n> variable_is_guc_list_quote(), which exposes the real problem here:\n> changing an existing variable's GUC_LIST_QUOTE property is an API break.\n\nAha, noted.\n\n> Getting pg_dump to cope with such a situation would be a research project.\n> The easy part of it would be to make variable_is_guc_list_quote() be\n> version-aware; the hard part would be figuring out what to emit so that\n> SET clauses will load correctly regardless of which PG version they will\n> be loaded into.\n>\n> I suspect you're right that this variable should have been marked as a\n> list to start with, but I'm afraid changing it at this point would be\n> way more trouble than it's worth.\n\nThe use-case is admittedly extremely marginal, and presumably hasn't attracted\nany other reports until now. I only noticed as I was poking around in\nthe area and\nit looked inconsistent.\n\nHow about adding a comment along the lines of\n\n/*\n * GUC_LIST_INPUT not set here as the use-case is marginal and modifying it\n * would require an API change.\n */\n\nto clarify why it's like that and prevent someone else trying to \"fix\"\nthe same issue\nin a few year's time?\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Oct 2020 13:13:56 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" 
}, { "msg_contents": "Ian Lawrence Barwick <barwick@gmail.com> writes:\n> How about adding a comment along the lines of\n\nA comment seems reasonable, but I'd probably write it more like\n\n/*\n * unix_socket_directories should have been marked GUC_LIST_INPUT |\n * GUC_LIST_QUOTE, but it's too late to change it without creating\n * big compatibility problems for pg_dump.\n */\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 00:36:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "I wrote:\n> * unix_socket_directories should have been marked GUC_LIST_INPUT |\n> * GUC_LIST_QUOTE, but it's too late to change it without creating\n> * big compatibility problems for pg_dump.\n\nAlthough ... just to argue against myself for a moment, how likely\nis it that pg_dump is going to be faced with the need to dump a\nvalue for unix_socket_directories?\n\nGenerally speaking, the value of that variable is sufficiently\nembedded into builds that you aren't going to mess with it.\nIt's close to being part of the FE/BE protocol, since whatever\nbuild of libpq you use is going to know about one or another\nof those directories, and the only reason to have more than one\nis if you have other clients that hard-wire some other directory.\n\nEven ignoring that point, since it's PGC_POSTMASTER, you certainly\naren't going to have cases like function SET clauses or ALTER\nUSER/DATABASE SET commands to dump. Unless pg_dumpall worries\nabout postgresql.auto.conf, which I don't think it does, the actual\nuse-case for it to dump a value for unix_socket_directories is nil\n--- and even having the variable's value set in postgresql.auto.conf\nseems like a seriously niche use-case.\n\nSo maybe we could get away with just changing it. 
It'd be good to\nverify though that this doesn't break existing string values for\nthe variable, assuming they contain no unlikely characters that'd\nneed quoting.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 00:49:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "On Fri, Oct 23, 2020 at 12:49:57AM -0400, Tom Lane wrote:\n> Although ... just to argue against myself for a moment, how likely\n> is it that pg_dump is going to be faced with the need to dump a\n> value for unix_socket_directories?\n\nI am trying to think about some scenarios here, but honestly I\ncannot..\n\n> So maybe we could get away with just changing it. It'd be good to\n> verify though that this doesn't break existing string values for\n> the variable, assuming they contain no unlikely characters that'd\n> need quoting.\n\nYeah, that's the kind of things I wanted to check anyway before\nconsidering doing the switch.\n--\nMichael", "msg_date": "Fri, 23 Oct 2020 17:02:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "On 2020-10-23 10:02, Michael Paquier wrote:\n>> So maybe we could get away with just changing it. It'd be good to\n>> verify though that this doesn't break existing string values for\n>> the variable, assuming they contain no unlikely characters that'd\n>> need quoting.\n> \n> Yeah, that's the kind of things I wanted to check anyway before\n> considering doing the switch.\n\nIf we're going to change it I think we need an updated patch that covers \npg_dump. 
(Even if we argue that pg_dump would not normally dump this \nvariable, keeping it up to date with respect to GUC_LIST_QUOTE seems \nproper.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 27 Oct 2020 11:42:28 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> If we're going to change it I think we need an updated patch that covers \n> pg_dump. (Even if we argue that pg_dump would not normally dump this \n> variable, keeping it up to date with respect to GUC_LIST_QUOTE seems \n> proper.)\n\nRight, I was definitely assuming that that would happen.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Oct 2020 09:45:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "On Tue, Oct 27, 2020 at 9:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > If we're going to change it I think we need an updated patch that covers\n> > pg_dump. (Even if we argue that pg_dump would not normally dump this\n> > variable, keeping it up to date with respect to GUC_LIST_QUOTE seems\n> > proper.)\n>\n> Right, I was definitely assuming that that would happen.\n\nIf we change this, is it going to be a compatibility break for the\ncontents of postgresql.conf files?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 27 Oct 2020 12:19:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> If we change this, is it going to be a compatibility break for the\n> contents of postgresql.conf files?\n\nI think not, at least not for the sorts of values you'd ordinarily\nfind in that variable, say '/tmp, /var/run/postgresql'. Possibly\nthe behavior would change for pathnames containing spaces or the\nlike, but it is probably kinda broken for such cases today anyway.\n\nIn any case, Michael had promised to test this aspect before committing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Oct 2020 12:23:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "On Tue, Oct 27, 2020 at 12:23:22PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> If we change this, is it going to be a compatibility break for the\n>> contents of postgresql.conf files?\n> \n> I think not, at least not for the sorts of values you'd ordinarily\n> find in that variable, say '/tmp, /var/run/postgresql'. Possibly\n> the behavior would change for pathnames containing spaces or the\n> like, but it is probably kinda broken for such cases today anyway.\n> \n> In any case, Michael had promised to test this aspect before committing.\n\nPaths with spaces or commas would be fine as long as we associate\nGUC_LIST_QUOTE with GUC_LIST_INPUT so as commas within quoted entries\nare handled consistently. postmaster.c uses SplitDirectoriesString()\nwith a comma do decide how to split things. This discards leading and\ntrailing whitespaces, requires a double-quote to have a matching\ndouble-quote where trailing/leading whitespaces are allowed, and\nnothing to escape quotes. 
Those patterns fail:\n\"/tmp/repo1,\"/tmp/repo2\n\"/tmp/repo1,/tmp/repo2\nThese are split as a single entry:\n\"/tmp/repo1,/tmp/repo2\"\n\"/tmp/ repo1,/tmp/ repo2\"\n\"/tmp/ repo1 , /tmp/ repo2 \"\nThese are split as two entries:\n\"/tmp/repo1,\",/tmp/repo2\n/tmp /repo1 , /tmp/ repo2\n\"/tmp/\"\"sock1\", \"/tmp/, sock2\" (here first path has one double quote)\n\nIf we use GUC_LIST_INPUT | GUC_LIST_QUOTE, paths are handled the same\nway as the original, but we would run into problems if not using\nGUC_LIST_QUOTE as mentioned upthread.\n\nAnyway, we have a compatibility problem once we use ALTER SYSTEM.\nJust take the following command: \nalter system set unix_socket_directories = '/tmp/sock1, /tmp/sock2';\n\nOn HEAD, this would be treated and parsed as two items. However, with\nthe patch, this becomes one item as this is considered as one single\nelement of the list of paths, as that's written to\npostgresql.auto.conf as '\"/tmp/sock1, /tmp/sock2\"'.\n\nThis last argument would be IMO a reason enough to not do the switch.\nEven if I have never seen cases where ALTER SYSTEM was used with\nunix_socket_directories, we cannot say either that nobody relies on\nthe existing behavior (perhaps some failover solutions care?). So at\nleast we should add a comment as proposed in\nhttps://postgr.es/m/122596.1603427777@sss.pgh.pa.us.\n\nThoughts?\n--\nMichael", "msg_date": "Wed, 4 Nov 2020 12:59:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Anyway, we have a compatibility problem once we use ALTER SYSTEM.\n> Just take the following command: \n> alter system set unix_socket_directories = '/tmp/sock1, /tmp/sock2';\n\n> On HEAD, this would be treated and parsed as two items. 
However, with\n> the patch, this becomes one item as this is considered as one single\n> element of the list of paths, as that's written to\n> postgresql.auto.conf as '\"/tmp/sock1, /tmp/sock2\"'.\n\n> This last argument would be IMO a reason enough to not do the switch.\n\nI do not think that that's a fatal objection. I doubt anyone has\napplications that are automatically issuing that sort of command and\nwould be broken by a change. I think backwards compatibility is\nsufficiently met if the behavior remains the same for existing\npostgresql.conf entries, which AFAICT it would.\n\nArguably, the whole point of doing something here is to make ALTER\nSYSTEM handle this variable more sensibly. In that context,\n'/tmp/sock1, /tmp/sock2' *should* be taken as one item IMO.\nWe can't change the behavior without, um, changing the behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Nov 2020 10:47:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "On Wed, Nov 04, 2020 at 10:47:43AM -0500, Tom Lane wrote:\n> I do not think that that's a fatal objection. I doubt anyone has\n> applications that are automatically issuing that sort of command and\n> would be broken by a change. I think backwards compatibility is\n> sufficiently met if the behavior remains the same for existing\n> postgresql.conf entries, which AFAICT it would.\n\nOK. As far as I know, we parse this variable the same way, so this\ncase would be satisfied.\n\n> Arguably, the whole point of doing something here is to make ALTER\n> SYSTEM handle this variable more sensibly. In that context,\n> '/tmp/sock1, /tmp/sock2' *should* be taken as one item IMO.\n> We can't change the behavior without, um, changing the behavior.\n\nNo arguments against this point either. If you consider all that, the\nswitch can be done with the attached, with the change for pg_dump\nincluded. 
I have reorganized the list in variable_is_guc_list_quote()\nalphabetically while on it.\n\nRobert, is your previous question answered?\n--\nMichael", "msg_date": "Thu, 5 Nov 2020 09:16:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" }, { "msg_contents": "On Thu, Nov 05, 2020 at 09:16:10AM +0900, Michael Paquier wrote:\n> No arguments against this point either. If you consider all that, the\n> switch can be done with the attached, with the change for pg_dump\n> included. I have reorganized the list in variable_is_guc_list_quote()\n> alphabetically while on it.\n\nHearing nothing, applied on HEAD.\n--\nMichael", "msg_date": "Sat, 7 Nov 2020 10:47:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: \"unix_socket_directories\" should be GUC_LIST_INPUT?" } ]
[ { "msg_contents": "Hi, hackers\r\n\r\nI find that ALTER TABLE xxx FORCE/NO FORCE ROW LEVEL SECURITY cannot support tab complete.\r\nThe attached add the tab complete for rls.\r\n\r\ndiff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\r\nindex 561fe1dff9..b2b4f1fd4d 100644\r\n--- a/src/bin/psql/tab-complete.c\r\n+++ b/src/bin/psql/tab-complete.c\r\n@@ -1974,10 +1974,10 @@ psql_completion(const char *text, int start, int end)\r\n */\r\n else if (Matches(\"ALTER\", \"TABLE\", MatchAny))\r\n COMPLETE_WITH(\"ADD\", \"ALTER\", \"CLUSTER ON\", \"DISABLE\", \"DROP\",\r\n- \"ENABLE\", \"INHERIT\", \"NO INHERIT\", \"RENAME\", \"RESET\",\r\n+ \"ENABLE\", \"INHERIT\", \"NO\", \"RENAME\", \"RESET\",\r\n \"OWNER TO\", \"SET\", \"VALIDATE CONSTRAINT\",\r\n \"REPLICA IDENTITY\", \"ATTACH PARTITION\",\r\n- \"DETACH PARTITION\");\r\n+ \"DETACH PARTITION\", \"FORCE ROW LEVEL SECURITY\");\r\n /* ALTER TABLE xxx ENABLE */\r\n else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"ENABLE\"))\r\n COMPLETE_WITH(\"ALWAYS\", \"REPLICA\", \"ROW LEVEL SECURITY\", \"RULE\",\r\n@@ -2007,6 +2007,9 @@ psql_completion(const char *text, int start, int end)\r\n /* ALTER TABLE xxx INHERIT */\r\n else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"INHERIT\"))\r\n COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, \"\");\r\n+ /* ALTER TABLE xxx NO */\r\n+ else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"NO\"))\r\n+ COMPLETE_WITH(\"FORCE ROW LEVEL SECURITY\", \"INHERIT\");\r\n /* ALTER TABLE xxx NO INHERIT */\r\n else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"NO\", \"INHERIT\"))\r\n COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, \"”);\r\n\r\nBest regards.\r\n\r\n\r\n--\r\nChengDu WenWu Information Technology Co,Ltd.\r\nJapin Li", "msg_date": "Fri, 23 Oct 2020 05:19:00 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "Sorry, I forgot add the subject.\r\n\r\n--\r\nBest regards\r\nJapin Li\r\n\r\nOn Oct 23, 2020, 
at 1:19 PM, Li Japin <japinli@hotmail.com> wrote:\r\n\r\nHi, hackers\r\n\r\nI find that ALTER TABLE xxx FORCE/NO FORCE ROW LEVEL SECURITY cannot support tab complete.\r\nThe attached add the tab complete for rls.\r\n\r\ndiff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\r\nindex 561fe1dff9..b2b4f1fd4d 100644\r\n--- a/src/bin/psql/tab-complete.c\r\n+++ b/src/bin/psql/tab-complete.c\r\n@@ -1974,10 +1974,10 @@ psql_completion(const char *text, int start, int end)\r\n */\r\n else if (Matches(\"ALTER\", \"TABLE\", MatchAny))\r\n COMPLETE_WITH(\"ADD\", \"ALTER\", \"CLUSTER ON\", \"DISABLE\", \"DROP\",\r\n- \"ENABLE\", \"INHERIT\", \"NO INHERIT\", \"RENAME\", \"RESET\",\r\n+ \"ENABLE\", \"INHERIT\", \"NO\", \"RENAME\", \"RESET\",\r\n \"OWNER TO\", \"SET\", \"VALIDATE CONSTRAINT\",\r\n \"REPLICA IDENTITY\", \"ATTACH PARTITION\",\r\n- \"DETACH PARTITION\");\r\n+ \"DETACH PARTITION\", \"FORCE ROW LEVEL SECURITY\");\r\n /* ALTER TABLE xxx ENABLE */\r\n else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"ENABLE\"))\r\n COMPLETE_WITH(\"ALWAYS\", \"REPLICA\", \"ROW LEVEL SECURITY\", \"RULE\",\r\n@@ -2007,6 +2007,9 @@ psql_completion(const char *text, int start, int end)\r\n /* ALTER TABLE xxx INHERIT */\r\n else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"INHERIT\"))\r\n COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, \"\");\r\n+ /* ALTER TABLE xxx NO */\r\n+ else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"NO\"))\r\n+ COMPLETE_WITH(\"FORCE ROW LEVEL SECURITY\", \"INHERIT\");\r\n /* ALTER TABLE xxx NO INHERIT */\r\n else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"NO\", \"INHERIT\"))\r\n COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, \"”);\r\n\r\nBest regards.\r\n\r\n\r\n--\r\nChengDu WenWu Information Technology Co,Ltd.\r\nJapin Li\r\n\r\n<0001-Add-tab-complete-for-alter-table-rls.patch>", "msg_date": "Fri, 23 Oct 2020 05:22:57 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Tab complete for alter table rls" }, { "msg_contents": "On 
Fri, Oct 23, 2020 at 05:22:57AM +0000, Li Japin wrote:\n> Sorry, I forgot add the subject.\n\nNo worries. Good catch. I'll try to test that and apply it later,\nbut by reading the code it looks like you got that right.\n--\nMichael", "msg_date": "Fri, 23 Oct 2020 16:37:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Tab complete for alter table rls" }, { "msg_contents": "On Fri, Oct 23, 2020 at 04:37:18PM +0900, Michael Paquier wrote:\n> No worries. Good catch. I'll try to test that and apply it later,\n> but by reading the code it looks like you got that right.\n\nChecked and applied on HEAD, thanks!\n--\nMichael", "msg_date": "Sat, 24 Oct 2020 10:49:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Tab complete for alter table rls" }, { "msg_contents": "Thanks Michael!\n\n--\nBest regards\nJapin Li\n\n\n\n> On Oct 24, 2020, at 9:49 AM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Oct 23, 2020 at 04:37:18PM +0900, Michael Paquier wrote:\n>> No worries. Good catch. I'll try to test that and apply it later,\n>> but by reading the code it looks like you got that right.\n> \n> Checked and applied on HEAD, thanks!\n> --\n> Michael\n\n\n\n", "msg_date": "Sat, 24 Oct 2020 04:09:55 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Tab complete for alter table rls" } ]
[ { "msg_contents": "Since this commit, pg_dump CREATEs tables and then ATTACHes them:\n\n|commit 33a53130a89447e171a8268ae0b221bb48af6468\n|Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n|Date: Mon Jun 10 18:56:23 2019 -0400\n|\n| Make pg_dump emit ATTACH PARTITION instead of PARTITION OF (reprise)\n|...\n| This change also has the advantage that the partition is restorable from\n| the dump (as a standalone table) even if its parent table isn't\n| restored.\n\nI like the idea of child tables being independently restorable, but it doesn't\nseem to work.\n\n|psql postgres -c 'DROP TABLE IF EXISTS t' -c 'CREATE TABLE t(i int) PARTITION BY RANGE(i)' -c 'CREATE TABLE t1 PARTITION OF t FOR VALUES FROM (1)TO(2)'\n|pg_dump postgres -Fc -t t1 >dump.t1\n|psql postgres -c 'DROP TABLE t'\n|pg_restore -d postgres ./dump.t1\n|pg_restore: while PROCESSING TOC:\n|pg_restore: from TOC entry 457; 1259 405311409 TABLE t1 pryzbyj\n|pg_restore: error: could not execute query: ERROR: relation \"public.t\" does not exist\n|Command was: CREATE TABLE public.t1 (\n| i integer\n|);\n|ALTER TABLE ONLY public.t ATTACH PARTITION public.t1 FOR VALUES FROM (1) TO (2);\n|\n|pg_restore: error: could not execute query: ERROR: relation \"public.t1\" does not exist\n|Command was: ALTER TABLE public.t1 OWNER TO pryzbyj;\n|\n|pg_restore: from TOC entry 4728; 0 405311409 TABLE DATA t1 pryzbyj\n|pg_restore: error: could not execute query: ERROR: relation \"public.t1\" does not exist\n|Command was: COPY public.t1 (i) FROM stdin;\n|pg_restore: warning: errors ignored on restore: 3\n\nNow that I look, it seems like this is calling PQexec(), which sends a single,\n\"simple\" libpq message with:\n|CREATE TABLE ..; ALTER TABLE .. ATTACH PARTITION;\n..which is transactional, so when the 2nd command fails, the CREATE is rolled back.\nhttps://www.postgresql.org/docs/9.5/libpq-exec.html#LIBPQ-EXEC-MAIN\n\nTelsasoft does a lot of dynamic DDL, so this happens sometimes due to columns\nadded or promoted. 
Up to now, when this has come up, I've run:\npg_restore |grep -v 'ATTACH PARTITION' |psql. Am I missing something ?\n\nThe idea of being independently restorable maybe originated with Tom's comment\nhere: https://www.postgresql.org/message-id/30049.1555537881%40sss.pgh.pa.us\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 23 Oct 2020 00:29:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg_dump, ATTACH, and independently restorable child partitions" }, { "msg_contents": "On Fri, Oct 23, 2020 at 12:29:40AM -0500, Justin Pryzby wrote:\n> Since this commit, pg_dump CREATEs tables and then ATTACHes them:\n> \n> |commit 33a53130a89447e171a8268ae0b221bb48af6468\n> |Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> |Date: Mon Jun 10 18:56:23 2019 -0400\n> |\n> | Make pg_dump emit ATTACH PARTITION instead of PARTITION OF (reprise)\n> |...\n> | This change also has the advantage that the partition is restorable from\n> | the dump (as a standalone table) even if its parent table isn't\n> | restored.\n> \n> I like the idea of child tables being independently restorable, but it doesn't\n> seem to work.\n...\n> Now that I look, it seems like this is calling PQexec(), which sends a single,\n> \"simple\" libpq message with:\n> |CREATE TABLE ..; ALTER TABLE .. 
ATTACH PARTITION;\n> ..which is transactional, so when the 2nd command fails, the CREATE is rolled back.\n> https://www.postgresql.org/docs/9.5/libpq-exec.html#LIBPQ-EXEC-MAIN\n\nThe easy fix is to add an explicit begin/commit.\n\n-- \nJustin", "msg_date": "Sat, 24 Oct 2020 14:59:49 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump, ATTACH, and independently restorable child partitions" }, { "msg_contents": "On Sat, Oct 24, 2020 at 02:59:49PM -0500, Justin Pryzby wrote:\n> On Fri, Oct 23, 2020 at 12:29:40AM -0500, Justin Pryzby wrote:\n> > Since this commit, pg_dump CREATEs tables and then ATTACHes them:\n> > \n> > |commit 33a53130a89447e171a8268ae0b221bb48af6468\n> > |Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > |Date: Mon Jun 10 18:56:23 2019 -0400\n> > |\n> > | Make pg_dump emit ATTACH PARTITION instead of PARTITION OF (reprise)\n> > |...\n> > | This change also has the advantage that the partition is restorable from\n> > | the dump (as a standalone table) even if its parent table isn't\n> > | restored.\n> > \n> > I like the idea of child tables being independently restorable, but it doesn't\n> > seem to work.\n> ...\n> > Now that I look, it seems like this is calling PQexec(), which sends a single,\n> > \"simple\" libpq message with:\n> > |CREATE TABLE ..; ALTER TABLE .. 
ATTACH PARTITION;\n> > ..which is transactional, so when the 2nd command fails, the CREATE is rolled back.\n> > https://www.postgresql.org/docs/9.5/libpq-exec.html#LIBPQ-EXEC-MAIN\n> \n> The easy fix is to add an explicit begin/commit.\n\nNow with updated test script.\n\n-- \nJustin", "msg_date": "Thu, 29 Oct 2020 12:00:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump, ATTACH, and independently restorable child partitions" }, { "msg_contents": "On 2020-Oct-24, Justin Pryzby wrote:\n\n> On Fri, Oct 23, 2020 at 12:29:40AM -0500, Justin Pryzby wrote:\n\n> > Now that I look, it seems like this is calling PQexec(), which sends a single,\n> > \"simple\" libpq message with:\n> > |CREATE TABLE ..; ALTER TABLE .. ATTACH PARTITION;\n> > ..which is transactional, so when the 2nd command fails, the CREATE is rolled back.\n> > https://www.postgresql.org/docs/9.5/libpq-exec.html#LIBPQ-EXEC-MAIN\n> \n> The easy fix is to add an explicit begin/commit.\n\nHmm, I think this throws a warning when used with \"pg_restore -1\",\nright? I don't think that's sufficient reason to discard the idea, but\nit'd be better to find some other way.\n\nI have no ideas ATM :-(\n\n\n\n", "msg_date": "Fri, 6 Nov 2020 23:18:35 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pg_dump, ATTACH, and independently restorable child partitions" }, { "msg_contents": "On Fri, Nov 06, 2020 at 11:18:35PM -0300, Alvaro Herrera wrote:\n> On 2020-Oct-24, Justin Pryzby wrote:\n> \n> > On Fri, Oct 23, 2020 at 12:29:40AM -0500, Justin Pryzby wrote:\n> \n> > > Now that I look, it seems like this is calling PQexec(), which sends a single,\n> > > \"simple\" libpq message with:\n> > > |CREATE TABLE ..; ALTER TABLE .. 
ATTACH PARTITION;\n> > > ..which is transactional, so when the 2nd command fails, the CREATE is rolled back.\n> > > https://www.postgresql.org/docs/9.5/libpq-exec.html#LIBPQ-EXEC-MAIN\n> > \n> > The easy fix is to add an explicit begin/commit.\n> \n> Hmm, I think this throws a warning when used with \"pg_restore -1\",\n> right? I don't think that's sufficient reason to discard the idea, but\n> it'd be better to find some other way.\n\nWorse, right? It'd commit in the middle and then continue outside of a txn.\nI guess there's no test case for this :(\n\n> I have no ideas ATM :-(\n\n1. Maybe pg_restore ExecuteSqlCommandBuf() should (always?) call\nExecuteSimpleCommands() instead of ExecuteSqlCommand(). It doesn't seem to\nbreak anything (although that surprised me).\n\n2. Otherwise, the createStmt would need to be split into a createStmt2 or a\nchar *createStmt[], which I think would then require changing the output\nformat. It seems clearly better to keep the sql commands split up initially\nthan to reverse engineer them during restore.\n\nI tried using \\x01 to separate commands, and strtok to split them to run them\nindividually. But that breaks the pg_dumpall tests. As an experiment, I used\n\\x00, which is somewhat invasive but actually works.\n\nObviously patching pg_dump will affect only future backups, and the pg_restore\npatch allows independently restoring parent tables in existing dumps.\n\n-- \nJustin", "msg_date": "Fri, 20 Nov 2020 09:20:55 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump, ATTACH, and independently restorable child partitions" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> 1. Maybe pg_restore ExecuteSqlCommandBuf() should (always?) call\n> ExecuteSimpleCommands() instead of ExecuteSqlCommand(). 
It doesn't seem to\n> break anything (although that surprised me).\n\nThat certainly does break everything, which I imagine is the reason\nwhy the cfbot shows that this patch is failing the pg_upgrade tests.\nNote the comments for ExecuteSimpleCommands:\n\n * We have to lex the data to the extent of identifying literals and quoted\n * identifiers, so that we can recognize statement-terminating semicolons.\n * We assume that INSERT data will not contain SQL comments, E'' literals,\n * or dollar-quoted strings, so this is much simpler than a full SQL lexer.\n\nIOW, where that says \"Simple\", it means *simple* --- in practice,\nwe only risk using it on commands that we know pg_dump itself built\nearlier. There is no reasonable prospect of getting pg_restore to\nsplit arbitrary SQL at command boundaries. We'd need something\ncomparable to psql's lexer, which is huge, and from a future-proofing\nstandpoint it would be just awful. (The worst that happens if psql\nmisparses your string is that it won't send the command when you\nexpected. If pg_restore misparses stuff, your best case is that the\nrestore fails cleanly; the worst case could easily result in\nSQL-injection compromises.) So I think we cannot follow this\napproach.\n\nWhat we'd need to do if we want this to work with direct-to-DB restore\nis to split off the ATTACH PARTITION command as a separate TOC entry.\nThat doesn't seem amazingly difficult, and it would even offer the\npossibility that you could extract the partition standalone without\nhaving to ignore errors. (You'd use -l/-L to select the CREATE TABLE,\nthe data, etc, but not the ATTACH object.)\n\nThat would possibly come out as a larger patch than you have here,\nbut maybe not by much. 
I don't think there's too much more involved\nthan setting up the proper command strings and calling ArchiveEntry().\nYou'd need to do some testing to verify that cases like --clean\nwork sanely.\n\nAlso, I read the 0002 patch briefly and couldn't make heads or tails\nof it, except that it seemed to be abusing the PQExpBuffer abstraction\nwell beyond anything I'd consider acceptable. If you want separate\nstrings, make a PQExpBuffer for each one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Nov 2020 18:35:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump, ATTACH, and independently restorable child partitions" }, { "msg_contents": "On Wed, Nov 25, 2020 at 06:35:19PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > 1. Maybe pg_restore ExecuteSqlCommandBuf() should (always?) call\n> > ExecuteSimpleCommands() instead of ExecuteSqlCommand(). It doesn't seem to\n> > break anything (although that surprised me).\n> \n> That certainly does break everything, which I imagine is the reason\n> why the cfbot shows that this patch is failing the pg_upgrade tests.\n\nThanks for looking, I have tried this.\n\n> What we'd need to do if we want this to work with direct-to-DB restore\n> is to split off the ATTACH PARTITION command as a separate TOC entry.\n> That doesn't seem amazingly difficult, and it would even offer the\n> possibility that you could extract the partition standalone without\n> having to ignore errors. 
(You'd use -l/-L to select the CREATE TABLE,\n> the data, etc, but not the ATTACH object.)\n\n\n-- \nJustin", "msg_date": "Wed, 2 Dec 2020 16:50:13 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump, ATTACH, and independently restorable child partitions" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> [ v4-0001-pg_dump-output-separate-object-for-ALTER-TABLE.AT.patch ]\n\nThe cfbot is being picky about this:\n\n3218pg_dump.c: In function ‘dumpTableAttach’:\n3219pg_dump.c:15600:42: error: suggest parentheses around comparison in operand of ‘&’ [-Werror=parentheses]\n3220 if (attachinfo->partitionTbl->dobj.dump & DUMP_COMPONENT_DEFINITION == 0)\n3221 ^\n3222cc1: all warnings being treated as errors\n\nwhich if I've got the precedence straight is indeed a bug.\n\nPersonally I'd probably write\n\n if (!(attachinfo->partitionTbl->dobj.dump & DUMP_COMPONENT_DEFINITION))\n\nas it seems like a boolean test to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Dec 2020 12:13:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump, ATTACH, and independently restorable child partitions" }, { "msg_contents": "On Fri, Dec 04, 2020 at 12:13:05PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > [ v4-0001-pg_dump-output-separate-object-for-ALTER-TABLE.AT.patch ]\n> \n> The cfbot is being picky about this:\n> \n> 3218pg_dump.c: In function ‘dumpTableAttach’:\n> 3219pg_dump.c:15600:42: error: suggest parentheses around comparison in operand of ‘&’ [-Werror=parentheses]\n> 3220 if (attachinfo->partitionTbl->dobj.dump & DUMP_COMPONENT_DEFINITION == 0)\n> 3221 ^\n> 3222cc1: all warnings being treated as errors\n> \n> which if I've got the precedence straight is indeed a bug.\n\nOops - from a last-minute edit.\nI missed it due to cfbot being slow, and clogged up with duplicate entries.\nThis also adds/updates comments.\n\n-- 
\nJustin", "msg_date": "Fri, 4 Dec 2020 12:02:31 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump, ATTACH, and independently restorable child partitions" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> [ v5-0001-pg_dump-output-separate-object-for-ALTER-TABLE.AT.patch ]\n\nPushed with mostly cosmetic edits.\n\nOne thing I changed that isn't cosmetic is that I set the ArchiveEntry's\nowner to be the owner of the child table. Although we aren't going to\ndo any sort of ALTER OWNER on this, it's still important that the owner\nbe marked as someone who has the right permissions to issue the ALTER.\nThe default case is that the original user will issue the ATTACH, which\nbasically only works if you run the restore as superuser. It looks to\nme like you copied this decision from the INDEX ATTACH code, which is\njust as broken --- I'm going to go fix/backpatch that momentarily.\n\nAnother thing that bothers me still is that it's not real clear that\nthis code plays nicely with selective dumps, because it's not doing\nanything to set the dobj.dump field in a non-default way (which in\nturn means that the dobj.dump test in dumpTableAttach can never fire).\nIt seems like it might be possible to emit a TABLE ATTACH object\neven though one or both of the referenced tables didn't get dumped.\nIn some desultory testing I couldn't get that to actually happen, but\nmaybe I just didn't push on it in the right way. 
I'd be happier about\nthis if we set the flags with something along the lines of\n\n\tattachinfo->dobj.dump = attachinfo->parentTbl->dobj.dump &\n\t\t\t\tattachinfo->partitionTbl->dobj.dump;\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jan 2021 21:28:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump, ATTACH, and independently restorable child partitions" }, { "msg_contents": "On Wed, Nov 25, 2020 at 06:35:19PM -0500, Tom Lane wrote:\n> What we'd need to do if we want this to work with direct-to-DB restore \n> is to split off the ATTACH PARTITION command as a separate TOC entry. \n> That doesn't seem amazingly difficult, and it would even offer the \n> possibility that you could extract the partition standalone without \n> having to ignore errors. (You'd use -l/-L to select the CREATE TABLE, \n> the data, etc, but not the ATTACH object.) \n\nOn Mon, Jan 11, 2021 at 09:28:18PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > [ v5-0001-pg_dump-output-separate-object-for-ALTER-TABLE.AT.patch ]\n> \n> Pushed with mostly cosmetic edits.\n\nThanks for pushing (9a4c0e36f).\n\nShould this be included in the release notes ?\n\nIt's a user-visible change visible in pg_restore -l. Someone might be\nsurprised that the attach \"object\" needs to be included for restore -L to\nbehave the same as it use to.\n\n--\n-- Name: cdrs_2021_08_22; Type: TABLE ATTACH; Schema: child; Owner: telsasoft\n--\n\n7949; 1259 1635139558 TABLE child cdrs_2021_08_24 telsasoft\n62164; 0 0 TABLE ATTACH child cdrs_2021_08_24 telsasoft\n; depends on: 7949\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 30 Aug 2021 10:42:49 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump, ATTACH, and independently restorable child partitions" } ]
[ { "msg_contents": "Am trying to clone postgresql git, getting error\n\nD:\\sridhar>git clone https://git.postgresql.org/git/postgresql.git\nCloning into 'postgresql'...\nremote: Enumerating objects: 806507, done.\nremote: Counting objects: 100% (806507/806507), done.\nremote: Compressing objects: 100% (122861/122861), done.\nerror: RPC failed; curl 18 transfer closed with 3265264 bytes remaining to\nread\nfatal: the remote end hung up unexpectedly\nfatal: early EOF\nfatal: index-pack failed\n\nPlease let me know anything as am doing this for first time\n\nThanks\nSridhar BN", "msg_date": "Fri, 23 Oct 2020 16:39:15 +0530", "msg_from": "Sridhar N Bamandlapally <sridhar.bn1@gmail.com>", "msg_from_op": true, "msg_subject": "git clone failed in windows" }, { "msg_contents": "git clone repository showing failed from Visual studio\n\n[image: git-clone-error.PNG]\n\nPlease let me know is there any issue,\n\nThanks\nSridhar BN\n\n\nOn Fri, Oct 23, 2020 at 4:39 PM Sridhar N Bamandlapally <\nsridhar.bn1@gmail.com> wrote:\n\n> Am trying to clone postgresql git, getting error\n>\n> D:\\sridhar>git clone https://git.postgresql.org/git/postgresql.git\n> Cloning into 'postgresql'...\n> remote: Enumerating objects: 806507, done.\n> remote: Counting objects: 100% (806507/806507), done.\n> remote: Compressing objects: 100% (122861/122861), done.\n> error: RPC failed; curl 18 transfer closed with 3265264 bytes remaining to\n> read\n> fatal: the remote end 
hung up unexpectedly\n> fatal: early EOF\n> fatal: index-pack failed\n>\n> Please let me know anything as am doing this for first time\n>\n> Thanks\n> Sridhar BN\n>\n>\n>", "msg_date": "Fri, 23 Oct 2020 17:55:53 +0530", "msg_from": "Sridhar N Bamandlapally <sridhar.bn1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: git clone failed in windows" }, { "msg_contents": "Hi Sridhar!\n\n> On 23 Oct 2020, at 16:09, Sridhar N Bamandlapally <sridhar.bn1@gmail.com> wrote:\n> \n> Am trying to clone postgresql git, getting error\n> \n> D:\\sridhar>git clone https://git.postgresql.org/git/postgresql.git\n> Cloning into 'postgresql'...\n> remote: Enumerating objects: 806507, done.\n> remote: Counting objects: 100% (806507/806507), done.\n> remote: Compressing objects: 100% (122861/122861), done.\n> error: RPC failed; curl 18 transfer closed with 3265264 bytes remaining to read\n> fatal: the remote end hung up unexpectedly\n> fatal: early EOF\n> fatal: index-pack failed\n\nIt seems like your internet connection is not stable enough.\nAs an alternative you can try to clone https://github.com/postgres/postgres\nIt's synced with the official repository you mentioned and allows you to have your fork for personal branches.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 23 Oct 2020 17:35:23 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: git clone failed in windows" }, { "msg_contents": "On Fri, Oct 23, 2020 at 4:39 PM Sridhar N Bamandlapally\n<sridhar.bn1@gmail.com> wrote:\n>\n> Am trying to clone postgresql git, getting error\n>\n> D:\\sridhar>git clone https://git.postgresql.org/git/postgresql.git\n> Cloning into 'postgresql'...\n> remote: Enumerating objects: 806507, done.\n> remote: Counting objects: 100% (806507/806507), done.\n> remote: Compressing objects: 100% (122861/122861), done.\n> error: RPC failed; curl 18 transfer closed with 3265264 bytes remaining to read\n> fatal: the remote end hung 
up unexpectedly\n> fatal: early EOF\n> fatal: index-pack failed\n>\n\nI have also just tried this and it failed with same error. However, it\nworked when I tried 'git clone\ngit://git.postgresql.org/git/postgresql.git'. I don't know what is the\nissue.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 23 Oct 2020 18:09:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: git clone failed in windows" }, { "msg_contents": "On Fri, Oct 23, 2020 at 1:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Oct 23, 2020 at 4:39 PM Sridhar N Bamandlapally\n> <sridhar.bn1@gmail.com> wrote:\n> >\n> > Am trying to clone postgresql git, getting error\n> >\n> > D:\\sridhar>git clone https://git.postgresql.org/git/postgresql.git\n> > Cloning into 'postgresql'...\n> > remote: Enumerating objects: 806507, done.\n> > remote: Counting objects: 100% (806507/806507), done.\n> > remote: Compressing objects: 100% (122861/122861), done.\n> > error: RPC failed; curl 18 transfer closed with 3265264 bytes remaining\n> to read\n> > fatal: the remote end hung up unexpectedly\n> > fatal: early EOF\n> > fatal: index-pack failed\n> >\n>\n> I have also just tried this and it failed with same error. However, it\n> worked when I tried 'git clone\n> git://git.postgresql.org/git/postgresql.git'. I don't know what is the\n> issue.\n>\n\nIt worked for me with https. Can you try again? It may be that the Varnish\ncache was doing its meditation thing for some reason. 
I can't see anything\nobvious on the system though - nothing in the logs, and the services have\nall been up for days.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 23 Oct 2020 13:51:05 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: git clone failed in windows" }, { "msg_contents": "On Fri, Oct 23, 2020 at 6:21 PM Dave Page <dpage@pgadmin.org> wrote:\n>\n> On Fri, Oct 23, 2020 at 1:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Oct 23, 2020 at 4:39 PM Sridhar N Bamandlapally\n>> <sridhar.bn1@gmail.com> wrote:\n>> >\n>> > Am trying to clone postgresql git, getting error\n>> >\n>> > D:\\sridhar>git clone https://git.postgresql.org/git/postgresql.git\n>> > Cloning into 'postgresql'...\n>> > remote: Enumerating objects: 806507, done.\n>> > remote: Counting objects: 100% (806507/806507), done.\n>> > remote: Compressing objects: 100% (122861/122861), done.\n>> > error: RPC failed; curl 18 transfer closed with 3265264 bytes remaining to read\n>> > fatal: the remote end hung up unexpectedly\n>> > fatal: early EOF\n>> > fatal: index-pack failed\n>> >\n>>\n>> I have also just tried this and it failed with same error. However, it\n>> worked when I tried 'git clone\n>> git://git.postgresql.org/git/postgresql.git'. I don't know what is the\n>> issue.\n>\n>\n> It worked for me with https. 
Can you try again?\n>\n\nThis time it worked but on the third try.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 23 Oct 2020 19:03:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: git clone failed in windows" }, { "msg_contents": "Thanks All,\n\n it worked with\ngit://git.postgresql.org/git/postgresql.git\n\nThanks\nSridhar\n\n\nOn Fri, Oct 23, 2020 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Oct 23, 2020 at 6:21 PM Dave Page <dpage@pgadmin.org> wrote:\n> >\n> > On Fri, Oct 23, 2020 at 1:39 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> On Fri, Oct 23, 2020 at 4:39 PM Sridhar N Bamandlapally\n> >> <sridhar.bn1@gmail.com> wrote:\n> >> >\n> >> > Am trying to clone postgresql git, getting error\n> >> >\n> >> > D:\\sridhar>git clone https://git.postgresql.org/git/postgresql.git\n> >> > Cloning into 'postgresql'...\n> >> > remote: Enumerating objects: 806507, done.\n> >> > remote: Counting objects: 100% (806507/806507), done.\n> >> > remote: Compressing objects: 100% (122861/122861), done.\n> >> > error: RPC failed; curl 18 transfer closed with 3265264 bytes\n> remaining to read\n> >> > fatal: the remote end hung up unexpectedly\n> >> > fatal: early EOF\n> >> > fatal: index-pack failed\n> >> >\n> >>\n> >> I have also just tried this and it failed with same error. However, it\n> >> worked when I tried 'git clone\n> >> git://git.postgresql.org/git/postgresql.git'. I don't know what is the\n> >> issue.\n> >\n> >\n> > It worked for me with https. 
Can you try again?\n> >\n>\n> This time it worked but on the third try.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n", "msg_date": "Fri, 23 Oct 2020 19:20:48 +0530", "msg_from": "Sridhar N Bamandlapally <sridhar.bn1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: git clone failed in windows" } ]
[ { "msg_contents": "Hi!\n\nI'm working on providing smooth failover to a CDC system in an HA cluster.\nCurrently, we do not replicate logical slots, so when we promote a replica, continuation of change data capture (CDC) from the new primary after failover is impossible.\n\nWe cannot start logical replication from an LSN different from the LSN of a slot. And we cannot create a slot on an LSN in the past, particularly before or right after promotion.\n\nThis leads to a massive waste of network bandwidth in our installations, due to the necessity of an initial table sync.\n\nWe are considering using the extension that creates a replication slot with an LSN in the past [0]. I understand that there might be some caveats with logical replication, but I do not see the scale of possible implications of this approach. The user gets an error if WAL is rotated, or waits if the LSN is not reached yet; this seems perfectly fine for us. In most of our cases, when the CDC agent detects failover and goes to the new primary, there are plenty of old WALs to restart CDC.\n\nAre there strong reasons why we do not allow creation of slots with given LSNs, possibly within a narrow LSN range (but wider than just GetXLogInsertRecPtr())?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n[0] https://github.com/x4m/pg_tm_aux/blob/master/pg_tm_aux.c#L74-L77\n\n\n\n", "msg_date": "Fri, 23 Oct 2020 17:30:40 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": true, "msg_subject": "Logical replication from HA cluster" } ]
[ { "msg_contents": "Hello,\n\nI've found a behavior change with pg_class.reltuples on btree index. With only\ninsert activity on a table, when an index is processed, its related reltuples\nis set to 0. Here is a demo script:\n\n -- force index cleanup\n set vacuum_cleanup_index_scale_factor to 0;\n\n drop table if exists t;\n create table t as select i from generate_series(1, 100) i;\n create index t_i on t(i);\n\n -- after index creation its reltuples is correct\n select reltuples from pg_class where relname = 't_i' \n -- result: reltuples | 100\n\n -- vacuum set index reltuples to 0\n vacuum t;\n select reltuples from pg_class where relname = 't_i' \n -- result: reltuples | 0\n\n -- analyze set it back to correct value\n analyze t;\n select reltuples from pg_class where relname = 't_i' \n -- result: reltuples | 100\n\n -- insert + vacuum reset it again to 0\n insert into t values(101);\n vacuum (verbose off, analyze on, index_cleanup on) t;\n select reltuples from pg_class where relname = 't_i' \n -- result: reltuples | 0\n\n -- delete + vacuum set it back to correct value\n delete from t where i=10;\n vacuum (verbose off, analyze on, index_cleanup on) t;\n select reltuples from pg_class where relname = 't_i' \n -- result: reltuples | 100\n\n -- and back to 0 again with insert+vacuum\n insert into t values(102);\n vacuum (verbose off, analyze on, index_cleanup on) t;\n select reltuples from pg_class where relname = 't_i' \n -- result: reltuples | 0\n\nBefore 0d861bbb70, btvacuumpage was adding to relation stats the number of\nremaining lines in the block using:\n\n stats->num_index_tuples += maxoff - minoff + 1;\n\nAfter 0d861bbb70, it is set using new variable nhtidslive:\n\n stats->num_index_tuples += nhtidslive\n\nHowever, nhtidslive is only incremented if callback (IndexBulkDeleteCallback)\nis set, which seems not to be the case on select-only workload.\n\nA naive fix might be to use \"maxoff - minoff + 1\" when callback==NULL.\n\nThoughts?\n\nRegards,\n\n\n", 
"msg_date": "Fri, 23 Oct 2020 17:44:51 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "vacuum -vs reltuples on insert only index" }, { "msg_contents": "On Fri, Oct 23, 2020 at 8:51 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> Before 0d861bbb70, btvacuumpage was adding to relation stats the number of\n> remaining lines in the block using:\n>\n> stats->num_index_tuples += maxoff - minoff + 1;\n>\n> After 0d861bbb70, it is set using new variable nhtidslive:\n>\n> stats->num_index_tuples += nhtidslive\n>\n> However, nhtidslive is only incremented if callback (IndexBulkDeleteCallback)\n> is set, which seems not to be the case on select-only workload.\n\nI agree that that's a bug.\n\n> A naive fix might be to use \"maxoff - minoff + 1\" when callback==NULL.\n\nThe problem with that is that we really should use nhtidslive (or\nsomething like it), and we're not really willing to do the work to get\nthat information when callback==NULL. We could use \"maxoff - minoff +\n1\" in the way you suggest, but that will be only ~30% of what\nnhtidslive would be in pages where deduplication is maximally\neffective (which is not at all uncommon -- you only need about 10 TIDs\nper distinct value for the space savings to saturate like this).\n\nGIN does this for cleanup (but not for delete, which has a real count\navailable):\n\n/*\n * XXX we always report the heap tuple count as the number of index\n * entries. This is bogus if the index is partial, but it's real hard to\n * tell how many distinct heap entries are referenced by a GIN index.\n */\nstats->num_index_tuples = Max(info->num_heap_tuples, 0);\nstats->estimated_count = info->estimated_count;\n\nI suspect that we need to move in this direction within nbtree. I'm a\nbit concerned about the partial index problem, though. 
I suppose maybe\nwe could do it the old way (which won't account for posting list\ntuples) during cleanup as you suggest, but only use the final figure\nwhen it turns out to have been a partial index. For other indexes we\ncould do what GIN does here.\n\nAnybody else have thoughts on this?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 23 Oct 2020 11:10:39 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: vacuum -vs reltuples on insert only index" }, { "msg_contents": "On Fri, Oct 23, 2020 at 11:10 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I suspect that we need to move in this direction within nbtree. I'm a\n> bit concerned about the partial index problem, though. I suppose maybe\n> we could do it the old way (which won't account for posting list\n> tuples) during cleanup as you suggest, but only use the final figure\n> when it turns out to have been a partial index. For other indexes we\n> could do what GIN does here.\n\nActually, it seems better to always count num_index_tuples the old way\nduring cleanup-only index VACUUMs, despite the inaccuracy that that\ncreates with posting list tuples. The inaccuracy is at least a fixed\nand relatively small inaccuracy, since nbtree doesn't have posting\nlist compression or a pending list mechanism (unlike GIN). This\napproach avoids calculating a num_index_tuples value that is less than\nthe number of distinct values in the index, which seems important.\nTaking a more sophisticated approach seems unnecessary, especially\ngiven that we need something that can be backpatched to Postgres 13.\n\nAttached is my proposed fix, which takes this approach. 
I will commit\nthis on Wednesday or Thursday, barring any objections.\n\nThanks\n-- \nPeter Geoghegan", "msg_date": "Mon, 2 Nov 2020 10:03:29 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: vacuum -vs reltuples on insert only index" }, { "msg_contents": "On Mon, Nov 2, 2020 at 10:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is my proposed fix, which takes this approach. I will commit\n> this on Wednesday or Thursday, barring any objections.\n\nJust to be clear: I am not proposing that we set\n'IndexBulkDeleteResult.estimated_count = false' here, even though\nthere is a certain sense in which we now accept an unreliable figure\nin Postgres 13. This is not what GIN does. That approach doesn't seem\nappropriate for nbtree + deduplication, which is much closer to nbtree\nin Postgres 12 than to GIN. I believe that the final num_index_tuples\nvalue (generated during cleanup-only nbtree VACUUM) is in general\nsufficiently reliable to not be treated as an estimate by vacuumlazy.c\n-- the pg_class entry for the index should still be updated in\nupdate_index_statistics().\n\nIn other words, I think that the remaining posting-list related\ninaccuracies are comparable to the existing inaccuracies caused by\nconcurrent page splits during nbtree vacuuming (I describe the problem\nright next to an old comment about that issue, in fact). What we have\nin both cases is an artifact of how the data is physically represented\nand the difficulty it causes us during vacuuming, in certain cases.\nThere are known error bars. 
That's why we shouldn't treat\nnum_index_tuples as merely an estimate.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 2 Nov 2020 12:06:17 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: vacuum -vs reltuples on insert only index" }, { "msg_contents": "Just one more postscript...\n\nOn Mon, Nov 2, 2020 at 12:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Just to be clear: I am not proposing that we set\n> 'IndexBulkDeleteResult.estimated_count = false' here\n\nI meant 'IndexBulkDeleteResult.estimated_count = true'. So my patch\ndoesn't touch that field at all.\n\n> In other words, I think that the remaining posting-list related\n> inaccuracies are comparable to the existing inaccuracies caused by\n> concurrent page splits during nbtree vacuuming (I describe the problem\n> right next to an old comment about that issue, in fact).\n\nI meant the inaccuracies that remain *once my patch is committed*.\n(Clearly the current behavior of setting pg_class.reltuples to zero\nduring cleanup-only vacuuming is a bug.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 2 Nov 2020 12:19:58 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: vacuum -vs reltuples on insert only index" }, { "msg_contents": "On Mon, Nov 2, 2020 at 10:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Actually, it seems better to always count num_index_tuples the old way\n> during cleanup-only index VACUUMs, despite the inaccuracy that that\n> creates with posting list tuples.\n\nPushed a fix for this just now.\n\nThanks for the report!\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 4 Nov 2020 18:44:03 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: vacuum -vs reltuples on insert only index" }, { "msg_contents": "On Wed, 4 Nov 2020 18:44:03 -0800\nPeter Geoghegan <pg@bowt.ie> wrote:\n\n> On Mon, Nov 2, 2020 at 10:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Actually, it 
seems better to always count num_index_tuples the old way\n> > during cleanup-only index VACUUMs, despite the inaccuracy that that\n> > creates with posting list tuples. \n> \n> Pushed a fix for this just now.\n> \n> Thanks for the report!\n\nSorry I couldn't give some more feedback on your thoughts on time...\n\nThank you for your investigation and fix!\n\nRegards,\n\n\n", "msg_date": "Mon, 9 Nov 2020 15:50:49 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: vacuum -vs reltuples on insert only index" } ]
[ { "msg_contents": "I've been wondering recently why the external canonical form of types\nlike char and varchar doesn't match the typname in pg_type.\nAdditionally, the alternative/extended names are hardcoded in\nformat_type.c rather than being an additional column in that catalog\ntable.\n\nI would have assumed there were largely historical reasons for this,\nbut I see the following relevant comments in that file:\n\n/*\n* See if we want to special-case the output for certain built-in types.\n* Note that these special cases should all correspond to special\n* productions in gram.y, to ensure that the type name will be taken as a\n* system type, not a user type of the same name.\n*\n* If we do not provide a special-case output here, the type name will be\n* handled the same way as a user type name --- in particular, it will be\n* double-quoted if it matches any lexer keyword. This behavior is\n* essential for some cases, such as types \"bit\" and \"char\".\n*/\n\nBut I'm not following what would actually break if it weren't done\nthis way. Is the issue that a user defined type (in a different\nschema, perhaps?) could overshadow the system type?\n\nAnd would it make more sense (though I'm not volunteering right now to\nwrite such a patch :D) to have these names be an additional column on\npg_type so that they can be queried by the user?\n\nThanks,\nJames\n\n\n", "msg_date": "Fri, 23 Oct 2020 14:06:32 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "[var]char versus character [varying]" }, { "msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> I've been wondering recently why the external canonical form of types\n> like char and varchar doesn't match the typname in pg_type.\n\nMostly because the SQL standard wants certain spellings, some of\nwhich aren't even single words (e.g. DOUBLE PRECISION). 
There\nare cases where we could have changed internal names to match up\nwith the spec name, but that won't work for all cases, and people\nhave some attachment to the existing names anyway.\n\n> But I'm not following what would actually break if it weren't done\n> this way. Is the issue that a user defined type (in a different\n> schema, perhaps?) could overshadow the system type?\n\nThat's one thing, and the rules about typmods are another. For\ninstance the spec says that BIT without any other decoration means\nBIT(1), so that we have this:\n\nregression=# select '111'::bit;\n bit \n-----\n 1\n(1 row)\n\nversus\n\nregression=# select '111'::\"bit\";\n bit \n-----\n 111\n(1 row)\n\nThe latter means \"bit without any length constraint\", which is\nsomething the spec doesn't actually support. So when we have\nbit with typmod -1, we must spell it \"bit\" with quotes.\n\n> And would it make more sense (though I'm not volunteering right now to\n> write such a patch :D) to have these names be an additional column on\n> pg_type so that they can be queried by the user?\n\nNot particularly, because some of these types actually have several\ndifferent spec-approved spellings, eg VARCHAR, CHAR VARYING,\nCHARACTER VARYING are all in the standard.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 14:21:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [var]char versus character [varying]" } ]
[ { "msg_contents": "Hi\n\n\n\nI noticed that when casting a string to boolean value with input 'of' it still cast it to 'f'. I think with 'of', it should give an error because 'off' is the expected candidate. This may not be intended so I made a simple patch to address this. \n\n\n```\n\npostgres=# select cast('of' as boolean);\n\n bool \n\n------\n\n f\n\n(1 row)\n\n```\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca", "msg_date": "Fri, 23 Oct 2020 16:56:58 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "minor problem in boolean cast" }, { "msg_contents": "Cary Huang <cary.huang@highgo.ca> writes:\n> I noticed that when casting a string to boolean value with input 'of' it still cast it to 'f'. I think with 'of', it should give an error because 'off' is the expected candidate. This may not be intended so I made a simple patch to address this. \n\nIt's absolutely intended, and documented:\n\nhttps://www.postgresql.org/docs/devel/datatype-boolean.html\n\nNote the bit about \"Unique prefixes of these strings are also accepted\".\n\nThe code comment just above parse_bool() says the same.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Oct 2020 20:04:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: minor problem in boolean cast" } ]
[ { "msg_contents": "Hi\n\nI found the comment of function get_attgenerated(Oid relid, AttrNumber attnum) seems wrong.\nIt seems the function is given the attribute number not the name.\n\n/*\n * get_attgenerated\n *\n- *\t\tGiven the relation id and the attribute name,\n+ *\t\tGiven the relation id and the attribute number,\n *\t\treturn the \"attgenerated\" field from the attribute relation.\n *\n *\t\tErrors if not found.\n\nBest regards,\nhouzj", "msg_date": "Sun, 25 Oct 2020 01:22:55 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix typo in src/backend/utils/cache/lsyscache.c" }, { "msg_contents": "On Sun, 25 Oct 2020 at 14:23, Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> I found the comment of function get_attgenerated(Oid relid, AttrNumber attnum) seems wrong.\n> It seems the function is given the attribute number not the name.\n\nThanks. Pushed.\n\nDavid\n\n\n", "msg_date": "Sun, 25 Oct 2020 22:42:08 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix typo in src/backend/utils/cache/lsyscache.c" } ]
[ { "msg_contents": "Hello,\n\nA french user recently complained that with an index created using\ngin_trgm_ops (or gist_trgm_ops), you can use the index with a clause\nlike\n\ncol LIKE 'something'\n\nbut not\n\ncol = 'something'\n\neven though both clauses are technically identical. That's clearly\nnot a high priority thing to support, but looking at the code it seems\nto me that this could be achieved quite simply: just adding a new\noperator = in the opclass, with an operator strategy number that falls\nback doing exactly what LikeStrategyNumber is doing and that's it.\nThere shouldn't be any wrong results, even using wildcards as the\nrecheck will remove any incorrect one.\n\nDid I miss something? And if not would such a patch be welcome?\n\n\n", "msg_date": "Sun, 25 Oct 2020 19:32:29 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> A french user recently complained that with an index created using\n> gin_trgm_ops (or gist_trgm_ops), you can use the index with a clause\n> like\n> col LIKE 'something'\n> but not\n> col = 'something'\n\nHuh, I'd supposed we did that already.\n\n> even though both clauses are technically identical. That's clearly\n> not a high priority thing to support, but looking at the code it seems\n> to me that this could be achieved quite simply: just adding a new\n> operator = in the opclass, with an operator strategy number that falls\n> back doing exactly what LikeStrategyNumber is doing and that's it.\n> There shouldn't be any wrong results, even using wildcards as the\n> recheck will remove any incorrect one.\n\nI think you may be overoptimistic about being able to use the identical\ncode path without regard for LIKE wildcards; but certainly it should be\npossible to do this with not a lot of new code. 
+1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 25 Oct 2020 17:03:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Mon, Oct 26, 2020 at 5:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > A french user recently complained that with an index created using\n> > gin_trgm_ops (or gist_trgm_ops), you can use the index with a clause\n> > like\n> > col LIKE 'something'\n> > but not\n> > col = 'something'\n>\n> Huh, I'd supposed we did that already.\n>\n> > even though both clauses are technically identical. That's clearly\n> > not a high priority thing to support, but looking at the code it seems\n> > to me that this could be achieved quite simply: just adding a new\n> > operator = in the opclass, with an operator strategy number that falls\n> > back doing exactly what LikeStrategyNumber is doing and that's it.\n> > There shouldn't be any wrong results, even using wildcards as the\n> > recheck will remove any incorrect one.\n>\n> I think you may be overoptimistic about being able to use the identical\n> code path without regard for LIKE wildcards; but certainly it should be\n> possible to do this with not a lot of new code. +1.\n\nWell, that's what I was thinking too, but I tried all the possible\nwildcard combinations I could think of and I couldn't find any case\nyielding wrong results. As far as I can see the index scans return at\nleast all the required rows, and all extraneous rows are correctly\nremoved either by heap recheck or index recheck.\n\nI'm attaching a POC patch with regression tests covering those\ncombinations. 
I also found a typo in the 1.4--1.5 pg_trgm upgrade\nscript, so I'm also attaching a patch for that.", "msg_date": "Mon, 26 Oct 2020 12:02:46 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Mon, Oct 26, 2020 at 12:02 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Oct 26, 2020 at 5:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Julien Rouhaud <rjuju123@gmail.com> writes:\n> > > A french user recently complained that with an index created using\n> > > gin_trgm_ops (or gist_trgm_ops), you can use the index with a clause\n> > > like\n> > > col LIKE 'something'\n> > > but not\n> > > col = 'something'\n> >\n> > Huh, I'd supposed we did that already.\n> >\n> > > even though both clauses are technically identical. That's clearly\n> > > not a high priority thing to support, but looking at the code it seems\n> > > to me that this could be achieved quite simply: just adding a new\n> > > operator = in the opclass, with an operator strategy number that falls\n> > > back doing exactly what LikeStrategyNumber is doing and that's it.\n> > > There shouldn't be any wrong results, even using wildcards as the\n> > > recheck will remove any incorrect one.\n> >\n> > I think you may be overoptimistic about being able to use the identical\n> > code path without regard for LIKE wildcards; but certainly it should be\n> > possible to do this with not a lot of new code. +1.\n>\n> Well, that's what I was thinking too, but I tried all the possible\n> wildcard combinations I could think of and I couldn't find any case\n> yielding wrong results. As far as I can see the index scans return at\n> least all the required rows, and all extraneous rows are correctly\n> removed either by heap recheck or index recheck.\n>\n> I'm attaching a patch POC pach with regression tests covering those\n> combinations. 
I also found a typo in the 1.4--1.5 pg_trgm upgrade\n> script, so I'm also attaching a patch for that.\n\nOops, I forgot to git-add the 1.5--1.6.sql upgrade script in the previous patch.", "msg_date": "Mon, 26 Oct 2020 12:10:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, Oct 26, 2020 at 5:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think you may be overoptimistic about being able to use the identical\n>> code path without regard for LIKE wildcards; but certainly it should be\n>> possible to do this with not a lot of new code. +1.\n\n> Well, that's what I was thinking too, but I tried all the possible\n> wildcard combinations I could think of and I couldn't find any case\n> yielding wrong results. As far as I can see the index scans return at\n> least all the required rows, and all extraneous rows are correctly\n> removed either by heap recheck or index recheck.\n\nBut \"does it get the right answers\" isn't the only figure of merit.\nIf the index scan visits far more rows than necessary, that's bad.\nMaybe it's OK given that we only make trigrams from alphanumerics,\nbut I'm not quite sure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Oct 2020 00:19:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Mon, Oct 26, 2020 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Mon, Oct 26, 2020 at 5:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I think you may be overoptimistic about being able to use the identical\n> >> code path without regard for LIKE wildcards; but certainly it should be\n> >> possible to do this with not a lot of new code. 
+1.\n>\n> > Well, that's what I was thinking too, but I tried all the possible\n> > wildcard combinations I could think of and I couldn't find any case\n> > yielding wrong results. As far as I can see the index scans return at\n> > least all the required rows, and all extraneous rows are correctly\n> > removed either by heap recheck or index recheck.\n>\n> But \"does it get the right answers\" isn't the only figure of merit.\n> If the index scan visits far more rows than necessary, that's bad.\n> Maybe it's OK given that we only make trigrams from alphanumerics,\n> but I'm not quite sure.\n\nAh, yes this might lead to bad performance if the \"fake wildcard\"\nmatches too many rows, but this shouldn't be a very common use case,\nand the only alternative for that might be to create trigrams for non\nalphanumerics characters. I didn't try to do that because it would\nmean meaningful overhead for mainstream usage of pg_trgm, and would\nalso mean on-disk format break. In my opinion supporting = should be\na best effort, especially for such corner cases.\n\n\n", "msg_date": "Mon, 26 Oct 2020 12:38:08 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Mon, Oct 26, 2020 at 7:38 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Ah, yes this might lead to bad performance if the \"fake wildcard\"\n> matches too many rows, but this shouldn't be a very common use case,\n> and the only alternative for that might be to create trigrams for non\n> alphanumerics characters. I didn't try to do that because it would\n> mean meaningful overhead for mainstream usage of pg_trgm, and would\n> also mean on-disk format break. In my opinion supporting = should be\n> a best effort, especially for such corner cases.\n\nIt would be more efficient to generate trigrams for equal operator\nusing generate_trgm() instead of generate_wildcard_trgm(). 
In some\ncases it would generate more trigrams. For instance generate_trgm()\nwould generate '__a', '_ab', 'ab_' for '%ab%' while\ngenerate_wildcard_trgm() would generate nothing.\n\nAlso I wonder how our costing would work if there are multiple indices\nof the same column. We should clearly prefer btree than pg_trgm\ngist/gin, and I believe our costing provides this. But we also should\nprefer btree_gist/btree_gin than pg_trgm gist/gin, and I'm not sure\nour costing provides this especially for gist.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 26 Oct 2020 20:50:05 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHi,\r\n\r\nthis patch implements a useful and missing feature. Thank you.\r\n\r\nIt includes documentation, which to a non-native speaker as myself seems appropriate.\r\nIt includes comprehensive tests that cover the implemented cases.\r\n\r\nIn the thread Alexander has pointed out, quote:\r\n\"It would be more efficient to generate trigrams for equal operator\r\nusing generate_trgm() instead of generate_wildcard_trgm()\"\r\n\r\nI will echo the sentiment, though from a slightly different and possibly not\r\nas important point of view. The method used to extract trigrams from the query\r\nshould match the method used to extract trigrams from the values when they\r\nget added to the index. 
This is gin_extract_value_trgm() and is indeed using\r\ngenerate_trgm().\r\n\r\nI have no opinion over Alexander's second comment regarding costing.\r\n\r\nI change the status to 'Waiting on Author', but please feel free to override\r\nmy opinion if you feel I am wrong and reset it to 'Needs review'.\r\n\r\nCheers,\r\n//Georgios\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Wed, 11 Nov 2020 12:33:11 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Wed, Nov 11, 2020 at 8:34 PM Georgios Kokolatos\n<gkokolatos@protonmail.com> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> Hi,\n>\n> this patch implements a useful and missing feature. Thank you.\n>\n> It includes documentation, which to a non-native speaker as myself seems appropriate.\n> It includes comprehensive tests that cover the implemented cases.\n>\n> In the thread Alexander has pointed out, quote:\n> \"It would be more efficient to generate trigrams for equal operator\n> using generate_trgm() instead of generate_wildcard_trgm()\"\n>\n> I will echo the sentiment, though from a slightly different and possibly not\n> as important point of view. The method used to extract trigrams from the query\n> should match the method used to extract trigrams from the values when they\n> get added to the index. This is gin_extract_value_trgm() and is indeed using\n> generate_trgm().\n>\n> I have no opinion over Alexander's second comment regarding costing.\n>\n> I change the status to 'Waiting on Author', but please feel free to override\n> my opinion if you feel I am wrong and reset it to 'Needs review'.\n\nThanks for the reminder Georgios! 
Thanks a lot Alexander for the review!\n\nIndeed, I should have used generate_trgm() rather than\ngenerate_wildcard_trgm(). IIUC, the rest of the code should still be\ndoing the same as [I]LikeStrategyNumber. I attach a v3 with that\nmodification.\n\nFor the costing, I tried this naive dataset:\n\nCREATE TABLE t1 AS select md5(random()::text) AS val from\ngenerate_series(1, 100000);\nCREATE INDEX t1_btree ON t1 (val);\nCREATE INDEX t1_gist ON t1 USING gist (val gist_trgm_ops);\n\nCost are like this (all default configuration, using any random existing entry):\n\n# EXPLAIN ANALYZE SELECT * FROM t1 where val =\n'8dcf324ce38428e4d27a363953ac1c51';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Index Only Scan using t1_btree on t1 (cost=0.42..4.44 rows=1\nwidth=33) (actual time=0.192..0.194 rows=1 loops=1)\n Index Cond: (val = '8dcf324ce38428e4d27a363953ac1c51'::text)\n Heap Fetches: 0\n Planning Time: 0.133 ms\n Execution Time: 0.222 ms\n(5 rows)\n\n# EXPLAIN ANALYZE SELECT * FROM t1 where val =\n'8dcf324ce38428e4d27a363953ac1c51';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------\n Index Scan using t1_gist on t1 (cost=0.28..8.30 rows=1 width=33)\n(actual time=0.542..2.359 rows=1 loops=1)\n Index Cond: (val = '8dcf324ce38428e4d27a363953ac1c51'::text)\n Planning Time: 0.189 ms\n Execution Time: 2.382 ms\n(4 rows)\n\n# EXPLAIN ANALYZE SELECT * FROM t1 where val =\n'8dcf324ce38428e4d27a363953ac1c51';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t1 (cost=400.01..404.02 rows=1 width=33) (actual\ntime=2.486..2.488 rows=1 loops=1)\n Recheck Cond: (val = '8dcf324ce38428e4d27a363953ac1c51'::text)\n Heap Blocks: exact=1\n -> Bitmap Index Scan on t1_gin (cost=0.00..400.01 rows=1 width=0)\n(actual time=2.474..2.474 
rows=1 loops=1)\n Index Cond: (val = '8dcf324ce38428e4d27a363953ac1c51'::text)\n Planning Time: 0.206 ms\n Execution Time: 2.611 ms\n\nSo assuming that this dataset is representative enough, costing indeed\nprefers a btree index over a gist/gin index, which should avoid\nregression with this change.\n\nGin is however quite off, likely because it's a bitmap index scan\nrather than an index scan, so gist is preferred in this scenario.\nThat's not ideal, but I'm not sure that there are many people having\nboth gin_trgm_ops and gist_trgm_ops.", "msg_date": "Fri, 13 Nov 2020 17:50:09 +0800", "msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Friday, November 13, 2020 10:50 AM, Julien Rouhaud <julien.rouhaud@free.fr> wrote:\n\n> On Wed, Nov 11, 2020 at 8:34 PM Georgios Kokolatos\n> gkokolatos@protonmail.com wrote:\n>\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, passed\n> > Spec compliant: not tested\n> > Documentation: not tested\n> > Hi,\n> > this patch implements a useful and missing feature. Thank you.\n> > It includes documentation, which to a non-native speaker as myself seems appropriate.\n> > It includes comprehensive tests that cover the implemented cases.\n> > In the thread Alexander has pointed out, quote:\n> > \"It would be more efficient to generate trigrams for equal operator\n> > using generate_trgm() instead of generate_wildcard_trgm()\"\n> > I will echo the sentiment, though from a slightly different and possibly not\n> > as important point of view. The method used to extract trigrams from the query\n> > should match the method used to extract trigrams from the values when they\n> > get added to the index. 
This is gin_extract_value_trgm() and is indeed using\n> > generate_trgm().\n> > I have no opinion over Alexander's second comment regarding costing.\n> > I change the status to 'Waiting on Author', but please feel free to override\n> > my opinion if you feel I am wrong and reset it to 'Needs review'.\n>\n> Thanks for the reminder Georgios! Thanks a lot Alexander for the review!\n>\n> Indeed, I should have used generate_trgm() rather than\n> generate_wildcard_trgm(). IIUC, the rest of the code should still be\n> doing the same as [I]LikeStrategyNumber. I attach a v3 with that\n> modification.\n\nGreat, thanks!\n\nv3 looks good.\n\n>\n> For the costing, I tried this naive dataset:\n>\n> CREATE TABLE t1 AS select md5(random()::text) AS val from\n> generate_series(1, 100000);\n> CREATE INDEX t1_btree ON t1 (val);\n> CREATE INDEX t1_gist ON t1 USING gist (val gist_trgm_ops);\n>\n> Cost are like this (all default configuration, using any random existing entry):\n>\n> EXPLAIN ANALYZE SELECT * FROM t1 where val =\n>\n> =============================================\n>\n> '8dcf324ce38428e4d27a363953ac1c51';\n> QUERY PLAN\n>\n> -----------------------------------------------\n>\n> Index Only Scan using t1_btree on t1 (cost=0.42..4.44 rows=1\n> width=33) (actual time=0.192..0.194 rows=1 loops=1)\n> Index Cond: (val = '8dcf324ce38428e4d27a363953ac1c51'::text)\n> Heap Fetches: 0\n> Planning Time: 0.133 ms\n> Execution Time: 0.222 ms\n> (5 rows)\n>\n> EXPLAIN ANALYZE SELECT * FROM t1 where val =\n>\n> =============================================\n>\n> '8dcf324ce38428e4d27a363953ac1c51';\n> QUERY PLAN\n>\n> -----------------------------------------------\n>\n> Index Scan using t1_gist on t1 (cost=0.28..8.30 rows=1 width=33)\n> (actual time=0.542..2.359 rows=1 loops=1)\n> Index Cond: (val = '8dcf324ce38428e4d27a363953ac1c51'::text)\n> Planning Time: 0.189 ms\n> Execution Time: 2.382 ms\n> (4 rows)\n>\n> EXPLAIN ANALYZE SELECT * FROM t1 where val =\n>\n> 
=============================================\n>\n> '8dcf324ce38428e4d27a363953ac1c51';\n> QUERY PLAN\n>\n> -----------------------------------------------\n>\n> Bitmap Heap Scan on t1 (cost=400.01..404.02 rows=1 width=33) (actual\n> time=2.486..2.488 rows=1 loops=1)\n> Recheck Cond: (val = '8dcf324ce38428e4d27a363953ac1c51'::text)\n> Heap Blocks: exact=1\n> -> Bitmap Index Scan on t1_gin (cost=0.00..400.01 rows=1 width=0)\n> (actual time=2.474..2.474 rows=1 loops=1)\n> Index Cond: (val = '8dcf324ce38428e4d27a363953ac1c51'::text)\n> Planning Time: 0.206 ms\n> Execution Time: 2.611 ms\n>\n> So assuming that this dataset is representative enough, costing indeed\n> prefers a btree index over a gist/gin index, which should avoid\n> regression with this change.\n\nIt sounds reasonable, although I would leave it to a bit more cost savvy\npeople to argue the point.\n\n>\n> Gin is however quite off, likely because it's a bitmap index scan\n> rather than an index scan, so gist is preferred in this scenario.\n> That's not ideal, but I'm not sure that there are many people having\n> both gin_trgm_ops and gist_trgm_ops.\n\nSame as above.\n\nIn short, I think v3 of the patch looks good to change to 'RFC' status.\nGiven the possible costing concerns, I will refrain from changing the\nstatus just yet, to give the opportunity to more reviewers to chime in.\nIf in the next few days there are no more reviews, I will update the\nstatus to 'RFC' to move the patch forward.\n\nThoughts?\n\nCheers,\n//Georgios\n\n\n", "msg_date": "Fri, 13 Nov 2020 10:47:35 +0000", "msg_from": "Georgios <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "Hi!\n\nOn Fri, Nov 13, 2020 at 1:47 PM Georgios <gkokolatos@protonmail.com> wrote:\n> In short, I think v3 of the patch looks good to change to 'RFC' status.\n> Given the possible costing concerns, I will refrain from changing the\n> status just yet, to give the 
opportunity to more reviewers to chime in.\n> If in the next few days there are no more reviews, I will update the\n> status to 'RFC' to move the patch forward.\n>\n> Thoughts?\n\nI went through and revised this patch. I made the documentation\nstatement less categorical. pg_trgm gist/gin indexes might have lower\nperformance of equality operator search than B-tree. So, we can't\nclaim the B-tree index is always not needed. Also, simple comparison\noperators are <, <=, >, >=, and they are not supported.\n\nI also have checked that btree_gist is preferred over pg_trgm gist\nindex for equality search. Despite our gist cost estimate is quite\ndumb, it selects btree_gist index due to its lower size. So, this\npart also looks good to me.\n\nI'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sat, 14 Nov 2020 08:30:51 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On 2020-11-14 06:30, Alexander Korotkov wrote:\n\n> [v4-0001-Handle-equality...in-contrib-pg_trgm.patch (~]\n> \n> I'm going to push this if no objections.\n> \n\nAbout the sgml, in doc/src/sgml/pgtrgm.sgml :\n\n\nBeginning in <productname>PostgreSQL</productname> 14, these indexes \nalso support equality operator (simple comparison operators are not \nsupported).\n\nshould be:\n\nBeginning in <productname>PostgreSQL</productname> 14, these indexes \nalso support the equality operator (simple comparison operators are not \nsupported).\n\n(added 'the')\n\n\nAnd:\n\nAlthough these indexes might have lower the performance of equality \noperator\nsearch than regular B-tree indexes.\n\nshould be (I think - please check the meaning)\n\nAlthough these indexes might have a lower performance with equality \noperator\nsearch than with regular B-tree indexes.\n\n\nI am not sure I understood this last sentence correctly. 
Does this mean \nthe slower trgm index might be chosen over the faster btree?\n\n\nThanks,\n\nErik Rijkers\n\n\n\n", "msg_date": "Sat, 14 Nov 2020 09:37:09 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "Hi, Erik!\n\nOn Sat, Nov 14, 2020 at 11:37 AM Erik Rijkers <er@xs4all.nl> wrote:\n> On 2020-11-14 06:30, Alexander Korotkov wrote:\n>\n> > [v4-0001-Handle-equality...in-contrib-pg_trgm.patch (~]\n> >\n> > I'm going to push this if no objections.\n> >\n>\n> About the sgml, in doc/src/sgml/pgtrgm.sgml :\n>\n>\n> Beginning in <productname>PostgreSQL</productname> 14, these indexes\n> also support equality operator (simple comparison operators are not\n> supported).\n>\n> should be:\n>\n> Beginning in <productname>PostgreSQL</productname> 14, these indexes\n> also support the equality operator (simple comparison operators are not\n> supported).\n>\n> (added 'the')\n>\n>\n> And:\n>\n> Although these indexes might have lower the performance of equality\n> operator\n> search than regular B-tree indexes.\n>\n> should be (I think - please check the meaning)\n>\n> Although these indexes might have a lower performance with equality\n> operator\n> search than with regular B-tree indexes.\n>\n>\n> I am not sure I understood this last sentence correctly. Does this mean\n> the slower trgm index might be chosen over the faster btree?\n\nThank you for your review.\n\nI mean searching for an equal string with pg_trgm GiST/GIN might be\nslower than the same search with B-tree. If you need both pg_trgm\nsimilarity/pattern search and equal search on your column, you have\nchoice. You can run with a single pg_trgm index, but your search for\nan equal string wouldn't be as fast as with B-tree. Alternatively you\ncan have two indexes: pg_trgm index for similarity/pattern search and\nB-tree index for equality search. 
Second option gives you a fast\nequality search, but the second B-tree index would take extra space\nand maintenance overhead. For equality search, the B-tree index\nshould be chosen by the planner (and that was tested).\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sat, 14 Nov 2020 13:07:22 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Sat, Nov 14, 2020 at 6:07 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> Hi, Erik!\n>\n> On Sat, Nov 14, 2020 at 11:37 AM Erik Rijkers <er@xs4all.nl> wrote:\n> > On 2020-11-14 06:30, Alexander Korotkov wrote:\n> >\n> > > [v4-0001-Handle-equality...in-contrib-pg_trgm.patch (~]\n> > >\n> > > I'm going to push this if no objections.\n> > >\n> >\n> > About the sgml, in doc/src/sgml/pgtrgm.sgml :\n> >\n> >\n> > Beginning in <productname>PostgreSQL</productname> 14, these indexes\n> > also support equality operator (simple comparison operators are not\n> > supported).\n> >\n> > should be:\n> >\n> > Beginning in <productname>PostgreSQL</productname> 14, these indexes\n> > also support the equality operator (simple comparison operators are not\n> > supported).\n> >\n> > (added 'the')\n> >\n> >\n> > And:\n> >\n> > Although these indexes might have lower the performance of equality\n> > operator\n> > search than regular B-tree indexes.\n> >\n> > should be (I think - please check the meaning)\n> >\n> > Although these indexes might have a lower performance with equality\n> > operator\n> > search than with regular B-tree indexes.\n> >\n> >\n> > I am not sure I understood this last sentence correctly. Does this mean\n> > the slower trgm index might be chosen over the faster btree?\n>\n> Thank you for your review.\n>\n> I mean searching for an equal string with pg_trgm GiST/GIN might be\n> slower than the same search with B-tree. 
If you need both pg_trgm\n> similarity/pattern search and equal search on your column, you have\n> choice. You can run with a single pg_trgm index, but your search for\n> an equal string wouldn't be as fast as with B-tree. Alternatively you\n> can have two indexes: pg_trgm index for similarity/pattern search and\n> B-tree index for equality search. Second option gives you a fast\n> equality search, but the second B-tree index would take extra space\n> and maintenance overhead. For equality search, the B-tree index\n> should be chosen by the planner (and that was tested).\n\nThanks everyone for the review, and thanks Alexander for the modifications!\n\nI agree that it's important to document that those indexes may be less\nperformant than btree indexes. I also agree that there's an\nextraneous \"the\" in the documentation. Maybe this rewrite could be\nbetter?\n\n Note that those indexes may not be as afficient as regulat B-tree indexes\n for equality operator.\n\nWhile at it, there's a small grammar error:\n\n case EqualStrategyNumber:\n- /* Wildcard search is inexact */\n+ /* Wildcard and equal search is inexact */\n\nIt should be /* Wildcard and equal search are inexact */\n\n\n", "msg_date": "Sat, 14 Nov 2020 19:53:59 +0800", "msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On 2020-11-14 12:53, Julien Rouhaud wrote:\n> On Sat, Nov 14, 2020 at 6:07 PM Alexander Korotkov \n> <aekorotkov@gmail.com> >\n\n> Note that those indexes may not be as afficient as regulat B-tree \n> indexes\n> for equality operator.\n\n\n'afficient as regulat' should be\n'efficient as regular'\n\n\nSorry to be nitpicking - it's the one thing I'm really good at :P\n\nErik\n\n\n", "msg_date": "Sat, 14 Nov 2020 12:57:56 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On 
Sat, Nov 14, 2020 at 7:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n>\n> On 2020-11-14 12:53, Julien Rouhaud wrote:\n> > On Sat, Nov 14, 2020 at 6:07 PM Alexander Korotkov\n> > <aekorotkov@gmail.com> >\n>\n> > Note that those indexes may not be as afficient as regulat B-tree\n> > indexes\n> > for equality operator.\n>\n>\n> 'afficient as regulat' should be\n> 'efficient as regular'\n>\n>\n> Sorry to be nitpicking - it's the one thing I'm really good at :P\n\nOops indeed :)\n\n\n", "msg_date": "Sun, 15 Nov 2020 01:26:36 +0800", "msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Sat, Nov 14, 2020 at 8:26 PM Julien Rouhaud <julien.rouhaud@free.fr> wrote:\n> On Sat, Nov 14, 2020 at 7:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n> >\n> > On 2020-11-14 12:53, Julien Rouhaud wrote:\n> > > On Sat, Nov 14, 2020 at 6:07 PM Alexander Korotkov\n> > > <aekorotkov@gmail.com> >\n> >\n> > > Note that those indexes may not be as afficient as regulat B-tree\n> > > indexes\n> > > for equality operator.\n> >\n> >\n> > 'afficient as regulat' should be\n> > 'efficient as regular'\n> >\n> >\n> > Sorry to be nitpicking - it's the one thing I'm really good at :P\n>\n> Oops indeed :)\n\nPushed with all the corrections above. 
Thanks!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 15 Nov 2020 08:55:13 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Sun, Nov 15, 2020 at 1:55 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Sat, Nov 14, 2020 at 8:26 PM Julien Rouhaud <julien.rouhaud@free.fr> wrote:\n> > On Sat, Nov 14, 2020 at 7:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > >\n> > > On 2020-11-14 12:53, Julien Rouhaud wrote:\n> > > > On Sat, Nov 14, 2020 at 6:07 PM Alexander Korotkov\n> > > > <aekorotkov@gmail.com> >\n> > >\n> > > > Note that those indexes may not be as afficient as regulat B-tree\n> > > > indexes\n> > > > for equality operator.\n> > >\n> > >\n> > > 'afficient as regulat' should be\n> > > 'efficient as regular'\n> > >\n> > >\n> > > Sorry to be nitpicking - it's the one thing I'm really good at :P\n> >\n> > Oops indeed :)\n>\n> Pushed with all the corrections above. Thanks!\n\nThanks a lot!\n\n\n", "msg_date": "Sun, 15 Nov 2020 15:18:25 +0800", "msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On 2020-11-15 06:55, Alexander Korotkov wrote:\n\n>> > Sorry to be nitpicking - it's the one thing I'm really good at :P\n\nHi Alexander,\n\nThe last touch... (you forgot the missing 'the')\n\nthanks!\n\nErik Rijkers", "msg_date": "Sun, 15 Nov 2020 11:44:23 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Sat, Nov 14, 2020 at 12:31 AM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n>\n> I went through and revised this patch. I made the documentation\n> statement less categorical. pg_trgm gist/gin indexes might have lower\n> performance of equality operator search than B-tree. 
So, we can't\n> claim the B-tree index is always not needed. Also, simple comparison\n> operators are <, <=, >, >=, and they are not supported.\n>\n\nIs \"simple comparison\" here a well-known term of art? If I read the doc as\ncommitted (which doesn't include the sentence above), and if I didn't\nalready know what it was saying, I would be left wondering which\ncomparisons those are. Could we just say \"inequality operators\"?\n\nCheers,\n\nJeff\n", "msg_date": "Sun, 15 Nov 2020 18:13:15 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Mon, Nov 16, 2020 at 2:13 AM Jeff Janes <jeff.janes@gmail.com> wrote:\n> On Sat, Nov 14, 2020 at 12:31 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> I went through and revised this patch. I made the documentation\n>> statement less categorical. pg_trgm gist/gin indexes might have lower\n>> performance of equality operator search than B-tree. So, we can't\n>> claim the B-tree index is always not needed. Also, simple comparison\n>> operators are <, <=, >, >=, and they are not supported.\n>\n> Is \"simple comparison\" here a well-known term of art? 
If I read the doc as committed (which doesn't include the sentence above), and if I didn't already know what it was saying, I would be left wondering which comparisons those are. Could we just say \"inequality operators\"?\n\nYou're right. \"Simple comparison\" is vague, let's replace it with\n\"inequality\". Pushed, thanks!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 16 Nov 2020 09:12:07 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" }, { "msg_contents": "On Sat, 14 Nov 2020 at 18:31, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> I also have checked that btree_gist is preferred over pg_trgm gist\n> index for equality search. Despite our gist cost estimate is quite\n> dumb, it selects btree_gist index due to its lower size. So, this\n> part also looks good to me.\n>\n> I'm going to push this if no objections.\n\n(Reviving old thread [1] due to a complaint from a customer about a\nperformance regression after upgrading PG13 to PG15)\n\nI think the comparisons you made may have been too simplistic. Do you\nrecall what your test case was?\n\nI tried comparing btree to gist with gist_trgm_ops and found that the\ncost estimates for GIST come out cheaper than btree. Btree only wins\nin the most simplistic tests due to Index Only Scans. 
The test done in\n[2] seems to have fallen for that mistake.\n\ncreate extension if not exists pg_trgm;\ncreate table a (a varchar(250), b varchar(250), c varchar(250));\ninsert into a select md5(a::text),md5(a::text),md5(a::text) from\ngenerate_Series(1,1000000)a;\ncreate index a_a_btree on a (a);\ncreate index a_a_gist on a using gist (a gist_trgm_ops);\nvacuum freeze analyze a;\n\n-- Gist index wins\nexplain (analyze, buffers) select * from a where a = '1234';\n\n Index Scan using a_a_gist on a (cost=0.41..8.43 rows=1 width=99)\n Index Cond: ((a)::text = '1234'::text)\n Rows Removed by Index Recheck: 1\n Buffers: shared hit=14477\n Planning Time: 0.055 ms\n Execution Time: 23.861 ms\n(6 rows)\n\n-- hack to disable gist index.\nupdate pg_index set indisvalid = false where indexrelid='a_a_gist'::regclass;\nexplain (analyze, buffers) select * from a where a = '1234';\n\n Index Scan using a_a_btree on a (cost=0.42..8.44 rows=1 width=99)\n Index Cond: ((a)::text = '1234'::text)\n Buffers: shared read=3\n Planning:\n Buffers: shared hit=8\n Planning Time: 0.090 ms\n Execution Time: 0.048 ms (497.1 times faster)\n(7 rows)\n\n-- re-enable gist.\nupdate pg_index set indisvalid = true where indexrelid='a_a_gist'::regclass;\n\n-- try a query that supports btree with index only scan. 
Btree wins.\nexplain (analyze, buffers) select a from a where a = '1234';\n\n Index Only Scan using a_a_btree on a (cost=0.42..4.44 rows=1 width=33)\n Index Cond: (a = '1234'::text)\n Heap Fetches: 0\n Buffers: shared read=3\n Planning Time: 0.185 ms\n Execution Time: 0.235 ms\n(6 rows)\n\n-- Disable IOS and Gist index wins again.\nset enable_indexonlyscan=0;\nexplain (analyze, buffers) select a from a where a = '1234';\n\n Index Scan using a_a_gist on a (cost=0.41..8.43 rows=1 width=33)\n Index Cond: ((a)::text = '1234'::text)\n Rows Removed by Index Recheck: 1\n Buffers: shared hit=11564 read=3811\n Planning Time: 0.118 ms\n Execution Time: 71.270 ms (303.2 times faster)\n(6 rows)\n\nThis does not seem ideal given that the select * with the btree is\n~500 times faster than with the gist plan.\n\nFor now, I've recommended the GIST indexes are moved to another\ntablespace with an increased random_page_cost to try to coax the\nplanner to use the btree index.\n\nI wonder if we can do something to fix this so the different\ntablespace idea isn't the permanent solution. I had a look to see why\nthe GIST costs come out cheaper. It looks like it's the startup cost\ncalculation that's slightly different from the btree costing. The\nattached patch highlights the difference. When applied both indexes\ncome out at the same cost and which one is picked is down to which\nindex has the lower Oid. I've not studied if there's a reason why this\ncode is different in gist.\n\nDavid\n\n[1] https://postgr.es/m/CAPpHfducQ0U8noyb2L3VChsyBMsc5V2Ej2whmEuxmAgHa2jVXg@mail.gmail.com\n[2] https://postgr.es/m/CAOBaU_YkkhakwTG4oA886T4CQsHG5hfY%2BxGA3dTBdZM%2BDTYJWA%40mail.gmail.com", "msg_date": "Tue, 17 Sep 2024 16:42:09 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Supporting = operator in gin/gist_trgm_ops" } ]
[ { "msg_contents": "I find I am allowed to create an ordered-set aggregate with a non-empty\ndirect argument list and no finisher function. Am I right in thinking\nthat's kind of nonsensical, as nothing will ever look at the direct args?\n\nAlso, the syntax summary shows PARALLEL = { SAFE | RESTRICTED | UNSAFE }\nin the ordered-set syntax variant, since 9.6, though that variant\naccepts no combine/serial/deserial functions, and there's also\na note saying \"Partial (including parallel) aggregation is currently\nnot supported for ordered-set aggregates.\"\n\nDoes PARALLEL = { SAFE | RESTRICTED | UNSAFE } on an ordered-set\naggregate affect anything?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 25 Oct 2020 21:32:22 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "A couple questions about ordered-set aggregates" }, { "msg_contents": "On Sun, Oct 25, 2020 at 09:32:22PM -0400, Chapman Flack wrote:\n>I find I am allowed to create an ordered-set aggregate with a non-empty\n>direct argument list and no finisher function. Am I right in thinking\n>that's kind of nonsensical, as nothing will ever look at the direct args?\n>\n>Also, the syntax summary shows PARALLEL = { SAFE | RESTRICTED | UNSAFE }\n>in the ordered-set syntax variant, since 9.6, though that variant\n>accepts no combine/serial/deserial functions, and there's also\n>a note saying \"Partial (including parallel) aggregation is currently\n>not supported for ordered-set aggregates.\"\n>\n>Does PARALLEL = { SAFE | RESTRICTED | UNSAFE } on an ordered-set\n>aggregate affect anything?\n>\n\nI may be missing something, but I believe PARALLEL = SAFE simply means\nit can be executed in the parallel part of the plan. 
That does not\nrequire support for partial aggregation - we simply don't support\npassing partial results to the leader, hence combine/serial/deserial\nfunctions are not needed.\n\nNot sure about the direct arguments.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 31 Oct 2020 00:49:48 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: A couple questions about ordered-set aggregates" } ]
[ { "msg_contents": "During the discussion on dynamic result sets[0], it became apparent that \nthe current way binary results are requested in the extended query \nprotocol is too cumbersome for some practical uses, and keeping that \nstyle around would also make the proposed protocol extensions very \ncomplicated.\n\nThe premise here is that a client library has hard-coded knowledge on \nhow to deal with binary format for certain, but not all, data types. \n(Most client libraries process everything in text, and some client \nlibraries process everything in binary. Neither of these extremes are \nof concern here.) Such a client always has to request a result row \ndescription (Describe statement) before sending a Bind message, in order \nto be able to pick out the result columns it should request in binary. 
There is no support for sending zero result format codes to \nmake the session default apply. I enabled this by allowing -1 to be \npassed as the format code. I'm not sure if we want to make this part of \nthe official API, but it would be useful to have something like this \nsomehow.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/6e747f98-835f-2e05-cde5-86ee444a7140%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 26 Oct 2020 09:31:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "default result formats setting" }, { "msg_contents": "po 26. 10. 2020 v 9:31 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> During the discussion on dynamic result sets[0], it became apparent that\n> the current way binary results are requested in the extended query\n> protocol is too cumbersome for some practical uses, and keeping that\n> style around would also make the proposed protocol extensions very\n> complicated.\n>\n> The premise here is that a client library has hard-coded knowledge on\n> how to deal with binary format for certain, but not all, data types.\n> (Most client libraries process everything in text, and some client\n> libraries process everything in binary. Neither of these extremes are\n> of concern here.) 
Such a client always has to request a result row\n> description (Describe statement) before sending a Bind message, in order\n> to be able to pick out the result columns in should request in binary.\n> The feedback was that this extra round trip is often not worth it in\n> terms of performance, and so it is not done and binary format is not\n> used when it could be.\n>\n> The conceptual solution is to allow a client to register for a session\n> which types it wants to always get in binary, unless it says otherwise.\n> In the discussion in [0], I pondered a new protocol message for that,\n> but after further thought, a GUC setting would do just as well.\n>\n> The attached patch implements this. For example, to get int2, int4,\n> int8 in binary by default, you could set\n>\n> SET default_result_formats = '21=1,23=1,20=1';\n>\n\nUsing SET statement for this case looks very obscure :/\n\nThis is a protocol related issue, and should be solved by protocol\nextending. I don't think so SQL level is good for that.\n\nMore, this format is not practical for custom types, and the list can be\npretty long.\n\n\n> This is a list of oid=format pairs.\n>\n> I think this format satisfies the current requirements of the JDBC\n> driver. But the format could also be extended in the future to allow\n> type names to be listed or some other ways of identifying the types.\n>\n> In order to be able to test this via libpq, I had to add a little hack.\n> Currently, PQexecParams() and similar functions can only pass exactly\n> one result format code, which per protocol is then applied to all result\n> columns. There is no support for sending zero result format codes to\n> make the session default apply. I enabled this by allowing -1 to be\n> passed as the format code. I'm not sure if we want to make this part of\n> the official API, but it would be useful to have something like this\n> somehow.\n>\n\n+1 to this feature, but -1 for design. 
It should be solved on protocol\nlevel.\n\nRegards\n\nPavel\n\n>\n>\n> [0]:\n>\n> https://www.postgresql.org/message-id/flat/6e747f98-835f-2e05-cde5-86ee444a7140%402ndquadrant.com\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\npo 26. 10. 2020 v 9:31 odesílatel Peter Eisentraut <peter.eisentraut@2ndquadrant.com> napsal:During the discussion on dynamic result sets[0], it became apparent that \nthe current way binary results are requested in the extended query \nprotocol is too cumbersome for some practical uses, and keeping that \nstyle around would also make the proposed protocol extensions very \ncomplicated.\n\nThe premise here is that a client library has hard-coded knowledge on \nhow to deal with binary format for certain, but not all, data types. \n(Most client libraries process everything in text, and some client \nlibraries process everything in binary.  Neither of these extremes are \nof concern here.)  Such a client always has to request a result row \ndescription (Describe statement) before sending a Bind message, in order \nto be able to pick out the result columns in should request in binary. \nThe feedback was that this extra round trip is often not worth it in \nterms of performance, and so it is not done and binary format is not \nused when it could be.\n\nThe conceptual solution is to allow a client to register for a session \nwhich types it wants to always get in binary, unless it says otherwise. \nIn the discussion in [0], I pondered a new protocol message for that, \nbut after further thought, a GUC setting would do just as well.\n\nThe attached patch implements this.  For example, to get int2, int4, \nint8 in binary by default, you could set\n\nSET default_result_formats = '21=1,23=1,20=1';Using SET statement for this case looks very obscure :/This is a protocol related issue, and should be solved by protocol extending. 
I don't think so SQL level is good for that.More, this format is not practical for custom types, and the list can be pretty long. \n\nThis is a list of oid=format pairs.\n\nI think this format satisfies the current requirements of the JDBC \ndriver.  But the format could also be extended in the future to allow \ntype names to be listed or some other ways of identifying the types.\n\nIn order to be able to test this via libpq, I had to add a little hack. \nCurrently, PQexecParams() and similar functions can only pass exactly \none result format code, which per protocol is then applied to all result \ncolumns.  There is no support for sending zero result format codes to \nmake the session default apply.  I enabled this by allowing -1 to be \npassed as the format code.  I'm not sure if we want to make this part of \nthe official API, but it would be useful to have something like this \nsomehow. +1 to this feature, but -1 for design. It should be solved on protocol level. RegardsPavel\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/6e747f98-835f-2e05-cde5-86ee444a7140%402ndquadrant.com\n\n-- \nPeter Eisentraut              http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 26 Oct 2020 09:45:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> The conceptual solution is to allow a client to register for a session \n> which types it wants to always get in binary, unless it says otherwise. \n\nOK.\n\n> In the discussion in [0], I pondered a new protocol message for that, \n> but after further thought, a GUC setting would do just as well.\n\nI think a GUC is conceptually the wrong level ...\n\n> In order to be able to test this via libpq, I had to add a little hack. \n\n... which is part of the reason why you have to kluge this. 
I'm not\nentirely certain which levels of the client stack need to know about\nthis, but surely libpq is one.\n\nI'm also quite worried about failures (maybe even security problems)\narising from the \"wrong level\" of the client stack setting the GUC.\n\nIndependently of that, how would you implement \"says otherwise\" here,\nie do a single-query override of the session's prevailing setting?\nMaybe the right thing for that is to define -1 all the way down to the\nprotocol level as meaning \"use the session's per-type default\", and\nthen if you don't want that you can pass 0 or 1. An advantage of that\nis that you couldn't accidentally break an application that wasn't\nready for this feature, because it would not be the default to use it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Oct 2020 10:35:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On 2020-10-26 09:45, Pavel Stehule wrote:\n> The attached patch implements this.  For example, to get int2, int4,\n> int8 in binary by default, you could set\n> \n> SET default_result_formats = '21=1,23=1,20=1';\n> \n> \n> Using SET statement for this case looks very obscure :/\n> \n> This is a protocol related issue, and should be solved by protocol \n> extending. I don't think so SQL level is good for that.\n\nWe could also make it a protocol message, but it would essentially \nimplement the same thing, just again separately. And then you'd have no \nsupport to inspect the current setting, test out different settings \ninteractively, etc. That seems pretty wasteful and complicated for no \nreal gain.\n\n > More, this format is not practical for custom types, and the list can\n > be pretty long.\n\nThe list is what the list is. I don't see how you can make it any \nshorter. You have to list the data types that you're interested in \nsomehow. 
Any other ideas?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Nov 2020 21:48:55 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On 2020-10-26 15:35, Tom Lane wrote:\n>> In the discussion in [0], I pondered a new protocol message for that,\n>> but after further thought, a GUC setting would do just as well.\n> \n> I think a GUC is conceptually the wrong level ...\n\nIt does feel that way, but it gets the job done well and you can use all \nthe functionality already existing, such as being able to inspect \nsettings, temporarily change settings, etc. Otherwise we'd have to \nimplement a lot of things like that again. That would turn this 200 \nline patch into a 2000 line patch without any real additional benefit.\n\n>> In order to be able to test this via libpq, I had to add a little hack.\n> \n> ... which is part of the reason why you have to kluge this. I'm not\n> entirely certain which levels of the client stack need to know about\n> this, but surely libpq is one.\n >\n > I'm also quite worried about failures (maybe even security problems)\n > arising from the \"wrong level\" of the client stack setting the GUC.\n\nI don't think libpq needs to know about this very deeply. The protocol \nprovides format information with the result set. Libpq programs can \nquery that with PQfformat() and act accordingly. Nothing else is needed.\n\nThe real consumer of this would be the JDBC driver, which has built-in \nknowledge of the binary formats of some data types. Libpq doesn't, so \nit wouldn't use this facility anyway. (Not saying someone couldn't \nwrite a higher-level C library that does this, but it doesn't exist now. \n... hmm ... 
ecpg ...)\n\n> Independently of that, how would you implement \"says otherwise\" here,\n> ie do a single-query override of the session's prevailing setting?\n> Maybe the right thing for that is to define -1 all the way down to the\n> protocol level as meaning \"use the session's per-type default\", and\n> then if you don't want that you can pass 0 or 1. An advantage of that\n> is that you couldn't accidentally break an application that wasn't\n> ready for this feature, because it would not be the default to use it.\n\nYeah, that sounds a lot better. I'll look into that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Nov 2020 22:03:29 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "čt 5. 11. 2020 v 21:48 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> On 2020-10-26 09:45, Pavel Stehule wrote:\n> > The attached patch implements this. For example, to get int2, int4,\n> > int8 in binary by default, you could set\n> >\n> > SET default_result_formats = '21=1,23=1,20=1';\n> >\n> >\n> > Using SET statement for this case looks very obscure :/\n> >\n> > This is a protocol related issue, and should be solved by protocol\n> > extending. I don't think so SQL level is good for that.\n>\n> We could also make it a protocol message, but it would essentially\n> implement the same thing, just again separately. And then you'd have no\n> support to inspect the current setting, test out different settings\n> interactively, etc. 
That seems pretty wasteful and complicated for no\n> real gain.\n>\n\nIf you need a debug API, then it can be better implemented with functions.\nBut why do you need it on SQL level?\n\nThis is a protocol related thing.\n\n\n> More, this format is not practical for custom types, and the list can\n> be pretty long.\n>\n> The list is what the list is. I don't see how you can make it any\n> shorter. You have to list the data types that you're interested in\n> somehow. Any other ideas?\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n", "msg_date": "Fri, 6 Nov 2020 06:36:18 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On 2020-11-05 22:03, Peter Eisentraut wrote:\n>> Independently of that, how would you implement \"says otherwise\" here,\n>> ie do a single-query override of the session's prevailing setting?\n>> Maybe the right thing for that is to define -1 all the way down to the\n>> protocol level as meaning \"use the session's per-type default\", and\n>> then if you don't want that you can pass 0 or 1. An advantage of that\n>> is that you couldn't accidentally break an application that wasn't\n>> ready for this feature, because it would not be the default to use it.\n> Yeah, that sounds a lot better. I'll look into that.\n\nHere is a new patch updated to work that way. Feels better now.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Mon, 9 Nov 2020 11:10:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "\nOn 11/9/20 5:10 AM, Peter Eisentraut wrote:\n> On 2020-11-05 22:03, Peter Eisentraut wrote:\n>>> Independently of that, how would you implement \"says otherwise\" here,\n>>> ie do a single-query override of the session's prevailing setting?\n>>> Maybe the right thing for that is to define -1 all the way down to the\n>>> protocol level as meaning \"use the session's per-type default\", and\n>>> then if you don't want that you can pass 0 or 1.  An advantage of that\n>>> is that you couldn't accidentally break an application that wasn't\n>>> ready for this feature, because it would not be the default to use it.\n>> Yeah, that sounds a lot better.  
I'll look into that.\n>\n> Here is a new patch updated to work that way.  Feels better now.\n>\n\nI think this is conceptually OK, although it feels a bit odd.\n\nMight it be better to have the values as typename={binary,text} pairs\ninstead of oid={0,1} pairs, which are fairly opaque? That might make\nthings easier for things like UDTs where the oid might not be known or\nconstant.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 16 Nov 2020 10:15:25 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On 2020-11-16 16:15, Andrew Dunstan wrote:\n> I think this is conceptually OK, although it feels a bit odd.\n> \n> Might it be better to have the values as typename={binary,text} pairs\n> instead of oid={0,1} pairs, which are fairly opaque? That might make\n> things easier for things like UDTs where the oid might not be known or\n> constant.\n\nYes, type names would be better. I was hesitant because of all the \nparsing work involved, but I bit the bullet and did it in the new patch.\n\nTo simplify the format, I changed the parameter so it's just a list of \ntypes that you want in binary, rather than type=value pairs. If we ever \nwant to add another format, we would revisit this, but it seems unlikely \nin the near future.\n\nAlso, I have changed the naming of the parameter since this is no longer \nthe \"default\" but something you choose explicitly. I'm thinking in the \ndirection of \"auto\" mode for the naming. 
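To make that simplified parameter shape concrete: once the setting is reduced to a plain comma-separated list of type names wanted in binary, parsing is trivial. A hypothetical Python sketch — the helper name and the lower-casing assumption are illustrative, not the committed behaviour:

```python
def parse_binary_type_list(setting):
    """Sketch of the simplified parameter form: a comma-separated list of
    type names that should be returned in binary.  Quoting and other real
    identifier rules are deliberately ignored here.
    """
    names = set()
    for raw in setting.split(","):
        name = raw.strip().lower()   # assume simple, unquoted names
        if name:
            names.add(name)
    return names

assert parse_binary_type_list("int2, int4, int8") == {"int2", "int4", "int8"}
```

Returning a set also makes the later "is this column's type opted into binary?" check a constant-time lookup.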
Obviously, the name is easy to \ntweak in any case.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Wed, 25 Nov 2020 08:06:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On 11/25/20 2:06 AM, Peter Eisentraut wrote:\n> On 2020-11-16 16:15, Andrew Dunstan wrote:\n>> I think this is conceptually OK, although it feels a bit odd.\n>>\n>> Might it be better to have the values as typename={binary,text} pairs\n>> instead of oid={0,1} pairs, which are fairly opaque? That might make\n>> things easier for things like UDTs where the oid might not be known or\n>> constant.\n> \n> Yes, type names would be better.  I was hesitant because of all the \n> parsing work involved, but I bit the bullet and did it in the new patch.\n> \n> To simplify the format, I changed the parameter so it's just a list of \n> types that you want in binary, rather than type=value pairs.  If we ever \n> want to add another format, we would revisit this, but it seems unlikely \n> in the near future.\n> \n> Also, I have changed the naming of the parameter since this is no longer \n> the \"default\" but something you choose explicitly.  I'm thinking in the \n> direction of \"auto\" mode for the naming.  Obviously, the name is easy to \n> tweak in any case.\n\nAndrew, Tom, does the latest patch address your concerns?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 9 Mar 2021 09:47:28 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> Andrew, Tom, does the latest patch address your concerns?\n\n[ reads patch quickly... ] I think the definition is fine now,\nmodulo possible bikeshedding on the GUC name. 
(I have no\ngreat suggestion on that right now, but the current proposal\nseems mighty verbose.)\n\nThe implementation feels weird though, mainly in that I don't like\nPeter's choices for where to put the code. pquery.c is not where\nI would have expected to find the support for this, and I do not\nhave any confidence that applying the format conversion while\nfilling portal->formats[] is enough to cover all cases. I'd have\nthought that access/common/printtup.c or somewhere near there\nwould be where to do it.\n\nLikewise, the code associated with caching the results of the type\nOID lookups seems like it should be someplace where you'd be more\nlikely to find (a) type name lookup and (b) caching logic. I'm\nnot quite sure about the best place for that, but we could do\nworse than put it in parse_type.c. (As I recall, the parser\nalready has some caching related to operator lookup, so doing\npart (b) there isn't too much of a stretch.)\n\nAlso, if we need YA string-splitting function, please put it\nbeside the ones that already exist (SplitIdentifierString etc in\nvarlena.c). That way (a) it's available if some other code needs\nit, and (b) when somebody gets around to refactoring all the\nsplitters, they won't have to dig into nooks and crannies to find\nthem.\n\nHaving said that, I wonder if we should define the parameter's\ncontents this way, i.e. as things that parseTypeString will\naccept. At best that's overspecification, e.g. should people\nexpect that varchar(7) and varchar(9) are different things\n(and, perhaps, that such entries *don't* match varchars of other\nlengths?) I think a case could be made for requiring the entries\nto be identifiers exactly matching pg_type.typname, possibly with\nschema qualification. 
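The check-hook idea in the preceding paragraph — accept only plain, optionally schema-qualified identifiers matching pg_type.typname — is easy to model. A toy Python version, which deliberately ignores real identifier rules such as quoting, case folding, and non-ASCII characters:

```python
import re

IDENT = r'[a-z_][a-z0-9_$]*'
QUALIFIED = re.compile(r'^%s(\.%s)?$' % (IDENT, IDENT))

def check_type_list(setting):
    """Toy version of the kind of GUC check hook sketched above: accept a
    comma-separated list of (optionally schema-qualified) type names and
    reject anything that is not a plain identifier.  Not real server code.
    """
    names = []
    for raw in setting.split(","):
        name = raw.strip()
        if not QUALIFIED.match(name):
            return None           # a real check hook would report an error
        names.append(name)
    return names

assert check_type_list("int4, pg_catalog.int8") == ["int4", "pg_catalog.int8"]
assert check_type_list("varchar(7)") is None   # typmod syntax is rejected
```

Note how the tighter syntax makes the varchar(7)-vs-varchar(9) ambiguity from the paragraph above simply unrepresentable.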
This'd allow tighter verification of the\nGUC value's format in the GUC check hook.\n\nOr we could drop all of that and go back to having it be a list\nof type OIDs, which would remove a *whole lot* of the complexity,\nand I'm not sure that it's materially less friendly. Applications\nhave had to deal with type OIDs in the protocol since forever.\n\nBTW, I wonder whether we still need to restrict the GUC to not\nbe settable from postgresql.conf. The fact that the client has\nto explicitly pass -1 seems to reduce any security issues quite\na bit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Mar 2021 13:04:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On 09.03.21 19:04, Tom Lane wrote:\n> The implementation feels weird though, mainly in that I don't like\n> Peter's choices for where to put the code. pquery.c is not where\n> I would have expected to find the support for this, and I do not\n> have any confidence that applying the format conversion while\n> filling portal->formats[] is enough to cover all cases. I'd have\n> thought that access/common/printtup.c or somewhere near there\n> would be where to do it.\n\ndone\n\n> Or we could drop all of that and go back to having it be a list\n> of type OIDs, which would remove a *whole lot* of the complexity,\n> and I'm not sure that it's materially less friendly. Applications\n> have had to deal with type OIDs in the protocol since forever.\n\nalso done\n\nThe client driver needs to be able to interpret the OIDs that the \nRowDescription sends back, so it really needs to be able to deal in \nOIDs, and having the option to specify type names won't help it right now.\n\n> BTW, I wonder whether we still need to restrict the GUC to not\n> be settable from postgresql.conf. 
The fact that the client has\n> to explicitly pass -1 seems to reduce any security issues quite\n> a bit.\n\nThere was no security concern, but I don't think it's useful. The \ndriver would specify \"send int4 in binary, I know how to handle that\". \nThere doesn't seem to be a point in specifying that sort of thing globally.", "msg_date": "Thu, 18 Mar 2021 21:13:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "I applied the patch, tried running the test and got the following:\r\n\r\nrm -rf '/Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended'/tmp_check\r\n/bin/sh ../../../../config/install-sh -c -d '/Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended'/tmp_check\r\ncd . && TESTDIR='/Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended' PATH=\"/Users/hasegeli/Developer/postgres/tmp_install/Users/hasegeli/.local/pgsql/bin:$PATH\" DYLD_LIBRARY_PATH=\"/Users/hasegeli/Developer/postgres/tmp_install/Users/hasegeli/.local/pgsql/lib\" PGPORT='65432' PG_REGRESS='/Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended/../../../../src/test/regress/pg_regress' REGRESS_SHLIB='/Users/hasegeli/Developer/postgres/src/test/regress/regress.so' /usr/bin/prove -I ../../../../src/test/perl/ -I . t/*.pl\r\nt/001_result_format.pl .. # Looks like your test exited with 2 before it could output anything.\r\nt/001_result_format.pl .. Dubious, test returned 2 (wstat 512, 0x200)\r\nFailed 4/4 subtests\r\n\r\nTest Summary Report\r\n-------------------\r\nt/001_result_format.pl (Wstat: 512 Tests: 0 Failed: 0)\r\n Non-zero exit status: 2\r\n Parse errors: Bad plan. 
You planned 4 tests but ran 0.", "msg_date": "Fri, 19 Mar 2021 14:55:24 +0000", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "\nOn 19.03.21 15:55, Emre Hasegeli wrote:\n> I applied the patch, tried running the test and got the following:\n> \n> rm -rf '/Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended'/tmp_check\n> /bin/sh ../../../../config/install-sh -c -d '/Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended'/tmp_check\n> cd . && TESTDIR='/Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended' PATH=\"/Users/hasegeli/Developer/postgres/tmp_install/Users/hasegeli/.local/pgsql/bin:$PATH\" DYLD_LIBRARY_PATH=\"/Users/hasegeli/Developer/postgres/tmp_install/Users/hasegeli/.local/pgsql/lib\" PGPORT='65432' PG_REGRESS='/Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended/../../../../src/test/regress/pg_regress' REGRESS_SHLIB='/Users/hasegeli/Developer/postgres/src/test/regress/regress.so' /usr/bin/prove -I ../../../../src/test/perl/ -I . t/*.pl\n> t/001_result_format.pl .. # Looks like your test exited with 2 before it could output anything.\n> t/001_result_format.pl .. Dubious, test returned 2 (wstat 512, 0x200)\n> Failed 4/4 subtests\n> \n> Test Summary Report\n> -------------------\n> t/001_result_format.pl (Wstat: 512 Tests: 0 Failed: 0)\n> Non-zero exit status: 2\n> Parse errors: Bad plan. You planned 4 tests but ran 0.\n> \n\nCould you look into the log files in that test directory what is going \non? The test setup is closely modeled after \nsrc/test/modules/libpq_pipeline/. 
Does that one run ok?\n\n\n", "msg_date": "Sun, 21 Mar 2021 20:02:58 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "> Could you look into the log files in that test directory what is going\n> on?\n\nCommand 'test-result-format' not found in\n/Users/hasegeli/Developer/postgres/tmp_install/Users/hasegeli/.local/pgsql/bin,\n/Users/hasegeli/.local/bin, /opt/homebrew/bin, /usr/local/bin,\n/usr/bin, /bin, /usr/sbin, /sbin,\n/Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended at\n/Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended/../../../../src/test/perl/TestLib.pm\nline 818.\n\nMaybe you forgot to commit the file in the test?\n\n> The test setup is closely modeled after\n> src/test/modules/libpq_pipeline/. Does that one run ok?\n\nYes\n\n\n", "msg_date": "Sun, 21 Mar 2021 22:18:00 +0300", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On 21.03.21 20:18, Emre Hasegeli wrote:\n>> Could you look into the log files in that test directory what is going\n>> on?\n> \n> Command 'test-result-format' not found in\n> /Users/hasegeli/Developer/postgres/tmp_install/Users/hasegeli/.local/pgsql/bin,\n> /Users/hasegeli/.local/bin, /opt/homebrew/bin, /usr/local/bin,\n> /usr/bin, /bin, /usr/sbin, /sbin,\n> /Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended at\n> /Users/hasegeli/Developer/postgres/src/test/modules/libpq_extended/../../../../src/test/perl/TestLib.pm\n> line 818.\n> \n> Maybe you forgot to commit the file in the test?\n\nIndeed. 
Here is an updated patch.", "msg_date": "Mon, 22 Mar 2021 13:50:40 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "I think this is a good feature that would be useful to JDBC and more.\n\nI don't know the surrounding code very well, but the patch looks good to me.\n\nI agree with Tom Lane that the name of the variable is too verbose.\nMaybe \"auto_binary_types\" is enough. Do we gain much by prefixing\n\"result_format_\"? Wouldn't we use the same variable, if we support\nbinary inputs one day?\n\nIt is nice that the patch comes with the test module. The name\n\"libpq_extended\" sounds a bit vague to me. Maybe it's a better idea\nto call it \"libpq_result_format\" and test \"format=1\" in it as well.\n\nMy last nitpicking about the names is the \"test-result-format\"\ncommand. All the rest of the test modules name the commands with\nunderscores. It would be nicer if this one complies.\n\nThere is one place that needs to be updated on the Makefile of the test:\n\n> +subdir = src/test/modules/libpq_pipeline\n\ns/pipeline/extended/\n\nThen the test runs successfully.\n\n\n", "msg_date": "Wed, 24 Mar 2021 16:03:30 +0300", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On Thu, Nov 5, 2020 at 3:49 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> We could also make it a protocol message, but it would essentially\n> implement the same thing, just again separately. And then you'd have no\n> support to inspect the current setting, test out different settings\n> interactively, etc. That seems pretty wasteful and complicated for no\n> real gain.\n\nBut ... if it's just a GUC, it can be set by code on the server side\nthat the client knows nothing about, breaking the client. 
That seems\npretty bad to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Mar 2021 10:49:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> But ... if it's just a GUC, it can be set by code on the server side\n> that the client knows nothing about, breaking the client. That seems\n> pretty bad to me.\n\nIt's impossible for the proposed patch to break *existing* clients,\nbecause they all send requested format 0 or 1, and that is exactly\nwhat they'll get back.\n\nA client that is sending -1 and assuming that it will get back\na particular format could get broken if the GUC doesn't have the\nvalue it thinks, true. But I'd argue that such code is unreasonably\nnon-robust. Can't we solve this by recommending that clients using\nthis feature always double-check which format they actually got?\nISTM that the use-cases for the feature involve checking what data\ntype you got anyway, so that's not an unreasonable added requirement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Mar 2021 10:58:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On Wed, Mar 24, 2021 at 10:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > But ... if it's just a GUC, it can be set by code on the server side\n> > that the client knows nothing about, breaking the client. That seems\n> > pretty bad to me.\n>\n> It's impossible for the proposed patch to break *existing* clients,\n> because they all send requested format 0 or 1, and that is exactly\n> what they'll get back.\n\nOK.\n\n> A client that is sending -1 and assuming that it will get back\n> a particular format could get broken if the GUC doesn't have the\n> value it thinks, true. 
But I'd argue that such code is unreasonably\n> non-robust. Can't we solve this by recommending that clients using\n> this feature always double-check which format they actually got?\n> ISTM that the use-cases for the feature involve checking what data\n> type you got anyway, so that's not an unreasonable added requirement.\n\nI suppose that's a fair idea, but to me it still feels a bit like a\nround peg in the square hole. Suppose for example that there's a\nclient application which wants to talk to a connection pooler which in\nturn wants to talk to the server. Let's also suppose that connection\npooler isn't just a pass-through, but wants to redirect client\nconnections to various servers or even intercept queries and result\nsets and make changes as the data passes by. It can do that by parsing\nSQL and solving the halting problem, whereas if this were a\nprotocol-level option it would be completely doable. Now you could say\n\"well, by that argument, DateStyle ought to be a protocol-level\noption, too,\" and that's a pretty fair criticism of what I'm\nsaying here. 
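Tom's double-check recommendation quoted above is cheap to follow on the client side: branch on the per-column format code (what libpq reports via PQfformat) instead of assuming the requested format came back. A self-contained simulation for int4, with no real connection involved:

```python
import struct

def decode_int4(value, fmt):
    """Client-side guard of the kind recommended above: never assume the
    format you asked for is the format you got; branch on the per-column
    format code instead.  Purely a simulation of the decoding step.
    """
    if fmt == 1:                       # binary: network-order int32
        return struct.unpack("!i", value)[0]
    if fmt == 0:                       # text: decimal string
        return int(value.decode("ascii"))
    raise ValueError("unexpected format code %r" % fmt)

assert decode_int4(b"42", 0) == 42
assert decode_int4(struct.pack("!i", 42), 1) == 42
```

With a guard like this, a session whose per-type default differs from what the application expected degrades to the text path instead of misparsing bytes.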
I think we should be\nworking toward a state where it's more clear which things are \"owned\"\nat the wire protocol level and which things are \"owned\" at the SQL\nlevel, and this seems to be going in exactly the opposite direction,\nand in fact probably taking things further in that direction than\nwe've ever gone before.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Mar 2021 11:29:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Mar 24, 2021 at 10:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A client that is sending -1 and assuming that it will get back\n>> a particular format could get broken if the GUC doesn't have the\n>> value it thinks, true. But I'd argue that such code is unreasonably\n>> non-robust. Can't we solve this by recommending that clients using\n>> this feature always double-check which format they actually got?\n>> ISTM that the use-cases for the feature involve checking what data\n>> type you got anyway, so that's not an unreasonable added requirement.\n\n> I suppose that's a fair idea, but to me it still feels a bit like a\n> round peg in the square hole. Suppose for example that there's a\n> client application which wants to talk to a connection pooler which in\n> turn wants to talk to the server. Let's also suppose that connection\n> pooler isn't just a pass-through, but wants to redirect client\n> connections to various servers or even intercept queries and result\n> sets and make changes as the data passes by. It can do that by parsing\n> SQL and solving the halting problem, whereas if this were a\n> protocol-level option it would be completely doable. Now you could say\n> \"well, by that argument, DateStyle ought to be a protocol-level\n> option, too,\" and that's pretty a pretty fair criticism of what I'm\n> saying here. 
On the other hand, I'm not too sure that wouldn't have\n> been the right call. Using SQL to tailor the wire protocol format\n> feels like some kind of layering inversion to me.\n\nI can't say that I'm 100% comfortable with it either, but the alternative\nseems quite unpleasant, precisely because the client side might have\nmultiple layers involved. If we make it a wire-protocol thing then\na whole lot of client API thrashing is going to ensue to transmit the\ndesired setting up and down the stack. As an example, libpq doesn't\nreally give a darn which data format is returned: it is the application\nusing libpq that would want to be able to set this option. If libpq\nhas to be involved in transmitting the option to the backend, then we\nneed a new libpq API call to tell it to do that. Rinse and repeat\nfor anything that wraps libpq. And, in the end, it's not real clear\nwhich client-side layer *should* have control of this. In some cases\nyou might want the decision to be taken quite high up, because which\nformat is really more efficient will depend on the total usage picture\nfor a given application, which low-level code like libpq wouldn't know.\nHaving a library decide that \"this buck stops with me\" is likely to be\nthe wrong thing.\n\nI do not understand the structure of the client stack for JDBC, but\nI wonder whether there won't be similar issues there.\n\nAs you say, DateStyle and the like are precedents for things that\n*could* break application stacks, and in another universe maybe we'd\nhave managed them differently. In the end though, they've been like\nthat for a long time and we've not heard many complaints about them.\nSo I'm inclined to think that that precedent says this is OK too.\n\nBTW, I thought briefly about whether we could try to lock things down\na bit by marking the GUC as PGC_BACKEND, which would effectively mean\nthat clients would have to send it in the startup packet. 
However,\nthat would verge on making it unusable for non-built-in datatypes,\nfor which you need to look up the OID first. So I don't think that'd\nbe an improvement.\n\n> I think we should be\n> working toward a state where it's more clear which things are \"owned\"\n> at the wire protocol level and which things are \"owned\" at the SQL\n> level, and this seems to be going in exactly the opposite direction,\n\nI don't think I buy the premise that there are exactly two levels\non the client side.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Mar 2021 12:01:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On Wed, Mar 24, 2021 at 12:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't think I buy the premise that there are exactly two levels\n> on the client side.\n\nThanks for sharing your thoughts on this. I agree it's a complex\nissue, and the idea that there are possibly more than two logical\nlevels is, for me, maybe your most interesting observation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Mar 2021 12:05:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" }, { "msg_contents": "On Sun, 7 Aug 2022 at 09:58, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Mar 24, 2021 at 12:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I don't think I buy the premise that there are exactly two levels\n> > on the client side.\n>\n> Thanks for sharing your thoughts on this. 
I agree it's a complex\n> issue, and the idea that there are possibly more than two logical\n> levels is, for me, maybe your most interesting observation.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.co <http://www.enterprisedb.com>\n\n\nI'd like to revive this thread.\n\nI have put in a patch to do the same thing.\nPostgreSQL: Re: Proposal to provide the facility to set binary format\noutput for specific OID's per session\n<https://www.postgresql.org/message-id/CADK3HHJFVS1VWxGDKov8XMeFzyxyGJAyzCRQUwjvso+NMo+ofA@mail.gmail.com>\n\nUpthread Tom mused about how the JDBC driver would handle it. I can tell\nyou that it handles it fine with no changes as does the go driver. Further\nas Jack pointed out it provides significant performance benefits.\n\nThe original discussion correctly surmises that the DESCRIBE statement is\nrarely (if ever) used as any advantages of sending are nullified by the\ncost of sending it.\n\nI prefer the GUC as this allows pools to be configured to reset the setting\nwhen returning the connection to the pool and setting it correctly for the\nclient when borrowing the connection.\n\nRegards,\n\nDave\n\n\n>\n\nOn Sun, 7 Aug 2022 at 09:58, Robert Haas <robertmhaas@gmail.com> wrote:On Wed, Mar 24, 2021 at 12:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't think I buy the premise that there are exactly two levels\n> on the client side.\n\nThanks for sharing your thoughts on this. I agree it's a complex\nissue, and the idea that there are possibly more than two logical\nlevels is, for me, maybe your most interesting observation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.coI'd like to revive this thread.I have put in a patch to do the same thing.PostgreSQL: Re: Proposal to provide the facility to set binary format output for specific OID's per sessionUpthread Tom mused about how the JDBC driver would handle it. I can tell you that it handles it fine with no changes as does the go driver. 
Further as Jack pointed out it provides significant performance benefits.The original discussion correctly surmises that the DESCRIBE statement is rarely (if ever) used as any advantages of sending are nullified by the cost of sending it.I prefer the GUC as this allows pools to be configured to reset the setting when returning the connection to the pool and setting it correctly for the client when borrowing the connection.Regards,Dave", "msg_date": "Sun, 7 Aug 2022 10:08:35 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: default result formats setting" } ]
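The pooling behaviour Dave describes — set the format list when a connection is borrowed, reset it when the connection is returned — takes only a few lines to model. Everything here is a stand-in: the session object, the pool API, and the GUC name (taken from the early patch version) are illustrative, not any driver's real interface:

```python
class PooledSession:
    """Minimal stand-in for a pooled server session and its settings."""
    def __init__(self):
        self.gucs = {}

class Pool:
    def __init__(self):
        self._idle = [PooledSession()]

    def checkout(self, binary_oids):
        session = self._idle.pop()
        # set the borrower's preference on the way out
        session.gucs["default_result_formats"] = binary_oids
        return session

    def checkin(self, session):
        # i.e. RESET: the next borrower starts from a clean slate
        session.gucs.pop("default_result_formats", None)
        self._idle.append(session)

pool = Pool()
s = pool.checkout({21, 23, 20})
assert s.gucs["default_result_formats"] == {21, 23, 20}
pool.checkin(s)
assert "default_result_formats" not in s.gucs
```

Because the setting is an ordinary session GUC, this reset-on-checkin discipline is the same one pools already apply to things like search_path.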
[ { "msg_contents": "Hi hackers,\n\n It seems the function `get_variable_numdistinct` ignore the case when stanullfrac is 1.0:\n\n# create table t(a int, b int);\nCREATE TABLE\n# insert into t select i from generate_series(1, 10000)i;\nINSERT 0 10000\ngpadmin=# analyze t;\nANALYZE\n# explain analyze select b, count(1) from t group by b;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n HashAggregate (cost=195.00..197.00 rows=200 width=12) (actual time=5.928..5.930 rows=1 loops=1)\n Group Key: b\n Batches: 1 Memory Usage: 40kB\n -> Seq Scan on t (cost=0.00..145.00 rows=10000 width=4) (actual time=0.018..1.747 rows=10000 loops=1)\n Planning Time: 0.237 ms\n Execution Time: 5.983 ms\n(6 rows)\n\nSo it gives the estimate using the default value: 200.\n\n\nI have added some lines of code to take `stanullfrac ==1.0` into account. With the patch attached, we now get:\n\n# explain analyze select b, count(1) from t group by b;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n HashAggregate (cost=195.00..195.01 rows=1 width=12) (actual time=6.163..6.164 rows=1 loops=1)\n Group Key: b\n Batches: 1 Memory Usage: 24kB\n -> Seq Scan on t (cost=0.00..145.00 rows=10000 width=4) (actual time=0.024..1.823 rows=10000 loops=1)\n Planning Time: 0.535 ms\n Execution Time: 6.344 ms\n(6 rows)\n\nI am not sure if this change is valuable in practical env, but it should go in the correct direction.\n\nAny comments on this are appreciated.", "msg_date": "Mon, 26 Oct 2020 08:42:52 +0000", "msg_from": "Zhenghua Lyu <zlyu@vmware.com>", "msg_from_op": true, "msg_subject": "Should the function get_variable_numdistinct consider the case when\n stanullfrac is 1.0?" }, { "msg_contents": "Zhenghua Lyu <zlyu@vmware.com> writes:\n> It seems the function `get_variable_numdistinct` ignore the case when stanullfrac is 1.0:\n\nI don't like this patch at all. 
What's the argument for having a special\ncase for this value? When would we ever get exactly 1.0 in practice?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Oct 2020 10:37:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should the function get_variable_numdistinct consider the case\n when stanullfrac is 1.0?" }, { "msg_contents": "Hi,\n when group by multi-columns, it will multiply all the distinct values together, and if one column is all null,\n it also contributes 200 to the final estimate, and if the product is over the relation size, it will be clamp.\n\n So the the value of the agg rel size is not correct, and impacts the upper path's cost estimate, and do not\n give a good plan.\n\n I debug some other queries and find this issue, but not sure if this issue is the root cause of my problem,\n just open a thread here for discussion.\n________________________________\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Monday, October 26, 2020 10:37 PM\nTo: Zhenghua Lyu <zlyu@vmware.com>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Should the function get_variable_numdistinct consider the case when stanullfrac is 1.0?\n\nZhenghua Lyu <zlyu@vmware.com> writes:\n> It seems the function `get_variable_numdistinct` ignore the case when stanullfrac is 1.0:\n\nI don't like this patch at all. What's the argument for having a special\ncase for this value? 
When would we ever get exactly 1.0 in practice?\n\n regards, tom lane", "msg_date": "Mon, 26 Oct 2020 15:01:41 +0000", "msg_from": "Zhenghua Lyu <zlyu@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Should the function get_variable_numdistinct consider the case\n when stanullfrac is 1.0?" 
}, { "msg_contents": "On Mon, Oct 26, 2020 at 03:01:41PM +0000, Zhenghua Lyu wrote:\n>Hi,\n> when grouping by multiple columns, it will multiply all the distinct values together, and if one column is all null,\n> it also contributes 200 to the final estimate, and if the product is over the relation size, it will be clamped.\n>\n> So the value of the agg rel size is not correct, and it impacts the upper path's cost estimate and does not\n> give a good plan.\n>\n> I debugged some other queries and found this issue, but I am not sure if it is the root cause of my problem,\n> so I am just opening a thread here for discussion.\n\nI think we understand what the issue is, in principle - if the column is\nall-null, the ndistinct estimate 200 is bogus and when multiplied with\nestimates for other Vars it may lead to over-estimates. That's a valid\nissue, of course.\n\nThe question is whether the proposed patch is a good way to handle it.\n\nI'm not sure what exactly Tom's concerns are, but I was worried that relying\non (stanullfrac == 1.0) might result in abrupt changes in estimates when\nthe underlying difference is minor. For example, if a column is \"almost NULL\" we may\nend up with either 1.0 or (1.0 - epsilon) and the question is what\nestimates we end up with ...\n\nImagine a column that is 'almost NULL' - it's 99.99% NULL with a couple of\nnon-NULL values. When ANALYZE samples just NULLs, we'll end up with\n\n n_distinct = 0.0\n stanullfrac = 1.0\n\nand we'll end up estimating either 200 (current estimate) or 1.0 (with\nthis patch). Now, what if stanullfrac is not 1.0 but a little bit less?\nSay only 1 of the 30k rows is non-NULL? Well, in that case we'll not\neven get to this condition, because we'll have\n\n n_distinct = -3.3318996e-05\n stanullfrac = 0.9999667\n\nwhich means get_variable_numdistinct will return from either\n\n if (stadistinct > 0.0)\n return ...\n\nor\n\n if (stadistinct < 0.0)\n return ...\n\nand we'll never even get to that new condition. 
And by definition, the\nestimate has to be very low, because otherwise we'd need more non-NULL\ndistinct rows in the sample, which makes it less likely to ever see\nstanullfrac being 1.0. And even if we could get a bigger difference\n(say, 50 vs. 1.0), I don't think that's very different from the\ncurrent situation with 200 as a default.\n\nOf course, using 1.0 in these cases may make us more vulnerable to\nunder-estimates for large tables. But for that to happen we must not\nsample any of the non-NULL values, and if there are many distinct values\nthat's probably even less likely than sampling just one (when we end up\nwith an estimate of 1.0 already).\n\nSo I'm not sure I understand what would be the risk with this ... Tom,\ncan you elaborate why you dislike the patch?\n\n\nBTW we already have a way to improve the estimate - setting n_distinct\nfor the column to 1.0 using ALTER TABLE should do the trick, I think.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 31 Oct 2020 00:40:39 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Should the function get_variable_numdistinct consider the case\n when stanullfrac is 1.0?" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> So I'm not sure I understand what would be the risk with this ... Tom,\n> can you elaborate why you dislike the patch?\n\nI've got a couple issues with the patch as presented.\n\n* As you said, it creates discontinuous behavior for stanullfrac = 1.0\nversus stanullfrac = 1.0 - epsilon. That doesn't seem good.\n\n* It's not apparent why, if ANALYZE's sample is all nulls, we wouldn't\nconclude stadistinct = 0 and thus arrive at the desired answer that\nway. (Since we have a complaint, I'm guessing that ANALYZE might\ndisbelieve its own result and stick in some larger stadistinct. 
But\nthen maybe that's where to fix this, not here.)\n\n* We generally disbelieve edge-case estimates to begin with. The\nmost obvious example is that we don't accept rowcount estimates that\nare zero. There are also some clamps that disbelieve selectivities\napproaching 0.0 or 1.0 when estimating from a histogram, and I think\nwe have a couple other similar rules. The reason for this is mainly\nthat taking such estimates at face value creates too much risk of\nsevere relative error due to imprecise or out-of-date statistics.\nSo a special case for stanullfrac = 1.0 seems to go directly against\nthat mindset.\n\nI agree that there might be some gold to be mined in this area,\nas we haven't thought particularly hard about high-stanullfrac\nsituations. One idea is to figure what stanullfrac says about the\nnumber of non-null rows, and clamp the get_variable_numdistinct\nresult to be not more than that. But I still would not want to\ntrust an exact zero result.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Oct 2020 20:50:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should the function get_variable_numdistinct consider the case\n when stanullfrac is 1.0?" }, { "msg_contents": "I wrote:\n> * It's not apparent why, if ANALYZE's sample is all nulls, we wouldn't\n> conclude stadistinct = 0 and thus arrive at the desired answer that\n> way. (Since we have a complaint, I'm guessing that ANALYZE might\n> disbelieve its own result and stick in some larger stadistinct. But\n> then maybe that's where to fix this, not here.)\n\nOh, on second thought (and with some testing): ANALYZE *does* report\nstadistinct = 0. The real issue is that get_variable_numdistinct is\nassuming it can use that value as meaning \"stadistinct is unknown\".\nSo maybe we should just fix that, probably by adding an explicit\nbool flag for that condition.\n\nBTW ... 
I've not looked at the callers, but now I'm wondering whether\nget_variable_numdistinct ought to count NULL as one of the \"distinct\"\nvalues. In applications such as estimating the number of GROUP BY\ngroups, it seems like that would be correct. There might be some\ncallers that don't want it though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Oct 2020 21:04:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should the function get_variable_numdistinct consider the case\n when stanullfrac is 1.0?" } ]
[ { "msg_contents": "Greetings hackers,\n\nI have, I hope, an interesting observation (and a nano patch proposal) from a system where statistics freshness is a critical factor. Autovacuum/autogathering statistics was tuned to be very aggressive:\nautovacuum_vacuum_cost_delay=0 (makes autovacuum_vacuum_cost_limit irrelevant)\nautovacuum_naptime=1s\nautovacuum_max_workers=4\n\nSome critical table partitions are configured with: autovacuum_analyze_scale_factor=0.001, autovacuum_analyze_threshold=50000 to force auto-analyze jobs pretty often. The interesting logs are like this:\nautomatic analyze of table \"t1\" system usage: CPU: user: 37.52 s, system: 23.01 s, elapsed: 252.14 s\nautomatic analyze of table \"t2\" system usage: CPU: user: 38.70 s, system: 22.63 s, elapsed: 317.33 s\nautomatic analyze of table \"t2\" system usage: CPU: user: 39.38 s, system: 21.43 s, elapsed: 213.58 s\nautomatic analyze of table \"t1\" system usage: CPU: user: 37.91 s, system: 24.49 s, elapsed: 229.45 s\n\nand this is the root cause of my question. As you can see there is a huge 3x-4x time discrepancy between \"elapsed\" and user+system, which is strange at least to me, as there should be no waiting (autovacuum_vacuum_cost_delay=0). According to various tools it is true: time was wasted somewhere else, but not in the PostgreSQL analyze. The ps(1) and pidstat(1) also report the same for the worker:\n\n06:56:12 AM PID %usr %system %guest %CPU CPU Command\n06:56:13 AM 114774 8.00 10.00 0.00 18.00 18 postgres\n06:56:14 AM 114774 8.00 11.00 0.00 19.00 15 postgres\n06:56:15 AM 114774 5.00 13.00 0.00 18.00 18 postgres\n\n06:56:17 AM PID kB_rd/s kB_wr/s kB_ccwr/s Command\n06:56:18 AM 114774 63746.53 0.00 0.00 postgres\n06:56:19 AM 114774 62896.00 0.00 0.00 postgres\n06:56:20 AM 114774 62920.00 0.00 0.00 postgres\n\nOne could argue that such an autoanalyze worker could perform 5x more work (%CPU -> 100%) here. 
The I/O system is not performing a lot (total = 242MB/s reads@22k IOPS, 7MB/s writes@7k IOPS, with service time 0.04ms), although reporting high utilization I'm pretty sure it could push much more. There can be up to 3x-4x of such 70-80MB/s analyzes in parallel (let's say 300MB/s for statistics collection alone).\n\nAccording to various gdb backtraces, a lot of time is spent here:\n#0 0x00007f98cdfc9f73 in __pread_nocancel () from /lib64/libpthread.so.0\n#1 0x0000000000741a16 in pread (__offset=811253760, __nbytes=8192, __buf=0x7f9413ab7280, __fd=<optimized out>) at /usr/include/bits/unistd.h:84\n#2 FileRead (file=<optimized out>, buffer=0x7f9413ab7280 \"\\037\\005\", amount=8192, offset=811253760, wait_event_info=167772173) at fd.c:1883\n#3 0x0000000000764b8f in mdread (reln=<optimized out>, forknum=<optimized out>, blocknum=19890902, buffer=0x7f9413ab7280 \"\\037\\005\") at md.c:596\n#4 0x000000000073d69b in ReadBuffer_common (smgr=<optimized out>, relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=19890902, mode=RBM_NORMAL, strategy=0x1102278, hit=0x7fffba7e2d4f)\n at bufmgr.c:897\n#5 0x000000000073e27e in ReadBufferExtended (reln=0x7f98c0c9ded0, forkNum=MAIN_FORKNUM, blockNum=19890902, mode=<optimized out>, strategy=<optimized out>) at bufmgr.c:665\n#6 0x00000000004c2e2f in heapam_scan_analyze_next_block (scan=<optimized out>, blockno=<optimized out>, bstrategy=<optimized out>) at heapam_handler.c:998\n#7 0x0000000000597de1 in table_scan_analyze_next_block (bstrategy=<optimized out>, blockno=<optimized out>, scan=0x10c8098) at ../../../src/include/access/tableam.h:1462\n#8 acquire_sample_rows (onerel=0x7f98c0c9ded0, elevel=13, rows=0x1342e08, targrows=1500000, totalrows=0x7fffba7e3160, totaldeadrows=0x7fffba7e3158) at analyze.c:1048\n#9 0x0000000000596a50 in do_analyze_rel (onerel=0x7f98c0c9ded0, params=0x10744e4, va_cols=0x0, acquirefunc=0x597ca0 <acquire_sample_rows>, relpages=26763525, inh=false,\n in_outer_xact=false, elevel=13) at 
analyze.c:502\n[..]\n#12 0x00000000006e76b4 in autovacuum_do_vac_analyze (bstrategy=0x1102278, tab=<optimized out>) at autovacuum.c:3118\n[..]\n\nThe interesting thing that targrows=1.5mlns and that blocks are accessed (as expected) in sorted order:\n\nBreakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890910, bstrategy=0x1102278) at heapam_handler.c:984\nBreakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890912, bstrategy=0x1102278) at heapam_handler.c:984\nBreakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890922, bstrategy=0x1102278) at heapam_handler.c:984\nBreakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890935, bstrategy=0x1102278) at heapam_handler.c:984\nBreakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890996, bstrategy=0x1102278) at heapam_handler.c:984\n\nAnd probably this explains the discrepancy, perf with CPU usage reporting shows a lot of frames waiting on I/O on readaheads done by ext4, trimmed for clarity:\n\n# Children Self sys usr Command Shared Object Symbol\n 63.64% 0.00% 0.00% 0.00% postgres [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe\n ---entry_SYSCALL_64_after_hwframe\n do_syscall_64\n |--59.66%--sys_pread64\n | vfs_read\n | --59.09%--__vfs_read\n | --58.24%--generic_file_read_iter\n | |--47.44%--ondemand_readahead\n | | --46.88%--__do_page_cache_readahead\n | | |--32.67%--ext4_mpage_readpages\n | | | |--16.76%--submit_bio\n | | |--10.23%--blk_finish_plug\n[..]\n 63.64% 1.99% 1.99% 0.00% postgres [kernel.kallsyms] [k] do_syscall_64\n |--61.65%--do_syscall_64\n | |--59.66%--sys_pread64\n | | vfs_read\n | | --59.09%--__vfs_read\n | | --58.24%--generic_file_read_iter\n | | |--47.44%--ondemand_readahead\n | | | --46.88%--__do_page_cache_readahead\n\n 61.36% 0.00% 0.00% 0.00% postgres postgres [.] 
FileRead \n ---FileRead\n __pread_nocancel\n --60.51%--entry_SYSCALL_64_after_hwframe\n do_syscall_64\n --59.66%--sys_pread64\n vfs_read\n --59.09%--__vfs_read\n --58.24%--generic_file_read_iter\n |--47.44%--ondemand_readahead\n | --46.88%--__do_page_cache_readahead\n\n 61.36% 0.85% 0.00% 0.85% postgres libpthread-2.17.so [.] __pread_nocancel\n |--60.51%--__pread_nocancel\n | entry_SYSCALL_64_after_hwframe\n | do_syscall_64\n | --59.66%--sys_pread64\n | vfs_read\n | --59.09%--__vfs_read\n | --58.24%--generic_file_read_iter\n | |--47.44%--ondemand_readahead\n | | --46.88%--__do_page_cache_readahead\n\n\n 59.66% 0.00% 0.00% 0.00% postgres [kernel.kallsyms] [k] sys_pread64\n ---sys_pread64\n vfs_read\n --59.09%--__vfs_read\n --58.24%--generic_file_read_iter\n |--47.44%--ondemand_readahead\n | --46.88%--__do_page_cache_readahead\n | |--32.67%--ext4_mpage_readpages\n\n\n[..] \nPerf --no-children also triple confirms that there isn't any function that is burning a lot inside the worker:\n\n# Overhead sys usr Command Shared Object Symbol\n 5.40% 0.00% 5.40% postgres [vdso] [.] __vdso_clock_gettime\n 5.11% 0.00% 5.11% postgres postgres [.] acquire_sample_rows\n ---acquire_sample_rows\n 3.98% 0.00% 3.98% postgres postgres [.] heapam_scan_analyze_next_tuple\n ---heapam_scan_analyze_next_tuple\n 3.69% 3.69% 0.00% postgres [kernel.kallsyms] [k] pvclock_clocksource_read\n\nMy questions are:\na) does anybody know if it is expected that getrusage() doesn't include readahead times as current thread system time ? (I don't know by may be performed by other kernel threads?) ru_stime is defined as \"This is the total amount of time spent executing in kernel mode\". Maybe the \"executing\" is the keyword here? (waiting != executing?)\n\nb) initially I've wanted to add a new pg_rusage_show_verbose() that would also add ru_inblock, but that wouldn't add much value to the end user. 
Also adding another timing directly around table_scan_analyze_next_block() seems like a bad idea as it involves locking underneath. So I've tried the easiest approach: simply log $pgStatBlockReadTime as strictly I/O time spent in pread() (ReadBuffer_common() already measures time). The attached patch for PgSQL14-devel in heavy I/O conditions (with track_io_timings=on) logs the following: \n\"LOG: automatic analyze of table \"test.public.t1_default\" system usage: IO read time 0.69 s, CPU: user: 0.18 s, system: 0.13 s, elapsed: 0.92 s\"\nMy interpretation would be that IO read time was the most limiting factor (69/92 = 75%), while *CPU* time on the kernel side was just 0.13 s. It could give the end user/DBA the information needed about where the bottleneck is, given autovacuum_vacuum_cost_delay=0. With autovacuum_vacuum_cost_delay>0 maybe it would make sense to also include time spent on sleeping?\n\nc) I'm curious if anybody has any I/O related insights into analyze.c processing, especially related to readaheads? E.g. maybe disabling readahead would help for the PostgreSQL analyze.c use case on NVMe? Is it worthwhile given that only x% of blocks are needed? The only option I'm aware of would be to e.g. hash-partition the table (to introduce parallelism by autovacuums and enable even workers). Any hints or comments?\n\nAll of the above observations are from PostgreSQL 12.4 on Linux kernel 4.14 with ext4/striped dm with 3x-4x NVMEs.\n\n-Jakub Wartak.", "msg_date": "Mon, 26 Oct 2020 12:21:01 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": true, "msg_subject": "automatic analyze: readahead - add \"IO read time\" log message" }, { "msg_contents": "Greetings,\n\n* Jakub Wartak (Jakub.Wartak@tomtom.com) wrote:\n> I have I hope interesting observation (and nano patch proposal) on system where statistics freshness is a critical factor. 
Autovacuum/autogathering statistics was tuned to be pretty very aggressive:\n> autovacuum_vacuum_cost_delay=0 (makes autovacuum_vacuum_cost_limit irrelevant)\n> autovacuum_naptime=1s\n> autovacuum_max_workers=4\n> \n> Some critical table partitions are configured with: autovacuum_analyze_scale_factor=0.001, autovacuum_analyze_threshold=50000 to force auto-analyze jobs pretty often. The interesting logs are like this:\n> automatic analyze of table \"t1\" system usage: CPU: user: 37.52 s, system: 23.01 s, elapsed: 252.14 s\n> automatic analyze of table \"t2\" system usage: CPU: user: 38.70 s, system: 22.63 s, elapsed: 317.33 s\n> automatic analyze of table \"t2\" system usage: CPU: user: 39.38 s, system: 21.43 s, elapsed: 213.58 s\n> automatic analyze of table \"t1\" system usage: CPU: user: 37.91 s, system: 24.49 s, elapsed: 229.45 s\n\nThat's certainly pretty aggressive. :)\n\n> and this is root-cause of my question. As you can see there is huge 3x-4x time discrepancy between \"elapsed\" and user+system which is strange at least for me as there should be no waiting (autovacuum_vacuum_cost_delay=0). According to various tools it is true: Time was wasted somewhere else, but not in the PostgreSQL analyze. The ps(1) and pidstat(1) also report the same for the worker:\n\nThe user/system time there is time-on-CPU (hence the 'CPU: ' prefix).\n\n> 06:56:12 AM PID %usr %system %guest %CPU CPU Command\n> 06:56:13 AM 114774 8.00 10.00 0.00 18.00 18 postgres\n> 06:56:14 AM 114774 8.00 11.00 0.00 19.00 15 postgres\n> 06:56:15 AM 114774 5.00 13.00 0.00 18.00 18 postgres\n> \n> 06:56:17 AM PID kB_rd/s kB_wr/s kB_ccwr/s Command\n> 06:56:18 AM 114774 63746.53 0.00 0.00 postgres\n> 06:56:19 AM 114774 62896.00 0.00 0.00 postgres\n> 06:56:20 AM 114774 62920.00 0.00 0.00 postgres\n> \n> One could argue that such autoanalyze worker could perform 5x more of work (%CPU -> 100%) here. 
The I/O system is not performing a lot (total = 242MB/s reads@22k IOPS, 7MB/s writes@7k IOPS, with service time 0.04ms), although reporting high utilization I'm pretty sure it could push much more. There can be up to 3x-4x of such 70-80MB/s analyzes in parallel (let's say 300MB/s for statistics collection alone).\n\nThe analyze is doing more-or-less random i/o since it's skipping through\nthe table picking out select blocks, not doing regular sequential i/o.\n\n> According to various gdb backtraces, a lot of time is spent here:\n> #0 0x00007f98cdfc9f73 in __pread_nocancel () from /lib64/libpthread.so.0\n> #1 0x0000000000741a16 in pread (__offset=811253760, __nbytes=8192, __buf=0x7f9413ab7280, __fd=<optimized out>) at /usr/include/bits/unistd.h:84\n> #2 FileRead (file=<optimized out>, buffer=0x7f9413ab7280 \"\\037\\005\", amount=8192, offset=811253760, wait_event_info=167772173) at fd.c:1883\n> #3 0x0000000000764b8f in mdread (reln=<optimized out>, forknum=<optimized out>, blocknum=19890902, buffer=0x7f9413ab7280 \"\\037\\005\") at md.c:596\n> #4 0x000000000073d69b in ReadBuffer_common (smgr=<optimized out>, relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=19890902, mode=RBM_NORMAL, strategy=0x1102278, hit=0x7fffba7e2d4f)\n> at bufmgr.c:897\n> #5 0x000000000073e27e in ReadBufferExtended (reln=0x7f98c0c9ded0, forkNum=MAIN_FORKNUM, blockNum=19890902, mode=<optimized out>, strategy=<optimized out>) at bufmgr.c:665\n> #6 0x00000000004c2e2f in heapam_scan_analyze_next_block (scan=<optimized out>, blockno=<optimized out>, bstrategy=<optimized out>) at heapam_handler.c:998\n> #7 0x0000000000597de1 in table_scan_analyze_next_block (bstrategy=<optimized out>, blockno=<optimized out>, scan=0x10c8098) at ../../../src/include/access/tableam.h:1462\n> #8 acquire_sample_rows (onerel=0x7f98c0c9ded0, elevel=13, rows=0x1342e08, targrows=1500000, totalrows=0x7fffba7e3160, totaldeadrows=0x7fffba7e3158) at analyze.c:1048\n> #9 0x0000000000596a50 in do_analyze_rel 
(onerel=0x7f98c0c9ded0, params=0x10744e4, va_cols=0x0, acquirefunc=0x597ca0 <acquire_sample_rows>, relpages=26763525, inh=false,\n> in_outer_xact=false, elevel=13) at analyze.c:502\n> [..]\n> #12 0x00000000006e76b4 in autovacuum_do_vac_analyze (bstrategy=0x1102278, tab=<optimized out>) at autovacuum.c:3118\n> [..]\n\nSure, we're blocked on a read call trying to get the next block.\n\n> The interesting thing that targrows=1.5mlns and that blocks are accessed (as expected) in sorted order:\n> \n> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890910, bstrategy=0x1102278) at heapam_handler.c:984\n> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890912, bstrategy=0x1102278) at heapam_handler.c:984\n> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890922, bstrategy=0x1102278) at heapam_handler.c:984\n> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890935, bstrategy=0x1102278) at heapam_handler.c:984\n> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890996, bstrategy=0x1102278) at heapam_handler.c:984\n\nNot really sure what's interesting here, but it does look like we're\nskipping through the table as expected.\n\n> And probably this explains the discrepancy, perf with CPU usage reporting shows a lot of frames waiting on I/O on readaheads done by ext4, trimmed for clarity:\n> \n> # Children Self sys usr Command Shared Object Symbol\n> 63.64% 0.00% 0.00% 0.00% postgres [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe\n> ---entry_SYSCALL_64_after_hwframe\n> do_syscall_64\n> |--59.66%--sys_pread64\n> | vfs_read\n> | --59.09%--__vfs_read\n> | --58.24%--generic_file_read_iter\n> | |--47.44%--ondemand_readahead\n> | | --46.88%--__do_page_cache_readahead\n> | | |--32.67%--ext4_mpage_readpages\n> | | | |--16.76%--submit_bio\n> | | |--10.23%--blk_finish_plug\n> [..]\n> 63.64% 1.99% 1.99% 0.00% postgres [kernel.kallsyms] [k] do_syscall_64\n> 
|--61.65%--do_syscall_64\n> | |--59.66%--sys_pread64\n> | | vfs_read\n> | | --59.09%--__vfs_read\n> | | --58.24%--generic_file_read_iter\n> | | |--47.44%--ondemand_readahead\n> | | | --46.88%--__do_page_cache_readahead\n> \n> 61.36% 0.00% 0.00% 0.00% postgres postgres [.] FileRead \n> ---FileRead\n> __pread_nocancel\n> --60.51%--entry_SYSCALL_64_after_hwframe\n> do_syscall_64\n> --59.66%--sys_pread64\n> vfs_read\n> --59.09%--__vfs_read\n> --58.24%--generic_file_read_iter\n> |--47.44%--ondemand_readahead\n> | --46.88%--__do_page_cache_readahead\n> \n> 61.36% 0.85% 0.00% 0.85% postgres libpthread-2.17.so [.] __pread_nocancel\n> |--60.51%--__pread_nocancel\n> | entry_SYSCALL_64_after_hwframe\n> | do_syscall_64\n> | --59.66%--sys_pread64\n> | vfs_read\n> | --59.09%--__vfs_read\n> | --58.24%--generic_file_read_iter\n> | |--47.44%--ondemand_readahead\n> | | --46.88%--__do_page_cache_readahead\n> \n> \n> 59.66% 0.00% 0.00% 0.00% postgres [kernel.kallsyms] [k] sys_pread64\n> ---sys_pread64\n> vfs_read\n> --59.09%--__vfs_read\n> --58.24%--generic_file_read_iter\n> |--47.44%--ondemand_readahead\n> | --46.88%--__do_page_cache_readahead\n> | |--32.67%--ext4_mpage_readpages\n> \n\nWith all those 'readahead' calls it certainly makes one wonder if the\nLinux kernel is reading more than just the block we're looking for\nbecause it thinks we're doing a sequential read and will therefore want\nthe next few blocks when, in reality, we're going to skip past them,\nmeaning that any readahead the kernel is doing is likely just wasted\nI/O.\n\n> [..] \n> Perf --no-children also triple confirms that there isn't any function that is burning a lot inside the worker:\n> \n> # Overhead sys usr Command Shared Object Symbol\n> 5.40% 0.00% 5.40% postgres [vdso] [.] __vdso_clock_gettime\n> 5.11% 0.00% 5.11% postgres postgres [.] acquire_sample_rows\n> ---acquire_sample_rows\n> 3.98% 0.00% 3.98% postgres postgres [.] 
heapam_scan_analyze_next_tuple\n> ---heapam_scan_analyze_next_tuple\n> 3.69% 3.69% 0.00% postgres [kernel.kallsyms] [k] pvclock_clocksource_read\n\nSure, makes sense.\n\n> My questions are:\n> a) does anybody know if it is expected that getrusage() doesn't include readahead times as current thread system time ? (I don't know by may be performed by other kernel threads?) ru_stime is defined as \"This is the total amount of time spent executing in kernel mode\". Maybe the \"executing\" is the keyword here? (waiting != executing?)\n\ngetrusage()'s user/system CPU times are reporting time-on-CPU, not\ncounting time blocking for i/o. Waiting isn't the same as executing,\nno.\n\n> b) initially I've wanted to add a new pg_rusage_show_verbose() that would also add ru_inblock, but that wouldn't add much value to the end user. Also adding another timing directly around table_scan_analyze_next_block() seems like the bad idea as it involves locking underneah. So I've tried the most easy approach to simply log $pgStatBlockReadTime as strictly I/O time spent in pread() (ReadBuffer_common() already measures time). The attached patch for PgSQL14-devel in heavy I/O conditions (with track_io_timings=on) logs the following: \n> \"LOG: automatic analyze of table \"test.public.t1_default\" system usage: IO read time 0.69 s, CPU: user: 0.18 s, system: 0.13 s, elapsed: 0.92 s\"\n\nThat definitely seems like a useful thing to include and thanks for the\npatch! Please be sure to register it in the commitfest app:\nhttps://commitfest.postgresql.org\n\n> my interpretation would be that IO reading time was most limiting factor (69/92 = 75%), but *CPU* on kernel side was just 13s. It could give the enduser/DBA the information needed, the information where's the bottleneck given the autovacuum_vacuum_cost_delay=0. 
In autovacuum_vacuum_cost_delay>0 maybe it would make sense to include also time spent on sleeping?\n\nYeah, that would certainly be useful.\n\n> c) I'm curious if anybody has any I/O related insights into analyze.c processing especially related to readaheads? E.g. maybe disabling readahead would help for PostgreSQL analyze.c usecase on NVMe? Is it worth given that only x% of blocks are needed? The only option I'm aware would be to e.g. hash-partition the table (to introduce parallelism by autovacuums and enable even workers). Any hints or comments?\n\nI would think that, ideally, we'd teach analyze.c to work in the same\nway that bitmap heap scans do- that is, use posix_fadvise to let the\nkernel know what pages we're going to want next instead of the kernel\nguessing (incorrectly) or not doing any pre-fetching. I didn't spend a\nlot of time poking, but it doesn't look like analyze.c tries to do any\nprefetching today. In a similar vein, I wonder if VACUUM should be\ndoing prefetching too today, at least when it's skipping through the\nheap based on the visibility map and jumping over all-frozen pages.\n\n> All of the above observations from PostgreSQL 12.4 on Linux kernel 4.14 with ext4/striped dm with 3x-4x NVMEs.\n> \n> -Jakub Wartak.\n\n> diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c\n> index 8af12b5c6b..fea1bd6f44 100644\n> --- a/src/backend/commands/analyze.c\n> +++ b/src/backend/commands/analyze.c\n> @@ -312,6 +312,7 @@ do_analyze_rel(Relation onerel, VacuumParams *params,\n> \tOid\t\t\tsave_userid;\n> \tint\t\t\tsave_sec_context;\n> \tint\t\t\tsave_nestlevel;\n> +\tPgStat_Counter startblockreadtime = 0;\n> \n> \tif (inh)\n> \t\tereport(elevel,\n> @@ -347,6 +348,7 @@ do_analyze_rel(Relation onerel, VacuumParams *params,\n> \tif (IsAutoVacuumWorkerProcess() && params->log_min_duration >= 0)\n> \t{\n> \t\tpg_rusage_init(&ru0);\n> +\t\tstartblockreadtime = pgStatBlockReadTime;\n> \t\tif (params->log_min_duration > 0)\n> 
\t\t\tstarttime = GetCurrentTimestamp();\n> \t}\n> @@ -686,10 +688,11 @@ do_analyze_rel(Relation onerel, VacuumParams *params,\n> \t\t\tTimestampDifferenceExceeds(starttime, GetCurrentTimestamp(),\n> \t\t\t\t\t\t\t\t\t params->log_min_duration))\n> \t\t\tereport(LOG,\n> -\t\t\t\t\t(errmsg(\"automatic analyze of table \\\"%s.%s.%s\\\" system usage: %s\",\n> +\t\t\t\t\t(errmsg(\"automatic analyze of table \\\"%s.%s.%s\\\" system usage: IO read time %.2f s, %s\",\n> \t\t\t\t\t\t\tget_database_name(MyDatabaseId),\n> \t\t\t\t\t\t\tget_namespace_name(RelationGetNamespace(onerel)),\n> \t\t\t\t\t\t\tRelationGetRelationName(onerel),\n> +\t\t\t\t\t\t\t(double) (pgStatBlockReadTime - startblockreadtime)/1000000,\n> \t\t\t\t\t\t\tpg_rusage_show(&ru0))));\n> \t}\n> \n\nHaven't looked too closely at this but in general +1 on the idea and\nthis approach looks pretty reasonable to me. Only thing I can think of\noff-hand is to check how it compares to other places where we report IO\nread time and make sure that it looks similar.\n\nThanks,\n\nStephen", "msg_date": "Mon, 26 Oct 2020 11:44:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message" } ]
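Stephen's suggestion in the thread above — have the sampler hint the kernel about the blocks it already knows it will visit, the way bitmap heap scans use posix_fadvise, instead of letting sequential readahead guess wrongly — might look roughly like the following Python sketch. It is illustrative only (a real change would be C inside acquire_sample_rows()); the 8 kB block size and the prefetch distance of 32 are assumptions, not values taken from the thread.

```python
import os

BLCKSZ = 8192  # assume PostgreSQL's default block size


def read_sample_blocks(path, block_numbers, prefetch_distance=32):
    """Read a sorted list of sampled block numbers from a file, issuing
    explicit prefetch hints instead of relying on sequential readahead."""
    results = []
    with open(path, "rb") as f:
        fd = f.fileno()
        # The access pattern is random from the kernel's point of view,
        # so turn off its sequential readahead heuristics ...
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_RANDOM)
        # ... and prime an initial window of explicit prefetch requests.
        for blkno in block_numbers[:prefetch_distance]:
            os.posix_fadvise(fd, blkno * BLCKSZ, BLCKSZ,
                             os.POSIX_FADV_WILLNEED)
        for i, blkno in enumerate(block_numbers):
            # Keep prefetch_distance requests in flight ahead of the read.
            if i + prefetch_distance < len(block_numbers):
                nxt = block_numbers[i + prefetch_distance]
                os.posix_fadvise(fd, nxt * BLCKSZ, BLCKSZ,
                                 os.POSIX_FADV_WILLNEED)
            f.seek(blkno * BLCKSZ)
            results.append(f.read(BLCKSZ))
    return results
```

The WILLNEED hints let the kernel fetch exactly the sampled blocks in the background, so the elapsed time tracks the device's ability to service overlapping random reads rather than one synchronous pread() at a time.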
[ { "msg_contents": "The attached patches propose new interfaces for exposing more configuration\nand versioning information from libpq at runtime. They are to be used by\napplications to obtain finer-grained information about libpq's\nconfiguration (SSL, GSSAPI, etc), to identify libpq binaries, and by\napplications that use libpq to report diagnostic information.\n\n\nPatch 0001 adds PQlibInfo(), which returns an array of key/value\ndescription items reporting on configuration like the full version string,\nSSL support, gssapi support, thread safety, default port and default unix\nsocket path. This is for application use and application diagnostics. It\nalso adds PQlibInfoPrint() which dumps PQlibInfo() keys/values to stdout.\nSee the commit message in patch 0001 for details.\n\n\nPatch 0002 exposes LIBPQ_VERSION_STR, LIBPQ_VERSION_NUM and\nLIBPQ_CONFIGURE_ARGS symbols in the dynamic symbol table. These can be\naccessed by a debugger even when the library cannot be loaded or executed,\nand unlike macros are available even in a stripped executable. So they can\nbe used to identify a libpq binary found in the wild. Their storage is\nshared with PQlibInfo()'s static data, so they only cost three symbol table\nentries.\n\n$ cp ./build/src/interfaces/libpq/libpq.so libpq.so.stripped\n$ strip libpq.so.stripped\n$ gdb -batch -ex 'p (int)LIBPQ_VERSION_NUM' -ex 'p (const char\n*)LIBPQ_VERSION_STR' -ex 'p (const char *)LIBPQ_CONFIGURE_ARGS'\n./libpq.so.stripped\n$1 = 140000\n$2 = 0x285f0 \"PostgreSQL 14devel on x86_64-pc-linux-gnu, ....\"\n$3 = 0x28660 \" '--cache-file=config.cache-'\n'--prefix=/home/craig/pg/master' '--enable-debug' '--enable-cassert'\n'--enable-tap-tests' '--enable-dtrace' 'CC=/usr/lib64/ccache/gcc'\n'CFLAGS=-Og -ggdb3' ...\"\n\n\n\nPatch 0003 allows libpq.so to be executed directly from the command line to\nprint its version, configure arguments etc exactly as PQlibInfoPrint()\nwould output them. 
This is only enabled on x64 linux for now but can be\nextended to other targets quite simply.\n\n$ ./build/src/interfaces/libpq/libpq.so\nVERSION_NUM: 140000\nVERSION: PostgreSQL 14devel on x86_64-pc-linux-gnu, compiled by gcc (GCC)\n10.2.1 20200723 (Red Hat 10.2.1-1), 64-bit\nCONFIGURE_ARGS: '--cache-file=config.cache-'\n'--prefix=/home/craig/pg/master' '--enable-debug' '--enable-cassert'\n'--enable-tap-tests' '--enable-dtrace' 'CC=/usr/lib64/ccache/gcc'\n'CFLAGS=-Og -ggdb3' 'CPPFLAGS=' 'CPP=/usr/lib64/ccache/gcc -E'\nUSE_SSL: 0\nENABLE_GSS: 0\nENABLE_THREAD_SAFETY: 1\nHAVE_UNIX_SOCKETS: 1\nDEFAULT_PGSOCKET_DIR: /tmp\nDEF_PGPORT: 5432", "msg_date": "Mon, 26 Oct 2020 20:56:57 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "PATCH: Report libpq version and configuration" }, { "msg_contents": "On 2020-Oct-26, Craig Ringer wrote:\n\n> Patch 0001 adds PQlibInfo(), which returns an array of key/value\n> description items reporting on configuration like the full version string,\n> SSL support, gssapi support, thread safety, default port and default unix\n> socket path. This is for application use and application diagnostics. It\n> also adds PQlibInfoPrint() which dumps PQlibInfo() keys/values to stdout.\n> See the commit message in patch 0001 for details.\n\nSounds useful. I'd have PQlibInfoPrint(FILE *) instead, so you can pass\nstdout or whichever fd you want.\n\n> Patch 0002 exposes LIBPQ_VERSION_STR, LIBPQ_VERSION_NUM and\n> LIBPQ_CONFIGURE_ARGS symbols in the dynamic symbol table. These can be\n> accessed by a debugger even when the library cannot be loaded or executed,\n> and unlike macros are available even in a stripped executable. So they can\n> be used to identify a libpq binary found in the wild. Their storage is\n> shared with PQlibInfo()'s static data, so they only cost three symbol table\n> entries.\n\nInteresting. Is this real-world useful? 
I'm thinking most of the time\nI'd just run the library, but maybe you know of cases where that doesn't\nwork?\n\n> Patch 0003 allows libpq.so to be executed directly from the command line to\n> print its version, configure arguments etc exactly as PQlibInfoPrint()\n> would output them. This is only enabled on x64 linux for now but can be\n> extended to other targets quite simply.\n\n+1 --- to me this is the bit that would be most useful, I expect.\n\n\n", "msg_date": "Mon, 26 Oct 2020 13:41:46 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PATCH: Report libpq version and configuration" }, { "msg_contents": "At 2020-10-26 20:56:57 +0800, craig.ringer@enterprisedb.com wrote:\n>\n> $ ./build/src/interfaces/libpq/libpq.so\n> VERSION_NUM: 140000\n> VERSION: PostgreSQL 14devel on x86_64-pc-linux-gnu, compiled by gcc (GCC)\n> 10.2.1 20200723 (Red Hat 10.2.1-1), 64-bit\n> CONFIGURE_ARGS: '--cache-file=config.cache-'\n> '--prefix=/home/craig/pg/master' '--enable-debug' '--enable-cassert'\n> '--enable-tap-tests' '--enable-dtrace' 'CC=/usr/lib64/ccache/gcc'\n> 'CFLAGS=-Og -ggdb3' 'CPPFLAGS=' 'CPP=/usr/lib64/ccache/gcc -E'\n> USE_SSL: 0\n> ENABLE_GSS: 0\n> ENABLE_THREAD_SAFETY: 1\n> HAVE_UNIX_SOCKETS: 1\n> DEFAULT_PGSOCKET_DIR: /tmp\n> DEF_PGPORT: 5432\n\nThis is excellent.\n\n-- Abhijit\n\n\n", "msg_date": "Mon, 26 Oct 2020 22:25:02 +0530", "msg_from": "Abhijit Menon-Sen <ams@toroid.org>", "msg_from_op": false, "msg_subject": "Re: PATCH: Report libpq version and configuration" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Oct-26, Craig Ringer wrote:\n>> also adds PQlibInfoPrint() which dumps PQlibInfo() keys/values to stdout.\n\n> Sounds useful. I'd have PQlibInfoPrint(FILE *) instead, so you can pass\n> stdout or whichever fd you want.\n\n+1. Are we concerned about translatability of these strings? 
I think\nI'd vote against, as it would complicate applications, but it's worth\nthinking about it now not later.\n\n>> Patch 0002 exposes LIBPQ_VERSION_STR, LIBPQ_VERSION_NUM and\n>> LIBPQ_CONFIGURE_ARGS symbols in the dynamic symbol table. These can be\n>> accessed by a debugger even when the library cannot be loaded or executed,\n>> and unlike macros are available even in a stripped executable. So they can\n>> be used to identify a libpq binary found in the wild. Their storage is\n>> shared with PQlibInfo()'s static data, so they only cost three symbol table\n>> entries.\n\n> Interesting. Is this real-world useful?\n\n-1, I think this is making way too many assumptions about the content\nand format of a shlib.\n\n>> Patch 0003 allows libpq.so to be executed directly from the command line to\n>> print its version, configure arguments etc exactly as PQlibInfoPrint()\n>> would output them. This is only enabled on x64 linux for now but can be\n>> extended to other targets quite simply.\n\n> +1 --- to me this is the bit that would be most useful, I expect.\n\nAgain, I'm not exactly excited about this. I do not one bit like\npatches that assume that x64 linux is the universe, or at least\nall of it that need be catered to. Reminds me of people who thought\nWindows was the universe, not too many years ago.\n\nI'd rather try to set this up so that some fairly standard tooling\nlike \"strings\" + \"grep\" can be used to pull out the info. 
Sure,\nit would be less convenient, but honestly how often is this really\ngoing to be necessary?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Oct 2020 12:56:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PATCH: Report libpq version and configuration" }, { "msg_contents": "On Tue, Oct 27, 2020 at 12:41 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2020-Oct-26, Craig Ringer wrote:\n>\n> > Patch 0001 adds PQlibInfo(), which returns an array of key/value\n> > description items reporting on configuration like the full version\n> string,\n> > SSL support, gssapi support, thread safety, default port and default unix\n> > socket path. This is for application use and application diagnostics. It\n> > also adds PQlibInfoPrint() which dumps PQlibInfo() keys/values to stdout.\n> > See the commit message in patch 0001 for details.\n>\n> Sounds useful. I'd have PQlibInfoPrint(FILE *) instead, so you can pass\n> stdout or whichever fd you want.\n>\n\nThe decision not to do so was deliberate. On any platform where a shared\nlibrary could be linked to a different C runtime library than the main\nexecutable or other libraries it is not safe to pass a FILE*. This is most\ncommon on Windows.\n\nI figured it's just a trivial wrapper anyway, so people can just write or\ncopy it if they really care.\n\n> Patch 0002 exposes LIBPQ_VERSION_STR, LIBPQ_VERSION_NUM and\n> > LIBPQ_CONFIGURE_ARGS symbols in the dynamic symbol table. These can be\n> > accessed by a debugger even when the library cannot be loaded or\n> executed,\n> > and unlike macros are available even in a stripped executable. So they\n> can\n> > be used to identify a libpq binary found in the wild. Their storage is\n> > shared with PQlibInfo()'s static data, so they only cost three symbol\n> table\n> > entries.\n>\n> Interesting. Is this real-world useful? 
I'm thinking most of the time\n> I'd just run the library, but maybe you know of cases where that doesn't\n> work?\n>\n\nIt was prompted by a support conversation about how to identify a libpq.\nSo I'd say yes.\n\nIn that case the eventual approach used was to use Python's ctypes to\ndynamically load libpq then call PQlibVersion().\n\n> Patch 0003 allows libpq.so to be executed directly from the command line\n> to\n> > print its version, configure arguments etc exactly as PQlibInfoPrint()\n> > would output them. This is only enabled on x64 linux for now but can be\n> > extended to other targets quite simply.\n>\n> +1 --- to me this is the bit that would be most useful, I expect.\n>\n\nIt's also kinda cool.\n\nBut it's using a bit of a platform quirk that's not supported by the\ntoolchain as well as I'd really like - annoyingly, when you pass a\n--entrypoint to GNU ld or to LLVM's ld.lld, it should really emit the\ndefault .interp section to point to /bin/ld.so.2 or\n/lib64/ld-linux-x86-64.so.2 as appropriate. But when building -shared they\ndon't seem to want to, nor do they expose a sensible macro that lets you\nget the default string yourself.\n\nSo I thought there was a moderate to high chance that this patch would trip\nsomeone's \"yuck\" meter.", "msg_date": "Tue, 27 Oct 2020 08:51:47 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Report libpq version and configuration" }, { "msg_contents": "On Tue, Oct 27, 2020 at 12:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2020-Oct-26, Craig Ringer wrote:\n> >> also adds PQlibInfoPrint() which dumps PQlibInfo() keys/values to stdout.\n>\n> > Sounds useful. I'd have PQlibInfoPrint(FILE *) instead, so you can pass\n> > stdout or whichever fd you want.\n>\n> +1. Are we concerned about translatability of these strings? I think\nI'd vote against, as it would complicate applications, but it's worth\nthinking about it now not later.\n\n\nIt's necessary not to translate the key names, they are identifiers\nnot descriptive text. I don't object to having translations too, but\nthe translation teams have quite enough to do already with user-facing\ntext that will get regularly seen. So while it'd be potentially\ninteresting to expose translated versions too, I'm not entirely\nconvinced. It's a bit like translating macro names. You could, but ...\nwhy?\n\n> >> Patch 0002 exposes LIBPQ_VERSION_STR, LIBPQ_VERSION_NUM and\n> >> LIBPQ_CONFIGURE_ARGS symbols in the dynamic symbol table. These can be\n> >> accessed by a debugger even when the library cannot be loaded or executed,\n> >> and unlike macros are available even in a stripped executable. So they can\n> >> be used to identify a libpq binary found in the wild. Their storage is\n> >> shared with PQlibInfo()'s static data, so they only cost three symbol table\n> >> entries.\n>\n> > Interesting. 
Is this real-world useful?\n>\n> -1, I think this is making way too many assumptions about the content\n> and format of a shlib.\n\n\nI'm not sure I understand what assumptions you're concerned about or\ntheir consequences. On any ELF it should be just fine, and Mach-O\nshould be too. I do need to check that MSVC generates direct symbols\nfor WIN32 PE, not indirect thunked data symbols.\n\nIt doesn't help that I failed to supply the final revision of this\npatch, which does this:\n\n-const char * const LIBPQ_VERSION_STR = PG_VERSION_STR;\n+const char LIBPQ_VERSION_STR[] = PG_VERSION_STR;\n\n-const char * const LIBPQ_CONFIGURE_ARGS = CONFIGURE_ARGS;\n+const char LIBPQ_CONFIGURE_ARGS[] = CONFIGURE_ARGS;\n\n... to properly ensure the string symbols go into the read-only data section:\n\n$ eu-nm --defined-only -D $LIBPQ | grep LIBPQ_\nLIBPQ_CONFIGURE_ARGS |0000000000028640|GLOBAL|OBJECT\n|00000000000000e4| libpq-version.c:74|.rodata\nLIBPQ_VERSION_NUM |0000000000028620|GLOBAL|OBJECT\n|0000000000000004| libpq-version.c:75|.rodata\nLIBPQ_VERSION_STR |0000000000028740|GLOBAL|OBJECT\n|000000000000006c| libpq-version.c:73|.rodata\n\nI don't propose these to replace information functions or macros, I'm\nsuggesting we add them as an aid to tooling and for debugging. I have\nhad quite enough times when I've faced a mystery libpq, and it's not\nalways practical in a given target environment to just compile a tool\nto print the version.\n\nIn addition to easy binary identification, having symbolic references\nto the version info is useful for dynamic tracing tools like perf and\nsystemtap - they cannot execute functions directly in the target\naddress space, but they can read data symbols. 
I actually want to\nexpose matching symbols in postgres itself, for the use of dynamic\ntracing utilities, so they can autodetect the target postgres at\nruntime even without -ggdb3 level debuginfo with macros, and correctly\nadapt to version specifics of the target postgres.\n\nIn terms of standard tooling here are some different ways you can get\nthis information symbolically.\n\n$ LIBPQ=/path/to/libpq.so\n\n$ gdb -batch -ex 'p (int) LIBPQ_VERSION_NUM' -ex 'p (const char *)\nLIBPQ_VERSION_STR' $LIBPQ\n$1 = 140000\n$2 = \"PostgreSQL 14devel on x86_64-pc-linux-gnu, compiled by gcc (GCC)\n10.2.1 20200723 (Red Hat 10.2.1-1), 64-bit\"\n\n$ perl getpqver.pl $LIBPQ\nLIBPQ_VERSION_NUM=140000\nLIBPQ_VERSION_STR=PostgreSQL 14devel on x86_64-pc-linux-gnu, compiled\nby gcc (GCC) 10.2.1 20200723 (Red Hat 10.2.1-1), 64-bit\n\nI've attached getpqver.pl. It uses eu-nm from elfutils to get symbol\noffset and length, which is pretty standard stuff. And it's quite\nsimple to adapt it to use legacy binutils \"nm\" by invoking\n\n nm --dynamic --defined -S $LIBPQ\n\nand tweaking the reader.\n\nIf you really want something strings-able, I'm sure that's reasonably\nfeasible, but I don't think it's particularly unreasonable to expect\nto be able to inspect the symbol table using appropriate platform\ntools or a simple debugger command.\n\n> Again, I'm not exactly excited about this. I do not one bit like\n> patches that assume that x64 linux is the universe, or at least\n> all of it that need be catered to. Reminds me of people who thought\n> Windows was the universe, not too many years ago.\n\nYeah. I figured you'd say that, and don't disagree. 
It's why I split\nthis patch out - it's kind of a sacrificial patch.\n\nI actually wrote this part first.\n\nThen I wrote PQlibInfo() when I realised that there was no sensible\npre-existing way to get the information I wanted to dump from libpq at\nthe API level, and adapted the executable .so output to call it.\n\n> I'd rather try to set this up so that some fairly standard tooling\n> like \"strings\" + \"grep\" can be used to pull out the info. Sure,\n> it would be less convenient, but honestly how often is this really\n> going to be necessary?\n\n\neu-readelf and objdump are pretty standard tooling. But I really don't\nmuch care if the executable .so hack gets in, it's mostly a fun PoC.\nIf you can execute libpq then the dynamic linker must be able to load\nit and resolve its symbols, in which case you can probably just as\neasily do this:\n\n python -c \"import sys, ctypes;\nctypes.cdll.LoadLibrary(sys.argv[1]).PQlibInfoPrint()\"\nbuild/src/interfaces/libpq/libpq.so\n\nor compile and run a trivial C one-liner.\n\nAs much as anything I thought it was a good way to stimulate\ndiscussion and give you something easy to reject ;)", "msg_date": "Tue, 27 Oct 2020 12:49:37 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Report libpq version and configuration" }, { "msg_contents": "On 2020-Oct-27, Craig Ringer wrote:\n\n> On Tue, Oct 27, 2020 at 12:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > +1. Are we concerned about translatability of these strings? I think\n> > I'd vote against, as it would complicate applications, but it's worth\n> > thinking about it now not later.\n> \n> It's necessary not to translate the key names, they are identifiers\n> not descriptive text. I don't object to having translations too, but\n> the translation teams have quite enough to do already with user-facing\n> text that will get regularly seen. 
So while it'd be potentially\n> interesting to expose translated versions too, I'm not entirely\n> convinced. It's a bit like translating macro names. You could, but ...\n> why?\n\nI don't think translating these values is useful for much. I see it\nsimilar to translating pg_controldata output: it is troublesome (to\npg_upgrade for instance) and serves no public that I know of.\n\n\n> > Again, I'm not exactly excited about this. I do not one bit like\n> > patches that assume that x64 linux is the universe, or at least\n> > all of it that need be catered to. Reminds me of people who thought\n> > Windows was the universe, not too many years ago.\n> \n> Yeah. I figured you'd say that, and don't disagree. It's why I split\n> this patch out - it's kind of a sacrificial patch.\n\nWell, if we can make it run in more systems than just Linux, then it\nseems worth having. The submitted patch seems a little bit on the\nnaughty side.\n\n\n", "msg_date": "Mon, 9 Nov 2020 13:08:22 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PATCH: Report libpq version and configuration" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Well, if we can make it run in more systems than just Linux, then it\n> seems worth having. The submitted patch seems a little bit on the\n> naughty side.\n\nI agree that the facility seems possibly useful, as long as we can\nminimize its platform dependency. Just embedding some strings, as\nI suggested upthread, seems like it'd go far in that direction.\nYeah, you could spend a lot of effort to make it a bit more user\nfriendly, but is the effort really going to be repaid? 
The use\ncase for this isn't that large, I don't think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Nov 2020 11:33:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PATCH: Report libpq version and configuration" }, { "msg_contents": "On Tue, Nov 10, 2020 at 12:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Well, if we can make it run in more systems than just Linux, then it\n> > seems worth having. The submitted patch seems a little bit on the\n> > naughty side.\n>\n> I agree that the facility seems possibly useful, as long as we can\n> minimize its platform dependency. Just embedding some strings, as\n> I suggested upthread, seems like it'd go far in that direction.\n> Yeah, you could spend a lot of effort to make it a bit more user\n> friendly, but is the effort really going to be repaid? The use\n> case for this isn't that large, I don't think.\n>\n\nThe reason I want to expose symbols is mainly for tracing tooling - perf,\nsystemtap, etc.\n\nI thought it'd make sense to also provide another way to identify the libpq\nbinary.\n\nMy other hesitation about using a \"strings libpq.so\" approach is that it's\nnot something I'd be super happy about automating and relying on in scripts\netc. It could break depending on how the compiler decides to arrange things\nor due to unrelated changes in libpq that create similar-looking strings\nlater. I'd prefer to do it deterministically. You can already use \"strings\"\nto identify an unstripped binary built with -ggdb3 (macros in DWARF\ndebuginfo), but we don't compile the PG_VERSION into the binary, so you\ncan't even get the basic version string like \"postgres (PostgreSQL) 11.9\"\nfrom 'strings'.\n\nThe whole PQlibInfo() thing came about because I thought it'd be\npotentially useful. 
I've had issues before with applications being built\nagainst a newer version of libpq than what they're linked against, and it's\nbeen rather frustrating to make the app tolerant of that. But it can be\nsolved (clumsily) using various existing workarounds.\n\nThe main things I'd really like to get in place are a way to get the\nversion as an ELF data symbol, and a simple way to ID the binary.\n\nSo the minimal change would be to declare:\n\nconst char LIBPQ_VERSION_STR[] = PG_VERSION_STR;\nconst int LIBPQ_VERSION_NUM = PG_VERSION_NUM;\n\nthen change PQgetVersion() to return LIBPQ_VERSION_NUM and add a\nPQgetVersionStr() that returns LIBPQ_VERSION_STR.\n\nThat OK with you?", "msg_date": "Tue, 10 Nov 2020 14:22:16 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Report libpq version and configuration" }, { "msg_contents": "On Tue, Nov 10, 2020 at 2:22 PM Craig Ringer <craig.ringer@enterprisedb.com>\nwrote:\n\n>\n> The main things I'd really like to get in place are a way to get the\n> version as an ELF data symbol, and a simple way to ID the binary.\n>\n> So the minimal change would be to declare:\n>\n> const char LIBPQ_VERSION_STR[] = PG_VERSION_STR;\n> const int LIBPQ_VERSION_NUM = PG_VERSION_NUM;\n>\n> then change PQgetVersion() to return LIBPQ_VERSION_NUM and add a\n> PQgetVersionStr() that returns LIBPQ_VERSION_STR.\n>\n> That OK with you?\n>\n\nProposed minimal patch attached.", "msg_date": "Wed, 11 Nov 2020 12:11:05 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Report libpq version and configuration" }, {
"msg_contents": "Hi Craig,\n\nOn Wed, Nov 11, 2020 at 1:11 PM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 2:22 PM Craig Ringer <craig.ringer@enterprisedb.com> wrote:\n>>\n>>\n>> The main things I'd really like to get in place are a way to get the version as an ELF data symbol, and a simple way to ID the binary.\n>>\n>> So the minimal change would be to declare:\n>>\n>> const char LIBPQ_VERSION_STR[] = PG_VERSION_STR;\n>> const int LIBPQ_VERSION_NUM = PG_VERSION_NUM;\n>>\n>> then change PQgetVersion() to return LIBPQ_VERSION_NUM and add a PQgetVersionStr() that returns LIBPQ_VERSION_STR.\n>>\n>> That OK with you?\n>\n>\n> Proposed minimal patch attached.\n\nYou sent in your patch, v2-0001-Add-PQlibVersionString-to-libpq.patch\nto pgsql-hackers on Nov 11, but you did not post it to the next\nCommitFest[1]. If this was intentional, then you need to take no\naction. However, if you want your patch to be reviewed as part of the\nupcoming CommitFest, then you need to add it yourself before\n2021-01-01 AOE[2]. Thanks for your contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 28 Dec 2020 19:14:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Report libpq version and configuration" } ]
[ { "msg_contents": "Hello, hackers!\n\nNovember commitfest will start just in a few days.\nI'm happy to volunteer to be the CFM for this one. With a help of \nGeorgios Kokolatos [1].\n\nIt's time to register your patch in the commitfest, if not yet.\n\nIf you already have a patch in the commitfest, update its status and \nmake sure it still applies and that the tests pass. Check the state at  \nhttp://cfbot.cputube.org/\n\nIf there is a long-running stale discussion, please send a short summary \nupdate about its current state, open questions, and TODOs. I hope, it \nwill encourage reviewers to pay more attention to the thread.\n\n[1] \nhttps://www.postgresql.org/message-id/AxH0n_zLwwJ0MBN3uJpHfYDkV364diOGhtpLAv0OC0qHLN8ClyPsbRi1fSUAJLJZzObZE_y1qc-jqGravjIMoxVrdtLm74HmTUeIPWWkmSg%3D%40pm.me\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 26 Oct 2020 21:09:17 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Commitfest 2020-11" }, { "msg_contents": "On Mon, Oct 26, 2020 at 3:09 PM Anastasia Lubennikova <\na.lubennikova@postgrespro.ru> wrote:\n>\n> Hello, hackers!\n>\n> November commitfest will start just in a few days.\n> I'm happy to volunteer to be the CFM for this one. With a help of\n> Georgios Kokolatos [1].\n>\n> It's time to register your patch in the commitfest, if not yet.\n>\n> If you already have a patch in the commitfest, update its status and\n> make sure it still applies and that the tests pass. Check the state at\n> http://cfbot.cputube.org/\n>\n> If there is a long-running stale discussion, please send a short summary\n> update about its current state, open questions, and TODOs. 
I hope, it\n> will encourage reviewers to pay more attention to the thread.\n>\n\nAwesome!\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Mon, 26 Oct 2020 16:47:30 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-11" }, { "msg_contents": "On Mon, Oct 26, 2020 at 09:09:17PM +0300, Anastasia Lubennikova wrote:\n> November commitfest will start just in a few days.\n> I'm happy to volunteer to be the CFM for this one. 
With a help of Georgios\n> Kokolatos [1].\n\nThanks to both of you for volunteering.\n--\nMichael", "msg_date": "Tue, 27 Oct 2020 09:21:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-11" }, { "msg_contents": "On Tue, Oct 27, 2020 at 8:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 26, 2020 at 09:09:17PM +0300, Anastasia Lubennikova wrote:\n> > November commitfest will start just in a few days.\n> > I'm happy to volunteer to be the CFM for this one. With a help of Georgios\n> > Kokolatos [1].\n>\n> Thanks to both of you for volunteering.\n\nThanks a lot to both of you!\n\n\n", "msg_date": "Tue, 27 Oct 2020 10:11:25 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-11" }, { "msg_contents": "Hi everyone,\n\nNovember commitfest is now in progress!\nMe and Georgios Kokolatos are happy to volunteer to manage it.\n\nDuring this CF, I want to pay more attention to a long-living issues.\n\nCurrent state for the Commitfest is:\n\nNeeds review: 154\nWaiting on Author: 32\nReady for Committer: 20\nCommitted: 32\nWithdrawn: 5\nRejected: 1\nTotal: 244\n\nWe have quite a few ReadyForCommitter patches of a different size and \ncomplexity.\n\nPlease, if you have submitted patches in this CF make sure that you are \nalso reviewing patches of a similar number and complexity. 
The CF cannot \nmove forward without patch review.\n\nAlso, check the state of your patch at http://cfbot.cputube.org/\n\nHappy hacking!\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 2 Nov 2020 13:20:31 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2020-11" }, { "msg_contents": "Now that we are halfway through the commitfest, the status breakdown \nlooks like this:\n\nNeeds review: 116\nWaiting on Author: 45\nReady for Committer: 22\nCommitted: 51\nReturned with Feedback: 1\nWithdrawn: 8\nRejected: 1\nTotal: 244\n\nwhich means we have reached closure on a quarter of the patches. And \nmany discussions have significantly moved forward. Keep it up and don't \nwait for the deadline commitfest)\n\nMost inactive discussions and patches which didn't apply were notified \non their respective threads. If there will be no response till the last \ndays of the CF they will be considered stalled and returned with feedback.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 17 Nov 2020 19:33:29 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2020-11" } ]
[ { "msg_contents": "Hi all,\n\nAs you all already know Postgres supports functions in index expressions\n(marked as immutable ofc) and for this special index the ANALYZE command\ncreates some statistics (new pg_statistic entry) about it.\n\nThe problem is just after creating a new index or rebuilding concurrently\n(using the new REINDEX .. CONCURRENTLY or the old manner creating new one\nand then swapping) we need to run ANALYZE to update statistics but we don't\nmention it in any part of our documentation.\n\nLast weekend Gitlab went down because the lack of an ANALYZE after\nrebuilding concurrently a functional index and they followed the\nrecommendation we have into our documentation [1] about how to rebuild it\nconcurrently, but we don't warn users about the ANALYZE after.\n\nWould be nice if add some information about it into our docs but not sure\nwhere. I'm thinking about:\n- doc/src/sgml/ref/create_index.sgml\n- doc/src/sgml/maintenance.sgml (routine-reindex)\n\nThoughts?\n\n[1]\nhttps://gitlab.com/gitlab-com/gl-infra/production/-/issues/2885#note_436310499\n\n-- \n Fabrízio de Royes Mello\n PostgreSQL Developer at OnGres Inc. - https://ongres.com", "msg_date": "Mon, 26 Oct 2020 19:08:14 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Mon, Oct 26, 2020 at 3:08 PM Fabrízio de Royes Mello <\nfabriziomello@gmail.com> wrote:\n\n> Hi all,\n>\n> As you all already know Postgres supports functions in index expressions\n> (marked as immutable ofc) and for this special index the ANALYZE command\n> creates some statistics (new pg_statistic entry) about it.\n>\n> The problem is just after creating a new index or rebuilding concurrently\n> (using the new REINDEX .. 
CONCURRENTLY or the old manner creating new one\n> and then swapping) we need to run ANALYZE to update statistics but we don't\n> mention it in any part of our documentation.\n>\n> Last weekend Gitlab went down because the lack of an ANALYZE after\n> rebuilding concurrently a functional index and they followed the\n> recommendation we have into our documentation [1] about how to rebuild it\n> concurrently, but we don't warn users about the ANALYZE after.\n>\n> Would be nice if add some information about it into our docs but not sure\n> where. I'm thinking about:\n> - doc/src/sgml/ref/create_index.sgml\n> - doc/src/sgml/maintenance.sgml (routine-reindex)\n>\n> Thoughts?\n>\n> [1]\n> https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2885#note_436310499\n>\n\nIt would seem preferable to call the lack of auto-analyzing after these\noperations a bug and back-patch a fix that injects an analyze side-effect\njust before their completion. It doesn't have to be smart either,\nanalyzing things even if the created (or newly validated) index doesn't\nhave statistics of its own isn't a problem in my book.\n\nDavid J.", "msg_date": "Mon, 26 Oct 2020 15:46:10 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Mon, Oct 26, 2020 at 3:46 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> It would seem preferable to call the lack of auto-analyzing after these\n> operations a bug and back-patch a fix that injects an analyze side-effect\n> just before their completion. 
It doesn't have to be smart either,\n> analyzing things even if the created (or newly validated) index doesn't\n> have statistics of its own isn't a problem in my book.\n>\n\n+1 to consider it as a major problem of CREATE INDEX [CONCURRENTLY] for\nindexes on expressions, it's very easy to forget what I've observed many\ntimes.\n\nAlthough, this triggers a question – should ANALYZE be automated in, say,\npg_restore as well?\n\nAnd another question: how ANALYZE needs to be run? If it's under the\nuser's control, there is an option to use vacuumdb --analyze and benefit\nfrom using -j to parallelize the work (and, in some cases, benefit from\nusing --analyze-in-stages). If we had ANALYZE as a part of building indexes\non expressions, should it be parallelized to the same extent as index\ncreation (controlled by max_parallel_maintenance_workers)?\n\nThanks,\nNik", "msg_date": "Mon, 26 Oct 2020 18:29:07 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Monday, October 26, 2020, Nikolay Samokhvalov <samokhvalov@gmail.com>\nwrote:\n>\n>\n> Although, this triggers a question – should ANALYZE be automated in, say,\n> pg_restore as well?\n>\n\nIndependent concern.\n\n\n>\n> And another question: how ANALYZE needs to be run? If it's under the\n> user's control, there is an option to use vacuumdb --analyze and benefit\n> from using -j to parallelize the work (and, in some cases, benefit from\n> using --analyze-in-stages). If we had ANALYZE as a part of building indexes\n> on expressions, should it be parallelized to the same extent as index\n> creation (controlled by max_parallel_maintenance_workers)?\n>\n\nNone of that seems relevant here. The only relevant parameter I see is\nwhat to specify for “table_and_columns”.\n\nDavid J.", "msg_date": "Mon, 26 Oct 2020 19:03:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Mon, Oct 26, 2020 at 7:03 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Monday, October 26, 2020, Nikolay Samokhvalov <samokhvalov@gmail.com>\n> wrote:\n>>\n>> Although, this triggers a question – should ANALYZE be automated in, say,\n>> pg_restore as well?\n>>\n>\n> Independent concern.\n>\n\nIt's the same class of issues – after we created some objects, we lack\nstatistics and willing to automate its collection. If the approach is\nautomated in one case, it should be automated in the others, for\nconsistency.\n\nAnd another question: how ANALYZE needs to be run? If it's under the\n>> user's control, there is an option to use vacuumdb --analyze and benefit\n>> from using -j to parallelize the work (and, in some cases, benefit from\n>> using --analyze-in-stages). If we had ANALYZE as a part of building indexes\n>> on expressions, should it be parallelized to the same extent as index\n>> creation (controlled by max_parallel_maintenance_workers)?\n>>\n>\n> None of that seems relevant here. The only relevant parameter I see is\n> what to specify for “table_and_columns”.\n>\n\nI'm not sure I follow.\n\nThanks,\nNik", "msg_date": "Mon, 26 Oct 2020 21:44:42 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Mon, Oct 26, 2020 at 3:08 PM Fabrízio de Royes Mello <\nfabriziomello@gmail.com> wrote:\n\n> Would be nice if add some information about it into our docs but not sure\n> where. I'm thinking about:\n> - doc/src/sgml/ref/create_index.sgml\n> - doc/src/sgml/maintenance.sgml (routine-reindex)\n>\n\nAttaching the patches for the docs, one for 11 and older, and another for\n12+ (which have REINDEX CONCURRENTLY not suffering from lack of ANALYZE).\n\nI still think that automating is the right thing to do but of course, it's\na much bigger topic that a quick fix dor the docs.", "msg_date": "Tue, 27 Oct 2020 00:12:00 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Mon, Oct 26, 2020 at 7:46 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n>\n> It would seem preferable to call the lack of auto-analyzing after these\noperations a bug and back-patch a fix that injects an analyze side-effect\njust before their completion. 
It doesn't have to be smart either,\nanalyzing things even if the created (or newly validated) index doesn't\nhave statistics of its own isn't a problem in my book.\n>\n\nWhen we create a new table or index they will not have statistics until an\nANALYZE happens. This is the default behaviour and I think is not a big\nproblem here, but we need to add some note on docs about the need of\nstatistics for indexes on expressions.\n\nBut IMHO there is a misbehaviour with the implementation of CONCURRENTLY on\nREINDEX because running it will lose the statistics. Have a look the\nexample below:\n\nfabrizio=# SELECT version();\n version\n\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 14devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit\n(1 row)\n\nfabrizio=# CREATE TABLE t(f1 BIGSERIAL PRIMARY KEY, f2 TEXT) WITH\n(autovacuum_enabled = false);\nCREATE TABLE\nfabrizio=# INSERT INTO t(f2) SELECT repeat(chr(65+(random()*26)::int),\n(random()*300)::int) FROM generate_series(1, 10000);\nINSERT 0 10000\nfabrizio=# CREATE INDEX t_idx2 ON t(lower(f2));\nCREATE INDEX\nfabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =\n't_pkey'::regclass;\n count\n-------\n 0\n(1 row)\n\nfabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =\n't_idx2'::regclass;\n count\n-------\n 0\n(1 row)\n\nfabrizio=# ANALYZE t;\nANALYZE\nfabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =\n't_pkey'::regclass;\n count\n-------\n 0\n(1 row)\n\nfabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =\n't_idx2'::regclass;\n count\n-------\n 1\n(1 row)\n\nfabrizio=# REINDEX INDEX t_idx2;\nREINDEX\nfabrizio=# REINDEX INDEX t_pkey;\nREINDEX\nfabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =\n't_pkey'::regclass;\n count\n-------\n 0\n(1 row)\n\nfabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =\n't_idx2'::regclass;\n count\n-------\n 1\n(1 
row)\n^^^^^^^^\n-- A regular REINDEX don't lose the statistics.\n\n\nfabrizio=# REINDEX INDEX CONCURRENTLY t_idx2;\nREINDEX\nfabrizio=# SELECT count(*) FROM pg_statistic WHERE starelid =\n't_idx2'::regclass;\n count\n-------\n 0\n(1 row)\n\n^^^^^^^^\n-- But the REINDEX CONCURRENTLY loses.\n\nSo IMHO here is the place we should rework a bit to execute ANALYZE as a\nlast step.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello\n PostgreSQL Developer at OnGres Inc. - https://ongres.com", "msg_date": "Tue, 27 Oct 2020 11:06:22 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Tue, Oct 27, 2020 at 4:12 AM Nikolay Samokhvalov <samokhvalov@gmail.com>\nwrote:\n>\n> On Mon, Oct 26, 2020 at 3:08 PM Fabrízio de Royes Mello <\nfabriziomello@gmail.com> wrote:\n>>\n>> Would be nice if add some information about it into our docs but not\nsure where. I'm thinking about:\n>> - doc/src/sgml/ref/create_index.sgml\n>> - doc/src/sgml/maintenance.sgml (routine-reindex)\n>\n>\n> Attaching the patches for the docs, one for 11 and older, and another for\n12+ (which have REINDEX CONCURRENTLY not suffering from lack of ANALYZE).\n>\n\nActually the REINDEX CONCURRENTLY suffers with the lack of ANALYZE. See my\nprevious message on this thread.\n\nSo just adding the note on the ANALYZE docs is enough.\n\n\n> I still think that automating is the right thing to do but of course,\nit's a much bigger topic that a quick fix dor the docs.\n\nSo what we need to do is see how to fix REINDEX CONCURRENTLY.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello\n PostgreSQL Developer at OnGres Inc. - https://ongres.com", "msg_date": "Tue, 27 Oct 2020 11:11:40 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Tue, Oct 27, 2020 at 11:06:22AM -0300, Fabrízio de Royes Mello wrote:\n> When we create a new table or index they will not have statistics until an\n> ANALYZE happens. This is the default behaviour and I think is not a big\n> problem here, but we need to add some note on docs about the need of\n> statistics for indexes on expressions.\n> \n> But IMHO there is a misbehaviour with the implementation of CONCURRENTLY on\n> REINDEX because running it will lose the statistics. Have a look the\n> example below:\n>\n> [...] \n> \n> So IMHO here is the place we should rework a bit to execute ANALYZE as a\n> last step.\n\nI agree that this is not user-friendly, and I suspect that we will\nneed to do something within index_concurrently_swap() to fill in the\nstats of the new index from the data of the old one (not looked at\nthat in details yet).\n--\nMichael", "msg_date": "Wed, 28 Oct 2020 14:14:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Wed, Oct 28, 2020 at 2:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Oct 27, 2020 at 11:06:22AM -0300, Fabrízio de Royes Mello wrote:\n> > When we create a new table or index they will not have statistics until\nan\n> > ANALYZE happens. 
This is the default behaviour and I think is not a big\n> > problem here, but we need to add some note on docs about the need of\n> > statistics for indexes on expressions.\n> >\n> > But IMHO there is a misbehaviour with the implementation of\nCONCURRENTLY on\n> > REINDEX because running it will lose the statistics. Have a look the\n> > example below:\n> >\n> > [...]\n> >\n> > So IMHO here is the place we should rework a bit to execute ANALYZE as a\n> > last step.\n>\n> I agree that this is not user-friendly, and I suspect that we will\n> need to do something within index_concurrently_swap() to fill in the\n> stats of the new index from the data of the old one (not looked at\n> that in details yet).\n>\n\nWe already do a similar thing for PgStats [1] so maybe we should also copy\npg_statistics from old to new index during the swap.\n\nBut I'm not sure if it's totally safe anyway and would be better to create\na new phase to issue ANALYZE if necessary (exists statistics for old index).\n\nRegards,\n\n[1]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/catalog/index.c;h=0974f3e23a23726b63246cd3a1347e10923dd541;hb=HEAD#l1693\n\n-- \n Fabrízio de Royes Mello\n PostgreSQL Developer at OnGres Inc. - https://ongres.com", "msg_date": "Wed, 28 Oct 2020 09:35:21 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Tue, Oct 27, 2020 at 11:06:22AM -0300, Fabrízio de Royes Mello wrote:\n>On Mon, Oct 26, 2020 at 7:46 PM David G. Johnston <\n>david.g.johnston@gmail.com> wrote:\n>>\n>> It would seem preferable to call the lack of auto-analyzing after these\n>operations a bug and back-patch a fix that injects an analyze side-effect\n>just before their completion. It doesn't have to be smart either,\n>analyzing things even if the created (or newly validated) index doesn't\n>have statistics of its own isn't a problem in my book.\n>>\n>\n>When we create a new table or index they will not have statistics until an\n>ANALYZE happens. 
This is the default behaviour and I think is not a big\n>problem here, but we need to add some note on docs about the need of\n>statistics for indexes on expressions.\n>\n\nI think the problem is we notice when a table has not been analyzed yet\n(and trigger an analyze), but we won't notice that for an index. So if\nthe table does not change very often, it may take ages before we build\nstats for the index - not great.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 28 Oct 2020 19:52:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Mon, Oct 26, 2020 at 03:46:10PM -0700, David G. Johnston wrote:\n>On Mon, Oct 26, 2020 at 3:08 PM Fabrízio de Royes Mello <\n>fabriziomello@gmail.com> wrote:\n>\n>> Hi all,\n>>\n>> As you all already know Postgres supports functions in index expressions\n>> (marked as immutable ofc) and for this special index the ANALYZE command\n>> creates some statistics (new pg_statistic entry) about it.\n>>\n>> The problem is just after creating a new index or rebuilding concurrently\n>> (using the new REINDEX .. CONCURRENTLY or the old manner creating new one\n>> and then swapping) we need to run ANALYZE to update statistics but we don't\n>> mention it in any part of our documentation.\n>>\n>> Last weekend Gitlab went down because the lack of an ANALYZE after\n>> rebuilding concurrently a functional index and they followed the\n>> recommendation we have into our documentation [1] about how to rebuild it\n>> concurrently, but we don't warn users about the ANALYZE after.\n>>\n>> Would be nice if add some information about it into our docs but not sure\n>> where. 
I'm thinking about:\n>> - doc/src/sgml/ref/create_index.sgml\n>> - doc/src/sgml/maintenance.sgml (routine-reindex)\n>>\n>> Thoughts?\n>>\n>> [1]\n>> https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2885#note_436310499\n>>\n>\n>It would seem preferable to call the lack of auto-analyzing after these\n>operations a bug and back-patch a fix that injects an analyze side-effect\n>just before their completion. It doesn't have to be smart either,\n>analyzing things even if the created (or newly validated) index doesn't\n>have statistics of its own isn't a problem in my book.\n>\n\nI agree the lack of stats may be quite annoying and cause issues, but my\nguess is the chances of backpatching such change are about 0.000001%. We\nhave a usable 'workaround' for this - manual analyze.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 28 Oct 2020 19:55:38 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Wed, Oct 28, 2020 at 11:55 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> I agree the lack of stats may be quite annoying and cause issues, but my\n> guess is the chances of backpatching such change are about 0.000001%. We\n> have a usable 'workaround' for this - manual analyze.\n>\n\nMy guess is that it wouldn't be too difficult to write a patch that could\nbe safely back-patched and it's worth doing so even if ultimately the\ndecision is not to. 
But then again the patch writer isn't going to be me.\n\nGiven how simple the manual workaround is not having it be manual seems\nlike it would be safe and straight-forward to implement.\n\nDavid J.", "msg_date": "Wed, 28 Oct 2020 12:00:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Mon, Oct 26, 2020 at 03:46:10PM -0700, David G. Johnston wrote:\n>> It would seem preferable to call the lack of auto-analyzing after these\n>> operations a bug and back-patch a fix that injects an analyze side-effect\n>> just before their completion. It doesn't have to be smart either,\n>> analyzing things even if the created (or newly validated) index doesn't\n>> have statistics of its own isn't a problem in my book.\n\n> I agree the lack of stats may be quite annoying and cause issues, but my\n> guess is the chances of backpatching such change are about 0.000001%. We\n> have a usable 'workaround' for this - manual analyze.\n\nThis doesn't seem clearly different from any other situation where\nauto-analyze doesn't react fast enough to suit you. 
I would not\ncall it a bug, at least not without a wholesale redefinition of\nhow auto-analyze is supposed to work. As a close analogy, we\ndon't make any effort to force an immediate auto-analyze after\nCREATE STATISTICS.\n\nI don't see anything in the CREATE STATISTICS man page pointing\nthat out, either. But there's probably room for \"Notes\" entries\nabout it in both places.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Oct 2020 15:05:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Mon, Oct 26, 2020 at 9:44 PM Nikolay Samokhvalov <samokhvalov@gmail.com>\nwrote:\n\n> On Mon, Oct 26, 2020 at 7:03 PM David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n>\n>> On Monday, October 26, 2020, Nikolay Samokhvalov <samokhvalov@gmail.com>\n>> wrote:\n>>>\n>>> Although, this triggers a question – should ANALYZE be automated in,\n>>> say, pg_restore as well?\n>>>\n>>\n>> Independent concern.\n>>\n>\n> It's the same class of issues – after we created some objects, we lack\n> statistics and willing to automate its collection. If the approach is\n> automated in one case, it should be automated in the others, for\n> consistency.\n>\n\nI don't see a need to force consistency between something that will affect,\nat most, one table, and something that will affect an entire database or\ncluster. The other material difference is that the previous state of a\nrestore is \"nothing\" while in the create/reindex cases we are going from\nlive, populated, state to another.\n\nI do observe that while the create/reindex analyze would run automatically\nduring the restore on object creation there would be no data present so it\nwould be close to a no-op in practice.\n\n\n>\n> And another question: how ANALYZE needs to be run? 
If it's under the\n>>> user's control, there is an option to use vacuumdb --analyze and benefit\n>>> from using -j to parallelize the work (and, in some cases, benefit from\n>>> using --analyze-in-stages). If we had ANALYZE as a part of building indexes\n>>> on expressions, should it be parallelized to the same extent as index\n>>> creation (controlled by max_parallel_maintenance_workers)?\n>>>\n>>\n>> None of that seems relevant here. The only relevant parameter I see is\n>> what to specify for “table_and_columns”.\n>>\n>\n> I'm not sure I follow.\n>\n\nDescribe how parallelism within the session that is auto-analyzing is\nsupposed to work. vacuumdb opens up multiple connections which shouldn't\nhappen here.\n\nI suppose having the auto-analyze run three times with different targets\nwould work but I'm doubting that is a win. I may just be underestimating\nhow long an analyze on an extremely large table with high statistics takes.\n\nDavid J.", "msg_date": "Wed, 28 Oct 2020 12:07:52 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Wed, Oct 28, 2020 at 12:00:54PM -0700, David G. Johnston wrote:\n>On Wed, Oct 28, 2020 at 11:55 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>> I agree the lack of stats may be quite annoying and cause issues, but my\n>> guess is the chances of backpatching such change are about 0.000001%. 
We\n>> have a usable 'workaround' for this - manual analyze.\n>>\n>\n>My guess is that it wouldn't be too difficult to write a patch that could\n>be safely back-patched and it's worth doing so even if ultimately the\n>decision is not to. But then again the patch writer isn't going to be me.\n>\n>Given how simple the manual workaround is not having it be manual seems\n>like it would be safe and straight-forward to implement.\n>\n\nMaybe, but I wouldn't be surprised if it was actually a bit trickier in\npractice, particularly for the CONCURRENTLY case. But I haven't tried.\n\nAnyway, I think there's an agreement it'd be valuable to do this after\nCREATE INDEX in the future, so if someone wants to implement it that'd\nbe great. We can consider backpatching only once we have an actual patch\nanyway.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 28 Oct 2020 20:12:01 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Wed, Oct 28, 2020 at 03:05:39PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Mon, Oct 26, 2020 at 03:46:10PM -0700, David G. Johnston wrote:\n>>> It would seem preferable to call the lack of auto-analyzing after these\n>>> operations a bug and back-patch a fix that injects an analyze side-effect\n>>> just before their completion. It doesn't have to be smart either,\n>>> analyzing things even if the created (or newly validated) index doesn't\n>>> have statistics of its own isn't a problem in my book.\n>\n>> I agree the lack of stats may be quite annoying and cause issues, but my\n>> guess is the chances of backpatching such change are about 0.000001%. 
We\n>> have a usable 'workaround' for this - manual analyze.\n>\n>This doesn't seem clearly different from any other situation where\n>auto-analyze doesn't react fast enough to suit you. I would not\n>call it a bug, at least not without a wholesale redefinition of\n>how auto-analyze is supposed to work. As a close analogy, we\n>don't make any effort to force an immediate auto-analyze after\n>CREATE STATISTICS.\n>\n\nTrue.\n\n>I don't see anything in the CREATE STATISTICS man page pointing\n>that out, either. But there's probably room for \"Notes\" entries\n>about it in both places.\n>\n\nI agree. I'll add it to my TODO list for the next CF.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 28 Oct 2020 20:13:48 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Wed, Oct 28, 2020 at 12:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> This doesn't seem clearly different from any other situation where\n> auto-analyze doesn't react fast enough to suit you.\n\n\n\n> I would not\n> call it a bug, at least not without a wholesale redefinition of\n> how auto-analyze is supposed to work.\n\n\nThe definition of auto-analyze is just fine; the issue is with the user\nunfriendly position that the only times analyze is ever run is when it is\nrun manually or heuristically in a separate process. I agree that this\nisn't a bug in the traditional sense - the current behavior is intentional\n- but it is a POLA violation.\n\nThe fundamental question here is do we want to change our policy in this\nregard and make our system more user-friendly? 
If so, let's do so for v14\nin honor of the problem the lack of documentation and POLA violation has\nrecently caused.\n\nThen, as a separate concern, should we admit the oversight and back-patch\nour policy change or just move forward and add documentation to older\nversions?\n\n\n> As a close analogy, we\n> don't make any effort to force an immediate auto-analyze after\n> CREATE STATISTICS.\n>\n\nAt least we have been consistent...\n\nDavid J.", "msg_date": "Wed, 28 Oct 2020 12:14:27 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Wed, Oct 28, 2020 at 12:00:54PM -0700, David G. 
Johnston wrote:\n>> Given how simple the manual workaround is not having it be manual seems\n>> like it would be safe and straight-forward to implement.\n\n> Maybe, but I wouldn't be surprised if it was actually a bit trickier in\n> practice, particularly for the CONCURRENTLY case. But I haven't tried.\n\n> Anyway, I think there's an agreement it'd be valuable to do this after\n> CREATE INDEX in the future, so if someone wants to implement it that'd\n> be great. We can consider backpatching only once we have an actual patch\n> anyway.\n\nJust to be clear, I'm entirely *not* on board with that. IMV it's\nintentional that we do not force auto-analyze activity after CREATE\nINDEX or CREATE STATISTICS. If we change that, people will want a\nway to opt out of it, and then your \"simple\" patch isn't so simple\nanymore. (Not that it was simple anyway. What if the CREATE is\ninside a transaction block, for instance? There's no use in\nkicking autovacuum before commit.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Oct 2020 15:18:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Wed, Oct 28, 2020 at 03:18:52PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Wed, Oct 28, 2020 at 12:00:54PM -0700, David G. Johnston wrote:\n>>> Given how simple the manual workaround is not having it be manual seems\n>>> like it would be safe and straight-forward to implement.\n>\n>> Maybe, but I wouldn't be surprised if it was actually a bit trickier in\n>> practice, particularly for the CONCURRENTLY case. But I haven't tried.\n>\n>> Anyway, I think there's an agreement it'd be valuable to do this after\n>> CREATE INDEX in the future, so if someone wants to implement it that'd\n>> be great. 
We can consider backpatching only once we have an actual patch\n>> anyway.\n>\n>Just to be clear, I'm entirely *not* on board with that. IMV it's\n>intentional that we do not force auto-analyze activity after CREATE\n>INDEX or CREATE STATISTICS. If we change that, people will want a\n>way to opt out of it, and then your \"simple\" patch isn't so simple\n>anymore.\n\nTrue. Some users may have reasons to not want the analyze, I guess.\n\n> (Not that it was simple anyway. What if the CREATE is\n>inside a transaction block, for instance? There's no use in\n>kicking autovacuum before commit.)\n>\n\nI don't think anyone proposed to do this through autovacuum. There was a\nreference to auto-analyze but I think that was meant as 'run analyze\nautomatically.' Which would work in transactions just fine, I think.\n\nBut I agree it'd likely be a more complicated patch than it might seem\nat first glance.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 28 Oct 2020 20:35:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Wed, Oct 28, 2020 at 4:35 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n>\n> I don't think anyone proposed to do this through autovacuum. There was a\n> reference to auto-analyze but I think that was meant as 'run analyze\n> automatically.' Which would work in transactions just fine, I think.\n>\n\nMaybe I was not very clear at the beginning so will try to clarify my\nthoughts:\n\n1) We should add notes on our docs about the need to issue ANALYZE after\ncreating indexes using expressions and create extended statistics. 
Nikolay\nsent a patch upthread and we can work on it and back patch.\n\n2) REINDEX CONCURRENTLY does not keep statistics (pg_statistic) like a\nregular REINDEX for indexes using expressions and to me it's a bug. Michael\npointed out upthread that maybe we should rework a bit\nindex_concurrently_swap() to copy statistics from old index to new one.\n\n\n> But I agree it'd likely be a more complicated patch than it might seem\n> at first glance.\n>\n\nIf we think about a way to kick AutoAnalyze for sure it will be a more\ncomplicated task but IMHO for now we can do it simply just by copying\nstatistics like I mentioned above.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Wed, 28 Oct 2020 17:43:08 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Wed, Oct 28, 2020 at 05:43:08PM -0300, Fabrízio de Royes Mello wrote:\n>On Wed, Oct 28, 2020 at 4:35 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>>\n>> I don't think anyone proposed to do this through autovacuum. There was a\n>> reference to auto-analyze but I think that was meant as 'run analyze\n>> automatically.' Which would work in transactions just fine, I think.\n>>\n>\n>Maybe I was not very clear at the beginning so will try to clarify my\n>thoughts:\n>\n>1) We should add notes on our docs about the need to issue ANALYZE after\n>creating indexes using expressions and create extended statistics. Nikolay\n>sent a patch upthread and we can work on it and back patch.\n>\n\n+1\n\n>2) REINDEX CONCURRENTLY does not keep statistics (pg_statistic) like a\n>regular REINDEX for indexes using expressions and to me it's a bug. Michael\n>pointed out upthread that maybe we should rework a bit\n>index_concurrently_swap() to copy statistics from old index to new one.\n>\n\nYeah. 
Not sure it counts as a bug, but I see what you mean - it's\ndefinitely an unexpected/undesirable difference in behavior between\nplain REINDEX and concurrent one.\n\n>\n>> But I agree it'd likely be a more complicated patch than it might seem\n>> at first glance.\n>>\n>\n>If we think about a way to kick AutoAnalyze for sure it will be a more\n>complicated task but IMHO for now we can do it simply just by copying\n>statistics like I mentioned above.\n>\n\nI very much doubt we can just rely on autoanalyze here. For one, it'll\nhave issues with transactions, as Tom already pointed out elsewhere in\nthis thread. So if you do a reindex after a bulk load in a transaction,\nfollowed by some report queries, autoanalyze is not going to help.\n\nBut it has another issue - there may not be any free autovacuum workers,\nso it'd have to wait for unknown amount of time. In fact, it'd have to\nwait for the autovacuum worker to actually do the analyze, otherwise we\ncould still have unpredictable behavior for queries immediately after\nthe REINDEX, even outside transactions. That's not good, so this would\nhave to do an actual analyze I think.\n\nBut as Tom pointed out, the automatic analyze may be against wishes of\nsome users, and there are other similar cases that don't trigger analyze\n(CREATE STATISTICS). So not sure about this.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 29 Oct 2020 00:02:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Thu, Oct 29, 2020 at 12:02:11AM +0100, Tomas Vondra wrote:\n> On Wed, Oct 28, 2020 at 05:43:08PM -0300, Fabrízio de Royes Mello wrote:\n>> 2) REINDEX CONCURRENTLY does not keep statistics (pg_statistic) like a\n>> regular REINDEX for indexes using expressions and to me it's a bug. 
Michael\n>> pointed out upthread that maybe we should rework a bit\n>> index_concurrently_swap() to copy statistics from old index to new one.\n> \n> Yeah. Not sure it counts as a bug, but I see what you mean - it's\n> definitely an unexpected/undesirable difference in behavior between\n> plain REINDEX and concurrent one.\n\nREINDEX CONCURRENTLY is by design wanted to provide an experience\ntransparent to the user similar to what a plain REINDEX would do, at\nleast that's the idea behind it, so.. This qualifies as a bug to me,\nin spirit.\n--\nMichael", "msg_date": "Thu, 29 Oct 2020 10:59:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Thu, Oct 29, 2020 at 10:59:52AM +0900, Michael Paquier wrote:\n> REINDEX CONCURRENTLY is by design wanted to provide an experience\n> transparent to the user similar to what a plain REINDEX would do, at\n> least that's the idea behind it, so.. This qualifies as a bug to me,\n> in spirit.\n\nAnd in spirit, it is possible to address this issue with the patch\nattached which copies the set of stats from the old to the new index.\nFor a non-concurrent REINDEX, this does not happen because we keep the\nsame base relation, while we replace completely the relation with a\nconcurrent operation. We have a RemoveStatistics() in heap.c, but I\ndid not really see the point to invent a copy flavor for this\nparticular case. 
Perhaps others have an opinion on that?\n--\nMichael", "msg_date": "Fri, 30 Oct 2020 15:22:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Fri, Oct 30, 2020 at 03:22:52PM +0900, Michael Paquier wrote:\n> On Thu, Oct 29, 2020 at 10:59:52AM +0900, Michael Paquier wrote:\n> > REINDEX CONCURRENTLY is by design wanted to provide an experience\n> > transparent to the user similar to what a plain REINDEX would do, at\n> > least that's the idea behind it, so.. This qualifies as a bug to me,\n> > in spirit.\n> \n> And in spirit, it is possible to address this issue with the patch\n> attached which copies the set of stats from the old to the new index.\n> For a non-concurrent REINDEX, this does not happen because we keep the\n> same base relation, while we replace completely the relation with a\n> concurrent operation. We have a RemoveStatistics() in heap.c, but I\n> did not really see the point to invent a copy flavor for this\n> particular case. Perhaps others have an opinion on that?\n\n+1\n\nThe implementation of REINDEX CONCURRENTLY is \"CREATE INDEX CONCURRENTLY\nfollowed by internal index swap\". But the command is called \"reindex\", and so\nthe user experience is that the statistics are inexplicably lost.\n\n(I'm quoting from the commit message of the patch I wrote, which is same as\nyour patch).\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 30 Oct 2020 22:30:13 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Fri, Oct 30, 2020 at 3:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> And in spirit, it is possible to address this issue with the patch\n> attached which copies the set of stats from the old to the new index.\n\nDid some tests and everything went ok... 
some comments below!\n\n> For a non-concurrent REINDEX, this does not happen because we keep the\n> same base relation, while we replace completely the relation with a\n> concurrent operation.\n\nExactly!\n\n> We have a RemoveStatistics() in heap.c, but I\n> did not really see the point to invent a copy flavor for this\n> particular case. Perhaps others have an opinion on that?\n>\n\nEven if we won't use it now, IMHO it is more legible to separate this\nresponsibility into its own CopyStatistics function as attached.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello\n PostgreSQL Developer at OnGres Inc. - https://ongres.com", "msg_date": "Sat, 31 Oct 2020 19:56:33 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Sat, Oct 31, 2020 at 07:56:33PM -0300, Fabrízio de Royes Mello wrote:\n> Even if we won't use it now, IMHO it is more legible to separate this\n> responsibility into its own CopyStatistics function as attached.\n\nBy doing so, there is no need to include pg_statistic.h in index.c.\nExcept that, the logic looks fine at quick glance. In the long-term,\nI also think that it would make sense to move both routines out of\nheap.c into a separate pg_statistic.c. 
That's material for a separate\npatch of course.\n--\nMichael", "msg_date": "Sun, 1 Nov 2020 09:23:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Fri, Oct 30, 2020 at 10:30:13PM -0500, Justin Pryzby wrote:\n> (I'm quoting from the commit message of the patch I wrote, which is same as\n> your patch).\n\n(I may have missed something, but you did not send a patch, right?)\n--\nMichael", "msg_date": "Sun, 1 Nov 2020 10:11:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Sun, Nov 01, 2020 at 10:11:06AM +0900, Michael Paquier wrote:\n> On Fri, Oct 30, 2020 at 10:30:13PM -0500, Justin Pryzby wrote:\n> > (I'm quoting from the commit message of the patch I wrote, which is same as\n> > your patch).\n> \n> (I may have missed something, but you did not send a patch, right?)\n\nRight, because it's the same as yours.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 31 Oct 2020 20:15:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Sun, Nov 01, 2020 at 09:23:44AM +0900, Michael Paquier wrote:\n> By doing so, there is no need to include pg_statistic.h in index.c.\n> Except that, the logic looks fine at quick glance. In the long-term,\n> I also think that it would make sense to move both routines out of\n> heap.c into a separate pg_statistic.c. 
That's material for a separate\n> patch of course.\n\nI have looked again at that, and applied it after some tweaks.\nParticularly, I did not really like the use of \"old\" and \"new\" for the\ncopy from the old to a new relation in the new function, so I have\nreplaced that by \"from\" and \"to\".\n--\nMichael", "msg_date": "Sun, 1 Nov 2020 21:29:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Sun, 1 Nov 2020 at 09:29 Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Nov 01, 2020 at 09:23:44AM +0900, Michael Paquier wrote:\n> > By doing so, there is no need to include pg_statistic.h in index.c.\n> > Except that, the logic looks fine at quick glance.  In the long-term,\n> > I also think that it would make sense to move both routines out of\n> > heap.c into a separate pg_statistic.c.  
That's material for a separate\n> > patch of course.\n>\n> I have looked again at that, and applied it after some tweaks.\n> Particularly, I did not really like the use of \"old\" and \"new\" for the\n> copy from the old to a new relation in the new function, so I have\n> replaced that by \"from\" and \"to\".\n>\n>\nAwesome thanks!!\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Sun, 1 Nov 2020 13:53:23 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Tue, Oct 27, 2020 at 12:12:00AM -0700, Nikolay Samokhvalov wrote:\n> On Mon, Oct 26, 2020 at 3:08 PM Fabrízio de Royes Mello <\n> fabriziomello@gmail.com> wrote:\n> \n> Would be nice if add some information about it into our docs but not sure\n> where. I'm thinking about:\n> - doc/src/sgml/ref/create_index.sgml\n> - doc/src/sgml/maintenance.sgml (routine-reindex)\n> \n> \n> Attaching the patches for the docs, one for 11 and older, and another for 12+\n> (which have REINDEX CONCURRENTLY not suffering from lack of ANALYZE).\n> \n> I still think that automating is the right thing to do but of course, it's a\n> much bigger topic that a quick fix dor the docs.\n\nI see REINDEX CONCURRENTLY was fixed in head, but the docs didn't get\nupdated to mention the need to run ANALYZE or wait for autovacuum before\nexpression indexes can be fully used by the optimizer. Instead of\nputting this mention in the maintenance section, I thought the CREATE\nINDEX page make more sense, since it is more of a usability issue,\nrather than \"why use expression indexes\". Patch attached, which I plan
thanks a lot Bruce to help us don’t miss it.Regards,--    Fabrízio de Royes Mello         Timbira - http://www.timbira.com.br/   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Mon, 9 Nov 2020 20:35:46 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Mon, Nov 9, 2020 at 08:35:46PM -0300, Fabrízio de Royes Mello wrote:\n> \n> \n> On Mon, 9 Nov 2020 at 20:27 Bruce Momjian <bruce@momjian.us> wrote:\n> \n> \n> I see REINDEX CONCURRENTLY was fixed in head, but the docs didn't get\n> updated to mention the need to run ANALYZE or wait for autovacuum before\n> expression indexes can be fully used by the optimizer.  Instead of\n> putting this mention in the maintenance section, I thought the CREATE\n> INDEX page make more sense, since it is more of a usability issue,\n> rather than \"why use expression indexes\".  Patch attached, which I plan\n> to apply to all supported branches.\n>    \n> \n> \n> Did a quick review and totally agree... thanks a lot Bruce to help us don’t\n> miss it.\n\nPatch applied to all branches. Thanks for the review.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 12 Nov 2020 15:01:10 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Mon, Nov 09, 2020 at 06:27:20PM -0500, Bruce Momjian wrote:\n> On Tue, Oct 27, 2020 at 12:12:00AM -0700, Nikolay Samokhvalov wrote:\n> > On Mon, Oct 26, 2020 at 3:08 PM Fabr�zio de Royes Mello <fabriziomello@gmail.com> wrote:\n> > \n> > Would be nice if add some information about it into our docs but not sure\n> > where. 
I'm thinking about:\n> > - doc/src/sgml/ref/create_index.sgml\n> > - doc/src/sgml/maintenance.sgml (routine-reindex)\n> > \n> > \n> > Attaching the patches for the docs, one for 11 and older, and another for 12+\n> > (which have REINDEX CONCURRENTLY not suffering from lack of ANALYZE).\n> > \n> > I still think that automating is the right thing to do but of course, it's a\n> > much bigger topic that a quick fix dor the docs.\n> \n> I see REINDEX CONCURRENTLY was fixed in head, but the docs didn't get\n> updated to mention the need to run ANALYZE or wait for autovacuum before\n> expression indexes can be fully used by the optimizer. Instead of\n> putting this mention in the maintenance section, I thought the CREATE\n> INDEX page make more sense, since it is more of a usability issue,\n> rather than \"why use expression indexes\". Patch attached, which I plan\n> to apply to all supported branches.\n\nThe commited patch actually says:\n\n--- a/doc/src/sgml/ref/create_index.sgml\n+++ b/doc/src/sgml/ref/create_index.sgml\n@@ -745,6 +745,16 @@ Indexes:\n sort high</quote>, in queries that depend on indexes to avoid sorting steps.\n </para>\n \n+ <para>\n+ The regularly system collects statistics on all of a table's\n+ columns. 
Newly-created non-expression indexes can immediately\n+ use these statistics to determine an index's usefulness.\n+ For new expression indexes, it is necessary to run <link\n+ linkend=\"sql-analyze\"><command>ANALYZE</command></link> or wait for\n+ the <link linkend=\"autovacuum\">autovacuum daemon</link> to analyze\n+ the table to generate statistics about new expression indexes.\n+ </para>\n+\n\nI guess it should say \"The system regularly ...\"\n\nAlso, the last sentence begins \"For new expression indexes\" and ends with\n\"about new expression indexes\", which I guess could instead say \"about the\nexpressions\".\n\n> diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml\n> new file mode 100644\n> index 749db28..48c42db\n> *** a/doc/src/sgml/ref/create_index.sgml\n> --- b/doc/src/sgml/ref/create_index.sgml\n> *************** Indexes:\n> *** 746,751 ****\n> --- 746,761 ----\n> </para>\n> \n> <para>\n> + The system collects statistics on all of a table's columns.\n> + Newly-created non-expression indexes can immediately\n> + use these statistics to determine an index's usefulness.\n> + For new expression indexes, it is necessary to run <link\n> + linkend=\"sql-analyze\"><command>ANALYZE</command></link> or wait for\n> + the <link linkend=\"autovacuum\">autovacuum daemon</link> to analyze\n> + the table to generate statistics about new expression indexes.\n> + </para>\n> + \n> + <para>\n> For most index methods, the speed of creating an index is\n> dependent on the setting of <xref linkend=\"guc-maintenance-work-mem\"/>.\n> Larger values will reduce the time needed for index creation, so long\n\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n", "msg_date": "Thu, 12 Nov 2020 15:11:43 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Thu, Nov 12, 2020 at 03:11:43PM 
-0600, Justin Pryzby wrote:\n> I guess it should say \"The system regularly ...\"\n> \n> Also, the last sentence begins \"For new expression indexes\" and ends with\n> \"about new expression indexes\", which I guess could instead say \"about the\n> expressions\".\n\nHow is this followup patch?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Thu, 12 Nov 2020 18:01:02 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Thu, Nov 12, 2020 at 06:01:02PM -0500, Bruce Momjian wrote:\n> On Thu, Nov 12, 2020 at 03:11:43PM -0600, Justin Pryzby wrote:\n> > I guess it should say \"The system regularly ...\"\n> > \n> > Also, the last sentence begins \"For new expression indexes\" and ends with\n> > \"about new expression indexes\", which I guess could instead say \"about the\n> > expressions\".\n> \n> How is this followup patch?\n\nI see Alvaro already patched the first issue at bcbd77133.\n\nThe problematic language was recently introduced, and I'd reported at:\nhttps://www.postgresql.org/message-id/20201112211143.GL30691%40telsasoft.com\nAnd Erik at:\nhttps://www.postgresql.org/message-id/e92b3fba98a0c0f7afc0a2a37e765954%40xs4all.nl\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 16 Nov 2020 08:03:23 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On 2020-Nov-12, Bruce Momjian wrote:\n\n> For new expression indexes, it is necessary to run <link\n> linkend=\"sql-analyze\"><command>ANALYZE</command></link> or wait for\n> the <link linkend=\"autovacuum\">autovacuum daemon</link> to analyze\n> - the table to generate statistics about new expression indexes.\n> + the table to generate 
statistics for these indexes.\n\nLooks good to me.\n\n\n\n", "msg_date": "Mon, 16 Nov 2020 11:59:03 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On 2020-Nov-16, Justin Pryzby wrote:\n\n> I see Alvaro already patched the first issue at bcbd77133.\n> \n> The problematic language was recently introduced, and I'd reported at:\n> https://www.postgresql.org/message-id/20201112211143.GL30691%40telsasoft.com\n> And Erik at:\n> https://www.postgresql.org/message-id/e92b3fba98a0c0f7afc0a2a37e765954%40xs4all.nl\n\nYeah, sorry I didn't notice you had already reported it.\n\n\n", "msg_date": "Mon, 16 Nov 2020 11:59:31 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" }, { "msg_contents": "On Mon, Nov 16, 2020 at 11:59:03AM -0300, Álvaro Herrera wrote:\n> On 2020-Nov-12, Bruce Momjian wrote:\n> \n> > For new expression indexes, it is necessary to run <link\n> > linkend=\"sql-analyze\"><command>ANALYZE</command></link> or wait for\n> > the <link linkend=\"autovacuum\">autovacuum daemon</link> to analyze\n> > - the table to generate statistics about new expression indexes.\n> > + the table to generate statistics for these indexes.\n> \n> Looks good to me.\n\nApplied to all branches. Thanks for the review.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 16 Nov 2020 10:26:37 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add important info about ANALYZE after create Functional Index" } ]
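The thread above reduces to a short recipe: PostgreSQL collects per-column statistics automatically, so a plain index can be costed by the planner immediately, but statistics on an indexed expression only exist after the table has been analyzed following CREATE INDEX. A minimal sketch of the workflow being documented (the table and column names here are illustrative, not taken from the thread):

```sql
-- Hypothetical example table.
CREATE TABLE users (id integer, email text);

-- Expression index: the planner has no statistics on lower(email) yet.
CREATE INDEX users_lower_email_idx ON users (lower(email));

-- Generate statistics for the indexed expression (or wait for the
-- autovacuum daemon to analyze the table); until then, selectivity
-- estimates for predicates such as
--   WHERE lower(email) = 'someone@example.com'
-- fall back on defaults, and the index may be costed poorly.
ANALYZE users;
```

As the committed doc patch notes, non-expression indexes do not need this extra step, since the ordinary per-column statistics already cover them.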
[ { "msg_contents": "Forking this thread:\nhttps://www.postgresql.org/message-id/fd93f1c5-7818-a02c-01e5-1075ac0d4def@iki.fi\n\nThey have been deprecated for a Long Time, so finally remove them for v14.\nFour fewer exclamation marks makes the documentation less exciting, which is a\ngood thing.", "msg_date": "Mon, 26 Oct 2020 22:25:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On 2020-10-27 04:25, Justin Pryzby wrote:\n> Forking this thread:\n> https://www.postgresql.org/message-id/fd93f1c5-7818-a02c-01e5-1075ac0d4def@iki.fi\n> \n> They have been deprecated for a Long Time, so finally remove them for v14.\n> Four fewer exclamation marks makes the documentation less exciting, which is a\n> good thing.\n\nI don't know the reason or context why they were deprecated, but I agree \nthat the timeline for removing them now is good.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 27 Oct 2020 09:38:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On Tue, Oct 27, 2020 at 9:38 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-10-27 04:25, Justin Pryzby wrote:\n> > Forking this thread:\n> >\n> https://www.postgresql.org/message-id/fd93f1c5-7818-a02c-01e5-1075ac0d4def@iki.fi\n> >\n> > They have been deprecated for a Long Time, so finally remove them for\n> v14.\n> > Four fewer exclamation marks makes the documentation less exciting,\n> which is a\n> > good thing.\n>\n> I don't know the reason or context why they were deprecated, but I agree\n> that the timeline for removing them now is good.\n>\n\nIIRC it was to align things so that \"containment\" had the same operator for\nall 
different kinds of datatypes?\n\nBut whether that memory is right nor not it was indeed a long time ago,\nso +1 that it's definitely time to get rid of them.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Tue, 27 Oct 2020 09:46:34 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On 2020-10-27 04:25, Justin Pryzby wrote:\n> Forking this thread:\n> https://www.postgresql.org/message-id/fd93f1c5-7818-a02c-01e5-1075ac0d4def@iki.fi\n> \n> They have been deprecated for a Long Time, so finally remove them for v14.\n> Four fewer exclamation marks makes the documentation less exciting, which is a\n> good thing.\n\nI have committed the parts that remove the built-in geometry operators \nand the related regression test changes.\n\nThe changes to the contrib modules appear to be incomplete in some ways. 
\n In cube, hstore, and seg, there are no changes to the extension \nscripts to remove the operators. All you're doing is changing the C \ncode to no longer recognize the strategy, but that doesn't explain what \nwill happen if the operator is still used. In intarray, by contrast, \nyou're editing an existing extension script, but that should be done by \nan upgrade script instead.\n\n\n", "msg_date": "Tue, 3 Nov 2020 10:47:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 2020-10-27 04:25, Justin Pryzby wrote:\n>> They have been deprecated for a Long Time, so finally remove them for v14.\n>> Four fewer exclamation marks makes the documentation less exciting, which is a\n>> good thing.\n\n> I have committed the parts that remove the built-in geometry operators \n> and the related regression test changes.\n\nI'm on board with pulling these now --- 8.2 to v14 is plenty of\ndeprecation notice. However, the patch seems incomplete in that\nthe code support for these is still there -- look for\nRTOldContainedByStrategyNumber and RTOldContainsStrategyNumber.\nAdmittedly, there's not much to be removed except some case labels,\nbut it still seems like we oughta do that to avoid future confusion.\n\n> The changes to the contrib modules appear to be incomplete in some ways. \n> In cube, hstore, and seg, there are no changes to the extension \n> scripts to remove the operators. All you're doing is changing the C \n> code to no longer recognize the strategy, but that doesn't explain what \n> will happen if the operator is still used. 
In intarray, by contrast, \n> you're editing an existing extension script, but that should be done by \n> an upgrade script instead.\n\nIn the contrib modules, I'm afraid what you gotta do is remove the\nSQL operator definitions but leave the opclass code support in place.\nThat's because there's no guarantee that users will update the extension's\nSQL version immediately, so a v14 build of the .so might still be used\nwith the old SQL definitions. It's not clear how much window we need\ngive for people to do that update, but I don't think \"zero\" is an\nacceptable answer.\n\n(The core code doesn't have to concern itself with such scenarios,\nsince we require the initial catalog contents to match the backend\nmajor version. Hence it is okay to remove the code support now in\nthe in-core opclasses.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Nov 2020 17:28:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On 2020-11-12 23:28, Tom Lane wrote:\n> I'm on board with pulling these now --- 8.2 to v14 is plenty of\n> deprecation notice. However, the patch seems incomplete in that\n> the code support for these is still there -- look for\n> RTOldContainedByStrategyNumber and RTOldContainsStrategyNumber.\n> Admittedly, there's not much to be removed except some case labels,\n> but it still seems like we oughta do that to avoid future confusion.\n\nYeah, the stuff in gistproc.c should be removed now. 
But I wonder what \nthe mentions in brin_inclusion.c are and whether or how they should be \nremoved.\n\n\n\n", "msg_date": "Fri, 13 Nov 2020 08:26:28 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On Thu, Nov 12, 2020 at 11:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > The changes to the contrib modules appear to be incomplete in some ways.\n> > In cube, hstore, and seg, there are no changes to the extension\n> > scripts to remove the operators. All you're doing is changing the C\n> > code to no longer recognize the strategy, but that doesn't explain what\n> > will happen if the operator is still used. In intarray, by contrast,\n> > you're editing an existing extension script, but that should be done by\n> > an upgrade script instead.\n>\n> In the contrib modules, I'm afraid what you gotta do is remove the\n> SQL operator definitions but leave the opclass code support in place.\n> That's because there's no guarantee that users will update the extension's\n> SQL version immediately, so a v14 build of the .so might still be used\n> with the old SQL definitions. It's not clear how much window we need\n> give for people to do that update, but I don't think \"zero\" is an\n> acceptable answer.\n\nBased on my experience from the field, the answer is \"never\".\n\nAs in, most people have no idea they are even *supposed* to do such an\nupgrade, so they don't do it. Until we solve that problem, I think\nwe're basically stuck with keeping them \"forever\". 
(and even if/when\nwe do, \"zero\" is probably not going to cut it, no)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 13 Nov 2020 10:39:51 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Thu, Nov 12, 2020 at 11:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > The changes to the contrib modules appear to be incomplete in some ways.\n> > > In cube, hstore, and seg, there are no changes to the extension\n> > > scripts to remove the operators. All you're doing is changing the C\n> > > code to no longer recognize the strategy, but that doesn't explain what\n> > > will happen if the operator is still used. In intarray, by contrast,\n> > > you're editing an existing extension script, but that should be done by\n> > > an upgrade script instead.\n> >\n> > In the contrib modules, I'm afraid what you gotta do is remove the\n> > SQL operator definitions but leave the opclass code support in place.\n> > That's because there's no guarantee that users will update the extension's\n> > SQL version immediately, so a v14 build of the .so might still be used\n> > with the old SQL definitions. It's not clear how much window we need\n> > give for people to do that update, but I don't think \"zero\" is an\n> > acceptable answer.\n> \n> Based on my experience from the field, the answer is \"never\".\n> \n> As in, most people have no idea they are even *supposed* to do such an\n> upgrade, so they don't do it. Until we solve that problem, I think\n> we're basically stuck with keeping them \"forever\". 
(and even if/when\n> we do, \"zero\" is probably not going to cut it, no)\n\nYeah, this is a serious problem and one that we should figure out a way\nto fix or at least improve on- maybe by having pg_upgrade say something\nabout extensions that could/should be upgraded..?\n\nThanks,\n\nStephen", "msg_date": "Fri, 13 Nov 2020 10:03:43 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 2020-11-12 23:28, Tom Lane wrote:\n>> I'm on board with pulling these now --- 8.2 to v14 is plenty of\n>> deprecation notice. However, the patch seems incomplete in that\n>> the code support for these is still there -- look for\n>> RTOldContainedByStrategyNumber and RTOldContainsStrategyNumber.\n>> Admittedly, there's not much to be removed except some case labels,\n>> but it still seems like we oughta do that to avoid future confusion.\n\n> Yeah, the stuff in gistproc.c should be removed now. 
But I wonder what \n> the mentions in brin_inclusion.c are and whether or how they should be \n> removed.\n\nI think those probably got cargo-culted in there at some point.\nThey're visibly dead code, because there are no pg_amop entries\nfor BRIN opclasses with amopstrategy 13 or 14.\n\nThis comment that you removed in 2f70fdb06 is suspicious:\n\n\t# we could, but choose not to, supply entries for strategies 13 and 14\n\nI'm guessing that somebody was vacillating about whether it'd be\na feature to support these old operator names in BRIN, and\neventually didn't, but forgot to remove the code support.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Nov 2020 10:57:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On 2020-11-13 16:57, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 2020-11-12 23:28, Tom Lane wrote:\n>>> I'm on board with pulling these now --- 8.2 to v14 is plenty of\n>>> deprecation notice. However, the patch seems incomplete in that\n>>> the code support for these is still there -- look for\n>>> RTOldContainedByStrategyNumber and RTOldContainsStrategyNumber.\n>>> Admittedly, there's not much to be removed except some case labels,\n>>> but it still seems like we oughta do that to avoid future confusion.\n> \n>> Yeah, the stuff in gistproc.c should be removed now. 
But I wonder what\n>> the mentions in brin_inclusion.c are and whether or how they should be\n>> removed.\n> \n> I think those probably got cargo-culted in there at some point.\n> They're visibly dead code, because there are no pg_amop entries\n> for BRIN opclasses with amopstrategy 13 or 14.\n\nI have committed fixes that remove the unused strategy numbers from both \nof these code areas.\n\n\n", "msg_date": "Mon, 16 Nov 2020 17:30:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On Fri, Nov 13, 2020 at 10:03:43AM -0500, Stephen Frost wrote:\n> * Magnus Hagander (magnus@hagander.net) wrote:\n> > On Thu, Nov 12, 2020 at 11:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > The changes to the contrib modules appear to be incomplete in some ways.\n> > > > In cube, hstore, and seg, there are no changes to the extension\n> > > > scripts to remove the operators. All you're doing is changing the C\n> > > > code to no longer recognize the strategy, but that doesn't explain what\n> > > > will happen if the operator is still used. In intarray, by contrast,\n> > > > you're editing an existing extension script, but that should be done by\n> > > > an upgrade script instead.\n> > >\n> > > In the contrib modules, I'm afraid what you gotta do is remove the\n> > > SQL operator definitions but leave the opclass code support in place.\n> > > That's because there's no guarantee that users will update the extension's\n> > > SQL version immediately, so a v14 build of the .so might still be used\n> > > with the old SQL definitions. 
It's not clear how much window we need\n> > > give for people to do that update, but I don't think \"zero\" is an\n> > > acceptable answer.\n> > \n> > Based on my experience from the field, the answer is \"never\".\n> > \n> > As in, most people have no idea they are even *supposed* to do such an\n> > upgrade, so they don't do it. Until we solve that problem, I think\n> > we're basically stuck with keeping them \"forever\". (and even if/when\n> > we do, \"zero\" is probably not going to cut it, no)\n> \n> Yeah, this is a serious problem and one that we should figure out a way\n> to fix or at least improve on- maybe by having pg_upgrade say something\n> about extensions that could/should be upgraded..?\n\nI think what's needed are:\n\n1) a way to *warn* users about deprecation. CREATE EXTENSION might give an\nelog(WARNING), but it's probably not enough. It only happens once, and if it's\nin pg_restore/pg_upgrade, it be wrapped by vendor upgrade scripts. It needs to\nbe more like first function call in every session. 
Or more likely, put it in\ndocumentation for 10 years.\n\n2) a way to *enforce* it, like making CREATE EXTENSION fail when run against an\nexcessively old server, including by pg_restore/pg_upgrade (which ought to also\nhandle it in --check).\n\nAre there any contrib for which (1) is done and we're anywhere near doing (2) ?\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 16 Nov 2020 14:55:16 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On 16.11.2020 23:55, Justin Pryzby wrote:\n> On Fri, Nov 13, 2020 at 10:03:43AM -0500, Stephen Frost wrote:\n>> * Magnus Hagander (magnus@hagander.net) wrote:\n>>> On Thu, Nov 12, 2020 at 11:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>> The changes to the contrib modules appear to be incomplete in some ways.\n>>>>> In cube, hstore, and seg, there are no changes to the extension\n>>>>> scripts to remove the operators. All you're doing is changing the C\n>>>>> code to no longer recognize the strategy, but that doesn't explain what\n>>>>> will happen if the operator is still used. In intarray, by contrast,\n>>>>> you're editing an existing extension script, but that should be done by\n>>>>> an upgrade script instead.\n>>>> In the contrib modules, I'm afraid what you gotta do is remove the\n>>>> SQL operator definitions but leave the opclass code support in place.\n>>>> That's because there's no guarantee that users will update the extension's\n>>>> SQL version immediately, so a v14 build of the .so might still be used\n>>>> with the old SQL definitions. It's not clear how much window we need\n>>>> give for people to do that update, but I don't think \"zero\" is an\n>>>> acceptable answer.\n>>> Based on my experience from the field, the answer is \"never\".\n>>>\n>>> As in, most people have no idea they are even *supposed* to do such an\n>>> upgrade, so they don't do it. 
Until we solve that problem, I think\n>>> we're basically stuck with keeping them \"forever\". (and even if/when\n>>> we do, \"zero\" is probably not going to cut it, no)\n>> Yeah, this is a serious problem and one that we should figure out a way\n>> to fix or at least improve on- maybe by having pg_upgrade say something\n>> about extensions that could/should be upgraded..?\n> I think what's needed are:\n>\n> 1) a way to *warn* users about deprecation. CREATE EXTENSION might give an\n> elog(WARNING), but it's probably not enough. It only happens once, and if it's\n> in pg_restore/pg_upgrade, it be wrapped by vendor upgrade scripts. It needs to\n> be more like first function call in every session. Or more likely, put it in\n> documentation for 10 years.\n>\n> 2) a way to *enforce* it, like making CREATE EXTENSION fail when run against an\n> excessively old server, including by pg_restore/pg_upgrade (which ought to also\n> handle it in --check).\n>\n> Are there any contrib for which (1) is done and we're anywhere near doing (2) ?\n>\n\nStatus update for a commitfest entry.\n\nThe commitfest is nearing the end and this thread is \"Waiting on \nAuthor\". As far as I see we don't have a patch here and discussion is a \nbit stuck.\nSo, I am planning to return it with feedback. Any objections?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 30 Nov 2020 21:51:12 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n> Status update for a commitfest entry.\n\n> The commitfest is nearing the end and this thread is \"Waiting on \n> Author\". As far as I see we don't have a patch here and discussion is a \n> bit stuck.\n> So, I am planning to return it with feedback. 
Any objections?\n\nAFAICS, the status is\n\n(1) core-code changes are committed;\n\n(2) proposed edits to contrib modules need significant rework;\n\n(3) there was also a bit of discussion about inventing a mechanism\nto prod users to update out-of-date extensions.\n\nNow, (3) is far outside the scope of this patch, and I do not\nthink it should block finishing (2). We need a new patch for (2),\nbut there's no real doubt as to what it should contain -- Justin\njust needs to turn the crank.\n\nYou could either move this to the next CF in state WoA, or\nclose it RWF. But the patch did make progress in this CF,\nso I'd tend to lean to the former.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Nov 2020 14:03:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On Mon, Nov 30, 2020 at 09:51:12PM +0300, Anastasia Lubennikova wrote:\n> On 16.11.2020 23:55, Justin Pryzby wrote:\n> > On Fri, Nov 13, 2020 at 10:03:43AM -0500, Stephen Frost wrote:\n> > > * Magnus Hagander (magnus@hagander.net) wrote:\n> > > > On Thu, Nov 12, 2020 at 11:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > > > The changes to the contrib modules appear to be incomplete in some ways.\n> > > > > > In cube, hstore, and seg, there are no changes to the extension\n> > > > > > scripts to remove the operators. All you're doing is changing the C\n> > > > > > code to no longer recognize the strategy, but that doesn't explain what\n> > > > > > will happen if the operator is still used. 
In intarray, by contrast,\n> > > > > > you're editing an existing extension script, but that should be done by\n> > > > > > an upgrade script instead.\n> > > > > In the contrib modules, I'm afraid what you gotta do is remove the\n> > > > > SQL operator definitions but leave the opclass code support in place.\n> > > > > That's because there's no guarantee that users will update the extension's\n> > > > > SQL version immediately, so a v14 build of the .so might still be used\n> > > > > with the old SQL definitions. It's not clear how much window we need\n> > > > > give for people to do that update, but I don't think \"zero\" is an\n> > > > > acceptable answer.\n> > > > Based on my experience from the field, the answer is \"never\".\n> > > > \n> > > > As in, most people have no idea they are even *supposed* to do such an\n> > > > upgrade, so they don't do it. Until we solve that problem, I think\n> > > > we're basically stuck with keeping them \"forever\". (and even if/when\n> > > > we do, \"zero\" is probably not going to cut it, no)\n> > > Yeah, this is a serious problem and one that we should figure out a way\n> > > to fix or at least improve on- maybe by having pg_upgrade say something\n> > > about extensions that could/should be upgraded..?\n> > I think what's needed are:\n> > \n> > 1) a way to *warn* users about deprecation. CREATE EXTENSION might give an\n> > elog(WARNING), but it's probably not enough. It only happens once, and if it's\n> > in pg_restore/pg_upgrade, it be wrapped by vendor upgrade scripts. It needs to\n> > be more like first function call in every session. 
Or more likely, put it in\n> > documentation for 10 years.\n> > \n> > 2) a way to *enforce* it, like making CREATE EXTENSION fail when run against an\n> > excessively old server, including by pg_restore/pg_upgrade (which ought to also\n> > handle it in --check).\n> > \n> > Are there any contrib for which (1) is done and we're anywhere near doing (2) ?\n> \n> Status update for a commitfest entry.\n> \n> The commitfest is nearing the end and this thread is \"Waiting on Author\". As\n\nI think this is waiting on me to provide a patch for the contrib/ modules with\nupdate script removing the SQL operators, and documentating their deprecation.\n\nIs that right ?\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 30 Nov 2020 13:06:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I think this is waiting on me to provide a patch for the contrib/ modules with\n> update script removing the SQL operators, and documentating their deprecation.\n\nRight. We can remove the SQL operators, but not (yet) the C code support.\n\nI'm not sure that the docs change would do more than remove any existing\nmentions of the operators.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Nov 2020 14:09:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On Tue, Dec 1, 2020 at 4:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I think this is waiting on me to provide a patch for the contrib/ modules with\n> > update script removing the SQL operators, and documentating their deprecation.\n>\n> Right. 
We can remove the SQL operators, but not (yet) the C code support.\n>\n> I'm not sure that the docs change would do more than remove any existing\n> mentions of the operators.\n>\n\nStatus update for a commitfest entry.\n\nAlmost 2 months passed since the last update. Are you planning to work\non this, Justin? If not, I'm planning to set it \"Returned with\nFeedback\" barring objectinos.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 28 Jan 2021 21:50:05 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On Thu, Jan 28, 2021 at 9:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Dec 1, 2020 at 4:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > I think this is waiting on me to provide a patch for the contrib/ modules with\n> > > update script removing the SQL operators, and documentating their deprecation.\n> >\n> > Right. We can remove the SQL operators, but not (yet) the C code support.\n> >\n> > I'm not sure that the docs change would do more than remove any existing\n> > mentions of the operators.\n> >\n>\n> Status update for a commitfest entry.\n>\n> Almost 2 months passed since the last update. Are you planning to work\n> on this, Justin? If not, I'm planning to set it \"Returned with\n> Feedback\" barring objectinos.\n>\n\nThis patch has been awaiting updates for more than one month. As such,\nwe have moved it to \"Returned with Feedback\" and removed it from the\nreviewing queue. Depending on timing, this may be reversable, so let\nus know if there are extenuating circumstances. 
In any case, you are\nwelcome to address the feedback you have received, and resubmit the\npatch to the next CommitFest.\n\nThank you for contributing to PostgreSQL.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 1 Feb 2021 22:29:34 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On Mon, Nov 30, 2020 at 02:09:10PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I think this is waiting on me to provide a patch for the contrib/ modules with\n> > update script removing the SQL operators, and documentating their deprecation.\n> \n> Right. We can remove the SQL operators, but not (yet) the C code support.\n> \n> I'm not sure that the docs change would do more than remove any existing\n> mentions of the operators.\n\nI've finally returned to this. RFC.\n\n-- \nJustin", "msg_date": "Tue, 2 Feb 2021 08:05:58 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> [ 0001-remove-deprecated-v8.2-containment-operators.patch ]\n\nI'm confused by why this patch is only dropping the operators'\nopclass-membership links. Don't we want to actually DROP OPERATOR\ntoo?\n\nAlso, the patch seems to be trying to resurrect hstore--1.0--1.1.sql,\nwhich I do not see the point of. 
It was removed because no modern\nserver will even think it's valid syntax, and that situation has\nnot changed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Mar 2021 20:58:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "On Thu, Mar 04, 2021 at 08:58:39PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > [ 0001-remove-deprecated-v8.2-containment-operators.patch ]\n> \n> I'm confused by why this patch is only dropping the operators'\n> opclass-membership links. Don't we want to actually DROP OPERATOR\n> too?\n\nOkay\n\nAlso , I think it's unrelated to this patch, but shouldn't these be removed ?\nSee: b0b7be613, c15898c1d\n\n+++ b/doc/src/sgml/brin.sgml\n\n- <entry>Operator Strategy 7, 13, 16, 24, 25</entry>\n+ <entry>Operator Strategy 7, 16, 24, 25</entry>\n\n- <entry>Operator Strategy 8, 14, 26, 27</entry>\n+ <entry>Operator Strategy 8, 26, 27</entry>\n\n\n> Also, the patch seems to be trying to resurrect hstore--1.0--1.1.sql,\n\nNot sure why or how I had that.\n\n-- \nJustin", "msg_date": "Thu, 4 Mar 2021 21:13:17 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Mar 04, 2021 at 08:58:39PM -0500, Tom Lane wrote:\n>> I'm confused by why this patch is only dropping the operators'\n>> opclass-membership links. Don't we want to actually DROP OPERATOR\n>> too?\n\n> Okay\n\nPushed. Since hstore already had a new-in-v14 edition, I just added\nthe commands to hstore--1.7--1.8.sql rather than make another update\nscript. 
(Also, you forgot to drop ~ in that one?)\n\n> Also , I think it's unrelated to this patch, but shouldn't these be removed ?\n\nRight, done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Mar 2021 11:01:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove deprecated v8.2 containment operators" } ]
[ { "msg_contents": "Over in the thread at [1] it's discussed how our code for making\nselectivity estimates using knowledge about FOREIGN KEY constraints\nis busted in the face of EquivalenceClasses including constants.\n\nThat is, if fktab(a,b) is a 2-column FK reference to pktab(a,b)\nand we have a query like\n\n\t... where fktab.a = pktab.a and fktab.b = pktab.b\n\nthen we understand that any given fktab row can match at most one\npktab row (and this estimate is often a lot better than we'd get\nfrom assuming that the a and b conditions are independent).\n\nHowever, if the query is like\n\n\t... where fktab.a = pktab.a and fktab.b = pktab.b\n\t and fktab.a = 1\n\nthen this suddenly breaks down and we go back to non-FK-aware\nestimates. The reason is that get_foreign_key_join_selectivity()\nis looking for join clauses that equate the two sides of the FK\nconstraint ... and in this example, it will not see any such\njoin clause for column \"a\". That's because equivclass.c decided\nto replace the given clauses with \"fktab.a = 1 and pktab.a = 1\",\nwhich can be enforced at the scan level, leaving nothing to be\ndone for column \"a\" at the join level.\n\nWe can fix that by detecting which EquivalenceClasses are marked\n\"ec_has_const\", since that's the property that dictates whether\nequivclass.c uses this strategy. However, that's only a partial\nfix; if you try it, you soon find that the selectivity estimates\nare still off. The reason is that because the two \"a = 1\" conditions\nare already factored into the size estimates for the join input\nrelations, we're essentially double-counting the \"fktab.a = 1\"\ncondition's selectivity if we use the existing FK selectivity\nestimation rule. If we treated the constant condition as only\nrelevant to the PK side, then the FK selectivity rule could work\nnormally. But we don't want to drop the ability to enforce the\nrestriction at the scan level. 
So what we have to do is cancel\nthe FK side's condition's selectivity out of the FK selectivity.\n\nAttached is a patch series that attacks it that way. For ease of\nreview I split it into two steps:\n\n0001 refactors process_implied_equality() so that it can pass\nback the new RestrictInfo to its callers in equivclass.c.\nI think this is a good change on its own merits, because it means\nthat when generating a derived equality, we don't have to use\ninitialize_mergeclause_eclasses() to set up the new RestrictInfo's\nleft_ec and right_ec pointers. The equivclass.c caller knows\nperfectly darn well which EquivalenceClass the two sides of the\nclause belong to, so it can just assign that value, saving a couple\nof potentially-not-cheap get_eclass_for_sort_expr() searches.\nThis does require process_implied_equality() to duplicate some of\nthe steps in distribute_qual_to_rels(), but on the other hand we\nget to remove some complexity from distribute_qual_to_rels() because\nit no longer has to deal with any is_deduced cases. Anyway, the\nend goal of this step is that we can save away all the generated\n\"x = const\" clauses in the EC's ec_derives list. 0001 doesn't\ndo anything with that information, but ...\n\n0002 actually fixes the bug. Dealing with the first part of the\nproblem just requires counting how many of the ECs we matched to\nan FK constraint are ec_has_const. To deal with the second part,\nwe dig out the scan-level \"x = const\" clause that the EC generated\nfor the FK column and see what selectivity it has got. This beats\nother ways of reconstructing the scan-clause selectivity because\n(at least in all normal cases) that selectivity would have been\ncached in the RestrictInfo. 
Thus we not only save cycles but can be\nsure we are cancelling out exactly the right amount of selectivity.\n\nI would not propose back-patching this, but it seems OK for HEAD.\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/AM6PR02MB5287A0ADD936C1FA80973E72AB190%40AM6PR02MB5287.eurprd02.prod.outlook.com", "msg_date": "Mon, 26 Oct 2020 23:47:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Patch to fix FK-related selectivity estimates with constants" }, { "msg_contents": "I wrote:\n> Over in the thread at [1] it's discussed how our code for making\n> selectivity estimates using knowledge about FOREIGN KEY constraints\n> is busted in the face of EquivalenceClasses including constants.\n> ...\n> Attached is a patch series that attacks it that way.\n\nI'd failed to generate a test case I liked yesterday, but perhaps\nthe attached will do. (While the new code is exercised in the\ncore regression tests already, it doesn't produce any visible\nplan changes.) 
I'm a little nervous about whether the plan\nshape will be stable in the buildfarm, but it works for me on\nboth 64-bit and 32-bit machines, so probably it's OK.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 27 Oct 2020 13:58:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Patch to fix FK-related selectivity estimates with constants" }, { "msg_contents": "On Tue, Oct 27, 2020 at 01:58:56PM -0400, Tom Lane wrote:\n>I wrote:\n>> Over in the thread at [1] it's discussed how our code for making\n>> selectivity estimates using knowledge about FOREIGN KEY constraints\n>> is busted in the face of EquivalenceClasses including constants.\n>> ...\n>> Attached is a patch series that attacks it that way.\n>\n\nThe patch seems fine to me, thanks for investigating and fixing this.\n\nTwo minor comments:\n\nI find it a bit strange that generate_base_implied_equalities_const adds\nthe rinfo to ec_derives, while generate_base_implied_equalities_no_const\ndoes not. I understand it's correct as we don't lookup the non-const\nclauses, and we want to keep the list as short as possible, but it seems\nlike a bit surprising/unexpected difference in behavior.\n\nI think casting the 'clause' to (Node*) in process_implied_equality is\nunnecessary - it was probably needed when it was declared as Expr* but\nthe patch changes that.\n\n\nAs for the backpatching, I don't feel very strongly about it. It's\nclearly a bug/thinko in the code, and I don't see any obvious risks in\nbackpatching it (no ABI breaks etc.). OTOH multi-column foreign keys are\nnot very common, and the query pattern seems rather unusual too, so the\nrisk is pretty low I guess. We certainly did not get many reports, so.\n\n\n>I'd failed to generate a test case I liked yesterday, but perhaps\n>the attached will do. (While the new code is exercised in the\n>core regression tests already, it doesn't produce any visible\n>plan changes.) 
I'm a little nervous about whether the plan\n>shape will be stable in the buildfarm, but it works for me on\n>both 64-bit and 32-bit machines, so probably it's OK.\n>\n\nWorks fine on raspberry pi 4 (i.e. armv7l, 32-bit arm) too.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 28 Oct 2020 00:18:14 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Patch to fix FK-related selectivity estimates with constants" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Tue, Oct 27, 2020 at 01:58:56PM -0400, Tom Lane wrote:\n>>> Attached is a patch series that attacks it that way.\n\n> The patch sems fine to me, thanks for investigating and fixing this.\n\nThanks for looking at it!\n\n> I find it a bit strange that generate_base_implied_equalities_const adds\n> the rinfo to ec_derives, while generate_base_implied_equalities_no_const\n> does not. I understand it's correct as we don't lookup the non-const\n> clauses, and we want to keep the list as short as possible, but it seems\n> like a bit surprising/unexpected difference in behavior.\n\nYeah, perhaps. I considered replacing ec_derives with two lists, one\nfor base-level derived clauses and one for join-level derived clauses,\nbut it didn't really seem worth the trouble. This is something we\ncould change later if a need arises to be able to look back at non-const\nbase-level derived clauses.\n\n> I think casting the 'clause' to (Node*) in process_implied_equality is\n> unnecessary - it was probably needed when it was declared as Expr* but\n> the patch changes that.\n\nHm, thought I got rid of the unnecessary casts ... I'll look again.\n\n> As for the backpatching, I don't feel very strongly about it. 
It's\n> clearly a bug/thinko in the code, and I don't see any obvious risks in\n> backpatching it (no ABI breaks etc.).\n\nI had two concerns about possible extension breakage from a back-patch:\n\n* Changing the set of fields in ForeignKeyOptInfo is an ABI break.\nWe could minimize the risk by adding the new fields at the end in\nthe back branches, but it still wouldn't be zero-risk.\n\n* Changing the expectation about whether process_implied_equality()\nwill fill left_ec/right_ec is an API break. It's somewhat doubtful\nwhether there exist any callers outside equivclass.c, but again it's\nnot zero risk.\n\nThe other issue, entirely unrelated to code hazards, is whether this\nis too big a change in planner behavior to be back-patched. We've\noften felt that destabilizing stable-branch plan choices is something\nto be avoided.\n\nNot to mention the whole issue of whether this patch has any bugs of\nits own.\n\nSo on the whole I wouldn't want to back-patch, or at least not do so\nvery far. Maybe there's an argument that v13 is still new enough to\ndeflect the concern about plan stability.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Oct 2020 21:27:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Patch to fix FK-related selectivity estimates with constants" }, { "msg_contents": "On Tue, Oct 27, 2020 at 09:27:06PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Tue, Oct 27, 2020 at 01:58:56PM -0400, Tom Lane wrote:\n>>>> Attached is a patch series that attacks it that way.\n>\n>> The patch sems fine to me, thanks for investigating and fixing this.\n>\n>Thanks for looking at it!\n>\n>> I find it a bit strange that generate_base_implied_equalities_const adds\n>> the rinfo to ec_derives, while generate_base_implied_equalities_no_const\n>> does not. 
I understand it's correct as we don't lookup the non-const\n>> clauses, and we want to keep the list as short as possible, but it seems\n>> like a bit surprising/unexpected difference in behavior.\n>\n>Yeah, perhaps. I considered replacing ec_derives with two lists, one\n>for base-level derived clauses and one for join-level derived clauses,\n>but it didn't really seem worth the trouble. This is something we\n>could change later if a need arises to be able to look back at non-const\n>base-level derived clauses.\n>\n>> I think casting the 'clause' to (Node*) in process_implied_equality is\n>> unnecessary - it was probably needed when it was declared as Expr* but\n>> the patch changes that.\n>\n>Hm, thought I got rid of the unnecessary casts ... I'll look again.\n>\n\nApologies, the casts are fine. I got it mixed up somehow.\n\n>> As for the backpatching, I don't feel very strongly about it. It's\n>> clearly a bug/thinko in the code, and I don't see any obvious risks in\n>> backpatching it (no ABI breaks etc.).\n>\n>I had two concerns about possible extension breakage from a back-patch:\n>\n>* Changing the set of fields in ForeignKeyOptInfo is an ABI break.\n>We could minimize the risk by adding the new fields at the end in\n>the back branches, but it still wouldn't be zero-risk.\n>\n>* Changing the expectation about whether process_implied_equality()\n>will fill left_ec/right_ec is an API break. It's somewhat doubtful\n>whether there exist any callers outside equivclass.c, but again it's\n>not zero risk.\n>\n>The other issue, entirely unrelated to code hazards, is whether this\n>is too big a change in planner behavior to be back-patched. We've\n>often felt that destabilizing stable-branch plan choices is something\n>to be avoided.\n>\n>Not to mention the whole issue of whether this patch has any bugs of\n>its own.\n>\n>So on the whole I wouldn't want to back-patch, or at least not do so\n>very far. 
Maybe there's an argument that v13 is still new enough to\n>deflect the concern about plan stability.\n>\n\nOK, understood.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 28 Oct 2020 02:43:19 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Patch to fix FK-related selectivity estimates with constants" }, { "msg_contents": "On 2020-Oct-27, Tom Lane wrote:\n\n> I had two concerns about possible extension breakage from a back-patch:\n> \n> * Changing the set of fields in ForeignKeyOptInfo is an ABI break.\n> We could minimize the risk by adding the new fields at the end in\n> the back branches, but it still wouldn't be zero-risk.\n\nIt'd be useful to be able to qualify this sort of risk more objectively.\nI think if a struct is used as a function argument somewhere or arrays\nof the struct are formed, then it's certain that changing that struct's\nsize is going to cause problems. In this case, at least in core code,\nwe only pass pointers to the struct around, not the struct itself, so\nnot a problem; and we only use the struct in lists, not in arrays, so\nthat's not a problem either.\n\nWhat other aspects should we consider?\n\n\nAnother angle is usage of the struct by third-party code. I used\ncodesearch.debian.net and, apart from Postgres itself, it only found the\nstring in hypopg (but in typedefs.list, so not relevant) and pgpool2\n(which appears to carry its own copy of nodes.h). Inconsequential.\n\nSearching the web, I did find this:\nhttps://docs.rs/rpgffi/0.3.3/rpgffi/struct.ForeignKeyOptInfo.html but it\nappears that this project (an incomplete attempt at a framework to\ncreate Postgres extensions in Rust) mechanically extracts every struct,\nbut no further use of the struct is done. 
I was unable to find anything\nactually *using* rpgffi.\n\nIt is possible that some proprietary code is using the struct in a way\nthat would put it in danger, though.\n\n\n", "msg_date": "Wed, 28 Oct 2020 12:30:24 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Patch to fix FK-related selectivity estimates with constants" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Oct-27, Tom Lane wrote:\n>> * Changing the set of fields in ForeignKeyOptInfo is an ABI break.\n>> We could minimize the risk by adding the new fields at the end in\n>> the back branches, but it still wouldn't be zero-risk.\n\n> It'd be useful to be able to qualify this sort of risk more objectively.\n\nAgreed.\n\n> I think if a struct is used as a function argument somewhere or arrays\n> of the struct are formed, then it's certain that changing that struct's\n> size is going to cause problems.\n\nI grasp the point about arrays, but not sure how it's a problem for\nfunction arguments per se? Or were you thinking of functions that\ntake a struct as pass-by-value not pass-by-reference?\n\nThe way I've generally thought about this is that new fields added to the\nend of a Node struct are only likely to be a hazard if extension code\ncreates new instances of that Node type. If it does, it's certainly\nproblematic, first because makeNode() will allocate the wrong amount of\nstorage (ABI issue) and second because the extension won't know it needs\nto fill the new fields (API issue). However if we don't expect that that\nwill happen, then it's probably going to be OK. Code that just inspects\nNodes made by the core code won't be broken, as long as we don't change\nthe semantics of the existing fields. 
We don't ever pass Node structs by\nvalue, and we don't make arrays of them either, so the actual size of the\nstruct isn't much of an ABI issue.\n\nAs you say, we can also search to see if there seem to be any extensions\nusing the struct in question. I don't have a huge amount of faith in\nthat, because I think there are lots of proprietary/custom extensions\nthat aren't visible on the net. But on the other hand, the users\nof such extensions probably wouldn't have much trouble rebuilding them\nfor a new version, if they did get bit. It's the widely distributed\nextensions that might have users not capable of dealing with that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Oct 2020 11:57:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Patch to fix FK-related selectivity estimates with constants" }, { "msg_contents": "On 2020-Oct-28, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> > I think if a struct is used as a function argument somewhere or arrays\n> > of the struct are formed, then it's certain that changing that struct's\n> > size is going to cause problems.\n> \n> I grasp the point about arrays, but not sure how it's a problem for\n> function arguments per se? Or were you thinking of functions that\n> take a struct as pass-by-value not pass-by-reference?\n\nYeah, pass-by-value. As you say we don't do that with Node structs, but\nthere are some other structs that are sometimes passed by value. It's\ncertainly not a common problem though.\n\n> The way I've generally thought about this is that new fields added to the\n> end of a Node struct are only likely to be a hazard if extension code\n> creates new instances of that Node type. 
If it does, it's certainly\n> problematic, first because makeNode() will allocate the wrong amount of\n> storage (ABI issue) and second because the extension won't know it needs\n> to fill the new fields (API issue).\n\nRight.\n\n> As you say, we can also search to see if there seem to be any extensions\n> using the struct in question. I don't have a huge amount of faith in\n> that, because I think there are lots of proprietary/custom extensions\n> that aren't visible on the net. But on the other hand, the users\n> of such extensions probably wouldn't have much trouble rebuilding them\n> for a new version, if they did get bit. It's the widely distributed\n> extensions that might have users not capable of dealing with that.\n\nIn practice, at 2ndQuadrant we've had trouble a couple of times with ABI\nbreaks -- certain situations can become crasher bugs, to which some\ncustomers are extremely sensitive.\n\nI've added a link to your message to the wiki here:\nhttps://wiki.postgresql.org/wiki/Committing_checklist#Maintaining_ABI_compatibility_while_backpatching\n\n\n", "msg_date": "Wed, 28 Oct 2020 13:13:05 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Patch to fix FK-related selectivity estimates with constants" } ]
[ { "msg_contents": "Hi hackers,\n\nLibpq has supported to specify multiple hosts in connection string and enable auto failover when the previous PostgreSQL instance cannot be accessed.\nBut when I tried to enable this feature for a non-hot standby, it cannot do the failover with the following messages.\n\npsql: error: could not connect to server: FATAL: the database system is starting up\n\nDocument says ' If a connection is established successfully, but authentication fails, the remaining hosts in the list are not tried.'\nI'm wondering is it a feature by design or a bug? If it's a bug, I plan to fix it.\n\nThanks,\nHubert Zhang", "msg_date": "Tue, 27 Oct 2020 07:14:14 +0000", "msg_from": "Hubert Zhang <zhubert@vmware.com>", "msg_from_op": true, "msg_subject": "Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "Please send emails in text format. 
Your email was in HTML, and I changed this reply to text format.\n\n\nFrom: Hubert Zhang <zhubert@vmware.com> \n> Libpq has supported to specify multiple hosts in connection string and enable auto failover when the previous PostgreSQL instance cannot be accessed.\n> But when I tried to enable this feature for a non-hot standby, it cannot do the failover with the following messages.\n> \n> psql: error: could not connect to server: FATAL: the database system is starting up\n\nWas the primary running and accepting connections when you encountered this error? That is, if you specified host=\"host1 host2\", host1 was the non-hot standby and host2 was a running primary? Or only the non-hot standby was running?\n\nIf a primary was running, I'd say it's a bug... Perhaps the following part in libpq gives up connection attempts when the above FATAL error is returned from the server. Maybe libpq should differentiate errors using SQLSTATE and continue connection attempts on other hosts.\n\n[fe-connect.c]\n /* Handle errors. */\n if (beresp == 'E')\n {\n if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)\n...\n#endif\n\n goto error_return;\n }\n\n /* It is an authentication request. */\n conn->auth_req_received = true;\n\n /* Get the type of request. 
*/\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Tue, 27 Oct 2020 09:30:36 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Tue, Oct 27, 2020 at 07:14:14AM +0000, Hubert Zhang wrote:\n> Libpq has supported to specify multiple hosts in connection string and enable auto failover when the previous PostgreSQL instance cannot be accessed.\n> But when I tried to enable this feature for a non-hot standby, it cannot do the failover with the following messages.\n> \n> psql: error: could not connect to server: FATAL: the database system is starting up\n> \n> Document says ' If a connection is established successfully, but authentication fails, the remaining hosts in the list are not tried.'\n> I'm wondering is it a feature by design or a bug? If it's a bug, I plan to fix it.\n\nI felt it was a bug, but the community as a whole may or may not agree:\nhttps://postgr.es/m/flat/16508-1a63222835164566%40postgresql.org\n\n\n", "msg_date": "Wed, 28 Oct 2020 03:37:34 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "Was the primary running and accepting connections when you encountered this error? That is, if you specified host=\"host1 host2\", host1 was the non-hot standby and host2 was a running primary? Or only the non-hot standby was running?\n\nIf a primary was running, I'd say it's a bug... Perhaps the following part in libpq gives up connection attempts wen the above FATAL error is returned from the server. 
Maybe libpq should differentiate errors using SQLSTATE and continue connection attempts on other hosts.\nYes, the primary was running, but non-hot standby is in front of the primary in connection string.\nHao Wu and I wrote a patch to fix this problem. Client side libpq should try another hosts in connection string when it is rejected by a non-hot standby, or the first host encounter some n/w problems during the libpq handshake.\n\nPlease send emails in text format. Your email was in HTML, and I changed this reply to text format.\nThanks. Is this email in text format now? I just use outlook in chrome. Let me know if it still in html format.\n\nHubert & Hao Wu\n\n________________________________\nFrom: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>\nSent: Tuesday, October 27, 2020 5:30 PM\nTo: Hubert Zhang <zhubert@vmware.com>\nCc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nSubject: RE: Multiple hosts in connection string failed to failover in non-hot standby mode\n\nPlease send emails in text format. Your email was in HTML, and I changed this reply to text format.\n\n\nFrom: Hubert Zhang <zhubert@vmware.com>\n> Libpq has supported to specify multiple hosts in connection string and enable auto failover when the previous PostgreSQL instance cannot be accessed.\n> But when I tried to enable this feature for a non-hot standby, it cannot do the failover with the following messages.\n>\n> psql: error: could not connect to server: FATAL: the database system is starting up\n\nWas the primary running and accepting connections when you encountered this error? That is, if you specified host=\"host1 host2\", host1 was the non-hot standby and host2 was a running primary? Or only the non-hot standby was running?\n\nIf a primary was running, I'd say it's a bug... Perhaps the following part in libpq gives up connection attempts wen the above FATAL error is returned from the server. 
Maybe libpq should differentiate errors using SQLSTATE and continue connection attempts on other hosts.\n\n[fe-connect.c]\n /* Handle errors. */\n if (beresp == 'E')\n {\n if (PG_PROTOCOL_MAJOR(conn->pversion) >= 3)\n...\n#endif\n\n goto error_return;\n }\n\n /* It is an authentication request. */\n conn->auth_req_received = true;\n\n /* Get the type of request. */\n\n\nRegards\nTakayuki Tsunakawa", "msg_date": "Wed, 28 Oct 2020 10:41:50 +0000", "msg_from": "Hubert Zhang <zhubert@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "From: Hubert Zhang <zhubert@vmware.com> \n> Hao Wu and I wrote a patch to fix this problem. Client side libpq should try another hosts in connection string when it is rejected by a non-hot standby, or the first host encounter some n/w problems during the libpq handshake.\n\nThank you. Please add it to the November Commitfest.\n\n\n> Thanks. Is this email in text format now? I just use outlook in chrome. Let me know if it still in html format.\n\nI'm afraid not. The Outlook's title bar says that it's in HTML format. I'm using Outlook 2016 client app on Windows 10. I may have failed to convert my previous email to text, but it should be text format this time.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Wed, 28 Oct 2020 10:59:28 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "Hubert Zhang <zhubert@vmware.com> writes:\n> [ 0001-Enhance-libpq-to-support-multiple-host-for-non-hot-s.patch ]\n\nI took a quick look at this. 
TBH, I'd just drop the first three hunks,\nas they've got nothing to do with any failure mode that there's evidence\nfor in this thread or the prior one, and I'm afraid they're more likely\nto create trouble than fix it.\n\nAs for the last hunk, why is it after rather than before the SSL/GSS\nchecks? I doubt that retrying with/without SSL is going to change\na CANNOT_CONNECT_NOW result, unless maybe by slowing things down to\nthe point where recovery has finished ;-)\n\nThe bigger picture though is\n\n(1) what set of failures should we retry on? I think CANNOT_CONNECT_NOW\nis reasonable, but are there others?\n\n(2) what does this do to the quality of the error messages in cases\nwhere all the connection attempts fail?\n\nI think that error message quality was not thought too much about\nin the original development of the multi-host feature, so to some\nextent I'm asking you to clean up someone else's mess. Nonetheless,\nI feel that we do need to clean it up before we do things that risk\nmaking it even more confusing.\n\nThe problems that I see in this area are first that there's no\nreal standardization in libpq as to whether to append error messages\ntogether or just flush preceding messages; and second that no effort\nis made in multi-connection-attempt cases to separate the errors from\ndifferent attempts, much less identify which error goes with which\nhost or IP address. I think we really need to put some work into\nthat. 
In some cases you can infer what happened from breadcrumbs\nwe already put into the text, for example\n\n$ psql -h localhost,/tmp -p 12345\npsql: error: could not connect to server: Connection refused\n Is the server running on host \"localhost\" (::1) and accepting\n TCP/IP connections on port 12345?\ncould not connect to server: Connection refused\n Is the server running on host \"localhost\" (127.0.0.1) and accepting\n TCP/IP connections on port 12345?\ncould not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.12345\"?\n\nbut this doesn't seem particularly helpfully laid out to me, and we don't\nprovide the breadcrumbs at all for a lot of other error cases.\n\nI'm vaguely imagining that we could do something more like\n\ncould not connect to host \"localhost\" (::1), port 12345: Connection refused\ncould not connect to host \"localhost\" (127.0.0.1), port 12345: Connection refused\ncould not connect to socket \"/tmp/.s.PGSQL.12345\": No such file or directory\n\nNot quite sure if the \"Is the server running\" hint is worth preserving.\nWe'd have to reword it quite a bit, and it'd be very duplicative.\n\nThe implementation of this might involve sticking the initial string\n(up to the colon, in this example) into conn->errorMessage speculatively\nas we try each host. If we then append an error to it and go around\nagain, we're good. 
If we successfully connect, then the contents of\nconn->errorMessage don't matter.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Jan 2021 16:50:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "I wrote:\n> The problems that I see in this area are first that there's no\n> real standardization in libpq as to whether to append error messages\n> together or just flush preceding messages; and second that no effort\n> is made in multi-connection-attempt cases to separate the errors from\n> different attempts, much less identify which error goes with which\n> host or IP address. I think we really need to put some work into\n> that.\n\nI spent some time on this, and here is a patch set that tries to\nimprove matters.\n\n0001 changes the libpq coding rules to be that all error messages should\nbe appended to conn->errorMessage, never overwritten (there are a couple\nof exceptions in fe-lobj.c) and we reset conn->errorMessage to empty only\nat the start of a connection request or new query. 
This is something\nthat's been bugging me for a long time and I'm glad to get it cleaned up.\nFormerly it seemed that a dartboard had been used to decide whether to use\n\"printfPQExpBuffer\" or \"appendPQExpBuffer\"; now it's always the latter.\nWe can also get rid of several hacks that were used to get around the\nmess and force appending behavior.\n\n0002 then changes the reporting rules in fe-connect.c as I suggested,\nso that you might get errors like this:\n\n$ psql -h localhost,/tmp -p 12345\npsql: error: could not connect to host \"localhost\" (::1), port 12345: Connection refused\n Is the server running on that host and accepting TCP/IP connections?\ncould not connect to host \"localhost\" (127.0.0.1), port 12345: Connection refused\n Is the server running on that host and accepting TCP/IP connections?\ncould not connect to socket \"/tmp/.s.PGSQL.12345\": No such file or directory\n Is the server running locally and accepting connections on that socket?\n\nand we have a pretty uniform rule that errors coming back from a\nconnection attempt will be prefixed with the server address.\n\nThen 0003 is the part of your patch that I'm happy with. Given 0001+0002\nwe could certainly consider looping back and retrying for more cases, but\nI still want to tread lightly on that. I don't see a lot of value in\nretrying errors that seem to be on the client side, such as failure to\nset socket properties; and in general I'm hesitant to add untestable\ncode paths here.\n\nI feel pretty good about 0001: it might be committable as-is. 
0002 is\nprobably subject to bikeshedding, plus it has a problem in the ECPG tests.\nTwo of the error messages are now unstable because they expose\nchosen-at-random socket paths:\n\ndiff -U3 /home/postgres/pgsql/src/interfaces/ecpg/test/expected/connect-test5.stderr /home/postgres/pgsql/src/interfaces/ecpg/test/results/connect-test5.stderr\n--- /home/postgres/pgsql/src/interfaces/ecpg/test/expected/connect-test5.stderr 2020-08-04 14:59:45.617380050 -0400\n+++ /home/postgres/pgsql/src/interfaces/ecpg/test/results/connect-test5.stderr 2021-01-10 16:12:22.539433702 -0500\n@@ -36,7 +36,7 @@\n [NO_PID]: sqlca: code: 0, state: 00000\n [NO_PID]: ECPGconnect: opening database <DEFAULT> on <DEFAULT> port <DEFAULT> for user regress_ecpg_user2\n [NO_PID]: sqlca: code: 0, state: 00000\n-[NO_PID]: ECPGconnect: could not open database: FATAL: database \"regress_ecpg_user2\" does not exist\n+[NO_PID]: ECPGconnect: could not open database: could not connect to socket \"/tmp/pg_regress-EbHubF/.s.PGSQL.58080\": FATAL: database \"regress_ecpg_user2\" does not exist\n \n [NO_PID]: sqlca: code: 0, state: 00000\n [NO_PID]: ecpg_finish: connection main closed\n@@ -73,7 +73,7 @@\n [NO_PID]: sqlca: code: -220, state: 08003\n [NO_PID]: ECPGconnect: opening database <DEFAULT> on <DEFAULT> port <DEFAULT> for user regress_ecpg_user2\n [NO_PID]: sqlca: code: 0, state: 00000\n-[NO_PID]: ECPGconnect: could not open database: FATAL: database \"regress_ecpg_user2\" does not exist\n+[NO_PID]: ECPGconnect: could not open database: could not connect to socket \"/tmp/pg_regress-EbHubF/.s.PGSQL.58080\": FATAL: database \"regress_ecpg_user2\" does not exist\n \n [NO_PID]: sqlca: code: 0, state: 00000\n [NO_PID]: ecpg_finish: connection main closed\n\nI don't have any non-hacky ideas what to do about that. 
The extra detail\nseems useful to end users, but we don't have any infrastructure that\nwould allow filtering it out in the ECPG tests.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 10 Jan 2021 17:38:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "I wrote:\n> I feel pretty good about 0001: it might be committable as-is. 0002 is\n> probably subject to bikeshedding, plus it has a problem in the ECPG tests.\n> Two of the error messages are now unstable because they expose\n> chosen-at-random socket paths:\n> ...\n> I don't have any non-hacky ideas what to do about that. The extra detail\n> seems useful to end users, but we don't have any infrastructure that\n> would allow filtering it out in the ECPG tests.\n\nSo far the only solution that comes to mind is to introduce some\ninfrastructure to do that filtering. 0001-0003 below are unchanged,\n0004 patches up the ecpg test framework with a rather ad-hoc filtering\nfunction. I'd feel worse about this if there weren't already a very\nad-hoc filtering function there ;-)\n\nThis set passes check-world for me; we'll soon see what the cfbot\nthinks.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 10 Jan 2021 21:56:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "Hi Tom,\n\nI agree to get detailed error message for each failed host as your patch 0001.\n\nAs for patch 0004, find ':' after \"could not connect to\" may failed when error message like:\n\"could not connect to host \"localhost\" (::1), port 12345: Connection refused\", where p2 will point to \"::1\" instead of \": Connection refused\". 
But since it's only used for test case, we don't need to filter the error message precisely.\n\n```\necpg_filter_stderr(const char *resultfile, const char *tmpfile)\n{\n ......\n char *p1 = strstr(linebuf.data, \"could not connect to \");\n if (p1)\n {\n char *p2 = strchr(p1, ':');\n if (p2)\n memmove(p1 + 17, p2, strlen(p2) + 1);\n }\n}\n```\n\nThanks,\nHubert\n\n________________________________\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Monday, January 11, 2021 10:56 AM\nTo: Hubert Zhang <zhubert@vmware.com>\nCc: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nSubject: Re: Multiple hosts in connection string failed to failover in non-hot standby mode\n\nI wrote:\n> I feel pretty good about 0001: it might be committable as-is. 0002 is\n> probably subject to bikeshedding, plus it has a problem in the ECPG tests.\n> Two of the error messages are now unstable because they expose\n> chosen-at-random socket paths:\n> ...\n> I don't have any non-hacky ideas what to do about that. The extra detail\n> seems useful to end users, but we don't have any infrastructure that\n> would allow filtering it out in the ECPG tests.\n\nSo far the only solution that comes to mind is to introduce some\ninfrastructure to do that filtering. 0001-0003 below are unchanged,\n0004 patches up the ecpg test framework with a rather ad-hoc filtering\nfunction. I'd feel worse about this if there weren't already a very\nad-hoc filtering function there ;-)\n\nThis set passes check-world for me; we'll soon see what the cfbot\nthinks.\n\n regards, tom lane\n\n\n\n\n\n\n\n\n\nHi Tom,\n\n\n\n\nI agree to get detailed error message for each failed host as your patch 0001.\n\n\n\n\nAs for patch 0004, find ':' after \"could not connect to\" may failed when error message like:\n\"could\n not connect to host \"localhost\" (::1), port 12345: Connection refused\", where p2\n will point to \"::1\"\n instead of \": Connection refused\". 
But since it's only used for test case, we don't need to filter the error message precisely.\n\n\n\n\n```\n\necpg_filter_stderr(const char *resultfile, const char *tmpfile)\n\n\n{\n\n    ......\n\n    char       *p1 = strstr(linebuf.data, \"could not connect to \");\n\n    if (p1)\n\n    {\n\n        char       *p2 = strchr(p1, ':');\n\n        if (p2)\n\n\n            memmove(p1 + 17, p2, strlen(p2) + 1);\n\n    }\n\n}\n\n```\n\n\n\n\nThanks,\n\nHubert\n\n\n\n\n\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Monday, January 11, 2021 10:56 AM\nTo: Hubert Zhang <zhubert@vmware.com>\nCc: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nSubject: Re: Multiple hosts in connection string failed to failover in non-hot standby mode\n \n\n\nI wrote:\n> I feel pretty good about 0001: it might be committable as-is.  0002 is\n> probably subject to bikeshedding, plus it has a problem in the ECPG tests.\n> Two of the error messages are now unstable because they expose\n> chosen-at-random socket paths:\n> ...\n> I don't have any non-hacky ideas what to do about that.  The extra detail\n> seems useful to end users, but we don't have any infrastructure that\n> would allow filtering it out in the ECPG tests.\n\nSo far the only solution that comes to mind is to introduce some\ninfrastructure to do that filtering.  0001-0003 below are unchanged,\n0004 patches up the ecpg test framework with a rather ad-hoc filtering\nfunction.  
I'd feel worse about this if there weren't already a very\nad-hoc filtering function there ;-)\n\nThis set passes check-world for me; we'll soon see what the cfbot\nthinks.\n\n                        regards, tom lane", "msg_date": "Mon, 11 Jan 2021 14:31:40 +0000", "msg_from": "Hubert Zhang <zhubert@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "Hubert Zhang <zhubert@vmware.com> writes:\n> As for patch 0004, find ':' after \"could not connect to\" may failed when error message like:\n> \"could not connect to host \"localhost\" (::1), port 12345: Connection refused\", where p2 will point to \"::1\" instead of \": Connection refused\". But since it's only used for test case, we don't need to filter the error message precisely.\n\nExcellent point, and I think that could happen on a Windows installation.\nWe can make it look for \": \" instead of just ':', and that'll reduce the\nodds of trouble.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jan 2021 10:15:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Sun, Jan 10, 2021 at 05:38:50PM -0500, Tom Lane wrote:\n> I wrote:\n> > The problems that I see in this area are first that there's no\n> > real standardization in libpq as to whether to append error messages\n> > together or just flush preceding messages; and second that no effort\n> > is made in multi-connection-attempt cases to separate the errors from\n> > different attempts, much less identify which error goes with which\n> > host or IP address. 
I think we really need to put some work into\n> > that.\n> \n> I spent some time on this, and here is a patch set that tries to\n> improve matters.\n> \n> 0001 changes the libpq coding rules to be that all error messages should\n> be appended to conn->errorMessage, never overwritten (there are a couple\n> of exceptions in fe-lobj.c) and we reset conn->errorMessage to empty only\n> at the start of a connection request or new query. This is something\n> that's been bugging me for a long time and I'm glad to get it cleaned up.\n> Formerly it seemed that a dartboard had been used to decide whether to use\n> \"printfPQExpBuffer\" or \"appendPQExpBuffer\"; now it's always the latter.\n> We can also get rid of several hacks that were used to get around the\n> mess and force appending behavior.\n> \n> 0002 then changes the reporting rules in fe-connect.c as I suggested,\n> so that you might get errors like this:\n> \n> $ psql -h localhost,/tmp -p 12345\n> psql: error: could not connect to host \"localhost\" (::1), port 12345: Connection refused\n> Is the server running on that host and accepting TCP/IP connections?\n> could not connect to host \"localhost\" (127.0.0.1), port 12345: Connection refused\n> Is the server running on that host and accepting TCP/IP connections?\n> could not connect to socket \"/tmp/.s.PGSQL.12345\": No such file or directory\n> Is the server running locally and accepting connections on that socket?\n> \n> and we have a pretty uniform rule that errors coming back from a\n> connection attempt will be prefixed with the server address.\n\n52a10224 broke sqlsmith, of all things.\n\nIt was using errmsg as a test of success, instead of checking if the connection\nresult wasn't null:\n\n conn = PQconnectdb(conninfo.c_str());\n char *errmsg = PQerrorMessage(conn);\n if (strlen(errmsg))\n throw dut::broken(errmsg, \"08001\");\n\nThat's clearly the wrong thing to do, but maybe this should be described in the\nrelease notes as a compatibility issue, in case 
other people had the same idea.\nClearing errorMessage during success is an option..\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 6 May 2021 11:26:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> 52a10224 broke sqlsmith, of all things.\n\n> It was using errmsg as a test of success, instead of checking if the connection\n> result wasn't null:\n\n> conn = PQconnectdb(conninfo.c_str());\n> char *errmsg = PQerrorMessage(conn);\n> if (strlen(errmsg))\n> throw dut::broken(errmsg, \"08001\");\n\n> That's clearly the wrong thing to do, but maybe this should be described in the\n> release notes as a compatibility issue, in case other people had the same idea.\n> Clearing errorMessage during success is an option..\n\nHm. I'd debated whether to clear conn->errorMessage at the end of\na successful connection sequence, and decided not to on the grounds\nthat it might be interesting info (eg it could tell you why you\nended up connected to server Y and not server X). But perhaps\nit's too much of a compatibility break for this small benefit.\n\nI'm curious though why it took this long for anyone to complain.\nI'd supposed that people were running sqlsmith against HEAD on\na pretty regular basis.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 May 2021 13:22:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Thu, May 06, 2021 at 01:22:27PM -0400, Tom Lane wrote:\n> I'm curious though why it took this long for anyone to complain.\n> I'd supposed that people were running sqlsmith against HEAD on\n> a pretty regular basis.\n\nI think it's also because sqlsmith would need to run against the v14 *client*\nlibrary. 
I don't know about anyone else's workflow, but I tend not to \"make\ninstall\", but work with binaries out of ./tmp_install.\n\nThere's a few changes needed on sqlsmith HEAD, but I guess nobody would have\ngotten that far if the connection was failing (or rather, detected as such).\n\ndiff --git a/grammar.cc b/grammar.cc\nindex 62aa8e9..76491ff 100644\n--- a/grammar.cc\n+++ b/grammar.cc\n@@ -327,7 +327,11 @@ query_spec::query_spec(prod *p, struct scope *s, bool lateral) :\n \n search = bool_expr::factory(this);\n \n- if (d6() > 2) {\n+ if (d6() > 4) {\n+ ostringstream cons;\n+ cons << \"order by 1 fetch first \" << d100() + d100() << \" rows with ties\";\n+ limit_clause = cons.str();\n+ } else if (d6() > 2) {\n ostringstream cons;\n cons << \"limit \" << d100() + d100();\n limit_clause = cons.str();\ndiff --git a/postgres.cc b/postgres.cc\nindex f2a3627..1c0c55f 100644\n--- a/postgres.cc\n+++ b/postgres.cc\n@@ -30,6 +30,7 @@ bool pg_type::consistent(sqltype *rvalue)\n case 'c': /* composite type */\n case 'd': /* domain */\n case 'r': /* range */\n+ case 'm': /* multirange */\n case 'e': /* enum */\n return this == t;\n \n\n\n", "msg_date": "Thu, 6 May 2021 12:38:13 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "> On Thu, May 06, 2021 at 01:22:27PM -0400, Tom Lane wrote:\n>> I'm curious though why it took this long for anyone to complain.\n>> I'd supposed that people were running sqlsmith against HEAD on\n>> a pretty regular basis.\n\nLast time I ran it was November 27. I'm neglecting it on my spare time\nand there is hardly any opportunity to sneak it onto my agenda at work.\nI'll do my best to try to get either of these fixed.\n\nJustin Pryzby writes:\n> I think it's also becase sqlsmith would need to run against the v14 *client*\n> library. 
I don't know about anyone else's workflow, but I tend not to \"make\n> install\", but work with binaries out of ./tmp_install.\n\nMy playbooks don't grab the client libraries of the test target either.\nI'll change them.\n\n> There's a few changes needed on sqlsmith HEAD, but I guess nobody would have\n> gotten that far if the connection was failing (or rather, detected as such).\n\nThanks for the patch.\n\nregards,\nandreas\n\n\n", "msg_date": "Thu, 06 May 2021 22:24:21 +0200", "msg_from": "Andreas Seltenreich <seltenreich@gmx.de>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Thu, May 06, 2021 at 01:22:27PM -0400, Tom Lane wrote:\n> Hm. I'd debated whether to clear conn->errorMessage at the end of\n> a successful connection sequence, and decided not to on the grounds\n> that it might be interesting info (eg it could tell you why you\n> ended up connected to server Y and not server X). But perhaps\n> it's too much of a compatibility break for this small benefit.\n> \n> I'm curious though why it took this long for anyone to complain.\n> I'd supposed that people were running sqlsmith against HEAD on\n> a pretty regular basis.\n\nFWIW, I think that the case of getting some information about any\nfailed connections while a connection has been successfully made\nwithin the scope of the connection string parameters provided by the\nuser is rather thin, and I really feel that this is going to cause\nmore pain to users than this is worth it. 
So my vote would be to\nclean up conn->errorMessage after a successful connection.\n\nNow, I would not mind either if we finish by taking a decision here\nafter beta1, to see if there are actual complains on the matter based\non the feedback we get.\n--\nMichael", "msg_date": "Tue, 11 May 2021 16:29:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "> On 11 May 2021, at 09:29, Michael Paquier <michael@paquier.xyz> wrote:\n\n> FWIW, I think that the case of getting some information about any\n> failed connections while a connection has been successfully made\n> within the scope of the connection string parameters provided by the\n> user is rather thin, and I really feel that this is going to cause\n> more pain to users than this is worth it. So my vote would be to\n> clean up conn->errorMessage after a successful connection.\n\nAgreed, given how conservative we typically are with backwards compatibility it\nseems a too thin benefit to warrant potential breakage.\n\nMy vote would too be to restore the behavior by clearing conn->errorMessage.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n", "msg_date": "Mon, 17 May 2021 13:06:51 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Thu, May 06, 2021 at 01:22:27PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > 52a10224 broke sqlsmith, of all things.\n> \n> > It was using errmsg as a test of success, instead of checking if the connection\n> > result wasn't null:\n> \n> > conn = PQconnectdb(conninfo.c_str());\n> > char *errmsg = PQerrorMessage(conn);\n> > if (strlen(errmsg))\n> > throw dut::broken(errmsg, \"08001\");\n> \n> > That's clearly the wrong thing to do, but maybe this 
should be described in the\n> > release notes as a compatibility issue, in case other people had the same idea.\n> > Clearing errorMessage during success is an option..\n> \n> Hm. I'd debated whether to clear conn->errorMessage at the end of\n> a successful connection sequence, and decided not to on the grounds\n> that it might be interesting info (eg it could tell you why you\n> ended up connected to server Y and not server X). But perhaps\n> it's too much of a compatibility break for this small benefit.\n\nI don't care if applications break because they check the errorMessage instead\nof the return value...\n\n..But I think it's not useful to put details into errorMessage on success,\nunless you're going to document that. It would never have occurred to me to\nlook there, or that it would even be safe.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 30 May 2021 20:25:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "On Sun, May 30, 2021 at 08:25:00PM -0500, Justin Pryzby wrote:\n> ..But I think it's not useful to put details into errorMessage on success,\n> unless you're going to document that. It would never have occurred to me to\n> look there, or that it would even be safe.\n\nYeah. On the contrary, it could be confusing if one sees an error\nmessage but there is nothing to worry about, because things are\nworking in the scope of what the user wanted at connection time.\n--\nMichael", "msg_date": "Mon, 31 May 2021 11:00:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "On Mon, May 31, 2021 at 11:00:55AM +0900, Michael Paquier wrote:\n> Yeah. 
On the contrary, it could be confusing if one sees an error\n> message but there is nothing to worry about, because things are\n> working in the scope of what the user wanted at connection time.\n\nIn my recent quest to look at GSSAPI builds on Windows, I have bumped\ninto another failure that's related to this thread. hamerkop\nsummarizes the situation here:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2021-05-29%2010%3A15%3A42\n\nThere are two failures like this one as errorMessage piles up on\nfailures, as of connect/test5:\n-[NO_PID]: ECPGconnect: connection to server failed: FATAL: database\n \"regress_ecpg_user2\" does not exist\n+[NO_PID]: ECPGconnect: connection to server failed: could not\n initiate GSSAPI security context: Unspecified GSS failure. Minor\n code may provide more information: Credential cache is empty\n+connection to server failed: FATAL: database \"regress_ecpg_user2\"\n does not exist \n--\nMichael", "msg_date": "Mon, 31 May 2021 12:33:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> In my recent quest to look at GSSAPI builds on Windows, I have bumped\n> into another failure that's related to this thread. hamerkop\n> summarizes the situation here:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2021-05-29%2010%3A15%3A42\n> There are two failures like this one as errorMessage piles up on\n> failures, as of connect/test5:\n> -[NO_PID]: ECPGconnect: connection to server failed: FATAL: database\n> \"regress_ecpg_user2\" does not exist\n> +[NO_PID]: ECPGconnect: connection to server failed: could not\n> initiate GSSAPI security context: Unspecified GSS failure. 
Minor\n> code may provide more information: Credential cache is empty\n> +connection to server failed: FATAL: database \"regress_ecpg_user2\"\n> does not exist \n\nYeah, I was looking at that earlier today. Evidently libpq is\ntrying a GSS-encrypted connection, and that doesn't work, so\nit falls back to a regular connection where we get the expected\nerror. Probably all the connections in this test are hitting the\nGSS failure, but only the ones where the second attempt fails\nshow a visible issue.\n\nWhat is not clear is why GSS is acting that way. We wouldn't\nhave tried a GSS connection unless pg_GSS_have_cred_cache\nsucceeded ... so how come that worked but then gss_init_sec_context\ncomplained \"Credential cache is empty\"?\n\nMy rough guess is that Windows has implemented the GSS APIs in\nsuch a way that what pg_GSS_have_cred_cache is testing isn't\nsufficient to tell whether a sane credential actually exists.\n\nOr there's something particularly weird about how hamerkop\nis set up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 May 2021 00:05:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Mon, May 31, 2021 at 12:05:12AM -0400, Tom Lane wrote:\n> Yeah, I was looking at that earlier today. Evidently libpq is\n> trying a GSS-encrypted connection, and that doesn't work, so\n> it falls back to a regular connection where we get the expected\n> error. Probably all the connections in this test are hitting the\n> GSS failure, but only the ones where the second attempt fails\n> show a visible issue.\n\nYep. This wastes cycles.\n\n> What is not clear is why GSS is acting that way. We wouldn't\n> have tried a GSS connection unless pg_GSS_have_cred_cache\n> succeeded ... 
so how come that worked but then gss_init_sec_context\n> complained \"Credential cache is empty\"?\n> \n> My rough guess is that Windows has implemented the GSS APIs in\n> such a way that what pg_GSS_have_cred_cache is testing isn't\n> sufficient to tell whether a sane credential actually exists.\n> \n> Or there's something particularly weird about how hamerkop\n> is set up.\n\nI suspect that's just the way the upstream installation works with a\ncredentials cache created from the beginning, as I see the same\nbehavior and the same error on my own host for HEAD with a KRB5 server\nset up once the upstream installation runs. Leaving the specific\ntopic of this thread aside for one moment, would there be an argument\nfor just enforcing gssencmode=disable in this set of tests down to 12?\nIt is worth noting that the problem does not show up in 12 and 13 once\nthe compilation works, because we just mask the error there, but the\ncode path is still taken.\n\nAnother thing that strikes me as incorrect is that we don't unset\nPGGSSENCMODE or PGGSSLIB in TestLib.pm. Just noting it on the way..\n--\nMichael", "msg_date": "Mon, 31 May 2021 15:31:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 31, 2021 at 12:05:12AM -0400, Tom Lane wrote:\n>> What is not clear is why GSS is acting that way. We wouldn't\n>> have tried a GSS connection unless pg_GSS_have_cred_cache\n>> succeeded ... 
so how come that worked but then gss_init_sec_context\n>> complained \"Credential cache is empty\"?\n\n> I suspect that's just the way the upstream installation works with a\n> credentials cache created from the beginning, as I see the same\n> behavior and the same error on my own host for HEAD with a KRB5 server\n> set up once the upstream installation runs.\n\nInteresting --- I was considering running such a test locally, but\ndidn't get round to it yet.\n\n> Leaving the specific\n> topic of this thread aside for one moment, would there be an argument\n> for just enforcing gssencmode=disable in this set of tests down to 12?\n\nIt seems like the ideal solution would be to make pg_GSS_have_cred_cache\nsmarter, so that we don't attempt a GSS connection cycle here. But if\nwe can't, adding gssencmode=disable to these test cases is what I was\nthinking about, too.\n\n> Another thing that strikes me as incorrect is that we don't unset\n> PGGSSENCMODE or PGGSSLIB in TestLib.pm. Just noting it on the way..\n\nAgreed, that seems bogus.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 May 2021 09:36:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Mon, May 31, 2021 at 09:36:20AM -0400, Tom Lane wrote:\n> Interesting --- I was considering running such a test locally, but\n> didn't get round to it yet.\n\nJust to be clear, that's my Windows dev box.\n\n> It seems like the ideal solution would be to make pg_GSS_have_cred_cache\n> smarter, so that we don't attempt a GSS connection cycle here.\n\nI am not sure yet what would be adapted here. That requires diving a\nbit into the upstream code.\n\n>> Another thing that strikes me as incorrect is that we don't unset\n>> PGGSSENCMODE or PGGSSLIB in TestLib.pm. Just noting it on the way..\n>\n> Agreed, that seems bogus.\n\nThere may be others, and I have not checked yet. 
I'd rather do a\nbackpatch for this part, would you agree?\n--\nMichael", "msg_date": "Tue, 1 Jun 2021 09:56:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n>>> Another thing that strikes me as incorrect is that we don't unset\n>>> PGGSSENCMODE or PGGSSLIB in TestLib.pm. Just noting it on the way..\n\n>> Agreed, that seems bogus.\n\n> There may be others, and I have not checked yet. I'd rather do a\n> backpatch for this part, would you agree?\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 May 2021 21:07:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Mon, May 31, 2021 at 09:07:38PM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>>> Agreed, that seems bogus.\n> \n>> There may be others, and I have not checked yet. I'd rather do a\n>> backpatch for this part, would you agree?\n> \n> +1\n\nPlaying with all those variables and broken values here and there, I\nhave been able to break a bunch of tests. Most of the failures were\nin the authentication and SSL tests, but there were also fancier\ncases. For example, PGCLIENTENCODING would cause a failure with\npg_ctl, for any TAP test.\n\nI got surprised that enforcing values for most of the PGSSL* ones did\nnot cause a failure when it came to the certs, CRLs keys and root\ncerts now. Still, I think that we'd be safer to cancel these as\nwell.\n\nAttached is the list I am finishing with. 
I'd like to fix that, so\nplease let me know if there are any comments or objections.\n--\nMichael", "msg_date": "Tue, 1 Jun 2021 16:34:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "It seems like nobody's terribly interested in figuring out why\npg_GSS_have_cred_cache() is misbehaving on Windows. So I took\na look at disabling GSSENC in these test cases to try to silence\nhamerkop's test failure that way. Here's a proposed patch.\nIt relies on setenv() being available, but I think that's fine\nbecause we link the ECPG test programs with libpgport.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 06 Jun 2021 17:27:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Sun, Jun 06, 2021 at 05:27:49PM -0400, Tom Lane wrote:\n> It seems like nobody's terribly interested in figuring out why\n> pg_GSS_have_cred_cache() is misbehaving on Windows.\n\nI have been investigating that for a couple of hours in total, but\nnothing to report yet.\n\n> So I took\n> a look at disabling GSSENC in these test cases to try to silence\n> hamerkop's test failure that way. Here's a proposed patch.\n> It relies on setenv() being available, but I think that's fine\n> because we link the ECPG test programs with libpgport.\n\nNo, that's not it. The compilation of the tests happens when\ntriggering the tests as of ecpgcheck() in vcregress.pl so I think that\nthis is going to fail. 
This requires at least the addition of a\nreference to libpgport in ecpg_regression.proj, perhaps more.\n--\nMichael", "msg_date": "Mon, 7 Jun 2021 15:43:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Jun 06, 2021 at 05:27:49PM -0400, Tom Lane wrote:\n>> So I took\n>> a look at disabling GSSENC in these test cases to try to silence\n>> hamerkop's test failure that way. Here's a proposed patch.\n>> It relies on setenv() being available, but I think that's fine\n>> because we link the ECPG test programs with libpgport.\n\n> No, that's not it. The compilation of the tests happens when\n> triggering the tests as of ecpgcheck() in vcregress.pl so I think that\n> this is going to fail. This requires at least the addition of a\n> reference to libpgport in ecpg_regression.proj, perhaps more.\n\nHmm. We do include \"-lpgcommon -lpgport\" when building the ecpg test\nprograms on Unix, so I'd assumed that the MSVC scripts did the same.\nIs there a good reason not to make them do so?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Jun 2021 10:38:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Mon, Jun 07, 2021 at 10:38:03AM -0400, Tom Lane wrote:\n> Hmm. We do include \"-lpgcommon -lpgport\" when building the ecpg test\n> programs on Unix, so I'd assumed that the MSVC scripts did the same.\n> Is there a good reason not to make them do so?\n\nI was looking at that this morning, and yes we need to add more\nreferences here. 
Actually, adding only libpgport.lib allows the\ncompilation and the tests to work, but I agree to add also\nlibpgcommon.lib so as we don't fall into the same compilation trap\nagain in the future.\n\nNow, I also see that using pgwin32_setenv() instead of\nsrc/port/setenv.c causes cl to be confused once we update\necpg_regression.proj because it cannot find setenv(). Bringing the\nquestion, why is it necessary to have both setenv.c and\npgwin32_setenv() on HEAD? setenv.c should be enough once you have the\nfallback implementation of putenv() available.\n\nAttached is the patch I am finishing with, that also brings all this\nstuff closer to what I did in 12 and 13 for hamerkop. The failing\ntest is passing for me now with MSVC and GSSAPI builds.\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 8 Jun 2021 12:24:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Now, I also see that using pgwin32_setenv() instead of\n> src/port/setenv.c causes cl to be confused once we update\n> ecpg_regression.proj because it cannot find setenv(). Bringing the\n> question, why is it necessary to have both setenv.c and\n> pgwin32_setenv() on HEAD? setenv.c should be enough once you have the\n> fallback implementation of putenv() available.\n\nIIUC, what you are proposing to do is replace pgwin32_setenv with\nsrc/port/setenv.c. I don't think that's an improvement. 
setenv.c\nleaks memory on repeat calls, because it cannot know what\npgwin32_setenv knows about how putenv works on that platform.\n\nIt'd be okay to do it like that for the ECPG tests, perhaps,\nbecause we don't really care about small leaks in those.\nBut I don't want it to happen across-the-board.\n\nThinking more, the real problem is that use of libpgport\ngoes hand-in-hand with #including port.h; it's not going\nto work real well if you do one without the other.\nAnd I don't think we want to include port.h in the ECPG\ntest programs, because those are trying to model the\nenvironment that typical user applications see.\n\nAlternatives seem to be\n\n(1) allow just this one ECPG test to include port.h (or\nprobably c.h). However, there's a whole other can of worms\nthere, which is that I wonder if we aren't doing it wrong\non the Unix side by linking libpgport when we shouldn't.\nWe've not been bit by that yet, but I wonder if it isn't\njust a matter of time. The MSVC build, by not linking\nthose libs in the first place, is really doing this the\ncorrect way.\n\n(2) Let pg_regress_ecpg.c pass down the environment setting.\n\n(3) Don't try to use the environment variable for this\npurpose. I'd originally tried to change test5.pgc to just\nspecify gssmode=disable in-line, but that only works\nnicely for one of the two failing cases. 
The other one\nis testing the case of a completely defaulted connection\ntarget, so there's no place to add an option without\nbreaking the only unique aspect of that test case.\n\n(2) is starting to seem attractive now that we've seen\nthe downsides of (1) and (3).\n\n(BTW, I just noticed that regress.c is unsetenv'ing the\nSSL connection environment variables, but not the GSS ones.\nSeems like that needs work similar to 8279f68a1.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Jun 2021 11:21:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Tue, Jun 08, 2021 at 11:21:34AM -0400, Tom Lane wrote:\n> IIUC, what you are proposing to do is replace pgwin32_setenv with\n> src/port/setenv.c. I don't think that's an improvement. setenv.c\n> leaks memory on repeat calls, because it cannot know what\n> pgwin32_setenv knows about how putenv works on that platform.\n\nIs gaur the only animal that needs this file, by the way?\n\n> (1) allow just this one ECPG test to include port.h (or\n> probably c.h). However, there's a whole other can of worms\n> there, which is that I wonder if we aren't doing it wrong\n> on the Unix side by linking libpgport when we shouldn't.\n> We've not been bit by that yet, but I wonder if it isn't\n> just a matter of time. The MSVC build, by not linking\n> those libs in the first place, is really doing this the\n> correct way.\n\nI don't really want to include this stuff in the ECPG tests just to\nbypass an environment configuration.\n\n> (2) Let pg_regress_ecpg.c pass down the environment setting.\n> \n> (3) Don't try to use the environment variable for this\n> purpose. I'd originally tried to change test5.pgc to just\n> specify gssmode=disable in-line, but that only works\n> nicely for one of the two failing cases. 
The other one\n> is testing the case of a completely defaulted connection\n> target, so there's no place to add an option without\n> breaking the only unique aspect of that test case.\n\n> (2) is starting to seem attractive now that we've seen\n> the downsides of (1) and (3).\n\nFWIW, I'd be rather in favor of doing (3) because this remains simple\njust to take care of an edge case, even if that partially breaks the\npromise to rely on a default connection.\n\n(4) would be to revisit the decision to make libpq report all the\nerrors stored in its stack with multiple attempts. That would bring\nback the buildfarm to green at least, and we still need to take a\ndecision about that for 14 anyway as it involves a compatibility\nbreakage. But I agree that we also should do something for 12~ for\nthose tests.\n\n> (BTW, I just noticed that regress.c is unsetenv'ing the\n> SSL connection environment variables, but not the GSS ones.\n> Seems like that needs work similar to 8279f68a1.)\n\nYes, I saw that I was able to break things in many fancy ways when\nworking on 8279f68a1, but the list of parameters to reset needs to\ndiverge a bit compared to the TAP tests.\n--\nMichael", "msg_date": "Wed, 9 Jun 2021 10:52:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Jun 08, 2021 at 11:21:34AM -0400, Tom Lane wrote:\n>> IIUC, what you are proposing to do is replace pgwin32_setenv with\n>> src/port/setenv.c. I don't think that's an improvement. setenv.c\n>> leaks memory on repeat calls, because it cannot know what\n>> pgwin32_setenv knows about how putenv works on that platform.\n\n> Is gaur the only animal that needs this file, by the way?\n\nI think it is. setenv has been in POSIX for awhile, so probably\nonly very old systems would need that. 
(This is why I don't care\nthat much that setenv.c leaks memory. But we can't start using it\non platforms where we *do* care about performance.)\n\n>> (3) Don't try to use the environment variable for this\n>> purpose. I'd originally tried to change test5.pgc to just\n>> specify gssmode=disable in-line, but that only works\n>> nicely for one of the two failing cases. The other one\n>> is testing the case of a completely defaulted connection\n>> target, so there's no place to add an option without\n>> breaking the only unique aspect of that test case.\n\n> FWIW, I'd be rather in favor of doing (3) because this remains simple\n> just to take care of an edge case, even if that partially breaks the\n> promise to rely on a default connection.\n\nYeah, it doesn't seem like we need to test that case all that\nbadly. I'd be okay with dropping that test; or maybe we could\nfix things so that the default case succeeds?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Jun 2021 22:42:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "I wrote:\n> ... I'd be okay with dropping that test; or maybe we could\n> fix things so that the default case succeeds?\n\nHere's a draft patch that renames regress_ecpg_user2 to ecpg2_regression,\nwhich matches the name of one of the databases used, allowing the test\ncases with defaulted database name to succeed. That gets rid of one of\nthe problematic diffs. As it stood, though, that meant that connect/test5\nwasn't exercising the connection-failure code path at all, which didn't\nseem like what we want. 
So I adjusted the second place that had been\nfailing to again fail on no-such-database, and stuck in gssencmode=disable\nso that we shouldn't get any test diff on hamerkop.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 09 Jun 2021 12:05:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Wed, Jun 09, 2021 at 12:05:10PM -0400, Tom Lane wrote:\n> Here's a draft patch that renames regress_ecpg_user2 to ecpg2_regression,\n> which matches the name of one of the databases used, allowing the test\n> cases with defaulted database name to succeed. That gets rid of one of\n> the problematic diffs.\n\nYeah, I agree that this does not matter much for this one, as we want\nto stress the quotes and the grammar for the connections here, as\n99a5619 implies. It is good to check for the failure path as well, so\nwhat you have here looks fine to me.\n\n> As it stood, though, that meant that connect/test5\n> wasn't exercising the connection-failure code path at all, which didn't\n> seem like what we want. 
So I adjusted the second place that had been\n> failing to again fail on no-such-database, and stuck in gssencmode=disable\n> so that we shouldn't get any test diff on hamerkop.\n\nUsing ecpg2_regression for the role goes a bit against the recent rule\nto not create any role not prefixed by \"regress_\" as part of the\nregression tests, but I am fine to live with that here.\n\nThe changes for test1 with MinGW look right, I have not been able to\ntest them.\n--\nMichael", "msg_date": "Thu, 10 Jun 2021 09:46:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jun 09, 2021 at 12:05:10PM -0400, Tom Lane wrote:\n>> Here's a draft patch that renames regress_ecpg_user2 to ecpg2_regression,\n\n> Using ecpg2_regression for the role goes a bit against the recent rule\n> to not create any role not prefixed by \"regress_\" as part of the\n> regression tests, but I am fine to live with that here.\n\nOh dear, I forgot to check that carefully. 
I'd been thinking the rule was\nthat such names must *contain* \"regress\", but looking at user.c, it's\nstricter:\n\n#ifdef ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\tif (strncmp(stmt->role, \"regress_\", 8) != 0)\n\t\telog(WARNING, \"roles created by regression test cases should have names starting with \\\"regress_\\\"\");\n#endif\n\nMeanwhile, the rule for database names is:\n\n#ifdef ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\tif (IsUnderPostmaster && strstr(dbname, \"regression\") == NULL)\n\t\telog(WARNING, \"databases created by regression test cases should have names including \\\"regression\\\"\");\n#endif\n\nSo unless we want to relax one or both of those, we can't have a user\nname that matches the database name.\n\nNow I'm inclined to go back to the first-draft patch I had, which just\ndropped the first problematic test case, and added gssencmode=disable\nto the second one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Jun 2021 23:15:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "I wrote:\n> Now I'm inclined to go back to the first-draft patch I had, which just\n> dropped the first problematic test case, and added gssencmode=disable\n> to the second one.\n\nDone that way. 
If we figure out why the GSS code is acting strangely\non hamerkop, maybe this can be reverted --- but it seems like we already\nspent more time than is justified looking for band-aids.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Jun 2021 10:47:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, May 30, 2021 at 08:25:00PM -0500, Justin Pryzby wrote:\n>> ..But I think it's not useful to put details into errorMessage on success,\n>> unless you're going to document that. It would never have occurred to me to\n>> look there, or that it would even be safe.\n\n> Yeah. On the contrary, it could be confusing if one sees an error\n> message but there is nothing to worry about, because things are\n> working in the scope of what the user wanted at connection time.\n\nI got around to looking at this issue today, and verified that only one\nplace needs to be changed, as attached.\n\nAlthough I was initially thinking that maybe we should leave the code\nas-is, I now agree that resetting errorMessage is a good idea, because\nwhat tends to be in it at this point is something like\n\n\"connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: \"\n\n(ie the string made by emitHostIdentityInfo). Anybody who does\nlook at that is likely to be confused, because the connection\n*didn't* fail.\n\nThere might be some value in my original idea of preserving a trace of\nthe servers we tried before succeeding. 
But it would take additional\nwork to present it in a non-confusing way, and given the lack of any\nfield requests for that, I'm not excited about doing it right now.\n(One could also argue that it ought to get tied into the PQtrace\nfacilities somehow, rather than being done in this ad-hoc way.)\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 13 Sep 2021 16:09:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in non-hot\n standby mode" }, { "msg_contents": "On Mon, Sep 13, 2021 at 04:09:26PM -0400, Tom Lane wrote:\n> I got around to looking at this issue today, and verified that only one\n> place needs to be changed, as attached.\n\nThanks! This looks fine to me at quick glance.\n--\nMichael", "msg_date": "Tue, 14 Sep 2021 09:38:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Multiple hosts in connection string failed to failover in\n non-hot standby mode" } ]
[ { "msg_contents": "Hi hackers.\n\nThe example of test coverage in the documentation [1] works as advertised.\n\nBut I wanted to generate test coverage results only of some TAP tests\nin src/test/subscription.\n\nThe documentation [1] also says \"The make commands also work in\nsubdirectories.\" so I tried running them all in that folder.\n\nHowever, when I run \"make coverage-html\" in that subdirectory\nsrc/test/subscription it does not work:\n\n=====\n[postgres@CentOS7-x64 subscription]$ make coverage-html\n/usr/local/bin/lcov --gcov-tool /usr/bin/gcov -q --no-external -c -i\n-d . -d . -o lcov_base.info\ngeninfo: WARNING: no .gcno files found in . - skipping!\ngeninfo: WARNING: no .gcno files found in . - skipping!\n/usr/local/bin/lcov --gcov-tool /usr/bin/gcov -q --no-external -c -d .\n-d . -o lcov_test.info\ngeninfo: WARNING: no .gcda files found in . - skipping!\ngeninfo: WARNING: no .gcda files found in . - skipping!\nrm -rf coverage\n/usr/local/bin/genhtml -q --legend -o coverage --title='PostgreSQL\n14devel' --num-spaces=4 --prefix='/home/postgres/oss_postgres_2PC'\nlcov_base.info lcov_test.info\ngenhtml: ERROR: no valid records found in tracefile lcov_base.info\nmake: *** [coverage-html-stamp] Error 255\n[postgres@CentOS7-x64 subscription]$\n=====\n\nOTOH, running the \"make coverage-html\" at the top folder after running\nmy TAP tests does give the desired coverage results.\n\n~\n\nQUESTION:\n\nWas that documentation [1] just being misleading by saying it can work\nin the subdirectories?\ne.g. 
Are you only supposed to run \"make coverage-html\" from the top folder?\n\nOr is it supposed to work but I did something wrong?\n\n--\n\n[1] https://www.postgresql.org/docs/13/regress-coverage.html\n\nKind Regards.\nPeter Smith\nFujitsu Australia.\n\n\n", "msg_date": "Tue, 27 Oct 2020 19:09:30 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Question about make coverage-html" }, { "msg_contents": "On 27/10/2020 10:09, Peter Smith wrote:\n> Hi hackers.\n> \n> The example of test coverage in the documentation [1] works as advertised.\n> \n> But I wanted to generate test coverage results only of some TAP tests\n> in src/test/subscription.\n> \n> The documentation [1] also says \"The make commands also work in\n> subdirectories.\" so I tried running them all in that folder.\n> \n> However, when I run \"make coverage-html\" in that subdirectory\n> src/test/subscription it does not work:\n> \n> =====\n> [postgres@CentOS7-x64 subscription]$ make coverage-html\n> /usr/local/bin/lcov --gcov-tool /usr/bin/gcov -q --no-external -c -i\n> -d . -d . -o lcov_base.info\n> geninfo: WARNING: no .gcno files found in . - skipping!\n> geninfo: WARNING: no .gcno files found in . - skipping!\n> /usr/local/bin/lcov --gcov-tool /usr/bin/gcov -q --no-external -c -d .\n> -d . -o lcov_test.info\n> geninfo: WARNING: no .gcda files found in . - skipping!\n> geninfo: WARNING: no .gcda files found in . 
- skipping!\n> rm -rf coverage\n> /usr/local/bin/genhtml -q --legend -o coverage --title='PostgreSQL\n> 14devel' --num-spaces=4 --prefix='/home/postgres/oss_postgres_2PC'\n> lcov_base.info lcov_test.info\n> genhtml: ERROR: no valid records found in tracefile lcov_base.info\n> make: *** [coverage-html-stamp] Error 255\n> [postgres@CentOS7-x64 subscription]$\n> =====\n> \n> OTOH, running the \"make coverage-html\" at the top folder after running\n> my TAP tests does give the desired coverage results.\n> \n> ~\n> \n> QUESTION:\n> \n> Was that documentation [1] just being misleading by saying it can work\n> in the subdirectories?\n> e.g. Are you only supposed to run \"make coverage-html\" from the top folder?\n> \n> Or is it supposed to work but I did something wrong?\n\nRunning \"make coverage-html\" in src/test/subscription doesn't work, \nbecause there is no C code in that directory.\n\nCreating a coverage report is a two-step process. First, you run the \ntest you're interested in, with \"make check\" or similar. Then you create \na report for the source files you're interested in, with \"make \ncoverage-html\". You can run these commands in different subdirectories.\n\nIn this case, you want to do \"cd src/test/subscription; make check\", to \nrun those TAP tests, and then run \"make coverage-html\" from the top \nfolder. 
Or if you wanted to create a coverage report that covers only \nreplication-related source code, for example, you could run it in the \nsrc/backend/replication directory (\"cd src/backend/replication; make \ncoverage-html\").\n\n- Heikki\n\n\n", "msg_date": "Tue, 27 Oct 2020 11:17:19 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Question about make coverage-html" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 27/10/2020 10:09, Peter Smith wrote:\n>> The documentation [1] also says \"The make commands also work in\n>> subdirectories.\" so I tried running them all in that folder.\n>> However, when I run \"make coverage-html\" in that subdirectory\n>> src/test/subscription it does not work:\n\n> Creating a coverage report is a two-step process. First, you run the \n> test you're interested in, with \"make check\" or similar. Then you create \n> a report for the source files you're interested in, with \"make \n> coverage-html\". You can run these commands in different subdirectories.\n\n> In this case, you want to do \"cd src/test/subscription; make check\", to \n> run those TAP tests, and then run \"make coverage-html\" from the top \n> folder. Or if you wanted to create a coverage report that covers only \n> replication-related source code, for example, you could run it in the \n> src/backend/replication directory (\"cd src/backend/replication; make \n> coverage-html\").\n\nI agree with the OP that the documentation is a bit vague here.\nI think (maybe I'm wrong) that it's clear enough that you can run\nwhichever test case(s) you want, but this behavior of generating a\npartial coverage report is less clear. 
Maybe instead of\n\n\tThe \"make\" commands also work in subdirectories.\n\nwe could say\n\n\tYou can run the \"make coverage-html\" command in a subdirectory\n\tif you want a coverage report for only a portion of the code tree.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Oct 2020 10:19:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question about make coverage-html" }, { "msg_contents": "> > Creating a coverage report is a two-step process. First, you run the\n> > test you're interested in, with \"make check\" or similar. Then you create\n> > a report for the source files you're interested in, with \"make\n> > coverage-html\". You can run these commands in different subdirectories.\n>\n> > In this case, you want to do \"cd src/test/subscription; make check\", to\n> > run those TAP tests, and then run \"make coverage-html\" from the top\n> > folder. Or if you wanted to create coverage report that covers only\n> > replication-related source code, for example, you could run it in the\n> > src/backend/replication directory (\"cd src/backend/replication; make\n> > coverage-html\").\n>\n> I agree with the OP that the documentation is a bit vague here.\n> I think (maybe I'm wrong) that it's clear enough that you can run\n> whichever test case(s) you want, but this behavior of generating a\n> partial coverage report is less clear. Maybe instead of\n>\n> The \"make\" commands also work in subdirectories.\n>\n> we could say\n>\n> You can run the \"make coverage-html\" command in a subdirectory\n> if you want a coverage report for only a portion of the code tree.\n\nThank you for the clarifications and the updated documentation.\n\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n", "msg_date": "Wed, 28 Oct 2020 10:04:20 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about make coverage-html" } ]