[
{
"msg_contents": "Hi hackers,\n\nWhile reading codes related with logical decoding,\nI thought that following comment in rmgrlist.h is not consistent.\n\n> /* symbol name, textual name, redo, desc, identify, startup, cleanup */\n\nThis comment describes a set of APIs that the resource manager should have, but functions for {mask, decode} are missed here.\n\nDid we have any reasons for that? I thought it might be not friendly, so I attached a patch.\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Mon, 29 Aug 2022 03:18:14 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "patch: Add missing descriptions for rmgr APIs"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 8:48 AM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Hi hackers,\n>\n> While reading codes related with logical decoding,\n> I thought that following comment in rmgrlist.h is not consistent.\n>\n> > /* symbol name, textual name, redo, desc, identify, startup, cleanup */\n>\n> This comment describes a set of APIs that the resource manager should have, but functions for {mask, decode} are missed here.\n>\n\nYour observation seems correct to me but you have not updated the\ncomment for the mask. Is there a reason for the same?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 29 Aug 2022 17:41:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: patch: Add missing descriptions for rmgr APIs"
},
{
"msg_contents": "> Your observation seems correct to me but you have not updated the\r\n> comment for the mask. Is there a reason for the same?\r\n\r\nOh, it seems that I attached wrong one. There is no reason.\r\nPSA the newer version.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 29 Aug 2022 12:49:22 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: patch: Add missing descriptions for rmgr APIs"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 6:19 PM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > Your observation seems correct to me but you have not updated the\n> > comment for the mask. Is there a reason for the same?\n>\n> Oh, it seems that I attached wrong one. There is no reason.\n> PSA the newer version.\n>\n\nLGTM. Pushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 30 Aug 2022 12:14:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: patch: Add missing descriptions for rmgr APIs"
}
]
[
{
"msg_contents": "Hi all,\n\nRFC9266, that has been released not so long ago, has added\ntls-exporter as a new channel binding type:\nhttps://www.rfc-editor.org/rfc/rfc5929.html\n\nAn advantage over tls-server-end-point, AFAIK, is that this prevents\nman-in-the-middle attacks even if the attacker holds the server's\nprivate key, which was the kind of job that tls-unique does for\nTLSv1.2, though we've decided at the end to drop it during the PG11\ndev cycle because it does things poorly.\n\nThis patch provides an implementation, tests and documentation for the\nso-said feature. An environment variable called PGCHANNELBINDINGTYPE\nis added, as well as new connection parameter called\nchannel_binding_type. The key point of the implementation is\nSSL_export_keying_material(), that is available down to 1.0.1 (oldest\nversion supported on HEAD), so this should not require a ./configure\ncheck.\n\nPerhaps the part about the new libpq parameter could be refactored as\nof its own patch, with the addition of channel_binding_type in the\nSCRAM status structures. Note also that tls-exporter is aimed for\nTLSv1.3 and newer protocols, but OpenSSL allows the thing to work with\nolder protocols (testable with ssl_max_protocol_version, for example),\nand I don't see a need to prevent this scenario. An extra thing is\nthat attempting to use tls-exporter with a backend <= 15 and a client\n>= 16 causes a failure during the SASL exchange, where the backend\ncomplains about tls-exporter being unsupported.\n\nJacob Champion should be considered as the primary author of the\npatch, even if I have spent some time on this patch before sending it\nhere. I am adding that to the next commit fest.\n\nThanks,\n--\nMichael",
"msg_date": "Mon, 29 Aug 2022 15:02:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 11:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n> RFC9266, that has been released not so long ago, has added\n> tls-exporter as a new channel binding type:\n> https://www.rfc-editor.org/rfc/rfc5929.html\n\nHi Michael, thank you for sending this!\n\n> Note also that tls-exporter is aimed for\n> TLSv1.3 and newer protocols, but OpenSSL allows the thing to work with\n> older protocols (testable with ssl_max_protocol_version, for example),\n> and I don't see a need to prevent this scenario.\n\nFor protocols less than 1.3 we'll need to ensure that the extended\nmaster secret is in use:\n\n This channel binding mechanism is defined only when the TLS handshake\n results in unique master secrets. This is true of TLS versions prior\n to 1.3 when the extended master secret extension of [RFC7627] is in\n use, and it is always true for TLS 1.3 (see Appendix D of [RFC8446]).\n\nOpenSSL should have an API for that (SSL_get_extms_support); I don't\nknow when it was introduced.\n\nIf we want to cross all our T's, we should also disallow tls-exporter\nif the server was unable to set SSL_OP_NO_RENEGOTIATION.\n\n> An extra thing is\n> that attempting to use tls-exporter with a backend <= 15 and a client\n> >= 16 causes a failure during the SASL exchange, where the backend\n> complains about tls-exporter being unsupported.\n\nYep.\n\n--\n\nDid you have any thoughts about contributing the Python tests (or\nporting them to Perl, bleh) so that we could test failure modes as\nwell? Unfortunately those Python tests were also OpenSSL-based, so\nthey're less powerful than an independent implementation...\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Wed, 31 Aug 2022 16:16:29 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 04:16:29PM -0700, Jacob Champion wrote:\n> For protocols less than 1.3 we'll need to ensure that the extended\n> master secret is in use:\n> \n> This channel binding mechanism is defined only when the TLS handshake\n> results in unique master secrets. This is true of TLS versions prior\n> to 1.3 when the extended master secret extension of [RFC7627] is in\n> use, and it is always true for TLS 1.3 (see Appendix D of [RFC8446]).\n> \n> OpenSSL should have an API for that (SSL_get_extms_support); I don't\n> know when it was introduced.\n\nThis is only available from 1.1.0, meaning that we'd better disable\ntls-exporter when building with something older than that :( With\n1.0.2 already not supported by upstream even if a bunch of vendors\nkeep it around for compatibility, I guess that's fine as long as\nthe default setting is tls-server-end-point. It would not be complex\nto switch to tls-exporter by default when using TLSv1.3 and\ntls-server-end-point for TLS <= v1.2 though, but that makes the code\nmore complicated and OpenSSL does not complain with tls-exporter when\nusing < 1.3. If we switch the default on the fly, we could drop\nchannel_binding_type and control which one gets used depending on\nssl_max/min_server_protocol. I don't like that much, TBH, as this\ncreates more dependencies across our the internal code with the\ninitial choice of the connection parameters, making it more complex to\nmaintain in the long-term. At least that's my impression.\n\n> If we want to cross all our T's, we should also disallow tls-exporter\n> if the server was unable to set SSL_OP_NO_RENEGOTIATION.\n\nHmm. Okay. I have not considered that. But TLSv1.3 has no support\nfor renegotiation, isn't it? And you mean to fail hard when using\nTLS <= v1.2 with tls-exporter on the backend's SSL_CTX_set_options()\ncall? 
We cannot do that as the backend's SSL context is initialized\nbefore authentication, but we could re-check the state of the SSL\noptions afterwards, during authentication, and force a failure.\n\n> Did you have any thoughts about contributing the Python tests (or\n> porting them to Perl, bleh) so that we could test failure modes as\n> well? Unfortunately those Python tests were also OpenSSL-based, so\n> they're less powerful than an independent implementation...\n\nNo. I have not been able to look at that with the time I had,\nunfortunately.\n--\nMichael",
"msg_date": "Thu, 1 Sep 2022 09:57:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 5:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Aug 31, 2022 at 04:16:29PM -0700, Jacob Champion wrote:\n> > OpenSSL should have an API for that (SSL_get_extms_support); I don't\n> > know when it was introduced.\n>\n> This is only available from 1.1.0, meaning that we'd better disable\n> tls-exporter when building with something older than that :( With\n> 1.0.2 already not supported by upstream even if a bunch of vendors\n> keep it around for compatibility, I guess that's fine as long as\n> the default setting is tls-server-end-point.\n\nYeah, that should be fine. Requiring newer OpenSSLs for stronger\ncrypto will probably be uncontroversial.\n\n> It would not be complex\n> to switch to tls-exporter by default when using TLSv1.3 and\n> tls-server-end-point for TLS <= v1.2 though, but that makes the code\n> more complicated and OpenSSL does not complain with tls-exporter when\n> using < 1.3. If we switch the default on the fly, we could drop\n> channel_binding_type and control which one gets used depending on\n> ssl_max/min_server_protocol. I don't like that much, TBH, as this\n> creates more dependencies across our the internal code with the\n> initial choice of the connection parameters, making it more complex to\n> maintain in the long-term. At least that's my impression.\n\nI think there are two separate concerns there -- whether to remove the\nconfiguration option, and whether to change the default.\n\nI definitely wouldn't want to remove the option. Users of TLS 1.2\nshould be able to choose tls-exporter if they want the extra power,\nand users of TLS 1.3 should be able to remain on tls-server-end-point\nif they need it in the future.\n\nChanging the default is trickier. tls-server-end-point is our default\nin the wild. We're not RFC-compliant already, because we don't\nimplement tls-unique at all. 
And there's no negotiation, so it seems\nlike switching the default for TLS 1.3 would impact interoperability\nbetween newer clients and older servers. The advantage would be that\nusers of newer clients would have to opt in before servers could\nforward their credentials around on their behalf. Maybe that's\nsomething we could switch safely in the future, once tls-exporter is\nmore widely deployed?\n\n> > If we want to cross all our T's, we should also disallow tls-exporter\n> > if the server was unable to set SSL_OP_NO_RENEGOTIATION.\n>\n> Hmm. Okay. I have not considered that. But TLSv1.3 has no support\n> for renegotiation, isn't it?\n\nRight. We only need to worry about that when we're using an older TLS.\n\n> And you mean to fail hard when using\n> TLS <= v1.2 with tls-exporter on the backend's SSL_CTX_set_options()\n> call? We cannot do that as the backend's SSL context is initialized\n> before authentication, but we could re-check the state of the SSL\n> options afterwards, during authentication, and force a failure.\n\nSounds reasonable.\n\n> > Did you have any thoughts about contributing the Python tests (or\n> > porting them to Perl, bleh) so that we could test failure modes as\n> > well? Unfortunately those Python tests were also OpenSSL-based, so\n> > they're less powerful than an independent implementation...\n>\n> No. I have not been able to look at that with the time I had,\n> unfortunately.\n\nAll good. Thanks!\n\n--Jacob\n\n\n",
"msg_date": "Wed, 7 Sep 2022 10:03:41 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 10:03 AM Jacob Champion <jchampion@timescale.com> wrote:\n> Yeah, that should be fine. Requiring newer OpenSSLs for stronger\n> crypto will probably be uncontroversial.\n\nWhile looking into this I noticed that I left the following code in place:\n\n> #ifdef HAVE_BE_TLS_GET_CERTIFICATE_HASH\n> if (strcmp(selected_mech, SCRAM_SHA_256_PLUS_NAME) == 0 && port->ssl_in_use)\n\nIn other words, we're still deciding whether to advertise -PLUS based\nonly on whether we support tls-server-end-point. Maybe all the\nnecessary features landed in OpenSSL in the same version, but I\nhaven't double-checked that, and in any case I think I need to make\nthis code more correct in the next version of this patch.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 19 Sep 2022 09:27:41 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 09:27:41AM -0700, Jacob Champion wrote:\n> While looking into this I noticed that I left the following code in place:\n> \n>> #ifdef HAVE_BE_TLS_GET_CERTIFICATE_HASH\n>> if (strcmp(selected_mech, SCRAM_SHA_256_PLUS_NAME) == 0 && port->ssl_in_use)\n> \n> In other words, we're still deciding whether to advertise -PLUS based\n> only on whether we support tls-server-end-point.\n\nX509_get_signature_nid() has been introduced in 1.0.2.\nSSL_export_keying_material() is older than that, being present since\n1.0.1. Considering the fact that we want to always have\ntls-server-end-point as default, it seems to me that we should always\npublish SCRAM-SHA-256-PLUS and support channel binding when building\nwith OpenSSL >= 1.0.2. The same can be said about the areas where we\nhave HAVE_BE_TLS_GET_[PEER_]CERTIFICATE_HASH.\n\nThere could be a point in supporting tls-exporter as default in 1.0.1,\nor just allow it if the caller gives it in the connection string, but\nas that's the next version we are going to drop support for (sooner\nthan later would be better IMO), I don't really want to add more\nmaintenance burden in this area as 1.0.1 is not that used anyway as\nfar as I recall.\n\n> Maybe all the necessary features landed in OpenSSL in the same\n> version, but I haven't double-checked that, and in any case I think\n> I need to make this code more correct in the next version of this\n> patch.\n\nI was planning to continue working on this patch based on your latest\nreview. Anyway, as that's originally your code, I am fine to let you\ntake the lead here. Just let me know which way you prefer, and I'll\nstick to it :)\n--\nMichael",
"msg_date": "Tue, 20 Sep 2022 09:53:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 5:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n> X509_get_signature_nid() has been introduced in 1.0.2.\n> SSL_export_keying_material() is older than that, being present since\n> 1.0.1. Considering the fact that we want to always have\n> tls-server-end-point as default, it seems to me that we should always\n> publish SCRAM-SHA-256-PLUS and support channel binding when building\n> with OpenSSL >= 1.0.2. The same can be said about the areas where we\n> have HAVE_BE_TLS_GET_[PEER_]CERTIFICATE_HASH.\n\nShould we advertise support even if the client is too old to provide\nan extended master secret?\n\n> I was planning to continue working on this patch based on your latest\n> review. Anyway, as that's originally your code, I am fine to let you\n> take the lead here. Just let me know which way you prefer, and I'll\n> stick to it :)\n\nWell, I'm working on a next version, but it's ballooning in complexity\nas I try to navigate the fix for OpenSSL 1.0.1 (which is currently\nfailing the tests, unsurprisingly). You mentioned not wanting to add\nmaintenance burden for 1.0.1, and I'm curious to see whether the\napproach you have in mind would be easier than what mine is turning\nout to be. Maybe we can compare notes?\n\n--Jacob\n\n\n",
"msg_date": "Tue, 20 Sep 2022 11:01:29 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 11:01 AM Jacob Champion <jchampion@timescale.com> wrote:\n> Well, I'm working on a next version, but it's ballooning in complexity\n> as I try to navigate the fix for OpenSSL 1.0.1 (which is currently\n> failing the tests, unsurprisingly).\n\nTo be more specific: I think I'm hitting the case that Heikki pointed\nout several years ago [1]:\n\n> The problematic case is when e.g. the server\n> only supports tls-unique and the client only supports\n> tls-server-end-point. What we would (usually) like to happen, is to fall\n> back to not using channel binding. But it's not clear how to make that\n> work, and still protect from downgrade attacks.\n\nThe problem was deferred when tls-unique was removed. We might have to\nactually solve it now.\n\nbcc: Heikki, in case he would like to weigh in.\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/ec787074-2305-c6f4-86aa-6902f98485a4%40iki.fi\n\n\n",
"msg_date": "Tue, 20 Sep 2022 11:51:44 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 11:51:44AM -0700, Jacob Champion wrote:\n> On Tue, Sep 20, 2022 at 11:01 AM Jacob Champion <jchampion@timescale.com> wrote:\n>> Well, I'm working on a next version, but it's ballooning in complexity\n>> as I try to navigate the fix for OpenSSL 1.0.1 (which is currently\n>> failing the tests, unsurprisingly).\n> \n> To be more specific: I think I'm hitting the case that Heikki pointed\n> out several years ago [1]:\n> \n>> The problematic case is when e.g. the server\n>> only supports tls-unique and the client only supports\n>> tls-server-end-point. What we would (usually) like to happen, is to fall\n>> back to not using channel binding. But it's not clear how to make that\n>> work, and still protect from downgrade attacks.\n> \n> The problem was deferred when tls-unique was removed. We might have to\n> actually solve it now.\n\nRight. One thing that would reduce the complexity of the equation is\nto drop support for tls-server-end-point in the backend in PG >= 16 as\nthe versions of OpenSSL we still support on HEAD would cover\ncompletely tls-exporter.\n\nWe should have in libpq the code to support both tls-server-end-point\nand tls-exporter as channel bindings, for backward-compatibility. If\nwe were to drop support for OpenSSL 1.0.1, things get a bit easier\nhere, as we would be sure that channel binding is supported by all the\ncode paths of libpq. Having both channel_binding_type with\nchannel_binding=require would offer some protection against downgrade\nattacks. That does not feel completely water-proof, still default\nsettings like sslmode=prefer are not really secure either..\n--\nMichael",
"msg_date": "Thu, 13 Oct 2022 15:00:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 11:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n> One thing that would reduce the complexity of the equation is\n> to drop support for tls-server-end-point in the backend in PG >= 16 as\n> the versions of OpenSSL we still support on HEAD would cover\n> completely tls-exporter.\n\nIs the intent to backport tls-exporter client support? Or is the\ncompatibility break otherwise acceptable?\n\nIt seemed like there was also some general interest in proxy TLS\ntermination (see also the PROXY effort, though it has stalled a bit).\nFor that, I would think tls-server-end-point is an important feature.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 13 Oct 2022 10:30:37 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Thu, Oct 13, 2022 at 10:30:37AM -0700, Jacob Champion wrote:\n> Is the intent to backport tls-exporter client support? Or is the\n> compatibility break otherwise acceptable?\n\nWell, I'd rather say yes thanks to the complexity avoided in the\nbackend as that's the most sensible piece security-wise, even if we\nalways require a certificate to exist in PG. A server attempting to\ntrick a client in downgrading would still be a problem :/\n\ntls-exporter would be a new feature, so backporting is out of the\npicture.\n\n> It seemed like there was also some general interest in proxy TLS\n> termination (see also the PROXY effort, though it has stalled a bit).\n> For that, I would think tls-server-end-point is an important feature.\n\nOh, okay. That's an argument in favor of not doing that, then.\nPerhaps we'd better revisit the introduction of tls-exporter once we\nknow more about all that, and it looks like we would need a way to be\nable to negotiate which channel binding to use (I recall that the\nsurrounding RFCs allowed some extra negotiation, vaguely, but my\nimpression may be wrong).\n--\nMichael",
"msg_date": "Fri, 14 Oct 2022 11:00:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 11:00:10AM +0900, Michael Paquier wrote:\n> Oh, okay. That's an argument in favor of not doing that, then.\n> Perhaps we'd better revisit the introduction of tls-exporter once we\n> know more about all that, and it looks like we would need a way to be\n> able to negotiate which channel binding to use (I recall that the\n> surrounding RFCs allowed some extra negotiation, vaguely, but my\n> impression may be wrong).\n\nI am not sure what can be done for that now, so I have marked the\npatch as returned with feedback.\n--\nMichael",
"msg_date": "Wed, 30 Nov 2022 15:52:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Support tls-exporter as channel binding for TLSv1.3"
}
]
[
{
"msg_contents": "This brings the bonus of support jitting on riscv64 (included in this patch)\nand other platforms Rtdyld doesn't support, e.g. windows COFF.\n\nCurrently, llvm doesn't expose jitlink (ObjectLinkingLayer) via C API, so\na wrapper is added. This also adds minor llvm 15 compat fix that is needed\n---\n config/llvm.m4 | 1 +\n src/backend/jit/llvm/llvmjit.c | 67 +++++++++++++++++++++++++--\n src/backend/jit/llvm/llvmjit_wrap.cpp | 35 ++++++++++++++\n src/include/jit/llvmjit.h | 9 ++++\n 4 files changed, 108 insertions(+), 4 deletions(-)\n\ndiff --git a/config/llvm.m4 b/config/llvm.m4\nindex 3a75cd8b4d..a31b8b304a 100644\n--- a/config/llvm.m4\n+++ b/config/llvm.m4\n@@ -75,6 +75,7 @@ AC_DEFUN([PGAC_LLVM_SUPPORT],\n engine) pgac_components=\"$pgac_components $pgac_component\";;\n debuginfodwarf) pgac_components=\"$pgac_components $pgac_component\";;\n orcjit) pgac_components=\"$pgac_components $pgac_component\";;\n+ jitlink) pgac_components=\"$pgac_components $pgac_component\";;\n passes) pgac_components=\"$pgac_components $pgac_component\";;\n native) pgac_components=\"$pgac_components $pgac_component\";;\n perfjitevents) pgac_components=\"$pgac_components $pgac_component\";;\ndiff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c\nindex 6c72d43beb..d8b840da8c 100644\n--- a/src/backend/jit/llvm/llvmjit.c\n+++ b/src/backend/jit/llvm/llvmjit.c\n@@ -229,6 +229,11 @@ llvm_release_context(JitContext *context)\n LLVMModuleRef\n llvm_mutable_module(LLVMJitContext *context)\n {\n+#ifdef __riscv\n+\tconst char* abiname;\n+\tconst char* target_abi = \"target-abi\";\n+\tLLVMMetadataRef abi_metadata;\n+#endif\n \tllvm_assert_in_fatal_section();\n \n \t/*\n@@ -241,6 +246,40 @@ llvm_mutable_module(LLVMJitContext *context)\n \t\tcontext->module = LLVMModuleCreateWithName(\"pg\");\n \t\tLLVMSetTarget(context->module, llvm_triple);\n \t\tLLVMSetDataLayout(context->module, llvm_layout);\n+#ifdef __riscv\n+#if __riscv_xlen == 64\n+#ifdef 
__riscv_float_abi_double\n+\t\tabiname = \"lp64d\";\n+#elif defined(__riscv_float_abi_single)\n+\t\tabiname = \"lp64f\";\n+#else\n+\t\tabiname = \"lp64\";\n+#endif\n+#elif __riscv_xlen == 32\n+#ifdef __riscv_float_abi_double\n+\t\tabiname = \"ilp32d\";\n+#elif defined(__riscv_float_abi_single)\n+\t\tabiname = \"ilp32f\";\n+#else\n+\t\tabiname = \"ilp32\";\n+#endif\n+#else\n+\t\telog(ERROR, \"unsupported riscv xlen %d\", __riscv_xlen);\n+#endif\n+\t\t/*\n+\t\t * set this manually to avoid llvm defaulting to soft float and\n+\t\t * resulting in linker error: `can't link double-float modules\n+\t\t * with soft-float modules`\n+\t\t * we could set this for TargetMachine via MCOptions, but there\n+\t\t * is no C API for it\n+\t\t * ref: https://github.com/llvm/llvm-project/blob/afa520ab34803c82587ea6759bfd352579f741b4/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp#L90\n+\t\t */\n+\t\tabi_metadata = LLVMMDStringInContext2(\n+\t\t\tLLVMGetModuleContext(context->module),\n+\t\t\tabiname, strlen(abiname));\n+\t\tLLVMAddModuleFlag(context->module, LLVMModuleFlagBehaviorOverride,\n+\t\t\ttarget_abi, strlen(target_abi), abi_metadata);\n+#endif\n \t}\n \n \treturn context->module;\n@@ -786,6 +825,8 @@ llvm_session_initialize(void)\n \tchar\t *error = NULL;\n \tchar\t *cpu = NULL;\n \tchar\t *features = NULL;\n+\tLLVMRelocMode reloc=LLVMRelocDefault;\n+\tLLVMCodeModel codemodel=LLVMCodeModelJITDefault;\n \tLLVMTargetMachineRef opt0_tm;\n \tLLVMTargetMachineRef opt3_tm;\n \n@@ -820,16 +861,21 @@ llvm_session_initialize(void)\n \telog(DEBUG2, \"LLVMJIT detected CPU \\\"%s\\\", with features \\\"%s\\\"\",\n \t\t cpu, features);\n \n+#ifdef __riscv\n+\treloc=LLVMRelocPIC;\n+\tcodemodel=LLVMCodeModelMedium;\n+#endif\n+\n \topt0_tm =\n \t\tLLVMCreateTargetMachine(llvm_targetref, llvm_triple, cpu, features,\n \t\t\t\t\t\t\t\tLLVMCodeGenLevelNone,\n-\t\t\t\t\t\t\t\tLLVMRelocDefault,\n-\t\t\t\t\t\t\t\tLLVMCodeModelJITDefault);\n+\t\t\t\t\t\t\t\treloc,\n+\t\t\t\t\t\t\t\tcodemodel);\n 
\topt3_tm =\n \t\tLLVMCreateTargetMachine(llvm_targetref, llvm_triple, cpu, features,\n \t\t\t\t\t\t\t\tLLVMCodeGenLevelAggressive,\n-\t\t\t\t\t\t\t\tLLVMRelocDefault,\n-\t\t\t\t\t\t\t\tLLVMCodeModelJITDefault);\n+\t\t\t\t\t\t\t\treloc,\n+\t\t\t\t\t\t\t\tcodemodel);\n \n \tLLVMDisposeMessage(cpu);\n \tcpu = NULL;\n@@ -1112,7 +1158,11 @@ llvm_resolve_symbols(LLVMOrcDefinitionGeneratorRef GeneratorObj, void *Ctx,\n \t\t\t\t\t LLVMOrcJITDylibRef JD, LLVMOrcJITDylibLookupFlags JDLookupFlags,\n \t\t\t\t\t LLVMOrcCLookupSet LookupSet, size_t LookupSetSize)\n {\n+#if LLVM_VERSION_MAJOR > 14\n+\tLLVMOrcCSymbolMapPairs symbols = palloc0(sizeof(LLVMOrcCSymbolMapPair) * LookupSetSize);\n+#else\n \tLLVMOrcCSymbolMapPairs symbols = palloc0(sizeof(LLVMJITCSymbolMapPair) * LookupSetSize);\n+#endif\n \tLLVMErrorRef error;\n \tLLVMOrcMaterializationUnitRef mu;\n \n@@ -1160,6 +1210,10 @@ llvm_log_jit_error(void *ctx, LLVMErrorRef error)\n static LLVMOrcObjectLayerRef\n llvm_create_object_layer(void *Ctx, LLVMOrcExecutionSessionRef ES, const char *Triple)\n {\n+#if defined(USE_JITLINK)\n+\tLLVMOrcObjectLayerRef objlayer =\n+\tLLVMOrcCreateJitlinkObjectLinkingLayer(ES);\n+#else\n \tLLVMOrcObjectLayerRef objlayer =\n \tLLVMOrcCreateRTDyldObjectLinkingLayerWithSectionMemoryManager(ES);\n \n@@ -1179,6 +1233,7 @@ llvm_create_object_layer(void *Ctx, LLVMOrcExecutionSessionRef ES, const char *T\n \n \t\tLLVMOrcRTDyldObjectLinkingLayerRegisterJITEventListener(objlayer, l);\n \t}\n+#endif\n #endif\n \n \treturn objlayer;\n@@ -1230,7 +1285,11 @@ llvm_create_jit_instance(LLVMTargetMachineRef tm)\n \t * Symbol resolution support for \"special\" functions, e.g. 
a call into an\n \t * SQL callable function.\n \t */\n+#if LLVM_VERSION_MAJOR > 14\n+\tref_gen = LLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL, NULL);\n+#else\n \tref_gen = LLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL);\n+#endif\n \tLLVMOrcJITDylibAddGenerator(LLVMOrcLLJITGetMainJITDylib(lljit), ref_gen);\n \n \treturn lljit;\ndiff --git a/src/backend/jit/llvm/llvmjit_wrap.cpp b/src/backend/jit/llvm/llvmjit_wrap.cpp\nindex 8f11cc02b2..29f21f1715 100644\n--- a/src/backend/jit/llvm/llvmjit_wrap.cpp\n+++ b/src/backend/jit/llvm/llvmjit_wrap.cpp\n@@ -27,6 +27,10 @@ extern \"C\"\n #include <llvm/Support/Host.h>\n \n #include \"jit/llvmjit.h\"\n+#ifdef USE_JITLINK\n+#include \"llvm/ExecutionEngine/JITLink/EHFrameSupport.h\"\n+#include \"llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h\"\n+#endif\n \n \n /*\n@@ -48,6 +52,19 @@ char *LLVMGetHostCPUFeatures(void) {\n \t\tfor (auto &F : HostFeatures)\n \t\t\tFeatures.AddFeature(F.first(), F.second);\n \n+#if defined(__riscv)\n+\t/* getHostCPUName returns \"generic-rv[32|64]\", which lacks all features */\n+\tFeatures.AddFeature(\"m\", true);\n+\tFeatures.AddFeature(\"a\", true);\n+\tFeatures.AddFeature(\"c\", true);\n+# if defined(__riscv_float_abi_single)\n+\tFeatures.AddFeature(\"f\", true);\n+# endif\n+# if defined(__riscv_float_abi_double)\n+\tFeatures.AddFeature(\"d\", true);\n+# endif\n+#endif\n+\n \treturn strdup(Features.getString().c_str());\n }\n #endif\n@@ -76,3 +93,21 @@ LLVMGetAttributeCountAtIndexPG(LLVMValueRef F, uint32 Idx)\n \t */\n \treturn LLVMGetAttributeCountAtIndex(F, Idx);\n }\n+\n+#ifdef USE_JITLINK\n+/*\n+ * There is no public C API to create ObjectLinkingLayer for JITLINK, create our own\n+ */\n+DEFINE_SIMPLE_CONVERSION_FUNCTIONS(llvm::orc::ExecutionSession, LLVMOrcExecutionSessionRef)\n+DEFINE_SIMPLE_CONVERSION_FUNCTIONS(llvm::orc::ObjectLayer, 
LLVMOrcObjectLayerRef)\n+\n+LLVMOrcObjectLayerRef\n+LLVMOrcCreateJitlinkObjectLinkingLayer(LLVMOrcExecutionSessionRef ES)\n+{\n+\tassert(ES && \"ES must not be null\");\n+\tauto ObjLinkingLayer = new llvm::orc::ObjectLinkingLayer(*unwrap(ES));\n+\tObjLinkingLayer->addPlugin(std::make_unique<llvm::orc::EHFrameRegistrationPlugin>(\n+\t\t*unwrap(ES), std::make_unique<llvm::jitlink::InProcessEHFrameRegistrar>()));\n+\treturn wrap(ObjLinkingLayer);\n+}\n+#endif\ndiff --git a/src/include/jit/llvmjit.h b/src/include/jit/llvmjit.h\nindex 4541f9a2c4..85a0cfe5e0 100644\n--- a/src/include/jit/llvmjit.h\n+++ b/src/include/jit/llvmjit.h\n@@ -19,6 +19,11 @@\n \n #include <llvm-c/Types.h>\n \n+#if defined(__riscv) && LLVM_VERSION_MAJOR >= 15\n+#include <llvm-c/Orc.h>\n+#define USE_JITLINK\n+/* else use legacy RTDyld */\n+#endif\n \n /*\n * File needs to be includable by both C and C++ code, and include other\n@@ -134,6 +139,10 @@ extern char *LLVMGetHostCPUFeatures(void);\n \n extern unsigned LLVMGetAttributeCountAtIndexPG(LLVMValueRef F, uint32 Idx);\n \n+#ifdef USE_JITLINK\n+extern LLVMOrcObjectLayerRef LLVMOrcCreateJitlinkObjectLinkingLayer(LLVMOrcExecutionSessionRef ES);\n+#endif\n+\n #ifdef __cplusplus\n } /* extern \"C\" */\n #endif\n-- \n2.37.2\n\n\n\n",
"msg_date": "Mon, 29 Aug 2022 15:46:22 +0800",
"msg_from": "Alex Fan <alex.fan.q@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Enable using llvm jitlink as an alternative llvm jit linker of old Rtdyld."
},
{
"msg_contents": "Hi,\n\nI am new to the postgres community and apologise for resending this as the\nprevious one didn't include patch properly and didn't cc reviewers (maybe\nthe reason it has been buried in mailing list for months)\n\nAdding to previous email, this patch exposes its own C API for creating\nObjectLinkingLayer in a similar fashion as\nLLVMOrcCreateRTDyldObjectLinkingLayerWithSectionMemoryManager since orc\ndoesn't expose it yet.\n\nThanks and really appreciate if someone can offer a review to this and help\nget it merged.\n\nCheers,\nAlex\n\n\nOn Mon, Aug 29, 2022 at 5:46 PM Alex Fan <alex.fan.q@gmail.com> wrote:\n\n> This brings the bonus of support jitting on riscv64 (included in this\n> patch)\n> and other platforms Rtdyld doesn't support, e.g. windows COFF.\n>\n> Currently, llvm doesn't expose jitlink (ObjectLinkingLayer) via C API, so\n> a wrapper is added. This also adds minor llvm 15 compat fix that is needed\n> ---\n> config/llvm.m4 | 1 +\n> src/backend/jit/llvm/llvmjit.c | 67 +++++++++++++++++++++++++--\n> src/backend/jit/llvm/llvmjit_wrap.cpp | 35 ++++++++++++++\n> src/include/jit/llvmjit.h | 9 ++++\n> 4 files changed, 108 insertions(+), 4 deletions(-)\n>\n> diff --git a/config/llvm.m4 b/config/llvm.m4\n> index 3a75cd8b4d..a31b8b304a 100644\n> --- a/config/llvm.m4\n> +++ b/config/llvm.m4\n> @@ -75,6 +75,7 @@ AC_DEFUN([PGAC_LLVM_SUPPORT],\n> engine) pgac_components=\"$pgac_components $pgac_component\";;\n> debuginfodwarf) pgac_components=\"$pgac_components $pgac_component\";;\n> orcjit) pgac_components=\"$pgac_components $pgac_component\";;\n> + jitlink) pgac_components=\"$pgac_components $pgac_component\";;\n> passes) pgac_components=\"$pgac_components $pgac_component\";;\n> native) pgac_components=\"$pgac_components $pgac_component\";;\n> perfjitevents) pgac_components=\"$pgac_components $pgac_component\";;\n> diff --git a/src/backend/jit/llvm/llvmjit.c\n> b/src/backend/jit/llvm/llvmjit.c\n> index 6c72d43beb..d8b840da8c 100644\n> --- 
a/src/backend/jit/llvm/llvmjit.c\n> +++ b/src/backend/jit/llvm/llvmjit.c\n> @@ -229,6 +229,11 @@ llvm_release_context(JitContext *context)\n> LLVMModuleRef\n> llvm_mutable_module(LLVMJitContext *context)\n> {\n> +#ifdef __riscv\n> + const char* abiname;\n> + const char* target_abi = \"target-abi\";\n> + LLVMMetadataRef abi_metadata;\n> +#endif\n> llvm_assert_in_fatal_section();\n>\n> /*\n> @@ -241,6 +246,40 @@ llvm_mutable_module(LLVMJitContext *context)\n> context->module = LLVMModuleCreateWithName(\"pg\");\n> LLVMSetTarget(context->module, llvm_triple);\n> LLVMSetDataLayout(context->module, llvm_layout);\n> +#ifdef __riscv\n> +#if __riscv_xlen == 64\n> +#ifdef __riscv_float_abi_double\n> + abiname = \"lp64d\";\n> +#elif defined(__riscv_float_abi_single)\n> + abiname = \"lp64f\";\n> +#else\n> + abiname = \"lp64\";\n> +#endif\n> +#elif __riscv_xlen == 32\n> +#ifdef __riscv_float_abi_double\n> + abiname = \"ilp32d\";\n> +#elif defined(__riscv_float_abi_single)\n> + abiname = \"ilp32f\";\n> +#else\n> + abiname = \"ilp32\";\n> +#endif\n> +#else\n> + elog(ERROR, \"unsupported riscv xlen %d\", __riscv_xlen);\n> +#endif\n> + /*\n> + * set this manually to avoid llvm defaulting to soft\n> float and\n> + * resulting in linker error: `can't link double-float\n> modules\n> + * with soft-float modules`\n> + * we could set this for TargetMachine via MCOptions, but\n> there\n> + * is no C API for it\n> + * ref:\n> https://github.com/llvm/llvm-project/blob/afa520ab34803c82587ea6759bfd352579f741b4/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp#L90\n> + */\n> + abi_metadata = LLVMMDStringInContext2(\n> + LLVMGetModuleContext(context->module),\n> + abiname, strlen(abiname));\n> + LLVMAddModuleFlag(context->module,\n> LLVMModuleFlagBehaviorOverride,\n> + target_abi, strlen(target_abi), abi_metadata);\n> +#endif\n> }\n>\n> return context->module;\n> @@ -786,6 +825,8 @@ llvm_session_initialize(void)\n> char *error = NULL;\n> char *cpu = NULL;\n> char *features = NULL;\n> + LLVMRelocMode 
reloc=LLVMRelocDefault;\n> + LLVMCodeModel codemodel=LLVMCodeModelJITDefault;\n> LLVMTargetMachineRef opt0_tm;\n> LLVMTargetMachineRef opt3_tm;\n>\n> @@ -820,16 +861,21 @@ llvm_session_initialize(void)\n> elog(DEBUG2, \"LLVMJIT detected CPU \\\"%s\\\", with features \\\"%s\\\"\",\n> cpu, features);\n>\n> +#ifdef __riscv\n> + reloc=LLVMRelocPIC;\n> + codemodel=LLVMCodeModelMedium;\n> +#endif\n> +\n> opt0_tm =\n> LLVMCreateTargetMachine(llvm_targetref, llvm_triple, cpu,\n> features,\n>\n> LLVMCodeGenLevelNone,\n> -\n> LLVMRelocDefault,\n> -\n> LLVMCodeModelJITDefault);\n> + reloc,\n> + codemodel);\n> opt3_tm =\n> LLVMCreateTargetMachine(llvm_targetref, llvm_triple, cpu,\n> features,\n>\n> LLVMCodeGenLevelAggressive,\n> -\n> LLVMRelocDefault,\n> -\n> LLVMCodeModelJITDefault);\n> + reloc,\n> + codemodel);\n>\n> LLVMDisposeMessage(cpu);\n> cpu = NULL;\n> @@ -1112,7 +1158,11 @@ llvm_resolve_symbols(LLVMOrcDefinitionGeneratorRef\n> GeneratorObj, void *Ctx,\n> LLVMOrcJITDylibRef JD,\n> LLVMOrcJITDylibLookupFlags JDLookupFlags,\n> LLVMOrcCLookupSet LookupSet,\n> size_t LookupSetSize)\n> {\n> +#if LLVM_VERSION_MAJOR > 14\n> + LLVMOrcCSymbolMapPairs symbols =\n> palloc0(sizeof(LLVMOrcCSymbolMapPair) * LookupSetSize);\n> +#else\n> LLVMOrcCSymbolMapPairs symbols =\n> palloc0(sizeof(LLVMJITCSymbolMapPair) * LookupSetSize);\n> +#endif\n> LLVMErrorRef error;\n> LLVMOrcMaterializationUnitRef mu;\n>\n> @@ -1160,6 +1210,10 @@ llvm_log_jit_error(void *ctx, LLVMErrorRef error)\n> static LLVMOrcObjectLayerRef\n> llvm_create_object_layer(void *Ctx, LLVMOrcExecutionSessionRef ES, const\n> char *Triple)\n> {\n> +#if defined(USE_JITLINK)\n> + LLVMOrcObjectLayerRef objlayer =\n> + LLVMOrcCreateJitlinkObjectLinkingLayer(ES);\n> +#else\n> LLVMOrcObjectLayerRef objlayer =\n> LLVMOrcCreateRTDyldObjectLinkingLayerWithSectionMemoryManager(ES);\n>\n> @@ -1179,6 +1233,7 @@ llvm_create_object_layer(void *Ctx,\n> LLVMOrcExecutionSessionRef ES, const char *T\n>\n>\n> 
LLVMOrcRTDyldObjectLinkingLayerRegisterJITEventListener(objlayer, l);\n> }\n> +#endif\n> #endif\n>\n> return objlayer;\n> @@ -1230,7 +1285,11 @@ llvm_create_jit_instance(LLVMTargetMachineRef tm)\n> * Symbol resolution support for \"special\" functions, e.g. a call\n> into an\n> * SQL callable function.\n> */\n> +#if LLVM_VERSION_MAJOR > 14\n> + ref_gen =\n> LLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL,\n> NULL);\n> +#else\n> ref_gen =\n> LLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL);\n> +#endif\n> LLVMOrcJITDylibAddGenerator(LLVMOrcLLJITGetMainJITDylib(lljit),\n> ref_gen);\n>\n> return lljit;\n> diff --git a/src/backend/jit/llvm/llvmjit_wrap.cpp\n> b/src/backend/jit/llvm/llvmjit_wrap.cpp\n> index 8f11cc02b2..29f21f1715 100644\n> --- a/src/backend/jit/llvm/llvmjit_wrap.cpp\n> +++ b/src/backend/jit/llvm/llvmjit_wrap.cpp\n> @@ -27,6 +27,10 @@ extern \"C\"\n> #include <llvm/Support/Host.h>\n>\n> #include \"jit/llvmjit.h\"\n> +#ifdef USE_JITLINK\n> +#include \"llvm/ExecutionEngine/JITLink/EHFrameSupport.h\"\n> +#include \"llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h\"\n> +#endif\n>\n>\n> /*\n> @@ -48,6 +52,19 @@ char *LLVMGetHostCPUFeatures(void) {\n> for (auto &F : HostFeatures)\n> Features.AddFeature(F.first(), F.second);\n>\n> +#if defined(__riscv)\n> + /* getHostCPUName returns \"generic-rv[32|64]\", which lacks all\n> features */\n> + Features.AddFeature(\"m\", true);\n> + Features.AddFeature(\"a\", true);\n> + Features.AddFeature(\"c\", true);\n> +# if defined(__riscv_float_abi_single)\n> + Features.AddFeature(\"f\", true);\n> +# endif\n> +# if defined(__riscv_float_abi_double)\n> + Features.AddFeature(\"d\", true);\n> +# endif\n> +#endif\n> +\n> return strdup(Features.getString().c_str());\n> }\n> #endif\n> @@ -76,3 +93,21 @@ LLVMGetAttributeCountAtIndexPG(LLVMValueRef F, uint32\n> Idx)\n> */\n> return LLVMGetAttributeCountAtIndex(F, Idx);\n> }\n> +\n> +#ifdef USE_JITLINK\n> +/*\n> + * There is no public C API to 
create ObjectLinkingLayer for JITLINK,\n> create our own\n> + */\n> +DEFINE_SIMPLE_CONVERSION_FUNCTIONS(llvm::orc::ExecutionSession,\n> LLVMOrcExecutionSessionRef)\n> +DEFINE_SIMPLE_CONVERSION_FUNCTIONS(llvm::orc::ObjectLayer,\n> LLVMOrcObjectLayerRef)\n> +\n> +LLVMOrcObjectLayerRef\n> +LLVMOrcCreateJitlinkObjectLinkingLayer(LLVMOrcExecutionSessionRef ES)\n> +{\n> + assert(ES && \"ES must not be null\");\n> + auto ObjLinkingLayer = new\n> llvm::orc::ObjectLinkingLayer(*unwrap(ES));\n> +\n> ObjLinkingLayer->addPlugin(std::make_unique<llvm::orc::EHFrameRegistrationPlugin>(\n> + *unwrap(ES),\n> std::make_unique<llvm::jitlink::InProcessEHFrameRegistrar>()));\n> + return wrap(ObjLinkingLayer);\n> +}\n> +#endif\n> diff --git a/src/include/jit/llvmjit.h b/src/include/jit/llvmjit.h\n> index 4541f9a2c4..85a0cfe5e0 100644\n> --- a/src/include/jit/llvmjit.h\n> +++ b/src/include/jit/llvmjit.h\n> @@ -19,6 +19,11 @@\n>\n> #include <llvm-c/Types.h>\n>\n> +#if defined(__riscv) && LLVM_VERSION_MAJOR >= 15\n> +#include <llvm-c/Orc.h>\n> +#define USE_JITLINK\n> +/* else use legacy RTDyld */\n> +#endif\n>\n> /*\n> * File needs to be includable by both C and C++ code, and include other\n> @@ -134,6 +139,10 @@ extern char *LLVMGetHostCPUFeatures(void);\n>\n> extern unsigned LLVMGetAttributeCountAtIndexPG(LLVMValueRef F, uint32\n> Idx);\n>\n> +#ifdef USE_JITLINK\n> +extern LLVMOrcObjectLayerRef\n> LLVMOrcCreateJitlinkObjectLinkingLayer(LLVMOrcExecutionSessionRef ES);\n> +#endif\n> +\n> #ifdef __cplusplus\n> } /* extern \"C\" */\n> #endif\n> --\n> 2.37.2\n>\n>",
"msg_date": "Wed, 23 Nov 2022 21:13:04 +1100",
"msg_from": "Alex Fan <alex.fan.q@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Enable using llvm jitlink as an alternative llvm jit\n linker of old Rtdyld."
},
{
"msg_contents": "On Wed, 23 Nov 2022 at 23:13, Alex Fan <alex.fan.q@gmail.com> wrote:\n> I am new to the postgres community and apologise for resending this as the previous one didn't include patch properly and didn't cc reviewers (maybe the reason it has been buried in mailing list for months)\n\nWelcome to the community!\n\nI've not looked at your patch, but I have noticed that you have\nassigned some reviewers to the CF entry yourself. Unless these people\nknow about that, this is likely a bad choice. People usually opt to\nreview patches of their own accord rather than because the patch\nauthor put their name on the reviewer list.\n\nThere are a few reasons that the patch might not be getting much attention:\n\n1. The CF entry ([1]) states that the patch is \"Waiting on Author\".\nIf you've done what you need to do, and are waiting for review, \"Needs\nreview\" might be a better state. Currently people browsing the CF app\nwill assume you need to do more work before it's worth looking at your\npatch.\n2. The CF entry already has reviewers listed. People looking for a\npatch to review are probably more likely to pick one with no reviewers\nlisted as they'd expect the existing listed reviewers to be taking\ncare of reviews for a particular patch. The latter might be unlikely\nto happen given you've assigned reviewers yourself without asking them\n(at least you didn't ask me after you put me on the list).\n3. Target version is 17. What's the reason for that? The next version is 16.\n\nI'd recommend setting the patch to \"Needs review\" and removing all the\nreviewers that have not confirmed to you that they'll review the\npatch. I'd also leave the target version blank or set it to 16.\n\nThere might be a bit more useful information for you in [2].\n\nDavid\n\n[1] https://commitfest.postgresql.org/40/3857/\n[2] https://wiki.postgresql.org/wiki/Submitting_a_Patch\n\n\n",
"msg_date": "Thu, 24 Nov 2022 00:08:20 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Enable using llvm jitlink as an alternative llvm jit\n linker of old Rtdyld."
},
{
"msg_contents": "On Thu, Nov 24, 2022 at 12:08 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 23 Nov 2022 at 23:13, Alex Fan <alex.fan.q@gmail.com> wrote:\n> > I am new to the postgres community and apologise for resending this as the previous one didn't include patch properly and didn't cc reviewers (maybe the reason it has been buried in mailing list for months)\n>\n> Welcome to the community!\n\n+1\n\nI don't know enough about LLVM or RISCV to have any strong opinions\nhere, but I have a couple of questions... It looks like we have two\ndifferent things in this patch:\n\n1. Optionally use JITLink instead of RuntimeDyld for relocation.\n From what I can tell from some quick googling, that is necessary for\nRISCV because they haven't got around to doing this yet:\n\nhttps://reviews.llvm.org/D127842\n\nIndependently of that, it seems that\nhttps://llvm.org/docs/JITLink.html is the future and RuntimeDyld will\neventually be obsolete, so one question I have is: why should we do\nthis only for riscv?\n\nYou mentioned that this change might be necessary to support COFF and\nthus Windows. I'm not a Windows user and I think it would be beyond\nmy pain threshold to try to get this working there by using CI alone,\nbut I'm just curious... wouldn't\nhttps://github.com/llvm/llvm-project/blob/main/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldCOFF.cpp\nwork for that already? (I haven't heard about anyone successfully\nusing PostgreSQL/LLVM on Windows; it would certainly be cool to hear\nsome more about what would be needed for that.)\n\n2. 
Manually adjust the CPU features and ABI/subtarget.\n\n+#if defined(__riscv)\n+ /* getHostCPUName returns \"generic-rv[32|64]\", which lacks all features */\n+ Features.AddFeature(\"m\", true);\n+ Features.AddFeature(\"a\", true);\n+ Features.AddFeature(\"c\", true);\n+# if defined(__riscv_float_abi_single)\n+ Features.AddFeature(\"f\", true);\n+# endif\n+# if defined(__riscv_float_abi_double)\n+ Features.AddFeature(\"d\", true);\n+# endif\n+#endif\n\nI'm trying to understand this, and the ABI name selection logic.\nMaybe there are two categories of features here?\n\nThe ABI bits, \"f\" and \"d\" are not just \"which instructions can I\nused\", but they also affect the ABI (I guess something like: where\nfloats go in the calling convention), and they have to match the ABI\nof the main executable to allow linking to succeed, right? Probably a\nstupid question: wouldn't the subtarget/ABI be the same as the one\nthat the LLVM library itself was compiled for (which must also match\nthe postgres executable), and doesn't it know that somewhere? I guess\nI'm confused about why we don't need to deal with this kind of manual\nsubtarget selection on any other architecture: for PPC it\nautomatically knows whether to be big endian/little endian, 32 or 64\nbit, etc.\n\nThen for \"m\", \"a\", \"c\", I guess these are code generation options -- I\nthink \"c\" is compressed instructions for example? Can we get a\ncomment to say what they are? Why do you think that all RISCV chips\nhave these features? 
Perhaps these are features that are part of some\nkind of server chip profile (ie features not present in a tiny\nmicrocontroller chip found in a toaster, but expected in any system\nthat would actually run PostgreSQL) -- in which case can we get a\nreference to explain that?\n\nI remembered the specific reason why we have that\nLLVMGethostCPUFeatures() call: it's because the list of default\nfeatures that would apply otherwise based on CPU \"name\" alone turned\nout to assume that all x86 chips had AVX, but some low end parts\ndon't, so we have to check for AVX etc presence that way. But your\npatch seems to imply that LLVM is not able to get features reliably\nfor RISCV -- why not, immaturity or technical reason why it can't?\n\n+ assert(ES && \"ES must not be null\");\n\nWe use our own Assert() macro (capital A).\n\n\n",
"msg_date": "Thu, 15 Dec 2022 11:59:39 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Enable using llvm jitlink as an alternative llvm jit\n linker of old Rtdyld."
},
{
"msg_contents": "Hi,\n\nOn 2022-11-23 21:13:04 +1100, Alex Fan wrote:\n> > @@ -241,6 +246,40 @@ llvm_mutable_module(LLVMJitContext *context)\n> > context->module = LLVMModuleCreateWithName(\"pg\");\n> > LLVMSetTarget(context->module, llvm_triple);\n> > LLVMSetDataLayout(context->module, llvm_layout);\n> > +#ifdef __riscv\n> > +#if __riscv_xlen == 64\n> > +#ifdef __riscv_float_abi_double\n> > + abiname = \"lp64d\";\n> > +#elif defined(__riscv_float_abi_single)\n> > + abiname = \"lp64f\";\n> > +#else\n> > + abiname = \"lp64\";\n> > +#endif\n> > +#elif __riscv_xlen == 32\n> > +#ifdef __riscv_float_abi_double\n> > + abiname = \"ilp32d\";\n> > +#elif defined(__riscv_float_abi_single)\n> > + abiname = \"ilp32f\";\n> > +#else\n> > + abiname = \"ilp32\";\n> > +#endif\n> > +#else\n> > + elog(ERROR, \"unsupported riscv xlen %d\", __riscv_xlen);\n> > +#endif\n> > + /*\n> > + * set this manually to avoid llvm defaulting to soft\n> > float and\n> > + * resulting in linker error: `can't link double-float\n> > modules\n> > + * with soft-float modules`\n> > + * we could set this for TargetMachine via MCOptions, but\n> > there\n> > + * is no C API for it\n> > + * ref:\n\nI think this is something that should go into the llvm code, rather than\npostgres.\n\n\n> > @@ -820,16 +861,21 @@ llvm_session_initialize(void)\n> > elog(DEBUG2, \"LLVMJIT detected CPU \\\"%s\\\", with features \\\"%s\\\"\",\n> > cpu, features);\n> >\n> > +#ifdef __riscv\n> > + reloc=LLVMRelocPIC;\n> > + codemodel=LLVMCodeModelMedium;\n> > +#endif\n\nSame.\n\n\n\n\n> > +#ifdef USE_JITLINK\n> > +/*\n> > + * There is no public C API to create ObjectLinkingLayer for JITLINK,\n> > create our own\n> > + */\n> > +DEFINE_SIMPLE_CONVERSION_FUNCTIONS(llvm::orc::ExecutionSession,\n> > LLVMOrcExecutionSessionRef)\n> > +DEFINE_SIMPLE_CONVERSION_FUNCTIONS(llvm::orc::ObjectLayer,\n> > LLVMOrcObjectLayerRef)\n\nI recommend proposing a patch for adding such an API to LLVM.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 25 Dec 2022 04:01:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Enable using llvm jitlink as an alternative llvm jit\n linker of old Rtdyld."
},
{
"msg_contents": "> why should we do this only for riscv?\nI originally considered riscv because I only have amd64 and riscv hardware\nfor testing. But afaik, JITLink currently supports arm64 and x86-64 (ELF &\nMacOS), riscv (both 64bit and 32bit, ELF). i386 seems also supported. No\nsupport for ppc or mips exist yet. I am most familiar with riscv and its\nsupport has been quite stable. I also know Julia folks have been using\norcjit only for arm64 on MacOS for quite a while, but stayed in mcjit for\nother platforms. clasp <https://github.com/clasp-developers/clasp> also\nuses orcjit on x86 (ELF & macos) heavily. I'd say It should be safe to\nswitch on x86 and arm64 also.\n\n> that is necessary for RISCV because they haven't got around to doing this\nyet\nI doubt if the runtimedylib patch will eventually be accepted since mcjit\nwith its runtimedylib is in maintenance mode. Users are generally suggested\nto switch to orcjit.\n\nI am not familiar with Windows either. I realise from your link,\nRuntimeDyld does have windows COFF support. I have clearly misread\nsomething from discord.\nThere is a recent talk <https://www.youtube.com/watch?v=_5_gm58sQIg> about\njitlink windows support that covers a lot of details. I think the situation\nis that RuntimeDyld support for COFF is not stable and only limited to\nlarge code model (RuntimeDyld's constraint), so people tend to use ELF\nformat on windows with RuntimeDyld. JITLINK's recent COFF support is more\nstable and allows small code model.\n\nThe logic of abi and extensions is indeed confusing and llvm backend also\nhas some nuances. I will try to explain it to my best and am very happy to\nclarify more based on my knowledge if there are more questions.\n\n 'm' 'i' 'a' 'f' 'd' are all extension features and usually are grouped\ntogether as 'g' for general. 
Along with 'c' for compression (and two tiny\nextension sets Zicsr and Zifencei splitted from 'i' since isa spec\n20191213, llvm will still default enable them along with 'i', so not\nrelevant to us) is named rv64gc for 64 bit machine. rv64gc is generally the\nrecommended basic set for linux capable 64bit machines. Some embedded cores\nwithout +f +d still work with Linux, but are rare and unlikely want\npostgresql.\nabi_name like 'lp64' 'lp64d' is considered independent from cpu extensions.\nYou can have a \"+f +d\" machines with lp64 abi, meaning the function body\ncan have +f +d instructions and registers, but the function signature\ncannot. To set abi explicitly for llvm backend, we need to pass\nMCOptions.ABIName or a module metadata target-abi.\n\nIf abi is missing, before llvm-15, the backend defaults to lp64 on 64bit\nplatform, ignoring any float extension enabled or not. The consensus seemed\nto be backend should be explicitly configured. After llvm-15, specifically this\ncommit <https://reviews.llvm.org/D118333>, it chooses lp64d if +d is\npresent to align with clang default. I mostly test on llvm-14 because\nJITLINK riscv support is complete already except some minor fixes, and\nnotice until now the newly change. But because I test on Gentoo, it has abi\nlp64 build on riscv64gc, so if abi is not configured this way, I would end\nup with lp64d enabled by +d extension from riscv64`g`c on a lp64 build.\n\n> your patch seems to imply that LLVM is not able to get features reliably\n> for RISCV -- why not, immaturity or technical reason why it can't?\nImmaturity. Actually, unimplemented for riscv as you can check here\n<https://github.com/llvm/llvm-project/blob/b5edd522d195447e3ae16f95c5821762edbf815a/llvm/lib/Support/Host.cpp#L1836>.\nBecause gethostcpuname usually returns generic or generic-rv64, feature\nlist for these is basically empty except 'i'. 
I may work out a patch for\nllvm later.\n\n> wouldn't the subtarget/ABI be the same as the one that the LLVM library\nitself was compiled for\nllvm is inherently a multitarget & cross platform compiler backend. It is\ncapable of all subtargets & features for enabled platform(s). The target\ntriple works like you said. There is LLVM_DEFAULT_TARGET_TRIPLE that sets\nthe default to native if no target is specified in runtime, so default\ntriple is reliable. But cpuname, abi, extensions don't follow this logic.\nThe llvm riscv backend devs expect these to be configured explicitly\n(although there are some default and dependencies, and frontend like clang\nalso have default). Therefore gethostcpuname is needed and feature\nextensions are derived from known cpuname. In case cpuname from\ngethostcpuname is not enough, gethostcpufeatures is needed like your\nexample of AVX extension.\n\n> why we don't need to deal with this kind of manual subtarget selection on\nany other architecture\nppc sets default abi here\n<https://github.com/llvm/llvm-project/blob/04a23cb2191f865c51fce087b9f3083ac17ae10e/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp#L235>,\nso there is no abi issue. Big end or little end is encoded in target triple\nlike ppc64 (big endian), ppc64le (little endian), and a recent riscv64be\npatch <https://reviews.llvm.org/D128612>. 
I guess that is why there are no\nendian issues.",
"msg_date": "Fri, 6 Jan 2023 10:19:55 +1100",
"msg_from": "Alex Fan <alex.fan.q@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Enable using llvm jitlink as an alternative llvm jit\n linker of old Rtdyld."
},
{
"msg_contents": "There is discussion in\nhttps://github.com/riscv-non-isa/riscv-toolchain-conventions/issues/13 to\nchange the abi default, but not much attention for some time. The consensus\nseems to be set the abi and extension explicitly.\n\n> I recommend proposing a patch for adding such an API to LLVM.\n\nI would like to try some time later. Jitlink allows lots of flexibility to\ninspect each linking process, I feel myself don't know enough use cases to\npropose a good enough c-abi for it.\n\nThe thing I am thinking is these patch to llvm will take some time to land\nespecially for abi and extension default. But jitlink and orc for riscv is\nvery mature since llvm-15, and even llvm-14 with two minor patches. It\nwould be good to have these bits, though ugly, so that postgresql jit can\nwork with llvm-15 as most distros are still moving to it.\n\ncheers,\nAlex Fan\n\nOn Sun, Dec 25, 2022 at 11:02 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-11-23 21:13:04 +1100, Alex Fan wrote:\n> > > @@ -241,6 +246,40 @@ llvm_mutable_module(LLVMJitContext *context)\n> > > context->module = LLVMModuleCreateWithName(\"pg\");\n> > > LLVMSetTarget(context->module, llvm_triple);\n> > > LLVMSetDataLayout(context->module, llvm_layout);\n> > > +#ifdef __riscv\n> > > +#if __riscv_xlen == 64\n> > > +#ifdef __riscv_float_abi_double\n> > > + abiname = \"lp64d\";\n> > > +#elif defined(__riscv_float_abi_single)\n> > > + abiname = \"lp64f\";\n> > > +#else\n> > > + abiname = \"lp64\";\n> > > +#endif\n> > > +#elif __riscv_xlen == 32\n> > > +#ifdef __riscv_float_abi_double\n> > > + abiname = \"ilp32d\";\n> > > +#elif defined(__riscv_float_abi_single)\n> > > + abiname = \"ilp32f\";\n> > > +#else\n> > > + abiname = \"ilp32\";\n> > > +#endif\n> > > +#else\n> > > + elog(ERROR, \"unsupported riscv xlen %d\", __riscv_xlen);\n> > > +#endif\n> > > + /*\n> > > + * set this manually to avoid llvm defaulting to soft\n> > > float and\n> > > + * resulting in linker error: `can't link 
double-float\n> > > modules\n> > > + * with soft-float modules`\n> > > + * we could set this for TargetMachine via MCOptions,\n> but\n> > > there\n> > > + * is no C API for it\n> > > + * ref:\n>\n> I think this is something that should go into the llvm code, rather than\n> postgres.\n>\n>\n> > > @@ -820,16 +861,21 @@ llvm_session_initialize(void)\n> > > elog(DEBUG2, \"LLVMJIT detected CPU \\\"%s\\\", with features\n> \\\"%s\\\"\",\n> > > cpu, features);\n> > >\n> > > +#ifdef __riscv\n> > > + reloc=LLVMRelocPIC;\n> > > + codemodel=LLVMCodeModelMedium;\n> > > +#endif\n>\n> Same.\n>\n>\n>\n>\n> > > +#ifdef USE_JITLINK\n> > > +/*\n> > > + * There is no public C API to create ObjectLinkingLayer for JITLINK,\n> > > create our own\n> > > + */\n> > > +DEFINE_SIMPLE_CONVERSION_FUNCTIONS(llvm::orc::ExecutionSession,\n> > > LLVMOrcExecutionSessionRef)\n> > > +DEFINE_SIMPLE_CONVERSION_FUNCTIONS(llvm::orc::ObjectLayer,\n> > > LLVMOrcObjectLayerRef)\n>\n> I recommend proposing a patch for adding such an API to LLVM.\n>\n>\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nThere is discussion in https://github.com/riscv-non-isa/riscv-toolchain-conventions/issues/13\n to change the abi default, but not much attention for some time. The \nconsensus seems to be set the abi and extension explicitly.> I recommend proposing a patch for adding such an API to LLVM.I\n would like to try some time later. Jitlink allows lots of flexibility \nto inspect each linking process, I feel myself don't know enough use \ncases to propose a good enough c-abi for it.The\n thing I am thinking is these patch to llvm will take some time to land \nespecially for abi and extension default. But jitlink and orc for riscv \nis very mature since llvm-15, and even llvm-14 with two minor patches. 
\nIt would be good to have these bits, though ugly, so that \npostgresql jit can work with llvm-15 as most distros are still moving to it.cheers,Alex FanOn Sun, Dec 25, 2022 at 11:02 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2022-11-23 21:13:04 +1100, Alex Fan wrote:\n> > @@ -241,6 +246,40 @@ llvm_mutable_module(LLVMJitContext *context)\n> > context->module = LLVMModuleCreateWithName(\"pg\");\n> > LLVMSetTarget(context->module, llvm_triple);\n> > LLVMSetDataLayout(context->module, llvm_layout);\n> > +#ifdef __riscv\n> > +#if __riscv_xlen == 64\n> > +#ifdef __riscv_float_abi_double\n> > + abiname = \"lp64d\";\n> > +#elif defined(__riscv_float_abi_single)\n> > + abiname = \"lp64f\";\n> > +#else\n> > + abiname = \"lp64\";\n> > +#endif\n> > +#elif __riscv_xlen == 32\n> > +#ifdef __riscv_float_abi_double\n> > + abiname = \"ilp32d\";\n> > +#elif defined(__riscv_float_abi_single)\n> > + abiname = \"ilp32f\";\n> > +#else\n> > + abiname = \"ilp32\";\n> > +#endif\n> > +#else\n> > + elog(ERROR, \"unsupported riscv xlen %d\", __riscv_xlen);\n> > +#endif\n> > + /*\n> > + * set this manually to avoid llvm defaulting to soft\n> > float and\n> > + * resulting in linker error: `can't link double-float\n> > modules\n> > + * with soft-float modules`\n> > + * we could set this for TargetMachine via MCOptions, but\n> > there\n> > + * is no C API for it\n> > + * ref:\n\nI think this is something that should go into the llvm code, rather than\npostgres.\n\n\n> > @@ -820,16 +861,21 @@ llvm_session_initialize(void)\n> > elog(DEBUG2, \"LLVMJIT detected CPU \\\"%s\\\", with features \\\"%s\\\"\",\n> > cpu, features);\n> >\n> > +#ifdef __riscv\n> > + reloc=LLVMRelocPIC;\n> > + codemodel=LLVMCodeModelMedium;\n> > +#endif\n\nSame.\n\n\n\n\n> > +#ifdef USE_JITLINK\n> > +/*\n> > + * There is no public C API to create ObjectLinkingLayer for JITLINK,\n> > create our own\n> > + */\n> > +DEFINE_SIMPLE_CONVERSION_FUNCTIONS(llvm::orc::ExecutionSession,\n> > LLVMOrcExecutionSessionRef)\n> > 
+DEFINE_SIMPLE_CONVERSION_FUNCTIONS(llvm::orc::ObjectLayer,\n> > LLVMOrcObjectLayerRef)\n\nI recommend proposing a patch for adding such an API to LLVM.\n\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 6 Jan 2023 11:07:38 +1100",
"msg_from": "Alex Fan <alex.fan.q@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Enable using llvm jitlink as an alternative llvm jit\n linker of old Rtdyld."
},
{
"msg_contents": "Hi Alex,\n\nJITLink came back onto the radar screen: we see that LLVM 20 will\ndeprecate the RuntimeDyld link layer that we're using. JITLink would\nin fact fix the open bug report we have about LLVM crashing all over\nthe place on ARM[1], and I can see that would be quite easy to do what\nyou showed, but we can't use that solution for that problem because it\nonly works on LLVM 15+ (and maybe doesn't even support all the\narchitectures until later releases, I haven't followed the details).\nSo we'll probably finish up backporting that fix for RuntimeDyld,\nwhich seems like it's going to work OK (thanks Anthonin, CCd, for\ndiagnosing that). That's all fine and good, but if my crystal ball is\noperating correctly, fairly soon we'll have a situation where RHEL is\nshipping versions that *only* support JITLink, while other distros are\nstill shipping versions that need RuntimeDyld because they don't yet\nhave JITLink or it is not mature enough yet for all architectures. So\nwe'll need to support both for a while. That's all fine, and I can\nsee that it's going to be pretty easy to do, it's mostly just\nLLVMOrcCreateThis() or LLVMOrcCreateThat() with some #ifdef around it,\njob done.\n\nThe question I have is: is someone looking into getting the C API we\nneed for that into the LLVM main branch (LLVM 20-to-be)? I guess I\nwould prefer to be able to use that, rather than adding more of our\nown C++ wrapper code into our tree, if we can, and it seems like now\nwould be a good time to get ahead of that.\n\n[1] https://www.postgresql.org/message-id/flat/CAO6_Xqr63qj%3DSx7HY6ZiiQ6R_JbX%2B-p6sTPwDYwTWZjUmjsYBg%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 30 Aug 2024 17:19:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Enable using llvm jitlink as an alternative llvm jit\n linker of old Rtdyld."
}
] |
[
{
"msg_contents": "Hello,\n\nwhile reading the postgres code, occasionally I see a little bit of \ninconsistency in the comments after #else (and corresponding #endif).\n\nIn some places #else/endif's comment expresses condition for else block \nto be active:\n#ifdef HAVE_UUID_OSSP\n...\n#else /* !HAVE_UUID_OSSP */\n...\n#endif /* HAVE_UUID_OSSP */\n\nand in others -- just the opposite:\n\n#ifdef SHA2_UNROLL_TRANSFORM\n...\n#else /* SHA2_UNROLL_TRANSFORM */\n...\n#endif /* SHA2_UNROLL_TRANSFORM */\n\nAlso, #endif comment after #else might expresses condition for else \nblock to be active:\n#ifdef USE_ICU\n...\n#else /* !USE_ICU */\n...\n#endif /* !USE_ICU */\n\nor it might be just the opposite, like in HAVE_UUID_OSSP and \nSHA2_UNROLL_TRANSFORM examples above.\n\n\nI propose making them more consistent. Would the following guidelines be \nacceptable?\n\n\n1. #else/#elif/#endif's comment, if present, should reflect the\ncondition of the #else/#elif block as opposed to always being a copy\nof #if/ifdef/ifndef condition.\n\ne.g. prefer this:\n#if LLVM_VERSION_MAJOR > 11\n...\n#else /* LLVM_VERSION_MAJOR <= 11 */\n...\n#endif /* LLVM_VERSION_MAJOR <= 11 */\n\nover this:\n\n#if LLVM_VERSION_MAJOR > 11\n...\n#else /* LLVM_VERSION_MAJOR > 11 */\n...\n#endif /* LLVM_VERSION_MAJOR > 11 */\n\n\n2. In #else/#elif/#endif comments, prefer A to defined(A).\n\nE.g. prefer this:\n#endif /* DMETAPHONE_MAIN */\nover\n#endif /* defined DMETAPHONE_MAIN */\n\nAnd this:\n#else /* !_MSC_VER */\nover\n#else /* !defined(_MSC_VER) */\n\n\n3. Textual hand-crafted condition comments are perfectly fine.\nLike this:\n#else /* no ppoll(), so use select() */\n\n\n4. #else/#endif condition comment, if present, should reflect the\n*effective* condition, i.e. condition taking into account previous\n#if/#elif-s.\n\nE.g. 
do this:\n#if defined(HAVE_INT128)\n...\n#elif defined(HAS_64_BIT_INTRINSICS)\n...\n#else /* !HAVE_INT128 && !HAS_64_BIT_INTRINSICS */\n...\n#endif /* !HAVE_INT128 && !HAS_64_BIT_INTRINSICS */\n\n\n5. Comment of the form \"!A && !B\", if deemed complicated enough, may\nalso be expressed as \"neither A nor B\" for easier reading.\n\nExample:\n#if (defined(HAVE_LANGINFO_H) && defined(CODESET)) || defined(WIN32)\n...\n#else /* neither (HAVE_LANGINFO_H && CODESET) \nnor WIN32 */\n...\n#endif /* neither (HAVE_LANGINFO_H && CODESET) \nnor WIN32 */\n\n\n6. Use \"!\" as opposed to \"not\" to be consistent. E.g. do this:\n#ifdef LOCK_DEBUG\n...\n#else /* !LOCK_DEBUG */\n...\n#endif /* !LOCK_DEBUG */\n\nas opposed to:\n\n#ifdef LOCK_DEBUG\n...\n#else /* not LOCK_DEBUG */\n...\n#endif /* not LOCK_DEBUG */\n\n\nThe draft of proposed changes is attached as\n0001-Make-else-endif-comments-more-consistent.patch\nIn the patch I've also cleaned up some minor things, like removing\noccasional \"//\" comments within \"/* */\" ones.\n\nAny thoughts?\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru",
"msg_date": "Mon, 29 Aug 2022 12:38:45 +0300",
"msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Make #else/#endif comments more consistent"
},
{
"msg_contents": "On 29.08.22 11:38, Anton Voloshin wrote:\n> I propose making them more consistent. Would the following guidelines be \n> acceptable?\n\nI usually try to follow the guidelines in \n<https://www.gnu.org/prep/standards/html_node/Comments.html>, which \npretty much match what you are proposing.\n\n> And this:\n> #else /* !_MSC_VER */\n> over\n> #else /* !defined(_MSC_VER) */\n\nNote that for _MSC_VER in particular there is some trickiness: We \ngenerally use it to tell apart different MSVC compiler versions. But it \nis not present with MinGW. So !_MSC_VER and !defined(_MSC_VER) have \ndifferent meanings. So in this particular case, more precision in the \ncomments might be better.\n\n\n",
"msg_date": "Mon, 29 Aug 2022 13:50:09 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Make #else/#endif comments more consistent"
},
{
"msg_contents": "On 29/08/2022 14:50, Peter Eisentraut wrote:\n> I usually try to follow the guidelines in \n> <https://www.gnu.org/prep/standards/html_node/Comments.html>, which \n> pretty much match what you are proposing.\n\nThank you for the link, it's a useful one and the wording is better than \nmine.\n\n> Note that for _MSC_VER in particular there is some trickiness: We \n> generally use it to tell apart different MSVC compiler versions.\n\nThat's certainly true in branches <= 15, but in master, to my surprise, \nI don't see any numerical comparisons of _MSC_VER since the recent \n6203583b7.\n\nI'm not sure explicit !defined(_MSC_VER) is all that more clear\nthan !_MSC_VER in the commentary. After all, we would probably\nnever (?) see an actual\n#if (!_MSC_VER)\nin a real code.\n\nSo I have mixed feelings on forcing define() on _MSC_VER, but if you \ninsist, I don't mind much either way.\n\nWhat about other changes? Are there any obviously wrong or missed ones?\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru\n\n\n\n",
"msg_date": "Mon, 29 Aug 2022 15:27:18 +0300",
"msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Make #else/#endif comments more consistent"
}
] |
[
{
"msg_contents": "Hi,\nI added a tiny test to pg_checksum for coverage.\nI checked that it improves test coverage from 77.9% to 87.7%.\n\n---\nRegards,\nDongWook Lee.",
"msg_date": "Mon, 29 Aug 2022 20:26:56 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_checksum: add test for coverage"
},
{
"msg_contents": "> On 29 Aug 2022, at 13:26, Dong Wook Lee <sh95119@gmail.com> wrote:\n\n> I add a tiny test to pg_checksum for coverage.\n> I checked it improve test coverage 77.9% -> 87.7%.\n\n+# Checksums are verified if --progress arguments are specified\n+command_ok(\n+\t[ 'pg_checksums', '--progress', '-D', $pgdata ],\n+\t\"verifies checksums as default action with --progress option\");\n+\n+# Checksums are verified if --verbose arguments are specified\n+command_ok(\n+\t[ 'pg_checksums', '--verbose', '-D', $pgdata ],\n+\t\"verifies checksums as default action with --verbose option\");\n\nThis isn't really true, --progress or --verbose doesn't enable checksum\nverification, it just happens to be the default and thus is invoked when called\nwithout a mode parameter.\n\nAs written these tests aren't providing more coverage, they run more code but\nthey don't ensure that the produced output is correct. If you write these\ntests with validation on the output they will be a lot more interesting.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 29 Aug 2022 13:46:25 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_checksum: add test for coverage"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 01:46:25PM +0200, Daniel Gustafsson wrote:\n> As written these tests aren't providing more coverage, they run more code but\n> they don't ensure that the produced output is correct. If you write these\n> tests with validation on the output they will be a lot more interesting.\n\nDongWook, if you are able to reply back to this feedback, please feel\nfree to send a new patch. For now, I have marked this CF entry as\nreturned with feedback.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:36:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_checksum: add test for coverage"
}
] |
[
{
"msg_contents": "Buildfarm member mamba (NetBSD-current on prairiedog's former hardware)\nhas failed repeatedly since I set it up. I have now run the cause of\nthat to ground [1], and here's what's happening: if the postmaster\nreceives a signal just before it first waits at the select() in\nServerLoop, it can self-deadlock. During the postmaster's first use of\nselect(), the dynamic loader needs to resolve the PLT branch table entry\nthat the core executable uses to reach select() in libc.so, and it locks\nthe loader's internal data structures while doing that. If we enter\na signal handler while the lock is held, and the handler needs to do\nanything that also requires the lock, the postmaster is frozen.\n\nThe probability of this happening seems remarkably small, since there's\nonly one narrow window per postmaster lifetime, and there's just not\nthat many potential signal causes active at that time either.\nNonetheless I have traces showing it happening (1) because we receive\nSIGCHLD for startup process termination and (2) because we receive\nSIGUSR1 from the startup process telling us to start walreceivers.\nI guess that mamba's slow single-CPU hardware interacts with the\nNetBSD scheduler in just the right way to make it more probable than\nyou'd think. On typical modern machines, the postmaster would almost\ncertainly manage to wait before the startup process is able to signal\nit. Still, \"almost certainly\" is not \"certainly\".\n\nThe attached patch seems to fix the problem, by forcing resolution of\nthe PLT link before we unblock signals. It depends on the assumption\nthat another select() call appearing within postmaster.c will share\nthe same PLT link, which seems pretty safe.\n\nI'd originally intended to make this code \"#ifdef __NetBSD__\",\nbut on looking into the FreeBSD sources I find much the same locking\nlogic in their dynamic loader, and now I'm wondering if such behavior\nisn't pretty standard. 
The added calls should have negligible cost,\nso it doesn't seem unreasonable to do them everywhere.\n\n(Of course, a much better answer is to get out of the business of\ndoing nontrivial stuff in signal handlers. But even if we get that\ndone soon, we'd surely not back-patch it.)\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://gnats.netbsd.org/56979",
"msg_date": "Mon, 29 Aug 2022 15:43:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 03:43:55PM -0400, Tom Lane wrote:\n> I'd originally intended to make this code \"#ifdef __NetBSD__\",\n> but on looking into the FreeBSD sources I find much the same locking\n> logic in their dynamic loader, and now I'm wondering if such behavior\n> isn't pretty standard.\n\nI doubt it's standard. POSIX specifies select() to be async-signal-safe.\nThis NetBSD bug makes select() not be async-signal-safe.\n\n> The added calls should have negligible cost,\n> so it doesn't seem unreasonable to do them everywhere.\n\nAgreed. I would make the comment mention the NetBSD version that prompted\nthis, so we have a better chance of removing the workaround in a few decades.\n\n\n",
"msg_date": "Mon, 29 Aug 2022 19:42:01 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 7:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Buildfarm member mamba (NetBSD-current on prairiedog's former hardware)\n> has failed repeatedly since I set it up. I have now run the cause of\n> that to ground [1], and here's what's happening: if the postmaster\n> receives a signal just before it first waits at the select() in\n> ServerLoop, it can self-deadlock. During the postmaster's first use of\n> select(), the dynamic loader needs to resolve the PLT branch table entry\n> that the core executable uses to reach select() in libc.so, and it locks\n> the loader's internal data structures while doing that. If we enter\n> a signal handler while the lock is held, and the handler needs to do\n> anything that also requires the lock, the postmaster is frozen.\n\n. o O ( pselect() wouldn't have this problem, but it's slightly too\nnew for the back branches that didn't yet require SUSv3... drat )\n\n> I'd originally intended to make this code \"#ifdef __NetBSD__\",\n> but on looking into the FreeBSD sources I find much the same locking\n> logic in their dynamic loader, and now I'm wondering if such behavior\n> isn't pretty standard. The added calls should have negligible cost,\n> so it doesn't seem unreasonable to do them everywhere.\n\nFWIW I suspect FreeBSD can't break like this in a program linked with\nlibthr, because it has a scheme for deferring signals while the\nruntime linker holds locks. _rtld_bind calls _thr_rtld_rlock_acquire,\nwhich uses the THR_CRITICAL_ENTER mechanism to cause thr_sighandler to\ndefer until release. For a non-thread program, I'm not entirely sure,\nbut I don't think the fork() problem exists there. (Could be wrong,\nbased on a quick look.)\n\n> (Of course, a much better answer is to get out of the business of\n> doing nontrivial stuff in signal handlers. But even if we get that\n> done soon, we'd surely not back-patch it.)\n\n+1\n\n\n",
"msg_date": "Wed, 31 Aug 2022 00:16:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 8:17 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> FWIW I suspect FreeBSD can't break like this in a program linked with\n> libthr, because it has a scheme for deferring signals while the\n> runtime linker holds locks. _rtld_bind calls _thr_rtld_rlock_acquire,\n> which uses the THR_CRITICAL_ENTER mechanism to cause thr_sighandler to\n> defer until release. For a non-thread program, I'm not entirely sure,\n> but I don't think the fork() problem exists there. (Could be wrong,\n> based on a quick look.)\n\nWell that seems a bit ironic, considering that Tom has worried in the\npast that linking with threading libraries would break stuff.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 08:26:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 12:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Aug 30, 2022 at 8:17 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > FWIW I suspect FreeBSD can't break like this in a program linked with\n> > libthr, because it has a scheme for deferring signals while the\n> > runtime linker holds locks. _rtld_bind calls _thr_rtld_rlock_acquire,\n> > which uses the THR_CRITICAL_ENTER mechanism to cause thr_sighandler to\n> > defer until release. For a non-thread program, I'm not entirely sure,\n> > but I don't think the fork() problem exists there. (Could be wrong,\n> > based on a quick look.)\n>\n> Well that seems a bit ironic, considering that Tom has worried in the\n> past that linking with threading libraries would break stuff.\n\nHah. To clarify, non-thread builds don't have that exact fork()\nproblem, but it turns out they do have a related state clobbering\nproblem elsewhere, which I've reported.\n\n\n",
"msg_date": "Wed, 31 Aug 2022 01:34:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-29 15:43:55 -0400, Tom Lane wrote:\n> Buildfarm member mamba (NetBSD-current on prairiedog's former hardware)\n> has failed repeatedly since I set it up. I have now run the cause of\n> that to ground [1], and here's what's happening: if the postmaster\n> receives a signal just before it first waits at the select() in\n> ServerLoop, it can self-deadlock. During the postmaster's first use of\n> select(), the dynamic loader needs to resolve the PLT branch table entry\n> that the core executable uses to reach select() in libc.so, and it locks\n> the loader's internal data structures while doing that. If we enter\n> a signal handler while the lock is held, and the handler needs to do\n> anything that also requires the lock, the postmaster is frozen.\n\nIck.\n\n\n> The attached patch seems to fix the problem, by forcing resolution of\n> the PLT link before we unblock signals. It depends on the assumption\n> that another select() call appearing within postmaster.c will share\n> the same PLT link, which seems pretty safe.\n\nHm, what stops the same problem from occuring with other functions?\n\nPerhaps it'd be saner to default to building with -Wl,-z,now? That should fix\nthe problem too, right (and if we combine it with relro, it'd be a security\nimprovement to boot).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Aug 2022 10:17:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-29 15:43:55 -0400, Tom Lane wrote:\n>> The attached patch seems to fix the problem, by forcing resolution of\n>> the PLT link before we unblock signals. It depends on the assumption\n>> that another select() call appearing within postmaster.c will share\n>> the same PLT link, which seems pretty safe.\n\n> Hm, what stops the same problem from occuring with other functions?\n\nThese few lines are the only part of the postmaster that runs with\nsignals enabled and unblocked.\n\n> Perhaps it'd be saner to default to building with -Wl,-z,now? That should fix\n> the problem too, right (and if we combine it with relro, it'd be a security\n> improvement to boot).\n\nHm. Not sure if that works on NetBSD, but I'll check it out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 13:24:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-30 13:24:39 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Perhaps it'd be saner to default to building with -Wl,-z,now? That should fix\n> > the problem too, right (and if we combine it with relro, it'd be a security\n> > improvement to boot).\n> \n> Hm. Not sure if that works on NetBSD, but I'll check it out.\n\nFWIW, it's a decently (well over 10 years) old thing I think. And it's documented in\nthe netbsd ld manpage and their packaging guide (albeit indirectly, with their\ntooling doing the work of specifying the flags):\nhttps://www.netbsd.org/docs/pkgsrc/hardening.html#hardening.audit.relrofull\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Aug 2022 10:41:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-30 13:24:39 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Perhaps it'd be saner to default to building with -Wl,-z,now? That should fix\n>>> the problem too, right (and if we combine it with relro, it'd be a security\n>>> improvement to boot).\n\n>> Hm. Not sure if that works on NetBSD, but I'll check it out.\n\n> FWIW, it's a decently (well over 10 years) old thing I think. And it's documented in\n> the netbsd ld manpage and their packaging guide (albeit indirectly, with their\n> tooling doing the work of specifying the flags):\n> https://www.netbsd.org/docs/pkgsrc/hardening.html#hardening.audit.relrofull\n\nIt does appear that they use GNU ld, and I've just finished confirming\nthat each of those switches has the expected effects on my PPC box.\nSo yeah, this looks like a better answer.\n\nDo we want to install this just for NetBSD, or more widely?\nI think we'd better back-patch it for NetBSD, so I'm inclined\nto be conservative about the change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 14:07:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-30 14:07:41 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-30 13:24:39 -0400, Tom Lane wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >>> Perhaps it'd be saner to default to building with -Wl,-z,now? That should fix\n> >>> the problem too, right (and if we combine it with relro, it'd be a security\n> >>> improvement to boot).\n> \n> >> Hm. Not sure if that works on NetBSD, but I'll check it out.\n> \n> > FWIW, it's a decently (well over 10 years) old thing I think. And it's documented in\n> > the netbsd ld manpage and their packaging guide (albeit indirectly, with their\n> > tooling doing the work of specifying the flags):\n> > https://www.netbsd.org/docs/pkgsrc/hardening.html#hardening.audit.relrofull\n> \n> It does appear that they use GNU ld, and I've just finished confirming\n> that each of those switches has the expected effects on my PPC box.\n> So yeah, this looks like a better answer.\n\nCool.\n\n\n> Do we want to install this just for NetBSD, or more widely?\n> I think we'd better back-patch it for NetBSD, so I'm inclined\n> to be conservative about the change.\n\nIt's likely a good idea to enable it everywhere applicable, but I agree that\nwe shouldn't unnecessarily do so in the backbranches. So I'd be inclined to\nadd it to the netbsd template for the backbranches.\n\nFor HEAD I can see putting it into all the applicable templates, adding an\nAC_LINK_IFELSE() test, or just putting it into the meson stuff.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Aug 2022 11:20:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-30 14:07:41 -0400, Tom Lane wrote:\n>> Do we want to install this just for NetBSD, or more widely?\n>> I think we'd better back-patch it for NetBSD, so I'm inclined\n>> to be conservative about the change.\n\n> It's likely a good idea to enable it everywhere applicable, but I agree that\n> we shouldn't unnecessarily do so in the backbranches. So I'd be inclined to\n> add it to the netbsd template for the backbranches.\n\n> For HEAD I can see putting it into all the applicable templates, adding an\n> AC_LINK_IFELSE() test, or just putting it into the meson stuff.\n\nFor the moment I'll stick it into the netbsd template. I'm not on\nboard with having the meson stuff generating different executables\nthan the Makefiles do, so if someone wants to propose applying\nthis widely, they'll need to fix both. Seems like that is a good\nthing to consider after the meson patches land. We don't need\nunnecessary churn in that area before that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 14:32:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-30 14:32:26 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-30 14:07:41 -0400, Tom Lane wrote:\n> >> Do we want to install this just for NetBSD, or more widely?\n> >> I think we'd better back-patch it for NetBSD, so I'm inclined\n> >> to be conservative about the change.\n> \n> > It's likely a good idea to enable it everywhere applicable, but I agree that\n> > we shouldn't unnecessarily do so in the backbranches. So I'd be inclined to\n> > add it to the netbsd template for the backbranches.\n> \n> > For HEAD I can see putting it into all the applicable templates, adding an\n> > AC_LINK_IFELSE() test, or just putting it into the meson stuff.\n> \n> For the moment I'll stick it into the netbsd template.\n\nCool.\n\n\n> I'm not on board with having the meson stuff generating different\n> executables than the Makefiles do, so if someone wants to propose applying\n> this widely, they'll need to fix both. Seems like that is a good thing to\n> consider after the meson patches land. We don't need unnecessary churn in\n> that area before that.\n\nYea, I didn't like that idea either, hence listing it last...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Aug 2022 11:46:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 1:34 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Aug 31, 2022 at 12:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Tue, Aug 30, 2022 at 8:17 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > FWIW I suspect FreeBSD can't break like this in a program linked with\n> > > libthr, because it has a scheme for deferring signals while the\n> > > runtime linker holds locks. _rtld_bind calls _thr_rtld_rlock_acquire,\n> > > which uses the THR_CRITICAL_ENTER mechanism to cause thr_sighandler to\n> > > defer until release. For a non-thread program, I'm not entirely sure,\n> > > but I don't think the fork() problem exists there. (Could be wrong,\n> > > based on a quick look.)\n> >\n> > Well that seems a bit ironic, considering that Tom has worried in the\n> > past that linking with threading libraries would break stuff.\n>\n> Hah. To clarify, non-thread builds don't have that exact fork()\n> problem, but it turns out they do have a related state clobbering\n> problem elsewhere, which I've reported.\n\nFor the record, reporting that resulted in a change for non-libthr rtld:\n\nhttps://cgit.freebsd.org/src/commit/?id=a687683b997c5805ecd6d8278798b7ef00d9908f\n\n\n",
"msg_date": "Mon, 5 Sep 2022 14:28:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
},
{
"msg_contents": "After commit 7389aad6, I think commit 8acd8f86's linker changes (+\nmeson.build's equivalent) must now be redundant?\n\n\n",
"msg_date": "Mon, 7 Aug 2023 16:49:00 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster self-deadlock due to PLT linkage resolution"
}
] |
[
{
"msg_contents": "Hi,\n\nA few things about the windows resource files we generate\n\n1) For make based builds, all libraries that are built with MODULES rather\n than MODULES_big have the wrong \"FILETYPE\", because Makefile.win32 checks\n $(shlib), which is only set for MODULES_big.\n\n This used to be even more widely wrong until recently:\n\n commit 16a4a3d59cd5574fdc697ea16ef5692ce34c54d5\n Author: Peter Eisentraut <peter@eisentraut.org>\n Date: 2020-01-15 10:15:06 +0100\n\n Remove libpq.rc, use win32ver.rc for libpq\n\n Afaict before that we only set it correctly for pgevent.\n\n\n2) For make base builds, We only set InternalName, OriginalFileName when\n $shlib is set, but InternalName, OriginalFilename are required.\n\n https://docs.microsoft.com/en-us/windows/win32/menurc/versioninfo-resource\n\n\n3) We don't add an icon to postgres (\"This is a daemon process, which is why\n it is not labeled as an executable\"), but we do add icons to several\n libraries, at least snowball, pgevent, libpq.\n\n We should probably just remove the icon from the libraries?\n\n\n4) We include the date, excluding 0 for some mysterious reason, in the version\n number. This seems to unnecessarily contribute to making the build not\n reproducible. Hails from long ago:\n\n commit 9af932075098bd3c143993386288a634d518713c\n Author: Bruce Momjian <bruce@momjian.us>\n Date: 2004-12-19 02:16:31 +0000\n\n Add Win32 version stamps that increment each day for proper SYSTEM32\n DLL pginstaller installs.\n\n5) We have a PGFILEDESC for (nearly?) every binary/library. They largely don't\n seem more useful descriptions than the binary's name. Why don't we just\n drop most of them and just set the description as something like\n \"PostgreSQL $name (binary|library)\"? I doubt anybody ever looks into these\n details except to perhaps check the version number or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Aug 2022 15:13:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "windows resource files, bugs and what do we actually want"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 12:13 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> A few things about the windows resource files we generate\n>\n> 1) For make based builds, all libraries that are built with MODULES rather\n> than MODULES_big have the wrong \"FILETYPE\", because Makefile.win32\n> checks\n> $(shlib), which is only set for MODULES_big.\n>\n> This used to be even more widely wrong until recently:\n>\n> commit 16a4a3d59cd5574fdc697ea16ef5692ce34c54d5\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> Date: 2020-01-15 10:15:06 +0100\n>\n> Remove libpq.rc, use win32ver.rc for libpq\n>\n> Afaict before that we only set it correctly for pgevent.\n>\n>\n> 2) For make base builds, We only set InternalName, OriginalFileName when\n> $shlib is set, but InternalName, OriginalFilename are required.\n>\n>\n> https://docs.microsoft.com/en-us/windows/win32/menurc/versioninfo-resource\n>\n>\n> 3) We don't add an icon to postgres (\"This is a daemon process, which is\n> why\n> it is not labeled as an executable\"), but we do add icons to several\n> libraries, at least snowball, pgevent, libpq.\n>\n> We should probably just remove the icon from the libraries?\n>\n\nAgreed, adding it to libraries seems plain wrong.\n\n\n4) We include the date, excluding 0 for some mysterious reason, in the\n> version\n> number. This seems to unnecessarily contribute to making the build not\n> reproducible. Hails from long ago:\n>\n> commit 9af932075098bd3c143993386288a634d518713c\n> Author: Bruce Momjian <bruce@momjian.us>\n> Date: 2004-12-19 02:16:31 +0000\n>\n> Add Win32 version stamps that increment each day for proper SYSTEM32\n> DLL pginstaller installs.\n>\n\nThis is obviously far too long ago for me to *actually* remember, but I\nthink the idea was to make it work with snapshot installers. 
As they would\nonly replace the binary if the version number was newer, so for snapshots\nit would be useful to have it always upgrade.\n\nDoing it for release builds seem a lot less useful.\n\n\n5) We have a PGFILEDESC for (nearly?) every binary/library. They largely\n> don't\n> seem more useful descriptions than the binary's name. Why don't we just\n> drop most of them and just set the description as something like\n> \"PostgreSQL $name (binary|library)\"? I doubt anybody ever looks into\n> these\n> details except to perhaps check the version number or such.\n>\n\nAt least back in the days, a lot of software inventory programs would\nscrape this information into corporate-wide databases to keep track of what\nwas in use across enterprises. I have no idea if people still do that or if\nit's all just checksums+databases now, but that was one reason back in the\ndays to put it there.\n\nBut yes, setting the description per your suggestion would work equally\nwell for that, and would make things more consistent.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Thu, 1 Sep 2022 22:34:07 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: windows resource files, bugs and what do we actually want"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-01 22:34:07 +0200, Magnus Hagander wrote:\n> 4) We include the date, excluding 0 for some mysterious reason, in the\n> > version\n> > number. This seems to unnecessarily contribute to making the build not\n> > reproducible. Hails from long ago:\n> >\n> > commit 9af932075098bd3c143993386288a634d518713c\n> > Author: Bruce Momjian <bruce@momjian.us>\n> > Date: 2004-12-19 02:16:31 +0000\n> >\n> > Add Win32 version stamps that increment each day for proper SYSTEM32\n> > DLL pginstaller installs.\n> >\n> \n> This is obviously far too long ago for me to *actually* remember, but I\n> think the idea was to make it work with snapshot installers. As they would\n> only replace the binary if the version number was newer, so for snapshots\n> it would be useful to have it always upgrade.\n\nDoes any installer actually behave that way? Seems very doubtful.\n\n\n> 5) We have a PGFILEDESC for (nearly?) every binary/library. They largely\n> > don't\n> > seem more useful descriptions than the binary's name. Why don't we just\n> > drop most of them and just set the description as something like\n> > \"PostgreSQL $name (binary|library)\"? I doubt anybody ever looks into\n> > these\n> > details except to perhaps check the version number or such.\n> >\n> \n> At least back in the days, a lot of software inventory programs would\n> scrape this information into corporate-wide databases to keep track of what\n> was in use across enterprises. I have no idea if people still do that or if\n> it's all just checksums+databases now, but that was one reason back in the\n> days to put it there.\n\nThink that still happens, although I suspect they care more about the vendor\netc than about the description. 
And would likely care more if we signed\nbuild products etc...\n\n\n> But yes, setting the description per your suggestion would work equally\n> well for that, and would make things more consistent.\n\nI guess I'll come up with a patch then :(\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Sep 2022 14:22:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: windows resource files, bugs and what do we actually want"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 11:22 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-09-01 22:34:07 +0200, Magnus Hagander wrote:\n> > 4) We include the date, excluding 0 for some mysterious reason, in the\n> > > version\n> > > number. This seems to unnecessarily contribute to making the build\n> not\n> > > reproducible. Hails from long ago:\n> > >\n> > > commit 9af932075098bd3c143993386288a634d518713c\n> > > Author: Bruce Momjian <bruce@momjian.us>\n> > > Date: 2004-12-19 02:16:31 +0000\n> > >\n> > > Add Win32 version stamps that increment each day for proper\n> SYSTEM32\n> > > DLL pginstaller installs.\n> > >\n> >\n> > This is obviously far too long ago for me to *actually* remember, but I\n> > think the idea was to make it work with snapshot installers. As they\n> would\n> > only replace the binary if the version number was newer, so for snapshots\n> > it would be useful to have it always upgrade.\n>\n> Does any installer actually behave that way? Seems very doubtful.\n>\n\nI think the one we had back in the days was. But that one is *long* dead\nnow.\n\n\n> 5) We have a PGFILEDESC for (nearly?) every binary/library. They largely\n> > > don't\n> > > seem more useful descriptions than the binary's name. Why don't we\n> just\n> > > drop most of them and just set the description as something like\n> > > \"PostgreSQL $name (binary|library)\"? I doubt anybody ever looks into\n> > > these\n> > > details except to perhaps check the version number or such.\n> > >\n> >\n> > At least back in the days, a lot of software inventory programs would\n> > scrape this information into corporate-wide databases to keep track of\n> what\n> > was in use across enterprises. I have no idea if people still do that or\n> if\n> > it's all just checksums+databases now, but that was one reason back in\n> the\n> > days to put it there.\n>\n> Think that still happens, although I suspect they care more about the\n> vendor\n> etc than about the description. 
And would likely care more if we signed\n> build products etc...\n>\n\nYeah, agreed on both accounts.\n\nAnd getting into signing them would certainly be a good thing, but that's a\nmuch bigger thing...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>g",
"msg_date": "Thu, 1 Sep 2022 23:26:22 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: windows resource files, bugs and what do we actually want"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-29 15:13:14 -0700, Andres Freund wrote:\n> 1) For make based builds, all libraries that are built with MODULES rather\n> than MODULES_big have the wrong \"FILETYPE\", because Makefile.win32 checks\n> $(shlib), which is only set for MODULES_big.\n> \n> This used to be even more widely wrong until recently:\n> \n> commit 16a4a3d59cd5574fdc697ea16ef5692ce34c54d5\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> Date: 2020-01-15 10:15:06 +0100\n> \n> Remove libpq.rc, use win32ver.rc for libpq\n> \n> Afaict before that we only set it correctly for pgevent.\n> \n> 2) For make base builds, We only set InternalName, OriginalFileName when\n> $shlib is set, but InternalName, OriginalFilename are required.\n> \n> https://docs.microsoft.com/en-us/windows/win32/menurc/versioninfo-resource\n> \n\nThese are harder to fix than was immediately obvious to me. We generate one\nwin32ver.rc per directory, even if a directory contains multiple build\nproducts (think MODULES or src/bin/scripts). So we simply can't put a correct\nfilename etc into the .rc file, unless we change the name of the .rc file.\n\nI looked into how hard it would be to fix this on the make side, and decided\nit's too hard. I'm inclined to leave this alone and fix it later in the meson\nport.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Sep 2022 18:26:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: windows resource files, bugs and what do we actually want"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 3:26 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-08-29 15:13:14 -0700, Andres Freund wrote:\n> > 1) For make based builds, all libraries that are built with MODULES\n> rather\n> > than MODULES_big have the wrong \"FILETYPE\", because Makefile.win32\n> checks\n> > $(shlib), which is only set for MODULES_big.\n> >\n> > This used to be even more widely wrong until recently:\n> >\n> > commit 16a4a3d59cd5574fdc697ea16ef5692ce34c54d5\n> > Author: Peter Eisentraut <peter@eisentraut.org>\n> > Date: 2020-01-15 10:15:06 +0100\n> >\n> > Remove libpq.rc, use win32ver.rc for libpq\n> >\n> > Afaict before that we only set it correctly for pgevent.\n> >\n> > 2) For make base builds, We only set InternalName, OriginalFileName when\n> > $shlib is set, but InternalName, OriginalFilename are required.\n> >\n> >\n> https://docs.microsoft.com/en-us/windows/win32/menurc/versioninfo-resource\n> >\n>\n> These are harder to fix than was immediately obvious to me. We generate one\n> win32ver.rc per directory, even if a directory contains multiple build\n> products (think MODULES or src/bin/scripts). So we simply can't put a\n> correct\n> filename etc into the .rc file, unless we change the name of the .rc file.\n>\n\nEeep. Yeah, that may be the reasoning behind some of how it was in the past.\n\n\n>\n> I looked into how hard it would be to fix this on the make side, and\n> decided\n> it's too hard. 
I'm inclined to leave this alone and fix it later in the\n> meson\n> port.\n>\n\nAgreed.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 2 Sep 2022 15:18:55 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: windows resource files, bugs and what do we actually want"
},
{
"msg_contents": "On 30.08.22 00:13, Andres Freund wrote:\n> 1) For make based builds, all libraries that are built with MODULES rather\n> than MODULES_big have the wrong \"FILETYPE\", because Makefile.win32 checks\n> $(shlib), which is only set for MODULES_big.\n> \n> This used to be even more widely wrong until recently:\n> \n> commit 16a4a3d59cd5574fdc697ea16ef5692ce34c54d5\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> Date: 2020-01-15 10:15:06 +0100\n> \n> Remove libpq.rc, use win32ver.rc for libpq\n> \n> Afaict before that we only set it correctly for pgevent.\n\nNote, when I worked on this at that time, it was with the aim of \nsimplifying the version stamping script. So I don't really know much \nabout this.\n\n> 3) We don't add an icon to postgres (\"This is a daemon process, which is why\n> it is not labeled as an executable\"), but we do add icons to several\n> libraries, at least snowball, pgevent, libpq.\n> \n> We should probably just remove the icon from the libraries?\n\nWouldn't the icon still show up in the file manager or something? Where \nis the icon actually used?\n\n> 4) We include the date, excluding 0 for some mysterious reason, in the version\n> number. This seems to unnecessarily contribute to making the build not\n> reproducible. Hails from long ago:\n\nYeah, that is evil.\n\n> 5) We have a PGFILEDESC for (nearly?) every binary/library. They largely don't\n> seem more useful descriptions than the binary's name. Why don't we just\n> drop most of them and just set the description as something like\n> \"PostgreSQL $name (binary|library)\"? I doubt anybody ever looks into these\n> details except to perhaps check the version number or such.\n\nWe do an equivalent shortcut with the pkg-config files:\n\n echo 'Description: PostgreSQL lib$(NAME) library' >>$@\n\nSeems good enough.\n\n\n",
"msg_date": "Wed, 7 Sep 2022 06:42:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: windows resource files, bugs and what do we actually want"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI'd like to propose some new hooks for the buffer manager. My primary goal\nis to allow users to create an additional caching mechanism between the\nshared buffers and disk for evicted buffers. For example, some EC2\ninstance classes have ephemeral disks that are physically attached to the\nhost machine that might be useful for such a cache. Presumably there are\nother uses (e.g., gathering more information about the buffer cache), but\nthis is the main use-case I have in mind. I am proposing the following new\nhooks:\n\n * bufmgr_read_hook: called in place of smgrread() in ReadBuffer_common().\n It is expected that such hooks would call smgrread() as necessary.\n\n * bufmgr_write_hook: called before smgrwrite() in FlushBuffer(). The hook\n indicateѕ whether the buffer is being evicted. Hook functions must\n gracefully handle concurrent hint bit updates to the page.\n\n * bufmgr_invalidate_hook: called within InvalidateBuffer().\n\nThe attached patch is a first attempt at introducing these hooks with\nacceptable names, placements, arguments, etc.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 29 Aug 2022 15:24:49 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "introduce bufmgr hooks"
},
{
"msg_contents": "At Mon, 29 Aug 2022 15:24:49 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> I'd like to propose some new hooks for the buffer manager. My primary goal\n> is to allow users to create an additional caching mechanism between the\n...\n> The attached patch is a first attempt at introducing these hooks with\n> acceptable names, placements, arguments, etc.\n> \n> Thoughts?\n\nsmgr is an abstract interface originally intended to allow to choose\none implementation among several (though cannot dynamically). Even\nthough the patch intends to replace specific (but most of all) uses of\nthe smgrread/write, still it sounds somewhat strange to me to add\nhooks to replace smgr functions in that respect. I'm not sure whether\nwe still regard smgr as just an interface, though..\n\nAs for the names, bufmgr_read_hook looks like as if it is additionally\ncalled when the normal operation performed by smgrread completes, or\njust before. (planner_hook already doesn't sounds so for me, though:p)\n\"bufmgr_alt_smgrread\" works for me but I'm not sure it is following\nthe project policy.\n\nI think that the INSTR_* section should enclose the hook call as it is\nstill an I/O operation in the view of the core.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 30 Aug 2022 13:02:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: introduce bufmgr hooks"
},
{
"msg_contents": "Thanks for taking a look.\n\nOn Tue, Aug 30, 2022 at 01:02:20PM +0900, Kyotaro Horiguchi wrote:\n> smgr is an abstract interface originally intended to allow to choose\n> one implementation among several (though cannot dynamically). Even\n> though the patch intends to replace specific (but most of all) uses of\n> the smgrread/write, still it sounds somewhat strange to me to add\n> hooks to replace smgr functions in that respect. I'm not sure whether\n> we still regard smgr as just an interface, though..\n\nI suspect that it's probably still worthwhile to provide such hooks so that\nyou don't have to write an entire smgr implementation. But I think you\nbring up a good point.\n\n> As for the names, bufmgr_read_hook looks like as if it is additionally\n> called when the normal operation performed by smgrread completes, or\n> just before. (planner_hook already doesn't sounds so for me, though:p)\n> \"bufmgr_alt_smgrread\" works for me but I'm not sure it is following\n> the project policy.\n\nYeah, the intent is for this hook to replace the smgrread() call (although\nit might end up calling smgrread()). I debated having this hook return\nwhether smgrread() needs to be called. Would that address your concern?\n\n> I think that the INSTR_* section should enclose the hook call as it is\n> still an I/O operation in the view of the core.\n\nOkay, will do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 15:22:43 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: introduce bufmgr hooks"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-29 15:24:49 -0700, Nathan Bossart wrote:\n> I'd like to propose some new hooks for the buffer manager. My primary goal\n> is to allow users to create an additional caching mechanism between the\n> shared buffers and disk for evicted buffers.\n\nI'm very doubtful this is a good idea. These are quite hot paths. While not a\nhuge cost, adding an indirection isn't free nonetheless. I also think it'll\nmake it harder to improve things in this area, which needs quite a bit of\nwork.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Aug 2022 08:29:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: introduce bufmgr hooks"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 08:29:31AM -0700, Andres Freund wrote:\n> I'm very doubtful this is a good idea. These are quite hot paths. While not a\n> huge cost, adding an indirection isn't free nonetheless.\n\nAre you concerned about the NULL check or the potential hook\nimplementations? I can probably test the former pretty easily, but the\nlatter seems like a generic problem for many hooks.\n\n> I also think it'll\n> make it harder to improve things in this area, which needs quite a bit of\n> work.\n\nIf you have specific refactoring in mind that you think ought to be a\nprerequisite for this change, I'm happy to give it a try.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 1 Sep 2022 13:11:50 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: introduce bufmgr hooks"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 03:22:43PM -0700, Nathan Bossart wrote:\n> Okay, will do.\n\nv2 attached.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 1 Sep 2022 14:18:20 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: introduce bufmgr hooks"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-01 13:11:50 -0700, Nathan Bossart wrote:\n> On Wed, Aug 31, 2022 at 08:29:31AM -0700, Andres Freund wrote:\n> > I'm very doubtful this is a good idea. These are quite hot paths. While not a\n> > huge cost, adding an indirection isn't free nonetheless.\n> \n> Are you concerned about the NULL check or the potential hook\n> implementations? I can probably test the former pretty easily, but the\n> latter seems like a generic problem for many hooks.\n\nMostly the former. But the latter is also relevant - the lock nesting etc is\nvery hard to deal with if you don't know what runs inside.\n\n\n> > I also think it'll\n> > make it harder to improve things in this area, which needs quite a bit of\n> > work.\n> \n> If you have specific refactoring in mind that you think ought to be a\n> prerequisite for this change, I'm happy to give it a try.\n\nThere's a few semi-active threads (e.g. about not holding multiple buffer\npartition locks). One important change is to split the way we acquire buffers\nfor file extensions - right now we get a victim buffer while holding the\nrelation extension lock, because there's simply no API to do otherwise. We\nneed to change that so we get acquire a victim buffer before holding the\nextension lock (with the buffer pinned but not [tag] valid), then we need to\nget the extension lock, insert it into its new position in the buffer mapping\ntable.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Sep 2022 17:34:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: introduce bufmgr hooks"
},
{
"msg_contents": "On Thu, Sep 01, 2022 at 05:34:03PM -0700, Andres Freund wrote:\n> On 2022-09-01 13:11:50 -0700, Nathan Bossart wrote:\n>> On Wed, Aug 31, 2022 at 08:29:31AM -0700, Andres Freund wrote:\n>> > I also think it'll\n>> > make it harder to improve things in this area, which needs quite a bit of\n>> > work.\n>> \n>> If you have specific refactoring in mind that you think ought to be a\n>> prerequisite for this change, I'm happy to give it a try.\n> \n> There's a few semi-active threads (e.g. about not holding multiple buffer\n> partition locks). One important change is to split the way we acquire buffers\n> for file extensions - right now we get a victim buffer while holding the\n> relation extension lock, because there's simply no API to do otherwise. We\n> need to change that so we get acquire a victim buffer before holding the\n> extension lock (with the buffer pinned but not [tag] valid), then we need to\n> get the extension lock, insert it into its new position in the buffer mapping\n> table.\n\nI see, thanks for clarifying.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 2 Sep 2022 15:26:06 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: introduce bufmgr hooks"
}
] |
[
{
"msg_contents": "When adding an option, we have 5 choices (bool, integer, real, enum, string),\nso the comments seem stale.\n\nThere are some sentences missing *at ShareUpdateExclusiveLock*, this\npatch adds them to make the sentences complete.\n\nOne thing I'm not sure is should we use *at ShareUpdateExclusiveLock* or\n*with ShareUpdateExclusiveLock*, pls take a look.\n\n src/backend/access/common/reloptions.c | 18 +++++++++---------\n 1 file changed, 9 insertions(+), 9 deletions(-)\n\ndiff --git a/src/backend/access/common/reloptions.c\nb/src/backend/access/common/reloptions.c\nindex 609329bb21..9e99868faa 100644\n--- a/src/backend/access/common/reloptions.c\n+++ b/src/backend/access/common/reloptions.c\n@@ -42,9 +42,9 @@\n *\n * To add an option:\n *\n- * (i) decide on a type (integer, real, bool, string), name, default value,\n- * upper and lower bounds (if applicable); for strings, consider a validation\n- * routine.\n+ * (i) decide on a type (bool, integer, real, enum, string), name, default\n+ * value, upper and lower bounds (if applicable); for strings, consider a\n+ * validation routine.\n * (ii) add a record below (or use add_<type>_reloption).\n * (iii) add it to the appropriate options struct (perhaps StdRdOptions)\n * (iv) add it to the appropriate handling routine (perhaps\n@@ -68,24 +68,24 @@\n * since they are only used by the AV procs and don't change anything\n * currently executing.\n *\n- * Fillfactor can be set because it applies only to subsequent changes made to\n- * data blocks, as documented in hio.c\n+ * Fillfactor can be set at ShareUpdateExclusiveLock because it applies only to\n+ * subsequent changes made to data blocks, as documented in hio.c\n *\n * n_distinct options can be set at ShareUpdateExclusiveLock because they\n * are only used during ANALYZE, which uses a ShareUpdateExclusiveLock,\n * so the ANALYZE will not be affected by in-flight changes. 
Changing those\n * values has no effect until the next ANALYZE, so no need for stronger lock.\n *\n- * Planner-related parameters can be set with ShareUpdateExclusiveLock because\n+ * Planner-related parameters can be set at ShareUpdateExclusiveLock because\n * they only affect planning and not the correctness of the execution. Plans\n * cannot be changed in mid-flight, so changes here could not easily result in\n * new improved plans in any case. So we allow existing queries to continue\n * and existing plans to survive, a small price to pay for allowing better\n * plans to be introduced concurrently without interfering with users.\n *\n- * Setting parallel_workers is safe, since it acts the same as\n- * max_parallel_workers_per_gather which is a USERSET parameter that doesn't\n- * affect existing plans or queries.\n+ * Setting parallel_workers at ShareUpdateExclusiveLock is safe, since it acts\n+ * the same as max_parallel_workers_per_gather which is a USERSET parameter\n+ * that doesn't affect existing plans or queries.\n *\n * vacuum_truncate can be set at ShareUpdateExclusiveLock because it\n * is only used during VACUUM, which uses a ShareUpdateExclusiveLock,\n-- \n2.33.0\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Tue, 30 Aug 2022 11:56:30 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH v1] [doc] polish the comments of reloptions"
},
{
"msg_contents": "thoughts?\n\nOn Tue, Aug 30, 2022 at 11:56 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> When adding an option, we have 5 choices (bool, integer, real, enum, string),\n> so the comments seem stale.\n>\n> There are some sentences missing *at ShareUpdateExclusiveLock*, this\n> patch adds them to make the sentences complete.\n>\n> One thing I'm not sure is should we use *at ShareUpdateExclusiveLock* or\n> *with ShareUpdateExclusiveLock*, pls take a look.\n>\n> src/backend/access/common/reloptions.c | 18 +++++++++---------\n> 1 file changed, 9 insertions(+), 9 deletions(-)\n>\n> diff --git a/src/backend/access/common/reloptions.c\n> b/src/backend/access/common/reloptions.c\n> index 609329bb21..9e99868faa 100644\n> --- a/src/backend/access/common/reloptions.c\n> +++ b/src/backend/access/common/reloptions.c\n> @@ -42,9 +42,9 @@\n> *\n> * To add an option:\n> *\n> - * (i) decide on a type (integer, real, bool, string), name, default value,\n> - * upper and lower bounds (if applicable); for strings, consider a validation\n> - * routine.\n> + * (i) decide on a type (bool, integer, real, enum, string), name, default\n> + * value, upper and lower bounds (if applicable); for strings, consider a\n> + * validation routine.\n> * (ii) add a record below (or use add_<type>_reloption).\n> * (iii) add it to the appropriate options struct (perhaps StdRdOptions)\n> * (iv) add it to the appropriate handling routine (perhaps\n> @@ -68,24 +68,24 @@\n> * since they are only used by the AV procs and don't change anything\n> * currently executing.\n> *\n> - * Fillfactor can be set because it applies only to subsequent changes made to\n> - * data blocks, as documented in hio.c\n> + * Fillfactor can be set at ShareUpdateExclusiveLock because it applies only to\n> + * subsequent changes made to data blocks, as documented in hio.c\n> *\n> * n_distinct options can be set at ShareUpdateExclusiveLock because they\n> * are only used during ANALYZE, which uses a 
ShareUpdateExclusiveLock,\n> * so the ANALYZE will not be affected by in-flight changes. Changing those\n> * values has no effect until the next ANALYZE, so no need for stronger lock.\n> *\n> - * Planner-related parameters can be set with ShareUpdateExclusiveLock because\n> + * Planner-related parameters can be set at ShareUpdateExclusiveLock because\n> * they only affect planning and not the correctness of the execution. Plans\n> * cannot be changed in mid-flight, so changes here could not easily result in\n> * new improved plans in any case. So we allow existing queries to continue\n> * and existing plans to survive, a small price to pay for allowing better\n> * plans to be introduced concurrently without interfering with users.\n> *\n> - * Setting parallel_workers is safe, since it acts the same as\n> - * max_parallel_workers_per_gather which is a USERSET parameter that doesn't\n> - * affect existing plans or queries.\n> + * Setting parallel_workers at ShareUpdateExclusiveLock is safe, since it acts\n> + * the same as max_parallel_workers_per_gather which is a USERSET parameter\n> + * that doesn't affect existing plans or queries.\n> *\n> * vacuum_truncate can be set at ShareUpdateExclusiveLock because it\n> * is only used during VACUUM, which uses a ShareUpdateExclusiveLock,\n> --\n> 2.33.0\n>\n>\n> --\n> Regards\n> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 1 Sep 2022 14:38:01 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] [doc] polish the comments of reloptions"
},
{
"msg_contents": "\nPatch applied.\n\n---------------------------------------------------------------------------\n\nOn Tue, Aug 30, 2022 at 11:56:30AM +0800, Junwang Zhao wrote:\n> When adding an option, we have 5 choices (bool, integer, real, enum, string),\n> so the comments seem stale.\n> \n> There are some sentences missing *at ShareUpdateExclusiveLock*, this\n> patch adds them to make the sentences complete.\n> \n> One thing I'm not sure is should we use *at ShareUpdateExclusiveLock* or\n> *with ShareUpdateExclusiveLock*, pls take a look.\n> \n> src/backend/access/common/reloptions.c | 18 +++++++++---------\n> 1 file changed, 9 insertions(+), 9 deletions(-)\n> \n> diff --git a/src/backend/access/common/reloptions.c\n> b/src/backend/access/common/reloptions.c\n> index 609329bb21..9e99868faa 100644\n> --- a/src/backend/access/common/reloptions.c\n> +++ b/src/backend/access/common/reloptions.c\n> @@ -42,9 +42,9 @@\n> *\n> * To add an option:\n> *\n> - * (i) decide on a type (integer, real, bool, string), name, default value,\n> - * upper and lower bounds (if applicable); for strings, consider a validation\n> - * routine.\n> + * (i) decide on a type (bool, integer, real, enum, string), name, default\n> + * value, upper and lower bounds (if applicable); for strings, consider a\n> + * validation routine.\n> * (ii) add a record below (or use add_<type>_reloption).\n> * (iii) add it to the appropriate options struct (perhaps StdRdOptions)\n> * (iv) add it to the appropriate handling routine (perhaps\n> @@ -68,24 +68,24 @@\n> * since they are only used by the AV procs and don't change anything\n> * currently executing.\n> *\n> - * Fillfactor can be set because it applies only to subsequent changes made to\n> - * data blocks, as documented in hio.c\n> + * Fillfactor can be set at ShareUpdateExclusiveLock because it applies only to\n> + * subsequent changes made to data blocks, as documented in hio.c\n> *\n> * n_distinct options can be set at ShareUpdateExclusiveLock 
because they\n> * are only used during ANALYZE, which uses a ShareUpdateExclusiveLock,\n> * so the ANALYZE will not be affected by in-flight changes. Changing those\n> * values has no effect until the next ANALYZE, so no need for stronger lock.\n> *\n> - * Planner-related parameters can be set with ShareUpdateExclusiveLock because\n> + * Planner-related parameters can be set at ShareUpdateExclusiveLock because\n> * they only affect planning and not the correctness of the execution. Plans\n> * cannot be changed in mid-flight, so changes here could not easily result in\n> * new improved plans in any case. So we allow existing queries to continue\n> * and existing plans to survive, a small price to pay for allowing better\n> * plans to be introduced concurrently without interfering with users.\n> *\n> - * Setting parallel_workers is safe, since it acts the same as\n> - * max_parallel_workers_per_gather which is a USERSET parameter that doesn't\n> - * affect existing plans or queries.\n> + * Setting parallel_workers at ShareUpdateExclusiveLock is safe, since it acts\n> + * the same as max_parallel_workers_per_gather which is a USERSET parameter\n> + * that doesn't affect existing plans or queries.\n> *\n> * vacuum_truncate can be set at ShareUpdateExclusiveLock because it\n> * is only used during VACUUM, which uses a ShareUpdateExclusiveLock,\n> -- \n> 2.33.0\n> \n> \n> -- \n> Regards\n> Junwang Zhao\n\n> From 4cef73a6f7ef4b59a6cc1cd9720b4e545bc36861 Mon Sep 17 00:00:00 2001\n> From: Junwang Zhao <zhjwpku@gmail.com>\n> Date: Tue, 30 Aug 2022 11:33:14 +0800\n> Subject: [PATCH v1] [doc] polish the comments of reloptions\n> \n> 1. add the missing enum type and change the order to consistent\n> with relopt_type\n> 2. 
add some missing ShareUpdateExclusiveLock\n> ---\n> src/backend/access/common/reloptions.c | 18 +++++++++---------\n> 1 file changed, 9 insertions(+), 9 deletions(-)\n> \n> diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c\n> index 609329bb21..9e99868faa 100644\n> --- a/src/backend/access/common/reloptions.c\n> +++ b/src/backend/access/common/reloptions.c\n> @@ -42,9 +42,9 @@\n> *\n> * To add an option:\n> *\n> - * (i) decide on a type (integer, real, bool, string), name, default value,\n> - * upper and lower bounds (if applicable); for strings, consider a validation\n> - * routine.\n> + * (i) decide on a type (bool, integer, real, enum, string), name, default\n> + * value, upper and lower bounds (if applicable); for strings, consider a\n> + * validation routine.\n> * (ii) add a record below (or use add_<type>_reloption).\n> * (iii) add it to the appropriate options struct (perhaps StdRdOptions)\n> * (iv) add it to the appropriate handling routine (perhaps\n> @@ -68,24 +68,24 @@\n> * since they are only used by the AV procs and don't change anything\n> * currently executing.\n> *\n> - * Fillfactor can be set because it applies only to subsequent changes made to\n> - * data blocks, as documented in hio.c\n> + * Fillfactor can be set at ShareUpdateExclusiveLock because it applies only to\n> + * subsequent changes made to data blocks, as documented in hio.c\n> *\n> * n_distinct options can be set at ShareUpdateExclusiveLock because they\n> * are only used during ANALYZE, which uses a ShareUpdateExclusiveLock,\n> * so the ANALYZE will not be affected by in-flight changes. Changing those\n> * values has no effect until the next ANALYZE, so no need for stronger lock.\n> *\n> - * Planner-related parameters can be set with ShareUpdateExclusiveLock because\n> + * Planner-related parameters can be set at ShareUpdateExclusiveLock because\n> * they only affect planning and not the correctness of the execution. 
Plans\n> * cannot be changed in mid-flight, so changes here could not easily result in\n> * new improved plans in any case. So we allow existing queries to continue\n> * and existing plans to survive, a small price to pay for allowing better\n> * plans to be introduced concurrently without interfering with users.\n> *\n> - * Setting parallel_workers is safe, since it acts the same as\n> - * max_parallel_workers_per_gather which is a USERSET parameter that doesn't\n> - * affect existing plans or queries.\n> + * Setting parallel_workers at ShareUpdateExclusiveLock is safe, since it acts\n> + * the same as max_parallel_workers_per_gather which is a USERSET parameter\n> + * that doesn't affect existing plans or queries.\n> *\n> * vacuum_truncate can be set at ShareUpdateExclusiveLock because it\n> * is only used during VACUUM, which uses a ShareUpdateExclusiveLock,\n> -- \n> 2.33.0\n> \n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 27 Oct 2023 19:05:33 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] [doc] polish the comments of reloptions"
}
] |
[
{
"msg_contents": "mbuf_tell and mbuf_rewind functions were introduced in commit e94dd6ab91 but\nwere seemingly never used, so it seems we can consider retiring them in v16.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 30 Aug 2022 12:59:15 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Removing dead code in pgcrypto"
},
{
"msg_contents": "Hi Daniel,\n\n> it seems we can consider retiring them in v16.\n\nLooks good to me. A link to the discussion was added to the patch.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 30 Aug 2022 15:39:21 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing dead code in pgcrypto"
},
{
"msg_contents": "> On 30 Aug 2022, at 14:39, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> Hi Daniel,\n> \n>> it seems we can consider retiring them in v16.\n> \n> Looks good to me. A link to the discussion was added to the patch.\n\nThanks for looking! On closer inspection, I found another function which was\nnever used and which doesn't turn up when searching extensions. The attached\nremoves pgp_get_cipher_name as well.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 30 Aug 2022 14:52:35 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Removing dead code in pgcrypto"
},
{
"msg_contents": "Hi Daniel,\n\n> Thanks for looking! On closer inspection, I found another function which was\n> never used and which doesn't turn up when searching extensions. The attached\n> removes pgp_get_cipher_name as well.\n\nI'm pretty sure this change is fine too, but I added the patch to the\nCF application in order to play it safe. Let's see what cfbot will\ntell us.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 30 Aug 2022 16:01:02 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing dead code in pgcrypto"
},
{
"msg_contents": "Hi again,\n\n> I'm pretty sure this change is fine too, but I added the patch to the\n> CF application in order to play it safe. Let's see what cfbot will\n> tell us.\n\nI see a little race condition happen :) Sorry for this.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 30 Aug 2022 16:03:32 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing dead code in pgcrypto"
}
] |
[
{
"msg_contents": "In pg_regress we set restrictedToken when calling CreateRestrictedProcess, but\nwe never seem to use that anywhere. Not being well versed in Windows I might\nbe missing something, but is it needed or is it a copy/pasteo from fa1e5afa8a2\nwhich does that in restricted_token.c? If not needed, removing it makes the\ncode more readable IMO.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 30 Aug 2022 15:02:54 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Setting restrictedtoken in pg_regress"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 03:02:54PM +0200, Daniel Gustafsson wrote:\n> In pg_regress we set restrictedToken when calling CreateRestrictedProcess, but\n> we never seem to use that anywhere. Not being well versed in Windows I might\n> be missing something, but is it needed or is it a copy/pasteo from fa1e5afa8a2\n> which does that in restricted_token.c? If not needed, removing it makes the\n> code more readable IMO.\n\nLooks reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Jun 2023 16:12:22 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Setting restrictedtoken in pg_regress"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 04:12:22PM -0700, Nathan Bossart wrote:\n> On Tue, Aug 30, 2022 at 03:02:54PM +0200, Daniel Gustafsson wrote:\n>> In pg_regress we set restrictedToken when calling CreateRestrictedProcess, but\n>> we never seem to use that anywhere. Not being well versed in Windows I might\n>> be missing something, but is it needed or is it a copy/pasteo from fa1e5afa8a2\n>> which does that in restricted_token.c? If not needed, removing it makes the\n>> code more readable IMO.\n> \n> Looks reasonable to me.\n\nIndeed, looks like a copy-pasto to me.\n\nI am actually a bit confused with the return value of\nCreateRestrictedProcess() on failures in restricted_token.c. Wouldn't\nit be cleaner to return INVALID_HANDLE_VALUE rather than 0 in these\ncases?\n--\nMichael",
"msg_date": "Tue, 13 Jun 2023 08:29:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Setting restrictedtoken in pg_regress"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 08:29:19AM +0900, Michael Paquier wrote:\n> I am actually a bit confused with the return value of\n> CreateRestrictedProcess() on failures in restricted_token.c. Wouldn't\n> it be cleaner to return INVALID_HANDLE_VALUE rather than 0 in these\n> cases?\n\nMy suspicion is that this was chosen to align with CreateProcess and to\nallow things like\n\n\tif (!CreateRestrictedProcess(...))\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Jun 2023 16:43:09 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Setting restrictedtoken in pg_regress"
},
{
"msg_contents": "On 2023-06-12 Mo 19:43, Nathan Bossart wrote:\n> On Tue, Jun 13, 2023 at 08:29:19AM +0900, Michael Paquier wrote:\n>> I am actually a bit confused with the return value of\n>> CreateRestrictedProcess() on failures in restricted_token.c. Wouldn't\n>> it be cleaner to return INVALID_HANDLE_VALUE rather than 0 in these\n>> cases?\n> My suspicion is that this was chosen to align with CreateProcess and to\n> allow things like\n>\n> \tif (!CreateRestrictedProcess(...))\n\n\nProbably, it's been a while. I doubt it's worth changing at this point, \nand we could just change pg_regress.c to use a boolean test like the above.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-12 Mo 19:43, Nathan Bossart\n wrote:\n\n\nOn Tue, Jun 13, 2023 at 08:29:19AM +0900, Michael Paquier wrote:\n\n\nI am actually a bit confused with the return value of\nCreateRestrictedProcess() on failures in restricted_token.c. Wouldn't\nit be cleaner to return INVALID_HANDLE_VALUE rather than 0 in these\ncases?\n\n\n\nMy suspicion is that this was chosen to align with CreateProcess and to\nallow things like\n\n\tif (!CreateRestrictedProcess(...))\n\n\n\nProbably, it's been a while. I doubt it's worth changing at this\n point, and we could just change pg_regress.c to use a boolean test\n like the above.\n\n\ncheers\n\n\nandrew\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 14 Jun 2023 07:02:30 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Setting restrictedtoken in pg_regress"
},
{
"msg_contents": "> On 14 Jun 2023, at 13:02, Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2023-06-12 Mo 19:43, Nathan Bossart wrote:\n>> On Tue, Jun 13, 2023 at 08:29:19AM +0900, Michael Paquier wrote:\n>> \n>>> I am actually a bit confused with the return value of\n>>> CreateRestrictedProcess() on failures in restricted_token.c. Wouldn't\n>>> it be cleaner to return INVALID_HANDLE_VALUE rather than 0 in these\n>>> cases?\n>>> \n>> My suspicion is that this was chosen to align with CreateProcess and to\n>> allow things like\n>> \n>> \tif (!CreateRestrictedProcess(...))\n> \n> Probably, it's been a while. I doubt it's worth changing at this point, and we could just change pg_regress.c to use a boolean test like the above.\n\nDone that way and pushed, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 22:10:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Setting restrictedtoken in pg_regress"
}
] |
[
{
"msg_contents": "Hi,\n\nI found the list of TG_ variables on\nhttps://www.postgresql.org/docs/current/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER\nhard to read for several reasons: too much whitespace, all the lines\nstart with \"Data type\", and even after that, the actual content is\nhiding behind some extra \"variable that...\" boilerplate.\n\nThe attached patch formats the list as a table, and removes some of\nthe clutter from the text.\n\nI reused the catalog_table_entry table machinery, that is probably not\nquite the correct thing, but I didn't find a better variant, and the\nresult looks ok.\n\nThanks to ilmari for the idea and some initial reviews.\n\nChristoph",
"msg_date": "Tue, 30 Aug 2022 15:16:17 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "On 30.08.22 15:16, Christoph Berg wrote:\n> I found the list of TG_ variables on\n> https://www.postgresql.org/docs/current/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER\n> hard to read for several reasons: too much whitespace, all the lines\n> start with \"Data type\", and even after that, the actual content is\n> hiding behind some extra \"variable that...\" boilerplate.\n> \n> The attached patch formats the list as a table, and removes some of\n> the clutter from the text.\n> \n> I reused the catalog_table_entry table machinery, that is probably not\n> quite the correct thing, but I didn't find a better variant, and the\n> result looks ok.\n\nI find the new version even harder to read. The catalog_table_entry \nstuff doesn't really make sense here, since what you have before is \nalready a definition list, and afterwards you have the same, just marked \nup \"incorrectly\".\n\nWe could move the data type in the <term>, similar to how you did it in \nyour patch.\n\nI agree the whitespace layout is weird, but that's a problem of the \nwebsite CSS stylesheet. I think it looks a bit better with the local \nstylesheet, but that can all be tweaked.\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 11:35:17 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "> On 31 Aug 2022, at 11:35, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 30.08.22 15:16, Christoph Berg wrote:\n>> I found the list of TG_ variables on\n>> https://www.postgresql.org/docs/current/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER\n>> hard to read for several reasons: too much whitespace, all the lines\n>> start with \"Data type\", and even after that, the actual content is\n>> hiding behind some extra \"variable that...\" boilerplate.\n>> The attached patch formats the list as a table, and removes some of\n>> the clutter from the text.\n>> I reused the catalog_table_entry table machinery, that is probably not\n>> quite the correct thing, but I didn't find a better variant, and the\n>> result looks ok.\n> \n> I find the new version even harder to read. The catalog_table_entry stuff doesn't really make sense here, since what you have before is already a definition list, and afterwards you have the same, just marked up \"incorrectly\".\n\nIf we change variable lists they should get their own formatting in the xsl\nand css stylesheets.\n\n> We could move the data type in the <term>, similar to how you did it in your patch.\n\nThat will make this look different from the trigger variable lists for other\nlanguages (which typically don't list type), but I think it's worth it to avoid\nthe boilerplate which is a bit annoying.\n\nAnother thing we should change while there (but it's not directly related to\nthis patch) is that we document TG_RELID and $TG_relid as \"object ID\" but\nTD[\"relid\"] and $_TD->{relid} as \"OID\". Punctuation of item descriptions is\nalso not consistent.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 11:55:50 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "Re: Peter Eisentraut\n> I find the new version even harder to read. The catalog_table_entry stuff\n> doesn't really make sense here, since what you have before is already a\n> definition list, and afterwards you have the same, just marked up\n> \"incorrectly\".\n\nFair enough. For comparison, this is what yesterday's patch looked\nlike: https://www.df7cb.de/s/2022-08-31.115813.w5UvAS.png\n\n> We could move the data type in the <term>, similar to how you did it in your\n> patch.\n\nThe new version of the patch just moves up the data types, and removes\nthe extra clutter from the beginnings of each description:\n\nhttps://www.df7cb.de/s/2022-08-31.115857.LkkKl8.png\n\nChristoph",
"msg_date": "Wed, 31 Aug 2022 11:59:29 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "Re: To Peter Eisentraut\n> The new version of the patch just moves up the data types, and removes\n> the extra clutter from the beginnings of each description:\n\nThe last version had the brackets in TG_ARGV[] (text[]) duplicated.\n\nChristoph",
"msg_date": "Wed, 31 Aug 2022 13:55:19 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "> On 31 Aug 2022, at 13:55, Christoph Berg <myon@debian.org> wrote:\n> \n> Re: To Peter Eisentraut\n>> The new version of the patch just moves up the data types, and removes\n>> the extra clutter from the beginnings of each description:\n> \n> The last version had the brackets in TG_ARGV[] (text[]) duplicated.\n\nThis, and the other string variables, now reads a bit strange IMO:\n\n- Data type <type>text</type>; a string of\n+ string\n <literal>INSERT</literal>, <literal>UPDATE</literal>,\n <literal>DELETE</literal>, or <literal>TRUNCATE</literal>\n\nWouldn't it be better with \"string containing <literal>INSERT..\" or something\nsimilar?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 14:12:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "Re: Daniel Gustafsson\n> This, and the other string variables, now reads a bit strange IMO:\n> \n> - Data type <type>text</type>; a string of\n> + string\n> <literal>INSERT</literal>, <literal>UPDATE</literal>,\n> <literal>DELETE</literal>, or <literal>TRUNCATE</literal>\n> \n> Wouldn't it be better with \"string containing <literal>INSERT..\" or something\n> similar?\n\nRight, that felt strange to me as well, but I couldn't think of\nsomething better.\n\n\"string containing\" is again pretty boilerplatish, how about just\n\"contains\"?\n\nChristoph",
"msg_date": "Wed, 31 Aug 2022 14:18:26 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "Re: To Daniel Gustafsson\n> \"string containing\" is again pretty boilerplatish, how about just\n> \"contains\"?\n\nActually, just omitting the whole prefix works best.\n\nTG_WHEN (text)\n\n BEFORE, AFTER, or INSTEAD OF, depending on the trigger's definition.\n\nI also shortened some \"name of table\" to just \"table\". Since the data\ntype is \"name\", it's clear what \"table\" means.\n\nChristoph",
"msg_date": "Wed, 31 Aug 2022 14:33:24 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n\n> Re: To Daniel Gustafsson\n>> \"string containing\" is again pretty boilerplatish, how about just\n>> \"contains\"?\n>\n> Actually, just omitting the whole prefix works best.\n>\n> TG_WHEN (text)\n>\n> BEFORE, AFTER, or INSTEAD OF, depending on the trigger's definition.\n\nThe attached patch does not reflect this, did you attach an old version?\n\n> I also shortened some \"name of table\" to just \"table\". Since the data\n> type is \"name\", it's clear what \"table\" means.\n\nI think it reads better with the definite article and initial capital,\ne.g. \"The table that triggered ….\".\n\n> <variablelist>\n> <varlistentry>\n> - <term><varname>NEW</varname></term>\n> + <term><varname>NEW</varname> (record)</term>\n\nThe type names should still be wrapped in <type>, like they were before.\n\n- ilmari\n\n\n",
"msg_date": "Wed, 31 Aug 2022 15:19:50 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "Re: Dagfinn Ilmari Mannsåker\n> > Actually, just omitting the whole prefix works best.\n> >\n> > TG_WHEN (text)\n> >\n> > BEFORE, AFTER, or INSTEAD OF, depending on the trigger's definition.\n> \n> The attached patch does not reflect this, did you attach an old version?\n\nForgot to git commit before exporting the patch, thanks for catching!\n\n> > I also shortened some \"name of table\" to just \"table\". Since the data\n> > type is \"name\", it's clear what \"table\" means.\n> \n> I think it reads better with the definite article and initial capital,\n> e.g. \"The table that triggered ….\".\n\nSince that's not a complete sentence anyway, I think \"The\" isn't\nnecessary.\n\n> > - <term><varname>NEW</varname></term>\n> > + <term><varname>NEW</varname> (record)</term>\n> \n> The type names should still be wrapped in <type>, like they were before.\n\nUpdated.\n\nThanks,\nChristoph",
"msg_date": "Thu, 1 Sep 2022 15:07:03 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "> On 1 Sep 2022, at 15:07, Christoph Berg <myon@debian.org> wrote:\n> Re: Dagfinn Ilmari Mannsåker\n\n>>> I also shortened some \"name of table\" to just \"table\". Since the data\n>>> type is \"name\", it's clear what \"table\" means.\n>> \n>> I think it reads better with the definite article and initial capital,\n>> e.g. \"The table that triggered ….\".\n> \n> Since that's not a complete sentence anyway, I think \"The\" isn't\n> necessary.\n\nLooking at the docs for the other PLs there is quite a lot of variation on how\nwe spell this, fixing that inconsistency is for another patch though.\n\nThe patch missed to update the corresponding list for TG_ event trigger vars,\nfixed in the attached.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Fri, 2 Sep 2022 11:19:12 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
},
{
"msg_contents": "> On 2 Sep 2022, at 11:19, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> The patch missed to update the corresponding list for TG_ event trigger vars,\n> fixed in the attached.\n\nI took another look at this, and pushed it with a few small tweaks. Thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 15 Nov 2022 15:00:44 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: plpgsql-trigger.html: Format TG_ variables as table (patch)"
}
] |
[
{
"msg_contents": ">It's a shame you only see 3%, but that's still worth it.\nHi,\n\nI ran this test here:\n\nDROP TABLE hash_speed;\nCREATE unlogged TABLE hash_speed (x integer);\nINSERT INTO hash_speed SELECT random()*10000000 FROM\ngenerate_series(1,10000000) x;\nVACUUM\nTiming is on.\nCREATE INDEX ON hash_speed USING hash (x);\n\nhead:\nTime: 20526,490 ms (00:20,526)\n\nattached patch (v3):\nTime: 18810,777 ms (00:18,811)\n\nI can see 9%, with the patch (v3) attached.\n\nThis optimization would not apply in any way also to _hash_pgaddmultitup?\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 30 Aug 2022 15:36:17 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash index build performance tweak from sorting"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nPlease see the first draft for the PostgreSQL 15 release announcement. \r\nThis is the announcement that goes out when we ship 15.0.\r\n\r\nA few notes on the first draft:\r\n\r\n1. I have not put in any links yet -- I want to ensure the document is \r\nclose to being static before I add those in.\r\n\r\n2. I have left in a blurb about SQL/JSON while awaiting the decision on \r\nif the feature is included in v15.\r\n\r\nPlease provide feedback no later than 2022-09-10 0:00 AoE[1]. After this \r\ndate, we will begin assembling the presskit that includes the translations.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth",
"msg_date": "Tue, 30 Aug 2022 15:58:48 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "Hi,\n\nOn Tue, Aug 30, 2022 at 03:58:48PM -0400, Jonathan S. Katz wrote:\n> ### Other Notable Changes\n> \n> PostgreSQL server-level statistics are now collected in shared memory,\n> eliminating the statistics collector process and writing these stats to disk.\n> PostgreSQL 15 also revokes the `CREATE` permission from all users except a\n> database owner from the `public` (or default) schema.\n\nIt's a bit weird to lump those two in the same paragraph, but ok.\n\nHowever, I think the \"and writing these stats to disk.\" might not be\nvery clear to people not familiar with the feature, they might think\nwriting stats to disk is part of the new feature. So I propose \"as well\nas writing theses stats to disk\" instead or something.\n\n\nMichael\n\n\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 19:51:33 +0200",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 03:58:48PM -0400, Jonathan S. Katz wrote:\n\n> In this latest release, PostgreSQL improves on its in-memory and on-disk sorting\n> algorithms, with benchmarks showing speedups of 25% - 400% based on sort types.\n\nrather than \"based on\": \"depending on the data types being sorted\"\n\n> Building on work from the previous PostgreSQL release for allowing async remote\n> queries, the PostgreSQL foreign data wrapper, `postgres_fdw`, can now commit\n> transactions in parallel.\n\nasynchronous\n\n> benefits for certain workloads. On certain operating systems, PostgreSQL 15\n\ns/certain/some ?\n\n> supports the ability to prefetch WAL file contents and speed up recovery times.\n\n> PostgreSQL's built-in backup command, `pg_basebackup`, now supports server-side\n> compression of backup files with a choice of gzip, LZ4, and zstd.\n\nremove \"server-side\", since they're also supported on the client-side.\n\n> PostgreSQL 15 lets user create views that query data using the permissions of\n\nusers\n\n> the caller, not the view creator. This option, called `security_invoker`, adds\n> an additional layer of protection to ensure view callers have the correct\n> permissions for working with the underlying data.\n\nensure *that ?\n\n> alter server-level configuration parameters. Additionally, users can now search\n> for information about configuration using the `\\dconfig` command from the `psql`\n> command-line tool.\n\nrather than \"search for information about configuration\", say \"list\nconfiguration information\" ?\n\n> PostgreSQL server-level statistics are now collected in shared memory,\n> eliminating the statistics collector process and writing these stats to disk.\n\nand *the need to periodically* write these stats to disk\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 31 Aug 2022 19:15:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On 8/31/22 1:51 PM, Michael Banck wrote:\r\n> Hi,\r\n> \r\n> On Tue, Aug 30, 2022 at 03:58:48PM -0400, Jonathan S. Katz wrote:\r\n>> ### Other Notable Changes\r\n>>\r\n>> PostgreSQL server-level statistics are now collected in shared memory,\r\n>> eliminating the statistics collector process and writing these stats to disk.\r\n>> PostgreSQL 15 also revokes the `CREATE` permission from all users except a\r\n>> database owner from the `public` (or default) schema.\r\n> \r\n> It's a bit weird to lump those two in the same paragraph, but ok.\r\n\r\nI split this up for now, but I may change this section a bit.\r\n\r\n> However, I think the \"and writing these stats to disk.\" might not be\r\n> very clear to people not familiar with the feature, they might think\r\n> writing stats to disk is part of the new feature. So I propose \"as well\r\n> as writing theses stats to disk\" instead or something.\r\n\r\nAgree that this wasn't clear. I changed the language to \"both\". I'll \r\nattach the updated draft in the next reply.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 1 Sep 2022 20:59:06 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On 8/31/22 8:15 PM, Justin Pryzby wrote:\r\n> On Tue, Aug 30, 2022 at 03:58:48PM -0400, Jonathan S. Katz wrote:\r\n> \r\n>> In this latest release, PostgreSQL improves on its in-memory and on-disk sorting\r\n>> algorithms, with benchmarks showing speedups of 25% - 400% based on sort types.\r\n> \r\n> rather than \"based on\": \"depending on the data types being sorted\"\r\n\r\nI followed this suggestion but modified it in a different way. Still, \r\nI'm not sure if this conveys the full breadth of the features. I wonder \r\nif we should expand on the sorting changes more?\r\n\r\n> \r\n>> Building on work from the previous PostgreSQL release for allowing async remote\r\n>> queries, the PostgreSQL foreign data wrapper, `postgres_fdw`, can now commit\r\n>> transactions in parallel.\r\n> \r\n> asynchronous\r\n\r\nModified.\r\n\r\n>> benefits for certain workloads. On certain operating systems, PostgreSQL 15\r\n> \r\n> s/certain/some ?\r\n\r\nI think \"some\" sounds like \"it may or may not work\" vs. \"certain\" says \r\nthat it \"can work with tuning\". We still want to ensure that people are \r\nexcited about the feature and try it out.\r\n\r\n>> supports the ability to prefetch WAL file contents and speed up recovery times.\r\n> \r\n>> PostgreSQL's built-in backup command, `pg_basebackup`, now supports server-side\r\n>> compression of backup files with a choice of gzip, LZ4, and zstd.\r\n> \r\n> remove \"server-side\", since they're also supported on the client-side.\r\n\r\nThe server-side feature is the new piece for PG15. Gzip was already \r\nsupported on the client-side; lz4 + zstd are new for PG15.\r\n\r\nI think the server-side compression is the part to call out, as you can \r\nbenefit from that existing on the server prior to transferring the \r\nbackup elsewhere, and have that network savings. 
However, happy to be \r\ntold that we should discuss both server/client compression in the \r\nannouncement.\r\n\r\n>> the caller, not the view creator. This option, called `security_invoker`, adds\r\n>> an additional layer of protection to ensure view callers have the correct\r\n>> permissions for working with the underlying data.\r\n> \r\n> ensure *that ?\r\n\r\nFixed.\r\n\r\n>> alter server-level configuration parameters. Additionally, users can now search\r\n>> for information about configuration using the `\\dconfig` command from the `psql`\r\n>> command-line tool.\r\n> \r\n> rather than \"search for information about configuration\", say \"list\r\n> configuration information\" ?\r\n\r\nYou can search though -- it supports wildcards. I understand the point \r\n-- really you are still listing it out, but I think one of the neat \r\nthings is I can fairly easily search for the name of a config parameter \r\nfrom the CLI, even if I can't remember the correct or full name of it.\r\n\r\n>> PostgreSQL server-level statistics are now collected in shared memory,\r\n>> eliminating the statistics collector process and writing these stats to disk.\r\n> \r\n> and *the need to periodically* write these stats to disk\r\n\r\nModified this. However, does this appropriately capture the performance \r\nbenefit of having the server-level stats collection modified in this \r\nway? Does it capture all the benefits?\r\n\r\nNew version attached.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 1 Sep 2022 21:10:39 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On 9/1/22 9:10 PM, Jonathan S. Katz wrote:\r\n\r\n> New version attached.\r\n\r\nHere is a (penultimate?) draft that includes URLs. Please provide any \r\nadditional feedback no later than 2022-09-14 0:00 AoE. After that, we \r\nwill begin the translation process.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 12 Sep 2022 12:52:49 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On 12.09.22 18:52, Jonathan S. Katz wrote:\n> On 9/1/22 9:10 PM, Jonathan S. Katz wrote:\n> \n>> New version attached.\n> \n> Here is a (penultimate?) draft that includes URLs. Please provide any \n> additional feedback no later than 2022-09-14 0:00 AoE. After that, we \n> will begin the translation process.\n\n<ownhorn>\nIs the ability to use ICU for the default collation worth mentioning?\n</ownhorn>\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 20:01:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On 9/12/22 2:01 PM, Peter Eisentraut wrote:\r\n> On 12.09.22 18:52, Jonathan S. Katz wrote:\r\n>> On 9/1/22 9:10 PM, Jonathan S. Katz wrote:\r\n>>\r\n>>> New version attached.\r\n>>\r\n>> Here is a (penultimate?) draft that includes URLs. Please provide any \r\n>> additional feedback no later than 2022-09-14 0:00 AoE. After that, we \r\n>> will begin the translation process.\r\n> \r\n> <ownhorn>\r\n> Is the ability to use ICU for the default collation worth mentioning?\r\n> </ownhorn>\r\n\r\n<facepalm />\r\n\r\nYes -- it is. I had noted to myself to add that in for a variety of \r\nreasons, not the least of which some of the reported issues around glibc \r\ncollations. And then I forgot to add it.\r\n\r\nI'll include it in the next draft.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 12 Sep 2022 15:26:17 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 12:52:49PM -0400, Jonathan S. Katz wrote:\n> sorted. Using `row_number()`, `rank()`, and `count()` as\n> [window functions](https://www.postgresql.org/docs/15/functions-window.html)\n> also have performance benefits in PostgreSQL 15, and queries using\n\nRemove \"using\" ?\n\n> certain operating systems, PostgreSQL 15 supports the ability to\n> [prefetch WAL file contents](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)\n> and speed up recovery times. PostgreSQL's built-in backup command,\n\ns/and/to/ ?\n\n> [`pg_basebackup`](https://www.postgresql.org/docs/15/app-pgbasebackup.html), now\n> supports server-side compression of backup files with a choice of gzip, LZ4, and\n\ns/with/and/ ?\n\n> PostgreSQL 15 includes the SQL standard\n> [`MERGE`](https://www.postgresql.org/docs/15/sql-merge.html) command.\n> `MERGE` lets you write conditional SQL statements that include `INSERT`,\n> `UPDATE`, and `DELETE` actions within a single statement.\n\nmaybe \"include combinations of INSERT, UPDATE and DELETE ...\"\n\n> PostgreSQL\n> [server-level statistics](https://www.postgresql.org/docs/15/monitoring-stats.html)\n> are now collected in shared memory, eliminating both the statistics collector\n> process and periodically writing this data to disk.\n\nand *the need to* periodically write this data to disk ?\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 12 Sep 2022 14:34:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On 9/12/22 3:34 PM, Justin Pryzby wrote:\r\n> On Mon, Sep 12, 2022 at 12:52:49PM -0400, Jonathan S. Katz wrote:\r\n>> sorted. Using `row_number()`, `rank()`, and `count()` as\r\n>> [window functions](https://www.postgresql.org/docs/15/functions-window.html)\r\n>> also have performance benefits in PostgreSQL 15, and queries using\r\n> \r\n> Remove \"using\" ?\r\n\r\nI don't think that's the correct change, but I broke up the sentences. \r\nI'll post the changes shortly.\r\n\r\n>> certain operating systems, PostgreSQL 15 supports the ability to\r\n>> [prefetch WAL file contents](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)\r\n>> and speed up recovery times. PostgreSQL's built-in backup command,\r\n> \r\n> s/and/to/ ?\r\n\r\nI suppose that is the end goal of the feature, in which case \"to\" would \r\nbe correct. I made that adjustment.\r\n\r\n(I did see tests where recovery time did *not* speed up when prefetching \r\nwas used, though it may have been due to the knob settings on the tests).\r\n\r\n>> [`pg_basebackup`](https://www.postgresql.org/docs/15/app-pgbasebackup.html), now\r\n>> supports server-side compression of backup files with a choice of gzip, LZ4, and\r\n> \r\n> s/with/and/ ?\r\n\r\nI don't think that is correct.\r\n\r\n>> PostgreSQL 15 includes the SQL standard\r\n>> [`MERGE`](https://www.postgresql.org/docs/15/sql-merge.html) command.\r\n>> `MERGE` lets you write conditional SQL statements that include `INSERT`,\r\n>> `UPDATE`, and `DELETE` actions within a single statement.\r\n> \r\n> maybe \"include combinations of INSERT, UPDATE and DELETE ...\"\r\n\r\nAdded \"can\".\r\n\r\n> \r\n>> PostgreSQL\r\n>> [server-level statistics](https://www.postgresql.org/docs/15/monitoring-stats.html)\r\n>> are now collected in shared memory, eliminating both the statistics collector\r\n>> process and periodically writing this data to disk.\r\n> \r\n> and *the need to* periodically write this data to disk 
?\r\n\r\nI don't see what that adds other than extra words, but I can be \r\nconvinced otherwise.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 12 Sep 2022 15:52:28 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On Tue, 13 Sept 2022 at 04:53, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Here is a (penultimate?) draft that includes URLs. Please provide any\n> additional feedback no later than 2022-09-14 0:00 AoE. After that, we\n> will begin the translation process.\n\nThanks for drafting these up.\n\nI noticed a couple of things, one pretty minor and one that might need\na bit of a reword.\n\n> sorted. Using `row_number()`, `rank()`, and `count()` as\n\ndense_rank() is affected by that change too. Maybe it was just omitted\nfor brevity. I'm ok if it was, but just wanted to make sure it wasn't\nan accidental omission.\n\n> certain operating systems, PostgreSQL 15 supports the ability to\n> [prefetch WAL file contents](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)\n\nI think \"ability to prefetch WAL file contents\" is not really an\naccurate way to describe this feature. What the prefetcher does is\nprefetch the pages of tables, indexes and materialized views which are\nreferenced by WAL, so that when the recovery process comes along\nlater, the pages of these relations which are being changed by the WAL\nrecord are more likely to be in memory so that the recovery process is\nless likely to have to load the referenced pages from disk.\n\nPerhaps the text should read:\n\n\"PostgreSQL 15 adds support for prefetching pages referenced in [WAL].\"\n\nDavid\n\n\n",
"msg_date": "Tue, 13 Sep 2022 08:17:27 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On 9/12/22 4:17 PM, David Rowley wrote:\r\n> On Tue, 13 Sept 2022 at 04:53, Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>> Here is a (penultimate?) draft that includes URLs. Please provide any\r\n>> additional feedback no later than 2022-09-14 0:00 AoE. After that, we\r\n>> will begin the translation process.\r\n> \r\n> Thanks for drafting these up.\r\n> \r\n> I noticed a couple of things, one pretty minor and one that might need\r\n> a bit of a reword.\r\n> \r\n>> sorted. Using `row_number()`, `rank()`, and `count()` as\r\n> \r\n> dense_rank() is affected by that change too. Maybe it was just omitted\r\n> for brevity. I'm ok if it was, but just wanted to make sure it wasn't\r\n> an accidental omission.\r\n\r\nIt would be an accidental omission. It's also omitted from the release \r\nnotes:\r\n\r\n\"Improve the performance of window functions that use row_number(), \r\nrank(), and count() (David Rowley)\"[1]\r\n\r\nso we should add it there too :)\r\n\r\n>> certain operating systems, PostgreSQL 15 supports the ability to\r\n>> [prefetch WAL file contents](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)\r\n> \r\n> I think \"ability to prefetch WAL file contents\" is not really an\r\n> accurate way to describe this feature. What the prefetcher does is\r\n> prefetch the pages of tables, indexes and materialized views which are\r\n> referenced by WAL, so that when the recovery process comes along\r\n> later, the pages of these relations which are being changed by the WAL\r\n> record are more likely to be in memory so that the recovery process is\r\n> less likely to have to load the referenced pages from disk.\r\n\r\nThanks. 
That's a really crisp explanation :)\r\n\r\n> Perhaps the text should read:\r\n> \r\n> \"PostgreSQL 15 adds support for prefetching pages referenced in [WAL].\"\r\n\r\nWhat do you think of this (copied from the attached file)\r\n\r\nOn certain operating systems, PostgreSQL 15 adds support to [prefetch \r\npages referenced in \r\nWAL](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH) \r\nto help speed up recovery times.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/docs/15/release-15.html",
"msg_date": "Mon, 12 Sep 2022 16:56:28 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On Tue, 13 Sept 2022 at 08:56, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> What do you think of this (copied from the attached file)\n>\n> On certain operating systems, PostgreSQL 15 adds support to [prefetch\n> pages referenced in\n> WAL](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)\n> to help speed up recovery times.\n\nLooks good. Thanks for adjusting that.\n\nDavid\n\n\n",
"msg_date": "Tue, 13 Sep 2022 09:02:40 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 08:17:27AM +1200, David Rowley wrote:\n> > certain operating systems, PostgreSQL 15 supports the ability to\n> > [prefetch WAL file contents](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)\n> \n> I think \"ability to prefetch WAL file contents\" is not really an\n> accurate way to describe this feature. What the prefetcher does is\n> prefetch the pages of tables, indexes and materialized views which are\n> referenced by WAL, so that when the recovery process comes along\n> later, the pages of these relations which are being changed by the WAL\n> record are more likely to be in memory so that the recovery process is\n> less likely to have to load the referenced pages from disk.\n> \n> Perhaps the text should read:\n> \n> \"PostgreSQL 15 adds support for prefetching pages referenced in [WAL].\"\n\nRelated:\nhttps://www.postgresql.org/message-id/20220904075450.6g4nm4hralyw3tab@alvherre.pgsql\n\n\n",
"msg_date": "Mon, 12 Sep 2022 16:09:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 release announcement draft"
}
] |
[
{
"msg_contents": "I was looking at F263 from the SQL standard, Comma-separated predicates in\nsimple CASE expression, and thinking if we could support this within the\nframework we already have at a minimal added cost. The attached sketch diff\nturns each predicate in the list into a CaseWhen node and uses the location\nfrom parsing for grouping in errorhandling for searched case.\n\nIs this a viable approach or am I missing something obvious?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Wed, 31 Aug 2022 00:12:26 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Comma-separated predicates in simple CASE expressions (f263)"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I was looking at F263 from the SQL standard, Comma-separated predicates in\n> simple CASE expression, and thinking if we could support this within the\n> framework we already have at a minimal added cost. The attached sketch diff\n> turns each predicate in the list into a CaseWhen node and uses the location\n> from parsing for grouping in errorhandling for searched case.\n\n> Is this a viable approach or am I missing something obvious?\n\nI don't particularly like duplicating the THEN clause multiple times.\nI think if we're going to do this we should do it right, and that\nmeans a substantially larger patch to propagate the notion of multiple\ncomparison values all the way down.\n\nI also don't care for the bit in transformCaseExpr where you seem\nto be relying on subexpression location fields to make semantic\ndecisions. Surely there's a better way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 18:20:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Comma-separated predicates in simple CASE expressions (f263)"
},
{
"msg_contents": "> On 31 Aug 2022, at 00:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> I was looking at F263 from the SQL standard, Comma-separated predicates in\n>> simple CASE expression, and thinking if we could support this within the\n>> framework we already have at a minimal added cost. The attached sketch diff\n>> turns each predicate in the list into a CaseWhen node and uses the location\n>> from parsing for grouping in errorhandling for searched case.\n> \n>> Is this a viable approach or am I missing something obvious?\n\nThanks for looking!\n\n> I don't particularly like duplicating the THEN clause multiple times.\n> I think if we're going to do this we should do it right, and that\n> means a substantially larger patch to propagate the notion of multiple\n> comparison values all the way down.\n\nFair enough, I think that's doable without splitting the simple and searched\ncase in the parser which I think would be a good thing to avoid. I'll take a\nstab at it.\n\n> I also don't care for the bit in transformCaseExpr where you seem\n> to be relying on subexpression location fields to make semantic\n> decisions. Surely there's a better way.\n\nIf we group the predicates such a single node contains the full list then we'll\nhave all the info we need at that point.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 00:29:41 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Comma-separated predicates in simple CASE expressions (f263)"
}
] |
[
{
"msg_contents": "Hi,\n\nOne of the most annoying things in the planner for me is unnesting of \ncorrelated queries [1]. A number papers on this subject were published \nstarting 1980s, but only trivial optimizations exists in the Core. It \nmeans a lack of performance, especially when we use foreign tables in \nsubquery.\nIn the patch I'm trying to propose a sort of sketch of solution.\n\nBefore flattening procedure we just look through the quals of subquery, \npull to the upper level OpExpr's containing variables from the upper \nrelation and replace their positions in the quals with true expression.\nFurther, the flattening machinery works as usual.\n\nThis patch is dedicated to simplest variant of correlated queries - \nwithout aggregate functions in the target list. It passes regression \ntests and contains some additional tests to demonstrate achievements.\n\nI'd like to get critics on the approach.\n\n[1] Kim, Won. “On optimizing an SQL-like nested query.” ACM Trans. \nDatabase Syst. 7 (1982): 443-469.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Wed, 31 Aug 2022 11:35:09 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "[POC] Allow flattening of subquery with a link to upper query"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 2:35 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> Before flattening procedure we just look through the quals of subquery,\n> pull to the upper level OpExpr's containing variables from the upper\n> relation and replace their positions in the quals with true expression.\n> Further, the flattening machinery works as usual.\n\n\nHmm, I'm not sure this patch works correctly in all cases. It seems to\nme this patch pulls up the subquery without checking the constraints\nimposed by lateral references. If its quals contain any lateral\nreferences to rels outside a higher outer join, we would need to\npostpone quals from below an outer join to above it, which is probably\nincorrect. As an example, consider\n\n select * from a left join b on b.i in\n (select c.i from c where c.j = a.j);\n\nIf we pull up the ANY SubLink into parent query and pull up its qual\ninto upper level, as what the patch does, then its qual 'c.j = a.j'\nwould have to be postponed past the B/C semi join, which is totally\nwrong. Doing this would firstly trigger the assertion failure in\ndistribute_qual_to_rels\n\n Assert(root->hasLateralRTEs); /* shouldn't happen otherwise */\n Assert(jointype == JOIN_INNER); /* mustn't postpone past outer join */\n\nEven if we ignore these assertion checks, in the final plan we would\nhave to access the RHS of the B/C semi join, i.e. C, to evaluate qual\n'c.j = a.j' at the join level of A/BC join, which is wrong.\n\nThanks\nRichard\n\nOn Wed, Aug 31, 2022 at 2:35 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote:\nBefore flattening procedure we just look through the quals of subquery, \npull to the upper level OpExpr's containing variables from the upper \nrelation and replace their positions in the quals with true expression.\nFurther, the flattening machinery works as usual. Hmm, I'm not sure this patch works correctly in all cases. 
It seems tome this patch pulls up the subquery without checking the constraintsimposed by lateral references. If its quals contain any lateralreferences to rels outside a higher outer join, we would need topostpone quals from below an outer join to above it, which is probablyincorrect. As an example, consider select * from a left join b on b.i in (select c.i from c where c.j = a.j);If we pull up the ANY SubLink into parent query and pull up its qualinto upper level, as what the patch does, then its qual 'c.j = a.j'would have to be postponed past the B/C semi join, which is totallywrong. Doing this would firstly trigger the assertion failure indistribute_qual_to_rels Assert(root->hasLateralRTEs); /* shouldn't happen otherwise */ Assert(jointype == JOIN_INNER); /* mustn't postpone past outer join */Even if we ignore these assertion checks, in the final plan we wouldhave to access the RHS of the B/C semi join, i.e. C, to evaluate qual'c.j = a.j' at the join level of A/BC join, which is wrong.ThanksRichard",
"msg_date": "Thu, 1 Sep 2022 20:24:32 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query"
},
{
"msg_contents": "On 9/1/22 17:24, Richard Guo wrote:\n> \n> On Wed, Aug 31, 2022 at 2:35 PM Andrey Lepikhov \n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> \n> Before flattening procedure we just look through the quals of subquery,\n> pull to the upper level OpExpr's containing variables from the upper\n> relation and replace their positions in the quals with true expression.\n> Further, the flattening machinery works as usual.\n> \n> Hmm, I'm not sure this patch works correctly in all cases. It seems to\n> me this patch pulls up the subquery without checking the constraints\n> imposed by lateral references. If its quals contain any lateral\n> references to rels outside a higher outer join, we would need to\n> postpone quals from below an outer join to above it, which is probably\n> incorrect.Yeah, it's not easy-to-solve problem. If I correctly understand the \ncode, to fix this problem we must implement the same logic, as \npull_up_subqueries (lowest_outer_join/safe_upper_varnos). It looks ugly. \nBut, more important, does this feature deserve such efforts/changes?\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 16:08:55 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 7:09 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> On 9/1/22 17:24, Richard Guo wrote:\n> > On Wed, Aug 31, 2022 at 2:35 PM Andrey Lepikhov\n> > <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> > Before flattening procedure we just look through the quals of\n> subquery,\n> > pull to the upper level OpExpr's containing variables from the upper\n> > relation and replace their positions in the quals with true\n> expression.\n> > Further, the flattening machinery works as usual.\n> >\n> > Hmm, I'm not sure this patch works correctly in all cases. It seems to\n> > me this patch pulls up the subquery without checking the constraints\n> > imposed by lateral references. If its quals contain any lateral\n> > references to rels outside a higher outer join, we would need to\n> > postpone quals from below an outer join to above it, which is probably\n> > incorrect.\n\n\n\n> Yeah, it's not easy-to-solve problem. If I correctly understand the\n> code, to fix this problem we must implement the same logic, as\n> pull_up_subqueries (lowest_outer_join/safe_upper_varnos).\n\n\nYeah, I think we'd have to consider the restrictions from lateral\nreferences to guarantee correctness when we pull up subqueries. We need\nto avoid the situation where quals need to be postponed past outer join.\n\nHowever, even if we have taken care of that, there may be other issues\nwith flattening direct-correlated ANY SubLink. 
The constraints imposed\nby LATERAL references may make it impossible for us to find any legal\njoin orders, as discussed in [1].\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs49cvkF9akbomz_fCCKS=D5TY=4KGHEQcfHPZCXS1GVhkA@mail.gmail.com\n\nThanks\nRichard\n\nOn Fri, Sep 2, 2022 at 7:09 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote:On 9/1/22 17:24, Richard Guo wrote:\n> On Wed, Aug 31, 2022 at 2:35 PM Andrey Lepikhov \n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> Before flattening procedure we just look through the quals of subquery,\n> pull to the upper level OpExpr's containing variables from the upper\n> relation and replace their positions in the quals with true expression.\n> Further, the flattening machinery works as usual.\n> \n> Hmm, I'm not sure this patch works correctly in all cases. It seems to\n> me this patch pulls up the subquery without checking the constraints\n> imposed by lateral references. If its quals contain any lateral\n> references to rels outside a higher outer join, we would need to\n> postpone quals from below an outer join to above it, which is probably\n> incorrect. Yeah, it's not easy-to-solve problem. If I correctly understand the \ncode, to fix this problem we must implement the same logic, as \npull_up_subqueries (lowest_outer_join/safe_upper_varnos). Yeah, I think we'd have to consider the restrictions from lateralreferences to guarantee correctness when we pull up subqueries. We needto avoid the situation where quals need to be postponed past outer join.However, even if we have taken care of that, there may be other issueswith flattening direct-correlated ANY SubLink. The constraints imposedby LATERAL references may make it impossible for us to find any legaljoin orders, as discussed in [1].[1] https://www.postgresql.org/message-id/CAMbWs49cvkF9akbomz_fCCKS=D5TY=4KGHEQcfHPZCXS1GVhkA@mail.gmail.comThanksRichard",
"msg_date": "Mon, 5 Sep 2022 15:22:36 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query"
},
{
"msg_contents": "On 9/5/22 12:22, Richard Guo wrote:\n> \n> On Fri, Sep 2, 2022 at 7:09 PM Andrey Lepikhov \n> Yeah, it's not easy-to-solve problem. If I correctly understand the\n> code, to fix this problem we must implement the same logic, as\n> pull_up_subqueries (lowest_outer_join/safe_upper_varnos). \n> \n> Yeah, I think we'd have to consider the restrictions from lateral\n> references to guarantee correctness when we pull up subqueries. We need\n> to avoid the situation where quals need to be postponed past outer join.\n> \n> However, even if we have taken care of that, there may be other issues\n> with flattening direct-correlated ANY SubLink. The constraints imposed\n> by LATERAL references may make it impossible for us to find any legal\n> join orders, as discussed in [1].\n> \n> [1] \n> https://www.postgresql.org/message-id/CAMbWs49cvkF9akbomz_fCCKS=D5TY=4KGHEQcfHPZCXS1GVhkA@mail.gmail.com <https://www.postgresql.org/message-id/CAMbWs49cvkF9akbomz_fCCKS=D5TY=4KGHEQcfHPZCXS1GVhkA@mail.gmail.com>\n\nThe problem you mentioned under this link is about ineffective query \nplan - as I understand it.\nThis is a problem, especially if we would think about more complex \npull-ups of subqueries - with aggregate functions in the target list.\nI think about that problem as about next step - we already have an \nexample - machinery of alternative plans. This problem may be solved in \nthis way, or by a GUC, as usual.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 5 Sep 2022 15:54:58 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query"
},
{
"msg_contents": "On 5/9/2022 12:22, Richard Guo wrote:\n> \n> On Fri, Sep 2, 2022 at 7:09 PM Andrey Lepikhov \n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> > Hmm, I'm not sure this patch works correctly in all cases. It\n> seems to\n> > me this patch pulls up the subquery without checking the constraints\n> > imposed by lateral references. If its quals contain any lateral\n> > references to rels outside a higher outer join, we would need to\n> > postpone quals from below an outer join to above it, which is\n> probably\n> > incorrect.\n> \n> Yeah, it's not easy-to-solve problem. If I correctly understand the\n> code, to fix this problem we must implement the same logic, as\n> pull_up_subqueries (lowest_outer_join/safe_upper_varnos). \n> \n> Yeah, I think we'd have to consider the restrictions from lateral\n> references to guarantee correctness when we pull up subqueries. We need\n> to avoid the situation where quals need to be postponed past outer join.\n> \n> However, even if we have taken care of that, there may be other issues\n> with flattening direct-correlated ANY SubLink. The constraints imposed\n> by LATERAL references may make it impossible for us to find any legal\n> join orders, as discussed in [1].\nTo resolve both issues, lower outer join passes through pull_sublinks_* \ninto flattening routine (see attachment).\nI've added these cases into subselect.sql\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Tue, 13 Sep 2022 16:40:37 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query"
},
{
"msg_contents": "On 9/13/22 16:40, Andrey Lepikhov wrote:\n> On 5/9/2022 12:22, Richard Guo wrote:\n>> On Fri, Sep 2, 2022 at 7:09 PM Andrey Lepikhov \n>> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> To resolve both issues, lower outer join passes through pull_sublinks_* \n> into flattening routine (see attachment).\n> I've added these cases into subselect.sql\nIn attachment - new version of the patch, rebased onto current master.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Tue, 4 Oct 2022 10:35:30 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query"
},
{
"msg_contents": "On 1/9/2022 19:24, Richard Guo wrote:\n> Even if we ignore these assertion checks, in the final plan we would\n> have to access the RHS of the B/C semi join, i.e. C, to evaluate qual\n> 'c.j = a.j' at the join level of A/BC join, which is wrong.\nHaving committed 9f13376396 recently, we did a lot of work in this area. \nBy applying regression tests from my last patch [1] to the master, I \ncompared these two implementations.\nAs I see, using the LATERAL trick allowed us to simplify the code \ndrastically. But because we know just a fact of the lateral link, not \nits place, in the master we do less when in the patch proposed in that \nthread. For example, having query:\n\nexplain (costs off)\nSELECT relname FROM pg_class c1\nWHERE relname = ANY (\n SELECT a.amname from pg_am a WHERE a.oid=c1.oid GROUP BY a.amname\n);\n\nWe see on master:\n Nested Loop\n -> Seq Scan on pg_class c1\n -> Subquery Scan on \"ANY_subquery\"\n Filter: (c1.relname = \"ANY_subquery\".amname)\n -> Group\n Group Key: a.amname\n -> Sort\n Sort Key: a.amname\n -> Seq Scan on pg_am a\n Filter: (oid = c1.oid)\n\nAnd with this patch:\n Hash Join\n Hash Cond: ((c1.relname = a.amname) AND (c1.oid = a.oid))\n -> Seq Scan on pg_class c1\n -> Hash\n -> HashAggregate\n Group Key: a.amname\n -> Seq Scan on pg_am a\n\nAlso, we attempted to fix links from a non-parent query block.\nSo, in my opinion, the reason for this patch still exists, and we can \ncontinue this work further, maybe elaborating on flattening LATERAL \nreferences - this needs some research.\n\n[1] \nhttps://www.postgresql.org/message-id/35c8a3e8-d080-dfa8-2be3-cf5fe702010a%40postgrespro.ru\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 16:57:27 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query"
},
{
"msg_contents": "On Tue, 20 Feb 2024 at 22:57, Andrei Lepikhov <a.lepikhov@postgrespro.ru> wrote:\n> explain (costs off)\n> SELECT relname FROM pg_class c1\n> WHERE relname = ANY (\n> SELECT a.amname from pg_am a WHERE a.oid=c1.oid GROUP BY a.amname\n> );\n>\n> We see on master:\n> Nested Loop\n> -> Seq Scan on pg_class c1\n> -> Subquery Scan on \"ANY_subquery\"\n> Filter: (c1.relname = \"ANY_subquery\".amname)\n> -> Group\n> Group Key: a.amname\n> -> Sort\n> Sort Key: a.amname\n> -> Seq Scan on pg_am a\n> Filter: (oid = c1.oid)\n>\n> And with this patch:\n> Hash Join\n> Hash Cond: ((c1.relname = a.amname) AND (c1.oid = a.oid))\n> -> Seq Scan on pg_class c1\n> -> Hash\n> -> HashAggregate\n> Group Key: a.amname\n> -> Seq Scan on pg_am a\n\nI've only glanced at the patch just so I could determine if you're\nmaking a cost-based decision and doing this transformation only if the\nde-correlation of the subquery is deemed the cheaper option. It looks\nlike since you're doing this in the same location that we do the other\nsemi / anti join transformations that there's no costing.\n\nI agree that it would be nice to teach the planner how to do this, but\nI think it just has to be a cost-based decision. Imagine how the\ntransformed query would perform of pg_am had a billion rows and\npg_class had 1 row. That's quite a costly hash table build to be\nprobing it just once.\n\nI didn't follow the patch, but there was a patch to push aggregate\nfunction evaluation down [1]. I imagine this has the same problem as\nif you just blindly pushed and aggregate function evaluation as deep\nas you could evaluate all the aggregate's parameters and group by vars\nthen you may end up aggregating far more than you need to as some join\ncould eliminate the majority of the groups. I think we'd need to come\nup with some way to have the planner consider these types of\noptimisations as alternatives to what happens today and only apply\nthem when we estimate that they're cheaper. 
Right now a Path has no\nability to describe that it's performed GROUP BY.\n\nDavid\n\n[1] https://commitfest.postgresql.org/46/4019/\n\n\n",
"msg_date": "Tue, 20 Feb 2024 23:43:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query"
},
{
"msg_contents": "On 20/2/2024 17:43, David Rowley wrote:\n> On Tue, 20 Feb 2024 at 22:57, Andrei Lepikhov <a.lepikhov@postgrespro.ru> wrote: \n> I agree that it would be nice to teach the planner how to do this, but\n> I think it just has to be a cost-based decision. Imagine how the\n> transformed query would perform of pg_am had a billion rows and\n> pg_class had 1 row. That's quite a costly hash table build to be\n> probing it just once.\nTrue, the origins of this work lie in foreign tables where such a query \ngenerates an even worse situation.\n\n> I didn't follow the patch, but there was a patch to push aggregate\n> function evaluation down [1]. I imagine this has the same problem as\n> if you just blindly pushed and aggregate function evaluation as deep\n> as you could evaluate all the aggregate's parameters and group by vars\n> then you may end up aggregating far more than you need to as some join\n> could eliminate the majority of the groups. I think we'd need to come\n> up with some way to have the planner consider these types of\n> optimisations as alternatives to what happens today and only apply\n> them when we estimate that they're cheaper. Right now a Path has no\n> ability to describe that it's performed GROUP BY.\nThanks for the link. We also ended up with the idea of an alternative \nsubtree (inspired by the approach of AlternativeSubplan). Here, we just \nexplain the current state of the pull-up sublink technique.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 18:01:21 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query"
}
]
[
{
"msg_contents": "Hi,\n\nThe commit\nhttps://github.com/postgres/postgres/commit/b17ff07aa3eb142d2cde2ea00e4a4e8f63686f96\nIntroduced the CopyStatistics function.\n\nTo do the work, CopyStatistics uses a less efficient function\nto update/insert tuples at catalog systems.\n\nThe comment at indexing.c says:\n\"Avoid using it for multiple tuples, since opening the indexes\n * and building the index info structures is moderately expensive.\n * (Use CatalogTupleInsertWithInfo in such cases.)\"\n\nSo inspired by the comment, changed in some fews places,\nthe CatalogInsert/CatalogUpdate to more efficient functions\nCatalogInsertWithInfo/CatalogUpdateWithInfo.\n\nWith quick tests, resulting in small performance.\n\nhead:\n\n1. REINDEX TABLE CONCURRENTLY pgbench_accounts;\nTime: 77,805 ms\nTime: 74,836 ms\nTime: 73,480 ms\n\n2. REINDEX TABLE CONCURRENTLY pgbench_tellers;\nTime: 22,260 ms\nTime: 22,205 ms\nTime: 21,008 ms\n\npatched:\n\n1. REINDEX TABLE CONCURRENTLY pgbench_accounts;\nTime: 65,048 ms\nTime: 61,853 ms\nTime: 61,119 ms\n\n2. REINDEX TABLE CONCURRENTLY pgbench_tellers;\nTime: 15,999 ms\nTime: 15,961 ms\nTime: 13,264 ms\n\nThere are other places that this could be useful,\nbut a careful analysis is necessary.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 31 Aug 2022 08:16:55 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "At Wed, 31 Aug 2022 08:16:55 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi,\n> \n> The commit\n> https://github.com/postgres/postgres/commit/b17ff07aa3eb142d2cde2ea00e4a4e8f63686f96\n> Introduced the CopyStatistics function.\n> \n> To do the work, CopyStatistics uses a less efficient function\n> to update/insert tuples at catalog systems.\n> \n> The comment at indexing.c says:\n> \"Avoid using it for multiple tuples, since opening the indexes\n> * and building the index info structures is moderately expensive.\n> * (Use CatalogTupleInsertWithInfo in such cases.)\"\n> \n> So inspired by the comment, changed in some fews places,\n> the CatalogInsert/CatalogUpdate to more efficient functions\n> CatalogInsertWithInfo/CatalogUpdateWithInfo.\n> \n> With quick tests, resulting in small performance.\n\nConsidering the whole operation usually takes far longer time, I'm not\nsure that amount of performance gain is useful or not, but I like the\nchange as a matter of tidiness or as example for later codes.\n\n> There are other places that this could be useful,\n> but a careful analysis is necessary.\n\nWhat kind of concern do have in your mind?\n\nBy the way, there is another similar function\nCatalogTupleMultiInsertWithInfo() which would be more time-efficient\n(but not space-efficient), which is used in InsertPgAttributeTuples. I\ndon't see a clear criteria of choosing which one of the two, though.\n\nI think the overhead of catalog index open is significant when any\nother time-consuming tasks are not involved in the whole operation.\nIn that sense, in term of performance, rather storeOperations and\nstorePrecedures (called under DefineOpCalss) might get more benefit\nfrom that if disregarding the rareness of the command being used..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 01 Sep 2022 10:12:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "Em qua., 31 de ago. de 2022 às 22:12, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> At Wed, 31 Aug 2022 08:16:55 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > Hi,\n> >\n> > The commit\n> >\n> https://github.com/postgres/postgres/commit/b17ff07aa3eb142d2cde2ea00e4a4e8f63686f96\n> > Introduced the CopyStatistics function.\n> >\n> > To do the work, CopyStatistics uses a less efficient function\n> > to update/insert tuples at catalog systems.\n> >\n> > The comment at indexing.c says:\n> > \"Avoid using it for multiple tuples, since opening the indexes\n> > * and building the index info structures is moderately expensive.\n> > * (Use CatalogTupleInsertWithInfo in such cases.)\"\n> >\n> > So inspired by the comment, changed in some fews places,\n> > the CatalogInsert/CatalogUpdate to more efficient functions\n> > CatalogInsertWithInfo/CatalogUpdateWithInfo.\n> >\n> > With quick tests, resulting in small performance.\n>\nHi,\nThanks for taking a look at this.\n\n\n>\n> Considering the whole operation usually takes far longer time, I'm not\n> sure that amount of performance gain is useful or not, but I like the\n> change as a matter of tidiness or as example for later codes.\n>\nYeah, this serves as an example for future codes.\n\n\n> > There are other places that this could be useful,\n> > but a careful analysis is necessary.\n>\n> What kind of concern do have in your mind?\n>\nCode Bloat.\n3 more lines are required per call (CatalogTupleInsert/CatalogTupleUpdate).\nHowever not all code paths are reachable.\nThe ideal typical case would be CopyStatistics, I think.\nWith none or at least one filter in tuples loop.\nThe cost to call CatalogOpenIndexes unconditionally, should be considered.\n\n\n>\n> By the way, there is another similar function\n> CatalogTupleMultiInsertWithInfo() which would be more time-efficient\n> (but not space-efficient), which is used in InsertPgAttributeTuples. 
I\n> don't see a clear criteria of choosing which one of the two, though.\n>\n> I don't think CatalogTupleMultiInsertWithInfo would be useful in these\ncases reported here.\nThe cost of building the slots I think would be unfeasible and would add\nunnecessary complexity.\n\n\n> I think the overhead of catalog index open is significant when any\n> other time-consuming tasks are not involved in the whole operation.\n> In that sense, in term of performance, rather storeOperations and\n> storePrecedures (called under DefineOpCalss) might get more benefit\n> from that if disregarding the rareness of the command being used..\n>\n> Yeah, storeOperations and storePrecedures are good candidates.\nLet's wait for the patch to be accepted and committed, so we can try to\nchange it.\n\nI will create a CF entry.\n\nregards,\nRanier Vilela\n\nEm qua., 31 de ago. de 2022 às 22:12, Kyotaro Horiguchi <horikyota.ntt@gmail.com> escreveu:At Wed, 31 Aug 2022 08:16:55 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi,\n> \n> The commit\n> https://github.com/postgres/postgres/commit/b17ff07aa3eb142d2cde2ea00e4a4e8f63686f96\n> Introduced the CopyStatistics function.\n> \n> To do the work, CopyStatistics uses a less efficient function\n> to update/insert tuples at catalog systems.\n> \n> The comment at indexing.c says:\n> \"Avoid using it for multiple tuples, since opening the indexes\n> * and building the index info structures is moderately expensive.\n> * (Use CatalogTupleInsertWithInfo in such cases.)\"\n> \n> So inspired by the comment, changed in some fews places,\n> the CatalogInsert/CatalogUpdate to more efficient functions\n> CatalogInsertWithInfo/CatalogUpdateWithInfo.\n> \n> With quick tests, resulting in small performance.Hi,Thanks for taking a look at this. 
\n\nConsidering the whole operation usually takes far longer time, I'm not\nsure that amount of performance gain is useful or not, but I like the\nchange as a matter of tidiness or as example for later codes.Yeah, this serves as an example for future codes. \n\n> There are other places that this could be useful,\n> but a careful analysis is necessary.\n\nWhat kind of concern do have in your mind?Code Bloat.3 more lines are required per call \n(CatalogTupleInsert/CatalogTupleUpdate).However not all code paths are reachable.The ideal typical case would be CopyStatistics, I think.With none or at least one filter in tuples loop.The cost to call CatalogOpenIndexes unconditionally, should be considered. \n\nBy the way, there is another similar function\nCatalogTupleMultiInsertWithInfo() which would be more time-efficient\n(but not space-efficient), which is used in InsertPgAttributeTuples. I\ndon't see a clear criteria of choosing which one of the two, though.\nI don't think CatalogTupleMultiInsertWithInfo would be useful in these cases reported here.The cost of building the slots I think would be unfeasible and would add unnecessary complexity. \nI think the overhead of catalog index open is significant when any\nother time-consuming tasks are not involved in the whole operation.\nIn that sense, in term of performance, rather storeOperations and\nstorePrecedures (called under DefineOpCalss) might get more benefit\nfrom that if disregarding the rareness of the command being used..\nYeah, \nstoreOperations and storePrecedures are good candidates.\n\nLet's wait for the patch to be accepted and committed, so we can try to change it. I will create a CF entry.regards,Ranier Vilela",
"msg_date": "Thu, 1 Sep 2022 08:42:15 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "On Thu, Sep 01, 2022 at 08:42:15AM -0300, Ranier Vilela wrote:\n> Let's wait for the patch to be accepted and committed, so we can try to\n> change it.\n\nFWIW, I think that this switch is a good idea for cases where we\npotentially update a bunch of tuples, especially based on what\nCatalogTupleInsert() tells in its top comment. Each code path updated\nhere needs a performance check to see if that's noticeable enough, but\nI can get behind the one of CopyStatistics(), at least.\n\nEnumValuesCreate() would matter less as this would require a large set\nof values in an enum, but perhaps ORMs would care and that should be\nmeasurable. update_attstats() should lead to a measurable difference\nwith a relation that has a bunch of attributes with few tuples.\nDefineTSConfiguration() is less of an issue, still fine to change.\nAddRoleMems() should be equally measurable with a large DDL. As a\nwhole, this looks pretty sane to me and a good idea to move on with.\n\nI still need to check properly the code paths changed here, of\ncourse..\n-- \nMichael",
"msg_date": "Thu, 10 Nov 2022 17:16:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "Em qui., 10 de nov. de 2022 às 05:16, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Thu, Sep 01, 2022 at 08:42:15AM -0300, Ranier Vilela wrote:\n> > Let's wait for the patch to be accepted and committed, so we can try to\n> > change it.\n>\n> FWIW, I think that this switch is a good idea for cases where we\n> potentially update a bunch of tuples, especially based on what\n> CatalogTupleInsert() tells in its top comment.\n\nThat's the idea.\n\n\n> Each code path updated\n> here needs a performance check to see if that's noticeable enough, but\n> I can get behind the one of CopyStatistics(), at least.\n>\nFor CopyStatistics() have performance checks.\n\n\n>\n> EnumValuesCreate() would matter less as this would require a large set\n> of values in an enum, but perhaps ORMs would care and that should be\n> measurable.\n\nHave a list_length call, for a number of vals.\nFor 2 or more vals, it is already worth it, since\nCatalogOpenIndexes/CatalogCloseIndexes will be called for each val.\n\n\n\n> update_attstats() should lead to a measurable difference\n> with a relation that has a bunch of attributes with few tuples.\n>\nSame here.\nFor 2 or more attributes, it is already worth it, since\nCatalogOpenIndexes/CatalogCloseIndexes will be called for each.\n\nDefineTSConfiguration() is less of an issue, still fine to change.\n>\nOk.\n\nAddRoleMems() should be equally measurable with a large DDL. As a\n> whole, this looks pretty sane to me and a good idea to move on with.\n>\nOne filter, only.\n\nFor all these functions, the only case that would possibly have no effect\nwould be in the case of changing a single tuple, in which case there would\nbe only one call CatalogOpenIndexes/CatalogCloseIndexes for both paths.\n\n\n> I still need to check properly the code paths changed here, of\n> course..\n>\nAt least, the patch still applies properly.\n\nregards,\nRanier Vilela\n\nEm qui., 10 de nov. 
de 2022 às 05:16, Michael Paquier <michael@paquier.xyz> escreveu:On Thu, Sep 01, 2022 at 08:42:15AM -0300, Ranier Vilela wrote:\n> Let's wait for the patch to be accepted and committed, so we can try to\n> change it.\n\nFWIW, I think that this switch is a good idea for cases where we\npotentially update a bunch of tuples, especially based on what\nCatalogTupleInsert() tells in its top comment. That's the idea. Each code path updated\nhere needs a performance check to see if that's noticeable enough, but\nI can get behind the one of CopyStatistics(), at least.For CopyStatistics() have performance checks. \n\nEnumValuesCreate() would matter less as this would require a large set\nof values in an enum, but perhaps ORMs would care and that should be\nmeasurable. Have a list_length call, for a number of vals.For 2 or more vals, it is already worth it, since CatalogOpenIndexes/CatalogCloseIndexes will be called for each val. update_attstats() should lead to a measurable difference\nwith a relation that has a bunch of attributes with few tuples.Same here.\nFor 2 or more attributes, it is already worth it, since CatalogOpenIndexes/CatalogCloseIndexes will be called for each.\nDefineTSConfiguration() is less of an issue, still fine to change.Ok. \nAddRoleMems() should be equally measurable with a large DDL. As a\nwhole, this looks pretty sane to me and a good idea to move on with.One filter, only.For all these functions, the only case that would possibly have no effect would be in the case of changing a single tuple, in which case there would be only one call CatalogOpenIndexes/CatalogCloseIndexes for both paths.\n\nI still need to check properly the code paths changed here, of\ncourse..At least, the patch still applies properly.regards,Ranier Vilela",
"msg_date": "Thu, 10 Nov 2022 08:56:25 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 08:56:25AM -0300, Ranier Vilela wrote:\n> For CopyStatistics() have performance checks.\n\nYou are not giving all the details of your tests, though, so I had a\nlook with some of my stuff using the attached set of SQL functions\n(create_function.sql) to create a bunch of indexes with a maximum\nnumber of expressions, as of:\nselect create_table_cols('tab', 32);\nselect create_index_multi_exprs('ind', 400, 'tab', 32);\ninsert into tab values (1);\nanalyze tab; -- 12.8k~ pg_statistic records\n\nOn HEAD, a REINDEX CONCURRENTLY for the table 'tab' takes 1550ms on my\nlaptop with an average of 10 runs. The patch impacts the runtime with\na single session, making the execution down to 1480ms as per an effect\nof the maximum number of attributes on an index being 32. There may\nbe some noise, but there is a trend, and some perf profiles confirm\nthe same with CopyStatistics(). My case is a bit extreme, of course,\nstill that's something.\n\nAnyway, while reviewing this code, it occured to me that we could do\neven better than this proposal once we switch to\nCatalogTuplesMultiInsertWithInfo() for the data insertion.\n\nThis would reduce more the operation overhead by switching to multi\nINSERTs rather than 1 INSERT for each index attribute with tuples\nstored in a set of TupleTableSlots, meaning 1 WAL record rather than N\nrecords. The approach would be similar to what you do for\ndependencies, see for example recordMultipleDependencies() when it\ncomes to the number of slots used etc.\n--\nMichael",
"msg_date": "Fri, 11 Nov 2022 13:53:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "Em sex., 11 de nov. de 2022 às 01:54, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Thu, Nov 10, 2022 at 08:56:25AM -0300, Ranier Vilela wrote:\n> > For CopyStatistics() have performance checks.\n>\n> You are not giving all the details of your tests, though,\n\nWindows 10 64 bits\nSSD 256 GB\n\npgbench -i\npgbench_accounts;\npgbench_tellers;\n\nSimple test, based on tables created by pgbench.\n\n\n> so I had a\n> look with some of my stuff using the attached set of SQL functions\n> (create_function.sql) to create a bunch of indexes with a maximum\n> number of expressions, as of:\n> select create_table_cols('tab', 32);\n> select create_index_multi_exprs('ind', 400, 'tab', 32);\n> insert into tab values (1);\n> analyze tab; -- 12.8k~ pg_statistic records\n>\n> On HEAD, a REINDEX CONCURRENTLY for the table 'tab' takes 1550ms on my\n> laptop with an average of 10 runs. The patch impacts the runtime with\n> a single session, making the execution down to 1480ms as per an effect\n> of the maximum number of attributes on an index being 32. There may\n> be some noise, but there is a trend, and some perf profiles confirm\n> the same with CopyStatistics(). My case is a bit extreme, of course,\n> still that's something.\n>\n> Anyway, while reviewing this code, it occured to me that we could do\n> even better than this proposal once we switch to\n> CatalogTuplesMultiInsertWithInfo() for the data insertion.\n>\n\n> This would reduce more the operation overhead by switching to multi\n> INSERTs rather than 1 INSERT for each index attribute with tuples\n> stored in a set of TupleTableSlots, meaning 1 WAL record rather than N\n> records. 
The approach would be similar to what you do for\n> dependencies, see for example recordMultipleDependencies() when it\n> comes to the number of slots used etc.\n>\n\nI think complexity doesn't pay off.\nFor example, CopyStatistics not knowing how many tuples will be processed.\nIMHO, this step is right now.\nCatalogTupleInsertWithInfo offers considerable improvement without\nintroducing bugs and maintenance issues.\n\nregards,\nRanier Vilela\n\nEm sex., 11 de nov. de 2022 às 01:54, Michael Paquier <michael@paquier.xyz> escreveu:On Thu, Nov 10, 2022 at 08:56:25AM -0300, Ranier Vilela wrote:\n> For CopyStatistics() have performance checks.\n\nYou are not giving all the details of your tests, though, Windows 10 64 bitsSSD 256 GBpgbench -i \npgbench_accounts;\npgbench_tellers;\n\nSimple test, based on tables created by pgbench.\n\n so I had a\nlook with some of my stuff using the attached set of SQL functions\n(create_function.sql) to create a bunch of indexes with a maximum\nnumber of expressions, as of:\nselect create_table_cols('tab', 32);\nselect create_index_multi_exprs('ind', 400, 'tab', 32);\ninsert into tab values (1);\nanalyze tab; -- 12.8k~ pg_statistic records\n\nOn HEAD, a REINDEX CONCURRENTLY for the table 'tab' takes 1550ms on my\nlaptop with an average of 10 runs. The patch impacts the runtime with\na single session, making the execution down to 1480ms as per an effect\nof the maximum number of attributes on an index being 32. There may\nbe some noise, but there is a trend, and some perf profiles confirm\nthe same with CopyStatistics(). 
My case is a bit extreme, of course,\nstill that's something.\n\nAnyway, while reviewing this code, it occured to me that we could do\neven better than this proposal once we switch to\nCatalogTuplesMultiInsertWithInfo() for the data insertion.\n\nThis would reduce more the operation overhead by switching to multi\nINSERTs rather than 1 INSERT for each index attribute with tuples\nstored in a set of TupleTableSlots, meaning 1 WAL record rather than N\nrecords. The approach would be similar to what you do for\ndependencies, see for example recordMultipleDependencies() when it\ncomes to the number of slots used etc.\nI think complexity doesn't pay off.For example, CopyStatistics not knowing how many tuples will be processed.IMHO, this step is right now. CatalogTupleInsertWithInfo offers considerable improvement without introducing bugs and maintenance issues.regards,Ranier Vilela",
"msg_date": "Sat, 12 Nov 2022 11:03:46 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "On Sat, Nov 12, 2022 at 11:03:46AM -0300, Ranier Vilela wrote:\n> I think complexity doesn't pay off.\n> For example, CopyStatistics not knowing how many tuples will be processed.\n> IMHO, this step is right now.\n> CatalogTupleInsertWithInfo offers considerable improvement without\n> introducing bugs and maintenance issues.\n\nConsiderable may be a bit an overstatement? I can see a difference in\nprofiles when switching from one to the other in some extreme cases,\nbut for the REINDEX CONCURRENTLY case most of the runtime is going to\nbe eaten in the wait phases, the index build and its validation.\n\nAnyway, multi-inserts are going to be solution better than\nCatalogTupleInsertWithInfo() in some cases, because we would just\ngenerate one WAL record of N inserts rather than N records with one\nINSERT each.\n\nLooking closely, EnumValuesCreate() is a DDL path but I'd like to\nthink that two enum values are at least present at creation in most\ncases. AddRoleMems() becomes relevant when using more than one role,\nwhich is a less common pattern, so I'd be fine with switching to a\nsingle index-opening approach with CatalogTupleUpdateWithInfo() as you\nsuggest without the tuple slot management. CopyStatistics() does not\nknow in advance the number of tuples it would insert, and it would be\na gain when there are more than 2 expressions with entries in\npg_statistic as of HEAD. Perhaps you're right with your simple\nsuggestion to stick with CatalogTupleUpdateWithInfo() in this case.\nMaybe there is some external code calling this routine for tables, who\nknows.\n\nupdate_attstats() is actually an area that cannot be changed now that\nI look at it, as we could finish to update some entries, so the slot\napproach will not be relevant, but using CatalogTupleUpdateWithInfo()\nis. 
(As a matter of fact, the regression test suite is reporting that\nupdate_attstats() is called for one attribute 10% of the time, did not\ncheck the insert/update rate though).\n\nWould you like to give a try with the tuple slot management in\nEnumValuesCreate()?\n--\nMichael",
"msg_date": "Tue, 15 Nov 2022 09:57:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 09:57:26AM +0900, Michael Paquier wrote:\n> Anyway, multi-inserts are going to be solution better than\n> CatalogTupleInsertWithInfo() in some cases, because we would just\n> generate one WAL record of N inserts rather than N records with one\n> INSERT each.\n\nSomething that you did not consider in the initial patch is that we\nmay finish by opening catalog indexes even in cases where this would\nnot have happened on HEAD, as we may finish by doing nothing when\ncopying the stats or updating them during an analyze, and that's not\nfine IMO. However it is easy enough to minimize the cost: just do a\nCatalogOpenIndexes() when absolutely required, and close things only\nif the indexes have been opened.\n\nThen, there are the cases where it is worth switching to a\nmulti-insert logic as these are going to manipulate more than 2\nentries all the time: enum list addition and two code paths of\ntsearchcmds.c (where up to 16 entries can be lined up). This is a\ncase-by-case analysis. For example, in the case of the enums, the\nnumber of elements is known in advance so it is possible to know the\nnumber of slots that would be used and initialize them. But that's\nnot something you would do for the first tsearch bits where the data\nis built upon a scan so the slot init should be delayed. The second\ntsearch one can use a predictible approach, like the enums based on\nthe number of known elements to insert.\n\nSo I've given a try at all that, and finished with the attached. This\npatch finishes with a list of bullet points, so this had better be\nsplit into different commits, I guess.\nThoughts?\n--\nMichael",
"msg_date": "Tue, 15 Nov 2022 16:02:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 4:02 AM, Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Tue, Nov 15, 2022 at 09:57:26AM +0900, Michael Paquier wrote:\n> > Anyway, multi-inserts are going to be a better solution than\n> > CatalogTupleInsertWithInfo() in some cases, because we would just\n> > generate one WAL record of N inserts rather than N records with one\n> > INSERT each.\n>\n> Something that you did not consider in the initial patch is that we\n> may finish by opening catalog indexes even in cases where this would\n> not have happened on HEAD, as we may finish by doing nothing when\n> copying the stats or updating them during an analyze, and that's not\n> fine IMO. However it is easy enough to minimize the cost: just do a\n> CatalogOpenIndexes() when absolutely required, and close things only\n> if the indexes have been opened.\n>\nI find it very unlikely that no tuple needs to be updated\nonce inside CopyStatistics, and the branch cost can get in the way,\nbut I don't object to your solution.\n\n> Then, there are the cases where it is worth switching to a\n> multi-insert logic as these are going to manipulate more than 2\n> entries all the time: enum list addition and two code paths of\n> tsearchcmds.c (where up to 16 entries can be lined up). This is a\n> case-by-case analysis. For example, in the case of the enums, the\n> number of elements is known in advance so it is possible to know the\n> number of slots that would be used and initialize them. But that's\n> not something you would do for the first tsearch bits where the data\n> is built upon a scan so the slot init should be delayed. The second\n> tsearch one can use a predictable approach, like the enums based on\n> the number of known elements to insert.\n>\nMakes sense.\n\n> So I've given a try at all that, and finished with the attached. This\n> patch finishes with a list of bullet points, so this had better be\n> split into different commits, I guess.\n> Thoughts?\n>\nMissed AddRoleMems?\nCould you continue with CatalogTupleInsertWithInfo, what do you think?\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 15 Nov 2022 11:42:34 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 11:42:34AM -0300, Ranier Vilela wrote:\n> I find it very unlikely that no tuple needs to be updated\n> once inside CopyStatistics, and the branch cost can get in the way,\n> but I don't object to your solution.\n\nThe code assumes that it is a possibility.\n\n> Missed AddRoleMems?\n> Could you continue with CatalogTupleInsertWithInfo, what do you think?\n\nThis one has been left out on purpose. I was tempted to use\nWithInfo() with a CatalogIndexState opened optionally but I got the\nimpression that it makes the code a bit harder to follow and\nAddRoleMems() is already complex on its own. Most DDL patterns\nworking on roles would involve one role. More roles could be added of\ncourse in one shot, but the extra logic complexity did not look that\nappealing to me especially as some role updates are skipped.\n--\nMichael",
"msg_date": "Wed, 16 Nov 2022 06:58:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 06:58:01AM +0900, Michael Paquier wrote:\n> This one has been left out on purpose. I was tempted to use\n> WithInfo() with a CatalogIndexState opened optionally but I got the\n> impression that it makes the code a bit harder to follow and\n> AddRoleMems() is already complex on its own. Most DDL patterns\n> working on roles would involve one role. More roles could be added of\n> course in one shot, but the extra logic complexity did not look that\n> appealing to me especially as some role updates are skipped.\n\nI have worked more on that today, and applied all that after splitting\nthe whole into three commits as different areas were touched.\nIt looks like we are good for this thread, then.\n\nI have spotted more optimizations possible, particularly for operator\nclasses, but that could happen later.\n--\nMichael",
"msg_date": "Wed, 16 Nov 2022 16:23:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 4:23 AM, Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Wed, Nov 16, 2022 at 06:58:01AM +0900, Michael Paquier wrote:\n> > This one has been left out on purpose. I was tempted to use\n> > WithInfo() with a CatalogIndexState opened optionally but I got the\n> > impression that it makes the code a bit harder to follow and\n> > AddRoleMems() is already complex on its own. Most DDL patterns\n> > working on roles would involve one role. More roles could be added of\n> > course in one shot, but the extra logic complexity did not look that\n> > appealing to me especially as some role updates are skipped.\n>\n> I have worked more on that today, and applied all that after splitting\n> the whole into three commits as different areas were touched.\n> It looks like we are good for this thread, then.\n>\nThanks Michael.\n\n> I have spotted more optimizations possible, particularly for operator\n> classes, but that could happen later.\n>\nGood to know.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 16 Nov 2022 08:33:58 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overhead open-close indexes (catalog updates)"
}
]
[
{
"msg_contents": "Commit 38bfae36526 moved the .txt files pg_upgrade generates to a separate\nsubdir, but there are a few left which are written to cwd. The thread\nresulting in that patch doesn't discuss these files specifically so it seems\nthey are just an oversight. Unless I'm missing something.\n\nShould something like the attached be applied to ensure all generated files are\nplaced in the subdirectory?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Wed, 31 Aug 2022 14:09:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "pg_upgrade generated files in subdir follow-up"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Commit 38bfae36526 moved the .txt files pg_upgrade generates to a separate\n> subdir, but there are a few left which are written to cwd. The thread\n> resulting in that patch doesn't discuss these files specifically so it seems\n> they are just an oversight. Unless I'm missing something.\n\n> Should something the attached be applied to ensure all generated files are\n> placed in the subdirectory?\n\nIt certainly looks inconsistent ATM. I wondered if maybe the plan was to\nput routine output into the log directory but problem-reporting files\ninto cwd --- but that isn't what's happening now.\n\nAs long as we report the path to where the file is, I don't see a reason\nnot to put problem-reporting files in the subdir too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Aug 2022 09:59:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade generated files in subdir follow-up"
},
{
"msg_contents": "> On 31 Aug 2022, at 15:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Commit 38bfae36526 moved the .txt files pg_upgrade generates to a separate\n>> subdir, but there are a few left which are written to cwd. The thread\n>> resulting in that patch doesn't discuss these files specifically so it seems\n>> they are just an oversight. Unless I'm missing something.\n> \n>> Should something the attached be applied to ensure all generated files are\n>> placed in the subdirectory?\n> \n> It certainly looks inconsistent ATM. I wondered if maybe the plan was to\n> put routine output into the log directory but problem-reporting files\n> into cwd --- but that isn't what's happening now.\n\nRight, check_proper_datallowconn and check_for_isn_and_int8_passing_mismatch\nand a few other check functions already place error reporting in the subdir.\n\n> As long as we report the path to where the file is, I don't see a reason\n> not to put problem-reporting files in the subdir too.\n\nAgreed. The documentation states:\n\n\t\"pg_upgrade creates various working files, such as schema dumps, stored\n\twithin pg_upgrade_output.d in the directory of the new cluster. Each\n\trun creates a new subdirectory named with a timestamp formatted as per\n\tISO 8601 (%Y%m%dT%H%M%S), where all the generated files are stored.\"\n\nThe delete_old_cluster and reindex_hash scripts are still placed in CWD, which\nisn't changed by this patch, as that seems correct (and might break scripts if\nwe move them). Maybe we should amend the docs to mention that scripts aren't\ngenerated in the subdir?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 16:41:24 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade generated files in subdir follow-up"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 04:41:24PM +0200, Daniel Gustafsson wrote:\n> > On 31 Aug 2022, at 15:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > Daniel Gustafsson <daniel@yesql.se> writes:\n> >> Commit 38bfae36526 moved the .txt files pg_upgrade generates to a separate\n> >> subdir, but there are a few left which are written to cwd. The thread\n> >> resulting in that patch doesn't discuss these files specifically so it seems\n> >> they are just an oversight. Unless I'm missing something.\n> > \n> >> Should something like the attached be applied to ensure all generated files are\n> >> placed in the subdirectory?\n> > \n> > It certainly looks inconsistent ATM. I wondered if maybe the plan was to\n> > put routine output into the log directory but problem-reporting files\n> > into cwd --- but that isn't what's happening now.\n\nThe script files are intended to stay where they are, and the error\nfiles are intended to move under the subdir, to allow for their easy\nremoval, per Tom's request.\n\n> Right, check_proper_datallowconn and check_for_isn_and_int8_passing_mismatch\n> and a few other check functions already place error reporting in the subdir.\n\nIt looks like I may have grepped for fprintf or similar, and missed\nchecking output_path.\n\nI updated your patch to put the logic inside\ncheck_for_data_types_usage(), which is shorter, and seems to simplify\ndoing what's intended into the future.\n\n-- \nJustin",
"msg_date": "Fri, 9 Sep 2022 11:07:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade generated files in subdir follow-up"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 11:07:41AM -0500, Justin Pryzby wrote:\n> On Wed, Aug 31, 2022 at 04:41:24PM +0200, Daniel Gustafsson wrote:\n> > > On 31 Aug 2022, at 15:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > \n> > > Daniel Gustafsson <daniel@yesql.se> writes:\n> > >> Commit 38bfae36526 moved the .txt files pg_upgrade generates to a separate\n> > >> subdir, but there are a few left which are written to cwd. The thread\n> > >> resulting in that patch doesn't discuss these files specifically so it seems\n> > >> they are just an oversight. Unless I'm missing something.\n> > > \n> > >> Should something the attached be applied to ensure all generated files are\n> > >> placed in the subdirectory?\n> > > \n> > > It certainly looks inconsistent ATM. I wondered if maybe the plan was to\n> > > put routine output into the log directory but problem-reporting files\n> > > into cwd --- but that isn't what's happening now.\n> \n> The script files are intended to stay where they are, and the error\n> files are intended to move under the subdir, to allow for their easy\n> removal, per Tom's request.\n\nRight. The .txt files reporting that something went wrong should be\nin the basedir, like loadable_libraries.txt, as these are not really\ninternal logs but provide information about a failure state. I have\ndouble-checked the whole code of pg_upgrade, and I am not seeing\nanother area to fix, so 0001 looks fine to me. This one is on me, so\nI guess that I'd like to take care of it myself.\n\n> It looks like I may have grepped for fprintf or similar, and missed\n> checking output_path.\n> \n> I updated your patach to put the logic inside\n> check_for_data_types_usage(), which is shorter, and seems to simplify\n> doing what's intended into the future.\n\n0002 makes the code more complicated IMO, as we still need to report\nthe location of the file in the logs. So I would leave things to\nwhat's proposed in 0001.\n--\nMichael",
"msg_date": "Mon, 12 Sep 2022 16:33:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade generated files in subdir follow-up"
},
{
"msg_contents": "> On 12 Sep 2022, at 09:33, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I have double-checked the whole code of pg_upgrade, and I am not seeing another\n> area to fix, so 0001 looks fine to me. This one is on me, so I guess that I'd\n> like to take care of it myself.\n\n\nSure, go for it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 09:51:32 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade generated files in subdir follow-up"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 09:51:32AM +0200, Daniel Gustafsson wrote:\n> Sure, go for it.\n\nThanks. Done, then, after an extra look.\n--\nMichael",
"msg_date": "Tue, 13 Sep 2022 10:40:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade generated files in subdir follow-up"
}
] |
[
{
"msg_contents": "Hi,\n\nThere are two ways in which a role can exercise the privileges of some\nother role which has been granted to it. First, it can implicitly\ninherit the privileges of the granted role. Second, it can assume the\nidentity of the granted role using the SET ROLE command. It is\npossible to control the former behavior, but not the latter. In v15\nand prior releases, we had a role-level [NO]INHERIT property which\ncontrolled whether a role automatically inherited the privileges of\nany role granted to it. This was all-or-nothing. Beginning in\ne3ce2de09d814f8770b2e3b3c152b7671bcdb83f, the inheritance behavior of\nrole-grants can be overridden for individual grants, so that some\ngrants are inherited and others are not. However, there is no similar\nfacility for controlling whether a role can SET ROLE to some other\nrole of which it is a member. At present, if role A is a member of\nrole B, then A can SET ROLE B, and that's it.\n\nIn some circumstances, it may be desirable to control this behavior.\nFor example, if we GRANT pg_read_all_settings TO seer, we do want the\nseer to be able to read all the settings, else we would not have\ngranted the role.
But we might not want the seer to be able to do\nthis:\n\nYou are now connected to database \"rhaas\" as user \"seer\".\nrhaas=> set role pg_read_all_settings;\nSET\nrhaas=> create table artifact (a int);\nCREATE TABLE\nrhaas=> \\d\n List of relations\n Schema | Name | Type | Owner\n--------+----------+-------+----------------------\n public | artifact | table | pg_read_all_settings\n(1 row)\n\nI have attached a rather small patch which makes it possible to\ncontrol this behavior:\n\nYou are now connected to database \"rhaas\" as user \"rhaas\".\nrhaas=# grant pg_read_all_settings to seer with set false;\nGRANT ROLE\nrhaas=# \\c - seer\nYou are now connected to database \"rhaas\" as user \"seer\".\nrhaas=> set role pg_read_all_settings;\nERROR: permission denied to set role \"pg_read_all_settings\"\n\nI think that this behavior is generally useful, and not just for the\npredefined roles that we ship as part of PostgreSQL. I don't think\nit's too hard to imagine someone wanting to use some locally created\nrole as a container for privileges but not wanting the users who\npossess this role to run around creating new objects owned by it. To\nsome extent that can be controlled by making sure the role in question\ndoesn't have any excess privileges, but that's not really sufficient:\nall you need is one schema anywhere in the system that grants CREATE\nto PUBLIC. You could avoid creating such a schema, which might be a\ngood idea for other reasons anyway, but it feels like approaching the\nproblem from the wrong end. What you really want is to allow the users\nto inherit the privileges of the role but not use SET ROLE to become\nthat role, so that's what this patch lets you do.\n\nThere's one other kind of case in which this sort of thing might be\nsomewhat useful, although it's more dubious. Suppose you have an\noncall group where you regularly add and remove members according to\nwho is on call. 
Naturally, you have an on-call bot which performs this\ntask automatically. The on-call bot has the ability to manage\nmemberships in the oncall group, but should not have the ability to\naccess any of its privileges, either by inheritance or via SET ROLE.\nThis patch KIND OF lets you accomplish this:\n\nrhaas=# create role oncall;\nCREATE ROLE\nrhaas=# create role oncallbot login;\nCREATE ROLE\nrhaas=# grant oncall to oncallbot with inherit false, set false, admin true;\nGRANT ROLE\nrhaas=# create role anna;\nCREATE ROLE\nrhaas=# create role eliza;\nCREATE ROLE\nrhaas=# \\c - oncallbot\nYou are now connected to database \"rhaas\" as user \"oncallbot\".\nrhaas=> grant oncall to anna;\nGRANT ROLE\nrhaas=> revoke oncall from anna;\nREVOKE ROLE\nrhaas=> grant oncall to eliza;\nGRANT ROLE\nrhaas=> set role oncall;\nERROR: permission denied to set role \"oncall\"\n\nThe problem here is that if a nasty evil hacker takes over the\noncallbot role, nothing whatsoever prevents them from executing \"grant\noncall to oncallbot with set true\" after which they can then \"SET ROLE\noncall\" using the privileges they just granted themselves. And even if\nunder some theory we blocked that, they could still maliciously grant\nthe sought-after on-call privileges to some other role i.e. \"grant\noncall to accomplice\". It's fundamentally difficult to allow people to\nadminister a set of privileges without giving them the ability to\nusurp those privileges, and I wouldn't like to pretend that this patch\nis in any way sufficient to accomplish such a thing. Nevertheless, I\nthink there's some chance it might be useful to someone building such\na system, in combination with other safeguards. 
Or maybe not: this\nisn't the main reason I'm interested in this, and it's just an added\nbenefit if it turns out that someone can do something like this with\nit.\n\nIn order to apply this patch, we'd need to reach a conclusion about\nthe matters mentioned in\nhttp://postgr.es/m/CA+TgmobhEYYnW9vrHvoLvD8ODsPBJuU9CbK6tms6Owd70hFMTw@mail.gmail.com\n-- and thinking about this patch might shed some light on what we'd\nwant to do over there.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 31 Aug 2022 08:56:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "allowing for control over SET ROLE"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 08:56:31AM -0400, Robert Haas wrote:\n> In some circumstances, it may be desirable to control this behavior.\n> For example, if we GRANT pg_read_all_settings TO seer, we do want the\n> seer to be able to read all the settings, else we would not have\n> granted the role. But we might not want the seer to be able to do\n> this:\n> \n> You are now connected to database \"rhaas\" as user \"seer\".\n> rhaas=> set role pg_read_all_settings;\n> SET\n> rhaas=> create table artifact (a int);\n> CREATE TABLE\n> rhaas=> \\d\n> List of relations\n> Schema | Name | Type | Owner\n> --------+----------+-------+----------------------\n> public | artifact | table | pg_read_all_settings\n> (1 row)\n\n+1\n\n> The problem here is that if a nasty evil hacker takes over the\n> oncallbot role, nothing whatsoever prevents them from executing \"grant\n> oncall to oncallbot with set true\" after which they can then \"SET ROLE\n> oncall\" using the privileges they just granted themselves. And even if\n> under some theory we blocked that, they could still maliciously grant\n> the sought-after on-call privileges to some other role i.e. \"grant\n> oncall to accomplice\". It's fundamentally difficult to allow people to\n> administer a set of privileges without giving them the ability to\n> usurp those privileges, and I wouldn't like to pretend that this patch\n> is in any way sufficient to accomplish such a thing. Nevertheless, I\n> think there's some chance it might be useful to someone building such\n> a system, in combination with other safeguards. Or maybe not: this\n> isn't the main reason I'm interested in this, and it's just an added\n> benefit if it turns out that someone can do something like this with\n> it.\n\nYeah, if you have ADMIN for a role, you would effectively have SET. I'm\ntempted to suggest that ADMIN roles should be restricted from granting SET\nunless they have it themselves. 
However, that seems like it'd create a\nweird discrepancy. If you have ADMIN but not INHERIT or SET, you'd still\nbe able to grant membership with or without INHERIT, but you wouldn't be\nable to grant SET. In the end, I guess I agree with you that it's\n\"fundamentally difficult to allow people to administer a set of privileges\nwithout giving them the ability to usurp those privileges...\"\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 1 Sep 2022 13:57:29 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "Robert Haas:\n> Beginning in\n> e3ce2de09d814f8770b2e3b3c152b7671bcdb83f, the inheritance behavior of\n> role-grants can be overridden for individual grants, so that some\n> grants are inherited and others are not.\n\nThat's a great thing to have!\n\n> However, there is no similar\n> facility for controlling whether a role can SET ROLE to some other\n> role of which it is a member. At present, if role A is a member of\n> role B, then A can SET ROLE B, and that's it.\n> \n> In some circumstances, it may be desirable to control this behavior.\n\n+1\n\n> rhaas=# grant oncall to oncallbot with inherit false, set false, admin true;\n\nLooking at the syntax here, I'm not sure whether adding more WITH \noptions is the best way to do this. From a user perspective WITH SET \nTRUE looks more like a privilege granted on how to use this database \nobject (role). Something like this would be more consistent with the \nother GRANT variants:\n\nGRANT SET ON ROLE oncall TO oncallbot WITH GRANT OPTION;\n\nThis is obviously not exactly the same as the command above, because \noncallbot would be able to use SET ROLE directly. But as discussed, this \nis more cosmetic anyway, because they could GRANT it to themselves.\n\nThe full syntax could look like this:\n\nGRANT { INHERIT | SET | ALL [ PRIVILEGES ] }\n ON ROLE role_name [, ...]\n TO role_specification [, ...] WITH GRANT OPTION\n [ GRANTED BY role_specification ]\n\nWith this new syntax, the existing\n\nGRANT role_name TO role_specification [WITH ADMIN OPTION];\n\nwould be the same as\n\nGRANT ALL ON role_name TO role_specification [WITH GRANT OPTION];\n\nThis would slightly change the way INHERIT works: As a privilege, it \nwould not override the member's role INHERIT attribute, but would \ncontrol whether that attribute is applied. 
This means:\n\n- INHERIT attribute + INHERIT granted -> inheritance (same)\n- INHERIT attribute + INHERIT not granted -> no inheritance (different!)\n- NOINHERIT attribute + INHERIT not granted -> no inheritance (same)\n- NOINHERIT attribute + INHERIT granted -> no inheritance (different!)\n\nThis would allow us to do the following:\n\nGRANT INHERIT ON ROLE pg_read_all_settings TO seer_bot WITH GRANT OPTION;\n\nseer_bot would now be able to GRANT pg_read_all_settings to other users, \ntoo - but without the ability to use or grant SET ROLE to anyone. As \nlong as seer_bot has the NOINHERIT attribute set, they wouldn't use that \nprivilege, though - which might be desired for the bot.\n\nSimilarly, it would be possible for the oncallbot in the example above to \nbe able to grant SET ROLE only - and not INHERIT.\n\nI realize that there has been a lot of discussion about roles and \nprivileges in the past year. I have tried to follow those discussions, \nbut it's likely that I missed some good arguments against my proposal above.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Fri, 2 Sep 2022 09:20:23 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 3:20 AM Wolfgang Walther <walther@technowledgy.de> wrote:\n> The full syntax could look like this:\n>\n> GRANT { INHERIT | SET | ALL [ PRIVILEGES ] }\n> ON ROLE role_name [, ...]\n> TO role_specification [, ...] WITH GRANT OPTION\n> [ GRANTED BY role_specification ]\n>\n> With this new syntax, the existing\n>\n> GRANT role_name TO role_specification [WITH ADMIN OPTION];\n>\n> would be the same as\n>\n> GRANT ALL ON role_name TO role_specification [WITH GRANT OPTION];\n\nThis would be a pretty significant rework. Right now, there's only one\nADMIN OPTION on a role, and you either have it or you don't. Changing\nthings around so that you can have each individual privilege with or\nwithout grant option would be a fair amount of work. I don't think\nit's completely crazy, but I'm not very sold on the idea, either,\nbecause giving somebody *either* the ability to grant INHERIT option\n*or* the ability to grant SET option is largely equivalent from a\nsecurity point of view. Either way, the grantees will be able to\naccess the privileges of the role in some fashion. This is different\nfrom table privileges, where SELECT and INSERT are clearly distinct\nrights that do not overlap, and thus separating the ability to\nadminister one of those things from the ability to administer the\nother one has more utility.\n\nThe situation might look different in the future if we added more role\noptions and if each of those were clearly severable rights. For\ninstance, if we had a DROP option on a role grant that conferred the\nright to drop the role, that would be distinct from SET and INHERIT\nand it might make sense to allow someone to administer SET and/or\nINHERIT but not DROP. However, I don't have any current plans to add\nsuch an option, and TBH I find it a little hard to come up with a\ncompelling list of things that would be worth adding as separate\npermissions here. 
There are a bunch of things that one role can do to\nanother using ALTER ROLE, and right now you have to be SUPERUSER or\nhave CREATEROLE to do that stuff. In\ntheory, you could turn that into a big list of individual rights so\nthat you can e.g. GRANT CHANGE PASSWORD ON role1 TO role2 WITH GRANT\nOPTION.\n\nHowever, I really don't see a lot of utility in slicing things up at\nthat level of granularity. There isn't in my view a lot of use case\nfor giving a user the right to change some other user's password but\nnot giving them the right to set the connection limit for that same\nother user -- and there's even less use case for giving some user the\nability to grant one of those rights but not the other.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Sep 2022 10:20:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Wed, 2022-08-31 at 08:56 -0400, Robert Haas wrote:\n> In some circumstances, it may be desirable to control this behavior.\n> For example, if we GRANT pg_read_all_settings TO seer, we do want the\n> seer to be able to read all the settings, else we would not have\n> granted the role. But we might not want the seer to be able to do\n> this:\n> \n> You are now connected to database \"rhaas\" as user \"seer\".\n> rhaas=> set role pg_read_all_settings;\n> SET\n> rhaas=> create table artifact (a int);\n> CREATE TABLE\n> rhaas=> \\d\n> List of relations\n> Schema | Name | Type | Owner\n> --------+----------+-------+----------------------\n> public | artifact | table | pg_read_all_settings\n> (1 row)\n\nInteresting case.\n\n> I have attached a rather small patch which makes it possible to\n> control this behavior:\n> \n> You are now connected to database \"rhaas\" as user \"rhaas\".\n> rhaas=# grant pg_read_all_settings to seer with set false;\n> GRANT ROLE\n\nYou've defined this in terms of the mechanics -- allow SET ROLE or not\n-- but I assume you intend it as a security feature to allow/forbid\nsome capabilities.\n\nIs this only about the capability to create objects owned by a role\nyou're a member of? Or are there other implications?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 03 Sep 2022 12:46:40 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Sat, Sep 3, 2022 at 3:46 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > You are now connected to database \"rhaas\" as user \"rhaas\".\n> > rhaas=# grant pg_read_all_settings to seer with set false;\n> > GRANT ROLE\n>\n> You've defined this in terms of the mechanics -- allow SET ROLE or not\n> -- but I assume you intend it as a security feature to allow/forbid\n> some capabilities.\n>\n> Is this only about the capability to create objects owned by a role\n> you're a member of? Or are there other implications?\n\nI think there are some other implications, but I don't think they're\nanything super-dramatic. For example, you could create a group that's\njust for purposes of pg_hba.conf matching and make the grants both SET\nFALSE and INHERIT FALSE, with the idea that the members shouldn't have\nany access to the role; it's just there for grouping purposes. I\nmentioned one other possible scenario, with oncallbot, in the original\npost.\n\nI'm not sure whether thinking about this in terms of security\ncapabilities is the most helpful way to view it. My view was, as you\nsay, more mechanical. I think sometimes you grant somebody a role and\nyou want them to be able to use SET ROLE to assume the privileges of\nthe target role, and sometimes you don't. I think that primarily\ndepends on the reason why you made the grant. In the case of a\npredefined role, you're almost certainly granting membership so that\nthe privileges of the predefined role can be inherited. In other\ncases, you may be doing it so that the member can SET ROLE to the\ntarget role, or you may be doing it so that the member can administer\nthe role (because you give them ADMIN OPTION), or you may even be\ndoing it for pg_hba.conf matching.\n\nAnd because of this, I think it follows that there may be some\ncapabilities conferred by role membership that you don't really want\nto convey in particular cases, so I think it makes sense to have a way\nto avoid conveying the ones that aren't necessary for the grant to\nfulfill its purpose. I'm not exactly sure how far that gets you in\nterms of building a system that is more secure than what you could\nbuild otherwise, but it feels like a useful capability regardless.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 10:42:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Tue, 2022-09-06 at 10:42 -0400, Robert Haas wrote:\n> I think there are some other implications, but I don't think they're\n> anything super-dramatic. For example, you could create a group that's\n> just for purposes of pg_hba.conf matching and make the grants both\n> SET\n> FALSE and INHERIT FALSE, with the idea that the members shouldn't\n> have\n> any access to the role; it's just there for grouping purposes. I\n> mentioned one other possible scenario, with oncallbot, in the\n> original\n> post.\n\nInteresting. All of those seem like worthwhile use cases to me.\n\n> I'm not sure whether thinking about this in terms of security\n> capabilities is the most helpful way to view it. My view was, as you\n> say, more mechanical. I think sometimes you grant somebody a role and\n> you want them to be able to use SET ROLE to assume the privileges of\n> the target role, and sometimes you don't.\n\nBy denying the ability to \"SET ROLE pg_read_all_settings\", I assumed\nthat we'd deny the ability to create objects owned by that\npg_read_all_settings. But on closer inspection:\n\n grant all privileges on schema public to public;\n create user u1;\n grant pg_read_all_settings to u1 with set false;\n \\c - u1\n create table foo(i int);\n set role pg_read_all_settings;\n ERROR: permission denied to set role \"pg_read_all_settings\"\n alter table foo owner to pg_read_all_settings;\n \\d\n List of relations\n Schema | Name | Type | Owner \n --------+------+-------+----------------------\n public | foo | table | pg_read_all_settings\n (1 row)\n\n\nUsers will reasonably interpret any feature of GRANT to be a security\nfeature that allows or prevents certain users from causing certain\noutcomes. But here, I was initially fooled, and the outcome is still\npossible.\n\nSo I believe we do need to think in terms of what capabilities we are\nreally restricting with this feature rather than solely the mechanics.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 06 Sep 2022 11:45:54 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 2:45 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > I'm not sure whether thinking about this in terms of security\n> > capabilities is the most helpful way to view it. My view was, as you\n> > say, more mechanical. I think sometimes you grant somebody a role and\n> > you want them to be able to use SET ROLE to assume the privileges of\n> > the target role, and sometimes you don't.\n>\n> By denying the ability to \"SET ROLE pg_read_all_settings\", I assumed\n> that we'd deny the ability to create objects owned by that\n> pg_read_all_settings. But on closer inspection:\n>\n> grant all privileges on schema public to public;\n> create user u1;\n> grant pg_read_all_settings to u1 with set false;\n> \\c - u1\n> create table foo(i int);\n> set role pg_read_all_settings;\n> ERROR: permission denied to set role \"pg_read_all_settings\"\n> alter table foo owner to pg_read_all_settings;\n> \\d\n> List of relations\n> Schema | Name | Type | Owner\n> --------+------+-------+----------------------\n> public | foo | table | pg_read_all_settings\n> (1 row)\n\nYeah. Please note this paragraph in my original post:\n\n\"In order to apply this patch, we'd need to reach a conclusion about\nthe matters mentioned in\nhttp://postgr.es/m/CA+TgmobhEYYnW9vrHvoLvD8ODsPBJuU9CbK6tms6Owd70hFMTw@mail.gmail.com\n-- and thinking about this patch might shed some light on what we'd\nwant to do over there.\"\n\nI hadn't quite gotten around to updating that thread based on posting\nthis, but this scenario was indeed on my mind.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 14:50:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On 31.08.22 14:56, Robert Haas wrote:\n> In some circumstances, it may be desirable to control this behavior.\n> For example, if we GRANT pg_read_all_settings TO seer, we do want the\n> seer to be able to read all the settings, else we would not have\n> granted the role. But we might not want the seer to be able to do\n> this:\n> \n> You are now connected to database \"rhaas\" as user \"seer\".\n> rhaas=> set role pg_read_all_settings;\n> SET\n> rhaas=> create table artifact (a int);\n> CREATE TABLE\n> rhaas=> \\d\n> List of relations\n> Schema | Name | Type | Owner\n> --------+----------+-------+----------------------\n> public | artifact | table | pg_read_all_settings\n> (1 row)\n\nI think this is because we have (erroneously) made SET ROLE to be the \nsame as SET SESSION AUTHORIZATION. If those two were separate (i.e., \nthere is a current user and a separate current role, as in the SQL \nstandard), then this would be more straightforward.\n\nI don't know if it's possible to untangle that at this point.\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 17:41:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 11:41 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I think this is because we have (erroneously) make SET ROLE to be the\n> same as SET SESSION AUTHORIZATION. If those two were separate (i.e.,\n> there is a current user and a separate current role, as in the SQL\n> standard), then this would be more straightforward.\n>\n> I don't know if it's possible to untangle that at this point.\n\nI think that it already works as you describe:\n\nrhaas=# create role foo;\nCREATE ROLE\nrhaas=# create role bar;\nCREATE ROLE\nrhaas=# grant bar to foo;\nGRANT ROLE\nrhaas=# set session authorization foo;\nSET\nrhaas=> set role bar;\nSET\nrhaas=> select current_user;\n current_user\n--------------\n bar\n(1 row)\n\nrhaas=> select session_user;\n session_user\n--------------\n foo\n(1 row)\n\nThere may well be problems here, but this example shows that the\ncurrent_user and session_user concepts are different in PostgreSQL.\nIt's also true that the privileges required to execute the commands\nare different: SET SESSION AUTHORIZATION requires that the session\nuser is a superuser, and SET ROLE requires that the identity\nestablished via SET SESSION AUTHORIZATION has the target role granted\nto it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Sep 2022 07:24:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 8:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> In order to apply this patch, we'd need to reach a conclusion about\n> the matters mentioned in\n> http://postgr.es/m/CA+TgmobhEYYnW9vrHvoLvD8ODsPBJuU9CbK6tms6Owd70hFMTw@mail.gmail.com\n> -- and thinking about this patch might shed some light on what we'd\n> want to do over there.\n\nThat thread has not reached an entirely satisfying conclusion.\nHowever, the behavior that was deemed outright buggy over there has\nbeen fixed. The remaining question is what to do about commands that\nallow you to give objects to other users (like ALTER <whatever> ..\nOWNER TO) or commands that allow you to create objects owned by other\nusers (like CREATE DATABASE ... OWNER). I have, in this version,\nadopted the proposal by Wolfgang Walther on the other thread to make\nthis controlled by the new SET option. This essentially takes the view\nthat the ability to create objects owned by another user is not\nprecisely a privilege, and is thus not inherited just because the\nINHERIT option is set on the GRANT, but it is something you can do if\nyou could SET ROLE to that role, so we make it dependent on the SET\noption. This logic is certainly debatable, but it does have the\npractical advantage of making INHERIT TRUE, SET FALSE a useful\ncombination of settings for predefined roles. It's also 100%\nbackward-compatible, whereas if we made the behavior dependent on the\nINHERIT option, users could potentially notice behavior changes after\nupgrading to v16.\n\nSo I do like this behavior ... but it's definitely arguable whether\nit's the best thing. At any rate, here's an updated patch that\nimplements it, and to which I've also added a test case.\n\nReview appreciated.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 30 Sep 2022 16:34:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 04:34:32PM -0400, Robert Haas wrote:\n> That thread has not reached an entirely satisfying conclusion.\n> However, the behavior that was deemed outright buggy over there has\n> been fixed. The remaining question is what to do about commands that\n> allow you to give objects to other users (like ALTER <whatever> ..\n> OWNER TO) or commands that allow you to create objects owned by other\n> users (like CREATE DATABASE ... OWNER). I have, in this version,\n> adopted the proposal by Wolfgang Walther on the other thread to make\n> this controlled by the new SET option. This essentially takes the view\n> that the ability to create objects owned by another user is not\n> precisely a privilege, and is thus not inherited just because the\n> INHERIT option is set on the GRANT, but it is something you can do if\n> you could SET ROLE to that role, so we make it dependent on the SET\n> option. This logic is certainly debatable, but it does have the\n> practical advantage of making INHERIT TRUE, SET FALSE a useful\n> combination of settings for predefined roles. It's also 100%\n> backward-compatible, whereas if we made the behavior dependent on the\n> INHERIT option, users could potentially notice behavior changes after\n> upgrading to v16.\n\nI'm not sure about tying the ownership stuff to this new SET privilege.\nWhile you noted some practical advantages, I'd expect users to find it kind\nof surprising. Also, for predefined roles, I think you need to be careful\nabout distributing ADMIN, as anyone with ADMIN on a predefined role can\njust GRANT SET to work around the restrictions. I don't have a better\nidea, though, so perhaps neither of these things is a deal-breaker. I was\ntempted to suggest using ADMIN instead of SET for the ownership stuff, but\nthat wouldn't be backward-compatible, and you'd still be able to work\naround it to some extent with SET (e.g., SET ROLE followed by CREATE\nDATABASE).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 12 Oct 2022 13:59:37 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "Greetings,\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> On Fri, Sep 30, 2022 at 04:34:32PM -0400, Robert Haas wrote:\n> > That thread has not reached an entirely satisfying conclusion.\n> > However, the behavior that was deemed outright buggy over there has\n> > been fixed. The remaining question is what to do about commands that\n> > allow you to give objects to other users (like ALTER <whatever> ..\n> > OWNER TO) or commands that allow you to create objects owned by other\n> > users (like CREATE DATABASE ... OWNER). I have, in this version,\n> > adopted the proposal by Wolfgang Walther on the other thread to make\n> > this controlled by the new SET option. This essentially takes the view\n> > that the ability to create objects owned by another user is not\n> > precisely a privilege, and is thus not inherited just because the\n> > INHERIT option is set on the GRANT, but it is something you can do if\n> > you could SET ROLE to that role, so we make it dependent on the SET\n> > option. This logic is certainly debatable, but it does have the\n> > practical advantage of making INHERIT TRUE, SET FALSE a useful\n> > combination of settings for predefined roles. It's also 100%\n> > backward-compatible, whereas if we made the behavior dependent on the\n> > INHERIT option, users could potentially notice behavior changes after\n> > upgrading to v16.\n> \n> I'm not sure about tying the ownership stuff to this new SET privilege.\n> While you noted some practical advantages, I'd expect users to find it kind\n> of surprising. Also, for predefined roles, I think you need to be careful\n> about distributing ADMIN, as anyone with ADMIN on a predefined role can\n> just GRANT SET to work around the restrictions. I don't have a better\n> idea, though, so perhaps neither of these things is a deal-breaker. I was\n> tempted to suggest using ADMIN instead of SET for the ownership stuff, but\n> that wouldn't be backward-compatible, and you'd still be able to work\n> around it to some extent with SET (e.g., SET ROLE followed by CREATE\n> DATABASE).\n\nAs we work through splitting up the privileges and managing them in a\nmore fine-grained way, it seems clear that we'll need to have a similar\nsplit for ADMIN rights on roles- that is, we'll need to be able to\nsay \"role X is allowed to GRANT INHERIT for role Y to other roles, but\nnot SET\".\n\nI'm still half-tempted to say that predefined roles should just be dealt\nwith as a special case.. but if we split ADMIN in the manner as\ndescribed above then maybe we could get away with not having to, but it\nwould depend a great deal of people actually reading the documentation\nand I'm concerned that's a bit too much to ask in this case.\n\nThat is- the first person who is likely to GRANT out ADMIN rights in a\npredefined role is going to be a superuser. To avoid breaking backwards\ncompatibility, GRANT'ing of ADMIN needs to GRANT all the partial-ADMIN\nrights that exist, or at least exist today, which includes both SET and\nINHERIT. Unless we put some kind of special case for predefined roles\nwhere we throw an error or at least a warning when a superuser\n(presumably) inadvertently does a simple GRANT ADMIN for $predefined\nrole, we're going to end up in the situation where folks can SET ROLE to\na predefined role and do things that they really shouldn't be allowed\nto.\n\nWe could, of course, very clearly document that the way to GRANT ADMIN\nrights for a predefined role is to always make sure to *only* GRANT\nADMIN/INHERIT, but again I worry that it simply wouldn't be followed in\nmany cases. Perhaps we could arrange for the bootstrap superuser to\nonly be GRANT'd ADMIN/INHERIT for predefined roles and then not have an\nexplicit cut-out for superuser doing a GRANT on predefined roles or\nperhaps having such be protected under allow_system_table_mods under the\ngeneral consideration that modifying of predefined roles isn't something\nthat folks should be doing post-initdb.\n\nJust a few thoughts on this, not sure any of these ideas are great but\nperhaps this helps move us forward.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 16 Oct 2022 12:34:06 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 4:59 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> I'm not sure about tying the ownership stuff to this new SET privilege.\n> While you noted some practical advantages, I'd expect users to find it kind\n> of surprising. Also, for predefined roles, I think you need to be careful\n> about distributing ADMIN, as anyone with ADMIN on a predefined role can\n> just GRANT SET to work around the restrictions. I don't have a better\n> idea, though, so perhaps neither of these things is a deal-breaker.\n\nRight, I think if you give ADMIN away to someone, that's it: they can\ngrant that role to whoever they want in whatever mode they want,\nincluding themselves. That seems more or less intentional to me,\nthough. Giving someone ADMIN OPTION on a role is basically making them\nan administrator of that role, and then it is not surprising that they\ncan access its privileges.\n\nI agree with your other caveat about it being potentially surprising,\nbut I think it's not worse than a lot of other somewhat surprising\nthings that we handle by documenting them. And I don't have a better\nidea either.\n\n> I was\n> tempted to suggest using ADMIN instead of SET for the ownership stuff, but\n> that wouldn't be backward-compatible, and you'd still be able to work\n> around it to some extent with SET (e.g., SET ROLE followed by CREATE\n> DATABASE).\n\nI think that would be way worse. Giving ADMIN OPTION on a role is like\nmaking someone the owner of the object, whereas giving someone INHERIT\nor SET on a role is just a privilege to use the object.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Oct 2022 08:07:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Sun, Oct 16, 2022 at 12:34 PM Stephen Frost <sfrost@snowman.net> wrote:\n> As we work through splitting up the privileges and managing them in a\n> more fine-grained way, it seems clear that we'll need to have a similar\n> split for ADMIN rights on roles- that is, we'll need to be able to\n> say \"role X is allowed to GRANT INHERIT for role Y to other roles, but\n> not SET\".\n\nI don't think this is clear at all, actually. I see very little\nadvantage in splitting up ADMIN OPTION this way. I did think about\nthis, because it would be more consistent with what we do for table\nprivileges, but INHERIT and SET overlap enough from a permissions\npoint of view that there doesn't seem to be a lot of value in it. Now,\nif we invent a bunch more per-grant options, things might look\ndifferent, but in my opinion that has dubious value. Right now, all\nrole privileges other than the ones that are controlled by ADMIN\nOPTION, INHERIT, and what I'm proposing to make controlled by SET, are\ngated by CREATEROLE or by SUPERUSER. The list looks something like\nthis: change the INHERIT flag on a role, change the CREATEROLE flag on\na role, change the CREATEDB flag on a role, change the connection\nlimit for a role, change the VALID UNTIL time for a role, change the\npassword for a role other than your own, drop the role.\n\nAnd that's a pretty obscure list of things. I do think we need better\nways to control who can do those things, but I don't think making them\nall role privileges and then on top of that giving them all separate\nadmin options is the right way to go. It's slicing things incredibly\nfinely to give alice the right to grant to some other user the right\nto set only the VALID UNTIL time on role bob, but not the right to\nmodify role bob in any other way or the right to confer the ability to\nset VALID UNTIL for any other user. I can't believe we want to go\nthere. It's not worth the permissions bits, and even if we had\ninfinite privilege bits available, it's not worth the complexity from\na user perspective. Maybe you have some less-obscure list of things\nthat you think should be grantable privileges on roles?\n\nAnother thing to consider is that, since ADMIN OPTION is, as I\nunderstand it, part of the SQL specification, I think it would move us\nfurther from the SQL specification. I think we will be better off\nthinking of ADMIN OPTION on a role as roughly equivalent to being the\nowner of that role, which is an indivisible privilege, rather than\nthinking of it as equivalent to GRANT OPTION on each of N rights,\nwhich could then be subdivided.\n\n> I'm still half-tempted to say that predefined roles should just be dealt\n> with as a special case.. but if we split ADMIN in the manner as\n> described above then maybe we could get away with not having to, but it\n> would depend a great deal of people actually reading the documentation\n> and I'm concerned that's a bit too much to ask in this case.\n\nI don't think any splitting of ADMIN would be required to solve the\npredefined roles problem. Doesn't the patch I proposed do that?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Oct 2022 09:57:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "Bump.\n\nDiscussion has trailed off here, but I still don't see that we have a\nbetter way forward here than what I proposed on September 30th. Two\npeople have commented. Nathan said that he wasn't sure this was best\n(neither am I) but that he didn't have a better idea either (neither\ndo I). Stephen proposed decomposing ADMIN OPTION, which is not my\npreference, but even if it turns out that we want to pursue that\napproach, I do not think it would make sense to bundle that into this\npatch, because there isn't enough overlap between that change and this\nchange to justify that treatment.\n\nIf anyone else wants to comment, or if either of those people want to\ncomment further, please speak up soon. Otherwise, I am going to press\nforward with committing this. If we do not, we will continue to have\nno way of restricting SET ROLE, and we will continue to have no way\nof preventing the creation of objects owned by predefined roles by\nusers who have been granted those roles. As far as I am aware, no one\nis opposed to those goals, and in fact I think everyone who has\ncommented thinks that it would be good to do something. If a better\nidea than what I've implemented comes along, I'm happy to defer to it,\nbut I think this is one of those cases in which there probably isn't\nany totally satisfying solution, and yet doing nothing is not a\nsuperior alternative.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 15 Nov 2022 12:07:06 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 12:07:06PM -0500, Robert Haas wrote:\n> If anyone else wants to comment, or if either of those people want to\n> comment further, please speak up soon. Otherwise, I am going to press\n> forward with committing this.\n\nI don't think I have any further thoughts about the approach, so I won't\nbalk if you proceed with this change. It might be worth starting a new\nthread to discuss whether to treat predefined roles as a special case, but\nIMO that needn't hold up this patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 15 Nov 2022 13:14:16 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Tue, 2022-11-15 at 12:07 -0500, Robert Haas wrote:\n> If anyone else wants to comment, or if either of those people want to\n> comment further, please speak up soon.\n\nDid you have some thoughts on:\n\nhttps://postgr.es/m/a41d606daaaa03b629c2ef0ed274ae3b04a2c266.camel@j-davis.com\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 15 Nov 2022 16:23:14 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 7:23 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Tue, 2022-11-15 at 12:07 -0500, Robert Haas wrote:\n> > If anyone else wants to comment, or if either of those people want to\n> > comment further, please speak up soon.\n>\n> Did you have some thoughts on:\n>\n> https://postgr.es/m/a41d606daaaa03b629c2ef0ed274ae3b04a2c266.camel@j-davis.com\n\nI mean, I think what we were discussing there could be done, but it's\nnot the approach I like best. That's partly because that was just a\nback-of-the-envelope sketch of an idea, not a real proposal for\nsomething with a clear implementation path. But I think the bigger\nreason is that, in my opinion, this proposal is more generally useful,\nbecause it takes no position on why you wish to disallow SET ROLE. You\ncan just disallow it in some cases and allow it in others, and that's\nfine. That proposal targets a specific use case, which may make it a\nbetter solution to that particular problem, but it makes it unworkable\nas a solution to any other problem, I believe.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Nov 2022 16:52:29 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Thu, 2022-11-17 at 16:52 -0500, Robert Haas wrote:\n\n> But I think the bigger\n> reason is that, in my opinion, this proposal is more generally\n> useful,\n> because it takes no position on why you wish to disallow SET ROLE.\n> You\n> can just disallow it in some cases and allow it in others, and that's\n> fine.\n\nI agree that it's more flexible in the sense that it does what it\ndoes, and administrators can use it if it's useful for them. That means\nwe don't need to understand the actual goals as well; but it also means\nthat it's harder to determine the consequences if we tweak the behavior\n(or any related behavior) later.\n\nI'll admit that I don't have an example of a likely problem here,\nthough.\n\n> That proposal targets a specific use case, which may make it a\n> better solution to that particular problem, but it makes it\n> unworkable\n> as a solution to any other problem, I believe.\n\nYeah, that's the flip side: \"virtual\" roles (for lack of a better name)\nare a more narrow fix for the problem as I understand it; but it might\nleave related problems unfixed. You and Stephen[2] both seemed to\nconsider this approach, and I happened to like it, so I wanted to make\nsure that it wasn't dismissed too quickly.\n\nBut I'm fine if you'd like to move on with the SET ROLE privilege\ninstead, as long as we believe it grants a stable set of capabilities\n(and conversely, that if the SET ROLE privilege is revoked, that it\nrevokes a stable set of capabilities).\n\n[2]\nhttps://www.postgresql.org/message-id/YzIAGCrxoXibAKOD%40tamriel.snowman.net\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Thu, 17 Nov 2022 16:24:24 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Thu, Nov 17, 2022 at 7:24 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> But I'm fine if you'd like to move on with the SET ROLE privilege\n> instead, as long as we believe it grants a stable set of capabilities\n> (and conversely, that if the SET ROLE privilege is revoked, that it\n> revokes a stable set of capabilities).\n\nOK.\n\nHere's a rebased v3 to see what cfbot thinks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 18 Nov 2022 12:50:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On 2022-Nov-18, Robert Haas wrote:\n\n> On Thu, Nov 17, 2022 at 7:24 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > But I'm fine if you'd like to move on with the SET ROLE privilege\n> > instead, as long as we believe it grants a stable set of capabilities\n> > (and conversely, that if the SET ROLE privilege is revoked, that it\n> > revokes a stable set of capabilities).\n> \n> OK.\n> \n> Here's a rebased v3 to see what cfbot thinks.\n\nI think this hunk in dumpRoleMembership() leaves an unwanted line\nbehind.\n\n /*\n- * Previous versions of PostgreSQL also did not have a grant-level\n+ * Previous versions of PostgreSQL also did not have grant-level options.\n * INHERIT option.\n */\n\n(I was just looking at the doc part of this patch.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)\n\n\n",
"msg_date": "Fri, 18 Nov 2022 19:43:14 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 12:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Here's a rebased v3 to see what cfbot thinks.\n\ncfbot is happy, so committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 18 Nov 2022 13:43:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On 18-11-2022 at 19:43, Robert Haas wrote:\n> On Fri, Nov 18, 2022 at 12:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Here's a rebased v3 to see what cfbot thinks.\n> \n> cfbot is happy, so committed.\n\nIn grant.sgml,\n\n 'actualy permisions'\n\nlooks a bit unorthodox.\n\n\n\n",
"msg_date": "Fri, 18 Nov 2022 19:50:04 +0100",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 1:50 PM Erik Rijkers <er@xs4all.nl> wrote:\n> In grant.sgml,\n>\n> 'actualy permisions'\n>\n> looks a bit unorthodox.\n\nFixed that, and the other mistake Álvaro spotted, and also bumped\ncatversion because I forgot that earlier.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 18 Nov 2022 16:19:15 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On 18-11-2022 at 22:19, Robert Haas wrote:\n> On Fri, Nov 18, 2022 at 1:50 PM Erik Rijkers <er@xs4all.nl> wrote:\n>> In grant.sgml,\n>>\n>> 'actualy permisions'\n>>\n>> looks a bit unorthodox.\n> \n> Fixed that, and the other mistake Álvaro spotted, and also bumped\n> catversion because I forgot that earlier.\n\nSorry to be nagging but\n\n 'permisions' should be\n 'permissions'\n\nas well.\n\n\nAnd as I'm nagging anyway: I also wondered whether the word order could \nimprove:\n\n- Word order as it stands:\nHowever, the actual permissions conferred depend on the options \nassociated with the grant.\n\n-- maybe better:\nHowever, the permissions actually conferred depend on the options \nassociated with the grant.\n\nBut I'm not sure.\n\n\nThanks,\n\nErik\n\n\n",
"msg_date": "Sat, 19 Nov 2022 06:28:11 +0100",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 04:19:15PM -0500, Robert Haas wrote:\n> Fixed that, and the other mistake Álvaro spotted, and also bumped\n> catversion because I forgot that earlier.\n\nI was looking at this code yesterday, to see today that psql's\ncompletion should be completed with this new clause, similarly to ADMIN\nand INHERIT.\n--\nMichael",
"msg_date": "Sat, 19 Nov 2022 15:00:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 04:19:15PM -0500, Robert Haas wrote:\n> On Fri, Nov 18, 2022 at 1:50 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > In grant.sgml,\n> >\n> > 'actualy permisions'\n> >\n> > looks a bit unorthodox.\n> \n> Fixed that, and the other mistake �lvaro spotted, and also bumped\n> catversion because I forgot that earlier.\n\nI think Erik was trying to report that both words were misspelled. I\nadded to my typos to be fixed in batch if you want to wait.\n\n\n",
"msg_date": "Sat, 19 Nov 2022 12:41:30 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Sat, Nov 19, 2022 at 1:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Nov 18, 2022 at 04:19:15PM -0500, Robert Haas wrote:\n> > Fixed that, and the other mistake Álvaro spotted, and also bumped\n> > catversion because I forgot that earlier.\n>\n> I was looking at this code yesterday, to see today that psql's\n> completion should be completed with this new clause, similary to ADMIN\n> and INHERIT.\n\nSeems like a good idea but I'm not sure about this hunk:\n\n TailMatches(\"GRANT|REVOKE\", \"ALTER\", \"SYSTEM\") ||\n- TailMatches(\"REVOKE\", \"GRANT\", \"OPTION\", \"FOR\", \"ALTER\", \"SYSTEM\"))\n+ TailMatches(\"REVOKE\", \"GRANT\", \"OPTION\", \"FOR\", \"ALTER\", \"SYSTEM\") ||\n+ TailMatches(\"REVOKE\", \"GRANT\", \"OPTION\", \"FOR\", \"SET\"))\n\nThat might be a correct change for other reasons, but it doesn't seem\nrelated to this patch. The rest looks good.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 21 Nov 2022 10:45:53 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "идея и её реализация насчёт Параллельного чтения - как вам? Мне \nпоказалось, интересно и полезно. Но это, думаю, одноразовая акция. \nВремени и сил на это довольно много ухлопал, хотя вроде дело нехитрое \n:) Стоило?\n\n15.11.2022 20:07, Robert Haas пишет:\n> Bump.\n>\n> Discussion has trailed off here, but I still don't see that we have a\n> better way forward here than what I proposed on September 30th. Two\n> people have commented. Nathan said that he wasn't sure this was best\n> (neither am I) but that he didn't have a better idea either (neither\n> do I). Stephen proposed decomposing ADMIN OPTION, which is not my\n> preference, but even if it turns out that we want to pursue that\n> approach, I do not think it would make sense to bundle that into this\n> patch, because there isn't enough overlap between that change and this\n> change to justify that treatment.\n>\n> If anyone else wants to comment, or if either of those people want to\n> comment further, please speak up soon. Otherwise, I am going to press\n> forward with committing this. If we do not, we will continue to have\n> no way of restricting of SET ROLE, and we will continue to have no way\n> of preventing the creation of objects owned by predefined roles by\n> users who have been granted those roles. As far as I am aware, no one\n> is opposed to those goals, and in fact I think everyone who has\n> commented thinks that it would be good to do something. If a better\n> idea than what I've implemented comes along, I'm happy to defer to it,\n> but I think this is one of those cases in which there probably isn't\n> any totally satisfying solution, and yet doing nothing is not a\n> superior alternative.\n>\n> Thanks,\n>\n\n\n",
"msg_date": "Tue, 22 Nov 2022 13:09:29 +0300",
"msg_from": "igor levshin <i.levshin@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 10:45:53AM -0500, Robert Haas wrote:\n> Seems like a good idea but I'm not sure about this hunk:\n> \n> TailMatches(\"GRANT|REVOKE\", \"ALTER\", \"SYSTEM\") ||\n> - TailMatches(\"REVOKE\", \"GRANT\", \"OPTION\", \"FOR\", \"ALTER\", \"SYSTEM\"))\n> + TailMatches(\"REVOKE\", \"GRANT\", \"OPTION\", \"FOR\", \"ALTER\", \"SYSTEM\") ||\n> + TailMatches(\"REVOKE\", \"GRANT\", \"OPTION\", \"FOR\", \"SET\"))\n> \n> That might be a correct change for other reasons, but it doesn't seem\n> related to this patch. The rest looks good.\n\n(Forgot to press \"Send\" a few days ago..)\n\nHmm, right, I see your point. I have just moved that to reorder the\nterms alphabetically, but moving the check on REVOKE GRANT OPTION FOR\nSET is not mandatory. I have moved it back in its previous\nposition, leading to less noise in the diffs, and applied the rest as\nof 9d0cf57.\nThanks!\n--\nMichael",
"msg_date": "Wed, 14 Dec 2022 11:44:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Thu, Nov 17, 2022 at 04:24:24PM -0800, Jeff Davis wrote:\n> On Thu, 2022-11-17 at 16:52 -0500, Robert Haas wrote:\n> > But I think the bigger reason is that, in my opinion, this proposal is\n> > more generally useful, because it takes no position on why you wish to\n> > disallow SET ROLE. You can just disallow it in some cases and allow it in\n> > others, and that's fine.\n\nIn this commit 3d14e17, the documentation takes the above \"no position\". The\nimplementation does not, in that WITH SET FALSE has undocumented ability to\nblock ALTER ... OWNER TO, not just SET ROLE. Leaving that undocumented feels\nweird to me, but documenting it would take the position that WITH SET FALSE is\nrelevant to the security objective of preventing object creation like the\nexample in the original post of this thread. How do you weigh those\ndocumentation trade-offs?\n\n> I agree that the it's more flexible in the sense that it does what it\n> does, and administrators can use it if it's useful for them. That means\n> we don't need to understand the actual goals as well; but it also means\n> that it's harder to determine the consequences if we tweak the behavior\n> (or any related behavior) later.\n\nI have similar concerns. For the original post's security objective, the role\nmust also own no objects of certain types. Otherwise, WITH SET FALSE members\ncan use operations like CREATE OR REPLACE FUNCTION or CREATE INDEX to escalate\nto full role privileges:\n\ncreate user unpriv;\ngrant pg_maintain to unpriv with set false;\ncreate schema maint authorization pg_maintain\n create table t (c int);\ncreate or replace function maint.f() returns int language sql as 'select 1';\nalter function maint.f() owner to pg_maintain;\nset session authorization unpriv;\ncreate or replace function maint.f() returns int language sql security definer as 'select 1';\ncreate index on maint.t(c);\n\n\n",
"msg_date": "Fri, 30 Dec 2022 22:16:40 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
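The escalation example in the message above relies on the target role still owning replaceable objects; the SET option itself blocks a different set of operations. A minimal sketch of what WITH SET FALSE does restrict, assuming a server that includes this patch (PostgreSQL 16 or later) and using illustrative role names:

```sql
-- Hedged sketch; 'alice' and 'accounting' are illustrative names.
CREATE ROLE alice LOGIN;
CREATE ROLE accounting;
GRANT accounting TO alice WITH SET FALSE;   -- membership without SET ROLE

SET SESSION AUTHORIZATION alice;
SET ROLE accounting;                        -- fails: permission denied
CREATE TABLE alice_t (c int);
ALTER TABLE alice_t OWNER TO accounting;    -- also fails: requires ability
                                            -- to SET ROLE to the new owner
```

As the example in the message shows, though, SET FALSE alone does not close the escalation path when the granted role already owns functions or tables that the member can redefine or index.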
{
"msg_contents": "On Sat, Dec 31, 2022 at 1:16 AM Noah Misch <noah@leadboat.com> wrote:\n> On Thu, Nov 17, 2022 at 04:24:24PM -0800, Jeff Davis wrote:\n> > On Thu, 2022-11-17 at 16:52 -0500, Robert Haas wrote:\n> > > But I think the bigger reason is that, in my opinion, this proposal is\n> > > more generally useful, because it takes no position on why you wish to\n> > > disallow SET ROLE. You can just disallow it in some cases and allow it in\n> > > others, and that's fine.\n>\n> In this commit 3d14e17, the documentation takes the above \"no position\". The\n> implementation does not, in that WITH SET FALSE has undocumented ability to\n> block ALTER ... OWNER TO, not just SET ROLE. Leaving that undocumented feels\n> weird to me, but documenting it would take the position that WITH SET FALSE is\n> relevant to the security objective of preventing object creation like the\n> example in the original post of this thread. How do you weigh those\n> documentation trade-offs?\n\nIn general, I favor trying to make the documentation clearer and more\ncomplete. Intentionally leaving things undocumented doesn't seem like\nthe right course of action to me. That said, the pre-existing\ndocumentation in this area is so incomplete that it's sometimes hard\nto figure out where to add new information - and it made no mention of\nthe privileges required for ALTER .. OWNER TO. I didn't immediately\nknow where to add that, so did nothing. Maybe I should have tried\nharder, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Jan 2023 14:43:10 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Tue, Jan 03, 2023 at 02:43:10PM -0500, Robert Haas wrote:\n> On Sat, Dec 31, 2022 at 1:16 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Thu, Nov 17, 2022 at 04:24:24PM -0800, Jeff Davis wrote:\n> > > On Thu, 2022-11-17 at 16:52 -0500, Robert Haas wrote:\n> > > > But I think the bigger reason is that, in my opinion, this proposal is\n> > > > more generally useful, because it takes no position on why you wish to\n> > > > disallow SET ROLE. You can just disallow it in some cases and allow it in\n> > > > others, and that's fine.\n> >\n> > In this commit 3d14e17, the documentation takes the above \"no position\". The\n> > implementation does not, in that WITH SET FALSE has undocumented ability to\n> > block ALTER ... OWNER TO, not just SET ROLE. Leaving that undocumented feels\n> > weird to me, but documenting it would take the position that WITH SET FALSE is\n> > relevant to the security objective of preventing object creation like the\n> > example in the original post of this thread. How do you weigh those\n> > documentation trade-offs?\n> \n> In general, I favor trying to make the documentation clearer and more\n> complete. Intentionally leaving things undocumented doesn't seem like\n> the right course of action to me.\n\nFor what it's worth, I like to leave many things undocumented, but not this.\n\n> That said, the pre-existing\n> documentation in this area is so incomplete that it's sometimes hard\n> to figure out where to add new information - and it made no mention of\n> the privileges required for ALTER .. OWNER TO. I didn't immediately\n> know where to add that, so did nothing.\n\nI'd start with locations where the patch already added documentation. In the\nabsence of documentation otherwise, a reasonable person could think WITH SET\ncontrols just SET ROLE. The documentation of WITH SET is a good place to list\nwhat else you opted for it to control. 
If the documentation can explain the\nset of principles that would be used to decide whether WITH SET should govern\nanother thing in the future, that would provide extra value.\n\n\n",
"msg_date": "Tue, 3 Jan 2023 14:03:11 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 5:03 PM Noah Misch <noah@leadboat.com> wrote:\n> I'd start with locations where the patch already added documentation. In the\n> absence of documentation otherwise, a reasonable person could think WITH SET\n> controls just SET ROLE. The documentation of WITH SET is a good place to list\n> what else you opted for it to control. If the documentation can explain the\n> set of principles that would be used to decide whether WITH SET should govern\n> another thing in the future, that would provide extra value.\n\n From the point of view of the code, we currently have four different\nfunctions that make inquiries about role membership:\nhas_privs_of_role, is_member_of_role, is_member_of_role_nosuper, and\nmember_can_set_role.\n\nI spent a while looking at how has_privs_of_role() is used. Basically,\nthere are three main patterns. First, in some places, you must have\nthe privileges of a certain role (typically, either a predefined role\nor the role that owns some object) or the operation will fail with an\nerror indicating that you don't have sufficient permissions. Second,\nthere are places where having the privileges of a certain role exempts\nyou from some other permissions check; if you have neither, you'll get\nan error. An example is that having the permissions of\npg_read_all_data substitutes for a select privilege. And third, there\nare cases where you definitely won't get an error, but the behavior\nwill vary depending on whether you have the privileges of some role.\nFor instance, you can see more data in pg_stat_replication,\npg_stat_wal_receiver, and other stats views if you have\npg_read_all_stats. The GUC values reported in EXPLAIN output will\nexclude superuser-only values unless you have pg_read_all_settings. It\nlooks like some maintenance commands like CLUSTER and VACUUM\ncompletely skip over, or just warn about, cases where permission is\nlacking. 
And weirdest of all, having the privileges of a role means\nthat the RLS policies applied to that role also apply to you. That's\nodd because it makes permissions not strictly additive.\n\nmember_can_set_role() controls (a) whether you can SET ROLE to some\nother role, (b) whether you can alter the owner of an existing object\nto that role, and (c) whether you can create an object owned by some\nother user in cases where the CREATE command has an option for that,\nlike CREATE DATABASE ... OWNER.\n\nis_member_of_role_nosuper() is used to prevent creation of role\nmembership loops, and for pg_hba.conf matching.\n\nThe only remaining call to is_member_of_role() is in\npg_role_aclcheck(), which just supports the SQL-callable\npg_has_role(). has_privs_of_role() and member_can_set_role() are used\nhere, too.\n\nHow much of this should we document, do you think? If we're going to\ngo into the details, I sort of feel like it would be good to somehow\ncontrast what is attached to membership with what is attached to the\nINHERIT option or the SET option. I think it would be slightly\nsurprising not to mention the way that RLS rules are triggered by\nprivilege inheritance yet include the fact that the SET option affects\nALTER ... OWNER TO, but maybe I've got the wrong idea. An exhaustive\nconcordance of what depends on what might be too much, or maybe it\nisn't, but it's probably good if the level of detail is pretty\nuniform.\n\nYour thoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Jan 2023 15:56:34 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
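The membership distinctions catalogued in the message above are also visible at the SQL level. A hedged sketch, assuming PostgreSQL 16 or later (where pg_auth_members gained inherit_option and set_option columns and pg_has_role() accepts 'SET'); the role names are illustrative:

```sql
-- Inspect how a role was granted, including the SET option:
SELECT roleid::regrole  AS granted_role,
       member::regrole  AS member,
       admin_option, inherit_option, set_option
  FROM pg_auth_members
 WHERE member = 'alice'::regrole;

-- pg_has_role() exposes the same distinctions:
SELECT pg_has_role('alice', 'accounting', 'USAGE'),   -- inherits privileges?
       pg_has_role('alice', 'accounting', 'MEMBER'),  -- member at all?
       pg_has_role('alice', 'accounting', 'SET');     -- may SET ROLE to it?
```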
{
"msg_contents": "On Wed, Jan 04, 2023 at 03:56:34PM -0500, Robert Haas wrote:\n> On Tue, Jan 3, 2023 at 5:03 PM Noah Misch <noah@leadboat.com> wrote:\n> > I'd start with locations where the patch already added documentation. In the\n> > absence of documentation otherwise, a reasonable person could think WITH SET\n> > controls just SET ROLE. The documentation of WITH SET is a good place to list\n> > what else you opted for it to control. If the documentation can explain the\n> > set of principles that would be used to decide whether WITH SET should govern\n> > another thing in the future, that would provide extra value.\n> \n> From the point of view of the code, we currently have four different\n> functions that make inquiries about role membership:\n> has_privs_of_role, is_member_of_role, is_member_of_role_nosuper, and\n> member_can_set_role.\n> \n> I spent a while looking at how has_privs_of_role() is used. Basically,\n> there are three main patterns. First, in some places, you must have\n> the privileges of a certain role (typically, either a predefined role\n> or the role that owns some object) or the operation will fail with an\n> error indicating that you don't have sufficient permissions. Second,\n> there are places where having the privileges of a certain role exempts\n> you from some other permissions check; if you have neither, you'll get\n> an error. An example is that having the permissions of\n> pg_read_all_data substitutes for a select privilege. And third, there\n> are cases where you definitely won't get an error, but the behavior\n> will vary depending on whether you have the privileges of some role.\n> For instance, you can see more data in pg_stat_replication,\n> pg_stat_wal_receiver, and other stats views if you have\n> pg_read_all_stats. The GUC values reported in EXPLAIN output will\n> exclude superuser-only values unless you have pg_read_all_settings. 
It\n> looks like some maintenance commands like CLUSTER and VACUUM\n> completely skip over, or just warn about, cases where permission is\n> lacking. And weirdest of all, having the privileges of a role means\n> that the RLS policies applied to that role also apply to you. That's\n> odd because it makes permissions not strictly additive.\n> \n> member_can_set_role() controls (a) whether you can SET ROLE to some\n> other role, (b) whether you can alter the owner of an existing object\n> to that role, and (c) whether you can create an object owned by some\n> other user in cases where the CREATE command has an option for that,\n> like CREATE DATABASE ... OWNER.\n> \n> is_member_of_role_nosuper() is used to prevent creation of role\n> membership loops, and for pg_hba.conf matching.\n> \n> The only remaining call to is_member_of_role() is in\n> pg_role_aclcheck(), which just supports the SQL-callable\n> pg_has_role(). has_privs_of_role() and member_can_set_role() are used\n> here, too.\n> \n> How much of this should we document, do you think?\n\nRough thoughts:\n\nDo document:\n- For pg_read_all_stats, something like s/Read all pg_stat_/See all rows of all pg_stat_/\n- At CREATE POLICY and/or similar places, explain the semantics used to judge\n the applicability of role_name to a given query.\n\nDon't document:\n- Mechanism for preventing membership loops.\n\nAlready documented adequately:\n- \"First, in some places, you must have the privileges of a certain role\" is\n documented through language like \"You must own the table\".\n- pg_read_all_data\n- EXPLAIN. 
I'm not seeing any setting that's both GUC_SUPERUSER_ONLY and\n GUC_EXPLAIN.\n- SQL-level pg_has_role().\n\nUnsure:\n- At INHERIT, cover the not-strictly-additive RLS consequences.\n\n> If we're going to\n> go into the details, I sort of feel like it would be good to somehow\n> contrast what is attached to membership with what is attached to the\n> INHERIT option or the SET option.\n\nWorks for me.\n\n> I think it would be slightly\n> surprising not to mention the way that RLS rules are triggered by\n> privilege inheritance yet include the fact that the SET option affects\n> ALTER ... OWNER TO, but maybe I've got the wrong idea.\n\nThe CREATE POLICY syntax and docs show the role_name parameter, though they\ndon't detail how exactly the server determines whether a given role applies at\na given moment. The docs are silent on the SET / OWNER TO connection. Hence,\nI think the doc gap around SET / OWNER TO is more acute than the doc gap\naround this RLS behavior.\n\nThanks,\nnm\n\n\n",
"msg_date": "Fri, 6 Jan 2023 21:00:41 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
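On the CREATE POLICY point raised in the message above, a small sketch of the behavior in question (names are illustrative): the policy's role_name governs queries by that role and by any role holding its privileges through inheritance, which is the RLS consequence of INHERIT the message calls not strictly additive.

```sql
CREATE ROLE staff;
CREATE TABLE docs (owner text, body text);
ALTER TABLE docs ENABLE ROW LEVEL SECURITY;

-- Applies to 'staff' and to every role that inherits staff's
-- privileges, not merely to sessions that ran SET ROLE staff.
CREATE POLICY staff_docs ON docs
    FOR SELECT TO staff
    USING (owner = current_user);
```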
{
"msg_contents": "On Fri, 2022-12-30 at 22:16 -0800, Noah Misch wrote:\n> create user unpriv;\n> grant pg_maintain to unpriv with set false;\n> create schema maint authorization pg_maintain\n> create table t (c int);\n> create or replace function maint.f() returns int language sql as\n> 'select 1';\n> alter function maint.f() owner to pg_maintain;\n> set session authorization unpriv;\n> create or replace function maint.f() returns int language sql\n> security definer as 'select 1';\n> create index on maint.t(c);\n\nI dug into this case, as well as some mirror-image risks associated\nwith SECURITY INVOKER. This goes on a bit of a tangent and I'm sure I'm\nretreading what others already know.\n\nThe risks of SECURITY DEFINER are about ownership: by owning something\nwith code attached, you're responsible to make sure the code is safe\nand can only be run by the right users. Additionally, there are a\nnumber of ways someone might come to own some code other than by\ndefining it themselves. Robert addressed the SET ROLE, CREATE ... OWNER\nand the OWNER TO paths; but that still leaves the replace-function path\nand the create index paths that you illustrated. As I said earlier I'm\nnot 100% satisfied with SET ROLE as a privilege; but I'm more\ncomfortable that it has a defined scope: the SET ROLE privilege should\ncontrol paths that can \"gift\" code to that user.\n\nThe risks of SECURITY INVOKER are more serious. It inherently means\nthat one user is writing code, and another is executing it. And in the\nSQL world of triggers, views, expression indexes and logical\nreplication, the invoker often doesn't know what they are invoking.\nThere are search path risks, risks associated with resolving the right\nfunction/operator/cast, risks of concurrent DDL (i.e. changing a\nfunction definition right before a superuser executes it), etc. It\nseverely limits the kinds of trust models you can use in logical\nreplication. 
And SECURITY INVOKER weirdly inverts the trust\nrelationship of a GRANT: if A grants to B, then B must *completely*\ntrust A in order to exercise that new privilege because A can inject\narbitrary SECURITY INVOKER code in front of the object.\n\nUNIX basically operates on a SECURITY INVOKER model, so I guess that\nmeans that it can work. But then again, grepping a file doesn't execute\narbitrary code from inside that file (though there are bugs\nsometimes... see [1]). It just seems like the wrong model for SQL.\n\n[ Aside: that probably explains why the SQL spec defaults to SECURITY\nDEFINER. ]\n\nBrainstorming, I think we can do more to mitigate the risks of SECURITY\nINVOKER:\n\n* If running a command that would invoke a SECURITY INVOKER function\nthat is not owned by superuser or a member of the invoker's role, throw\nan error instead. We could control this with a GUC for compatibility.\n\n* Have SECURITY PUBLIC which executes with minimal privileges, which\nwould be good for convenience functions that might be used in an index\nexpression or view.\n\n* Another idea is to separate out read privileges -- a SECURITY INVOKER\nthat is read-only is sounds less dangerous (though not without some\nrisk).\n\n* Prevent extension scripts from running SECURITY INVOKER functions.\n\n\n[1]\nhttps://lcamtuf.blogspot.com/2014/10/psa-dont-run-strings-on-untrusted-files.html\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Mon, 09 Jan 2023 23:28:55 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Sat, Jan 7, 2023 at 12:00 AM Noah Misch <noah@leadboat.com> wrote:\n> The docs are silent on the SET / OWNER TO connection. Hence,\n\nReviewing the documentation again today, I realized that the\ndocumentation describes the rules for changing the ownership of an\nobject in a whole bunch of places which this patch failed to update.\nHere's a patch to update all of the places I found.\n\nI suspect that these changes will mean that we don't also need to\nadjust the discussion of the SET option itself, but let me know what\nyou think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 10 Jan 2023 11:06:52 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 2:28 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> The risks of SECURITY INVOKER are more serious. It inherently means\n> that one user is writing code, and another is executing it. And in the\n> SQL world of triggers, views, expression indexes and logical\n> replication, the invoker often doesn't know what they are invoking.\n> There are search path risks, risks associated with resolving the right\n> function/operator/cast, risks of concurrent DDL (i.e. changing a\n> function definition right before a superuser executes it), etc. It\n> severely limits the kinds of trust models you can use in logical\n> replication. And SECURITY INVOKER weirdly inverts the trust\n> relationship of a GRANT: if A grants to B, then B must *completely*\n> trust A in order to exercise that new privilege because A can inject\n> arbitrary SECURITY INVOKER code in front of the object.\n\nYes. I think it's extremely difficult to operate a PostgreSQL database\nwith mutually untrusting users. If the high-privilege users don't\ntrust the regular users, they must also make very little use of the\ndatabase and only in carefully circumscribed ways. If not, the whole\nsecurity model unravels really fast. It would certainly be nice to do\nbetter here.\n\n> UNIX basically operates on a SECURITY INVOKER model, so I guess that\n> means that it can work. But then again, grepping a file doesn't execute\n> arbitrary code from inside that file (though there are bugs\n> sometimes... see [1]). It just seems like the wrong model for SQL.\n\nI often think about the UNIX model to better understand the problems\nwe have in PostgreSQL. I don't think that there's any real theoretical\ndifference between the cases, but there are practical differences\nnearly all of which are unfavorable to PostgreSQL. For example, when\nyou log into your UNIX account, you have a home directory which is\npre-created. 
Your path is likely configured to contain only root-owned\ndirectories and perhaps directories within your home directory that\nare controlled by you, and the permissions on the root-owned\ndirectories are locked down tight. That's because people figured out\nin the 1970s and 1980s that if other people could write executable\ncode into a path you were likely to search, your account was probably\ngoing to get compromised.\n\nNow, in PostgreSQL, the equivalent of a home directory is a user\nschema. We set things up to search those by default if they exist, but\nwe don't create them by default. We also put the public schema in the\ndefault search path and, up until very recently, it was writeable by\ndefault. In practice, many users probably put back write permission on\nthat schema, partly because if they don't, unprivileged users can't\ncreate database objects anywhere at all. The practical effect of this\nis that, when you log into a UNIX system, you're strongly encouraged\nto access only things that are owned by you or root, and any new stuff\nyou create will be in a location where nobody but you is likely to\ntouch it. On the other hand, when you log into a PostgreSQL system,\nyou're set up by default to access objects created by other\nunprivileged users and you may have nowhere to put your own objects\nwhere those users won't also be accessing your stuff.\n\nSo the risks, which in theory are all very similar, are in practice\nfar greater in the PostgreSQL context, basically because our default\nsetup is about 40 years behind the times in terms of implementing best\npractices. At least we've locked down write permission on pg_catalog.\nI think some early UNIX systems didn't even do that, or not well. But\nthat's about the end of the good things that I have to say about what\nwe're doing in this area.\n\nTo be fair, I think many security people also consider it wise to\nassume that a local unprivileged UNIX user can probably find a way to\nescalate to root. 
There are a lot of setuid binaries on a\nnormally-configured UNIX system, and you only need to find one of them\nthat has an exploitable vulnerability. Those are the equivalent of\nSECURITY DEFINER privileges, and I don't think we ship any of those in\na default configuration. In that regard, we're perhaps better-secured\nthan UNIX. Unfortunately, I think it is probably still wise to assume\nthat an unprivileged PostgreSQL user can find some way of getting\nsuperuser if they want -- not only because of Trojan horse attacks\nbased on leaving security-invoker functions or procedures or operators\nlying around, but also because I strongly suspect there are more\nescalate-to-superuser bugs in the code than we've found yet. Those\nwe've not found, or have found but have not fixed, may still be known\nto bad actors.\n\n> [ Aside: that probably explains why the SQL spec defaults to SECURITY\n> DEFINER. ]\n\nI doubt that SECURITY DEFINER is safer in general than SECURITY\nINVOKER. That'd be the equivalent of having binaries installed setuid\nby default, which would be insane. I think it is right to regard\nSECURITY DEFINER as the bigger threat by far. The reason it doesn't\nalways seem that way with PostgreSQL, at least in my view, is because\nwe make it so atrociously easy to accidentally invoke executable code\nsomewhere. If you start by assuming that you're probably going to\nexecute some random other user's code by accident, well then in that\nworld yes you would prefer to at least have it be running as them, not\nyou. But that's not really safe anyway. Sure, if the code runs as\nthem, they can't so easily usurp your privileges, but they can still\nlog everything you do, or make it fail, or make it take forever. Those\nthings are less serious than outright account takeover, but nobody\nstands up a web site and hopes that it only gets DDOS'd rather than\nvandalized. 
What you want is for it to stay up.\n\n> Brainstorming, I think we can do more to mitigate the risks of SECURITY\n> INVOKER:\n>\n> * If running a command that would invoke a SECURITY INVOKER function\n> that is not owned by superuser or a member of the invoker's role, throw\n> an error instead. We could control this with a GUC for compatibility.\n>\n> * Have SECURITY PUBLIC which executes with minimal privileges, which\n> would be good for convenience functions that might be used in an index\n> expression or view.\n>\n> * Another idea is to separate out read privileges -- a SECURITY INVOKER\n> that is read-only is sounds less dangerous (though not without some\n> risk).\n>\n> * Prevent extension scripts from running SECURITY INVOKER functions.\n\nIt might be best to repost some of these ideas on a new thread with a\nrelevant subject line, but I agree that there's some potential here.\nYour first idea reminds me a lot of the proposal Tom made in\nhttps://www.postgresql.org/message-id/19327.1533748538@sss.pgh.pa.us\n-- except that his mechanism is more general, since you can say whose\ncode you trust and whose code you don't trust. Noah had a competing\nversion of that patch, too. But we never settled on an approach. I\nstill think something like this would be a good idea, and the fact\nthat you've apparently-independently come up with a similar notion\njust reinforces that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Jan 2023 11:45:18 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Tue, 2023-01-10 at 11:45 -0500, Robert Haas wrote:\n> So the risks, which in theory are all very similar, are in practice\n> far greater in the PostgreSQL context, basically because our default\n> setup is about 40 years behind the times in terms of implementing\n> best\n> practices.\n\nI agree that huge improvements could be made with improvements to best\npractices/defaults.\n\nBut there are some differences that are harder to fix that way. In\npostgres, one can attach arbitrary code to pretty much anything, so you\nneed to trust everything you touch. There is no safe postgres\nequivalent to grepping an untrusted file.\n\n\n> It might be best to repost some of these ideas on a new thread with a\n> relevant subject line, but I agree that there's some potential here.\n> Your first idea reminds me a lot of the proposal Tom made in\n> https://www.postgresql.org/message-id/19327.1533748538@sss.pgh.pa.us\n> -- except that his mechanism is more general, since you can say whose\n> code you trust and whose code you don't trust. Noah had a competing\n> version of that patch, too. But we never settled on an approach. I\n> still think something like this would be a good idea, and the fact\n> that you've apparently-independently come up with a similar notion\n> just reinforces that.\n\nWill do, thank you for the reference.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Tue, 10 Jan 2023 13:11:42 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 11:06:52AM -0500, Robert Haas wrote:\n> On Sat, Jan 7, 2023 at 12:00 AM Noah Misch <noah@leadboat.com> wrote:\n> > The docs are silent on the SET / OWNER TO connection. Hence,\n> \n> Reviewing the documentation again today, I realized that the\n> documentation describes the rules for changing the ownership of an\n> object in a whole bunch of places which this patch failed to update.\n> Here's a patch to update all of the places I found.\n\nA \"git grep 'direct or indirect mem'\" found a few more:\n\ndoc/src/sgml/ref/alter_collation.sgml:42: To alter the owner, you must also be a direct or indirect member of the new\ndoc/src/sgml/ref/create_database.sgml:92: role, you must be a direct or indirect member of that role,\ndoc/src/sgml/ref/create_schema.sgml:92: owned by another role, you must be a direct or indirect member of\n\nI wondered if the new recurring phrase \"must be able to SET ROLE\" should be\nmore specific, e.g. one of \"must have\n{permission,authorization,authority,right} to SET ROLE\". But then I stopped\nwondering and figured \"be able to\" is sufficient.\n\n> I suspect that these changes will mean that we don't also need to\n> adjust the discussion of the SET option itself, but let me know what\n> you think.\n\nI still think docs for the SET option itself should give a sense of the\ndiversity of things it's intended to control. It could be simple. A bunch of\nthe sites you're modifying are near text like \"These restrictions enforce that\naltering the owner doesn't do anything you couldn't do by dropping and\nrecreating the aggregate function.\" Perhaps the main SET doc could say\nsomething about how it restricts other things that would yield equivalent\noutcomes. (Incidentally, DROP is another case of something one likely doesn't\nwant the WITH SET FALSE member using. I think that reinforces a point I wrote\nupthread. 
To achieve the original post's security objective, the role must\nown no objects whatsoever.)\n\n\n",
"msg_date": "Wed, 11 Jan 2023 07:16:55 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 10:16 AM Noah Misch <noah@leadboat.com> wrote:\n> A \"git grep 'direct or indirect mem'\" found a few more:\n>\n> doc/src/sgml/ref/alter_collation.sgml:42: To alter the owner, you must also be a direct or indirect member of the new\n> doc/src/sgml/ref/create_database.sgml:92: role, you must be a direct or indirect member of that role,\n> doc/src/sgml/ref/create_schema.sgml:92: owned by another role, you must be a direct or indirect member of\n\nAh, thanks.\n\n> I wondered if the new recurring phrase \"must be able to SET ROLE\" should be\n> more specific, e.g. one of \"must have\n> {permission,authorization,authority,right} to SET ROLE\". But then I stopped\n> wondering and figured \"be able to\" is sufficient.\n\nI think so, too. Note the wording of the error message in check_can_set_role().\n\n> I still think docs for the SET option itself should give a sense of the\n> diversity of things it's intended to control. It could be simple. A bunch of\n> the sites you're modifying are near text like \"These restrictions enforce that\n> altering the owner doesn't do anything you couldn't do by dropping and\n> recreating the aggregate function.\" Perhaps the main SET doc could say\n> something about how it restricts other things that would yield equivalent\n> outcomes. (Incidentally, DROP is another case of something one likely doesn't\n> want the WITH SET FALSE member using. I think that reinforces a point I wrote\n> upthread. To achieve the original post's security objective, the role must\n> own no objects whatsoever.)\n\nI spent a while on this. The attached is as well as I was able to figure\nout how to do. What do you think?\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Jan 2023 15:13:29 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 03:13:29PM -0500, Robert Haas wrote:\n> On Wed, Jan 11, 2023 at 10:16 AM Noah Misch <noah@leadboat.com> wrote:\n> > I still think docs for the SET option itself should give a sense of the\n> > diversity of things it's intended to control. It could be simple. A bunch of\n> > the sites you're modifying are near text like \"These restrictions enforce that\n> > altering the owner doesn't do anything you couldn't do by dropping and\n> > recreating the aggregate function.\" Perhaps the main SET doc could say\n> > something about how it restricts other things that would yield equivalent\n> > outcomes. (Incidentally, DROP is another case of something one likely doesn't\n> > want the WITH SET FALSE member using. I think that reinforces a point I wrote\n> > upthread. To achieve the original post's security objective, the role must\n> > own no objects whatsoever.)\n> \n> I spent a while on this. The attached is as well I was able to figure\n> out how to do. What do you think?\n\nI think this is good to go modulo one or two things:\n\n> Subject: [PATCH v2] More documentation update for GRANT ... WITH SET OPTION.\n> \n> Update the reference pages for various ALTER commands that\n> mentioned that you must be a member of role that will be the\n> new owner to instead say that you must be able to SET ROLE\n> to the new owner. Update ddl.sgml's generate statement on this\n\ns/generate/general/\n\n> --- a/doc/src/sgml/ref/grant.sgml\n> +++ b/doc/src/sgml/ref/grant.sgml\n> @@ -298,6 +298,20 @@ GRANT <replaceable class=\"parameter\">role_name</replaceable> [, ...] TO <replace\n> This option defaults to <literal>TRUE</literal>.\n> </para>\n> \n> + <para>\n> + To create an object owned by another role or give ownership of an existing\n> + object to another role, you must have the ability to <literal>SET\n> + ROLE</literal> to that role; otherwise, commands such as <literal>ALTER\n> + ... OWNER TO</literal> or <literal>CREATE DATABASE ... 
OWNER</literal>\n> + will fail. However, a user who inherits the privileges of a role but does\n> + not have the ability to <literal>SET ROLE</literal> to that role may be\n> + able to obtain full access to the role by manipulating existing objects\n> + owned by that role (e.g. they could redefine an existing function to act\n> + as a Trojan horse). Therefore, if a role's privileges are to be inherited\n> + but should not be accessible via <literal>SET ROLE</literal>, it should not\n> + own any SQL objects.\n> + </para>\n\nI recommend deleting the phrase \"are to be inherited but\" as superfluous. The\nearlier sentence's mention will still be there. WITH SET FALSE + NOINHERIT is\na combination folks should not use or should use only when the role has no\nknown privileges.\n\n\n",
"msg_date": "Wed, 11 Jan 2023 21:09:32 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 12:09 AM Noah Misch <noah@leadboat.com> wrote:\n> I think this is good to go modulo one or two things:\n>\n> > Subject: [PATCH v2] More documentation update for GRANT ... WITH SET OPTION.\n> >\n> > Update the reference pages for various ALTER commands that\n> > mentioned that you must be a member of role that will be the\n> > new owner to instead say that you must be able to SET ROLE\n> > to the new owner. Update ddl.sgml's generate statement on this\n>\n> s/generate/general/\n\nOops, yes.\n\n> > --- a/doc/src/sgml/ref/grant.sgml\n> > +++ b/doc/src/sgml/ref/grant.sgml\n> > @@ -298,6 +298,20 @@ GRANT <replaceable class=\"parameter\">role_name</replaceable> [, ...] TO <replace\n> > This option defaults to <literal>TRUE</literal>.\n> > </para>\n> >\n> > + <para>\n> > + To create an object owned by another role or give ownership of an existing\n> > + object to another role, you must have the ability to <literal>SET\n> > + ROLE</literal> to that role; otherwise, commands such as <literal>ALTER\n> > + ... OWNER TO</literal> or <literal>CREATE DATABASE ... OWNER</literal>\n> > + will fail. However, a user who inherits the privileges of a role but does\n> > + not have the ability to <literal>SET ROLE</literal> to that role may be\n> > + able to obtain full access to the role by manipulating existing objects\n> > + owned by that role (e.g. they could redefine an existing function to act\n> > + as a Trojan horse). Therefore, if a role's privileges are to be inherited\n> > + but should not be accessible via <literal>SET ROLE</literal>, it should not\n> > + own any SQL objects.\n> > + </para>\n>\n> I recommend deleting the phrase \"are to be inherited but\" as superfluous. The\n> earlier sentence's mention will still be there. WITH SET FALSE + NOINHERIT is\n> a combination folks should not use or should use only when the role has no\n> known privileges.\n\nI don't think I agree with this suggestion. 
If the privileges aren't\ngoing to be inherited, it doesn't matter whether the role owns SQL\nobjects or not. And I think that there are two notable use cases for\nSET FALSE + NOINHERIT (or SET FALSE + INHERIT FALSE). First, a\ngrant with SET FALSE, INHERIT FALSE, ADMIN TRUE gives you the ability\nto administer a role without inheriting its privileges or being able\nto SET ROLE to it. You could grant yourself those abilities if you\nwant, but you don't have them straight off. In fact, CREATE ROLE\nissued by a non-superuser creates such a grant implicitly as of\ncf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cb. Second, SET FALSE, INHERIT\nFALSE could be used to set up groups for pg_hba.conf matching without\nconferring privileges.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Jan 2023 10:21:32 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 10:21:32AM -0500, Robert Haas wrote:\n> On Thu, Jan 12, 2023 at 12:09 AM Noah Misch <noah@leadboat.com> wrote:\n\n> > > --- a/doc/src/sgml/ref/grant.sgml\n> > > +++ b/doc/src/sgml/ref/grant.sgml\n> > > @@ -298,6 +298,20 @@ GRANT <replaceable class=\"parameter\">role_name</replaceable> [, ...] TO <replace\n> > > This option defaults to <literal>TRUE</literal>.\n> > > </para>\n> > >\n> > > + <para>\n> > > + To create an object owned by another role or give ownership of an existing\n> > > + object to another role, you must have the ability to <literal>SET\n> > > + ROLE</literal> to that role; otherwise, commands such as <literal>ALTER\n> > > + ... OWNER TO</literal> or <literal>CREATE DATABASE ... OWNER</literal>\n> > > + will fail. However, a user who inherits the privileges of a role but does\n> > > + not have the ability to <literal>SET ROLE</literal> to that role may be\n> > > + able to obtain full access to the role by manipulating existing objects\n> > > + owned by that role (e.g. they could redefine an existing function to act\n> > > + as a Trojan horse). Therefore, if a role's privileges are to be inherited\n> > > + but should not be accessible via <literal>SET ROLE</literal>, it should not\n> > > + own any SQL objects.\n> > > + </para>\n> >\n> > I recommend deleting the phrase \"are to be inherited but\" as superfluous. The\n> > earlier sentence's mention will still be there. WITH SET FALSE + NOINHERIT is\n> > a combination folks should not use or should use only when the role has no\n> > known privileges.\n> \n> I don't think I agree with this suggestion. If the privileges aren't\n> going to be inherited, it doesn't matter whether the role owns SQL\n> objects or not. And I think that there are two notable use cases for\n> SET FALSE + NOINHERIT (or SET FALSE + INHERIT FALSE). 
First, the a\n> grant with SET FALSE, INHERIT FALSE, ADMIN TRUE gives you the ability\n> to administer a role without inheriting its privileges or being able\n> to SET ROLE to it. You could grant yourself those abilities if you\n> want, but you don't have them straight off. In fact, CREATE ROLE\n> issued by a non-superuser creates such a grant implicitly as of\n> cf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cb.\n\nThat is a valid use case, but Trojan horse matters don't apply there.\n\n> Second, SET FALSE, INHERIT\n> FALSE could be used to set up groups for pg_hba.conf matching without\n> conferring privileges.\n\nThat is factual, but doing this and having that role own objects shouldn't be\nconsidered a best practice. It's a bit like using the address of a function\nas an enum value. Instead of role own_some_objects_and_control_hba, the best\npractice would be to have two roles, own_some_objects / control_hba.\n\nSince the text is superfluous but not wrong, I won't insist.\n\n\n",
"msg_date": "Thu, 12 Jan 2023 23:17:30 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing for control over SET ROLE"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 2:17 AM Noah Misch <noah@leadboat.com> wrote:\n> Since the text is superfluous but not wrong, I won't insist.\n\nOK, committed as I had it, then.\n\nTo me, the text isn't superfluous, because otherwise the connection to\nwhat has been said in the previous sentence seems tenuous, which\nimpacts understandability. We'll see what other people think, I guess.\nPerhaps there's some altogether better way to talk about this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Jan 2023 10:41:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing for control over SET ROLE"
}
] |
[
{
"msg_contents": "An internal VM crashed last night due to OOM.\n\nWhen I tried to start postgres, it failed like:\n\n< 2022-08-31 08:44:10.495 CDT >LOG:  checkpoint starting: end-of-recovery immediate wait\n< 2022-08-31 08:44:10.609 CDT >LOG:  request to flush past end of generated WAL; request 1201/1CAF84F0, current position 1201/1CADB730\n< 2022-08-31 08:44:10.609 CDT >CONTEXT:  writing block 0 of relation base/16881/2840_vm\n< 2022-08-31 08:44:10.609 CDT >ERROR:  xlog flush request 1201/1CAF84F0 is not satisfied --- flushed only to 1201/1CADB730\n< 2022-08-31 08:44:10.609 CDT >CONTEXT:  writing block 0 of relation base/16881/2840_vm\n< 2022-08-31 08:44:10.609 CDT >FATAL:  checkpoint request failed\n\nI was able to start it with -c recovery_prefetch=no, so it seems like\nprefetch tried to do too much.  The VM runs centos7 under qemu.\nI'm making a copy of the data dir in case it's needed.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 31 Aug 2022 09:01:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 2:01 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> < 2022-08-31 08:44:10.495 CDT >LOG: checkpoint starting: end-of-recovery immediate wait\n> < 2022-08-31 08:44:10.609 CDT >LOG: request to flush past end of generated WAL; request 1201/1CAF84F0, current position 1201/1CADB730\n> < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n> < 2022-08-31 08:44:10.609 CDT >ERROR: xlog flush request 1201/1CAF84F0 is not satisfied --- flushed only to 1201/1CADB730\n> < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n> < 2022-08-31 08:44:10.609 CDT >FATAL: checkpoint request failed\n>\n> I was able to start it with -c recovery_prefetch=no, so it seems like\n> prefetch tried to do too much. The VM runs centos7 under qemu.\n> I'm making a copy of the data dir in cases it's needed.\n\nHmm, a page with an LSN set 118208 bytes past the end of WAL. It's a\nvm fork page (which recovery prefetch should ignore completely). Did\nyou happen to get a copy before the successful recovery? After the\nsuccessful recovery, what LSN does that page have, and can you find\nthe references to it in the WAL with eg pg_waldump -R 1663/16681/2840\n-F vm? Have you turned FPW off (perhaps this is on ZFS?)?\n\n\n",
"msg_date": "Thu, 1 Sep 2022 12:05:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Thu, Sep 01, 2022 at 12:05:36PM +1200, Thomas Munro wrote:\n> On Thu, Sep 1, 2022 at 2:01 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > < 2022-08-31 08:44:10.495 CDT >LOG: checkpoint starting: end-of-recovery immediate wait\n> > < 2022-08-31 08:44:10.609 CDT >LOG: request to flush past end of generated WAL; request 1201/1CAF84F0, current position 1201/1CADB730\n> > < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n> > < 2022-08-31 08:44:10.609 CDT >ERROR: xlog flush request 1201/1CAF84F0 is not satisfied --- flushed only to 1201/1CADB730\n> > < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n> > < 2022-08-31 08:44:10.609 CDT >FATAL: checkpoint request failed\n> >\n> > I was able to start it with -c recovery_prefetch=no, so it seems like\n> > prefetch tried to do too much. The VM runs centos7 under qemu.\n> > I'm making a copy of the data dir in cases it's needed.\n> \n> Hmm, a page with an LSN set 118208 bytes past the end of WAL. It's a\n> vm fork page (which recovery prefetch should ignore completely). Did\n> you happen to get a copy before the successful recovery? After the\n> successful recovery, what LSN does that page have, and can you find\n> the references to it in the WAL with eg pg_waldump -R 1663/16681/2840\n> -F vm? 
Have you turned FPW off (perhaps this is on ZFS?)?\n\nYes, I have a copy that reproduces the issue:\n\n#1 0x00000000009a0df4 in errfinish (filename=<optimized out>, filename@entry=0xa15535 \"xlog.c\", lineno=lineno@entry=2671, funcname=funcname@entry=0xa1da27 <__func__.22763> \"XLogFlush\") at elog.c:588\n#2 0x000000000055f1cf in XLogFlush (record=19795985532144) at xlog.c:2668\n#3 0x0000000000813b24 in FlushBuffer (buf=0x7fffdf1f8900, reln=<optimized out>) at bufmgr.c:2889\n#4 0x0000000000817a5b in SyncOneBuffer (buf_id=buf_id@entry=7796, skip_recently_used=skip_recently_used@entry=false, wb_context=wb_context@entry=0x7fffffffcdf0) at bufmgr.c:2576\n#5 0x00000000008181c2 in BufferSync (flags=flags@entry=358) at bufmgr.c:2164\n#6 0x00000000008182f5 in CheckPointBuffers (flags=flags@entry=358) at bufmgr.c:2743\n#7 0x00000000005587b2 in CheckPointGuts (checkPointRedo=19795985413936, flags=flags@entry=358) at xlog.c:6855\n#8 0x000000000055feb3 in CreateCheckPoint (flags=flags@entry=358) at xlog.c:6534\n#9 0x00000000007aceaa in CheckpointerMain () at checkpointer.c:455\n#10 0x00000000007aac52 in AuxiliaryProcessMain (auxtype=auxtype@entry=CheckpointerProcess) at auxprocess.c:153\n#11 0x00000000007b0bd8 in StartChildProcess (type=<optimized out>) at postmaster.c:5430\n#12 0x00000000007b388f in PostmasterMain (argc=argc@entry=7, argv=argv@entry=0xf139e0) at postmaster.c:1463\n#13 0x00000000004986a6 in main (argc=7, argv=0xf139e0) at main.c:202\n\nIt's not on zfs, and FPW have never been turned off.\n\nI should add that this instance has been pg_upgraded since v10.\n\nBTW, base/16881 is the postgres DB )which has 43GB of logfiles imported from\nCSV, plus 2GB of snapshots of pg_control_checkpoint, pg_settings,\npg_stat_bgwriter, pg_stat_database, pg_stat_wal).\n\npostgres=# SELECT * FROM page_header(get_raw_page('pg_toast.pg_toast_2619', 'main', 0));\n lsn | checksum | flags | lower | upper | special | pagesize | version | prune_xid 
\n---------------+----------+-------+-------+-------+---------+----------+---------+------------\n 1201/1CDD1F98 | -6200 | 1 | 44 | 424 | 8192 | 8192 | 4 | 3681043287\n(1 fila)\n\npostgres=# SELECT * FROM page_header(get_raw_page('pg_toast.pg_toast_2619', 'vm', 0));\n lsn | checksum | flags | lower | upper | special | pagesize | version | prune_xid \n---------------+----------+-------+-------+-------+---------+----------+---------+-----------\n 1201/1CAF84F0 | -6010 | 0 | 24 | 8192 | 8192 | 8192 | 4 | 0\n\nI found this in waldump (note that you had a typoe - it's 16881).\n\n[pryzbyj@template0 ~]$ sudo /usr/pgsql-15/bin/pg_waldump -R 1663/16881/2840 -F vm -p /mnt/tmp/15/data/pg_wal 00000001000012010000001C\nrmgr: Heap2 len (rec/tot): 64/ 122, tx: 0, lsn: 1201/1CAF2658, prev 1201/1CAF2618, desc: VISIBLE cutoff xid 3681024856 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0 FPW, blkref #1: rel 1663/16881/2840 blk 54\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF3AF8, prev 1201/1CAF2788, desc: VISIBLE cutoff xid 2 flags 0x03, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 0\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF3B70, prev 1201/1CAF3B38, desc: VISIBLE cutoff xid 3671427998 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 2\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF4DC8, prev 1201/1CAF3BB0, desc: VISIBLE cutoff xid 3672889900 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 4\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF5FB0, prev 1201/1CAF4E08, desc: VISIBLE cutoff xid 3679743844 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 5\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7320, prev 1201/1CAF5FF0, desc: VISIBLE cutoff xid 3679743844 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 
6\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7398, prev 1201/1CAF7360, desc: VISIBLE cutoff xid 3679751919 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 11\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7410, prev 1201/1CAF73D8, desc: VISIBLE cutoff xid 2 flags 0x03, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 17\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7488, prev 1201/1CAF7450, desc: VISIBLE cutoff xid 2 flags 0x03, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 19\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7500, prev 1201/1CAF74C8, desc: VISIBLE cutoff xid 3645406844 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 23\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7578, prev 1201/1CAF7540, desc: VISIBLE cutoff xid 3669978567 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 24\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF75F0, prev 1201/1CAF75B8, desc: VISIBLE cutoff xid 0 flags 0x03, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 25\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7668, prev 1201/1CAF7630, desc: VISIBLE cutoff xid 3681024856 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 26\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF76E0, prev 1201/1CAF76A8, desc: VISIBLE cutoff xid 3671911724 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 27\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7758, prev 1201/1CAF7720, desc: VISIBLE cutoff xid 2 flags 0x03, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 34\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF77D0, prev 1201/1CAF7798, desc: VISIBLE 
cutoff xid 2 flags 0x03, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 35\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7EF8, prev 1201/1CAF7810, desc: VISIBLE cutoff xid 3672408544 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 37\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7F70, prev 1201/1CAF7F38, desc: VISIBLE cutoff xid 2 flags 0x03, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 38\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF7FE8, prev 1201/1CAF7FB0, desc: VISIBLE cutoff xid 2 flags 0x03, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 39\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF8078, prev 1201/1CAF8040, desc: VISIBLE cutoff xid 3678237783 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 41\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF80F0, prev 1201/1CAF80B8, desc: VISIBLE cutoff xid 3672408544 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 42\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF8168, prev 1201/1CAF8130, desc: VISIBLE cutoff xid 3680789266 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 43\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF81E0, prev 1201/1CAF81A8, desc: VISIBLE cutoff xid 3667994218 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 44\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF8258, prev 1201/1CAF8220, desc: VISIBLE cutoff xid 3680789266 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 45\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF82D0, prev 1201/1CAF8298, desc: VISIBLE cutoff xid 3673830395 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: 
rel 1663/16881/2840 blk 48\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF8348, prev 1201/1CAF8310, desc: VISIBLE cutoff xid 0 flags 0x03, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 50\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF83C0, prev 1201/1CAF8388, desc: VISIBLE cutoff xid 3681024856 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 51\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF8438, prev 1201/1CAF8400, desc: VISIBLE cutoff xid 3681024856 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 52\nrmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF84B0, prev 1201/1CAF8478, desc: VISIBLE cutoff xid 3678741092 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 53\npg_waldump: error: error en registro de WAL en 1201/1CD90E48: invalid record length at 1201/1CD91010: wanted 24, got 0\n\nI could send our WAL to you if that's desirable ..\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 31 Aug 2022 19:52:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 12:53 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Yes, I have a copy that reproduces the issue:\n\nThat's good news.\n\nSo the last record touching that page was:\n\n> rmgr: Heap2 len (rec/tot): 59/ 59, tx: 0, lsn: 1201/1CAF84B0, prev 1201/1CAF8478, desc: VISIBLE cutoff xid 3678741092 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0, blkref #1: rel 1663/16881/2840 blk 53\n\nI think the expected LSN for that page is past the end of that record,\nso 0x1CAF84B0 + 59 = 0x1caf84eb which rounds up to 0x1CAF84F0, and\nindeed we see that in the restored page when recovery succeeds.\n\nNext question: why do we think the WAL finishes at 1201/1CADB730 while\nrunning that checkpoint? Looking...\n\n\n",
"msg_date": "Thu, 1 Sep 2022 13:37:19 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "Some more details, in case they're important:\n\nFirst: the server has wal_compression=zstd (I wonder if something\ndoesn't allow/accommodate compressed FPI?)\n\nI thought to mention that after compiling pg15 locally and forgetting to\nuse --with-zstd.\n\nI compiled it to enable your debug logging, which wrote these during\nrecovery:\n\n< 2022-08-31 21:17:01.807 CDT >NOTICE:  suppressing prefetch in relation 1663/16888/165958212 from block 156 until 1201/1C3965A0 is replayed, which truncates the relation\n< 2022-08-31 21:17:01.903 CDT >NOTICE:  suppressing prefetch in relation 1663/16888/165958523 from block 23 until 1201/1C39CC98 is replayed, which truncates the relation\n< 2022-08-31 21:17:02.029 CDT >NOTICE:  suppressing prefetch in relation 1663/16888/165958523 from block 23 until 1201/1C8643C8 is replayed, because the relation is too small\n\nAlso, pg_waldump seems to fail early with -w:\n[pryzbyj@template0 ~]$ sudo /usr/pgsql-15/bin/pg_waldump -w -R 1663/16881/2840 -F vm -p /mnt/tmp/15/data/pg_wal 00000001000012010000001C \nrmgr: Heap2       len (rec/tot):     64/   122, tx:          0, lsn: 1201/1CAF2658, prev 1201/1CAF2618, desc: VISIBLE cutoff xid 3681024856 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0 FPW, blkref #1: rel 1663/16881/2840 blk 54\npg_waldump: error: error in WAL record at 1201/1CD90E48: invalid record length at 1201/1CD91010: wanted 24, got 0\n\nAlso, the VM has crashed with OOM before, while running pg15, with no issue in\nrecovery.  I haven't been able to track down the cause..\n\nThe VM is running: kernel-3.10.0-1160.66.1.el7.x86_64\n\npgsql is an ext4 FS (no tablespaces), which is a qemu block device\nexposed like:\n\n    <driver name='qemu' type='raw' cache='none' io='native'/>\n    <target dev='vdg' bus='virtio'/>\n\nIt's nowhere near full:\n\n/dev/vdc                  96G   51G   46G  53% /var/lib/pgsql\n\n\n",
"msg_date": "Wed, 31 Aug 2022 21:48:38 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "At Thu, 1 Sep 2022 12:05:36 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Thu, Sep 1, 2022 at 2:01 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > < 2022-08-31 08:44:10.495 CDT >LOG: checkpoint starting: end-of-recovery immediate wait\n> > < 2022-08-31 08:44:10.609 CDT >LOG: request to flush past end of generated WAL; request 1201/1CAF84F0, current position 1201/1CADB730\n> > < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n> > < 2022-08-31 08:44:10.609 CDT >ERROR: xlog flush request 1201/1CAF84F0 is not satisfied --- flushed only to 1201/1CADB730\n> > < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n> > < 2022-08-31 08:44:10.609 CDT >FATAL: checkpoint request failed\n> >\n> > I was able to start it with -c recovery_prefetch=no, so it seems like\n> > prefetch tried to do too much. The VM runs centos7 under qemu.\n> > I'm making a copy of the data dir in cases it's needed.\n\nJust for information, there was a fixed bug about\noverwrite-aborted-contrecord feature, which causes this kind of\nfailure (xlog flush request exceeds insertion bleeding edge). If it is\nthat, it has been fixed by 6672d79139 two-days ago.\n\nhttp://postgr.es/m/CAFiTN-t7umki=PK8dT1tcPV=mOUe2vNhHML6b3T7W7qqvvajjg@mail.gmail.com\nhttp://postgr.es/m/FB0DEA0B-E14E-43A0-811F-C1AE93D00FF3%40amazon.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 01 Sep 2022 12:08:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 3:08 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 1 Sep 2022 12:05:36 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in\n> > On Thu, Sep 1, 2022 at 2:01 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > < 2022-08-31 08:44:10.495 CDT >LOG: checkpoint starting: end-of-recovery immediate wait\n> > > < 2022-08-31 08:44:10.609 CDT >LOG: request to flush past end of generated WAL; request 1201/1CAF84F0, current position 1201/1CADB730\n> > > < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n> > > < 2022-08-31 08:44:10.609 CDT >ERROR: xlog flush request 1201/1CAF84F0 is not satisfied --- flushed only to 1201/1CADB730\n> > > < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n> > > < 2022-08-31 08:44:10.609 CDT >FATAL: checkpoint request failed\n> > >\n> > > I was able to start it with -c recovery_prefetch=no, so it seems like\n> > > prefetch tried to do too much. The VM runs centos7 under qemu.\n> > > I'm making a copy of the data dir in cases it's needed.\n>\n> Just for information, there was a fixed bug about\n> overwrite-aborted-contrecord feature, which causes this kind of\n> failure (xlog flush request exceeds insertion bleeding edge). If it is\n> that, it has been fixed by 6672d79139 two-days ago.\n\nHmm. Justin, when you built from source, which commit were you at?\nIf it's REL_15_BETA3, any chance you could cherry pick that change and\ncheck what happens? And without that, could you show what this logs\nfor good and bad recovery settings?",
"msg_date": "Thu, 1 Sep 2022 16:22:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Thu, Sep 01, 2022 at 04:22:20PM +1200, Thomas Munro wrote:\n> On Thu, Sep 1, 2022 at 3:08 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Thu, 1 Sep 2022 12:05:36 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in\n> > > On Thu, Sep 1, 2022 at 2:01 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > < 2022-08-31 08:44:10.495 CDT >LOG: checkpoint starting: end-of-recovery immediate wait\n> > > > < 2022-08-31 08:44:10.609 CDT >LOG: request to flush past end of generated WAL; request 1201/1CAF84F0, current position 1201/1CADB730\n> > > > < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n> > > > < 2022-08-31 08:44:10.609 CDT >ERROR: xlog flush request 1201/1CAF84F0 is not satisfied --- flushed only to 1201/1CADB730\n> > > > < 2022-08-31 08:44:10.609 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n> > > > < 2022-08-31 08:44:10.609 CDT >FATAL: checkpoint request failed\n> > > >\n> > > > I was able to start it with -c recovery_prefetch=no, so it seems like\n> > > > prefetch tried to do too much. The VM runs centos7 under qemu.\n> > > > I'm making a copy of the data dir in cases it's needed.\n> >\n> > Just for information, there was a fixed bug about\n> > overwrite-aborted-contrecord feature, which causes this kind of\n> > failure (xlog flush request exceeds insertion bleeding edge). If it is\n> > that, it has been fixed by 6672d79139 two-days ago.\n> \n> Hmm. Justin, when you built from source, which commit were you at?\n> If it's REL_15_BETA3,\n\nNo - it's:\ncommit a2039b1f8e90d26a7e2a115ad5784476bd6deaa2 (HEAD -> REL_15_STABLE, origin/REL_15_STABLE)\n \n> If it's REL_15_BETA3, any chance you could cherry pick that change and\n> check what happens? 
And without that, could you show what this logs\n> And without that, could you show what this logs\n> for good and bad recovery settings?\n\nI wasn't sure what mean by \"without that\" , so here's a bunch of logs to\nsift through:\n\nAt a203, with #define XLOGPREFETCHER_DEBUG_LEVEL NOTICE:\n\n[pryzbyj@template0 postgresql]$ sudo -u postgres ./tmp_install/usr/local/pgsql/bin/postgres -D /mnt/tmp/15/data -c logging_collector=no -c port=5678\n...\n< 2022-08-31 23:31:38.690 CDT >LOG: redo starts at 1201/1B931F50\n< 2022-08-31 23:31:40.204 CDT >NOTICE: suppressing prefetch in relation 1663/16888/165958212 from block 156 until 1201/1C3965A0 is replayed, which truncates the relation\n< 2022-08-31 23:31:40.307 CDT >NOTICE: suppressing prefetch in relation 1663/16888/165958523 from block 23 until 1201/1C39CC98 is replayed, which truncates the relation\n< 2022-08-31 23:31:40.493 CDT >NOTICE: suppressing prefetch in relation 1663/16888/165958523 from block 23 until 1201/1C8643C8 is replayed, because the relation is too small\n< 2022-08-31 23:31:40.721 CDT >LOG: redo done at 1201/1CADB300 system usage: CPU: user: 0.41 s, system: 0.23 s, elapsed: 2.03 s\n< 2022-08-31 23:31:41.452 CDT >LOG: checkpoint starting: end-of-recovery immediate wait\n< 2022-08-31 23:31:41.698 CDT >LOG: request to flush past end of generated WAL; request 1201/1CAF84F0, current position 1201/1CADB730\n< 2022-08-31 23:31:41.698 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n< 2022-08-31 23:31:41.698 CDT >ERROR: xlog flush request 1201/1CAF84F0 is not satisfied --- flushed only to 1201/1CADB730\n< 2022-08-31 23:31:41.698 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n< 2022-08-31 23:31:41.699 CDT >FATAL: checkpoint request failed\n< 2022-08-31 23:31:41.699 CDT >HINT: Consult recent messages in the server log for details.\n< 2022-08-31 23:31:41.704 CDT >LOG: startup process (PID 25046) exited with exit code 1\n< 2022-08-31 23:31:41.704 CDT >LOG: terminating any other active 
server processes\n< 2022-08-31 23:31:41.705 CDT >LOG: shutting down due to startup process failure\n< 2022-08-31 23:31:41.731 CDT >LOG: database system is shut down\n\nWith your patch:\n\n[pryzbyj@template0 postgresql]$ sudo -u postgres ./tmp_install/usr/local/pgsql/bin/postgres -D /mnt/tmp/15/data -c logging_collector=no -c port=5678\n...\n< 2022-08-31 23:34:22.897 CDT >LOG: redo starts at 1201/1B931F50\n< 2022-08-31 23:34:23.146 CDT >NOTICE: suppressing prefetch in relation 1663/16888/165958212 from block 156 until 1201/1C3965A0 is replayed, which truncates the relation\n< 2022-08-31 23:34:23.147 CDT >NOTICE: suppressing prefetch in relation 1663/16888/165958523 from block 23 until 1201/1C39CC98 is replayed, which truncates the relation\n< 2022-08-31 23:34:23.268 CDT >NOTICE: suppressing prefetch in relation 1663/16888/165958523 from block 23 until 1201/1C8643C8 is replayed, because the relation is too small\n< 2022-08-31 23:34:23.323 CDT >LOG: redo done at 1201/1CADB300 system usage: CPU: user: 0.29 s, system: 0.12 s, elapsed: 0.42 s\n< 2022-08-31 23:34:23.323 CDT >LOG: point 0: lastRec = 12011cadb300\n< 2022-08-31 23:34:23.323 CDT >LOG: point 0: endOfLog = 12011cadb730\n< 2022-08-31 23:34:23.323 CDT >LOG: XXX point 1: EndOfLog = 12011cadb730\n< 2022-08-31 23:34:23.386 CDT >LOG: XXX point 2: EndOfLog = 12011cadb730\n< 2022-08-31 23:34:23.387 CDT >LOG: XXX point 3: Insert->CurrBytePos = 11f39ab82310\n< 2022-08-31 23:34:23.565 CDT >LOG: XXX point 4: Insert->CurrBytePos = 11f39ab82310\n< 2022-08-31 23:34:23.606 CDT >LOG: checkpoint starting: end-of-recovery immediate wait\n< 2022-08-31 23:34:23.767 CDT >LOG: request to flush past end of generated WAL; request 1201/1CAF84F0, current position 1201/1CADB730\n< 2022-08-31 23:34:23.767 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n< 2022-08-31 23:34:23.767 CDT >ERROR: xlog flush request 1201/1CAF84F0 is not satisfied --- flushed only to 1201/1CADB730\n< 2022-08-31 23:34:23.767 CDT >CONTEXT: writing block 
0 of relation base/16881/2840_vm\n< 2022-08-31 23:34:23.768 CDT >FATAL: checkpoint request failed\n\nAnd without prefetch:\n\n[pryzbyj@template0 postgresql]$ sudo -u postgres ./tmp_install/usr/local/pgsql/bin/postgres -D /mnt/tmp/15/data -c logging_collector=no -c port=5678 -c recovery_prefetch=no\n...\n< 2022-08-31 23:37:08.792 CDT >LOG: redo starts at 1201/1B931F50\n< 2022-08-31 23:37:09.269 CDT >LOG: invalid record length at 1201/1CD91010: wanted 24, got 0\n< 2022-08-31 23:37:09.269 CDT >LOG: redo done at 1201/1CD90E48 system usage: CPU: user: 0.35 s, system: 0.11 s, elapsed: 0.47 s\n< 2022-08-31 23:37:09.269 CDT >LOG: point 0: lastRec = 12011cd90e48\n< 2022-08-31 23:37:09.269 CDT >LOG: point 0: endOfLog = 12011cd91010\n< 2022-08-31 23:37:09.269 CDT >LOG: XXX point 1: EndOfLog = 12011cd91010\n< 2022-08-31 23:37:09.350 CDT >LOG: XXX point 2: EndOfLog = 12011cd91010\n< 2022-08-31 23:37:09.350 CDT >LOG: XXX point 3: Insert->CurrBytePos = 11f39ae35b68\n< 2022-08-31 23:37:09.420 CDT >LOG: XXX point 4: Insert->CurrBytePos = 11f39ae35b68\n< 2022-08-31 23:37:09.552 CDT >LOG: checkpoint starting: end-of-recovery immediate wait\n< 2022-08-31 23:37:12.987 CDT >LOG: checkpoint complete: wrote 8030 buffers (49.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.403 s, sync=2.841 s, total=3.566 s; sync files=102, longest=2.808 s, average=0.028 s; distance=20860 kB, estimate=20860 kB\n< 2022-08-31 23:37:13.077 CDT >LOG: database system is ready to accept connections\n\nIf I revert 6672d79139 (and roll back to the unrecovered state):\n\n[pryzbyj@template0 postgresql]$ sudo -u postgres ./tmp_install/usr/local/pgsql/bin/postgres -D /mnt/tmp/15/data -c logging_collector=no -c port=5678 # -c recovery_prefetch=no\n...\n< 2022-08-31 23:42:40.592 CDT >LOG: redo starts at 1201/1B931F50\n< 2022-08-31 23:42:42.168 CDT >NOTICE: suppressing prefetch in relation 1663/16888/165958212 from block 156 until 1201/1C3965A0 is replayed, which truncates the relation\n< 2022-08-31 23:42:42.238 
CDT >NOTICE: suppressing prefetch in relation 1663/16888/165958523 from block 23 until 1201/1C39CC98 is replayed, which truncates the relation\n< 2022-08-31 23:42:42.405 CDT >NOTICE: suppressing prefetch in relation 1663/16888/165958523 from block 23 until 1201/1C8643C8 is replayed, because the relation is too small\n< 2022-08-31 23:42:42.602 CDT >LOG: redo done at 1201/1CADB300 system usage: CPU: user: 0.41 s, system: 0.25 s, elapsed: 2.01 s\n< 2022-08-31 23:42:42.602 CDT >LOG: point 0: lastRec = 12011cadb300\n< 2022-08-31 23:42:42.602 CDT >LOG: point 0: endOfLog = 12011cadb730\n< 2022-08-31 23:42:42.602 CDT >LOG: XXX point 1: EndOfLog = 12011cadb730\n< 2022-08-31 23:42:42.830 CDT >LOG: XXX point 2: EndOfLog = 12011cadb730\n< 2022-08-31 23:42:42.830 CDT >LOG: XXX point 3: Insert->CurrBytePos = 11f39ab82310\n< 2022-08-31 23:42:43.194 CDT >LOG: XXX point 4: Insert->CurrBytePos = 11f39ab82310\n< 2022-08-31 23:42:43.266 CDT >LOG: checkpoint starting: end-of-recovery immediate wait\n< 2022-08-31 23:42:43.425 CDT >LOG: request to flush past end of generated WAL; request 1201/1CAF84F0, current position 1201/1CADB730\n< 2022-08-31 23:42:43.425 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n< 2022-08-31 23:42:43.425 CDT >ERROR: xlog flush request 1201/1CAF84F0 is not satisfied --- flushed only to 1201/1CADB730\n< 2022-08-31 23:42:43.425 CDT >CONTEXT: writing block 0 of relation base/16881/2840_vm\n< 2022-08-31 23:42:43.425 CDT >FATAL: checkpoint request failed\n< 2022-08-31 23:42:43.425 CDT >HINT: Consult recent messages in the server log for details.\n< 2022-08-31 23:42:43.431 CDT >LOG: startup process (PID 2415) exited with exit code 1\n< 2022-08-31 23:42:43.431 CDT >LOG: terminating any other active server processes\n< 2022-08-31 23:42:43.432 CDT >LOG: shutting down due to startup process failure\n< 2022-08-31 23:42:43.451 CDT >LOG: database system is shut down\n\nIf I revert 6672d79139 and disable prefetch:\n\n[pryzbyj@template0 postgresql]$ sudo -u 
postgres ./tmp_install/usr/local/pgsql/bin/postgres -D /mnt/tmp/15/data -c logging_collector=no -c port=5678 -c recovery_prefetch=no\n...\n< 2022-08-31 23:43:25.711 CDT >LOG: redo starts at 1201/1B931F50\n< 2022-08-31 23:43:26.178 CDT >LOG: invalid record length at 1201/1CD91010: wanted 24, got 0\n< 2022-08-31 23:43:26.178 CDT >LOG: redo done at 1201/1CD90E48 system usage: CPU: user: 0.33 s, system: 0.11 s, elapsed: 0.46 s\n< 2022-08-31 23:43:26.178 CDT >LOG: point 0: lastRec = 12011cd90e48\n< 2022-08-31 23:43:26.178 CDT >LOG: point 0: endOfLog = 12011cd91010\n< 2022-08-31 23:43:26.178 CDT >LOG: XXX point 1: EndOfLog = 12011cd91010\n< 2022-08-31 23:43:26.369 CDT >LOG: XXX point 2: EndOfLog = 12011cd91010\n< 2022-08-31 23:43:26.369 CDT >LOG: XXX point 3: Insert->CurrBytePos = 11f39ae35b68\n< 2022-08-31 23:43:26.433 CDT >LOG: XXX point 4: Insert->CurrBytePos = 11f39ae35b68\n< 2022-08-31 23:43:26.490 CDT >LOG: checkpoint starting: end-of-recovery immediate wait\n< 2022-08-31 23:43:29.519 CDT >LOG: checkpoint complete: wrote 8030 buffers (49.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.380 s, sync=2.492 s, total=3.086 s; sync files=102, longest=2.438 s, average=0.025 s; distance=20860 kB, estimate=20860 kB\n< 2022-08-31 23:43:29.567 CDT >LOG: database system is ready to accept connections\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 23:47:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "At Wed, 31 Aug 2022 23:47:53 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Thu, Sep 01, 2022 at 04:22:20PM +1200, Thomas Munro wrote:\n> > On Thu, Sep 1, 2022 at 3:08 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > Just for information, there was a fixed bug about\n> > > overwrite-aborted-contrecord feature, which causes this kind of\n> > > failure (xlog flush request exceeds insertion bleeding edge). If it is\n> > > that, it has been fixed by 6672d79139 two-days ago.\n> > \n> > Hmm. Justin, when you built from source, which commit were you at?\n> > If it's REL_15_BETA3,\n> \n> No - it's:\n> commit a2039b1f8e90d26a7e2a115ad5784476bd6deaa2 (HEAD -> REL_15_STABLE, origin/REL_15_STABLE)\n\nIt's newer than eb29fa3889 (6672d79139 on master) so it is fixed at\nthat commit.\n\n> > If it's REL_15_BETA3, any chance you could cherry pick that change and\n> > check what happens? And without that, could you show what this logs\n> > And without that, could you show what this logs\n> > for good and bad recovery settings?\n> \n> I wasn't sure what mean by \"without that\" , so here's a bunch of logs to\n> sift through:\n\nThere's no need to cherry picking..\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 01 Sep 2022 14:17:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 5:18 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Wed, 31 Aug 2022 23:47:53 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> > On Thu, Sep 01, 2022 at 04:22:20PM +1200, Thomas Munro wrote:\n> > > Hmm. Justin, when you built from source, which commit were you at?\n> > > If it's REL_15_BETA3,\n> >\n> > No - it's:\n> > commit a2039b1f8e90d26a7e2a115ad5784476bd6deaa2 (HEAD -> REL_15_STABLE, origin/REL_15_STABLE)\n>\n> It's newer than eb29fa3889 (6672d79139 on master) so it is fixed at\n> that commit.\n\nYeah.\n\n> > I wasn't sure what mean by \"without that\" , so here's a bunch of logs to\n> > sift through:\n\nSo it *looks* like it finished early (and without the expected\nerror?). But it also looks like it replayed that record, according to\nthe page LSN. So which is it? Could you recompile with WAL_DEBUG\ndefined in pg_config_manual.h, and run recovery with wal_debug = on,\nand see if it replays 1CAF84B0?\n\n\n",
"msg_date": "Thu, 1 Sep 2022 17:35:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Thu, Sep 01, 2022 at 05:35:23PM +1200, Thomas Munro wrote:\n> So it *looks* like it finished early (and without the expected\n> error?). But it also looks like it replayed that record, according to\n> the page LSN. So which is it? Could you recompile with WAL_DEBUG\n> defined in pg_config_manual.h, and run recovery with wal_debug = on,\n> and see if it replays 1CAF84B0?\n\nThis is with 6672d79139 un-reverted.\n\n$ sudo -u postgres ./tmp_install/usr/local/pgsql/bin/postgres -D /mnt/tmp/15/data -c logging_collector=no -c port=5678 -c wal_debug=on 2>&1 |grep 1CAF84B0 || echo not found\nnot found\n\n$ sudo -u postgres ./tmp_install/usr/local/pgsql/bin/postgres -D /mnt/tmp/15/data -c logging_collector=no -c port=5678 -c wal_debug=on -c recovery_prefetch=no 2>&1 |grep 1CAF84B0 || echo not found\n< 2022-09-01 00:44:55.878 CDT >LOG: REDO @ 1201/1CAF8478; LSN 1201/1CAF84B0: prev 1201/1CAF8438; xid 0; len 2; blkref #0: rel 1663/16881/2840, blk 53 - Heap2/VACUUM: nunused 4\n< 2022-09-01 00:44:55.878 CDT >LOG: REDO @ 1201/1CAF84B0; LSN 1201/1CAF84F0: prev 1201/1CAF8478; xid 0; len 5; blkref #0: rel 1663/16881/2840, fork 2, blk 0; blkref #1: rel 1663/16881/2840, blk 53 - Heap2/VISIBLE: cutoff xid 3678741092 flags 0x01\n< 2022-09-01 00:44:55.878 CDT >LOG: REDO @ 1201/1CAF84F0; LSN 1201/1CAF8AC0: prev 1201/1CAF84B0; xid 0; len 2; blkref #0: rel 1663/16881/1259, blk 1 FPW, compression method: zstd - Heap/INPLACE: off 33\n\n(Note that \"compression method: zstd\" is a local change to\nxlog_block_info() which I just extracted from my original patch for\nwal_compression, after forgetting to compile --with-zstd. I'll mail\nabout that at a later time...).\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 1 Sep 2022 00:52:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 5:52 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> compression method: zstd\n\nAhh, problem repro'd here with WAL compression. More soon.\n\n\n",
"msg_date": "Thu, 1 Sep 2022 23:18:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 11:18 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Ahh, problem repro'd here with WAL compression. More soon.\n\nI followed some false pistes for a while there, but I finally figured\nit out what's happening here after Justin kindly shared some files\nwith me. The active ingredient here is a setting of\nmaintenance_io_concurency=0, which runs into a dumb accounting problem\nof the fencepost variety and incorrectly concludes it's reached the\nend early. Setting it to 3 or higher allows his system to complete\nrecovery. I'm working on a fix ASAP.\n\n\n",
"msg_date": "Fri, 2 Sep 2022 18:20:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 6:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... The active ingredient here is a setting of\n> maintenance_io_concurency=0, which runs into a dumb accounting problem\n> of the fencepost variety and incorrectly concludes it's reached the\n> end early. Setting it to 3 or higher allows his system to complete\n> recovery. I'm working on a fix ASAP.\n\nThe short version is that when tracking the number of IOs in progress,\nI had two steps in the wrong order in the algorithm for figuring out\nwhether IO is saturated. Internally, the effect of\nmaintenance_io_concurrency is clamped to 2 or more, and that mostly\nhides the bug until you try to replay a particular sequence like\nJustin's with such a low setting. Without that clamp, and if you set\nit to 1, then several of our recovery tests fail.\n\nThat clamp was a bad idea. What I think we really want is for\nmaintenance_io_concurrency=0 to disable recovery prefetching exactly\nas if you'd set recovery_prefetch=off, and any other setting including\n1 to work without clamping.\n\nHere's the patch I'm currently testing. It also fixes a related\ndangling reference problem with very small maintenance_io_concurrency.\n\nI had this more or less figured out on Friday when I wrote last, but I\ngot stuck on a weird problem with 026_overwrite_contrecord.pl. I\nthink that failure case should report an error, no? I find it strange\nthat we end recovery in silence. That was a problem for the new\ncoding in this patch, because it is confused by XLREAD_FAIL without\nqueuing an error, and then retries, which clobbers the aborted recptr\nstate. I'm still looking into that.",
"msg_date": "Mon, 5 Sep 2022 13:28:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Mon, Sep 5, 2022 at 1:28 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I had this more or less figured out on Friday when I wrote last, but I\n> got stuck on a weird problem with 026_overwrite_contrecord.pl. I\n> think that failure case should report an error, no? I find it strange\n> that we end recovery in silence. That was a problem for the new\n> coding in this patch, because it is confused by XLREAD_FAIL without\n> queuing an error, and then retries, which clobbers the aborted recptr\n> state. I'm still looking into that.\n\nOn reflection, it'd be better not to clobber any pre-existing error\nthere, but report one only if there isn't one already queued. I've\ndone that in this version, which I'm planning to do a bit more testing\non and commit soonish if there are no comments/objections, especially\nfor that part.\n\nI'll have to check whether a doc change is necessary somewhere to\nadvertise that maintenance_io_concurrency=0 turns off prefetching, but\nIIRC that's kinda already implied.\n\nI've tested quite a lot of scenarios including make check-world with\nmaintenance_io_concurrency = 0, 1, 10, 1000, and ALTER SYSTEM for all\nrelevant GUCs on a standby running large pgbench to check expected\neffect on pg_stat_recovery_prefetch view and generate system calls.",
"msg_date": "Mon, 5 Sep 2022 16:54:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "At Mon, 5 Sep 2022 13:28:12 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> I had this more or less figured out on Friday when I wrote last, but I\n> got stuck on a weird problem with 026_overwrite_contrecord.pl. I\n> think that failure case should report an error, no? I find it strange\n> that we end recovery in silence. That was a problem for the new\n> coding in this patch, because it is confused by XLREAD_FAIL without\n> queuing an error, and then retries, which clobbers the aborted recptr\n> state. I'm still looking into that.\n\n+1 for showing any message for the failure, but I think we shouldn't\nhide an existing message if any. And the error messages around are\njust telling that \"<some error happened> at RecPtr\". So I think\n\"missing contrecord at RecPtr\" is sufficient here.\n\ndiff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c\nindex cdcacc7803..bfe332c014 100644\n--- a/src/backend/access/transam/xlogreader.c\n+++ b/src/backend/access/transam/xlogreader.c\n@@ -907,6 +907,11 @@ err:\n */\n state->abortedRecPtr = RecPtr;\n state->missingContrecPtr = targetPagePtr;\n+\n+ /* Put a generic error message if no particular cause is recorded. */\n+ if (!state->errormsg_buf[0])\n+ report_invalid_record(state, \"missing contrecord at %X/%X\",\n+ LSN_FORMAT_ARGS(RecPtr));\n }\n \n if (decoded && decoded->oversized)\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 05 Sep 2022 14:15:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "(the previous mail was crossing with yours..)\n\nAt Mon, 05 Sep 2022 14:15:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \nme> +1 for showing any message for the failure, but I think we shouldn't\nme> hide an existing message if any.\n\nAt Mon, 5 Sep 2022 16:54:07 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On reflection, it'd be better not to clobber any pre-existing error\n> there, but report one only if there isn't one already queued. I've\n> done that in this version, which I'm planning to do a bit more testing\n> on and commit soonish if there are no comments/objections, especially\n> for that part.\n\nIt looks fine in this regard. I still think that the message looks\nsomewhat internal.\n\nme> And the error messages around are\nme> just telling that \"<some error happened> at RecPtr\". So I think\nme> \"missing contrecord at RecPtr\" is sufficient here.\n\n\n> I'll have to check whether a doc change is necessary somewhere to\n> advertise that maintenance_io_concurrency=0 turns off prefetching, but\n> IIRC that's kinda already implied.\n> \n> I've tested quite a lot of scenarios including make check-world with\n> maintenance_io_concurrency = 0, 1, 10, 1000, and ALTER SYSTEM for all\n> relevant GUCs on a standby running large pgbench to check expected\n> effect on pg_stat_recovery_prefetch view and generate system calls.\n\n+\tif (likely(record = prefetcher->reader->record))\n\nIsn't this confusing a bit?\n\n\n+\tif (likely(record = prefetcher->reader->record))\n+\t{\n+\t\tXLogRecPtr\treplayed_up_to = record->next_lsn;\n+\n+\t\tXLogReleasePreviousRecord(prefetcher->reader);\n+\n\nThe likely condition is the prerequisite for\nXLogReleasePreviousRecord. But is is a little hard to read the\ncondition as \"in case no previous record exists\". 
Since there is one\nin most cases, can't call XLogReleasePreviousRecord() unconditionally\nthen the function returns true when the previous record exists and\nfalse if not?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 05 Sep 2022 14:34:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Mon, Sep 5, 2022 at 5:34 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Mon, 05 Sep 2022 14:15:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> me> +1 for showing any message for the failure, but I think we shouldn't\n> me> hide an existing message if any.\n>\n> At Mon, 5 Sep 2022 16:54:07 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in\n> > On reflection, it'd be better not to clobber any pre-existing error\n> > there, but report one only if there isn't one already queued. I've\n> > done that in this version, which I'm planning to do a bit more testing\n> > on and commit soonish if there are no comments/objections, especially\n> > for that part.\n>\n> It looks fine in this regard. I still think that the message looks\n> somewhat internal.\n\nThanks for looking!\n\n> me> And the error messages around are\n> me> just telling that \"<some error happened> at RecPtr\". So I think\n> me> \"missing contrecord at RecPtr\" is sufficient here.\n\nOk, I've updated it like that.\n\n> > I'll have to check whether a doc change is necessary somewhere to\n> > advertise that maintenance_io_concurrency=0 turns off prefetching, but\n> > IIRC that's kinda already implied.\n> >\n> > I've tested quite a lot of scenarios including make check-world with\n> > maintenance_io_concurrency = 0, 1, 10, 1000, and ALTER SYSTEM for all\n> > relevant GUCs on a standby running large pgbench to check expected\n> > effect on pg_stat_recovery_prefetch view and generate system calls.\n>\n> + if (likely(record = prefetcher->reader->record))\n>\n> Isn't this confusing a bit?\n>\n>\n> + if (likely(record = prefetcher->reader->record))\n> + {\n> + XLogRecPtr replayed_up_to = record->next_lsn;\n> +\n> + XLogReleasePreviousRecord(prefetcher->reader);\n> +\n>\n> The likely condition is the prerequisite for\n> XLogReleasePreviousRecord. But is is a little hard to read the\n> condition as \"in case no previous record exists\". 
Since there is one\n> in most cases, can't call XLogReleasePreviousRecord() unconditionally\n> then the function returns true when the previous record exists and\n> false if not?\n\nWe also need the LSN that is past that record.\nXLogReleasePreviousRecord() could return it (or we could use\nreader->EndRecPtr I suppose). Thoughts on this version?",
"msg_date": "Mon, 5 Sep 2022 21:08:16 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Mon, Sep 5, 2022 at 9:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > At Mon, 05 Sep 2022 14:15:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Mon, 5 Sep 2022 16:54:07 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in\n> > > On reflection, it'd be better not to clobber any pre-existing error\n> > > there, but report one only if there isn't one already queued. I've\n> > > done that in this version, which I'm planning to do a bit more testing\n> > > on and commit soonish if there are no comments/objections, especially\n> > > for that part.\n\nWell I was about to commit this, but beta4 just got stamped (but not\nyet tagged). I see now that Jonathan (with RMT hat on, CC'd) meant\ncommits should be in by the *start* of the 5th AoE, not the end. So\nthe procedural/RMT question is whether it's still possible to close\nthis item in beta4.\n\n\n",
"msg_date": "Tue, 6 Sep 2022 11:18:39 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "At Mon, 5 Sep 2022 21:08:16 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> We also need the LSN that is past that record.\n> XLogReleasePreviousRecord() could return it (or we could use\n> reader->EndRecPtr I suppose). Thoughts on this version?\n\n(Catching the gap...)\n\nIt is easier to read. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 06 Sep 2022 09:48:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On 9/5/22 7:18 PM, Thomas Munro wrote:\r\n> On Mon, Sep 5, 2022 at 9:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\r\n>>> At Mon, 05 Sep 2022 14:15:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\r\n>>> At Mon, 5 Sep 2022 16:54:07 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in\r\n>>>> On reflection, it'd be better not to clobber any pre-existing error\r\n>>>> there, but report one only if there isn't one already queued. I've\r\n>>>> done that in this version, which I'm planning to do a bit more testing\r\n>>>> on and commit soonish if there are no comments/objections, especially\r\n>>>> for that part.\r\n> \r\n> Well I was about to commit this, but beta4 just got stamped (but not\r\n> yet tagged). I see now that Jonathan (with RMT hat on, CC'd) meant\r\n> commits should be in by the *start* of the 5th AoE, not the end. So\r\n> the procedural/RMT question is whether it's still possible to close\r\n> this item in beta4.\r\n\r\nPresumably because Tom stamped it, the released is wrapped so it \r\nwouldn't make Beta 4, but I defer to him to see if it can be included \r\nwith the tag.\r\n\r\nThat said, if it doesn't make it for Beta 4, it would be in the next \r\nrelease (which is hopefully RC1).\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 5 Sep 2022 21:45:48 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/5/22 7:18 PM, Thomas Munro wrote:\n>> Well I was about to commit this, but beta4 just got stamped (but not\n>> yet tagged). I see now that Jonathan (with RMT hat on, CC'd) meant\n>> commits should be in by the *start* of the 5th AoE, not the end. So\n>> the procedural/RMT question is whether it's still possible to close\n>> this item in beta4.\n\n> Presumably because Tom stamped it, the released is wrapped so it \n> wouldn't make Beta 4, but I defer to him to see if it can be included \n> with the tag.\n\nI already made the tarballs available to packagers, so adding this\nwould involve a re-wrap and great confusion. In any case, I'm not\na fan of pushing fixes within a day or two of the wrap deadline,\nlet alone after it; you get inadequate buildfarm coverage when you\ncut corners that way. I think this one missed the boat.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Sep 2022 21:51:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 1:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > On 9/5/22 7:18 PM, Thomas Munro wrote:\n> >> Well I was about to commit this, but beta4 just got stamped (but not\n> >> yet tagged). I see now that Jonathan (with RMT hat on, CC'd) meant\n> >> commits should be in by the *start* of the 5th AoE, not the end. So\n> >> the procedural/RMT question is whether it's still possible to close\n> >> this item in beta4.\n>\n> > Presumably because Tom stamped it, the released is wrapped so it\n> > wouldn't make Beta 4, but I defer to him to see if it can be included\n> > with the tag.\n>\n> I already made the tarballs available to packagers, so adding this\n> would involve a re-wrap and great confusion. In any case, I'm not\n> a fan of pushing fixes within a day or two of the wrap deadline,\n> let alone after it; you get inadequate buildfarm coverage when you\n> cut corners that way. I think this one missed the boat.\n\nGot it. Yeah I knew it was going to be a close thing with a problem\ndiagnosed on Thursday/Friday before a Monday wrap, even before I\nmanaged to confuse myself about dates and times. Thanks both.\n\n\n",
"msg_date": "Tue, 6 Sep 2022 14:03:52 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On 9/5/22 10:03 PM, Thomas Munro wrote:\r\n> On Tue, Sep 6, 2022 at 1:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>>> On 9/5/22 7:18 PM, Thomas Munro wrote:\r\n>>>> Well I was about to commit this, but beta4 just got stamped (but not\r\n>>>> yet tagged). I see now that Jonathan (with RMT hat on, CC'd) meant\r\n>>>> commits should be in by the *start* of the 5th AoE, not the end. So\r\n>>>> the procedural/RMT question is whether it's still possible to close\r\n>>>> this item in beta4.\r\n>>\r\n>>> Presumably because Tom stamped it, the released is wrapped so it\r\n>>> wouldn't make Beta 4, but I defer to him to see if it can be included\r\n>>> with the tag.\r\n>>\r\n>> I already made the tarballs available to packagers, so adding this\r\n>> would involve a re-wrap and great confusion. In any case, I'm not\r\n>> a fan of pushing fixes within a day or two of the wrap deadline,\r\n>> let alone after it; you get inadequate buildfarm coverage when you\r\n>> cut corners that way. I think this one missed the boat.\r\n> \r\n> Got it. Yeah I knew it was going to be a close thing with a problem\r\n> diagnosed on Thursday/Friday before a Monday wrap, even before I\r\n> managed to confuse myself about dates and times. Thanks both.\r\n\r\nTo close this loop, I added a section for \"fixed before RC1\" to Open \r\nItems since this is presumably the next release. We can include it there \r\nonce committed.\r\n\r\nJonathan",
"msg_date": "Tue, 6 Sep 2022 09:55:52 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 1:56 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> To close this loop, I added a section for \"fixed before RC1\" to Open\n> Items since this is presumably the next release. We can include it there\n> once committed.\n\nDone yesterday.\n\nTo tie up a couple of loose ends from this thread:\n\nOn Thu, Sep 1, 2022 at 2:48 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Also, pg_waldump seems to fail early with -w:\n> [pryzbyj@template0 ~]$ sudo /usr/pgsql-15/bin/pg_waldump -w -R 1663/16881/2840 -F vm -p /mnt/tmp/15/data/pg_wal 00000001000012010000001C\n> rmgr: Heap2 len (rec/tot): 64/ 122, tx: 0, lsn: 1201/1CAF2658, prev 1201/1CAF2618, desc: VISIBLE cutoff xid 3681024856 flags 0x01, blkref #0: rel 1663/16881/2840 fork vm blk 0 FPW, blkref #1: rel 1663/16881/2840 blk 54\n> pg_waldump: error: error in WAL record at 1201/1CD90E48: invalid record length at 1201/1CD91010: wanted 24, got 0\n\nThat looks OK to me. With or without -w, we get as far as\n1201/1CD91010 and then hit zeroes.\n\nOn Thu, Sep 1, 2022 at 5:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> So it *looks* like it finished early (and without the expected\n> error?). But it also looks like it replayed that record, according to\n> the page LSN. So which is it?\n\nThe reason 1201/1CAF84B0 appeared on a page despite not having been\nreplayed (due to the bug) is just that vismap pages don't follow the\nusual logging rules, and can be read in by heap records that don't\nmention the vm page (and therefore no FPW). So we can finish up\nreading a page from disk with a future LSN on it.\n\n\n",
"msg_date": "Fri, 9 Sep 2022 16:36:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: recovery fails with wal prefetch enabled"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile query jumbling is provided for function calls that’s currently not \nthe case for procedures calls.\nThe reason behind this is that all utility statements are currently \ndiscarded for jumbling.\n\nWe’ve recently seen performance impacts (LWLock contention) due to the \nlack of jumbling on procedure calls with pg_stat_statements and \npg_stat_statements.track_utility enabled (think an application with a \nhigh rate of procedure calls with unique parameters for each call).\n\nJeremy has had this conversation on twitter (see \nhttps://twitter.com/jer_s/status/1560003560116342785) and Nikolay \nreported that he also had to work on a similar performance issue with \nSET being used.\n\nThat’s why we think it would make sense to allow jumbling for those 2 \nutility statements: CALL and SET.\n\nPlease find attached a patch proposal for doing so.\n\nWith the attached patch we would get things like:\n\nCALL MINUS_TWO(3);\nCALL MINUS_TWO(7);\nCALL SUM_TWO(3, 8);\nCALL SUM_TWO(7, 5);\nset enable_seqscan=false;\nset enable_seqscan=true;\nset seq_page_cost=2.0;\nset seq_page_cost=1.0;\n\npostgres=# SELECT query, calls, rows FROM pg_stat_statements;\n query | calls | rows\n-----------------------------------+-------+------\n set seq_page_cost=$1 | 2 | 0\n CALL MINUS_TWO($1) | 2 | 0\n set enable_seqscan=$1 | 2 | 0\n CALL SUM_TWO($1, $2) | 2 | 0\n\nLooking forward to your feedback,\n\nThanks,\n\nJeremy & Bertrand\n\n-- \nBertrand Drouvot\nAmazon Web Services:https://aws.amazon.com",
"msg_date": "Wed, 31 Aug 2022 17:33:44 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Query Jumbling for CALL and SET utility statements"
},
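The normalization shown in the proposal (constants rendered as $1, $2 in pg_stat_statements output) can be sketched with a small toy. This regex-based version is only an illustration of the rendering effect — the real extension records constant locations while walking the parse tree, and the `normalize` function here is made up for the example:

```python
import re

def normalize(query: str) -> str:
    """Toy stand-in for pg_stat_statements' query normalization:
    replace string and numeric literals with $1, $2, ... placeholders.
    The real code records A_Const/Const parse locations while jumbling;
    a regex like this would misfire on digits inside identifiers, so
    treat it purely as an illustration."""
    counter = 0

    def repl(match):
        nonlocal counter
        counter += 1
        return f"${counter}"

    return re.sub(r"'[^']*'|\b\d+(?:\.\d+)?\b", repl, query)

print(normalize("CALL SUM_TWO(3, 8)"))           # CALL SUM_TWO($1, $2)
print(normalize("set seq_page_cost=2.0"))        # set seq_page_cost=$1
print(normalize("set application_name='test'"))  # set application_name=$1
```

With distinct literals collapsed this way, repeated CALL/SET statements land on one pg_stat_statements entry instead of one per unique value.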
{
"msg_contents": "Hi\n\n\nst 31. 8. 2022 v 17:34 odesílatel Drouvot, Bertrand <bdrouvot@amazon.com>\nnapsal:\n\n> Hi hackers,\n>\n> While query jumbling is provided for function calls that’s currently not\n> the case for procedures calls.\n> The reason behind this is that all utility statements are currently\n> discarded for jumbling.\n>\n> We’ve recently seen performance impacts (LWLock contention) due to the\n> lack of jumbling on procedure calls with pg_stat_statements and\n> pg_stat_statements.track_utility enabled (think an application with a high\n> rate of procedure calls with unique parameters for each call).\n>\n> Jeremy has had this conversation on twitter (see\n> https://twitter.com/jer_s/status/1560003560116342785) and Nikolay\n> reported that he also had to work on a similar performance issue with SET\n> being used.\n>\n> That’s why we think it would make sense to allow jumbling for those 2\n> utility statements: CALL and SET.\n>\n> Please find attached a patch proposal for doing so.\n>\n> With the attached patch we would get things like:\n> CALL MINUS_TWO(3);\n> CALL MINUS_TWO(7);\n> CALL SUM_TWO(3, 8);\n> CALL SUM_TWO(7, 5);\n> set enable_seqscan=false;\n> set enable_seqscan=true;\n> set seq_page_cost=2.0;\n> set seq_page_cost=1.0;\n>\n> postgres=# SELECT query, calls, rows FROM pg_stat_statements;\n> query | calls | rows\n> -----------------------------------+-------+------\n> set seq_page_cost=$1 | 2 | 0\n> CALL MINUS_TWO($1) | 2 | 0\n> set enable_seqscan=$1 | 2 | 0\n> CALL SUM_TWO($1, $2) | 2 | 0\n>\n> Looking forward to your feedback,\n>\nThe idea is good, but I think you should use pg_stat_functions instead.\nMaybe it is supported already (I didn't test it). I am not sure so SET\nstatement should be traced in pg_stat_statements - it is usually pretty\nfast, and without context it says nothing. 
It looks like just overhead.\n\nRegards\n\nPavel\n\n\n> Thanks,\n>\n> Jeremy & Bertrand\n>\n> --\n> Bertrand Drouvot\n> Amazon Web Services: https://aws.amazon.com\n>\n>",
"msg_date": "Wed, 31 Aug 2022 17:50:42 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-31 17:33:44 +0200, Drouvot, Bertrand wrote:\n> While query jumbling is provided for function calls that’s currently not the\n> case for procedures calls.\n> The reason behind this is that all utility statements are currently\n> discarded for jumbling.\n> [...]\n> That’s why we think it would make sense to allow jumbling for those 2\n> utility statements: CALL and SET.\n\nYes, I've seen this be an issue repeatedly. Although more heavily for PREPARE\n/ EXECUTE than either of the two cases you handle here. IME not tracking\nPREPARE / EXECUTE can distort statistics substantially - there's appears to be\na surprising number of applications / frameworks resorting to them. Basically\nrequiring that track utility is turned on.\n\nI suspect we should carve out things like CALL, PREPARE, EXECUTE from\ntrack_utility - it's more or less an architectural accident that they're\nutility statements. It's a bit less clear that SET should be dealt with that\nway.\n\n\n\n> @@ -383,6 +384,30 @@ JumbleExpr(JumbleState *jstate, Node *node)\n> \t\t\t\tAPP_JUMB(var->varlevelsup);\n> \t\t\t}\n> \t\t\tbreak;\n> +\t\tcase T_CallStmt:\n> +\t\t\t{\n> +\t\t\t\tCallStmt *stmt = (CallStmt *) node;\n> +\t\t\t\tFuncExpr *expr = stmt->funcexpr;\n> +\n> +\t\t\t\tAPP_JUMB(expr->funcid);\n> +\t\t\t\tJumbleExpr(jstate, (Node *) expr->args);\n> +\t\t\t}\n> +\t\t\tbreak;\n\nWhy do we need to take the arguments into account?\n\n\n> +\t\tcase T_VariableSetStmt:\n> +\t\t\t{\n> +\t\t\t\tVariableSetStmt *stmt = (VariableSetStmt *) node;\n> +\n> +\t\t\t\tAPP_JUMB_STRING(stmt->name);\n> +\t\t\t\tJumbleExpr(jstate, (Node *) stmt->args);\n> +\t\t\t}\n> +\t\t\tbreak;\n\nSame?\n\n\n> +\t\tcase T_A_Const:\n> +\t\t\t{\n> +\t\t\t\tint\t\t\tloc = ((const A_Const *) node)->location;\n> +\n> +\t\t\t\tRecordConstLocation(jstate, loc);\n> +\t\t\t}\n> +\t\t\tbreak;\n\nI suspect we only need this because of the jumbling of unparsed arguments I\nquestioned above? 
If we do end up needing it, shouldn't we include the type\nin the jumbling?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Aug 2022 09:08:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
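Andres's question about jumbling the arguments comes down to what feeds the fingerprint. As a rough mental model — not the actual C implementation, which appends raw parse-node fields (APP_JUMB / APP_JUMB_STRING) to a buffer that is later hashed — a jumble folds selected fields such as the CallStmt's funcid and the *shape* of its arguments, but not the constant values, into one hash:

```python
import hashlib

def jumble(fields) -> str:
    """Fold selected parse-node fields into a fingerprint, loosely
    analogous to APP_JUMB in pg_stat_statements. Constant *values*
    are deliberately excluded, so calls differing only in literal
    arguments collapse into a single statement entry."""
    h = hashlib.sha256()
    for f in fields:
        h.update(repr(f).encode())
        h.update(b"\x00")  # separator so adjacent fields cannot merge
    return h.hexdigest()[:16]

# CALL my_proc(3) and CALL my_proc(7): same funcid, same arg shape.
a = jumble(["CallStmt", 12345, ("Const",)])
b = jumble(["CallStmt", 12345, ("Const",)])
c = jumble(["CallStmt", 67890, ("Const",)])  # a different procedure
print(a == b, a == c)  # True False
```

The node names and funcid values here are placeholders; the point is only that identical structure with different literals yields the same queryid.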
{
"msg_contents": "st 31. 8. 2022 v 17:50 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n>\n> st 31. 8. 2022 v 17:34 odesílatel Drouvot, Bertrand <bdrouvot@amazon.com>\n> napsal:\n>\n>> Hi hackers,\n>>\n>> While query jumbling is provided for function calls that’s currently not\n>> the case for procedures calls.\n>> The reason behind this is that all utility statements are currently\n>> discarded for jumbling.\n>>\n>> We’ve recently seen performance impacts (LWLock contention) due to the\n>> lack of jumbling on procedure calls with pg_stat_statements and\n>> pg_stat_statements.track_utility enabled (think an application with a high\n>> rate of procedure calls with unique parameters for each call).\n>>\n>> Jeremy has had this conversation on twitter (see\n>> https://twitter.com/jer_s/status/1560003560116342785) and Nikolay\n>> reported that he also had to work on a similar performance issue with SET\n>> being used.\n>>\n>> That’s why we think it would make sense to allow jumbling for those 2\n>> utility statements: CALL and SET.\n>>\n>> Please find attached a patch proposal for doing so.\n>>\n>> With the attached patch we would get things like:\n>> CALL MINUS_TWO(3);\n>> CALL MINUS_TWO(7);\n>> CALL SUM_TWO(3, 8);\n>> CALL SUM_TWO(7, 5);\n>> set enable_seqscan=false;\n>> set enable_seqscan=true;\n>> set seq_page_cost=2.0;\n>> set seq_page_cost=1.0;\n>>\n>> postgres=# SELECT query, calls, rows FROM pg_stat_statements;\n>> query | calls | rows\n>> -----------------------------------+-------+------\n>> set seq_page_cost=$1 | 2 | 0\n>> CALL MINUS_TWO($1) | 2 | 0\n>> set enable_seqscan=$1 | 2 | 0\n>> CALL SUM_TWO($1, $2) | 2 | 0\n>>\n>> Looking forward to your feedback,\n>>\n> The idea is good, but I think you should use pg_stat_functions instead.\n> Maybe it is supported already (I didn't test it). I am not sure so SET\n> statement should be traced in pg_stat_statements - it is usually pretty\n> fast, and without context it says nothing. 
It looks like just overhead.\n>\n\nI was wrong - there is an analogy with SELECT fx, and the statistics are in\npg_stat_statements, and in pg_stat_function too.\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Thanks,\n>>\n>> Jeremy & Bertrand\n>>\n>> --\n>> Bertrand Drouvot\n>> Amazon Web Services: https://aws.amazon.com\n>>\n>>",
"msg_date": "Wed, 31 Aug 2022 18:59:23 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On 8/31/22 8:33 AM, Drouvot, Bertrand wrote:\n> \n> We’ve recently seen performance impacts (LWLock contention) due to the\n> lack of jumbling on procedure calls with pg_stat_statements and\n> pg_stat_statements.track_utility enabled (think an application with a\n> high rate of procedure calls with unique parameters for each call).\n\nI ran some performance tests with the patch that Bertrand wrote to get\nnumbers. From my perspective, this patch is scoped very minimally and is\nlow risk; I don’t think it should need an enormous amount of validation.\nIt does appear to address the issues with both SET and CALL statements\nthat Nikolay and I respectively encountered. Honestly, this almost seems\nlike it was just an minor oversight in the original patch that added\nsupport for CALL and procedures.\n\nI used an r5.large EC2 instance running Linux and tested Bertrand’s\npatch using the PostgreSQL 14.4 code base, compiled without and with\nBertrand’s patch. The difference is a lot more extreme on big servers\nwith lots of cores, but the difference is obvious even on a small\ninstance like this one.\n\nAs a side note: while I certainly don't want to build a database\nprimarily based on benchmarks, it's nice when benchmarks showcase the\ndatabase's strength. Without this patch, HammerDB completely falls over\nin stored procedure mode, since one of the procedure arguments is a\ntime-based unique value on every call. 
Someone else at Amazon running\nHammerDB was how I originally became aware of this problem.\n\n-Jeremy\n\n\n===== Setup:\n$ psql -c \"create procedure test(x int) as 'begin return; end' language\nplpgsql;\"\nCREATE PROCEDURE\n$ echo -e \"\\set x random(1,100000000) \\n call test(:x)\" >test-call.pgbench\n$ echo -e \"\\set x random(1,100000000) \\n set application_name=':x'\"\n>test-set.pgbench\n\n\n\n===== CALL results without patch:\n\n[postgres@ip-172-31-44-176 ~]$ pgbench -n -c 100 -j 100 -T15 -r -f\ntest-set.pgbench\npgbench (14.4)\ntransaction type: test-set.pgbench\nscaling factor: 1\nquery mode: simple\nnumber of clients: 100\nnumber of threads: 100\nduration: 15 s\nnumber of transactions actually processed: 728748\nlatency average = 2.051 ms\ninitial connection time = 91.844 ms\ntps = 48755.446492 (without initial connection time)\nstatement latencies in milliseconds:\n0.000 \\set x random(1,100000000)\n2.046 set application_name=':x'\n\n\npg-14.4 rw postgres@postgres=# select wait_event, count(*) from\npg_stat_activity group by wait_event; \\watch 1\n...\nTue 30 Aug 2022 08:26:35 PM UTC (every 1s)\n\nwait_event | count\n---------------------+-------\n[NULL] | 6\npg_stat_statements | 95\nBgWriterMain | 1\nArchiverMain | 1\nWalWriterMain | 1\nAutoVacuumMain | 1\nCheckpointerMain | 1\nLogicalLauncherMain | 1\n(8 rows)\n...\n\n\n\n===== CALL results with patch:\n\n[postgres@ip-172-31-44-176 ~]$ pgbench -n -c 100 -j 100 -T15 -r -f\ntest-call.pgbench\npgbench (14.4)\ntransaction type: test-call.pgbench\nscaling factor: 1\nquery mode: simple\nnumber of clients: 100\nnumber of threads: 100\nduration: 15 s\nnumber of transactions actually processed: 1098776\nlatency average = 1.361 ms\ninitial connection time = 89.002 ms\ntps = 73491.904878 (without initial connection time)\nstatement latencies in milliseconds:\n0.000 \\set x random(1,100000000)\n1.383 call test(:x)\n\n\npg-14.4 rw postgres@postgres=# select wait_event, count(*) from\npg_stat_activity group by 
wait_event; \\watch 1\n...\nTue 30 Aug 2022 08:42:51 PM UTC (every 1s)\n\nwait_event | count\n---------------------+-------\n[NULL] | 99\nBgWriterHibernate | 1\nArchiverMain | 1\nWalWriterMain | 1\nAutoVacuumMain | 1\nCheckpointerMain | 1\nClientRead | 2\nLogicalLauncherMain | 1\n(8 rows)\n...\n\n\n\n===== SET results without patch:\n\n[postgres@ip-172-31-44-176 ~]$ pgbench -n -c 100 -j 100 -T15 -r -f\ntest-set.pgbench\npgbench (14.4)\ntransaction type: test-set.pgbench\nscaling factor: 1\nquery mode: simple\nnumber of clients: 100\nnumber of threads: 100\nduration: 15 s\nnumber of transactions actually processed: 728748\nlatency average = 2.051 ms\ninitial connection time = 91.844 ms\ntps = 48755.446492 (without initial connection time)\nstatement latencies in milliseconds:\n0.000 \\set x random(1,100000000)\n2.046 set application_name=':x'\n\n\npg-14.4 rw postgres@postgres=# select wait_event, count(*) from\npg_stat_activity group by wait_event; \\watch 1\n...\nTue 30 Aug 2022 08:26:35 PM UTC (every 1s)\n\nwait_event | count\n---------------------+-------\n[NULL] | 6\npg_stat_statements | 95\nBgWriterMain | 1\nArchiverMain | 1\nWalWriterMain | 1\nAutoVacuumMain | 1\nCheckpointerMain | 1\nLogicalLauncherMain | 1\n(8 rows)\n...\n\n\n\n===== SET results with patch:\n\n[postgres@ip-172-31-44-176 ~]$ pgbench -n -c 100 -j 100 -T15 -r -f\ntest-set.pgbench\npgbench (14.4)\ntransaction type: test-set.pgbench\nscaling factor: 1\nquery mode: simple\nnumber of clients: 100\nnumber of threads: 100\nduration: 15 s\nnumber of transactions actually processed: 1178844\nlatency average = 1.268 ms\ninitial connection time = 89.159 ms\ntps = 78850.178814 (without initial connection time)\nstatement latencies in milliseconds:\n0.000 \\set x random(1,100000000)\n1.270 set application_name=':x'\n\n\npg-14.4 rw postgres@postgres=# select wait_event, count(*) from\npg_stat_activity group by wait_event; \\watch 1\n...\nTue 30 Aug 2022 08:44:30 PM UTC (every 1s)\n\nwait_event | 
count\n---------------------+-------\n[NULL] | 101\nBgWriterHibernate | 1\nArchiverMain | 1\nWalWriterMain | 1\nAutoVacuumMain | 1\nCheckpointerMain | 1\nLogicalLauncherMain | 1\n(7 rows)\n...\n\n\n\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 10:58:14 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
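For anyone wanting to reproduce the workload, the two single-statement scripts from the transcript above can be recreated as follows. The pgbench invocation itself needs a running server, so it is left as a comment; the procedure name `test` matches Jeremy's setup:

```shell
# Recreate the pgbench script files used in the benchmark above.
cat > test-call.pgbench <<'EOF'
\set x random(1,100000000)
call test(:x)
EOF

cat > test-set.pgbench <<'EOF'
\set x random(1,100000000)
set application_name=':x'
EOF

# Against a real server (not run here), after:
#   psql -c "create procedure test(x int) as 'begin return; end' language plpgsql;"
# run, e.g.:
#   pgbench -n -c 100 -j 100 -T 15 -r -f test-call.pgbench
grep -c 'random' test-call.pgbench  # prints 1
```

Each transaction then issues one CALL (or one SET) with a fresh random literal, which is what drives the pg_stat_statements LWLock contention when the statements are not jumbled.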
{
"msg_contents": "On 8/31/22 9:08 AM, Andres Freund wrote:\n> \n> I suspect we should carve out things like CALL, PREPARE, EXECUTE from\n> track_utility - it's more or less an architectural accident that they're\n> utility statements. It's a bit less clear that SET should be dealt with that\n> way.\n\nRegarding SET, the compelling use case was around \"application_name\"\nwhose purpose is to provide a label in pg_stat_activity and on log\nlines, which can be used to improve observability and connect queries to\ntheir source in application code. Nikolay's incident (on peak shopping\nday for an eCommerce corp) evidently involved an application which\nleveraged this, but as a result the contention on the pg_stat_statements\nLWLock in exclusive mode effectively caused an outage for the retailer?\nOr nearly did? My description here is based on Nikolay's public twitter\ncomment.\n\nI've seen a lot of applications that make heavy use of temp tables,\nwhere DDL would be pretty important to track as part of the regular\nworkload. So that probably should be added to the list alongside\nprepared statements. And I'd want to spend a little more time thinking\nabout what other use cases might be missing. I'm hesitant about the\ngeneral idea of carving out some utility statements away from this\n\"track_utility\" GUC.\n\nPersonally, at this point, I think pg_stat_statements is critical\ninfrastructure for anyone running PostgreSQL at scale. The information\nit provides is indispensable. I don't think it's really defensible to\ntell people that if they want to scale, then they need to fly blind on\nany utility statements.\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 11:00:05 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-31 11:00:05 -0700, Jeremy Schneider wrote:\n> On 8/31/22 9:08 AM, Andres Freund wrote:\n> > \n> > I suspect we should carve out things like CALL, PREPARE, EXECUTE from\n> > track_utility - it's more or less an architectural accident that they're\n> > utility statements. It's a bit less clear that SET should be dealt with that\n> > way.\n> \n> Regarding SET, the compelling use case was around \"application_name\"\n> whose purpose is to provide a label in pg_stat_activity and on log\n> lines, which can be used to improve observability and connect queries to\n> their source in application code.\n\nI wasn't saying that SET shouldn't be jumbled, just that it seems more\nreasonable to track it only when track_utility is enabled, rather than doing\nso even when that's disabled. Which I do think makes sense for executing a\nprepared statement and calling a procedure, since they're really only utility\nstatements by accident.\n\n\n> Personally, at this point, I think pg_stat_statements is critical\n> infrastructure for anyone running PostgreSQL at scale. The information\n> it provides is indispensable. I don't think it's really defensible to\n> tell people that if they want to scale, then they need to fly blind on\n> any utility statements.\n\nI wasn't suggesting doing so at all.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Aug 2022 12:06:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On 8/31/22 12:06 PM, Andres Freund wrote:\n>> Regarding SET, the compelling use case was around \"application_name\"\n>> whose purpose is to provide a label in pg_stat_activity and on log\n>> lines, which can be used to improve observability and connect queries to\n>> their source in application code.\n> I wasn't saying that SET shouldn't be jumbled, just that it seems more\n> reasonable to track it only when track_utility is enabled, rather than doing\n> so even when that's disabled. Which I do think makes sense for executing a\n> prepared statement and calling a procedure, since they're really only utility\n> statements by accident.\n\nHey Andres, sorry for misunderstanding your email!\n\nBased on this quick test I just now ran (transcript below), I think that\nPREPARE/EXECUTE is already excluded from track_utility?\n\nI get your point about CALL, maybe it does make sense to also exclude\nthis. It might also be worth a small update to the doc for track_utility\nabout how it behaves, in this regard.\n\nhttps://www.postgresql.org/docs/14/pgstatstatements.html#id-1.11.7.39.9\n\nExample updated sentence:\n> |pg_stat_statements.track_utility| controls whether <<most>> utility\ncommands are tracked by the module. Utility commands are all those other\nthan |SELECT|, |INSERT|, |UPDATE| and |DELETE| <<, but this parameter\ndoes not disable tracking of PREPARE, EXECUTE or CALL>>. The default\nvalue is |on|. 
Only superusers can change this setting.\n\n\n=====\n\npg-14.4 rw root@db1=# set pg_stat_statements.track_utility=on;\nSET\npg-14.4 rw root@db1=# select pg_stat_statements_reset();\n pg_stat_statements_reset\n--------------------------\n\n(1 row)\npg-14.4 rw root@db1=# prepare test as select /* unique123 */ 1;\nPREPARE\npg-14.4 rw root@db1=# execute test;\n ?column?\n----------\n 1\n(1 row)\n\npg-14.4 rw root@db1=# set application_name='test';\nSET\npg-14.4 rw root@db1=# select substr(query,1,50) from pg_stat_statements;\n substr\n-------------------------------------------\n prepare test as select /* unique123 */ $1\n select pg_stat_statements_reset()\n set application_name=$1\n(3 rows)\n\n\n=====\n\npg-14.4 rw root@db1=# set pg_stat_statements.track_utility=off;\nSET\npg-14.4 rw root@db1=# select pg_stat_statements_reset();\n pg_stat_statements_reset\n--------------------------\n\n(1 row)\n\npg-14.4 rw root@db1=# prepare test as select /* unique123 */ 1;\nPREPARE\npg-14.4 rw root@db1=# execute test;\n ?column?\n----------\n 1\n(1 row)\n\npg-14.4 rw root@db1=# set application_name='test';\nSET\npg-14.4 rw root@db1=# select substr(query,1,50) from pg_stat_statements;\n substr\n-------------------------------------------\n prepare test as select /* unique123 */ $1\n select pg_stat_statements_reset()\n(2 rows)\n\n\n\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services",
"msg_date": "Wed, 31 Aug 2022 13:05:39 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 8/31/22 6:08 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2022-08-31 17:33:44 +0200, Drouvot, Bertrand wrote:\n>> @@ -383,6 +384,30 @@ JumbleExpr(JumbleState *jstate, Node *node)\n>> APP_JUMB(var->varlevelsup);\n>> }\n>> break;\n>> + case T_CallStmt:\n>> + {\n>> + CallStmt *stmt = (CallStmt *) node;\n>> + FuncExpr *expr = stmt->funcexpr;\n>> +\n>> + APP_JUMB(expr->funcid);\n>> + JumbleExpr(jstate, (Node *) expr->args);\n>> + }\n>> + break;\n> Why do we need to take the arguments into account?\n\nThanks for looking at it!\n\nAgree that It's not needed to \"solve\" the Lock contention issue, but I \nthink it's needed for the \"render\".\n\nWithout it we would get, things like:\n\npostgres=# call MY_PROC(10);\nCALL\npostgres=# call MY_PROC(100000000);\nCALL\npostgres=# SELECT query, calls, rows FROM pg_stat_statements;\n query | calls | rows\n-----------------------------------+-------+------\n select pg_stat_statements_reset() | 1 | 1\n call MY_PROC(10) | 2 | 0\n(2 rows)\n\ninstead of\n\npostgres=# SELECT query, calls, rows FROM pg_stat_statements;\n query | calls | rows\n-----------------------------------+-------+------\n select pg_stat_statements_reset() | 1 | 1\n call MY_PROC($1) | 2 | 0\n(2 rows)\n\n>\n>\n>> + case T_VariableSetStmt:\n>> + {\n>> + VariableSetStmt *stmt = (VariableSetStmt *) node;\n>> +\n>> + APP_JUMB_STRING(stmt->name);\n>> + JumbleExpr(jstate, (Node *) stmt->args);\n>> + }\n>> + break;\n> Same?\n\nyeah, same reason. 
Without it we would get things like:\n\npostgres=# set enable_seqscan=false;\nSET\npostgres=# set enable_seqscan=true;\nSET\npostgres=# SELECT query, calls, rows FROM pg_stat_statements;\n query | calls | rows\n-----------------------------------+-------+------\n select pg_stat_statements_reset() | 1 | 1\n set enable_seqscan=false | 2 | 0\n(2 rows)\n\ninstead of\n\npostgres=# SELECT query, calls, rows FROM pg_stat_statements;\n query | calls | rows\n-----------------------------------+-------+------\n set enable_seqscan=$1 | 2 | 0\n select pg_stat_statements_reset() | 1 | 1\n(2 rows)\n\n>\n>> + case T_A_Const:\n>> + {\n>> + int loc = ((const A_Const *) node)->location;\n>> +\n>> + RecordConstLocation(jstate, loc);\n>> + }\n>> + break;\n> I suspect we only need this because of the jumbling of unparsed arguments I\n> questioned above?\n\nRight but only for the T_VariableSetStmt case.\n\n> If we do end up needing it, shouldn't we include the type\n> in the jumbling?\n\nI don't think so as this is only for the T_VariableSetStmt case.\n\nAnd looking closer I don't see such as thing as \"consttype\" (that we can \nfind in the Const struct) in the A_Const struct.\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services:https://aws.amazon.com\n\n\n\n\n\n\nHi,\n\nOn 8/31/22 6:08 PM, Andres Freund\n wrote:\n \n\nHi,\n\nOn 2022-08-31 17:33:44 +0200, Drouvot, Bertrand wrote:\n\n\n@@ -383,6 +384,30 @@ JumbleExpr(JumbleState *jstate, Node *node)\n APP_JUMB(var->varlevelsup);\n }\n break;\n+ case T_CallStmt:\n+ {\n+ CallStmt *stmt = (CallStmt *) node;\n+ FuncExpr *expr = stmt->funcexpr;\n+\n+ APP_JUMB(expr->funcid);\n+ JumbleExpr(jstate, (Node *) expr->args);\n+ }\n+ break;\n\n\n\nWhy do we need to take the arguments into account?\n\nThanks for looking at it!\n\nAgree that It's not needed to \"solve\" the Lock contention issue,\n but I think it's needed for the \"render\".\nWithout it we would get, things like:\npostgres=# call MY_PROC(10);\n CALL\n postgres=# call 
MY_PROC(100000000);\n CALL\n postgres=# SELECT query, calls, rows FROM pg_stat_statements;\n query | calls | rows\n -----------------------------------+-------+------\n select pg_stat_statements_reset() | 1 | 1\n call MY_PROC(10) | 2 | 0\n (2 rows)\n\ninstead of\npostgres=# SELECT query, calls, rows\n FROM pg_stat_statements;\n query | calls | rows\n -----------------------------------+-------+------\n select pg_stat_statements_reset() | 1 | 1\n call MY_PROC($1) | 2 | 0\n (2 rows)\n\n\n\n\n\n\n\n+ case T_VariableSetStmt:\n+ {\n+ VariableSetStmt *stmt = (VariableSetStmt *) node;\n+\n+ APP_JUMB_STRING(stmt->name);\n+ JumbleExpr(jstate, (Node *) stmt->args);\n+ }\n+ break;\n\n\n\nSame?\n\n\nyeah, same reason. Without it we would get things like:\npostgres=# set enable_seqscan=false;\n SET\n postgres=# set enable_seqscan=true;\n SET\n postgres=# SELECT query, calls, rows FROM pg_stat_statements;\n query | calls | rows\n -----------------------------------+-------+------\n select pg_stat_statements_reset() | 1 | 1\n set enable_seqscan=false | 2 | 0\n (2 rows)\n\ninstead of\npostgres=# SELECT query, calls, rows\n FROM pg_stat_statements;\n query | calls | rows\n -----------------------------------+-------+------\n set enable_seqscan=$1 | 2 | 0\n select pg_stat_statements_reset() | 1 | 1\n (2 rows)\n\n\n\n\n\n\n+ case T_A_Const:\n+ {\n+ int loc = ((const A_Const *) node)->location;\n+\n+ RecordConstLocation(jstate, loc);\n+ }\n+ break;\n\n\n\nI suspect we only need this because of the jumbling of unparsed arguments I\nquestioned above? \n\nRight but only for the T_VariableSetStmt case.\n\n\n If we do end up needing it, shouldn't we include the type\nin the jumbling?\n\nI don't think so as this is only for the T_VariableSetStmt case.\nAnd looking closer I don't see such as thing as \"consttype\" (that\n we can find in the Const struct) in the A_Const struct.\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 1 Sep 2022 09:16:35 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 8/31/22 10:05 PM, Jeremy Schneider wrote:\n> On 8/31/22 12:06 PM, Andres Freund wrote:\n>>> Regarding SET, the compelling use case was around \"application_name\"\n>>> whose purpose is to provide a label in pg_stat_activity and on log\n>>> lines, which can be used to improve observability and connect queries to\n>>> their source in application code.\n>> I wasn't saying that SET shouldn't be jumbled, just that it seems more\n>> reasonable to track it only when track_utility is enabled, rather than doing\n>> so even when that's disabled. Which I do think makes sense for executing a\n>> prepared statement and calling a procedure, since they're really only utility\n>> statements by accident.\n>\n> I get your point about CALL, maybe it does make sense to also exclude \n> this. \n\nThat's a good point and i think we should track CALL whatever the value \nof pgss_track_utility is.\n\nI think so because we are tracking function calls in all the cases \n(because \"linked\" to select aka not a utility) and i don't see any \nreasons why not to do the same for procedure calls.\n\nPlease find attached v2 as an attempt to do so.\n\nWith v2 we get things like:\n\npostgres=# set pg_stat_statements.track_utility=on;\nSET\npostgres=# call MY_PROC(20);\nCALL\npostgres=# call MY_PROC(10);\nCALL\npostgres=# set enable_seqscan=false;\nSET\npostgres=# set enable_seqscan=true;\nSET\npostgres=# select queryid,query,calls from pg_stat_statements;\n queryid | query | calls\n---------------------+-----------------------------------------+-------\n 4670878543381973400 | set pg_stat_statements.track_utility=$1 | 1\n -640317129591544054 | set enable_seqscan=$1 | 2\n 492647827690744963 | select pg_stat_statements_reset() | 1\n 6541399678435597534 | call MY_PROC($1) | 2\n\nand\n\npostgres=# set pg_stat_statements.track_utility=off;\nSET\npostgres=# call MY_PROC(10);\nCALL\npostgres=# call MY_PROC(20);\nCALL\npostgres=# set enable_seqscan=true;\nSET\npostgres=# set 
enable_seqscan=false;\nSET\npostgres=# select queryid,query,calls from pg_stat_statements;\n queryid | query | calls\n---------------------+-----------------------------------------+-------\n 4670878543381973400 | set pg_stat_statements.track_utility=$1 | 1\n 492647827690744963 | select pg_stat_statements_reset() | 1\n 6541399678435597534 | call MY_PROC($1) | 2\n(3 rows)\n\n> It might also be worth a small update to the doc for track_utility \n> about how it behaves, in this regard.\n>\n> https://www.postgresql.org/docs/14/pgstatstatements.html#id-1.11.7.39.9\n>\n> Example updated sentence:\n> > |pg_stat_statements.track_utility| controls whether <<most>> utility \n> commands are tracked by the module. Utility commands are all those \n> other than |SELECT|, |INSERT|, |UPDATE| and |DELETE| <<, but this \n> parameter does not disable tracking of PREPARE, EXECUTE or CALL>>. The \n> default value is |on|. Only superusers can change this setting.\n\nAgree, wording added to v2.\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services:https://aws.amazon.com",
"msg_date": "Thu, 1 Sep 2022 12:55:11 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "> Please find attached v2 as an attempt to do so.\r\n\r\n+1 to the idea.\r\nI think it will be better to evaluate jstate instead of\r\nJUMBLE_UTILITY, such as:\r\n\r\nif (query->utilityStmt && !jstate)\r\n\r\ninstead of\r\n\r\nif (query->utilityStmt && !JUMBLE_UTILITY(query->utilityStmt))\r\n\r\nThis will allow for support of potentially other utility statements\r\nIn the future without having to teach pg_stat_statements about them.\r\nIf a jstate is set for the utility statements, pgss will do the right thing.\r\n\r\n\r\nThanks\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n> Please find attached v2 as an attempt to do so.\n+1 to the idea.\nI think it will be better to evaluate jstate instead of\nJUMBLE_UTILITY, such as:\n \nif (query->utilityStmt && !jstate)\n \ninstead of\n \nif (query->utilityStmt && !JUMBLE_UTILITY(query->utilityStmt))\n \nThis will allow for support of potentially other utility statements\nIn the future without having to teach pg_stat_statements about them.\nIf a jstate is set for the utility statements, pgss will do the right thing.\n \n \nThanks\n \n--\nSami Imseih\nAmazon Web Services (AWS)",
"msg_date": "Thu, 1 Sep 2022 15:13:42 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 9/1/22 5:13 PM, Imseih (AWS), Sami wrote:\n>\n> > Please find attached v2 as an attempt to do so.\n>\n> +1 to the idea.\n>\nThanks for looking at it!\n\n> I think it will be better to evaluate jstate instead of\n>\n> JUMBLE_UTILITY, such as:\n>\n> if (query->utilityStmt && !jstate)\n>\n> instead of\n>\n> if (query->utilityStmt && !JUMBLE_UTILITY(query->utilityStmt))\n>\n> This will allow for support of potentially other utility statements\n>\n> In the future without having to teach pg_stat_statements about them.\n>\n> If a jstate is set for the utility statements, pgss will do the right \n> thing.\n>\nFair point, thanks!\n\nv3 including this change is attached.\n\nThanks,\n\n-- \n\nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com",
"msg_date": "Fri, 2 Sep 2022 11:06:50 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "This is ready for committer but I suggest the following for the\r\ndoc changes:\r\n\r\n1.\r\nPlannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) are\r\ncombined into a single pg_stat_statements entry whenever they have\r\nidentical query structures according to an internal hash calculation.\r\nTypically, two queries will be considered the same for this purpose\r\nif they are semantically equivalent except for the values of literal\r\nconstants appearing in the query. Utility commands (that is, all other commands)\r\nare compared strictly on the basis of their textual query strings, however.\r\n -- to --\r\nPlannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) as\r\nwell as CALL and SET commands are combined into a single\r\npg_stat_statements entry whenever they have identical query\r\nstructures according to an internal hash calculation.\r\nTypically, two queries will be considered the same for this purpose\r\nif they are semantically equivalent except for the values of literal\r\nconstants appearing in the command. All other commands are compared\r\nstrictly on the basis of their textual query strings, however.\r\n\r\n2.\r\n\r\npg_stat_statements.track_utility controls whether utility\r\ncommands are tracked by the module. Utility commands\r\nare all those other than SELECT, INSERT, UPDATE and DELETE.\r\nThe default value is on. Only superusers can change this setting.\r\n -- to --\r\npg_stat_statements.track_utility controls whether utility commands\r\nare tracked by the module. Tracked utility commands are all those\r\nother than SELECT, INSERT, UPDATE, DELETE, CALL and SET.\r\nThe default value is on. 
Only superusers can change this setting.\r\n\r\n--\r\nThanks,\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\nFrom: \"Drouvot, Bertrand\" <bdrouvot@amazon.com>\r\nDate: Friday, September 2, 2022 at 4:06 AM\r\nTo: \"Imseih (AWS), Sami\" <simseih@amazon.com>, \"Schneider (AWS), Jeremy\" <schnjere@amazon.com>, Andres Freund <andres@anarazel.de>\r\nCc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>, Peter Eisentraut <peter.eisentraut@enterprisedb.com>, Pavel Stehule <pavel.stehule@gmail.com>, Nikolay Samokhvalov <samokhvalov@gmail.com>\r\nSubject: Re: [PATCH] Query Jumbling for CALL and SET utility statements\r\n\r\n\r\nHi,\r\nOn 9/1/22 5:13 PM, Imseih (AWS), Sami wrote:\r\n\r\n> Please find attached v2 as an attempt to do so.\r\n\r\n+1 to the idea.\r\n\r\nThanks for looking at it!\r\nI think it will be better to evaluate jstate instead of\r\nJUMBLE_UTILITY, such as:\r\n\r\nif (query->utilityStmt && !jstate)\r\n\r\ninstead of\r\n\r\nif (query->utilityStmt && !JUMBLE_UTILITY(query->utilityStmt))\r\n\r\nThis will allow for support of potentially other utility statements\r\nIn the future without having to teach pg_stat_statements about them.\r\nIf a jstate is set for the utility statements, pgss will do the right thing.\r\n\r\n\r\n\r\nFair point, thanks!\r\n\r\nv3 including this change is attached.\r\n\r\nThanks,\r\n--\r\n\r\nBertrand Drouvot\r\n\r\nPostgreSQL Contributors Team\r\n\r\nRDS Open Source Databases\r\n\r\nAmazon Web Services: https://aws.amazon.com\r\n\n\n\n\n\n\n\n\n\nThis is ready for committer but I suggest the following for the\ndoc changes:\n \n1.\nPlannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) are\r\n\ncombined into a single pg_stat_statements entry whenever they have\r\n\nidentical query structures according to an internal hash calculation.\r\n\nTypically, two queries will be considered the same for this purpose\r\n\nif they are semantically equivalent except for the values of literal\r\n\nconstants appearing in the query. 
Utility commands (that is, all other commands)\r\n\nare compared strictly on the basis of their textual query strings, however.\n -- to --\nPlannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) as\r\n\nwell as CALL and SET commands are combined into a single \n\npg_stat_statements entry whenever they have identical query \n\nstructures according to an internal hash calculation. \nTypically, two queries will be considered the same for this purpose\r\n\nif they are semantically equivalent except for the values of literal\r\n\nconstants appearing in the command. All other commands are compared\r\n\nstrictly on the basis of their textual query strings, however.\n \n2.\n \npg_stat_statements.track_utility controls whether utility \n\ncommands are tracked by the module. Utility commands \nare all those other than SELECT, INSERT, UPDATE and DELETE. \n\nThe default value is on. Only superusers can change this setting.\n -- to --\npg_stat_statements.track_utility controls whether utility commands\r\n\nare tracked by the module. Tracked utility commands are all those\nother than SELECT, INSERT, UPDATE, DELETE, CALL and SET. \n\nThe default value is on. 
Only superusers can change this setting.\n \n--\nThanks,\nSami Imseih\nAmazon Web Services (AWS)\n \n\nFrom: \"Drouvot, Bertrand\" <bdrouvot@amazon.com>\nDate: Friday, September 2, 2022 at 4:06 AM\nTo: \"Imseih (AWS), Sami\" <simseih@amazon.com>, \"Schneider (AWS), Jeremy\" <schnjere@amazon.com>, Andres Freund <andres@anarazel.de>\nCc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>, Peter Eisentraut <peter.eisentraut@enterprisedb.com>, Pavel Stehule <pavel.stehule@gmail.com>, Nikolay Samokhvalov <samokhvalov@gmail.com>\nSubject: Re: [PATCH] Query Jumbling for CALL and SET utility statements\n\n\n \n\nHi,\n\nOn 9/1/22 5:13 PM, Imseih (AWS), Sami wrote:\n\n\n> Please find attached v2 as an attempt to do so.\n+1 to the idea.\n\nThanks for looking at it!\n\nI think it will be better to evaluate jstate instead of\nJUMBLE_UTILITY, such as:\n \nif (query->utilityStmt && !jstate)\n \ninstead of\n \nif (query->utilityStmt && !JUMBLE_UTILITY(query->utilityStmt))\n \nThis will allow for support of potentially other utility statements\nIn the future without having to teach pg_stat_statements about them.\nIf a jstate is set for the utility statements, pgss will do the right thing.\n \n \n\nFair point, thanks!\nv3 including this change is attached. \nThanks,\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 7 Sep 2022 15:48:36 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On 9/2/22 2:06 AM, Drouvot, Bertrand wrote:\n> v3 including this change is attached.\n\nFYI: \"reset all\" core dumps with v3\n\nI didn't fully debug yet, but here's the backtrace on my 14.4 build with\nthe patch\n\n[postgres@ip-172-31-44-176 data]$ gdb /usr/local/pgsql-14.4/bin/postgres\ncore.27217\n...\nCore was generated by `postgres: postgres postgres [local]\nRESET '.\nProgram terminated with signal 11, Segmentation fault.\n#0 0x00007f7776ae4821 in __strlen_sse2_pminub () from /lib64/libc.so.6\n...\n(gdb) bt\n#0 0x00007f7776ae4821 in __strlen_sse2_pminub () from /lib64/libc.so.6\n#1 0x00000000008e061c in JumbleExpr (jstate=0x1cf7f80, node=<optimized\nout>) at queryjumble.c:400\n#2 0x00000000008dfdd8 in JumbleQueryInternal (jstate=0x1cf7f80,\nquery=0x1cf7e70) at queryjumble.c:247\n#3 0x00000000008e0b4b in JumbleQuery (query=query@entry=0x1cf7e70,\nquerytext=querytext@entry=0x1cf72f8 \"reset all;\") at queryjumble.c:127\n#4 0x000000000056ba4b in parse_analyze (parseTree=0x1cf7ce0,\nsourceText=0x1cf72f8 \"reset all;\", paramTypes=0x0, numParams=<optimized\nout>, queryEnv=0x0) at analyze.c:130\n#5 0x000000000079df63 in pg_analyze_and_rewrite\n(parsetree=parsetree@entry=0x1cf7ce0,\nquery_string=query_string@entry=0x1cf72f8 \"reset all;\",\n paramTypes=paramTypes@entry=0x0, numParams=numParams@entry=0,\nqueryEnv=queryEnv@entry=0x0) at postgres.c:657\n#6 0x000000000079e472 in exec_simple_query (query_string=0x1cf72f8\n\"reset all;\") at postgres.c:1130\n#7 0x000000000079f9d3 in PostgresMain (argc=argc@entry=1,\nargv=argv@entry=0x7ffd0c341f80, dbname=0x1d44948 \"postgres\",\nusername=<optimized out>) at postgres.c:4496\n#8 0x000000000048c9f3 in BackendRun (port=<optimized out>,\nport=<optimized out>) at postmaster.c:4530\n#9 BackendStartup (port=0x1d3bdd0) at postmaster.c:4252\n#10 ServerLoop () at postmaster.c:1745\n#11 0x0000000000721332 in PostmasterMain (argc=argc@entry=5,\nargv=argv@entry=0x1cf1e10) at postmaster.c:1417\n#12 0x000000000048da6e in 
main (argc=5, argv=0x1cf1e10) at main.c:209\n\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n\n\n\n\nOn 9/2/22 2:06 AM, Drouvot, Bertrand\n wrote:\n\n\n\n v3 including this change is attached.\n\n FYI: \"reset all\" core dumps with v3\n\n I didn't fully debug yet, but here's the backtrace on my 14.4 build\n with the patch\n\n [postgres@ip-172-31-44-176 data]$ gdb\n /usr/local/pgsql-14.4/bin/postgres core.27217\n ...\n Core was generated by `postgres: postgres postgres [local]\n RESET '.\n Program terminated with signal 11, Segmentation fault.\n #0 0x00007f7776ae4821 in __strlen_sse2_pminub () from\n /lib64/libc.so.6\n ...\n (gdb) bt\n #0 0x00007f7776ae4821 in __strlen_sse2_pminub () from\n /lib64/libc.so.6\n #1 0x00000000008e061c in JumbleExpr (jstate=0x1cf7f80,\n node=<optimized out>) at queryjumble.c:400\n #2 0x00000000008dfdd8 in JumbleQueryInternal (jstate=0x1cf7f80,\n query=0x1cf7e70) at queryjumble.c:247\n #3 0x00000000008e0b4b in JumbleQuery (query=query@entry=0x1cf7e70,\n querytext=querytext@entry=0x1cf72f8 \"reset all;\") at\n queryjumble.c:127\n #4 0x000000000056ba4b in parse_analyze (parseTree=0x1cf7ce0,\n sourceText=0x1cf72f8 \"reset all;\", paramTypes=0x0,\n numParams=<optimized out>, queryEnv=0x0) at analyze.c:130\n #5 0x000000000079df63 in pg_analyze_and_rewrite\n (parsetree=parsetree@entry=0x1cf7ce0,\n query_string=query_string@entry=0x1cf72f8 \"reset all;\",\n paramTypes=paramTypes@entry=0x0, numParams=numParams@entry=0,\n queryEnv=queryEnv@entry=0x0) at postgres.c:657\n #6 0x000000000079e472 in exec_simple_query (query_string=0x1cf72f8\n \"reset all;\") at postgres.c:1130\n #7 0x000000000079f9d3 in PostgresMain (argc=argc@entry=1,\n argv=argv@entry=0x7ffd0c341f80, dbname=0x1d44948 \"postgres\",\n username=<optimized out>) at postgres.c:4496\n #8 0x000000000048c9f3 in BackendRun (port=<optimized out>,\n port=<optimized out>) at postmaster.c:4530\n #9 BackendStartup (port=0x1d3bdd0) at postmaster.c:4252\n #10 ServerLoop () 
at postmaster.c:1745\n #11 0x0000000000721332 in PostmasterMain (argc=argc@entry=5,\n argv=argv@entry=0x1cf1e10) at postmaster.c:1417\n #12 0x000000000048da6e in main (argc=5, argv=0x1cf1e10) at\n main.c:209\n\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services",
"msg_date": "Wed, 7 Sep 2022 18:19:42 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 06:19:42PM -0700, Jeremy Schneider wrote:\n> I didn't fully debug yet, but here's the backtrace on my 14.4 build with\n> the patch\n\nWhat happens on HEAD? That would be the target branch for a new\nfeature.\n--\nMichael",
"msg_date": "Thu, 8 Sep 2022 14:23:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 9/8/22 7:23 AM, Michael Paquier wrote:\n> On Wed, Sep 07, 2022 at 06:19:42PM -0700, Jeremy Schneider wrote:\n>> I didn't fully debug yet, but here's the backtrace on my 14.4 build with\n>> the patch\n\nThanks Jeremy for reporting the issue!\n\n> What happens on HEAD? That would be the target branch for a new\n> feature.\n\nJust tested and i can see the same issue on HEAD.\n\nIssue is on stmt->name being NULL here:\n\nBreakpoint 2, JumbleExpr (jstate=0x55d60e769e30, node=0x55d60e769b60) at \nqueryjumble.c:364\n364 if (node == NULL)\n(gdb) n\n368 check_stack_depth();\n(gdb)\n374 APP_JUMB(node->type);\n(gdb)\n376 switch (nodeTag(node))\n(gdb)\n398 VariableSetStmt *stmt = \n(VariableSetStmt *) node;\n(gdb) n\n400 APP_JUMB_STRING(stmt->name);\n(gdb) p stmt->name\n$1 = 0x0\n\nI'll have a closer look.\n\nRegards,\n\n-- \n\nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com\n\n\n\n\n\n\nHi,\n\nOn 9/8/22 7:23 AM, Michael Paquier\n wrote:\n\n\nOn Wed, Sep 07, 2022 at 06:19:42PM -0700, Jeremy Schneider wrote:\n\n\nI didn't fully debug yet, but here's the backtrace on my 14.4 build with\nthe patch\n\n\n\nThanks Jeremy for reporting the issue!\n\n\nWhat happens on HEAD? That would be the target branch for a new\nfeature.\n\nJust tested and i can see the same issue on HEAD.\nIssue is on stmt->name being NULL here:\nBreakpoint 2, JumbleExpr (jstate=0x55d60e769e30,\n node=0x55d60e769b60) at queryjumble.c:364\n 364 if (node == NULL)\n (gdb) n\n 368 check_stack_depth();\n (gdb)\n 374 APP_JUMB(node->type);\n (gdb)\n 376 switch (nodeTag(node))\n (gdb)\n 398 VariableSetStmt *stmt =\n (VariableSetStmt *) node;\n (gdb) n\n 400 \n APP_JUMB_STRING(stmt->name);\n (gdb) p stmt->name\n $1 = 0x0\n\nI'll have a closer look.\nRegards,\n\n --\n Bertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 8 Sep 2022 08:49:13 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On Thu, Sep 08, 2022 at 02:23:19PM +0900, Michael Paquier wrote:\n> On Wed, Sep 07, 2022 at 06:19:42PM -0700, Jeremy Schneider wrote:\n> > I didn't fully debug yet, but here's the backtrace on my 14.4 build with\n> > the patch\n> \n> What happens on HEAD? That would be the target branch for a new\n> feature.\n\nIt would be the same AFAICS. From v3:\n\n+\t\tcase T_VariableSetStmt:\n+\t\t\t{\n+\t\t\t\tVariableSetStmt *stmt = (VariableSetStmt *) node;\n+\n+\t\t\t\tAPP_JUMB_STRING(stmt->name);\n+\t\t\t\tJumbleExpr(jstate, (Node *) stmt->args);\n+\t\t\t}\n\nFor a RESET ALL command stmt->name is NULL.\n\n\n",
"msg_date": "Thu, 8 Sep 2022 14:50:12 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 9/8/22 8:50 AM, Julien Rouhaud wrote:\n\nThanks for looking at it!\n\n> On Thu, Sep 08, 2022 at 02:23:19PM +0900, Michael Paquier wrote:\n>> On Wed, Sep 07, 2022 at 06:19:42PM -0700, Jeremy Schneider wrote:\n>>> I didn't fully debug yet, but here's the backtrace on my 14.4 build with\n>>> the patch\n>> What happens on HEAD? That would be the target branch for a new\n>> feature.\n> It would be the same AFAICS. From v3:\n>\n> + case T_VariableSetStmt:\n> + {\n> + VariableSetStmt *stmt = (VariableSetStmt *) node;\n> +\n> + APP_JUMB_STRING(stmt->name);\n> + JumbleExpr(jstate, (Node *) stmt->args);\n> + }\n>\n> For a RESET ALL command stmt->name is NULL.\n\nRight, please find attached v4 addressing the issue and also Sami's \ncomments [1].\n\n\n[1]: \nhttps://www.postgresql.org/message-id/82A35172-BEB3-4DFA-B11C-AE5E50A0F932%40amazon.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com",
"msg_date": "Thu, 8 Sep 2022 11:06:51 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn Thu, Sep 08, 2022 at 11:06:51AM +0200, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 9/8/22 8:50 AM, Julien Rouhaud wrote:\n> \n> Thanks for looking at it!\n> \n> > On Thu, Sep 08, 2022 at 02:23:19PM +0900, Michael Paquier wrote:\n> > > On Wed, Sep 07, 2022 at 06:19:42PM -0700, Jeremy Schneider wrote:\n> > > > I didn't fully debug yet, but here's the backtrace on my 14.4 build with\n> > > > the patch\n> > > What happens on HEAD? That would be the target branch for a new\n> > > feature.\n> > It would be the same AFAICS. From v3:\n> > \n> > + case T_VariableSetStmt:\n> > + {\n> > + VariableSetStmt *stmt = (VariableSetStmt *) node;\n> > +\n> > + APP_JUMB_STRING(stmt->name);\n> > + JumbleExpr(jstate, (Node *) stmt->args);\n> > + }\n> > \n> > For a RESET ALL command stmt->name is NULL.\n> \n> Right, please find attached v4 addressing the issue and also Sami's comments\n> [1].\n\n(Sorry I've not been following this thread until now)\n\nIME if your application relies on 2PC it's very likely that you will hit the\nexact same problems described in your original email. What do you think about\nnormalizing those too while working on the subject?\n\n\n",
"msg_date": "Thu, 8 Sep 2022 19:29:19 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 9/8/22 1:29 PM, Julien Rouhaud wrote:\n> Hi,\n>\n> On Thu, Sep 08, 2022 at 11:06:51AM +0200, Drouvot, Bertrand wrote:\n>> Hi,\n>>\n>> On 9/8/22 8:50 AM, Julien Rouhaud wrote:\n>>\n>> Thanks for looking at it!\n>>\n>>> On Thu, Sep 08, 2022 at 02:23:19PM +0900, Michael Paquier wrote:\n>>>> On Wed, Sep 07, 2022 at 06:19:42PM -0700, Jeremy Schneider wrote:\n>>>>> I didn't fully debug yet, but here's the backtrace on my 14.4 build with\n>>>>> the patch\n>>>> What happens on HEAD? That would be the target branch for a new\n>>>> feature.\n>>> It would be the same AFAICS. From v3:\n>>>\n>>> + case T_VariableSetStmt:\n>>> + {\n>>> + VariableSetStmt *stmt = (VariableSetStmt *) node;\n>>> +\n>>> + APP_JUMB_STRING(stmt->name);\n>>> + JumbleExpr(jstate, (Node *) stmt->args);\n>>> + }\n>>>\n>>> For a RESET ALL command stmt->name is NULL.\n>> Right, please find attached v4 addressing the issue and also Sami's comments\n>> [1].\n> (Sorry I've not been following this thread until now)\n>\n> IME if your application relies on 2PC it's very likely that you will hit the\n> exact same problems described in your original email.\n\nAgree\n\n> What do you think about\n> normalizing those too while working on the subject?\n\nThat sounds reasonable, I'll have a look at those too while at it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com\n\n\n\n\n\n\nHi,\n\nOn 9/8/22 1:29 PM, Julien Rouhaud\n wrote:\n \n\nHi,\n\nOn Thu, Sep 08, 2022 at 11:06:51AM +0200, Drouvot, Bertrand wrote:\n\n\nHi,\n\nOn 9/8/22 8:50 AM, Julien Rouhaud wrote:\n\nThanks for looking at it!\n\n\n\nOn Thu, Sep 08, 2022 at 02:23:19PM +0900, Michael Paquier wrote:\n\n\nOn Wed, Sep 07, 2022 at 06:19:42PM -0700, Jeremy Schneider wrote:\n\n\nI didn't fully debug yet, but here's the backtrace on my 14.4 build with\nthe patch\n\n\nWhat happens on HEAD? 
That would be the target branch for a new\nfeature.\n\n\nIt would be the same AFAICS. From v3:\n\n+ case T_VariableSetStmt:\n+ {\n+ VariableSetStmt *stmt = (VariableSetStmt *) node;\n+\n+ APP_JUMB_STRING(stmt->name);\n+ JumbleExpr(jstate, (Node *) stmt->args);\n+ }\n\nFor a RESET ALL command stmt->name is NULL.\n\n\n\nRight, please find attached v4 addressing the issue and also Sami's comments\n[1].\n\n\n\n(Sorry I've not been following this thread until now)\n\nIME if your application relies on 2PC it's very likely that you will hit the\nexact same problems described in your original email. \n\nAgree\n\n\n What do you think about\nnormalizing those too while working on the subject?\n\nThat sounds reasonable, I'll have a look at those too while at\n it.\n\nRegards,\n\n\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 8 Sep 2022 18:07:05 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 9/8/22 6:07 PM, Drouvot, Bertrand wrote:\n> On 9/8/22 1:29 PM, Julien Rouhaud wrote:\n>>\n>> IME if your application relies on 2PC it's very likely that you will hit the\n>> exact same problems described in your original email.\n>\n> Agree\n>\n>> What do you think about\n>> normalizing those too while working on the subject?\n>\n> That sounds reasonable, I'll have a look at those too while at it.\n>\nAttached v5 to normalize 2PC commands too, so that we get things like:\n\ncreate table test_tx (a int);\nbegin;\nprepare transaction 'tx1';\ninsert into test_tx values (1);\ncommit prepared 'tx1';\nbegin;\nprepare transaction 'tx2';\ninsert into test_tx values (2);\ncommit prepared 'tx2';\nbegin;\nprepare transaction 'tx3';\ninsert into test_tx values (3);\nrollback prepared 'tx3';\nbegin;\nprepare transaction 'tx4';\ninsert into test_tx values (4);\nrollback prepared 'tx4';\nSELECT query, calls, rows FROM pg_stat_statements ORDER BY query COLLATE \n\"C\";\nquery | calls | rows\n------------------------------------------------------------------------------+-------+------\n SELECT pg_stat_statements_reset() | 1 | 1\n SELECT query, calls, rows FROM pg_stat_statements ORDER BY query \nCOLLATE \"C\" | 0 | 0\n begin | 4 | 0\n commit prepared $1 | 2 | 0\n create table test_tx (a \nint) | 1 | 0\n insert into test_tx values \n($1) | 4 | 4\n prepare transaction \n$1 | 4 | 0\n rollback prepared \n$1 | 2 | 0\n(8 rows)\n\nFor those ones I also had to do some minor changes in gram.y and to the \nTransactionStmt struct to record the gid location.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com",
"msg_date": "Fri, 9 Sep 2022 12:11:50 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "> Attached v5 to normalize 2PC commands too, so that we get things like:\n> \n> \n> create table test_tx (a int);\n> begin;\n> prepare transaction 'tx1';\n> insert into test_tx values (1);\n> commit prepared 'tx1';\n> begin;\n> prepare transaction 'tx2';\n> insert into test_tx values (2);\n> commit prepared 'tx2';\n> begin;\n> prepare transaction 'tx3';\n> insert into test_tx values (3);\n> rollback prepared 'tx3';\n> begin;\n> prepare transaction 'tx4';\n> insert into test_tx values (4);\n> rollback prepared 'tx4';\n> SELECT query, calls, rows FROM pg_stat_statements ORDER BY query\n> COLLATE \"C\";\n> query\n> | calls | rows\n> ------------------------------------------------------------------------------+-------+------\n> SELECT pg_stat_statements_reset()\n> | 1 | 1\n> SELECT query, calls, rows FROM pg_stat_statements ORDER BY query\n> COLLATE \"C\" | 0 | 0\n> begin\n> | 4 | 0\n> commit prepared $1\n> | 2 | 0\n> create table test_tx (a int)\n> | 1 | 0\n> insert into test_tx values ($1)\n> | 4 | 4\n> prepare transaction $1\n> | 4 | 0\n> rollback prepared $1\n> | 2 | 0\n> (8 rows)\n> \n> For those ones I also had to do some minor changes in gram.y and to\n> the TransactionStmt struct to record the gid location.\n\nThanks Bertrand.\nI used your patch. It's looks very good.\nI found that utility statement is counted separately in upper and lower \ncase.\nFor example BEGIN and begin are counted separately.\nIs it difficult to fix this problem?\n\nRegards,\n\nKotaro Kawamoto\n\n\n",
"msg_date": "Tue, 13 Sep 2022 11:43:52 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 13, 2022 at 11:43:52AM +0900, bt22kawamotok wrote:\n>\n> I found that utility statement is counted separately in upper and lower\n> case.\n> For example BEGIN and begin are counted separately.\n> Is it difficult to fix this problem?\n\nThis is a known behavior, utility command aren't normalized (apart from the few\nthat will be with this patch) and the queryid is just a hash of the provided\nquery string.\n\nIt seems unrelated to this patch though. While it can be a bit annoying, it's\nunlikely that the application will have thousands of way to ask for a new\ntransaction (mixing case, adding a random number of spaces between BEGIN and\nTRANSACTION and so on), so in real life it won't cause any problem. Fixing it\nwould require to fully jumble all utility statements, which would require a\nseparate discussion.\n\n\n",
"msg_date": "Tue, 13 Sep 2022 12:33:16 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
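{
"msg_contents": "The behavior described here — utility commands keyed on the raw query text — can be contrasted with a jumble-style id in a small sketch (Python; function names are illustrative, and real jumbling hashes parse-tree nodes rather than lowercased tokens):\n\n```python\nimport hashlib\n\ndef raw_queryid(sql: str) -> int:\n    # Today's behavior for most utility commands: the queryid is just a\n    # hash of the query text as provided, so \"BEGIN\" and \"begin\" differ.\n    return int.from_bytes(hashlib.sha256(sql.encode()).digest()[:8], \"little\")\n\ndef jumbled_queryid(sql: str) -> int:\n    # Hypothetical jumble-style id built from normalized tokens instead of\n    # the raw string, so case and whitespace no longer matter.\n    tokens = \" \".join(sql.lower().split())\n    return int.from_bytes(hashlib.sha256(tokens.encode()).digest()[:8], \"little\")\n```\n\nUnder the raw scheme, every casing and spacing variant of BEGIN gets its own pg_stat_statements entry; under the jumbled scheme they share one id."
},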
{
"msg_contents": "Hi,\n\nOn 9/13/22 6:33 AM, Julien Rouhaud wrote:\n> Hi,\n>\n> On Tue, Sep 13, 2022 at 11:43:52AM +0900, bt22kawamotok wrote:\n>> I found that utility statement is counted separately in upper and lower\n>> case.\n>> For example BEGIN and begin are counted separately.\n>> Is it difficult to fix this problem?\n> This is a known behavior, utility command aren't normalized (apart from the few\n> that will be with this patch) and the queryid is just a hash of the provided\n> query string.\n>\n> It seems unrelated to this patch though. While it can be a bit annoying, it's\n> unlikely that the application will have thousands of way to ask for a new\n> transaction (mixing case, adding a random number of spaces between BEGIN and\n> TRANSACTION and so on), so in real life it won't cause any problem.\n\nAgree that it seems unlikely to cause any problem (as compare to the \nutility statements that are handled in this patch).\n\n> Fixing it\n> would require to fully jumble all utility statements, which would require a\n> separate discussion.\n\nAgree.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n\nAmazon Web Services EMEA SARL, 38 avenue John F. Kennedy, L-1855 Luxembourg, R.C.S. Luxembourg B186284\n\nAmazon Web Services EMEA SARL, succursale francaise, 31 Place des Corolles, Tour Carpe Diem, F-92400 Courbevoie, SIREN 831 001 334, RCS Nanterre, APE 6311Z, TVA FR30831001334\n\n\n",
"msg_date": "Tue, 13 Sep 2022 07:30:23 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "> Attached v5 to normalize 2PC commands too, so that we get things like:\r\nA nit on the documentation for v5, otherwise lgtm.\r\n\r\nPlannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) as well as CALL, SET\r\nand two-phase commit commands PREPARE TRANSACTION, , COMMIT PREPARED\r\nand ROLLBACK PREPARED are combined\r\n\r\n---- to ----\r\n\r\nPlannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) as well as CALL,\r\nSET, PREPARE TRANSACTION, COMMIT PREPARED and ROLLBACK PREPARED are combined\r\n\r\n\r\n---\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n> Attached v5 to normalize 2PC commands too, so that we get things like:\nA nit on the documentation for v5, otherwise lgtm.\n \nPlannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) as well as CALL, SET\nand two-phase commit commands PREPARE TRANSACTION, , COMMIT PREPARED\nand ROLLBACK PREPARED are combined\n \n---- to ----\n \nPlannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) as well as CALL,\nSET, PREPARE TRANSACTION, COMMIT PREPARED and ROLLBACK PREPARED are combined\n \n \n---\n \nRegards,\n \nSami Imseih\nAmazon Web Services (AWS)",
"msg_date": "Wed, 14 Sep 2022 13:20:09 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "\n\nOn 2022/09/09 19:11, Drouvot, Bertrand wrote:\n>>> IME if your application relies on 2PC it's very likely that you will hit the\n>>> exact same problems described in your original email.\n\nThe utility commands for cursor like DECLARE CURSOR seem to have the same issue\nand can cause lots of pgss entries. For example, when we use postgres_fdw and\nexecute \"SELECT * FROM <foreign table> WHERE id = 10\" five times in the same\ntransaction, the following commands are executed in the remote PostgreSQL server\nand recorded as pgss entries there.\n\nDECLARE c1 CURSOR FOR ...\nDECLARE c2 CURSOR FOR ...\nDECLARE c3 CURSOR FOR ...\nDECLARE c4 CURSOR FOR ...\nDECLARE c5 CURSOR FOR ...\nFETCH 100 FROM c1\nFETCH 100 FROM c2\nFETCH 100 FROM c3\nFETCH 100 FROM c4\nFETCH 100 FROM c5\nCLOSE c1\nCLOSE c2\nCLOSE c3\nCLOSE c4\nCLOSE c5\n\nFurthermore, if the different query on foreign table is executed in the next\ntransaction, it may reuse the same cursor name previously used by another query.\nThat is, different queries can cause the same FETCH command like\n\"FETCH 100 FROM c1\". This would be also an issue.\n\nI'm not sure if the patch should also handle cursor cases. We can implement\nthat separately later if necessary.\n\nI don't think that the patch should include the fix for cursor cases. It can be implemented separately later if necessary.\n\n\n> Attached v5 to normalize 2PC commands too, so that we get things like:\n\n+\t\tcase T_VariableSetStmt:\n+\t\t\t{\n+\t\t\t\tVariableSetStmt *stmt = (VariableSetStmt *) node;\n+\n+\t\t\t\t/* stmt->name is NULL for RESET ALL */\n+\t\t\t\tif (stmt->name)\n+\t\t\t\t{\n+\t\t\t\t\tAPP_JUMB_STRING(stmt->name);\n+\t\t\t\t\tJumbleExpr(jstate, (Node *) stmt->args);\n\nWith the patch, \"SET ... TO DEFAULT\" and \"RESET ...\" are counted as the same query.\nIs this intentional? Which might be ok because their behavior is basically the same.\nBut I'm afaid which may cause users to be confused. 
For example, they may fail to\nfind the pgss entry for RESET command they ran and just wonder why the command was\nnot recorded. To avoid such confusion, how about appending stmt->kind to the jumble?\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 16 Sep 2022 21:53:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
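{
"msg_contents": "The stmt->kind suggestion above can be modelled with a toy jumble (Python; the kind constants below are stand-ins, not PostgreSQL's actual VariableSetKind values): hashing only the GUC name makes SET ... TO DEFAULT and RESET ... collide, while mixing in a kind discriminator separates them.\n\n```python\nimport hashlib\n\nVAR_SET_DEFAULT, VAR_RESET = 1, 2  # stand-ins for VariableSetKind values\n\ndef jumble(name, kind=None):\n    h = hashlib.sha256()\n    if kind is not None:\n        h.update(kind.to_bytes(4, \"little\"))  # analogue of APP_JUMB(stmt->kind)\n    h.update(name.encode())                   # analogue of APP_JUMB_STRING(stmt->name)\n    return h.hexdigest()[:16]\n```\n\nCalling jumble(\"work_mem\") for both statements yields one shared id (the collision Fujii describes); jumble(\"work_mem\", VAR_SET_DEFAULT) and jumble(\"work_mem\", VAR_RESET) yield distinct ids."
},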
{
"msg_contents": "> The utility commands for cursor like DECLARE CURSOR seem to have the same issue\r\n> and can cause lots of pgss entries. For example, when we use postgres_fdw and\r\n> execute \"SELECT * FROM <foreign table> WHERE id = 10\" five times in the same\r\n> transaction, the following commands are executed in the remote PostgreSQL server\r\n> and recorded as pgss entries there.\r\n\r\n> DECLARE c1 CURSOR FOR ...\r\n> DECLARE c2 CURSOR FOR ...\r\n> DECLARE c3 CURSOR FOR ...\r\n\r\n+1\r\n\r\nI also made this observation recently and have a patch to suggest\r\nto improve tis situation. I will start a separate thread for this.\r\n\r\nRegards,\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n",
"msg_date": "Fri, 16 Sep 2022 15:08:59 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 9/16/22 2:53 PM, Fujii Masao wrote:\n> \n> \n>> Attached v5 to normalize 2PC commands too, so that we get things like:\n> \n> + case T_VariableSetStmt:\n> + {\n> + VariableSetStmt *stmt = (VariableSetStmt *) node;\n> +\n> + /* stmt->name is NULL for RESET ALL */\n> + if (stmt->name)\n> + {\n> + APP_JUMB_STRING(stmt->name);\n> + JumbleExpr(jstate, (Node *) stmt->args);\n> \n> With the patch, \"SET ... TO DEFAULT\" and \"RESET ...\" are counted as the \n> same query.\n> Is this intentional? \n\nThanks for looking at the patch!\nNo, it is not intentional, good catch!\n\n> Which might be ok because their behavior is \n> basically the same.\n> But I'm afaid which may cause users to be confused. For example, they \n> may fail to\n> find the pgss entry for RESET command they ran and just wonder why the \n> command was\n> not recorded. To avoid such confusion, how about appending stmt->kind to \n> the jumble?\n> Thought?\n\nI think that's a good idea and will provide a new version taking care of \nit (and also Sami's comments up-thread).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 16 Sep 2022 17:47:40 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 9/16/22 5:47 PM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 9/16/22 2:53 PM, Fujii Masao wrote:\n>>\n>>\n>>> Attached v5 to normalize 2PC commands too, so that we get things like:\n>>\n>> + case T_VariableSetStmt:\n>> + {\n>> + VariableSetStmt *stmt = (VariableSetStmt *) node;\n>> +\n>> + /* stmt->name is NULL for RESET ALL */\n>> + if (stmt->name)\n>> + {\n>> + APP_JUMB_STRING(stmt->name);\n>> + JumbleExpr(jstate, (Node *) stmt->args);\n>>\n>> With the patch, \"SET ... TO DEFAULT\" and \"RESET ...\" are counted as \n>> the same query.\n>> Is this intentional? \n> \n> Thanks for looking at the patch!\n> No, it is not intentional, good catch!\n> \n>> Which might be ok because their behavior is basically the same.\n>> But I'm afaid which may cause users to be confused. For example, they \n>> may fail to\n>> find the pgss entry for RESET command they ran and just wonder why the \n>> command was\n>> not recorded. To avoid such confusion, how about appending stmt->kind \n>> to the jumble?\n>> Thought?\n> \n> I think that's a good idea and will provide a new version taking care of \n> it (and also Sami's comments up-thread).\n\nPlease find attached v6 taking care of the remarks mentioned above.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 19 Sep 2022 08:29:22 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "\n\nOn 2022/09/19 15:29, Drouvot, Bertrand wrote:\n> Please find attached v6 taking care of the remarks mentioned above.\n\nThanks for updating the patch!\n\n+SET pg_stat_statements.track_utility = TRUE;\n+\n+-- PL/pgSQL procedure and pg_stat_statements.track = all\n+-- we drop and recreate the procedures to avoid any caching funnies\n+SET pg_stat_statements.track_utility = FALSE;\n\nCould you tell me why track_utility is enabled just before it's disabled?\n\nCould you tell me what actually happens if we don't drop and\nrecreate the procedures? I'd like to know what \"any caching funnies\" are.\n\n+SELECT pg_stat_statements_reset();\n+CALL MINUS_TWO(3);\n+CALL MINUS_TWO(7);\n+CALL SUM_TWO(3, 8);\n+CALL SUM_TWO(7, 5);\n+\n+SELECT query, calls, rows FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n\nThis test set for the procedures is executed with the following\nfour conditions, respectively. Do we really need all of these tests?\n\ntrack = top, track_utility = true\ntrack = top, track_utility = false\ntrack = all, track_utility = true\ntrack = all, track_utility = false\n\n+begin;\n+prepare transaction 'tx1';\n+insert into test_tx values (1);\n+commit prepared 'tx1';\n\nThe test set of 2PC commands is also executed with track_utility = on\nand off, respectively. But why do we need to run that test when\ntrack_utility = off?\n\n-\tif (query->utilityStmt)\n+\tif (query->utilityStmt && !jstate)\n \t{\n \t\tif (pgss_track_utility && !PGSS_HANDLED_UTILITY(query->utilityStmt))\n\n\"pgss_track_utility\" should be\n\"pgss_track_utility || FORCE_TRACK_UTILITY(parsetree)\" theoretically?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 22 Sep 2022 01:07:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 9/21/22 6:07 PM, Fujii Masao wrote:\n> \n> \n> On 2022/09/19 15:29, Drouvot, Bertrand wrote:\n>> Please find attached v6 taking care of the remarks mentioned above.\n> \n> Thanks for updating the patch!\n> \n> +SET pg_stat_statements.track_utility = TRUE;\n> +\n> +-- PL/pgSQL procedure and pg_stat_statements.track = all\n> +-- we drop and recreate the procedures to avoid any caching funnies\n> +SET pg_stat_statements.track_utility = FALSE;\n> \n> Could you tell me why track_utility is enabled just before it's disabled?\n\nThanks for looking at the new version!\n\nNo real reason, I removed those useless SET in the new V7 attached.\n\n> \n> Could you tell me what actually happens if we don't drop and\n> recreate the procedures? I'd like to know what \"any caching funnies\" are.\n\nWithout the drop/recreate the procedure body does not appear normalized \n(while the CALL itself is) when switching from track = top to track = all.\n\nI copy-pasted this comment from the already existing \"function\" section \nin the pg_stat_statements.sql file. This comment has been introduced for \nthe function section in commit 83f2061dd0. Note that the behavior would \nbe the same for the function case (function body does not appear \nnormalized without the drop/recreate).\n\n> \n> +SELECT pg_stat_statements_reset();\n> +CALL MINUS_TWO(3);\n> +CALL MINUS_TWO(7);\n> +CALL SUM_TWO(3, 8);\n> +CALL SUM_TWO(7, 5);\n> +\n> +SELECT query, calls, rows FROM pg_stat_statements ORDER BY query \n> COLLATE \"C\";\n> \n> This test set for the procedures is executed with the following\n> four conditions, respectively. 
Do we really need all of these tests?\n> \n> track = top, track_utility = true\n> track = top, track_utility = false\n> track = all, track_utility = true\n> track = all, track_utility = false\n\nOh right, the track_utility = false cases have been added when we \ndecided up-thread to force track CALL.\n\nBut now that's probably not needed to test with track_utility = true. So \nI'm just keeping track_utility = off with track = top or all in the new \nV7 attached (like this is the case for the \"function\" section).\n\n> \n> +begin;\n> +prepare transaction 'tx1';\n> +insert into test_tx values (1);\n> +commit prepared 'tx1';\n> \n> The test set of 2PC commands is also executed with track_utility = on\n> and off, respectively. But why do we need to run that test when\n> track_utility = off?\n\nThat's useless, thanks for pointing out. Removed in V7 attached.\n\n> \n> - if (query->utilityStmt)\n> + if (query->utilityStmt && !jstate)\n> {\n> if (pgss_track_utility && \n> !PGSS_HANDLED_UTILITY(query->utilityStmt))\n> \n> \"pgss_track_utility\" should be\n> \"pgss_track_utility || FORCE_TRACK_UTILITY(parsetree)\" theoretically?\n\nGood catch! That's not needed (in practice) with the current code but \nthat is \"theoretically\" needed indeed, let's add it in V7 attached \n(that's safer should the code change later on).\n\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 26 Sep 2022 12:40:34 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 08:29:22AM +0200, Drouvot, Bertrand wrote:\n> Please find attached v6 taking care of the remarks mentioned above.\n> + case T_VariableSetStmt:\n> + {\n> + VariableSetStmt *stmt = (VariableSetStmt *) node;\n> +\n> + /* stmt->name is NULL for RESET ALL */\n> + if (stmt->name)\n> + {\n> + APP_JUMB(stmt->kind);\n> + APP_JUMB_STRING(stmt->name);\n> + JumbleExpr(jstate, (Node *) stmt->args);\n> + }\n> + }\n> + break;\n\nHmm. If VariableSetStmt->is_local is not added to the jumble, then\naren't \"SET foo = $1\" and \"SET LOCAL foo = $1\" counted as the same\nquery? \n\nI am not seeing SAVEPOINT, RELEASE, ROLLBACK .. TO SAVEPOINT\nmentioned on this thread. Would these be worth considering in what\ngets compiled? That would cover the remaining bits of\nTransactionStmt. The ODBC driver abuses of savepoints, for example,\nso this could be useful for monitoring purposes in such cases.\n\nAs of the code stands, it could be cleaner to check\nIsJumbleUtilityAllowed() in compute_utility_query_id(), falling back \nto a default in JumbleQuery(). Not that what your patch does is\nincorrect, of course.\n--\nMichael",
"msg_date": "Thu, 6 Oct 2022 15:39:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 9/26/22 12:40 PM, Drouvot, Bertrand wrote:\n> let's add it in V7 attached\n> (that's safer should the code change later on).\n\nAttached a tiny rebase needed due to 249b0409b1.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 6 Oct 2022 10:36:01 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn 10/6/22 8:39 AM, Michael Paquier wrote:\n> On Mon, Sep 19, 2022 at 08:29:22AM +0200, Drouvot, Bertrand wrote:\n>> Please find attached v6 taking care of the remarks mentioned above.\n>> + case T_VariableSetStmt:\n>> + {\n>> + VariableSetStmt *stmt = (VariableSetStmt *) node;\n>> +\n>> + /* stmt->name is NULL for RESET ALL */\n>> + if (stmt->name)\n>> + {\n>> + APP_JUMB(stmt->kind);\n>> + APP_JUMB_STRING(stmt->name);\n>> + JumbleExpr(jstate, (Node *) stmt->args);\n>> + }\n>> + }\n>> + break;\n> \n> Hmm. If VariableSetStmt->is_local is not added to the jumble, then\n> aren't \"SET foo = $1\" and \"SET LOCAL foo = $1\" counted as the same\n> query?\n> \n\nGood catch, thanks!\nWhile at it let's also jumble \"SET SESSION foo =\". For this one, we \nwould need to record another bool in VariableSetStmt: I'll create a \ndedicated patch for that.\n\n\n> I am not seeing SAVEPOINT, RELEASE, ROLLBACK .. TO SAVEPOINT\n> mentioned on this thread. Would these be worth considering in what\n> gets compiled? That would cover the remaining bits of\n> TransactionStmt. The ODBC driver abuses of savepoints, for example,\n> so this could be useful for monitoring purposes in such cases.\n\nAgree. I'll look at those too.\n\n> \n> As of the code stands, it could be cleaner to check\n> IsJumbleUtilityAllowed() in compute_utility_query_id(), falling back\n> to a default in JumbleQuery(). Not that what your patch does is\n> incorrect, of course.\n\nWill look at it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 6 Oct 2022 10:43:57 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On Thu, Oct 06, 2022 at 10:43:57AM +0200, Drouvot, Bertrand wrote:\n> On 10/6/22 8:39 AM, Michael Paquier wrote:\n>> I am not seeing SAVEPOINT, RELEASE, ROLLBACK .. TO SAVEPOINT\n>> mentioned on this thread. Would these be worth considering in what\n>> gets compiled? That would cover the remaining bits of\n>> TransactionStmt. The ODBC driver abuses of savepoints, for example,\n>> so this could be useful for monitoring purposes in such cases.\n> \n> Agree. I'll look at those too.\n\nThanks.\n\nWhile studying a bit more this thread, I've been reminded of the fact\nthat this would treat different flavors of BEGIN/COMMIT commands (mix\nof upper/lower characters, etc.) as different entries in\npg_stat_statements, and it feels inconsistent to me that we'd begin \njumbling the 2PC and savepoint commands with their nodes but not do\nthat for the rest of the commands, even if, as mentioned upthread,\napplications may not mix grammars. If they do, one could finish by\nviewing incorrect reports, and I'd like to think that this would make\nthe life of a lot of people easier.\n\nSET/RESET and CALL have a much lower presence frequency than the\ntransaction commands, where it is fine by me to include both of these\nunder the utility statement switch. For OLTP workloads (I've seen\nquite a bit of 2PC used across multiple nodes for short transactions\nwith writes involving more than two remote nodes), with a lot of\nBEGIN/COMMIT or even 2PC commands issued, the performance could be\nnoticeable? It may make sense to control these with a different GUC\nswitch, where we drop completely the string-only approach under\ntrack_utility. In short, I don't have any objections about the\nbusiness with SET and CALL, but the transaction part worries me a\nbit. As a first step, we could cut the cake in two parts, and just\nfocus on SET/RESET and CALL, which was the main point of discussion\nof this thread to begin with.\n--\nMichael",
"msg_date": "Fri, 7 Oct 2022 12:41:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> While studying a bit more this thread, I've been reminded of the fact\n> that this would treat different flavors of BEGIN/COMMIT commands (mix\n> of upper/lower characters, etc.) as different entries in\n> pg_stat_statements, and it feels inconsistent to me that we'd begin \n> jumbling the 2PC and savepoint commands with their nodes but not do\n> that for the rest of the commands, even if, as mentioned upthread,\n> applications may not mix grammars.\n\nI've been thinking since the beginning of this thread that there\nwas no coherent, defensible rationale being offered for jumbling\nsome utility statements and not others.\n\nI wonder if the answer is to jumble them all. We avoided that\nup to now because it would imply a ton of manual effort and\nfuture code maintenance ... but now that the backend/nodes/\ninfrastructure is largely auto-generated, could we auto-generate\nthe jumbling code?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Oct 2022 23:51:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On Thu, Oct 06, 2022 at 11:51:52PM -0400, Tom Lane wrote:\n> I've been thinking since the beginning of this thread that there\n> was no coherent, defensible rationale being offered for jumbling\n> some utility statements and not others.\n\nYeah. The potential performance impact of all the TransactionStmts\nworries me a bit, though. \n\n> I wonder if the answer is to jumble them all. We avoided that\n> up to now because it would imply a ton of manual effort and\n> future code maintenance ... but now that the backend/nodes/\n> infrastructure is largely auto-generated, could we auto-generate\n> the jumbling code?\n\nProbably. One part that may be tricky though is the location of the\nconstants we'd like to make generic, but perhaps this could be handled\nby using a dedicated variable type that just maps to int? It does not\nseem like a mandatory requirement to add that everywhere as a first\nstep, either.\n--\nMichael",
"msg_date": "Fri, 7 Oct 2022 13:13:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On Thu, Oct 06, 2022 at 11:51:52PM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > While studying a bit more this thread, I've been reminded of the fact\n> > that this would treat different flavors of BEGIN/COMMIT commands (mix\n> > of upper/lower characters, etc.) as different entries in\n> > pg_stat_statements, and it feels inconsistent to me that we'd begin\n> > jumbling the 2PC and savepoint commands with their nodes but not do\n> > that for the rest of the commands, even if, as mentioned upthread,\n> > applications may not mix grammars.\n>\n> I've been thinking since the beginning of this thread that there\n> was no coherent, defensible rationale being offered for jumbling\n> some utility statements and not others.\n\nOnly a very small subset causes trouble in real life scenario, but I agree that\ncherry-picking some utility statements isn't a great approach.\n\n> I wonder if the answer is to jumble them all. We avoided that\n> up to now because it would imply a ton of manual effort and\n> future code maintenance ... but now that the backend/nodes/\n> infrastructure is largely auto-generated, could we auto-generate\n> the jumbling code?\n\nThat's a good idea. Naively, it seems doable as the infrastructure in\ngen_node_support.pl already supports everything that should be needed (like\nper-member annotation).\n\n\n",
"msg_date": "Fri, 7 Oct 2022 12:18:26 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
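{
"msg_contents": "The annotation-driven approach floated here can be sketched as a tiny generator (Python; the field list and the query_jumble_ignore attribute name are hypothetical illustrations of what gen_node_support.pl-style tooling might consume from pg_node_attr() annotations):\n\n```python\n# Hypothetical node description: (field name, C type, per-field attributes),\n# standing in for what gen_node_support.pl parses out of parsenodes.h.\nVARIABLE_SET_STMT = [\n    (\"kind\",     \"VariableSetKind\", set()),\n    (\"name\",     \"char *\",          set()),\n    (\"args\",     \"List *\",          set()),\n    (\"is_local\", \"bool\",            set()),\n    (\"location\", \"int\",             {\"query_jumble_ignore\"}),\n]\n\ndef emit_jumble(node_name, fields):\n    \"\"\"Emit C-like jumbling code for every field not marked as ignored.\"\"\"\n    lines = [f\"static void _jumble{node_name}(JumbleState *jstate, {node_name} *node)\", \"{\"]\n    for fname, ctype, attrs in fields:\n        if \"query_jumble_ignore\" in attrs:\n            continue  # e.g. token positions are silenced in the fingerprint\n        if ctype == \"char *\":\n            lines.append(f\"\\tAPP_JUMB_STRING(node->{fname});\")\n        elif ctype == \"List *\":\n            lines.append(f\"\\tJumbleExpr(jstate, (Node *) node->{fname});\")\n        else:\n            lines.append(f\"\\tAPP_JUMB(node->{fname});\")\n    lines.append(\"}\")\n    return \"\\n\".join(lines)\n```\n\nBecause the generator jumbles every field by default and only skips annotated ones, a newly added struct member (such as is_local) is picked up automatically instead of being silently missed."
},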
{
"msg_contents": "Hi,\n\nOn 10/7/22 6:13 AM, Michael Paquier wrote:\n> On Thu, Oct 06, 2022 at 11:51:52PM -0400, Tom Lane wrote:\n>> I've been thinking since the beginning of this thread that there\n>> was no coherent, defensible rationale being offered for jumbling\n>> some utility statements and not others.\n> \n> Yeah. The potential performance impact of all the TransactionStmts\n> worries me a bit, though.\n> \n>> I wonder if the answer is to jumble them all. We avoided that\n>> up to now because it would imply a ton of manual effort and\n>> future code maintenance ... but now that the backend/nodes/\n>> infrastructure is largely auto-generated, could we auto-generate\n>> the jumbling code?\n\nI think that's a good idea.\n\n> \n> Probably. One part that may be tricky though is the location of the\n> constants we'd like to make generic, but perhaps this could be handled\n> by using a dedicated variable type that just maps to int? \n\nIt looks to me that we'd also need to teach the perl parser which fields \nper statements struct need to be jumbled (or more probably which ones to \nexclude from the jumbling, for example funccall in CallStmt). Not sure \nyet how to teach the perl parser, but I'll build this list first to help \ndecide for a right approach, unless you already have some thoughts about it?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Oct 2022 15:04:57 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "Hi,\n\nOn Mon, Oct 10, 2022 at 03:04:57PM +0200, Drouvot, Bertrand wrote:\n>\n> On 10/7/22 6:13 AM, Michael Paquier wrote:\n> >\n> > Probably. One part that may be tricky though is the location of the\n> > constants we'd like to make generic, but perhaps this could be handled\n> > by using a dedicated variable type that just maps to int?\n>\n> It looks to me that we'd also need to teach the perl parser which fields per\n> statements struct need to be jumbled (or more probably which ones to exclude\n> from the jumbling, for example funccall in CallStmt). Not sure yet how to\n> teach the perl parser, but I'll build this list first to help decide for a\n> right approach, unless you already have some thoughts about it?\n\nUnless I'm missing something both can be handled using pg_node_attr()\nannotations that already exists?\n\n\n",
"msg_date": "Mon, 10 Oct 2022 21:16:47 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On Mon, Oct 10, 2022 at 09:16:47PM +0800, Julien Rouhaud wrote:\n> Unless I'm missing something both can be handled using pg_node_attr()\n> annotations that already exists?\n\nIndeed, that should work for the locators.\n--\nMichael",
"msg_date": "Tue, 11 Oct 2022 11:54:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "> I've been thinking since the beginning of this thread that there\r\n> was no coherent, defensible rationale being offered for jumbling\r\n> some utility statements and not others.\r\n\r\n+1 to the idea, as there are other utility statement cases\r\nthat should be Jumbled. Here is a recent thread for jumbling\r\ncursors [1].\r\n\r\nThe CF entry [2] has been withdrawn until consensus is reached\r\non this topic.\r\n\r\n[1]: https://www.postgresql.org/message-id/203CFCF7-176E-4AFC-A48E-B2CECFECD6AA@amazon.com\r\n[2]: https://commitfest.postgresql.org/40/3901/\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAWS (Amazon Web Services)\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Tue, 11 Oct 2022 14:18:54 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On Tue, Oct 11, 2022 at 02:18:54PM +0000, Imseih (AWS), Sami wrote:\n> +1 to the idea, as there are other utility statement cases\n> that should be Jumbled. Here is a recent thread for jumbling\n> cursors [1].\n\nThanks for mentioning that. With an automated way to generate this\ncode, cursors would be handled, at the advantage of making sure that\nno fields are missing in the jumbled structures (is_local was missed\nfor example on SET).\n\n> The CF entry [2] has been withdrawn until consensus is reached\n> on this topic.\n\nIt seems to me that the consensus is here, which is a good step\nforward ;)\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 09:13:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 09:13:20AM +0900, Michael Paquier wrote:\n> Thanks for mentioning that. With an automated way to generate this\n> code, cursors would be handled, at the advantage of making sure that\n> no fields are missing in the jumbled structures (is_local was missed\n> for example on SET).\n\nSo, this thread has stalled for the last few weeks and it seems to me\nthat the conclusion is that we'd want an approach using a set of\nscripts that automate the generation of the code in charge of the DDL\njumbling. And, depending on the portions of the queries that need to\nbe silenced, we may need to extend a few nodes with a location. We\nare not there yet, so I have marked the patch as returned with\nfeedback.\n--\nMichael",
"msg_date": "Wed, 30 Nov 2022 15:55:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
},
{
"msg_contents": "On Tue, Oct 11, 2022 at 11:54:51AM +0900, Michael Paquier wrote:\n> On Mon, Oct 10, 2022 at 09:16:47PM +0800, Julien Rouhaud wrote:\n>> Unless I'm missing something both can be handled using pg_node_attr()\n>> annotations that already exists?\n> \n> Indeed, that should work for the locators.\n\nMy mistake here.\n\nWhen it comes to the locations to silence in the normalization, we\nalready rely on the variable name \"location\" of type int in\ngen_node_support.pl, so we can just do the same for the code\ngenerating the jumbling.\n\nThis stuff still needs two more pg_node_attr() to be able to work: one\nto ignore a full Node in the jumbling and a second that can be used on\na per-field basis. Once this is in place, automating the generation\nof the code is not that complicated, most of the work is to get around\nplacing the pg_node_attr() so as the jumbling gets things right. The\nnumber of fields to mark as things to ignore depends on the Node type\n(heavy for Const, for example), but I'd like to think that a \"ignore\"\napproach is better than an \"include\" approach so as new fields would\nbe considered by default in the jumble compilation.\n\nI have not looked at all the edge cases, so perhaps more attrs would\nbe needed..\n--\nMichael",
"msg_date": "Wed, 7 Dec 2022 09:57:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Query Jumbling for CALL and SET utility statements"
}
]
[
{
"msg_contents": "Hi Hackers,\n\nAttached is a patch to \nAdd tracking of backend memory allocated to pg_stat_activity\n \nThis new field displays the current bytes of memory allocated to the\nbackend process. It is updated as memory for the process is\nmalloc'd/free'd. Memory allocated to items on the freelist is included\nin the displayed value. Dynamic shared memory allocations are included\nonly in the value displayed for the backend that created them, they are\nnot included in the value for backends that are attached to them to\navoid double counting. On occasion, orphaned memory segments may be\ncleaned up on postmaster startup. This may result in decreasing the sum\nwithout a prior increment. We limit the floor of backend_mem_allocated\nto zero.\n\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Wed, 31 Aug 2022 12:03:06 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 12:03:06PM -0400, Reid Thompson wrote:\n> Hi Hackers,\n> \n> Attached is a patch to \n> Add tracking of backend memory allocated to pg_stat_activity\n\n> + proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n\nIn the past, there was concern about making pg_stat_activity wider by\nadding information that's less-essential than what's been there for\nyears. This is only an int64, so it's not \"wide\", but I wonder if\nthere's another way to expose this information? Like adding backends to\npg_get_backend_memory_contexts() , maybe with another view on top of\nthat ?\n\n+ * shown allocated in pgstat_activity when the creator destroys the \n\npg_stat\n\n> +\t\t * Posix creation calls dsm_impl_posix_resize implying that resizing\n> +\t\t * occurs or may be added in the future. As implemented\n> +\t\t * dsm_impl_posix_resize utilizes fallocate or truncate, passing the\n> +\t\t * whole new size as input, growing the allocation as needed * (only\n> +\t\t * truncate supports shrinking). We update by replacing the * old\n\nwrapping caused extraneous stars\n\n> +\t * Do not allow backend_mem_allocated to go below zero. ereport if we\n> +\t * would have. There's no need for a lock around the read here asit's\n\nas it's\n\n> +\t\tereport(LOG, (errmsg(\"decrease reduces reported backend memory allocated below zero; setting reported to 0\")));\n\nerrmsg() doesn't require the outside paranthesis since a couple years\nago.\n\n> +\t/*\n> +\t * Until header allocation is included in context->mem_allocated cast to\n> +\t * slab and decrement the headerSize\n\nadd a comma before \"cast\" ?\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 31 Aug 2022 12:05:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "At Wed, 31 Aug 2022 12:05:55 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Wed, Aug 31, 2022 at 12:03:06PM -0400, Reid Thompson wrote:\n> > Hi Hackers,\n> > \n> > Attached is a patch to \n> > Add tracking of backend memory allocated to pg_stat_activity\n> \n> > + proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> \n> In the past, there was concern about making pg_stat_activity wider by\n> adding information that's less-essential than what's been there for\n> years. This is only an int64, so it's not \"wide\", but I wonder if\n> there's another way to expose this information? Like adding backends to\n\nThe view looks already too wide to me. I don't want the numbers for\nmetrics are added to the view.\n\n> pg_get_backend_memory_contexts() , maybe with another view on top of\n> that ?\n\n+1\n\n> + * shown allocated in pgstat_activity when the creator destroys the \n> \n> pg_stat\n> \n> > +\t\t * Posix creation calls dsm_impl_posix_resize implying that resizing\n> > +\t\t * occurs or may be added in the future. As implemented\n> > +\t\t * dsm_impl_posix_resize utilizes fallocate or truncate, passing the\n> > +\t\t * whole new size as input, growing the allocation as needed * (only\n> > +\t\t * truncate supports shrinking). We update by replacing the * old\n> \n> wrapping caused extraneous stars\n> \n> > +\t * Do not allow backend_mem_allocated to go below zero. ereport if we\n> > +\t * would have. There's no need for a lock around the read here asit's\n> \n> as it's\n> \n> > +\t\tereport(LOG, (errmsg(\"decrease reduces reported backend memory allocated below zero; setting reported to 0\")));\n> \n> errmsg() doesn't require the outside paranthesis since a couple years\n> ago.\n\n+1\n\n> > +\t/*\n> > +\t * Until header allocation is included in context->mem_allocated cast to\n> > +\t * slab and decrement the headerSize\n> \n> add a comma before \"cast\" ?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 01 Sep 2022 10:28:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "At Wed, 31 Aug 2022 12:03:06 -0400, Reid Thompson <reid.thompson@crunchydata.com> wrote in \n> Attached is a patch to \n> Add tracking of backend memory allocated to pg_stat_activity\n\n> @@ -916,6 +930,7 @@ AllocSetAlloc(MemoryContext context, Size size)\n> \t\t\treturn NULL;\n> \n> \t\tcontext->mem_allocated += blksize;\n> +\t\tpgstat_report_backend_mem_allocated_increase(blksize);\n\nI'm not sure this is acceptable. The function adds a branch even when\nthe feature is turned off, which I think may cause a certain extent of\nperformance degradation. A past threads [1], [2] and [3] might be\ninformative.\n\n[1] https://www.postgresql.org/message-id/1434311039.4369.39.camel%40jeff-desktop\n[2] https://www.postgresql.org/message-id/72a656e0f71d0860161e0b3f67e4d771%40oss.nttdata.com\n[3] https://www.postgresql.org/message-id/0271f440ac77f2a4180e0e56ebd944d1%40oss.nttdata.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 01 Sep 2022 13:43:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Hi,\n\nOn 9/1/22 3:28 AM, Kyotaro Horiguchi wrote:\n> At Wed, 31 Aug 2022 12:05:55 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n>> On Wed, Aug 31, 2022 at 12:03:06PM -0400, Reid Thompson wrote:\n>>> Hi Hackers,\n>>>\n>>> Attached is a patch to\n>>> Add tracking of backend memory allocated\n\nThanks for the patch.\n\n+ 1 on the idea.\n\n>>> + proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n>> In the past, there was concern about making pg_stat_activity wider by\n>> adding information that's less-essential than what's been there for\n>> years. This is only an int64, so it's not \"wide\", but I wonder if\n>> there's another way to expose this information? Like adding backends to\n> The view looks already too wide to me. I don't want the numbers for\n> metrics are added to the view.\n\n+1 for a dedicated view.\n\nWhile we are at it, what do you think about also recording the max \nmemory allocated by a backend? (could be useful and would avoid sampling \nfor which there is no guarantee to sample the max anyway).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 08:33:32 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Wed, 2022-08-31 at 12:05 -0500, Justin Pryzby wrote:\n> > + proargmodes =>\n> > '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}'\n> > ,\n> \n> In the past, there was concern about making pg_stat_activity wider by\n> adding information that's less-essential than what's been there for\n> years. This is only an int64, so it's not \"wide\", but I wonder if\n> there's another way to expose this information? Like adding backends\n> to\n> pg_get_backend_memory_contexts() , maybe with another view on top of\n\nI will take a look at pg_get_backend_memory_contexts. I will also look\nat the other suggestions in the thread.\n\n> + * shown allocated in pgstat_activity when the\n> \n> pg_stat\n\nCorrected,\n\n> > replacing the * old\n> \n> wrapping caused extraneous stars\n\nCorrected\n\n> > here asit's\n> \n> as it's\n\nCorrected\n\n> errmsg() doesn't require the outside paranthesis since a couple years\n> ago.\n\nCorrected\n\n> > > mem_allocated cast to\n> add a comma before \"cast\" ?\n\nCorrected\n\nPatch with the corrections attached\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Sat, 03 Sep 2022 23:34:20 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Thu, 2022-09-01 at 13:43 +0900, Kyotaro Horiguchi wrote:\n> \n> > @@ -916,6 +930,7 @@ AllocSetAlloc(MemoryContext context, Size size)\n> > \t\t\treturn NULL;\n> > \n> > \t\tcontext->mem_allocated += blksize;\n> > +\t\tpgstat_report_backend_mem_allocated_increase(blksi\n> > ze);\n> \n> I'm not sure this is acceptable. The function adds a branch even when\n> the feature is turned off, which I think may cause a certain extent\n> of\n> performance degradation. A past threads [1], [2] and [3] might be\n> informative.\n\n Stated above is '...even when the feature is turned off...', I want to\n note that this feature/patch (for tracking memory allocated) doesn't\n have an 'on/off'. Tracking would always occur.\n\n I'm open to guidance on testing for performance degradation. I did\n note some basic pgbench comparison numbers in the thread regarding\n limiting backend memory allocations. \n\n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 06 Sep 2022 17:10:49 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "At Tue, 06 Sep 2022 17:10:49 -0400, Reid Thompson <reid.thompson@crunchydata.com> wrote in \n> On Thu, 2022-09-01 at 13:43 +0900, Kyotaro Horiguchi wrote:\n> > \n> > > @@ -916,6 +930,7 @@ AllocSetAlloc(MemoryContext context, Size size)\n> > > \t\t\treturn NULL;\n> > > \n> > > \t\tcontext->mem_allocated += blksize;\n> > > +\t\tpgstat_report_backend_mem_allocated_increase(blksi\n> > > ze);\n> > \n> > I'm not sure this is acceptable. The function adds a branch even when\n> > the feature is turned off, which I think may cause a certain extent\n> > of\n> > performance degradation. A past threads [1], [2] and [3] might be\n> > informative.\n> \n> Stated above is '...even when the feature is turned off...', I want to\n> note that this feature/patch (for tracking memory allocated) doesn't\n> have an 'on/off'. Tracking would always occur.\n\nIn the patch, I see that\npgstat_report_backend_mem_allocated_increase() runs the following\ncode, which seems like to me to be a branch..\n\n+\tif (!beentry || !pgstat_track_activities)\n+\t{\n+\t\t/*\n+\t\t * Account for memory before pgstats is initialized. This will be\n+\t\t * migrated to pgstats on initialization.\n+\t\t */\n+\t\tbackend_mem_allocated += allocation;\n+\n+\t\treturn;\n+\t}\n\n\n> I'm open to guidance on testing for performance degradation. I did\n> note some basic pgbench comparison numbers in the thread regarding\n> limiting backend memory allocations. \n\nYeah.. That sounds good..\n\n(I have a patch that is stuck at benchmarking on slight possible\ndegradation caused by a branch (or indirect call) on a hot path\nsimilary to this one. The test showed fluctuation that is not clearly\ndistinguishable between noise and degradation by running the target\nfunctions in a busy loop..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Sep 2022 17:08:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "At Wed, 07 Sep 2022 17:08:41 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 06 Sep 2022 17:10:49 -0400, Reid Thompson <reid.thompson@crunchydata.com> wrote in \n> > On Thu, 2022-09-01 at 13:43 +0900, Kyotaro Horiguchi wrote:\n> > > \n> > > > @@ -916,6 +930,7 @@ AllocSetAlloc(MemoryContext context, Size size)\n> > > > \t\t\treturn NULL;\n> > > > \n> > > > \t\tcontext->mem_allocated += blksize;\n> > > > +\t\tpgstat_report_backend_mem_allocated_increase(blksi\n> > > > ze);\n> > > \n> > > I'm not sure this is acceptable. The function adds a branch even when\n> > > the feature is turned off, which I think may cause a certain extent\n> > > of\n> > > performance degradation. A past threads [1], [2] and [3] might be\n> > > informative.\n> > \n> > Stated above is '...even when the feature is turned off...', I want to\n> > note that this feature/patch (for tracking memory allocated) doesn't\n> > have an 'on/off'. Tracking would always occur.\n> \n> In the patch, I see that\n> pgstat_report_backend_mem_allocated_increase() runs the following\n> code, which seems like to me to be a branch..\n\nAh.. sorry. \n\n> pgstat_report_backend_mem_allocated_increase() runs the following\n- code, which seems like to me to be a branch..\n+ code, which seems like to me to be a branch that can turn of the feature..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Sep 2022 17:17:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Greetings,\n\n* Drouvot, Bertrand (bdrouvot@amazon.com) wrote:\n> On 9/1/22 3:28 AM, Kyotaro Horiguchi wrote:\n> >At Wed, 31 Aug 2022 12:05:55 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n> >>On Wed, Aug 31, 2022 at 12:03:06PM -0400, Reid Thompson wrote:\n> >>>Attached is a patch to\n> >>>Add tracking of backend memory allocated\n> \n> Thanks for the patch.\n> \n> + 1 on the idea.\n\nGlad folks are in support of the general idea.\n\n> >>>+ proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> >>In the past, there was concern about making pg_stat_activity wider by\n> >>adding information that's less-essential than what's been there for\n> >>years. This is only an int64, so it's not \"wide\", but I wonder if\n> >>there's another way to expose this information? Like adding backends to\n> >The view looks already too wide to me. I don't want the numbers for\n> >metrics are added to the view.\n> \n> +1 for a dedicated view.\n\nA dedicated view with a single column in it hardly seems sensible. I'd\nalso argue that this particular bit of information is extremely useful\nand therefore worthy of being put directly into pg_stat_activity. I\ncould see a dedicated view possibly *also* being added later if/when we\nprovide a more detailed break-down of how the memory is being used but\nthat's a whole other thing and I'm not even 100% sure we'll ever\nactually get there, as you can already poke a backend and have it dump\nout the memory context-level information on an as-needed basis.\n\n> While we are at it, what do you think about also recording the max memory\n> allocated by a backend? (could be useful and would avoid sampling for which\n> there is no guarantee to sample the max anyway).\n\nWhat would you do with that information..? By itself, it doesn't strike\nme as useful. Perhaps it'd be interesting to grab the max required for\na particular query in pg_stat_statements or such but again, that's a\nvery different thing.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 9 Sep 2022 12:34:15 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Greetings,\n\n* Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> At Tue, 06 Sep 2022 17:10:49 -0400, Reid Thompson <reid.thompson@crunchydata.com> wrote in \n> > I'm open to guidance on testing for performance degradation. I did\n> > note some basic pgbench comparison numbers in the thread regarding\n> > limiting backend memory allocations. \n> \n> Yeah.. That sounds good..\n> \n> (I have a patch that is stuck at benchmarking on slight possible\n> degradation caused by a branch (or indirect call) on a hot path\n> similary to this one. The test showed fluctuation that is not clearly\n> distinguishable between noise and degradation by running the target\n> functions in a busy loop..)\n\nJust to be clear- this path is (hopefully) not *super* hot as we're only\ntracking actual allocations (that is- malloc() calls), this isn't\nchanging anything for palloc() calls that aren't also needing to do a\nmalloc(), and we already try to reduce the amount of malloc() calls\nwe're doing by allocating more and more each time we run out in a given\ncontext.\n\nWhile I'm generally supportive of doing some benchmarking around this, I\ndon't think the bar is as high as it would be if we were actually\nchanging the cost of routine palloc() or such calls.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 9 Sep 2022 12:40:04 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 12:34:15PM -0400, Stephen Frost wrote:\n> > While we are at it, what do you think about also recording the max memory\n> > allocated by a backend? (could be useful and would avoid sampling for which\n> > there is no guarantee to sample the max anyway).\n\nFYI, that's already kind-of available from getrusage:\n\n$ psql ts -c \"SET log_executor_stats=on; SET client_min_messages=debug;\nSELECT a, COUNT(1) FROM generate_series(1,999999) a GROUP BY 1;\" |wc\nLOG: EXECUTOR STATISTICS\n...\n! 194568 kB max resident size\n\nNote that max rss counts things allocated outside postgres (like linked\nlibraries).\n\n> What would you do with that information..? By itself, it doesn't strike\n> me as useful. Perhaps it'd be interesting to grab the max required for\n> a particular query in pg_stat_statements or such but again, that's a\n> very different thing.\n\nlog_executor_stats is at level \"debug\", so it's not great to enable it\nfor a single session, and painful to think about enabling it globally.\nThis would be a lot friendlier.\n\nStoring the maxrss per backend somewhere would be useful (and avoid the\nissue of \"sampling\" with top), after I agree that it ought to be exposed\nto a view. For example, it might help to determine whether (and which!)\nbackends are using large multiple of work_mem, and then whether that can\nbe increased. If/when we had a \"memory budget allocator\", this would\nhelp to determine how to set its GUCs, maybe to see \"which backends are\nusing the work_mem that are precluding this other backend from using\nefficient query plan\".\n\nI wonder if it's better to combine these two threads into one. The 0001\npatch of course can be considered independently from the 0002 patch, as\nusual. Right now, there's different parties on both threads ...\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 9 Sep 2022 12:08:09 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Hi,\n\nOn 9/9/22 7:08 PM, Justin Pryzby wrote:\n> On Fri, Sep 09, 2022 at 12:34:15PM -0400, Stephen Frost wrote:\n>>> While we are at it, what do you think about also recording the max memory\n>>> allocated by a backend? (could be useful and would avoid sampling for which\n>>> there is no guarantee to sample the max anyway).\n>> What would you do with that information..? By itself, it doesn't strike\n>> me as useful. Perhaps it'd be interesting to grab the max required for\n>> a particular query in pg_stat_statements or such but again, that's a\n>> very different thing.\n>\n> Storing the maxrss per backend somewhere would be useful (and avoid the\n> issue of \"sampling\" with top), after I agree that it ought to be exposed\n> to a view. For example, it might help to determine whether (and which!)\n> backends are using large multiple of work_mem, and then whether that can\n> be increased. If/when we had a \"memory budget allocator\", this would\n> help to determine how to set its GUCs, maybe to see \"which backends are\n> using the work_mem that are precluding this other backend from using\n> efficient query plan\".\n\n+1.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 09:59:15 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Greetings,\n\n* Drouvot, Bertrand (bdrouvot@amazon.com) wrote:\n> On 9/9/22 7:08 PM, Justin Pryzby wrote:\n> >On Fri, Sep 09, 2022 at 12:34:15PM -0400, Stephen Frost wrote:\n> >>>While we are at it, what do you think about also recording the max memory\n> >>>allocated by a backend? (could be useful and would avoid sampling for which\n> >>>there is no guarantee to sample the max anyway).\n> >>What would you do with that information..? By itself, it doesn't strike\n> >>me as useful. Perhaps it'd be interesting to grab the max required for\n> >>a particular query in pg_stat_statements or such but again, that's a\n> >>very different thing.\n>\n> >Storing the maxrss per backend somewhere would be useful (and avoid the\n> >issue of \"sampling\" with top), after I agree that it ought to be exposed\n> >to a view. For example, it might help to determine whether (and which!)\n> >backends are using large multiple of work_mem, and then whether that can\n> >be increased. If/when we had a \"memory budget allocator\", this would\n> >help to determine how to set its GUCs, maybe to see \"which backends are\n> >using the work_mem that are precluding this other backend from using\n> >efficient query plan\".\n> \n> +1.\n\nI still have a hard time seeing the value in tracking which backends are\nusing the most memory over the course of a backend's entire lifetime,\nwhich would involve lots of different queries, some of which might use\nmany multiples of work_mem and others not. Much more interesting would\nbe to track this as part of pg_stat_statements and associated with\nqueries.\n\nEither way, this looks like an independent feature which someone who has\ninterest in could work on but generally doesn't impact what the feature\nof this thread is about; a feature which has already shown merit in\nfinding a recently introduced memory leak and is the basis of another\nfeature being contemplated to help avoid OOM-killer introduced crashes.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 12 Sep 2022 11:22:31 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "patch rebased to current master\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n\n",
"msg_date": "Tue, 25 Oct 2022 14:51:40 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Tue, 2022-10-25 at 14:51 -0400, Reid Thompson wrote:\n> patch rebased to current master\n> \nactually attach the patch\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Tue, 25 Oct 2022 14:59:17 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Hi,\n\nOn 10/25/22 8:59 PM, Reid Thompson wrote:\n> On Tue, 2022-10-25 at 14:51 -0400, Reid Thompson wrote:\n>> patch rebased to current master\n>>\n> actually attach the patch\n> \n\nIt looks like the patch does not apply anymore since b1099eca8f.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 4 Nov 2022 11:06:00 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Fri, 2022-11-04 at 11:06 +0100, Drouvot, Bertrand wrote:\n> Hi,\n> \n> It looks like the patch does not apply anymore since b1099eca8f.\n> \n> Regards,\n> \n\nThanks,\n\nrebased to current master attached.\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Fri, 04 Nov 2022 08:56:13 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-04 08:56:13 -0400, Reid Thompson wrote:\n> From a8de5d29c0c6f10962181926a49ad4fec1e52bd1 Mon Sep 17 00:00:00 2001\n> From: Reid Thompson <jreidthompson@nc.rr.com>\n> Date: Thu, 11 Aug 2022 12:01:25 -0400\n> Subject: [PATCH] Add tracking of backend memory allocated to pg_stat_activity\n> \n> This new field displays the current bytes of memory allocated to the\n> backend process. It is updated as memory for the process is\n> malloc'd/free'd. Memory allocated to items on the freelist is included in\n> the displayed value. Dynamic shared memory allocations are included\n> only in the value displayed for the backend that created them, they are\n> not included in the value for backends that are attached to them to\n> avoid double counting. On occasion, orphaned memory segments may be\n> cleaned up on postmaster startup. This may result in decreasing the sum\n> without a prior increment. We limit the floor of backend_mem_allocated\n> to zero. Updated pg_stat_activity documentation for the new column.\n\nI'm not convinced that counting DSM values this way is quite right. There are\na few uses of DSMs that track shared resources, with the biggest likely being\nthe stats for relations etc. I suspect that tracking that via backend memory\nusage will be quite confusing, because fairly random backends that had to grow\nthe shared state end up being blamed with the memory usage in perpituity - and\nthen suddenly that memory usage will vanish when that backend exits, despite\nthe memory continuing to exist.\n\n\n\n> @@ -734,6 +747,7 @@ AllocSetAlloc(MemoryContext context, Size size)\n> \t\t\treturn NULL;\n> \n> \t\tcontext->mem_allocated += blksize;\n> +\t\tpgstat_report_backend_allocated_bytes_increase(blksize);\n> \n> \t\tblock->aset = set;\n> \t\tblock->freeptr = block->endptr = ((char *) block) + blksize;\n> @@ -944,6 +958,7 @@ AllocSetAlloc(MemoryContext context, Size size)\n> \t\t\treturn NULL;\n> \n> \t\tcontext->mem_allocated += blksize;\n> +\t\tpgstat_report_backend_allocated_bytes_increase(blksize);\n> \n> \t\tblock->aset = set;\n> \t\tblock->freeptr = ((char *) block) + ALLOC_BLOCKHDRSZ;\n> @@ -1043,6 +1058,7 @@ AllocSetFree(void *pointer)\n> \t\t\tblock->next->prev = block->prev;\n> \n> \t\tset->header.mem_allocated -= block->endptr - ((char *) block);\n> +\t\tpgstat_report_backend_allocated_bytes_decrease(block->endptr - ((char *) block));\n> \n> #ifdef CLOBBER_FREED_MEMORY\n> \t\twipe_mem(block, block->freeptr - ((char *) block));\n\nI suspect this will be noticable cost-wise. Even though these paths aren't the\nhottest memory allocation paths, by nature of going down into malloc, adding\nan external function call that then does a bunch of branching etc. seems\nlikely to add up to some. See below for how I think we can deal with that...\n\n\n> +\n> +/* --------\n> + * pgstat_report_backend_allocated_bytes_increase() -\n> + *\n> + * Called to report increase in memory allocated for this backend\n> + * --------\n> + */\n> +void\n> +pgstat_report_backend_allocated_bytes_increase(uint64 allocation)\n> +{\n> +\tvolatile PgBackendStatus *beentry = MyBEEntry;\n> +\n> +\tif (!beentry || !pgstat_track_activities)\n> +\t{\n> +\t\t/*\n> +\t\t * Account for memory before pgstats is initialized. This will be\n> +\t\t * migrated to pgstats on initialization.\n> +\t\t */\n> +\t\tbackend_allocated_bytes += allocation;\n> +\n> +\t\treturn;\n> +\t}\n> +\n> +\t/*\n> +\t * Update my status entry, following the protocol of bumping\n> +\t * st_changecount before and after. We use a volatile pointer here to\n> +\t * ensure the compiler doesn't try to get cute.\n> +\t */\n> +\tPGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n> +\tbeentry->backend_allocated_bytes += allocation;\n> +\tPGSTAT_END_WRITE_ACTIVITY(beentry);\n> +}\n\nThis is quite a few branches, including write/read barriers.\n\nIt doesn't really make sense to use the PGSTAT_BEGIN_WRITE_ACTIVITY() pattern\nhere - you're just updating a single value, there's nothing to be gained by\nit. The point of PGSTAT_BEGIN_*ACTIVITY() stuff is to allow readers to get a\nconsistent view of multiple values - but there aren't multiple values here!\n\n\nTo avoid the overhead of checking (!beentry || !pgstat_track_activities) and\nthe external function call, I think you'd be best off copying the trickery I\nintroduced for pgstat_report_wait_start(), in 225a22b19ed.\n\nI.e. make pgstat_report_backend_allocated_bytes_increase() a static inline\nfunction that unconditionally updates something like\n*my_backend_allocated_memory. To deal with the case of (!beentry ||\n!pgstat_track_activities), that variable initially points to some backend\nlocal state and is set to the shared state in pgstat_bestart().\n\nThis additionally has the nice benefit that you can track memory usage from\nbefore pgstat_bestart(), it'll be in the local variable.\n\n\n> +void\n> +pgstat_report_backend_allocated_bytes_decrease(uint64 deallocation)\n> +{\n> +\tvolatile PgBackendStatus *beentry = MyBEEntry;\n> +\n> +\t/*\n> +\t * Cases may occur where shared memory from a previous postmaster\n> +\t * invocation still exist. These are cleaned up at startup by\n> +\t * dsm_cleanup_using_control_segment. Limit decreasing memory allocated to\n> +\t * zero in case no corresponding prior increase exists or decrease has\n> +\t * already been accounted for.\n> +\t */\n\nI don't really follow - postmaster won't ever have a backend status array, so\nhow would they be tracked here?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Nov 2022 19:41:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Fri, Nov 04, 2022 at 08:56:13AM -0400, Reid Thompson wrote:\n> From a8de5d29c0c6f10962181926a49ad4fec1e52bd1 Mon Sep 17 00:00:00 2001\n> From: Reid Thompson <jreidthompson@nc.rr.com>\n> Date: Thu, 11 Aug 2022 12:01:25 -0400\n> Subject: [PATCH] Add tracking of backend memory allocated to pg_stat_activity\n> \n> This new field displays the current bytes of memory allocated to the\n> backend process. It is updated as memory for the process is\n> malloc'd/free'd. Memory allocated to items on the freelist is included in\n> the displayed value. Dynamic shared memory allocations are included\n> only in the value displayed for the backend that created them, they are\n> not included in the value for backends that are attached to them to\n> avoid double counting. On occasion, orphaned memory segments may be\n> cleaned up on postmaster startup. This may result in decreasing the sum\n> without a prior increment. We limit the floor of backend_mem_allocated\n> to zero. Updated pg_stat_activity documentation for the new column.\n> ---\n> doc/src/sgml/monitoring.sgml | 12 +++\n> src/backend/catalog/system_views.sql | 1 +\n> src/backend/storage/ipc/dsm_impl.c | 81 +++++++++++++++\n> src/backend/utils/activity/backend_status.c | 105 ++++++++++++++++++++\n> src/backend/utils/adt/pgstatfuncs.c | 4 +-\n> src/backend/utils/mmgr/aset.c | 18 ++++\n> src/backend/utils/mmgr/generation.c | 15 +++\n> src/backend/utils/mmgr/slab.c | 21 ++++\n> src/include/catalog/pg_proc.dat | 6 +-\n> src/include/utils/backend_status.h | 7 +-\n> src/test/regress/expected/rules.out | 9 +-\n> 11 files changed, 270 insertions(+), 9 deletions(-)\n> \n> diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> index e5d622d514..972805b85a 100644\n> --- a/doc/src/sgml/monitoring.sgml\n> +++ b/doc/src/sgml/monitoring.sgml\n> @@ -947,6 +947,18 @@ postgres 27093 0.0 0.0 30096 2752 ? 
Ss 11:34 0:00 postgres: ser\n> </para></entry>\n> </row>\n> \n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>backend_allocated_bytes</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + The byte count of memory allocated to this backend. Dynamic shared memory\n> + allocations are included only in the value displayed for the backend that\n> + created them, they are not included in the value for backends that are\n> + attached to them to avoid double counting.\n> + </para></entry>\n> + </row>\n> +\n\nIt doesn't seem like you need the backend_ prefix in the view since that\nis implied by it being in pg_stat_activity.\n\nFor the wording on the description, I find \"memory allocated to this\nbackend\" a bit confusing. Perhaps you could reword it to make clear you\nmean that the number represents the balance of allocations by this\nbackend. Something like:\n\n\tMemory currently allocated to this backend in bytes. This is the\n\tbalance of bytes allocated and freed by this backend.\n\nI would also link to the system administration function pg_size_pretty()\nso users know how to easily convert the value.\n\nIf you end up removing shared memory as Andres suggests in [1], I would link\npg_shmem_allocations view here and point out that shared memory allocations can\nbe viewed there instead (and why).\n\nYou could instead add dynamic shared memory allocation to the\npg_shmem_allocations view as suggested as follow-on work by the commit which\nintroduced it, ed10f32e3.\n\n> +/* --------\n> + * pgstat_report_backend_allocated_bytes_increase() -\n> + *\n> + * Called to report increase in memory allocated for this backend\n> + * --------\n> + */\n\nIt seems like you could combine the\npgstat_report_backend_allocated_bytes_decrease/increase() by either using a\nsigned integer to represent the allocation/deallocation or passing in a\n\"direction\" that is just a positive or negative multiplier enum.\n\nEspecially if 
you don't use the write barriers, I think you could\nsimplify the logic in the two functions.\n\nIf you do combine them, you might shorten the name to\npgstat_report_backend_allocation() or pgstat_report_allocation().\n\n> +void\n> +pgstat_report_backend_allocated_bytes_increase(uint64 allocation)\n> +{\n> +\tvolatile PgBackendStatus *beentry = MyBEEntry;\n> +\n> +\tif (!beentry || !pgstat_track_activities)\n> +\t{\n> +\t\t/*\n> +\t\t * Account for memory before pgstats is initialized. This will be\n> +\t\t * migrated to pgstats on initialization.\n> +\t\t */\n> +\t\tbackend_allocated_bytes += allocation;\n> +\n> +\t\treturn;\n> +\t}\n> +\n> +\t/*\n> +\t * Update my status entry, following the protocol of bumping\n> +\t * st_changecount before and after. We use a volatile pointer here to\n> +\t * ensure the compiler doesn't try to get cute.\n> +\t */\n> +\tPGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n> +\tbeentry->backend_allocated_bytes += allocation;\n> +\tPGSTAT_END_WRITE_ACTIVITY(beentry);\n> +}\n> +\n> +/* --------\n> + * pgstat_report_backend_allocated_bytes_decrease() -\n> + *\n> + * Called to report decrease in memory allocated for this backend\n> + * --------\n> + */\n> +void\n> +pgstat_report_backend_allocated_bytes_decrease(uint64 deallocation)\n> +{\n> +\tvolatile PgBackendStatus *beentry = MyBEEntry;\n> +\n> +\t/*\n> +\t * Cases may occur where shared memory from a previous postmaster\n> +\t * invocation still exist. These are cleaned up at startup by\n> +\t * dsm_cleanup_using_control_segment. Limit decreasing memory allocated to\n> +\t * zero in case no corresponding prior increase exists or decrease has\n> +\t * already been accounted for.\n> +\t */\n> +\n> +\tif (!beentry || !pgstat_track_activities)\n> +\t{\n> +\t\t/*\n> +\t\t * Account for memory before pgstats is initialized. This will be\n> +\t\t * migrated to pgstats on initialization. Do not allow\n> +\t\t * backend_allocated_bytes to go below zero. 
If pgstats has not been\n> +\t\t * initialized, we are in startup and we set backend_allocated_bytes\n> +\t\t * to zero in cases where it would go negative and skip generating an\n> +\t\t * ereport.\n> +\t\t */\n> +\t\tif (deallocation > backend_allocated_bytes)\n> +\t\t\tbackend_allocated_bytes = 0;\n> +\t\telse\n> +\t\t\tbackend_allocated_bytes -= deallocation;\n> +\n> +\t\treturn;\n> +\t}\n> +\n> +\t/*\n> +\t * Do not allow backend_allocated_bytes to go below zero. ereport if we\n> +\t * would have. There's no need for a lock around the read here as it's\n> +\t * being referenced from the same backend which means that there shouldn't\n> +\t * be concurrent writes. We want to generate an ereport in these cases.\n> +\t */\n> +\tif (deallocation > beentry->backend_allocated_bytes)\n> +\t{\n> +\t\tereport(LOG, errmsg(\"decrease reduces reported backend memory allocated below zero; setting reported to 0\"));\n> +\n\nI also think it would be nice to include the deallocation amount and\nbackend_allocated_bytes amount in the log message.\nIt also might be nice to start the message with something more clear\nthan \"decrease\".\nFor example, I would find this clear as a user:\n\n\tBackend [backend_type or pid] deallocated [deallocation number] bytes,\n\t[backend_allocated_bytes - deallocation number] more than this backend\n\thas reported allocating.\n\n> diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c\n> index 96bffc0f2a..b6d135ad2f 100644\n> --- a/src/backend/utils/adt/pgstatfuncs.c\n> +++ b/src/backend/utils/adt/pgstatfuncs.c\n> @@ -553,7 +553,7 @@ pg_stat_get_progress_info(PG_FUNCTION_ARGS)\n> Datum\n> pg_stat_get_activity(PG_FUNCTION_ARGS)\n> {\n> -#define PG_STAT_GET_ACTIVITY_COLS\t30\n> +#define PG_STAT_GET_ACTIVITY_COLS\t31\n> \tint\t\t\tnum_backends = pgstat_fetch_stat_numbackends();\n> \tint\t\t\tcurr_backend;\n> \tint\t\t\tpid = PG_ARGISNULL(0) ? 
-1 : PG_GETARG_INT32(0);\n> @@ -609,6 +609,8 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n> \t\telse\n> \t\t\tnulls[16] = true;\n> \n> +\t\tvalues[30] = UInt64GetDatum(beentry->backend_allocated_bytes);\n\nThough not the fault of this patch, it is becoming very difficult to\nkeep the columns straight in pg_stat_get_activity(). Perhaps you could\nadd a separate commit to add an enum for the columns so the function is\neasier to understand.\n\n> diff --git a/src/include/utils/backend_status.h b/src/include/utils/backend_status.h\n> index b582b46e9f..75d87e8308 100644\n> --- a/src/include/utils/backend_status.h\n> +++ b/src/include/utils/backend_status.h\n> @@ -169,6 +169,9 @@ typedef struct PgBackendStatus\n> \n> \t/* query identifier, optionally computed using post_parse_analyze_hook */\n> \tuint64\t\tst_query_id;\n> +\n> +\t/* Current memory allocated to this backend */\n> +\tuint64\t\tbackend_allocated_bytes;\n> } PgBackendStatus;\n\nI don't think you need the backend_ prefix here since it is in\nPgBackendStatus.\n\n> @@ -313,7 +316,9 @@ extern const char *pgstat_get_backend_current_activity(int pid, bool checkUser);\n> extern const char *pgstat_get_crashed_backend_activity(int pid, char *buffer,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t int buflen);\n> extern uint64 pgstat_get_my_query_id(void);\n> -\n> +extern void pgstat_report_backend_allocated_bytes_increase(uint64 allocation);\n> +extern void pgstat_report_backend_allocated_bytes_decrease(uint64 deallocation);\n> +extern uint64 pgstat_get_all_backend_memory_allocated(void);\n> \n> /* ----------\n> * Support functions for the SQL-callable functions to\n> diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out\n> index 624d0e5aae..ba9f494806 100644\n> --- a/src/test/regress/expected/rules.out\n> +++ b/src/test/regress/expected/rules.out\n> @@ -1753,10 +1753,11 @@ pg_stat_activity| SELECT s.datid,\n> s.state,\n> s.backend_xid,\n> s.backend_xmin,\n> + s.backend_allocated_bytes,\n> s.query_id,\n> 
s.query,\n> s.backend_type\n\nSeems like it would be possible to add a functional test to stats.sql.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/20221105024146.xxlbtsxh2niyz2fu%40awork3.anarazel.de\n\n\n",
"msg_date": "Mon, 7 Nov 2022 16:17:47 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Hi Andres,\nThanks for looking at this and for the feedback. Responses inline below.\n\nOn Fri, 2022-11-04 at 19:41 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-11-04 08:56:13 -0400, Reid Thompson wrote:\n> \n> I'm not convinced that counting DSM values this way is quite right.\n> There are a few uses of DSMs that track shared resources, with the biggest\n> likely being the stats for relations etc. I suspect that tracking that via\n> backend memory usage will be quite confusing, because fairly random backends that\n> had to grow the shared state end up being blamed with the memory usage in\n> perpituity - and then suddenly that memory usage will vanish when that backend exits,\n> despite the memory continuing to exist.\n\nOk, I'll make an attempt to identify these allocations and manage them\nelsewhere.\n\n> \n> \n> > @@ -734,6 +747,7 @@ AllocSetAlloc(MemoryContext context, Size size)\n> > return NULL;\n> > \n> > context->mem_allocated += blksize;\n> > + pgstat_report_backend_allocated_bytes_increase(blksize);\n> > \n> > block->aset = set;\n> > block->freeptr = block->endptr = ((char *) block) + blksize;\n> > @@ -944,6 +958,7 @@ AllocSetAlloc(MemoryContext context, Size size)\n> > return NULL;\n> > \n> > context->mem_allocated += blksize;\n> > + pgstat_report_backend_allocated_bytes_increase(blksize);\n> > \n> > block->aset = set;\n> > block->freeptr = ((char *) block) + ALLOC_BLOCKHDRSZ;\n> > @@ -1043,6 +1058,7 @@ AllocSetFree(void *pointer)\n> > block->next->prev = block->prev;\n> > \n> > set->header.mem_allocated -= block->endptr - ((char *) block);\n> > + pgstat_report_backend_allocated_bytes_decrease(block->endptr - ((char *) block));\n> > \n> > #ifdef CLOBBER_FREED_MEMORY\n> > wipe_mem(block, block->freeptr - ((char *) block));\n> \n> I suspect this will be noticable cost-wise. 
Even though these paths aren't the\n> hottest memory allocation paths, by nature of going down into malloc, adding\n> an external function call that then does a bunch of branching etc. seems\n> likely to add up to some. See below for how I think we can deal with\n> that...\n> \n> \n> > +\n> > +/* --------\n> > + * pgstat_report_backend_allocated_bytes_increase() -\n> > + *\n> > + * Called to report increase in memory allocated for this backend\n> > + * --------\n> > + */\n> > +void\n> > +pgstat_report_backend_allocated_bytes_increase(uint64 allocation)\n> > +{\n> > + volatile PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + if (!beentry || !pgstat_track_activities)\n> > + {\n> > + /*\n> > + * Account for memory before pgstats is initialized. This will be\n> > + * migrated to pgstats on initialization.\n> > + */\n> > + backend_allocated_bytes += allocation;\n> > +\n> > + return;\n> > + }\n> > +\n> > + /*\n> > + * Update my status entry, following the protocol of bumping\n> > + * st_changecount before and after. We use a volatile pointer here to\n> > + * ensure the compiler doesn't try to get cute.\n> > + */\n> > + PGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n> > + beentry->backend_allocated_bytes += allocation;\n> > + PGSTAT_END_WRITE_ACTIVITY(beentry);\n> > +}\n> \n> This is quite a few branches, including write/read barriers.\n> \n> It doesn't really make sense to use the PGSTAT_BEGIN_WRITE_ACTIVITY() pattern\n> here - you're just updating a single value, there's nothing to be gained by\n> it. 
The point of PGSTAT_BEGIN_*ACTIVITY() stuff is to allow readers to get a\n> consistent view of multiple values - but there aren't multiple values\n> here!\n\nI'll remove the barriers - initially I copied how prior functions were\ncoded as my template ala\npgstat_report_query_id, pgstat_report_xact_timestamp.\n\n> \n> To avoid the overhead of checking (!beentry || !pgstat_track_activities) and\n> the external function call, I think you'd be best off copying the trickery I\n> introduced for pgstat_report_wait_start(), in 225a22b19ed.\n> \n> I.e. make pgstat_report_backend_allocated_bytes_increase() a static inline\n> function that unconditionally updates something like\n> *my_backend_allocated_memory. To deal with the case of (!beentry ||\n> !pgstat_track_activities), that variable initially points to some backend\n> local state and is set to the shared state in pgstat_bestart().\n> \n> This additionally has the nice benefit that you can track memory usage from\n> before pgstat_bestart(), it'll be in the local variable.\n\nOK, I think I can mimic the code you reference.\n\n> \n> > +void\n> > +pgstat_report_backend_allocated_bytes_decrease(uint64\n> > deallocation)\n> > +{\n> > + volatile PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + /*\n> > + * Cases may occur where shared memory from a previous postmaster\n> > + * invocation still exist. These are cleaned up at startup by\n> > + * dsm_cleanup_using_control_segment. Limit decreasing memory allocated to\n> > + * zero in case no corresponding prior increase exists or decrease has\n> > + * already been accounted for.\n> > + */\n> \n> I don't really follow - postmaster won't ever have a backend status\n> array, so how would they be tracked here?\n\nOn startup, a check is made for leftover dsm control segments in the\nDataDir. It appears possible that in certain situations on startup we\nmay find and destroy stale segments and thus decrement the allocation\nvariable. 
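The zero-floor clamp being discussed reduces to something like this simplified standalone sketch (the counter and function names are invented for illustration; the real patch operates on the beentry field and reports via ereport()):

```c
#include <stdint.h>
#include <stdio.h>
#include <assert.h>

static uint64_t allocated_bytes = 0;

/*
 * Decrease the counter, clamping at zero.  A stale-segment cleanup at
 * startup can produce a decrease with no matching prior increase, which
 * would otherwise wrap an unsigned counter around to a huge value.
 */
static void
report_deallocation(uint64_t dealloc)
{
	if (dealloc > allocated_bytes)
	{
		fprintf(stderr,
				"deallocation of %llu exceeds reported allocation of %llu; clamping to 0\n",
				(unsigned long long) dealloc,
				(unsigned long long) allocated_bytes);
		allocated_bytes = 0;
	}
	else
		allocated_bytes -= dealloc;
}
```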
\n\nI based this off of:\n/ipc/dsm.c\n\ndsm_postmaster_startup:\n 150 dsm_postmaster_startup(PGShmemHeader *shim)\n {\n...snip...\n 158 /* \n 159  * If we're using the mmap implementations, clean up any leftovers. \n 160  * Cleanup isn't needed on Windows, and happens earlier in startup for \n 161  * POSIX and System V shared memory, via a direct call to \n 162  * dsm_cleanup_using_control_segment. \n 163  */\n...snip... }\n\n\ndsm_cleanup_using_control_segment:\n 206 /* \n 207 * Determine whether the control segment from the previous postmaster \n 208 * invocation still exists. If so, remove the dynamic shared memory \n 209 * segments to which it refers, and then the control segment itself. \n 210 */ \n 211 void \n 212 dsm_cleanup_using_control_segment(dsm_handle old_control_handle) \n 213 {\n ...snip...\n 270 /* Destroy the referenced segment. */ \n 271 dsm_impl_op(DSM_OP_DESTROY, handle, 0, &junk_impl_private, \n 272 &junk_mapped_address, &junk_mapped_size, LOG); \n 273 } \n 274 \n 275 /* Destroy the old control segment, too. */ \n 276 elog(DEBUG2, \n 277  \"cleaning up dynamic shared memory control segment with ID %u\", \n 278  old_control_handle); \n 279 dsm_impl_op(DSM_OP_DESTROY, old_control_handle, 0, &impl_private, \n 280 &mapped_address, &mapped_size, LOG);\n 281 }\n\n> \n> Greetings,\n> \n> Andres Freund\n\nThanks again,\nReid\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n\n",
"msg_date": "Wed, 09 Nov 2022 08:54:54 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Hi Melanie,\nThank you for looking at this and for the feedback. Responses inline\nbelow.\n\nOn Mon, 2022-11-07 at 16:17 -0500, Melanie Plageman wrote:\n> On Fri, Nov 04, 2022 at 08:56:13AM -0400, Reid Thompson wrote:\n> > From a8de5d29c0c6f10962181926a49ad4fec1e52bd1 Mon Sep 17 00:00:00\n> > 2001\n> > From: Reid Thompson <jreidthompson@nc.rr.com>\n> > Date: Thu, 11 Aug 2022 12:01:25 -0400\n> > Subject: [PATCH] Add tracking of backend memory allocated to pg_stat_activity\n> > +\n> \n> It doesn't seem like you need the backend_ prefix in the view since\n> that is implied by it being in pg_stat_activity.\n\nI will remove the prefix.\n\n> For the wording on the description, I find \"memory allocated to this\n> backend\" a bit confusing. Perhaps you could reword it to make clear\n> you mean that the number represents the balance of allocations by this\n> backend. Something like:\n> \n> Memory currently allocated to this backend in bytes. This is the\n> balance of bytes allocated and freed by this backend.\n> I would also link to the system administration function\n> pg_size_pretty() so users know how to easily convert the value.\n\nThanks, I'll make these changes\n\n> If you end up removing shared memory as Andres suggests in [1], I\n> would link pg_shmem_allocations view here and point out that shared memory\n> allocations can be viewed there instead (and why).\n> \n> You could instead add dynamic shared memory allocation to the\n> pg_shmem_allocations view as suggested as follow-on work by the\n> commit which introduced it, ed10f32e3.\n> \n> > +/* --------\n> > + * pgstat_report_backend_allocated_bytes_increase() -\n> > + *\n> > + * Called to report increase in memory allocated for this backend\n> > + * --------\n> > + */\n> \n> It seems like you could combine the\n> pgstat_report_backend_allocated_bytes_decrease/increase() by either\n> using a signed integer to represent the allocation/deallocation or passing in\n> a \"direction\" that is just a 
positive or negative multiplier enum.\n> \n> Especially if you don't use the write barriers, I think you could\n> simplify the logic in the two functions.\n> \n> If you do combine them, you might shorten the name to\n> pgstat_report_backend_allocation() or pgstat_report_allocation().\n\nAgreed. This seems a cleaner, simpler way to go. I'll add it to the\nTODO list.\n\n> > + /*\n> > + * Do not allow backend_allocated_bytes to go below zero.\n> > ereport if we\n> > + * would have. There's no need for a lock around the read\n> > here as it's\n> > + * being referenced from the same backend which means that\n> > there shouldn't\n> > + * be concurrent writes. We want to generate an ereport in\n> > these cases.\n> > + */\n> > + if (deallocation > beentry->backend_allocated_bytes)\n> > + {\n> > + ereport(LOG, errmsg(\"decrease reduces reported\n> > backend memory allocated below zero; setting reported to 0\"));\n> > +\n> \n> I also think it would be nice to include the deallocation amount and\n> backend_allocated_bytes amount in the log message.\n> It also might be nice to start the message with something more clear\n> than \"decrease\".\n> For example, I would find this clear as a user:\n> \n> Backend [backend_type or pid] deallocated [deallocation number] bytes,\n> [backend_allocated_bytes - deallocation number] more than this backend\n> has reported allocating.\n\nSounds good, I'll implement these changes\n\n> > diff --git a/src/backend/utils/adt/pgstatfuncs.c\n> > b/src/backend/utils/adt/pgstatfuncs.c\n> > index 96bffc0f2a..b6d135ad2f 100644\n> > --- a/src/backend/utils/adt/pgstatfuncs.c\n> > +++ b/src/backend/utils/adt/pgstatfuncs.c\n> > @@ -553,7 +553,7 @@ pg_stat_get_progress_info(PG_FUNCTION_ARGS)\n> > Datum\n> > pg_stat_get_activity(PG_FUNCTION_ARGS)\n> > {\n> > -#define PG_STAT_GET_ACTIVITY_COLS 30\n> > +#define PG_STAT_GET_ACTIVITY_COLS 31\n> > int num_backends =\n> > \n> > + values[30] = UInt64GetDatum(beentry->backend_allocated_bytes);\n> \n> Though not the 
fault of this patch, it is becoming very difficult to\n> keep the columns straight in pg_stat_get_activity(). Perhaps you\n> could add a separate commit to add an enum for the columns so the function\n> is easier to understand.\n> \n> > diff --git a/src/include/utils/backend_status.h\n> > b/src/include/utils/backend_status.h\n> > index b582b46e9f..75d87e8308 100644\n> > --- a/src/include/utils/backend_status.h\n> > +++ b/src/include/utils/backend_status.h\n> > @@ -169,6 +169,9 @@ typedef struct PgBackendStatus\n> > \n> > /* query identifier, optionally computed using\n> > post_parse_analyze_hook */\n> > uint64 st_query_id;\n> > +\n> > + /* Current memory allocated to this backend */\n> > + uint64 backend_allocated_bytes;\n> > } PgBackendStatus;\n> \n> I don't think you need the backend_ prefix here since it is in\n> PgBackendStatus.\n\nAgreed again, I'll remove the prefix.\n\n> > @@ -313,7 +316,9 @@ extern const char\n> > *pgstat_get_backend_current_activity(int pid, bool checkUser);\n> > extern const char *pgstat_get_crashed_backend_activity(int pid,\n> > char *buffer,\n> > \n> > int buflen);\n> > extern uint64 pgstat_get_my_query_id(void);\n> > -\n> > +extern void pgstat_report_backend_allocated_bytes_increase(uint64\n> > allocation);\n> > +extern void pgstat_report_backend_allocated_bytes_decrease(uint64\n> > deallocation);\n> > +extern uint64 pgstat_get_all_backend_memory_allocated(void);\n> > \n> > /* ----------\n> > * Support functions for the SQL-callable functions to\n> > diff --git a/src/test/regress/expected/rules.out\n> > b/src/test/regress/expected/rules.out\n> > index 624d0e5aae..ba9f494806 100644\n> > --- a/src/test/regress/expected/rules.out\n> > +++ b/src/test/regress/expected/rules.out\n> > @@ -1753,10 +1753,11 @@ pg_stat_activity| SELECT s.datid,\n> > s.state,\n> > s.backend_xid,\n> > s.backend_xmin,\n> > + s.backend_allocated_bytes,\n> > s.query_id,\n> > s.query,\n> > s.backend_type\n> \n> Seems like it would be possible to add a functional test 
to\n> stats.sql.\n\nI will look at adding this.\n\n\n> - Melanie\n> \n> [1]\n> https://www.postgresql.org/message-id/20221105024146.xxlbtsxh2niyz2fu%40awork3.anarazel.de\n\nThanks again,\nReid\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n\n",
"msg_date": "Wed, 09 Nov 2022 09:23:25 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Hi,\n\n2022-11-09 08:54:54 -0500, Reid Thompson wrote:\n> Thanks for looking at this and for the feedback. Responses inline below.\n> > > +void\n> > > +pgstat_report_backend_allocated_bytes_decrease(uint64\n> > > deallocation)\n> > > +{\n> > > +\tvolatile PgBackendStatus *beentry = MyBEEntry;\n> > > +\n> > > +\t/*\n> > > +\t * Cases may occur where shared memory from a previous postmaster\n> > > +\t * invocation still exist. These are cleaned up at startup by\n> > > +\t * dsm_cleanup_using_control_segment. Limit decreasing memory allocated to\n> > > +\t * zero in case no corresponding prior increase exists or decrease has\n> > > +\t * already been accounted for.\n> > > +\t */\n> > \n> > I don't really follow - postmaster won't ever have a backend status\n> > array, so how would they be tracked here?\n> \n> On startup, a check is made for leftover dsm control segments in the\n> DataDir. It appears possible that in certain situations on startup we\n> may find and destroy stale segments and thus decrement the allocation\n> variable. \n> \n> I based this off of:\n> /ipc/dsm.c\n> \n> dsm_postmaster_startup:\n> 150 dsm_postmaster_startup(PGShmemHeader *shim)\n> {\n> ...\n> 281 }\n\nI don't think we should account for memory allocations done in postmaster in\nthis patch. They'll otherwise be counted again in each of the forked\nbackends. As this cleanup happens during postmaster startup, we'll have to\nmake sure accounting is reset during backend startup.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Nov 2022 08:13:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Code rebased to current master.\nUpdated to incorporate additional recommendations from the the list\n - add units to variables in view\n - remove 'backend_' prefix from variables/functions\n - update documentation\n - add functional test for allocated_bytes\n - refactor allocation reporting to reduce number of functions and\n branches/reduce performance hit\n - zero allocated bytes after fork to avoid double counting postmaster allocations\n\n\n\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Sat, 26 Nov 2022 22:10:06 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Wed, 2022-11-09 at 09:23 -0500, Reid Thompson wrote:\n> Hi Melanie,\n> Thank you for looking at this and for the feedback. Responses inline\n> below.\n> \n> On Mon, 2022-11-07 at 16:17 -0500, Melanie Plageman wrote:\n> > \n> > It doesn't seem like you need the backend_ prefix in the view since\n> > that is implied by it being in pg_stat_activity.\n> \n> I will remove the prefix.\n\ndone\n\n> \n> > For the wording on the description, I find \"memory allocated to\n> > this\n> > backend\" a bit confusing. Perhaps you could reword it to make clear\n> > you mean that the number represents the balance of allocations by\n> > this\n> > backend. Something like:\n> > \n> > Memory currently allocated to this backend in bytes. This\n> > is the\n> > balance of bytes allocated and freed by this backend.\n> > I would also link to the system administration function\n> > pg_size_pretty() so users know how to easily convert the value.\n> \n> Thanks, I'll make these changes\n\ndone\n\n> > > +/* --------\n> > > + * pgstat_report_backend_allocated_bytes_increase() -\n> > > + *\n> > > + * Called to report increase in memory allocated for this\n> > > backend\n> > > + * --------\n> > > + */\n> > \n> > It seems like you could combine the\n> > pgstat_report_backend_allocated_bytes_decrease/increase() by either\n> > using a signed integer to represent the allocation/deallocation or\n> > passing in\n> > a \"direction\" that is just a positive or negative multiplier enum.\n> > \n> > Especially if you don't use the write barriers, I think you could\n> > simplify the logic in the two functions.\n> > \n> > If you do combine them, you might shorten the name to\n> > pgstat_report_backend_allocation() or pgstat_report_allocation().\n> \n> Agreed. This seems a cleaner, simpler way to go. I'll add it to the\n> TODO list.\n\ndone\n\n> \n> > > + /*\n> > > + * Do not allow backend_allocated_bytes to go below zero.\n> > > ereport if we\n> > > + * would have. 
There's no need for a lock around the read\n> > > here as it's\n> > > + * being referenced from the same backend which means\n> > > that\n> > > there shouldn't\n> > > + * be concurrent writes. We want to generate an ereport\n> > > in\n> > > these cases.\n> > > + */\n> > > + if (deallocation > beentry->backend_allocated_bytes)\n> > > + {\n> > > + ereport(LOG, errmsg(\"decrease reduces reported\n> > > backend memory allocated below zero; setting reported to 0\"));\n> > > +\n> > \n> > I also think it would be nice to include the deallocation amount\n> > and\n> > backend_allocated_bytes amount in the log message.\n> > It also might be nice to start the message with something more\n> > clear\n> > than \"decrease\".\n> > For example, I would find this clear as a user:\n> > \n> > Backend [backend_type or pid] deallocated [deallocation\n> > number] bytes,\n> > [backend_allocated_bytes - deallocation number] more than\n> > this backend\n> > has reported allocating.\n> \n> Sounds good, I'll implement these changes\n\ndone\n\n> > > diff --git a/src/include/utils/backend_status.h\n> > > b/src/include/utils/backend_status.h\n> > > index b582b46e9f..75d87e8308 100644\n> > > --- a/src/include/utils/backend_status.h\n> > > +++ b/src/include/utils/backend_status.h\n> > > @@ -169,6 +169,9 @@ typedef struct PgBackendStatus\n> > > \n> > > /* query identifier, optionally computed using\n> > > post_parse_analyze_hook */\n> > > uint64 st_query_id;\n> > > +\n> > > + /* Current memory allocated to this backend */\n> > > + uint64 backend_allocated_bytes;\n> > > } PgBackendStatus;\n> > \n> > I don't think you need the backend_ prefix here since it is in\n> > PgBackendStatus.\n> \n> Agreed again, I'll remove the prefix.\n\ndone\n\n> > > /* ----------\n> > > * Support functions for the SQL-callable functions to\n> > > diff --git a/src/test/regress/expected/rules.out\n> > > b/src/test/regress/expected/rules.out\n> > > index 624d0e5aae..ba9f494806 100644\n> > > --- 
a/src/test/regress/expected/rules.out\n> > > +++ b/src/test/regress/expected/rules.out\n> > > @@ -1753,10 +1753,11 @@ pg_stat_activity| SELECT s.datid,\n> > > s.state,\n> > > s.backend_xid,\n> > > s.backend_xmin,\n> > > + s.backend_allocated_bytes,\n> > > s.query_id,\n> > > s.query,\n> > > s.backend_type\n> > \n> > Seems like it would be possible to add a functional test to\n> > stats.sql.\n> \n> I will look at adding this.\n\ndone\n\npatch attached to https://www.postgresql.org/message-id/06b4922193b80776a31e08a3809f2414b0d4bf90.camel%40crunchydata.com\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n\n",
"msg_date": "Sat, 26 Nov 2022 22:13:15 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Wed, 2022-11-09 at 08:54 -0500, Reid Thompson wrote:\n> Hi Andres,\n> Thanks for looking at this and for the feedback. Responses inline\n> below.\n> \n>> On Fri, 2022-11-04 at 19:41 -0700, Andres Freund wrote:\n> > Hi,\n> > \n> On 2022-11-04 08:56:13 -0400, Reid Thompson wrote:\n> > \n> > I'm not convinced that counting DSM values this way is quite right.\n> > There are a few uses of DSMs that track shared resources, with the biggest\n> > likely being the stats for relations etc. I suspect that tracking that via\n> > backend memory usage will be quite confusing, because fairly random backends that\n> > had to grow the shared state end up being blamed with the memory usage in\n> > perpituity - and then suddenly that memory usage will vanish when that backend exits,\n> > despite the memory continuing to exist.\n> \n> Ok, I'll make an attempt to identify these allocations and manage\n> them elsewhere.\n\nstill TBD\n\n> > \n> > \n> > > @@ -734,6 +747,7 @@ AllocSetAlloc(MemoryContext context, Size\n> > > size)\n> > > return NULL;\n> > > \n> > > context->mem_allocated += blksize;\n> > > + pgstat_report_backend_allocated_bytes_increase(bl\n> > > ksize);\n> > \n> > I suspect this will be noticable cost-wise. Even though these paths aren't the\n> > hottest memory allocation paths, by nature of going down into malloc, adding\n> > an external function call that then does a bunch of branching etc. seems\n> > likely to add up to some. See below for how I think we can deal with that...\n> > \n> > This is quite a few branches, including write/read barriers.\n> > \n> > It doesn't really make sense to use the\n> > PGSTAT_BEGIN_WRITE_ACTIVITY() pattern\n> > here - you're just updating a single value, there's nothing to be gained by\n> > it. 
The point of PGSTAT_BEGIN_*ACTIVITY() stuff is to allow readers to get a\n> > consistent view of multiple values - but there aren't multiple values\n> > here!\n> \n> I'll remove the barriers - initially I copied how prior functions were\n\nbarriers removed\n\n> > \n> > To avoid the overhead of checking (!beentry || !pgstat_track_activities) and\n> > the external function call, I think you'd be best off copying the trickery I\n> > introduced for pgstat_report_wait_start(), in 225a22b19ed.\n> > \n> > I.e. make pgstat_report_backend_allocated_bytes_increase() a static inline\n> > function that unconditionally updates something like\n> > *my_backend_allocated_memory. To deal with the case of (!beentry ||\n> > !pgstat_track_activities), that variable initially points to some backend\n> > local state and is set to the shared state in pgstat_bestart().\n> > \n> > This additionally has the nice benefit that you can track memory usage from\n> > before pgstat_bestart(), it'll be in the local variable.\n> \n> OK, I think I can mimic the code you reference.\n\ndone\n\npatch attached to https://www.postgresql.org/message-id/06b4922193b80776a31e08a3809f2414b0d4bf90.camel%40crunchydata.com\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n\n",
"msg_date": "Sat, 26 Nov 2022 22:13:23 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
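The fast-path pattern Andres points to (the pgstat_report_wait_start trickery from 225a22b19ed) can be sketched roughly as below. This is an illustrative stand-in, not the patch's actual symbols: a static inline reporter unconditionally bumps a counter through a pointer that initially targets backend-local state, so allocations made before initialization are still captured; a stand-in for pgstat_bestart() then folds the local total into the shared slot and redirects the pointer.

```c
#include <stdint.h>

/* Backend-local counter used before shared state is available. */
static uint64_t local_allocated_bytes = 0;

/* Stand-in for the backend's PgBackendStatus slot in shared memory. */
static uint64_t shared_allocated_bytes = 0;

/* Initially points at local state, so no branch is needed on the fast path. */
static uint64_t *my_allocated_bytes = &local_allocated_bytes;

static inline void
pgstat_report_allocation(int64_t delta)
{
    /* one unconditional add; negative deltas wrap correctly in unsigned math */
    *my_allocated_bytes += (uint64_t) delta;
}

/* Sketch of what pgstat_bestart() would do: carry over pre-init
 * allocations into the shared slot, then redirect the pointer. */
static void
switch_to_shared_counter(void)
{
    shared_allocated_bytes += local_allocated_bytes;
    my_allocated_bytes = &shared_allocated_bytes;
}
```

This keeps the `(!beentry || !pgstat_track_activities)` branching and the external function call off the allocation hot path, per the review above.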
{
"msg_contents": "On Sat, 2022-11-26 at 22:10 -0500, Reid Thompson wrote:\n> Code rebased to current master.\n> Updated to incorporate additional recommendations from the the list\n> - add units to variables in view\n> - remove 'backend_' prefix from variables/functions\n> - update documentation\n> - add functional test for allocated_bytes\n> - refactor allocation reporting to reduce number of functions and\n> branches/reduce performance hit\n> - zero allocated bytes after fork to avoid double counting\n> postmaster allocations\n> \n> \n> \n> \n\nattempt to remedy cfbot windows build issues\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Sun, 27 Nov 2022 00:32:19 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Sun, Nov 27, 2022 at 12:32:19AM -0500, Reid Thompson wrote:\n> attempt to remedy cfbot windows build issues\n\nYou can trigger those tests under your own/private repo by pushing a\nbranch to github. See src/tools/ci/README\n\nI suppose the cfbot page ought to point that out.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 27 Nov 2022 08:46:33 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Sat, Nov 26, 2022 at 9:32 PM Reid Thompson <reid.thompson@crunchydata.com>\nwrote:\n\n> On Sat, 2022-11-26 at 22:10 -0500, Reid Thompson wrote:\n> > Code rebased to current master.\n> > Updated to incorporate additional recommendations from the the list\n> > - add units to variables in view\n> > - remove 'backend_' prefix from variables/functions\n> > - update documentation\n> > - add functional test for allocated_bytes\n> > - refactor allocation reporting to reduce number of functions and\n> > branches/reduce performance hit\n> > - zero allocated bytes after fork to avoid double counting\n> > postmaster allocations\n> >\n> >\n> >\n> >\n>\n> attempt to remedy cfbot windows build issues\n>\n>\n> Hi,\n\n+ if (request_size > *mapped_size)\n+ {\n+ pgstat_report_allocated_bytes(*mapped_size,\nDECREASE);\n+ pgstat_report_allocated_bytes(request_size,\nINCREASE);\n\npgstat_report_allocated_bytes is called twice for this case. Can the two\ncalls be combined into one (with request_size - *mapped_size, INCREASE) ?\n\nCheers\n\nOn Sat, Nov 26, 2022 at 9:32 PM Reid Thompson <reid.thompson@crunchydata.com> wrote:On Sat, 2022-11-26 at 22:10 -0500, Reid Thompson wrote:\n> Code rebased to current master.\n> Updated to incorporate additional recommendations from the the list\n> - add units to variables in view\n> - remove 'backend_' prefix from variables/functions\n> - update documentation\n> - add functional test for allocated_bytes\n> - refactor allocation reporting to reduce number of functions and\n> branches/reduce performance hit\n> - zero allocated bytes after fork to avoid double counting\n> postmaster allocations\n> \n> \n> \n> \n\nattempt to remedy cfbot windows build issues\nHi,+ if (request_size > *mapped_size)+ {+ pgstat_report_allocated_bytes(*mapped_size, DECREASE);+ pgstat_report_allocated_bytes(request_size, INCREASE);pgstat_report_allocated_bytes is called twice for this case. 
Can the two calls be combined into one (with request_size - *mapped_size, INCREASE) ?Cheers",
"msg_date": "Sun, 27 Nov 2022 07:17:34 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Sun, Nov 27, 2022 at 12:32:19AM -0500, Reid Thompson wrote:\n> @@ -32,6 +33,12 @@ typedef enum BackendState\n> \tSTATE_DISABLED\n> } BackendState;\n> \n> +/* Enum helper for reporting memory allocated bytes */\n> +enum allocation_direction\n> +{\n> +\tDECREASE = -1,\n> +\tINCREASE = 1,\n> +};\n\nBTW, these should have some kind of prefix, like PG_ALLOC_* to avoid\ncausing the same kind of problem for someone else that another header\ncaused for you by defining something somewhere called IGNORE (ignore\nwhat, I don't know). The other problem was probably due to a define,\nthough. Maybe instead of an enum, the function should take a boolean.\n\nI still wonder whether there needs to be a separate CF entry for the\n0001 patch. One issue is that there's two different lists of people\ninvolved in the threads.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 27 Nov 2022 09:40:53 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Sun, Nov 27, 2022 at 7:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sun, Nov 27, 2022 at 12:32:19AM -0500, Reid Thompson wrote:\n> > @@ -32,6 +33,12 @@ typedef enum BackendState\n> > STATE_DISABLED\n> > } BackendState;\n> >\n> > +/* Enum helper for reporting memory allocated bytes */\n> > +enum allocation_direction\n> > +{\n> > + DECREASE = -1,\n> > + INCREASE = 1,\n> > +};\n>\n> BTW, these should have some kind of prefix, like PG_ALLOC_* to avoid\n> causing the same kind of problem for someone else that another header\n> caused for you by defining something somewhere called IGNORE (ignore\n> what, I don't know). The other problem was probably due to a define,\n> though. Maybe instead of an enum, the function should take a boolean.\n>\n> I still wonder whether there needs to be a separate CF entry for the\n> 0001 patch. One issue is that there's two different lists of people\n> involved in the threads.\n>\n> --\n> Justin\n>\n>\n> I am a bit curious: why is the allocation_direction enum needed ?\n\npgstat_report_allocated_bytes() can be given the amount (either negative or\npositive) to adjust directly.\n\nCheers\n\nOn Sun, Nov 27, 2022 at 7:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:On Sun, Nov 27, 2022 at 12:32:19AM -0500, Reid Thompson wrote:\n> @@ -32,6 +33,12 @@ typedef enum BackendState\n> STATE_DISABLED\n> } BackendState;\n> \n> +/* Enum helper for reporting memory allocated bytes */\n> +enum allocation_direction\n> +{\n> + DECREASE = -1,\n> + INCREASE = 1,\n> +};\n\nBTW, these should have some kind of prefix, like PG_ALLOC_* to avoid\ncausing the same kind of problem for someone else that another header\ncaused for you by defining something somewhere called IGNORE (ignore\nwhat, I don't know). The other problem was probably due to a define,\nthough. Maybe instead of an enum, the function should take a boolean.\n\nI still wonder whether there needs to be a separate CF entry for the\n0001 patch. 
One issue is that there's two different lists of people\ninvolved in the threads.\n\n-- \nJustin\n\nI am a bit curious: why is the allocation_direction enum needed ?pgstat_report_allocated_bytes() can be given the amount (either negative or positive) to adjust directly.Cheers",
"msg_date": "Sun, 27 Nov 2022 10:09:47 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
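Ted's suggestion — pass a signed delta instead of an `allocation_direction` enum — would look roughly like the sketch below. The names are hypothetical, not the patch's API; the point is that a negative value records a deallocation, and the dsm resize case quoted earlier collapses to a single call.

```c
#include <stdint.h>

static int64_t allocated_bytes = 0;

/* Signed delta: callers pass a negative value to record a deallocation,
 * so no INCREASE/DECREASE enum (and no prefix-collision risk) is needed. */
static inline void
report_allocated_bytes(int64_t delta)
{
    allocated_bytes += delta;
}

/* The quoted dsm resize then needs one call instead of a
 * DECREASE + INCREASE pair. */
static void
report_dsm_resize(int64_t mapped_size, int64_t request_size)
{
    report_allocated_bytes(request_size - mapped_size);
}
```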
{
"msg_contents": "On 2022-11-26 22:10:06 -0500, Reid Thompson wrote:\n> - zero allocated bytes after fork to avoid double counting postmaster allocations\n\nI still don't understand this - postmaster shouldn't be counted at all. It\ndoesn't have a PgBackendStatus. There simply shouldn't be any tracked\nallocations from it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 28 Nov 2022 10:59:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Mon, 2022-11-28 at 10:59 -0800, Andres Freund wrote:\n> On 2022-11-26 22:10:06 -0500, Reid Thompson wrote:\n> > - zero allocated bytes after fork to avoid double counting\n> > postmaster allocations\n> \n> I still don't understand this - postmaster shouldn't be counted at\n> all. It\n> doesn't have a PgBackendStatus. There simply shouldn't be any tracked\n> allocations from it.\n> \n> Greetings,\n> \n> Andres Freund\n\nHi Andres,\nI based this on the following.\n\nIt appears to me that Postmaster populates the local variable that\n*my_allocated_bytes points to. That allocation is passed to forked\nchildren, and if not zeroed out, will be double counted as part of\nthe child allocation. Is this invalid?\n\n$ ps -ef|grep postgres\npostgres 6389 1 0 Dec01 ? 00:00:17 /usr/sbin/pgbouncer -d /etc/pgbouncer/pgbouncer.ini\nrthompso 2937799 1 0 09:45 ? 00:00:00 /tmp/postgres/install/pg-stats-memory/bin/postgres -D /var/tmp/pg-stats-memory -p 5433\nrthompso 2937812 2937799 0 09:45 ? 00:00:00 postgres: checkpointer \nrthompso 2937813 2937799 0 09:45 ? 00:00:00 postgres: background writer \nrthompso 2937816 2937799 0 09:45 ? 00:00:00 postgres: walwriter \nrthompso 2937817 2937799 0 09:45 ? 00:00:00 postgres: autovacuum launcher \nrthompso 2937818 2937799 0 09:45 ? 00:00:00 postgres: logical replication launcher \nrthompso 2938877 2636586 0 09:46 pts/4 00:00:00 /usr/lib/postgresql/12/bin/psql -h localhost -p 5433 postgres\nrthompso 2938909 2937799 0 09:46 ? 00:00:00 postgres: rthompso postgres 127.0.0.1(44532) idle\nrthompso 2942164 1987403 0 09:49 pts/3 00:00:00 grep postgres\n\nBracketing fork_process() calls with logging to print *my_allocated_bytes immediately prior and after fork_process...\nTo me, this indicates that the forked children for this invocation\n(checkpointer, walwriter, autovac launcher, client backend, autovac worker, etc)\nare inheriting 240672 bytes from postmaster. 
\n\n$ ccat logfile \n2022-12-02 09:45:05.871 EST [2937799] LOG: starting PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n2022-12-02 09:45:05.872 EST [2937799] LOG: listening on IPv4 address \"127.0.0.1\", port 5433\n2022-12-02 09:45:05.874 EST [2937799] LOG: listening on Unix socket \"/tmp/.s.PGSQL.5433\"\nparent StartChildProcess process. pid: 2937799 *my_allocated_bytes: 240672\nparent StartChildProcess process. pid: 2937799 *my_allocated_bytes: 240672\nparent StartChildProcess process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartChildProcess process. pid: 2937812 *my_allocated_bytes: 240672\nchild StartChildProcess process. pid: 2937813 *my_allocated_bytes: 240672\nchild StartChildProcess process. pid: 2937814 *my_allocated_bytes: 240672\n2022-12-02 09:45:05.884 EST [2937814] LOG: database system was shut down at 2022-12-02 09:41:13 EST\nparent StartChildProcess process. pid: 2937799 *my_allocated_bytes: 240672\nparent StartAutoVacLauncher process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartChildProcess process. pid: 2937816 *my_allocated_bytes: 240672\nparent do_start_bgworker process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartAutoVacLauncher process. pid: 2937817 *my_allocated_bytes: 240672\n2022-12-02 09:45:05.889 EST [2937799] LOG: database system is ready to accept connections\nchild do_start_bgworker process. pid: 2937818 *my_allocated_bytes: 240672\nparent StartAutoVacWorker process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartAutoVacWorker process. pid: 2938417 *my_allocated_bytes: 240672\nparent BackendStartup process. pid: 2937799 *my_allocated_bytes: 240672\nchild BackendStartup process. pid: 2938909 *my_allocated_bytes: 240672\nparent StartAutoVacWorker process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartAutoVacWorker process. pid: 2938910 *my_allocated_bytes: 240672\nparent StartAutoVacWorker process. 
pid: 2937799 *my_allocated_bytes: 240672\nchild StartAutoVacWorker process. pid: 2939340 *my_allocated_bytes: 240672\nparent StartAutoVacWorker process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartAutoVacWorker process. pid: 2939665 *my_allocated_bytes: 240672\nparent StartAutoVacWorker process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartAutoVacWorker process. pid: 2940038 *my_allocated_bytes: 240672\nparent StartAutoVacWorker process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartAutoVacWorker process. pid: 2940364 *my_allocated_bytes: 240672\nparent StartAutoVacWorker process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartAutoVacWorker process. pid: 2940698 *my_allocated_bytes: 240672\nparent StartAutoVacWorker process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartAutoVacWorker process. pid: 2941317 *my_allocated_bytes: 240672\nparent StartAutoVacWorker process. pid: 2937799 *my_allocated_bytes: 240672\nchild StartAutoVacWorker process. pid: 2941825 *my_allocated_bytes: 240672\n\n\nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n\n",
"msg_date": "Fri, 02 Dec 2022 11:09:30 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
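The inheritance Reid's log demonstrates can be reproduced with a minimal standalone program (illustrative only, not PostgreSQL code): fork() copies the parent's address space, so a process-local counter arrives in the child already holding the parent's total, and unless the child zeroes it, those bytes are counted twice.

```c
#include <stdint.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-in for the backend-local allocation counter. */
static uint64_t allocated_bytes = 0;

/* Fork a child and report the counter value the child starts with. */
static uint64_t
counter_seen_by_child(void)
{
    int fd[2];
    uint64_t val = 0;

    if (pipe(fd) != 0)
        return 0;

    pid_t pid = fork();
    if (pid == 0)
    {
        /* child: the parent's counter was copied verbatim by fork() */
        uint64_t mine = allocated_bytes;
        ssize_t rc = write(fd[1], &mine, sizeof(mine));
        _exit(rc == (ssize_t) sizeof(mine) ? 0 : 1);
    }

    if (read(fd[0], &val, sizeof(val)) != (ssize_t) sizeof(val))
        val = 0;
    waitpid(pid, NULL, 0);
    close(fd[0]);
    close(fd[1]);
    return val;
}
```

Zeroing the counter right after fork — as the patch does — is one fix; the alternative Andres suggests is to not count allocations made before backend_status.c is initialized at all.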
{
"msg_contents": "Hi,\n\nOn 2022-12-02 11:09:30 -0500, Reid Thompson wrote:\n> It appears to me that Postmaster populates the local variable that\n> *my_allocated_bytes points to. That allocation is passed to forked\n> children, and if not zeroed out, will be double counted as part of\n> the child allocation. Is this invalid?\n\nI don't think we should count allocations made before backend_status.c has\nbeen initialized.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Dec 2022 09:18:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Fri, 2022-12-02 at 09:18 -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2022-12-02 11:09:30 -0500, Reid Thompson wrote:\n> > It appears to me that Postmaster populates the local variable that\n> > *my_allocated_bytes points to. That allocation is passed to forked\n> > children, and if not zeroed out, will be double counted as part of\n> > the child allocation. Is this invalid?\n> \n> I don't think we should count allocations made before\n> backend_status.c has\n> been initialized.\n> \n> Greetings,\n> \n> Andres Freund\n\nHi,\nThe intent was to capture and display all the memory allocated to the\nbackends, for admins/users/max_total_backend_memory/others to utilize.\nWhy should we ignore the allocations prior to backend_status.c?\n\nThanks,\nReid\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n",
"msg_date": "Tue, 06 Dec 2022 08:47:55 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-06 08:47:55 -0500, Reid Thompson wrote:\n> The intent was to capture and display all the memory allocated to the\n> backends, for admins/users/max_total_backend_memory/others to utilize.\n\nIt's going to be far less accurate than that. For one, there's a lot of\noperating system resources, like the page table, that are going to be\nignored. We're also not going to capture allocations done directly via\nmalloc(). There's also copy-on-write effects that we're ignoring.\n\nIf we just want to show an accurate picture of the current memory usage, we\nneed to go to operating system specific interfaces.\n\n\n> Why should we ignore the allocations prior to backend_status.c?\n\nIt seems to add complexity without a real increase in accuracy to me. But I'm\nnot going to push harder on it than I already have.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 09:55:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Sun, 2022-11-27 at 09:40 -0600, Justin Pryzby wrote:\n> > BTW, these should have some kind of prefix, like PG_ALLOC_* to\n> > avoid causing the same kind of problem for someone else that\n> > another header caused for you by defining something somewhere\n> > called IGNORE (ignore what, I don't know). The other problem was\n> > probably due to a define, though. Maybe instead of an enum, the\n> > function should take a boolean.\n> > \n\nPatch updated to current master and includes above prefix\nrecommendation and combining of two function calls to one recommended\nby Ted Yu.\n\n> > \n> > I still wonder whether there needs to be a separate CF entry for\n> > the 0001 patch. One issue is that there's two different lists of\n> > people involved in the threads.\n> > \n\nI'm OK with containing the conversation to one thread if everyone else\nis. If there's no argument against, then patches after today will go\nto the \"Add the ability to limit the amount of memory that can be\nallocated to backends\" thread \nhttps://www.postgresql.org/message-id/bd57d9a4c219cc1392665fd5fba61dde8027b3da.camel@crunchydata.com\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Thu, 08 Dec 2022 09:09:29 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Thu, 8 Dec 2022 at 19:44, Reid Thompson\n<reid.thompson@crunchydata.com> wrote:\n>\n> On Sun, 2022-11-27 at 09:40 -0600, Justin Pryzby wrote:\n> > > BTW, these should have some kind of prefix, like PG_ALLOC_* to\n> > > avoid causing the same kind of problem for someone else that\n> > > another header caused for you by defining something somewhere\n> > > called IGNORE (ignore what, I don't know). The other problem was\n> > > probably due to a define, though. Maybe instead of an enum, the\n> > > function should take a boolean.\n> > >\n>\n> Patch updated to current master and includes above prefix\n> recommendation and combining of two function calls to one recommended\n> by Ted Yu.\n>\n> > >\n> > > I still wonder whether there needs to be a separate CF entry for\n> > > the 0001 patch. One issue is that there's two different lists of\n> > > people involved in the threads.\n> > >\n>\n> I'm OK with containing the conversation to one thread if everyone else\n> is. If there's no argument against, then patches after today will go\n> to the \"Add the ability to limit the amount of memory that can be\n> allocated to backends\" thread\n> https://www.postgresql.org/message-id/bd57d9a4c219cc1392665fd5fba61dde8027b3da.camel@crunchydata.com\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\ne351f85418313e97c203c73181757a007dfda6d0 ===\n=== applying patch\n./0001-Add-tracking-of-backend-memory-allocated-to-pg_stat_.patch\npatching file src/backend/utils/mmgr/slab.c\nHunk #1 succeeded at 69 (offset 16 lines).\nHunk #2 succeeded at 414 (offset 175 lines).\nHunk #3 succeeded at 436 with fuzz 2 (offset 176 lines).\nHunk #4 FAILED at 286.\nHunk #5 succeeded at 488 (offset 186 lines).\nHunk #6 FAILED at 381.\nHunk #7 FAILED at 554.\n3 out of 7 hunks FAILED -- saving rejects to file\nsrc/backend/utils/mmgr/slab.c.rej\n\n[1] - 
http://cfbot.cputube.org/patch_41_3865.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:25:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Tue, 2023-01-03 at 16:25 +0530, vignesh C wrote:\n> ...\n> The patch does not apply on top of HEAD as in [1], please post a\n> rebased patch:\n>... \n> Regards,\n> Vignesh\n\nPer conversation in thread listed below, patches have been submitted to the \"Add the ability to limit the amount of memory that can be allocated to backends\" thread\nhttps://www.postgresql.org/message-id/bd57d9a4c219cc1392665fd5fba61dde8027b3da.camel@crunchydata.com\n\n0001-Add-tracking-of-backend-memory-allocated-to-pg_stat_.patch\n0002-Add-the-ability-to-limit-the-amount-of-memory-that-c.patch\n\nOn Thu, 8 Dec 2022 at 19:44, Reid Thompson\n<reid(dot)thompson(at)crunchydata(dot)com> wrote:\n>\n> On Sun, 2022-11-27 at 09:40 -0600, Justin Pryzby wrote:\n> > > ...\n> > > I still wonder whether there needs to be a separate CF entry for\n> > > the 0001 patch. One issue is that there's two different lists of\n> > > people involved in the threads.\n> > >\n>\n> I'm OK with containing the conversation to one thread if everyone else\n> is. If there's no argument against, then patches after today will go\n> to the \"Add the ability to limit the amount of memory that can be\n> allocated to backends\" thread\n> https://www.postgresql.org/message-id/bd57d9a4c219cc1392665fd5fba61dde8027b3da.camel@crunchydata.com\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n\n",
"msg_date": "Thu, 05 Jan 2023 13:58:33 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Thu, Jan 05, 2023 at 01:58:33PM -0500, Reid Thompson wrote:\n> On Tue, 2023-01-03 at 16:25 +0530, vignesh C wrote:\n> > ...\n> > The patch does not apply on top of HEAD as in [1], please post a\n> > rebased patch:\n> >... \n> > Regards,\n> > Vignesh\n> \n> Per conversation in thread listed below, patches have been submitted to the \"Add the ability to limit the amount of memory that can be allocated to backends\" thread\n> https://www.postgresql.org/message-id/bd57d9a4c219cc1392665fd5fba61dde8027b3da.camel@crunchydata.com\n\nI suggest to close the associated CF entry.\n\n(Also, the people who participated in this thread may want to be\nincluded in the other thread going forward.)\n\n> 0001-Add-tracking-of-backend-memory-allocated-to-pg_stat_.patch\n> 0002-Add-the-ability-to-limit-the-amount-of-memory-that-c.patch\n> \n> On Thu, 8 Dec 2022 at 19:44, Reid Thompson\n> <reid(dot)thompson(at)crunchydata(dot)com> wrote:\n> >\n> > On Sun, 2022-11-27 at 09:40 -0600, Justin Pryzby wrote:\n> > > > ...\n> > > > I still wonder whether there needs to be a separate CF entry for\n> > > > the 0001 patch. One issue is that there's two different lists of\n> > > > people involved in the threads.\n> > > >\n> >\n> > I'm OK with containing the conversation to one thread if everyone else\n> > is. If there's no argument against, then patches after today will go\n> > to the \"Add the ability to limit the amount of memory that can be\n> > allocated to backends\" thread\n> > https://www.postgresql.org/message-id/bd57d9a4c219cc1392665fd5fba61dde8027b3da.camel@crunchydata.com\n\n\n",
"msg_date": "Thu, 5 Jan 2023 14:13:30 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "On Thu, 2023-01-05 at 14:13 -0600, Justin Pryzby wrote:\n> \n> I suggest to close the associated CF entry.\n\nClosed with status of Withdrawn. If that is invalid, please advise.\n\n> (Also, the people who participated in this thread may want to be\n> included in the other thread going forward.)\n\nExplicitly adding previously missed participants to Cc: - that conversation/patches are being consolidated to\nthe thread \"Add the ability to limit the amount of memory that can be allocated to backends\"\n https://www.postgresql.org/message-id/bd57d9a4c219cc1392665fd5fba61dde8027b3da.camel@crunchydata.com\n\n\n\nThanks,\nReid\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n\n",
"msg_date": "Thu, 05 Jan 2023 16:33:28 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
{
"msg_contents": "Here is an updated version of the earlier work.\n\nThis version:\n 1) Tracks memory as requested by the backend.\n 2) Includes allocations made during program startup.\n 3) Optimizes the \"fast path\" to only update two local variables.\n 4) Places a cluster wide limit on total memory allocated.\n\nThe cluster wide limit is useful for multi-hosting. One greedy cluster doesn't starve\nthe other clusters of memory.\n\nNote there isn't a good way to track actual memory used by a cluster. \nIdeally, we like to get the working set size of each memory segment along with\nthe size of the associated kernel data structures. \nGathering that info in a portable way is a \"can of worms\".\nInstead, we're managing memory as requested by the application.\nWhile not identical, the two approaches are strongly correlated.\n\n The memory model used is\n 1) Each process is assumed to use a certain amount of memory\n simply by existing.\n 2) All pg memory allocations are counted, including those before\n the process is fully initialized.\n 3) Each process maintains its own local counters. These are the \"truth\".\n 4) Periodically,\n - local counters are added into the global, shared memory counters.\n - pgstats is updated\n - total memory is checked.\n\nFor efficiency, the global total is an approximation, not a precise number.\nIt can be off by as much as 1 MB per process. Memory limiting\ndoesn't need precision, just a consistent and reasonable approximation.\n\nRepeating the earlier benchmark test, there is no measurable loss of performance.",
"msg_date": "Thu, 31 Aug 2023 16:18:57 +0000",
"msg_from": "John Morris <john.morris@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
},
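The periodic fold of local counters into a shared, bounds-checked global total can be sketched with a compare-and-swap loop like the one below. This is an illustration under assumptions, not the patch's code: it uses C11 atomics in place of PostgreSQL's pg_atomic_* API, and the function name mirrors but is not the patch's `atomic_add_within_bounds_i64`.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Atomically add `add` to *counter, but only if the result stays within
 * [lower_bound, upper_bound] and does not overflow; otherwise leave the
 * counter untouched and report failure so the caller can reject the
 * reservation. On a failed CAS, oldval is refreshed and we retry. */
static bool
atomic_add_within_bounds(_Atomic int64_t *counter, int64_t add,
                         int64_t lower_bound, int64_t upper_bound)
{
    int64_t oldval = atomic_load(counter);

    for (;;)
    {
        /* reject signed overflow before computing the sum */
        if ((add > 0 && oldval > INT64_MAX - add) ||
            (add < 0 && oldval < INT64_MIN - add))
            return false;

        int64_t newval = oldval + add;

        if (newval < lower_bound || newval > upper_bound)
            return false;       /* would leave the counter out of bounds */

        if (atomic_compare_exchange_weak(counter, &oldval, newval))
            return true;        /* published */
    }
}
```

Because each backend only flushes its local counter periodically, the shared total such an operation maintains is the approximation described above — consistent and bounded, but deliberately not exact.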
{
"msg_contents": "On Thu, Aug 31, 2023 at 9:19 AM John Morris <john.morris@crunchydata.com>\nwrote:\n\n> Here is an updated version of the earlier work.\n>\n> This version:\n> 1) Tracks memory as requested by the backend.\n> 2) Includes allocations made during program startup.\n> 3) Optimizes the \"fast path\" to only update two local variables.\n> 4) Places a cluster wide limit on total memory allocated.\n>\n> The cluster wide limit is useful for multi-hosting. One greedy cluster\n> doesn't starve\n> the other clusters of memory.\n>\n> Note there isn't a good way to track actual memory used by a cluster.\n> Ideally, we like to get the working set size of each memory segment along\n> with\n> the size of the associated kernel data structures.\n> Gathering that info in a portable way is a \"can of worms\".\n> Instead, we're managing memory as requested by the application.\n> While not identical, the two approaches are strongly correlated.\n>\n> The memory model used is\n> 1) Each process is assumed to use a certain amount of memory\n> simply by existing.\n> 2) All pg memory allocations are counted, including those before\n> the process is fully initialized.\n> 3) Each process maintains its own local counters. These are the\n> \"truth\".\n> 4) Periodically,\n> - local counters are added into the global, shared memory\n> counters.\n> - pgstats is updated\n> - total memory is checked.\n>\n> For efficiency, the global total is an approximation, not a precise number.\n> It can be off by as much as 1 MB per process. Memory limiting\n> doesn't need precision, just a consistent and reasonable approximation.\n>\n> Repeating the earlier benchmark test, there is no measurable loss of\n> performance.\n>\n> Hi,\nIn `InitProcGlobal`:\n\n+ elog(WARNING, \"proc init: max_total=%d result=%d\\n\",\nmax_total_bkend_mem, result);\n\nIs the above log for debugging purposes ? 
Maybe the log level should be\nchanged.\n\n+\nerrmsg(\"max_total_backend_memory %dMB - shared_memory_size %dMB is <=\n100MB\",\n\nThe `<=` in the error message is inconsistent with the `max_total_bkend_mem\n< result + 100` check.\nPlease modify one of them.\n\nFor update_global_allocation :\n\n+\nAssert((int64)pg_atomic_read_u64(&ProcGlobal->total_bkend_mem_bytes) >= 0);\n+\nAssert((int64)pg_atomic_read_u64(&ProcGlobal->global_dsm_allocation) >= 0);\n\nShould the assertions be done earlier in the function?\n\nFor reserve_backend_memory:\n\n+ return true;\n+\n+ /* CASE: the new allocation is within bounds. Take the fast path. */\n+ else if (my_memory.allocated_bytes + size <= allocation_upper_bound)\n\nThe `else` can be omitted (the preceding if block returns).\n\nFor `atomic_add_within_bounds_i64`\n\n+ newval = oldval + add;\n+\n+ /* check if we are out of bounds */\n+ if (newval < lower_bound || newval > upper_bound ||\naddition_overflow(oldval, add))\n\nSince the summation is stored in `newval`, you can pass newval to\n`addition_overflow` so that `addition_overflow` doesn't add them again.\n\nThere are debug statements, such as:\n\n+ debug(\"done\\n\");\n\nyou can drop them in the next patch.\n\nThanks",
"msg_date": "Sat, 2 Sep 2023 08:13:00 -0700",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add tracking of backend memory allocated to pg_stat_activity"
}
] |
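The review in the thread above suggests computing the bounded sum once and reusing it in `atomic_add_within_bounds_i64`. Below is a minimal standalone C sketch of that bounds-plus-overflow check — the function names mirror the patch under review but this is an invented simplification, not the patch code, and it omits the compare-and-swap loop a real atomic version would need:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Detect signed 64-bit overflow without invoking undefined behavior. */
static bool
addition_overflows(int64_t a, int64_t b)
{
	if (b > 0 && a > INT64_MAX - b)
		return true;
	if (b < 0 && a < INT64_MIN - b)
		return true;
	return false;
}

/*
 * Add 'add' to '*val' only if the result stays within
 * [lower_bound, upper_bound].  Returns true on success.
 * Overflow is checked before the sum is formed, and the sum is
 * computed exactly once, addressing the review comment.
 */
static bool
add_within_bounds(int64_t *val, int64_t add,
				  int64_t lower_bound, int64_t upper_bound)
{
	int64_t		oldval = *val;
	int64_t		newval;

	if (addition_overflows(oldval, add))
		return false;
	newval = oldval + add;		/* computed once, reused below */
	if (newval < lower_bound || newval > upper_bound)
		return false;
	*val = newval;
	return true;
}
```

On failure the counter is left untouched, so a denied reservation never corrupts the running total.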
[
{
"msg_contents": "Hi Hackers,\n\nThis patch ensures get_database_list() switches back to the memory\ncontext in use upon entry rather than returning with TopMemoryContext\nas the context.\n\nThis will address memory allocations in autovacuum.c being associated\nwith TopMemoryContext when they should not be.\n\nautovacuum.c do_start_worker() with current context\n'Autovacuum start worker (tmp)' invokes get_database_list(). Upon\nreturn, the current context has been changed to TopMemoryContext by\nAtCommit_Memory() as part of an internal transaction. Further down\nin the do_start_worker(), pgstat_fetch_stat_dbentry() is invoked.\nPreviously this didn't pose a issue, however recent changes altered\nhow pgstat_fetch_stat_dbentry() is implemented. The new\nimplementation has a branch utilizing palloc. The patch ensures these\nallocations are associated with the 'Autovacuum start worker (tmp)'\ncontext rather than the TopMemoryContext. Prior to the change,\nleaving an idle laptop PG instance running just shy of 3 days saw the\nautovacuum launcher process grow to 42MB with most of that growth in\nTopMemoryContext due to the palloc allocations issued with autovacuum\nworker startup.\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Wed, 31 Aug 2022 12:10:26 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Bug fix. autovacuum.c do_worker_start() associates memory\n allocations with TopMemoryContext rather than 'Autovacuum start worker\n (tmp)'"
},
{
"msg_contents": "Reid Thompson <reid.thompson@crunchydata.com> writes:\n> This patch ensures get_database_list() switches back to the memory\n> context in use upon entry rather than returning with TopMemoryContext\n> as the context.\n\n> This will address memory allocations in autovacuum.c being associated\n> with TopMemoryContext when they should not be.\n\n> autovacuum.c do_start_worker() with current context\n> 'Autovacuum start worker (tmp)' invokes get_database_list(). Upon\n> return, the current context has been changed to TopMemoryContext by\n> AtCommit_Memory() as part of an internal transaction. Further down\n> in the do_start_worker(), pgstat_fetch_stat_dbentry() is invoked.\n> Previously this didn't pose a issue, however recent changes altered\n> how pgstat_fetch_stat_dbentry() is implemented. The new\n> implementation has a branch utilizing palloc. The patch ensures these\n> allocations are associated with the 'Autovacuum start worker (tmp)'\n> context rather than the TopMemoryContext. Prior to the change,\n> leaving an idle laptop PG instance running just shy of 3 days saw the\n> autovacuum launcher process grow to 42MB with most of that growth in\n> TopMemoryContext due to the palloc allocations issued with autovacuum\n> worker startup.\n\nYeah, I can reproduce noticeable growth within a couple minutes\nafter setting autovacuum_naptime to 1s, and I confirm that the\nlauncher's space consumption stays static after applying this.\n\nEven if there's only a visible leak in v15/HEAD, I'm inclined\nto back-patch this all the way, because it's certainly buggy\non its own terms. It's just the merest accident that neither\ncaller is leaking other stuff into TopMemoryContext, because\nthey both think they are using a short-lived context.\n\nThanks for the report!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Aug 2022 16:05:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix. autovacuum.c do_worker_start() associates memory\n allocations with TopMemoryContext rather than 'Autovacuum start worker (tmp)'"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-31 16:05:03 -0400, Tom Lane wrote:\n> Even if there's only a visible leak in v15/HEAD, I'm inclined\n> to back-patch this all the way, because it's certainly buggy\n> on its own terms. It's just the merest accident that neither\n> caller is leaking other stuff into TopMemoryContext, because\n> they both think they are using a short-lived context.\n\n+1\n\n> Thanks for the report!\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Aug 2022 13:09:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix. autovacuum.c do_worker_start() associates memory\n allocations with TopMemoryContext rather than 'Autovacuum start worker\n (tmp)'"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 01:09:22PM -0700, Andres Freund wrote:\n> On 2022-08-31 16:05:03 -0400, Tom Lane wrote:\n>> Even if there's only a visible leak in v15/HEAD, I'm inclined\n>> to back-patch this all the way, because it's certainly buggy\n>> on its own terms. It's just the merest accident that neither\n>> caller is leaking other stuff into TopMemoryContext, because\n>> they both think they are using a short-lived context.\n> \n> +1\n\nOuch. Thanks for the quick fix and the backpatch.\n--\nMichael",
"msg_date": "Thu, 1 Sep 2022 09:35:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bug fix. autovacuum.c do_worker_start() associates memory\n allocations with TopMemoryContext rather than 'Autovacuum start worker\n (tmp)'"
}
] |
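The back-patched fix in the thread above captures the caller's memory context on entry and switches back to it before returning, instead of assuming the current context survives the internal transaction's AtCommit_Memory(). A minimal standalone mock of that save/restore discipline follows — every type and helper here is invented for illustration; the real code uses PostgreSQL's `CurrentMemoryContext` and `MemoryContextSwitchTo()`:

```c
#include <assert.h>

/* Stand-in for PostgreSQL's CurrentMemoryContext global. */
typedef int context_t;
static context_t TopContext;
static context_t *CurrentContext = &TopContext;

static context_t *
switch_to(context_t *cxt)
{
	context_t  *old = CurrentContext;

	CurrentContext = cxt;
	return old;
}

/*
 * Stand-in for an internal transaction commit, which resets the
 * current context to the top-level one (as AtCommit_Memory() does).
 */
static void
commit_transaction(void)
{
	CurrentContext = &TopContext;
}

/*
 * Stand-in for get_database_list(): the bug was returning with the
 * context clobbered; the fix saves the caller's context on entry
 * and restores it on exit.
 */
static void
get_database_list_fixed(void)
{
	context_t  *oldcxt = CurrentContext;

	commit_transaction();		/* internally clobbers CurrentContext */
	(void) switch_to(oldcxt);	/* restore the caller's context */
}
```

With the restore in place, allocations the caller makes afterwards land in its own short-lived context rather than accumulating in the top-level one.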
[
{
"msg_contents": "Hi Hackers,\n\nAdd the ability to limit the amount of memory that can be allocated to\nbackends.\n\nThis builds on the work that adds backend memory allocated to\npg_stat_activity\nhttps://www.postgresql.org/message-id/67bb5c15c0489cb499723b0340f16e10c22485ec.camel%40crunchydata.com\nBoth patches are attached.\n\nAdd GUC variable max_total_backend_memory.\n\nSpecifies a limit to the amount of memory (MB) that may be allocated to\nbackends in total (i.e. this is not a per user or per backend limit).\nIf unset, or set to 0 it is disabled. It is intended as a resource to\nhelp avoid the OOM killer. A backend request that would push the total\nover the limit will be denied with an out of memory error causing that\nbackends current query/transaction to fail. Due to the dynamic nature\nof memory allocations, this limit is not exact. If within 1.5MB of the\nlimit and two backends request 1MB each at the same time both may be\nallocated exceeding the limit. Further requests will not be allocated\nuntil dropping below the limit. Keep this in mind when setting this\nvalue to avoid the OOM killer. Currently, this limit does not affect\nauxiliary backend processes, this list of non-affected backend\nprocesses is open for discussion as to what should/should not be\nincluded. Backend memory allocations are displayed in the\npg_stat_activity view. \n\n\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Wed, 31 Aug 2022 12:50:19 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 12:50:19PM -0400, Reid Thompson wrote:\n> Hi Hackers,\n> \n> Add the ability to limit the amount of memory that can be allocated to\n> backends.\n> \n> This builds on the work that adds backend memory allocated to\n> pg_stat_activity\n> https://www.postgresql.org/message-id/67bb5c15c0489cb499723b0340f16e10c22485ec.camel%40crunchydata.com\n> Both patches are attached.\n\nYou should name the patches with different prefixes, like\n001,002,003 Otherwise, cfbot may try to apply them in the wrong order.\ngit format-patch is the usual tool for that.\n\n> + Specifies a limit to the amount of memory (MB) that may be allocated to\n\nMB are just the default unit, right ?\nThe user should be allowed to write max_total_backend_memory='2GB'\n\n> + backends in total (i.e. this is not a per user or per backend limit).\n> + If unset, or set to 0 it is disabled. A backend request that would push\n> + the total over the limit will be denied with an out of memory error\n> + causing that backends current query/transaction to fail. Due to the dynamic\n\nbackend's\n\n> + nature of memory allocations, this limit is not exact. If within 1.5MB of\n> + the limit and two backends request 1MB each at the same time both may be\n> + allocated exceeding the limit. 
Further requests will not be allocated until\n\nallocated, and exceed the limit\n\n> +bool\n> +exceeds_max_total_bkend_mem(uint64 allocation_request)\n> +{\n> +\tbool result = false;\n> +\n> +\tif (MyAuxProcType != NotAnAuxProcess)\n> +\t\treturn result;\n\nThe double negative is confusing, so could use a comment.\n\n> +\t/* Convert max_total_bkend_mem to bytes for comparison */\n> +\tif (max_total_bkend_mem &&\n> +\t\tpgstat_get_all_backend_memory_allocated() +\n> +\t\tallocation_request > (uint64)max_total_bkend_mem * 1024 * 1024)\n> +\t{\n> +\t\t/*\n> +\t\t * Explicitely identify the OOM being a result of this\n> +\t\t * configuration parameter vs a system failure to allocate OOM.\n> +\t\t */\n> +\t\telog(WARNING,\n> +\t\t\t \"request will exceed postgresql.conf defined max_total_backend_memory limit (%lu > %lu)\",\n> +\t\t\t pgstat_get_all_backend_memory_allocated() +\n> +\t\t\t allocation_request, (uint64)max_total_bkend_mem * 1024 * 1024);\n\nI think it should be ereport() rather than elog(), which is\ninternal-only, and not-translated.\n\n> + {\"max_total_backend_memory\", PGC_SIGHUP, RESOURCES_MEM,\n> + gettext_noop(\"Restrict total backend memory allocations to this max.\"),\n> + gettext_noop(\"0 turns this feature off.\"),\n> + GUC_UNIT_MB\n> + },\n> + &max_total_bkend_mem,\n> + 0, 0, INT_MAX,\n> + NULL, NULL, NULL\n\nI think this needs a maximum like INT_MAX/1024/1024\n\n> +uint64\n> +pgstat_get_all_backend_memory_allocated(void)\n> +{\n...\n> +\tfor (i = 1; i <= NumBackendStatSlots; i++)\n> +\t{\n\nIt's looping over every backend for each allocation.\nDo you know if there's any performance impact of that ?\n\nI think it may be necessary to track the current allocation size in\nshared memory (with atomic increments?). Maybe decrements would need to\nbe exactly accounted for, or otherwise Assert() that the value is not\nnegative. 
I don't know how expensive it'd be to have conditionals for\neach decrement, but maybe the value would only be decremented at\nstrategic times, like at transaction commit or backend shutdown.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 31 Aug 2022 12:34:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "At Wed, 31 Aug 2022 12:50:19 -0400, Reid Thompson <reid.thompson@crunchydata.com> wrote in \n> Hi Hackers,\n> \n> Add the ability to limit the amount of memory that can be allocated to\n> backends.\n\nThe patch seems to limit both of memory-context allocations and DSM\nallocations happen on a specific process by the same budget. In the\nfirst place I don't think it's sensible to cap the amount of DSM\nallocations by per-process budget.\n\nDSM is used by pgstats subsystem. There can be cases where pgstat\ncomplains for denial of DSM allocation after the budget has been\nexhausted by memory-context allocations, or every command complains\nfor denial of memory-context allocation after once the per-process\nbudget is exhausted by DSM allocations. That doesn't seem reasonable.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 01 Sep 2022 11:48:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\nOn 8/31/22 6:50 PM, Reid Thompson wrote:\n> Hi Hackers,\n>\n> Add the ability to limit the amount of memory that can be allocated to\n> backends.\n\nThanks for the patch.\n\n+ 1 on the idea.\n\n> Specifies a limit to the amount of memory (MB) that may be allocated to\n> backends in total (i.e. this is not a per user or per backend limit).\n> If unset, or set to 0 it is disabled. It is intended as a resource to\n> help avoid the OOM killer. A backend request that would push the total\n> over the limit will be denied with an out of memory error causing that\n> backends current query/transaction to fail.\n\nI'm not sure we are choosing the right victims here (aka the ones that \nare doing the request that will push the total over the limit).\n\nImagine an extreme case where a single backend consumes say 99% of the \nlimit, shouldn't it be the one to be \"punished\"? (and somehow forced to \ngive the memory back).\n\nThe problem that i see with the current approach is that a \"bad\" backend \ncould impact all the others and continue to do so.\n\nwhat about punishing say the highest consumer , what do you think? (just \nspeaking about the general idea here, not about the implementation)\n\nRegards,\n\n-- \n\nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 09:30:21 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Thu, 1 Sept 2022 at 04:52, Reid Thompson\n<reid.thompson@crunchydata.com> wrote:\n> Add the ability to limit the amount of memory that can be allocated to\n> backends.\n\nAre you aware that relcache entries are stored in backend local memory\nand that once we've added a relcache entry for a relation that we have\nno current code which attempts to reduce the memory consumption used\nby cache entries when there's memory pressure?\n\nIt seems to me that if we had this feature as you propose that a\nbackend could hit the limit and stay there just from the memory\nrequirements of the relation cache after some number of tables have\nbeen accessed from the given backend. It's not hard to imagine a\nsituation where the palloc() would start to fail during parse, which\nmight make it quite infuriating for anyone trying to do something\nlike:\n\nSET max_total_backend_memory TO 0;\n\nor\n\nALTER SYSTEM SET max_total_backend_memory TO 0;\n\nI think a better solution to this problem would be to have \"memory\ngrants\", where we configure some amount of \"pool\" memory that backends\nare allowed to use for queries. The planner would have to add the\nexpected number of work_mem that the given query is expected to use\nand before that query starts, the executor would have to \"checkout\"\nthat amount of memory from the pool and return it when finished. If\nthere is not enough memory in the pool then the query would have to\nwait until enough memory is available. This creates a deadlocking\nhazard that the deadlock detector would need to be made aware of.\n\nI know Thomas Munro has mentioned this \"memory grant\" or \"memory pool\"\nfeature to me previously and I think he even has some work in progress\ncode for it. It's a very tricky problem, however, as aside from the\ndeadlocking issue, it requires working out how much memory a given\nplan will use concurrently. 
That's not as simple as counting the nodes\nthat use work_mem and summing those up.\n\nThere is some discussion about the feature in [1]. I was unable to\nfind what Thomas mentioned on the list about this. I've included him\nhere in case he has any extra information to share.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/20220713222342.GE18011%40telsasoft.com#b4f526aa8f2c893567c1ecf069f9e6c7\n\n\n",
"msg_date": "Fri, 2 Sep 2022 20:52:54 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Wed, 2022-08-31 at 12:34 -0500, Justin Pryzby wrote:\n> You should name the patches with different prefixes, like\n> 001,002,003 Otherwise, cfbot may try to apply them in the wrong\n> order.\n> git format-patch is the usual tool for that.\n\nThanks for the pointer. My experience with git in the past has been\nminimal and basic.\n\n> > + Specifies a limit to the amount of memory (MB) that may be\n> > allocated to\n> \n> MB are just the default unit, right ?\n> The user should be allowed to write max_total_backend_memory='2GB'\n\nCorrect. Default units are MB. Other unit types are converted to MB.\n\n> > + causing that backends current query/transaction to fail.\n> \n> backend's\n> > + allocated exceeding the limit. Further requests will not\n> \n> allocated, and exceed the limit\n> \n> > + if (MyAuxProcType != NotAnAuxProcess)\n> The double negative is confusing, so could use a comment.\n\n> > + elog(WARNING,\n> I think it should be ereport() rather than elog(), which is\n> internal-only, and not-translated.\n\nCorrected/added the above items. Attached patches with the corrections.\n\n> > + 0, 0, INT_MAX,\n> > + NULL, NULL, NULL\n> I think this needs a maximum like INT_MAX/1024/1024\n\nIs this noting that we'd set a ceiling of 2048MB?\n\n> > + for (i = 1; i <= NumBackendStatSlots; i++)\n> > + {\n> \n> It's looping over every backend for each allocation.\n> Do you know if there's any performance impact of that ?\n\nI'm not very familiar with how to test performance impact, I'm open to
I have performed the below pgbench tests and noted the basic\ntps differences in the table.\n\nTest 1:\nbranch master\nCFLAGS=\"-I/usr/include/python3.8/ \" /home/rthompso/src/git/postgres/configure --silent --prefix=/home/rthompso/src/git/postgres/install/master --with-openssl --with-tcl --with-tclconfig=/usr/lib/tcl8.6 --with-perl --with-libxml --with-libxslt --with-python --with-gssapi --with-systemd --with-ldap --enable-nls\nmake -s -j12 && make -s install\ninitdb\ndefault postgresql.conf settings\ninit pgbench pgbench -U rthompso -p 5433 -h localhost -i -s 50 testpgbench\n10 iterations\nfor ctr in {1..10}; do { time pgbench -p 5433 -h localhost -c 10 -j 10 -t 50000 testpgbench; } 2>&1 | tee -a pgstatsResultsNoLimitSet; done\n\nTest 2:\nbranch pg-stat-activity-backend-memory-allocated\nCFLAGS=\"-I/usr/include/python3.8/ \" /home/rthompso/src/git/postgres/configure --silent --prefix=/home/rthompso/src/git/postgres/install/pg-stats-memory/ --with-openssl --with-tcl --with-tclconfig=/usr/lib/tcl8.6 --with-perl --with-libxml --with-libxslt --with-python --with-gssapi --with-systemd --with-ldap --enable-nls\nmake -s -j12 && make -s install\ninitdb \ndefault postgresql.conf settings\ninit pgbench pgbench -U rthompso -p 5433 -h localhost -i -s 50\ntestpgbench\n10 iterations\nfor ctr in {1..10}; do { time pgbench -p 5433 -h localhost -c 10 -j 10 -t 50000 testpgbench; } 2>&1 | tee -a pgstatsResultsPg-stats-memory; done\n\nTest 3:\nbranch dev-max-memory\nCFLAGS=\"-I/usr/include/python3.8/ \" /home/rthompso/src/git/postgres/configure --silent --prefix=/home/rthompso/src/git/postgres/install/dev-max-memory/ --with-openssl --with-tcl --with-tclconfig=/usr/lib/tcl8.6 --with-perl --with-libxml --with-libxslt --with-python --with-gssapi --with-systemd --with-ldap --enable-nls\nmake -s -j12 && make -s install\ninitdb \ndefault postgresql.conf settings\ninit pgbench pgbench -U rthompso -p 5433 -h localhost -i -s 50 testpgbench\n10 iterations\nfor ctr in {1..10}; do { time 
pgbench -p 5433 -h localhost -c 10 -j 10 -t 50000 testpgbench; } 2>&1 | tee -a pgstatsResultsDev-max-memory; done\n\nTest 4:\nbranch dev-max-memory\nCFLAGS=\"-I/usr/include/python3.8/ \" /home/rthompso/src/git/postgres/configure --silent --prefix=/home/rthompso/src/git/postgres/install/dev-max-memory/ --with-openssl --with-tcl --with-tclconfig=/usr/lib/tcl8.6 --with-perl --with-libxml --with-libxslt --with-python --with-gssapi --with-systemd --with-ldap --enable-nls\nmake -s -j12 && make -s install\ninitdb \nnon-default postgresql.conf setting for max_total_backend_memory = 100MB\ninit pgbench pgbench -U rthompso -p 5433 -h localhost -i -s 50 testpgbench\n10 iterations\nfor ctr in {1..10}; do { time pgbench -p 5433 -h localhost -c 10 -j 10 -t 50000 testpgbench; } 2>&1 | tee -a pgstatsResultsDev-max-memory100MB; done\n\nLaptop\n11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz 8 Cores 16 threads\n32GB RAM\nSSD drive\n\nAverages from the 10 runs and tps difference over the 10 runs\n|------------------+------------------+------------------------+-------------------+------------------+-------------------+---------------+------------------|\n| Test Run | Master | Track Memory Allocated | Diff from Master | Max Mem off | Diff from Master | Max Mem 100MB | Diff from Master |\n| Set 1 | Test 1 | Test 2 | | Test 3 | | Test 4 | |\n| latency average | 2.43390909090909 | 2.44327272727273 | | 2.44381818181818 | | 2.6843 | |\n| tps inc conn est | 3398.99291372727 | 3385.40984336364 | -13.583070363637 | 3385.08184309091 | -13.9110706363631 | 3729.5363413 | 330.54342757273 |\n| tps exc conn est | 3399.12185727273 | 3385.52527490909 | -13.5965823636366 | 3385.22100872727 | -13.9008485454547 | 3729.7097607 | 330.58790342727 |\n|------------------+------------------+------------------------+-------------------+------------------+-------------------+---------------+------------------|\n| Set 2 | | | | | | | |\n| latency average | 2.691 | 2.6895 | 2 | 2.69 | 3 | 2.6827 | 4 |\n| tps inc 
conn est | 3719.56 | 3721.7587106 | 2.1987106 | 3720.3 | .74 | 3730.86 | 11.30 |\n| tps exc conn est | 3719.71 | 3721.9268465 | 2.2168465 | 3720.47 | .76 | 3731.02 | 11.31 |\n|------------------+------------------+------------------------+-------------------+------------------+-------------------+---------------+------------------|\n\n\n> I think it may be necessary to track the current allocation size in\n> shared memory (with atomic increments?). Maybe decrements would need\n> to\n> be exactly accounted for, or otherwise Assert() that the value is not\n> negative. I don't know how expensive it'd be to have conditionals\n> for\n> each decrement, but maybe the value would only be decremented at\n> strategic times, like at transaction commit or backend shutdown.\n> \n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Sat, 03 Sep 2022 23:40:03 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Thu, 2022-09-01 at 11:48 +0900, Kyotaro Horiguchi wrote:\n> > \n> > The patch seems to limit both of memory-context allocations and DSM\n> > allocations happen on a specific process by the same budget. In the\n> > fist place I don't think it's sensible to cap the amount of DSM\n> > allocations by per-process budget.\n> > \n> > DSM is used by pgstats subsystem. There can be cases where pgstat\n> > complains for denial of DSM allocation after the budget has been\n> > exhausted by memory-context allocations, or every command complains\n> > for denial of memory-context allocation after once the per-process\n> > budget is exhausted by DSM allocations. That doesn't seem\n> > reasonable.\n> > regards.\n\nIt's intended as a mechanism for administrators to limit total\npostgresql memory consumption to avoid the OOM killer causing a crash\nand restart, or to ensure that resources are available for other\nprocesses on shared hosts, etc. It limits all types of allocations in\norder to accomplish this. Our documentation will note this, so that\nadministrators that have the need to set it are aware that it can\naffect all non-auxiliary processes and what the effect is.\n\n\n\n\n",
"msg_date": "Tue, 06 Sep 2022 20:17:25 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Fri, 2022-09-02 at 09:30 +0200, Drouvot, Bertrand wrote:\n> Hi,\n> \n> I'm not sure we are choosing the right victims here (aka the ones\n> that are doing the request that will push the total over the limit).\n>\n> Imagine an extreme case where a single backend consumes say 99% of\n> the limit, shouldn't it be the one to be \"punished\"? (and somehow forced\n> to give the memory back).\n> \n> The problem that i see with the current approach is that a \"bad\"\n> backend could impact all the others and continue to do so.\n> \n> what about punishing say the highest consumer , what do you think?\n> (just speaking about the general idea here, not about the implementation)\n\nInitially, we believe that punishing the detector is reasonable if we\ncan help administrators avoid the OOM killer/resource starvation. But\nwe can and should expand on this idea.\n\nAnother thought is, rather than just failing the query/transaction we\nhave the affected backend do a clean exit, freeing all it's resources.\n\n\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com\n\n\n\n\n\n",
"msg_date": "Tue, 06 Sep 2022 20:25:06 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Greetings,\n\n* David Rowley (dgrowleyml@gmail.com) wrote:\n> On Thu, 1 Sept 2022 at 04:52, Reid Thompson\n> <reid.thompson@crunchydata.com> wrote:\n> > Add the ability to limit the amount of memory that can be allocated to\n> > backends.\n> \n> Are you aware that relcache entries are stored in backend local memory\n> and that once we've added a relcache entry for a relation that we have\n> no current code which attempts to reduce the memory consumption used\n> by cache entries when there's memory pressure?\n\nShort answer to this is yes, and that's an issue, but it isn't this\npatch's problem to deal with- that's an issue that the relcache system\nneeds to be changed to address.\n\n> It seems to me that if we had this feature as you propose that a\n> backend could hit the limit and stay there just from the memory\n> requirements of the relation cache after some number of tables have\n> been accessed from the given backend. It's not hard to imagine a\n> situation where the palloc() would start to fail during parse, which\n> might make it quite infuriating for anyone trying to do something\n> like:\n\nAgreed that this could happen but I don't imagine it to be super likely-\nand even if it does, this is probably a better position to be in as the\nbackend could then be disconnected from and would then go away and its\nmemory free'd, unlike the current OOM-killer situation where we crash\nand go through recovery. We should note this in the documentation\nthough, sure, so that administrators understand how this can occur and\ncan take action to address it.\n\n> I think a better solution to this problem would be to have \"memory\n> grants\", where we configure some amount of \"pool\" memory that backends\n> are allowed to use for queries. 
The planner would have to add the\n> expected number of work_mem that the given query is expected to use\n> and before that query starts, the executor would have to \"checkout\"\n> that amount of memory from the pool and return it when finished. If\n> there is not enough memory in the pool then the query would have to\n> wait until enough memory is available. This creates a deadlocking\n> hazard that the deadlock detector would need to be made aware of.\n\nSure, that also sounds great and a query acceptance system would be\nwonderful. If someone is working on that with an expectation of it\nlanding before v16, great. Otherwise, I don't see it as relevant to\nthe question about if we should include this feature or not, and I'm not\neven sure that we'd refuse this feature even if we already had an\nacceptance system as a stop-gap should we guess wrong and not realize it\nuntil it's too late.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 9 Sep 2022 12:48:56 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Sat, Sep 03, 2022 at 11:40:03PM -0400, Reid Thompson wrote:\n> > > +               0, 0, INT_MAX,\n> > > +               NULL, NULL, NULL\n> > I think this needs a maximum like INT_MAX/1024/1024\n> \n> Is this noting that we'd set a ceiling of 2048MB?\n\nThe reason is that you're later multiplying it by 1024*1024, so you need\nto limit it to avoid overflowing. Compare with\nmin_dynamic_shared_memory, Log_RotationSize, maintenance_work_mem,\nautovacuum_work_mem.\n\ntypo: Explicitely\n\n+ errmsg(\"request will exceed postgresql.conf defined max_total_backend_memory limit (%lu > %lu)\",\n\nI wouldn't mention postgresql.conf - it could be in\npostgresql.auto.conf, or an include file, or a -c parameter.\nSuggest: allocation would exceed max_total_backend_memory limit...\n\n+ ereport(LOG, errmsg(\"decrease reduces reported backend memory allocated below zero; setting reported to 0\"));\n\nSuggest: deallocation would decrease backend memory below zero;\n\n+ {\"max_total_backend_memory\", PGC_SIGHUP, RESOURCES_MEM, \n\nShould this be PGC_SU_BACKEND to allow a superuser to set a higher\nlimit (or no limit)?\n\nThere's compilation warning under mingw cross compile due to\nsizeof(long). See d914eb347 and other recent commits which I guess is\nthe current way to handle this.\nhttp://cfbot.cputube.org/reid-thompson.html\n\nFor performance test, you'd want to check what happens with a large\nnumber of max_connections (and maybe a large number of clients). TPS\nisn't the only thing that matters. For example, a utility command might\nsometimes do a lot of allocations (or deallocations), or a\n\"parameterized nested loop\" may loop over many outer tuples and\nreset for each. There's also a lot of places that reset to a\n\"per-tuple\" context. I started looking at its performance, but nothing\nto show yet.\n\nWould you keep people copied on your replies (\"reply all\") ? Otherwise\nI (at least) may miss them. 
I think that's what's typical on these\nlists (and the list tool is smart enough not to send duplicates to\npeople who are direct recipients).\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 9 Sep 2022 12:14:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Fri, 2022-09-09 at 12:14 -0500, Justin Pryzby wrote:\n> On Sat, Sep 03, 2022 at 11:40:03PM -0400, Reid Thompson wrote:\n> > > > + 0, 0, INT_MAX,\n> > > > + NULL, NULL, NULL\n> > > I think this needs a maximum like INT_MAX/1024/1024\n> > \n> > Is this noting that we'd set a ceiling of 2048MB?\n> \n> The reason is that you're later multiplying it by 1024*1024, so you\n> need\n> to limit it to avoid overflowing. Compare with\n> min_dynamic_shared_memory, Log_RotationSize, maintenance_work_mem,\n> autovacuum_work_mem.\n\nWhat I originally attempted to implement is:\nGUC \"max_total_backend_memory\" max value as INT_MAX = 2147483647 MB\n(2251799812636672 bytes). And the other variables and comparisons as\nbytes represented as uint64 to avoid overflow.\n\nIs this invalid?\n\n> typo: Explicitely\n\ncorrected\n\n> + errmsg(\"request will exceed postgresql.conf\n> defined max_total_backend_memory limit (%lu > %lu)\",\n> \n> I wouldn't mention postgresql.conf - it could be in\n> postgresql.auto.conf, or an include file, or a -c parameter.\n> Suggest: allocation would exceed max_total_backend_memory limit...\n>\n\nupdated\n\n> \n> + ereport(LOG, errmsg(\"decrease reduces reported\n> backend memory allocated below zero; setting reported to 0\"));\n> \n> Suggest: deallocation would decrease backend memory below zero;\n\nupdated\n\n> + {\"max_total_backend_memory\", PGC_SIGHUP,\n> RESOURCES_MEM, \n> \n> \n> \n> Should this be PGC_SU_BACKEND to allow a superuser to set a higher\n> limit (or no limit)?\n\nSounds good to me. I'll update to that.\nWould PGC_SUSET be too open?\n\n> There's compilation warning under mingw cross compile due to\n> sizeof(long). 
See d914eb347 and other recent commits which I guess\n> is\n> the current way to handle this.\n> http://cfbot.cputube.org/reid-thompson.html\n\nupdated %lu to %llu and changed cast from uint64 to \nunsigned long long in the ereport call\n\n> For performance test, you'd want to check what happens with a large\n> number of max_connections (and maybe a large number of clients). TPS\n> isn't the only thing that matters. For example, a utility command\n> might\n> sometimes do a lot of allocations (or deallocations), or a\n> \"parameterized nested loop\" may loop over over many outer tuples and\n> reset for each. There's also a lot of places that reset to a\n> \"per-tuple\" context. I started looking at its performance, but\n> nothing\n> to show yet.\n\nThanks\n\n> Would you keep people copied on your replies (\"reply all\") ? \n> Otherwise\n> I (at least) may miss them. I think that's what's typical on these\n> lists (and the list tool is smart enough not to send duplicates to\n> people who are direct recipients).\n\nOk - will do, thanks.\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Mon, 12 Sep 2022 12:25:25 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 8:30 PM Reid Thompson <reid.thompson@crunchydata.com>\nwrote:\n\n> On Fri, 2022-09-09 at 12:14 -0500, Justin Pryzby wrote:\n> > On Sat, Sep 03, 2022 at 11:40:03PM -0400, Reid Thompson wrote:\n> > > > > + 0, 0, INT_MAX,\n> > > > > + NULL, NULL, NULL\n> > > > I think this needs a maximum like INT_MAX/1024/1024\n> > >\n> > > Is this noting that we'd set a ceiling of 2048MB?\n> >\n> > The reason is that you're later multiplying it by 1024*1024, so you\n> > need\n> > to limit it to avoid overflowing. Compare with\n> > min_dynamic_shared_memory, Log_RotationSize, maintenance_work_mem,\n> > autovacuum_work_mem.\n>\n> What I originally attempted to implement is:\n> GUC \"max_total_backend_memory\" max value as INT_MAX = 2147483647 MB\n> (2251799812636672 bytes). And the other variables and comparisons as\n> bytes represented as uint64 to avoid overflow.\n>\n> Is this invalid?\n>\n> > typo: Explicitely\n>\n> corrected\n>\n> > + errmsg(\"request will exceed postgresql.conf\n> > defined max_total_backend_memory limit (%lu > %lu)\",\n> >\n> > I wouldn't mention postgresql.conf - it could be in\n> > postgresql.auto.conf, or an include file, or a -c parameter.\n> > Suggest: allocation would exceed max_total_backend_memory limit...\n> >\n>\n> updated\n>\n> >\n> > + ereport(LOG, errmsg(\"decrease reduces reported\n> > backend memory allocated below zero; setting reported to 0\"));\n> >\n> > Suggest: deallocation would decrease backend memory below zero;\n>\n> updated\n>\n> > + {\"max_total_backend_memory\", PGC_SIGHUP,\n> > RESOURCES_MEM,\n> >\n> >\n> >\n> > Should this be PGC_SU_BACKEND to allow a superuser to set a higher\n> > limit (or no limit)?\n>\n> Sounds good to me. I'll update to that.\n> Would PGC_SUSET be too open?\n>\n> > There's compilation warning under mingw cross compile due to\n> > sizeof(long). 
See d914eb347 and other recent commits which I guess\n> > is\n> > the current way to handle this.\n> > http://cfbot.cputube.org/reid-thompson.html\n>\n> updated %lu to %llu and changed cast from uint64 to\n> unsigned long long in the ereport call\n>\n> > For performance test, you'd want to check what happens with a large\n> > number of max_connections (and maybe a large number of clients). TPS\n> > isn't the only thing that matters. For example, a utility command\n> > might\n> > sometimes do a lot of allocations (or deallocations), or a\n> > \"parameterized nested loop\" may loop over over many outer tuples and\n> > reset for each. There's also a lot of places that reset to a\n> > \"per-tuple\" context. I started looking at its performance, but\n> > nothing\n> > to show yet.\n>\n> Thanks\n>\n> > Would you keep people copied on your replies (\"reply all\") ?\n> > Otherwise\n> > I (at least) may miss them. I think that's what's typical on these\n> > lists (and the list tool is smart enough not to send duplicates to\n> > people who are direct recipients).\n>\n> Ok - will do, thanks.\n>\n> --\n> Reid Thompson\n> Senior Software Engineer\n> Crunchy Data, Inc.\n>\n> reid.thompson@crunchydata.com\n> www.crunchydata.com\n>\n>\n> The patch does not apply; please rebase the patch.\n\npatching file src/backend/utils/misc/guc.c\nHunk #1 FAILED at 3664.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/utils/misc/guc.c.rej\n\npatching file src/backend/utils/misc/postgresql.conf.sample\n\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 15 Sep 2022 12:07:25 +0400",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Thu, 2022-09-15 at 12:07 +0400, Ibrar Ahmed wrote:\n> \n> The patch does not apply; please rebase the patch.\n> \n> patching file src/backend/utils/misc/guc.c\n> Hunk #1 FAILED at 3664.\n> 1 out of 1 hunk FAILED -- saving rejects to file\n> src/backend/utils/misc/guc.c.rej \n> \n> patching file src/backend/utils/misc/postgresql.conf.sample\n> \n\nrebased patches attached.\n\nThanks,\nReid",
"msg_date": "Thu, 15 Sep 2022 10:58:19 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hello Reid,\n\n\ncould you rebase the patch again? It doesn't apply currently (http://cfbot.cputube.org/patch_40_3867.log). Thanks!\n\n\nYou mention, that you want to prevent the compiler from getting cute.\n\nI don't think this comments are exactly helpful in the current state. I think probably fine to just omit them.\n\n\nI don't understand the purpose of the result variable in exceeds_max_total_bkend_mem. What purpose does it serve?\n\n\nI really like the simplicity of the suggestion here to prevent oom.\n\nI intent to play around with a lot of backends, once I get a rebased patch.\n\n\nRegards\n\nArne\n\n\n________________________________\nFrom: Reid Thompson <reid.thompson@crunchydata.com>\nSent: Thursday, September 15, 2022 4:58:19 PM\nTo: Ibrar Ahmed; pgsql-hackers@lists.postgresql.org\nCc: reid.thompson@crunchydata.com; Justin Pryzby\nSubject: Re: Add the ability to limit the amount of memory that can be allocated to backends.\n\nOn Thu, 2022-09-15 at 12:07 +0400, Ibrar Ahmed wrote:\n>\n> The patch does not apply; please rebase the patch.\n>\n> patching file src/backend/utils/misc/guc.c\n> Hunk #1 FAILED at 3664.\n> 1 out of 1 hunk FAILED -- saving rejects to file\n> src/backend/utils/misc/guc.c.rej\n>\n> patching file src/backend/utils/misc/postgresql.conf.sample\n>\n\nrebased patches attached.\n\nThanks,\nReid\n\n\n\n\n\n\n\n\n\n\n\n\n\nHello Reid,\n\n\ncould you rebase the patch again? It doesn't apply currently (http://cfbot.cputube.org/patch_40_3867.log). Thanks!\n\n\nYou mention, that you want to prevent the compiler from getting cute.\n\nI don't think this comments are exactly helpful in the current state. I think probably fine to just omit them.\n\n\nI don't understand the purpose of the result variable in exceeds_max_total_bkend_mem. What purpose does it serve?\n\n\n\nI really like the simplicity of the suggestion here to prevent oom.\n\n\nI intent to play around with a lot of backends, once I get a rebased patch. 
\n\n\nRegards\nArne\n\n\n\n\nFrom: Reid Thompson <reid.thompson@crunchydata.com>\nSent: Thursday, September 15, 2022 4:58:19 PM\nTo: Ibrar Ahmed; pgsql-hackers@lists.postgresql.org\nCc: reid.thompson@crunchydata.com; Justin Pryzby\nSubject: Re: Add the ability to limit the amount of memory that can be allocated to backends.\n \n\n\n\nOn Thu, 2022-09-15 at 12:07 +0400, Ibrar Ahmed wrote:\n> \n> The patch does not apply; please rebase the patch.\n> \n> patching file src/backend/utils/misc/guc.c\n> Hunk #1 FAILED at 3664.\n> 1 out of 1 hunk FAILED -- saving rejects to file\n> src/backend/utils/misc/guc.c.rej \n> \n> patching file src/backend/utils/misc/postgresql.conf.sample\n> \n\nrebased patches attached.\n\nThanks,\nReid",
"msg_date": "Mon, 24 Oct 2022 15:27:51 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi Arne,\n\nOn Mon, 2022-10-24 at 15:27 +0000, Arne Roland wrote:\n> Hello Reid,\n> \n> could you rebase the patch again? It doesn't apply currently\n> (http://cfbot.cputube.org/patch_40_3867.log). Thanks!\n\nrebased patches attached.\n\n> You mention, that you want to prevent the compiler from getting\n> cute.I don't think this comments are exactly helpful in the current\n> state. I think probably fine to just omit them.\n\nI attempted to follow previous convention when adding code and these\ncomments have been consistently applied throughout backend_status.c\nwhere a volatile pointer is being used.\n\n> I don't understand the purpose of the result variable in\n> exceeds_max_total_bkend_mem. What purpose does it serve?\n> \n> I really like the simplicity of the suggestion here to prevent oom.\n\nIf max_total_backend_memory is configured, exceeds_max_total_bkend_mem()\nwill return true if an allocation request will push total backend memory\nallocated over the configured value.\n\nexceeds_max_total_bkend_mem() is implemented in the various allocators\nalong the lines of\n...snip...\n /* Do not exceed maximum allowed memory allocation */\n if (exceeds_max_total_bkend_mem('new request size')) \n return NULL; \n...snip...\nDo not allocate the memory requested, return NULL instead. PG already\nhad code in place to handle NULL returns from allocation requests.\n\nThe allocation code in aset.c, slab.c, generation.c, dsm_impl.c utilizes\n exceeds_max_total_bkend_mem()\n\nmax_total_backend_memory (integer)\nSpecifies a limit to the amount of memory (MB) that may be allocated\nto backends in total (i.e. this is not a per user or per backend limit).\nIf unset, or set to 0 it is disabled. A backend request that would push\nthe total over the limit will be denied with an out of memory error\ncausing that backend's current query/transaction to fail. Due to the\ndynamic nature of memory allocations, this limit is not exact. 
If within\n1.5MB of the limit and two backends request 1MB each at the same time\nboth may be allocated, and exceed the limit. Further requests will not\nbe allocated until dropping below the limit. Keep this in mind when\nsetting this value. This limit does not affect auxiliary backend\nprocesses. Backend memory allocations\n(backend_mem_allocated) are displayed in the pg_stat_activity view.\n\n> I intent to play around with a lot of backends, once I get a rebased\n> patch. \n> \n> Regards\n> Arne",
"msg_date": "Tue, 25 Oct 2022 11:49:03 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Tue, 2022-10-25 at 11:49 -0400, Reid Thompson wrote:\n> Hi Arne,\n> \n> On Mon, 2022-10-24 at 15:27 +0000, Arne Roland wrote:\n> > Hello Reid,\n> > \n> > could you rebase the patch again? It doesn't apply currently\n> > (http://cfbot.cputube.org/patch_40_3867.log). Thanks!\n> \n> rebased patches attached.\n\nRebased to current. Add a couple changes per conversation with D\nChristensen (include units in field name, group field with backend_xid\nand backend_xmin fields in pg_stat_activity view, rather than between\nquery_id and query)\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Thu, 03 Nov 2022 11:48:50 -0400",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Thu, 2022-11-03 at 11:48 -0400, Reid Thompson wrote:\n> On Tue, 2022-10-25 at 11:49 -0400, Reid Thompson wrote:\n>\n> Rebased to current. Add a couple changes per conversation with D\n> Christensen (include units in field name, group field with\n> backend_xid\n> and backend_xmin fields in pg_stat_activity view, rather than between\n> query_id and query)\n> \n\nrebased/patched to current master && current pg-stat-activity-backend-memory-allocated \n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Sat, 26 Nov 2022 22:22:15 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\nOn 2022-11-26 22:22:15 -0500, Reid Thompson wrote:\n> rebased/patched to current master && current pg-stat-activity-backend-memory-allocated\n\nThis version fails to build with msvc, and builds with warnings on other\nplatforms.\nhttps://cirrus-ci.com/build/5410696721072128\nmsvc:\n[20:26:51.286] c:\\cirrus\\src\\include\\utils/backend_status.h(40): error C2059: syntax error: 'constant'\n\nmingw cross:\n[20:26:26.358] from /usr/share/mingw-w64/include/winsock2.h:23,\n[20:26:26.358] from ../../src/include/port/win32_port.h:60,\n[20:26:26.358] from ../../src/include/port.h:24,\n[20:26:26.358] from ../../src/include/c.h:1306,\n[20:26:26.358] from ../../src/include/postgres.h:47,\n[20:26:26.358] from controldata_utils.c:18:\n[20:26:26.358] ../../src/include/utils/backend_status.h:40:2: error: expected identifier before numeric constant\n[20:26:26.358] 40 | IGNORE,\n[20:26:26.358] | ^~~~~~\n[20:26:26.358] In file included from ../../src/include/postgres.h:48,\n[20:26:26.358] from controldata_utils.c:18:\n[20:26:26.358] ../../src/include/utils/backend_status.h: In function ‘pgstat_report_allocated_bytes’:\n[20:26:26.358] ../../src/include/utils/backend_status.h:365:12: error: format ‘%ld’ expects argument of type ‘long int’, but argument 3 has type ‘uint64’ {aka ‘long long unsigned int’} [-Werror=format=]\n[20:26:26.358] 365 | errmsg(\"Backend %d deallocated %ld bytes, exceeding the %ld bytes it is currently reporting allocated. Setting reported to 0.\",\n[20:26:26.358] | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n[20:26:26.358] 366 | MyProcPid, allocated_bytes, *my_allocated_bytes));\n[20:26:26.358] | ~~~~~~~~~~~~~~~\n[20:26:26.358] | |\n[20:26:26.358] | uint64 {aka long long unsigned int}\n\nDue to windows having long be 32bit, you need to use %lld. 
Our custom to deal\nwith that is to cast the argument to errmsg as long long unsigned and use\n%llu.\n\nBtw, given that the argument is uint64, it doesn't seem correct to use %ld,\nthat's signed. Not that it's going to matter, but ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 10:32:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Tue, 2022-12-06 at 10:32 -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2022-11-26 22:22:15 -0500, Reid Thompson wrote:\n> > rebased/patched to current master && current pg-stat-activity-\n> > backend-memory-allocated\n> \n> This version fails to build with msvc, and builds with warnings on\n> other\n> platforms.\n> https://cirrus-ci.com/build/5410696721072128\n> msvc:\n> \n> Andres Freund\n\nupdated patches\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Fri, 09 Dec 2022 10:05:45 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Fri, 9 Dec 2022 at 20:41, Reid Thompson\n<reid.thompson@crunchydata.com> wrote:\n>\n> On Tue, 2022-12-06 at 10:32 -0800, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2022-11-26 22:22:15 -0500, Reid Thompson wrote:\n> > > rebased/patched to current master && current pg-stat-activity-\n> > > backend-memory-allocated\n> >\n> > This version fails to build with msvc, and builds with warnings on\n> > other\n> > platforms.\n> > https://cirrus-ci.com/build/5410696721072128\n> > msvc:\n> >\n> > Andres Freund\n>\n> updated patches\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n92957ed98c5c565362ce665266132a7f08f6b0c0 ===\n=== applying patch\n./0001-Add-tracking-of-backend-memory-allocated-to-pg_stat_.patch\n...\npatching file src/backend/utils/mmgr/slab.c\nHunk #1 succeeded at 69 (offset 16 lines).\nHunk #2 succeeded at 414 (offset 175 lines).\nHunk #3 succeeded at 436 with fuzz 2 (offset 176 lines).\nHunk #4 FAILED at 286.\nHunk #5 succeeded at 488 (offset 186 lines).\nHunk #6 FAILED at 381.\nHunk #7 FAILED at 554.\n3 out of 7 hunks FAILED -- saving rejects to file\nsrc/backend/utils/mmgr/slab.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3867.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:22:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Tue, 2023-01-03 at 16:22 +0530, vignesh C wrote:\n> ....\n> The patch does not apply on top of HEAD as in [1], please post a\n> rebased patch:\n> ...\n> Regards,\n> Vignesh\n>\n\nAttached is rebased patch, with some updates related to committed changes.\n\nThanks,\nReid\n\n-- \nReid Thompson\nSenior Software Engineer\nCrunchy Data, Inc.\n\nreid.thompson@crunchydata.com\nwww.crunchydata.com",
"msg_date": "Thu, 05 Jan 2023 13:44:20 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-05 13:44:20 -0500, Reid Thompson wrote:\n> From 0a6b152e0559a250dddd33bd7d43eb0959432e0d Mon Sep 17 00:00:00 2001\n> From: Reid Thompson <jreidthompson@nc.rr.com>\n> Date: Thu, 11 Aug 2022 12:01:25 -0400\n> Subject: [PATCH 1/2] Add tracking of backend memory allocated to\n> pg_stat_activity\n> \n> This new field displays the current bytes of memory allocated to the\n> backend process. It is updated as memory for the process is\n> malloc'd/free'd. Memory allocated to items on the freelist is included in\n> the displayed value.\n\nIt doesn't actually malloc/free. It tracks palloc/pfree.\n\n\n> Dynamic shared memory allocations are included only in the value displayed\n> for the backend that created them, they are not included in the value for\n> backends that are attached to them to avoid double counting.\n\nAs mentioned before, I don't think accounting DSM this way makes sense.\n\n\n> --- a/src/backend/postmaster/autovacuum.c\n> +++ b/src/backend/postmaster/autovacuum.c\n> @@ -407,6 +407,9 @@ StartAutoVacLauncher(void)\n> \n> #ifndef EXEC_BACKEND\n> \t\tcase 0:\n> +\t\t\t/* Zero allocated bytes to avoid double counting parent allocation */\n> +\t\t\tpgstat_zero_my_allocated_bytes();\n> +\n> \t\t\t/* in postmaster child ... */\n> \t\t\tInitPostmasterChild();\n\n\n\n> @@ -1485,6 +1488,9 @@ StartAutoVacWorker(void)\n> \n> #ifndef EXEC_BACKEND\n> \t\tcase 0:\n> +\t\t\t/* Zero allocated bytes to avoid double counting parent allocation */\n> +\t\t\tpgstat_zero_my_allocated_bytes();\n> +\n> \t\t\t/* in postmaster child ... 
*/\n> \t\t\tInitPostmasterChild();\n> \n> diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c\n> index eac3450774..24278e5c18 100644\n> --- a/src/backend/postmaster/postmaster.c\n> +++ b/src/backend/postmaster/postmaster.c\n> @@ -4102,6 +4102,9 @@ BackendStartup(Port *port)\n> \t{\n> \t\tfree(bn);\n> \n> +\t\t/* Zero allocated bytes to avoid double counting parent allocation */\n> +\t\tpgstat_zero_my_allocated_bytes();\n> +\n> \t\t/* Detangle from postmaster */\n> \t\tInitPostmasterChild();\n\n\nIt doesn't at all seem right to call pgstat_zero_my_allocated_bytes() here,\nbefore even InitPostmasterChild() is called. Nor does it seem right to add the\ncall to so many places.\n\nNote that this is before we even delete postmaster's memory, see e.g.:\n\t/*\n\t * If the PostmasterContext is still around, recycle the space; we don't\n\t * need it anymore after InitPostgres completes. Note this does not trash\n\t * *MyProcPort, because ConnCreate() allocated that space with malloc()\n\t * ... else we'd need to copy the Port data first. Also, subsidiary data\n\t * such as the username isn't lost either; see ProcessStartupPacket().\n\t */\n\tif (PostmasterContext)\n\t{\n\t\tMemoryContextDelete(PostmasterContext);\n\t\tPostmasterContext = NULL;\n\t}\n\ncalling pgstat_zero_my_allocated_bytes() before we do this will lead to\nundercounting memory usage, afaict.\n\n\n> +/* Enum helper for reporting memory allocated bytes */\n> +enum allocation_direction\n> +{\n> +\tPG_ALLOC_DECREASE = -1,\n> +\tPG_ALLOC_IGNORE,\n> +\tPG_ALLOC_INCREASE,\n> +};\n\nWhat's the point of this?\n\n\n> +/* ----------\n> + * pgstat_report_allocated_bytes() -\n> + *\n> + * Called to report change in memory allocated for this backend.\n> + *\n> + * my_allocated_bytes initially points to local memory, making it safe to call\n> + * this before pgstats has been initialized. 
allocation_direction is a\n> + * positive/negative multiplier enum defined above.\n> + * ----------\n> + */\n> +static inline void\n> +pgstat_report_allocated_bytes(int64 allocated_bytes, int allocation_direction)\n\nI don't think this should take allocation_direction as a parameter, I'd make\nit two different functions.\n\n\n> +{\n> +\tuint64\t\ttemp;\n> +\n> +\t/*\n> +\t * Avoid *my_allocated_bytes unsigned integer overflow on\n> +\t * PG_ALLOC_DECREASE\n> +\t */\n> +\tif (allocation_direction == PG_ALLOC_DECREASE &&\n> +\t\tpg_sub_u64_overflow(*my_allocated_bytes, allocated_bytes, &temp))\n> +\t{\n> +\t\t*my_allocated_bytes = 0;\n> +\t\tereport(LOG,\n> +\t\t\t\terrmsg(\"Backend %d deallocated %lld bytes, exceeding the %llu bytes it is currently reporting allocated. Setting reported to 0.\",\n> +\t\t\t\t\t MyProcPid, (long long) allocated_bytes,\n> +\t\t\t\t\t (unsigned long long) *my_allocated_bytes));\n\nWe certainly shouldn't have an ereport in here. This stuff really needs to be\ncheap.\n\n\n> +\t}\n> +\telse\n> +\t\t*my_allocated_bytes += (allocated_bytes) * allocation_direction;\n\nSuperfluous parens?\n\n\n\n> +/* ----------\n> + * pgstat_get_all_memory_allocated() -\n> + *\n> + *\tReturn a uint64 representing the current shared memory allocated to all\n> + *\tbackends. 
This looks directly at the BackendStatusArray, and so will\n> + *\tprovide current information regardless of the age of our transaction's\n> + *\tsnapshot of the status array.\n> + *\tIn the future we will likely utilize additional values - perhaps limit\n> + *\tbackend allocation by user/role, etc.\n> + * ----------\n> + */\n> +uint64\n> +pgstat_get_all_backend_memory_allocated(void)\n> +{\n> +\tPgBackendStatus *beentry;\n> +\tint\t\t\ti;\n> +\tuint64\t\tall_memory_allocated = 0;\n> +\n> +\tbeentry = BackendStatusArray;\n> +\n> +\t/*\n> +\t * We probably shouldn't get here before shared memory has been set up,\n> +\t * but be safe.\n> +\t */\n> +\tif (beentry == NULL || BackendActivityBuffer == NULL)\n> +\t\treturn 0;\n> +\n> +\t/*\n> +\t * We include AUX procs in all backend memory calculation\n> +\t */\n> +\tfor (i = 1; i <= NumBackendStatSlots; i++)\n> +\t{\n> +\t\t/*\n> +\t\t * We use a volatile pointer here to ensure the compiler doesn't try\n> +\t\t * to get cute.\n> +\t\t */\n> +\t\tvolatile PgBackendStatus *vbeentry = beentry;\n> +\t\tbool\t\tfound;\n> +\t\tuint64\t\tallocated_bytes = 0;\n> +\n> +\t\tfor (;;)\n> +\t\t{\n> +\t\t\tint\t\t\tbefore_changecount;\n> +\t\t\tint\t\t\tafter_changecount;\n> +\n> +\t\t\tpgstat_begin_read_activity(vbeentry, before_changecount);\n> +\n> +\t\t\t/*\n> +\t\t\t * Ignore invalid entries, which may contain invalid data.\n> +\t\t\t * See pgstat_beshutdown_hook()\n> +\t\t\t */\n> +\t\t\tif (vbeentry->st_procpid > 0)\n> +\t\t\t\tallocated_bytes = vbeentry->allocated_bytes;\n> +\n> +\t\t\tpgstat_end_read_activity(vbeentry, after_changecount);\n> +\n> +\t\t\tif ((found = pgstat_read_activity_complete(before_changecount,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t after_changecount)))\n> +\t\t\t\tbreak;\n> +\n> +\t\t\t/* Make sure we can break out of loop if stuck... 
*/\n> +\t\t\tCHECK_FOR_INTERRUPTS();\n> +\t\t}\n> +\n> +\t\tif (found)\n> +\t\t\tall_memory_allocated += allocated_bytes;\n> +\n> +\t\tbeentry++;\n> +\t}\n> +\n> +\treturn all_memory_allocated;\n> +}\n> +\n> +/*\n> + * Determine if allocation request will exceed max backend memory allowed.\n> + * Do not apply to auxiliary processes.\n> + */\n> +bool\n> +exceeds_max_total_bkend_mem(uint64 allocation_request)\n> +{\n> +\tbool\t\tresult = false;\n> +\n> +\t/* Exclude auxiliary processes from the check */\n> +\tif (MyAuxProcType != NotAnAuxProcess)\n> +\t\treturn result;\n> +\n> +\t/* Convert max_total_bkend_mem to bytes for comparison */\n> +\tif (max_total_bkend_mem &&\n> +\t\tpgstat_get_all_backend_memory_allocated() +\n> +\t\tallocation_request > (uint64) max_total_bkend_mem * 1024 * 1024)\n> +\t{\n> +\t\t/*\n> +\t\t * Explicitly identify the OOM being a result of this configuration\n> +\t\t * parameter vs a system failure to allocate OOM.\n> +\t\t */\n> +\t\tereport(WARNING,\n> +\t\t\t\terrmsg(\"allocation would exceed max_total_memory limit (%llu > %llu)\",\n> +\t\t\t\t\t (unsigned long long) pgstat_get_all_backend_memory_allocated() +\n> +\t\t\t\t\t allocation_request, (unsigned long long) max_total_bkend_mem * 1024 * 1024));\n> +\n> +\t\tresult = true;\n> +\t}\n\nI think it's completely unfeasible to execute something as expensive as\npgstat_get_all_backend_memory_allocated() on every allocation. Like,\nseriously, no.\n\nAnd we absolutely definitely shouldn't just add CHECK_FOR_INTERRUPT() calls\ninto the middle of allocator code.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Jan 2023 18:31:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Mon, 2023-01-09 at 18:31 -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-05 13:44:20 -0500, Reid Thompson wrote:\n> > This new field displays the current bytes of memory allocated to the\n> > backend process. It is updated as memory for the process is\n> > malloc'd/free'd. Memory allocated to items on the freelist is included in\n> > the displayed value.\n> \n> It doesn't actually malloc/free. It tracks palloc/pfree.\n\nI will update the message\n\n> \n> > Dynamic shared memory allocations are included only in the value displayed\n> > for the backend that created them, they are not included in the value for\n> > backends that are attached to them to avoid double counting.\n> \n> As mentioned before, I don't think accounting DSM this way makes sense.\n\nUnderstood, previously you noted 'There are a few uses of DSMs that track\nshared resources, with the biggest likely being the stats for relations\netc'. I'd like to come up with a solution to address this; identifying the\nlong term allocations to shared state and accounting for them such that they\ndon't get 'lost' when the allocating backend exits. Any guidance or\ndirection would be appreciated. \n\n> > --- a/src/backend/postmaster/autovacuum.c\n> > +++ b/src/backend/postmaster/autovacuum.c\n> > @@ -407,6 +407,9 @@ StartAutoVacLauncher(void)\n> > \n> > #ifndef EXEC_BACKEND\n> > \t\tcase 0:\n> > +\t\t\t/* Zero allocated bytes to avoid double counting parent allocation */\n> > +\t\t\tpgstat_zero_my_allocated_bytes();\n> > +\n> > \t\t\t/* in postmaster child ... */\n> > \t\t\tInitPostmasterChild();\n> \n> \n> \n> > @@ -1485,6 +1488,9 @@ StartAutoVacWorker(void)\n> > \n> > #ifndef EXEC_BACKEND\n> > \t\tcase 0:\n> > +\t\t\t/* Zero allocated bytes to avoid double counting parent allocation */\n> > +\t\t\tpgstat_zero_my_allocated_bytes();\n> > +\n> > \t\t\t/* in postmaster child ... 
*/\n> > \t\t\tInitPostmasterChild();\n> > \n> > diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c\n> > index eac3450774..24278e5c18 100644\n> > --- a/src/backend/postmaster/postmaster.c\n> > +++ b/src/backend/postmaster/postmaster.c\n> > @@ -4102,6 +4102,9 @@ BackendStartup(Port *port)\n> > \t{\n> > \t\tfree(bn);\n> > \n> > +\t\t/* Zero allocated bytes to avoid double counting parent allocation */\n> > +\t\tpgstat_zero_my_allocated_bytes();\n> > +\n> > \t\t/* Detangle from postmaster */\n> > \t\tInitPostmasterChild();\n> \n> \n> It doesn't at all seem right to call pgstat_zero_my_allocated_bytes() here,\n> before even InitPostmasterChild() is called. Nor does it seem right to add the\n> call to so many places.\n> \n> Note that this is before we even delete postmaster's memory, see e.g.:\n> \t/*\n> \t * If the PostmasterContext is still around, recycle the space; we don't\n> \t * need it anymore after InitPostgres completes. Note this does not trash\n> \t * *MyProcPort, because ConnCreate() allocated that space with malloc()\n> \t * ... else we'd need to copy the Port data first. 
Also, subsidiary data\n> \t * such as the username isn't lost either; see ProcessStartupPacket().\n> \t */\n> \tif (PostmasterContext)\n> \t{\n> \t\tMemoryContextDelete(PostmasterContext);\n> \t\tPostmasterContext = NULL;\n> \t}\n> \n> calling pgstat_zero_my_allocated_bytes() before we do this will lead to\n> undercounting memory usage, afaict.\n> \n\nOK - I'll trace back through these and see if I can better locate and reduce the\nnumber of invocations.\n\n> > +/* Enum helper for reporting memory allocated bytes */\n> > +enum allocation_direction\n> > +{\n> > +\tPG_ALLOC_DECREASE = -1,\n> > +\tPG_ALLOC_IGNORE,\n> > +\tPG_ALLOC_INCREASE,\n> > +};\n> \n> What's the point of this?\n> \n> \n> > +/* ----------\n> > + * pgstat_report_allocated_bytes() -\n> > + *\n> > + * Called to report change in memory allocated for this backend.\n> > + *\n> > + * my_allocated_bytes initially points to local memory, making it safe to call\n> > + * this before pgstats has been initialized. allocation_direction is a\n> > + * positive/negative multiplier enum defined above.\n> > + * ----------\n> > + */\n> > +static inline void\n> > +pgstat_report_allocated_bytes(int64 allocated_bytes, int allocation_direction)\n> \n> I don't think this should take allocation_direction as a parameter, I'd make\n> it two different functions.\n\nOriginally it was two functions, a suggestion was made in the thread to\nmaybe consolidate them to a single function with a direction indicator,\nhence the above. I'm fine with converting it back to separate functions.\n\n> \n> > +\tif (allocation_direction == PG_ALLOC_DECREASE &&\n> > +\t\tpg_sub_u64_overflow(*my_allocated_bytes, allocated_bytes, &temp))\n> > +\t{\n> > +\t\t*my_allocated_bytes = 0;\n> > +\t\tereport(LOG,\n> > +\t\t\t\terrmsg(\"Backend %d deallocated %lld bytes, exceeding the %llu bytes it is currently reporting allocated. 
Setting reported to 0.\",\n> > +\t\t\t\t\t MyProcPid, (long long) allocated_bytes,\n> > +\t\t\t\t\t (unsigned long long) *my_allocated_bytes));\n> \n> We certainly shouldn't have an ereport in here. This stuff really needs to be\n> cheap.\n\nI will remove the ereport.\n\n> \n> > +\t\t*my_allocated_bytes += (allocated_bytes) * allocation_direction;\n> \n> Superfluous parens? \n\nI will remove these.\n\n> \n> \n> > +/* ----------\n> > + * pgstat_get_all_memory_allocated() -\n> > + *\n> > + *\tReturn a uint64 representing the current shared memory allocated to all\n> > + *\tbackends. This looks directly at the BackendStatusArray, and so will\n> > + *\tprovide current information regardless of the age of our transaction's\n> > + *\tsnapshot of the status array.\n> > + *\tIn the future we will likely utilize additional values - perhaps limit\n> > + *\tbackend allocation by user/role, etc.\n> > + * ----------\n> \n> I think it's completely unfeasible to execute something as expensive as\n> pgstat_get_all_backend_memory_allocated() on every allocation. Like,\n> seriously, no.\n\nOk. Do we check every nth allocation/try to implement a scheme of checking\nmore often as we we get closer to the declared max_total_bkend_mem?\n\n> \n> And we absolutely definitely shouldn't just add CHECK_FOR_INTERRUPT() calls\n> into the middle of allocator code.\n\nI'm open to guidance/suggestions/pointers to remedying these.\n\n> Greetings,\n> \n> Andres Freund\n> \n\n\nThanks,\nReid\n\n\n\n\n\n",
"msg_date": "Fri, 13 Jan 2023 09:15:10 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-13 09:15:10 -0500, Reid Thompson wrote:\n> On Mon, 2023-01-09 at 18:31 -0800, Andres Freund wrote:\n> > > Dynamic shared memory allocations are included only in the value displayed\n> > > for the backend that created them, they are not included in the value for\n> > > backends that are attached to them to avoid double counting.\n> >\n> > As mentioned before, I don't think accounting DSM this way makes sense.\n>\n> Understood, previously you noted 'There are a few uses of DSMs that track\n> shared resources, with the biggest likely being the stats for relations\n> etc'. I'd like to come up with a solution to address this; identifying the\n> long term allocations to shared state and accounting for them such that they\n> don't get 'lost' when the allocating backend exits. Any guidance or\n> direction would be appreciated.\n\nTracking it as backend memory usage doesn't seem helpful to me, particularly\nbecause some of it is for server wide data tracking (pgstats, some\ncaches). But that doesn't mean you couldn't track and report it\nseparately.\n\n\n> > > +/* ----------\n> > > + * pgstat_get_all_memory_allocated() -\n> > > + *\n> > > + *\tReturn a uint64 representing the current shared memory allocated to all\n> > > + *\tbackends. This looks directly at the BackendStatusArray, and so will\n> > > + *\tprovide current information regardless of the age of our transaction's\n> > > + *\tsnapshot of the status array.\n> > > + *\tIn the future we will likely utilize additional values - perhaps limit\n> > > + *\tbackend allocation by user/role, etc.\n> > > + * ----------\n> >\n> > I think it's completely unfeasible to execute something as expensive as\n> > pgstat_get_all_backend_memory_allocated() on every allocation. Like,\n> > seriously, no.\n>\n> Ok. 
Do we check every nth allocation/try to implement a scheme of checking\n> more often as we get closer to the declared max_total_bkend_mem?\n\nI think it's just not acceptable to do O(connections) work as part of\nsomething as critical as memory allocation. Even if amortized, imo.\n\nWhat you could do is to have a single, imprecise, shared counter for the total\nmemory allocation, and have a backend-local \"allowance\". When the allowance is\nused up, refill it from the shared counter (a single atomic op).\n\nBut honestly, I think we first need to have the accounting for a while before\nit makes sense to go for the memory limiting patch. And I doubt a single GUC\nwill suffice to make this usable.\n\n> > And we absolutely definitely shouldn't just add CHECK_FOR_INTERRUPT() calls\n> > into the middle of allocator code.\n>\n> I'm open to guidance/suggestions/pointers to remedying these.\n\nWell, just don't have the CHECK_FOR_INTERRUPT(). Nor the O(N) operation.\n\nYou also can't do the ereport(WARNING) there, as it itself allocates memory\nand could lead to recursion in some edge cases.\n\nGreetings,\n\nAndres Freund\n",
"msg_date": "Fri, 13 Jan 2023 10:04:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
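The single shared counter plus backend-local allowance that Andres describes above can be sketched in a few lines of C. This is a hypothetical illustration, not PostgreSQL code: the names (`charge_allocation`, `shared_bytes_available`, `REFILL_QTY`) are invented, and C11 atomics stand in for PostgreSQL's `pg_atomic_*` API. The point of the design is that the common allocation path touches only a plain local variable and costs no atomic operations at all.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Minimal sketch of the scheme described above: one imprecise shared
 * counter of bytes still available, plus a backend-local allowance.
 * All names are invented for illustration -- this is not PostgreSQL code.
 */
#define REFILL_QTY ((uint64_t) 1024 * 1024)     /* refill quantum: 1MB */

static _Atomic uint64_t shared_bytes_available; /* global budget */
static uint64_t local_allowance;                /* per-backend, no locking */

static bool
charge_allocation(uint64_t request)
{
    if (local_allowance >= request)
    {
        local_allowance -= request;             /* common case: no atomics */
        return true;
    }

    /* Allowance exhausted: refill from the shared counter in one CAS. */
    uint64_t want = request > REFILL_QTY ? request : REFILL_QTY;
    uint64_t avail = atomic_load(&shared_bytes_available);

    while (avail >= want)
    {
        if (atomic_compare_exchange_weak(&shared_bytes_available,
                                         &avail, avail - want))
        {
            /* Acquired 'want' new bytes, spend 'request' of them. */
            local_allowance += want - request;
            return true;
        }
        /* avail was reloaded by the failed CAS; retry */
    }
    return false;                               /* global budget exhausted */
}
```

Because the shared counter is only consulted once per `REFILL_QTY` bytes of allocation, the per-allocation overhead is amortized; the trade-off is that the global figure is imprecise by up to one allowance per backend.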
{
"msg_contents": "On Fri, 6 Jan 2023 at 00:19, Reid Thompson\n<reid.thompson@crunchydata.com> wrote:\n>\n> On Tue, 2023-01-03 at 16:22 +0530, vignesh C wrote:\n> > ....\n> > The patch does not apply on top of HEAD as in [1], please post a\n> > rebased patch:\n> > ...\n> > Regards,\n> > Vignesh\n> >\n>\n> Attached is rebased patch, with some updates related to committed changes.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n48880840f18cb75fcaecc77b5e7816b92c27157b ===\n=== applying patch\n./0001-Add-tracking-of-backend-memory-allocated-to-pg_stat_.patch\n....\npatching file src/test/regress/expected/rules.out\nHunk #2 FAILED at 1875.\nHunk #4 FAILED at 2090.\n2 out of 4 hunks FAILED -- saving rejects to file\nsrc/test/regress/expected/rules.out.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3867.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Jan 2023 16:50:44 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Thu, 2023-01-19 at 16:50 +0530, vignesh C wrote:\n> \n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> \n> Regards,\n> Vignesh\n\nrebased patch attached\n\nThanks,\nReid",
"msg_date": "Mon, 23 Jan 2023 10:48:38 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-23 10:48:38 -0500, Reid Thompson wrote:\n> On Thu, 2023-01-19 at 16:50 +0530, vignesh C wrote:\n> > \n> > The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> > \n> > Regards,\n> > Vignesh\n> \n> rebased patch attached\n\nI think it's basically still waiting on author, until the O(N) cost is gone\nfrom the overflow limit check.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 23 Jan 2023 12:31:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Mon, 2023-01-23 at 12:31 -0800, Andres Freund wrote:\n> Hi,\n> \n> I think it's basically still waiting on author, until the O(N) cost is gone\n> from the overflow limit check.\n> \n> Greetings,\n> \n> Andres Freund\n\nYes, just a rebase. There is still work to be done per earlier in the\nthread.\n\nI do want to follow up and note re palloc/pfree vs malloc/free that the\ntracking code (0001-Add-tracking-...) is not tracking palloc/pfree but is\nexplicitely tracking malloc/free. Not every palloc/pfree call executes the\ntracking code, only those where the path followed includes malloc() or\nfree(). Routine palloc() calls fulfilled from the context's\nfreelist/emptyblocks/freeblock/etc and pfree() calls not invoking free()\navoid the tracking code.\n\nThanks,\nReid\n\n\n\n",
"msg_date": "Thu, 26 Jan 2023 15:27:20 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
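Reid's distinction here — only paths that actually reach `malloc()`/`free()` execute the tracking code, while requests served from a context's freelist do not — can be illustrated with a toy allocator. This is a hypothetical sketch with invented names (`block_alloc`, `my_allocated_bytes`), not the actual PostgreSQL aset.c logic:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Toy sketch: the byte counter moves only at the malloc() boundary.
 * A freelist hit hands back an already-counted block, so routine
 * churn within a context adds no tracking overhead.
 * Invented names for illustration -- not PostgreSQL code.
 */
static uint64_t my_allocated_bytes;

typedef struct Block
{
    struct Block *next;
    size_t      size;
} Block;

static Block *freelist;

static Block *
block_alloc(size_t size)
{
    if (freelist && freelist->size >= size)
    {
        Block *b = freelist;            /* freelist hit: no accounting */
        freelist = b->next;
        return b;
    }

    Block *b = malloc(sizeof(Block) + size);    /* real malloc: account */
    if (b == NULL)
        return NULL;
    b->size = size;
    my_allocated_bytes += sizeof(Block) + size;
    return b;
}

static void
block_free(Block *b)
{
    b->next = freelist;                 /* back to the freelist: the bytes */
    freelist = b;                       /* stay counted, as Reid notes */
}
```

Allocate, free, and allocate again at the same size, and the counter moves exactly once — which is why freelist-served palloc() traffic avoids the tracking cost.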
{
"msg_contents": "Regarding the shared counter noted here,\n\n> What you could do is to have a single, imprecise, shared counter for the total\n> memory allocation, and have a backend-local \"allowance\". When the allowance is\n> used up, refill it from the shared counter (a single atomic op).\n\nIs there a preferred or suggested location to put variables like this?\nPerhaps a current variable to use as a reference?\n\nThanks,\nReid\n\n\n\n",
"msg_date": "Thu, 02 Feb 2023 10:41:55 -0500",
"msg_from": "Reid Thompson <reid.thompson@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-26 15:27:20 -0500, Reid Thompson wrote:\n> Yes, just a rebase. There is still work to be done per earlier in the\n> thread.\n\nThe tests recently started to fail:\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3867\n\n\n> I do want to follow up and note re palloc/pfree vs malloc/free that the\n> tracking code (0001-Add-tracking-...) is not tracking palloc/pfree but is\n> explicitely tracking malloc/free. Not every palloc/pfree call executes the\n> tracking code, only those where the path followed includes malloc() or\n> free(). Routine palloc() calls fulfilled from the context's\n> freelist/emptyblocks/freeblock/etc and pfree() calls not invoking free()\n> avoid the tracking code.\n\nSure, but we create a lot of memory contexts, so that's not a whole lot of\ncomfort.\n\n\nI marked this as waiting on author.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:26:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Mon, 2023-02-13 at 16:26 -0800, Andres Freund wrote:\n> Hi,\n> \n> The tests recently started to fail:\n> \n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3867\n> \n> I marked this as waiting on author.\n> \n> Greetings,\n> \n> Andres Freund\n\nPatch has been rebased to master.\n\nThe memory limiting portion (patch 0002-*) has been refactored to utilize a\nshared counter for total memory allocation along with backend-local\nallowances that are initialized at process startup and refilled from the\ncentral counter upon being used up. Free'd memory is accumulated and\nreturned to the shared counter upon meeting a threshold and/or upon process\nexit. At this point arbitrarily picked 1MB as the initial allowance and\nreturn threshold. \n\nThanks,\nReid",
"msg_date": "Thu, 02 Mar 2023 14:41:26 -0500",
"msg_from": "reid.thompson@crunchydata.com",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
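The release side described in this message — freed bytes accumulate locally and are returned to the shared counter only once a threshold is crossed — might look like the following minimal sketch (invented names, C11 atomics standing in for PostgreSQL's atomics API):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the release path: pfree'd bytes are batched
 * in a backend-local counter and flushed back to the shared budget
 * only when the 1MB return threshold mentioned above is reached.
 * Invented names for illustration -- not PostgreSQL code.
 */
#define RETURN_THRESHOLD ((uint64_t) 1024 * 1024)   /* 1MB, per the message */

static _Atomic uint64_t shared_bytes_available;     /* global budget */
static uint64_t pending_return;                     /* backend-local batch */

static void
uncharge_allocation(uint64_t freed)
{
    pending_return += freed;                        /* common case: no atomics */
    if (pending_return >= RETURN_THRESHOLD)
    {
        /* One atomic op returns the whole batch to the shared counter. */
        atomic_fetch_add(&shared_bytes_available, pending_return);
        pending_return = 0;
    }
}
```

On process exit the same flush would run unconditionally, so no budget is leaked; until then the shared counter undercounts availability by at most one threshold's worth per backend.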
{
"msg_contents": "On 2023-03-02 14:41:26 -0500, reid.thompson@crunchydata.com wrote:\n> Patch has been rebased to master.\n\nQuite a few prior review comments seem to not have been\naddressed. There's not much point in posting new versions without that.\n\nI think there's zero chance 0002 can make it into 16. If 0001 is cleaned\nup, I can see a path.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 16:29:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Updated patches attached. \n\n\n====================================================================\npg-stat-activity-backend-memory-allocated\n====================================================================\nDSM allocations created by a process and not destroyed prior to it's exit are\nconsidered long lived and are tracked in global_dsm_allocated_bytes.\n\ncreated 2 new system views (see below):\n\npg_stat_global_memory_allocation view displays datid, shared_memory_size,\nshared_memory_size_in_huge_pages, global_dsm_allocated_bytes. shared_memory_size\nand shared_memory_size_in_huge_pages display the calculated read only values for\nthese GUCs.\n\npg_stat_memory_allocation view\nMigrated allocated_bytes out of pg_stat_activity view into this view.\npg_stat_memory_allocation also contains a breakdown of allocation by allocator\ntype (aset, dsm, generation, slab). View displays datid, pid, allocated_bytes,\naset_allocated_bytes, dsm_allocated_bytes, generation_allocated_bytes,\nslab_allocated_bytes by process.\n\nReduced calls to initialize allocation counters by moving\nintialization call into InitPostmasterChild.\n\npostgres=# select * from pg_stat_global_memory_allocation;\n datid | shared_memory_size | shared_memory_size_in_huge_pages | global_dsm_allocated_bytes\n-------+--------------------+----------------------------------+----------------------------\n 5 | 192MB | 96 | 1048576\n(1 row)\n\n\npostgres=# select * from pg_stat_memory_allocation;\n datid | pid | allocated_bytes | aset_allocated_bytes | dsm_allocated_bytes | generation_allocated_bytes | slab_allocated_bytes\n-------+--------+-----------------+----------------------+---------------------+----------------------------+----------------------\n | 981842 | 771512 | 771512 | 0 | 0 | 0\n | 981843 | 736696 | 736696 | 0 | 0 | 0\n 5 | 981913 | 4274792 | 4274792 | 0 | 0 | 0\n | 981838 | 107216 | 107216 | 0 | 0 | 0\n | 981837 | 123600 | 123600 | 0 | 0 | 0\n | 981841 | 107216 | 107216 | 
0 | 0 | 0\n(6 rows)\n\npostgres=# select ps.datid, ps.pid, state,application_name,backend_type, pa.* from pg_stat_activity ps join pg_stat_memory_allocation pa on (pa.pid = ps.pid) order by dsm_allocated_bytes, pa.pid;\n datid | pid | state | application_name | backend_type | datid | pid | allocated_bytes | aset_allocated_bytes | dsm_allocated_bytes | generation_allocated_bytes | slab_allocated_bytes\n-------+--------+--------+------------------+------------------------------+-------+--------+-----------------+----------------------+---------------------+----------------------------+----------------------\n | 981837 | | | checkpointer | | 981837 | 123600 | 123600 | 0 | 0 | 0\n | 981838 | | | background writer | | 981838 | 107216 | 107216 | 0 | 0 | 0\n | 981841 | | | walwriter | | 981841 | 107216 | 107216 | 0 | 0 | 0\n | 981842 | | | autovacuum launcher | | 981842 | 771512 | 771512 | 0 | 0 | 0\n | 981843 | | | logical replication launcher | | 981843 | 736696 | 736696 | 0 | 0 | 0\n 5 | 981913 | active | psql | client backend | 5 | 981913 | 5390864 | 5382824 | 0 | 8040 | 0\n(6 rows)\n\n\n\n\n====================================================================\ndev-max-memory\n====================================================================\nInclude shared_memory_size in max_total_backend_memory calculations.\nmax_total_backend_memory is reduced by shared_memory_size at startup.\nLocal allowance is refilled when consumed from global\nmax_total_bkend_mem_bytes_available.\n\npg_stat_global_memory_allocation view\nadd columns max_total_backend_memory_bytes, max_total_bkend_mem_bytes_available.\nmax_total_backend_memory_bytes displays a byte representation of\nmax_total_backend_memory. 
max_total_bkend_mem_bytes_available tracks the balance\nof max_total_backend_memory_bytes available to backend processes.\n\npostgres=# select * from pg_stat_global_memory_allocation;\n datid | shared_memory_size | shared_memory_size_in_huge_pages | max_total_backend_memory_bytes | max_total_bkend_mem_bytes_available | global_dsm_allocated_bytes\n-------+--------------------+----------------------------------+--------------------------------+-------------------------------------+----------------------------\n 5 | 192MB | 96 | 2147483648 | 1874633712 | 5242880\n(1 row)\n\npostgres=# select * from pg_stat_memory_allocation ;\n datid | pid | allocated_bytes | aset_allocated_bytes | dsm_allocated_bytes | generation_allocated_bytes | slab_allocated_bytes\n-------+--------+-----------------+----------------------+---------------------+----------------------------+----------------------\n | 534528 | 812472 | 812472 | 0 | 0 | 0\n | 534529 | 736696 | 736696 | 0 | 0 | 0\n 5 | 556271 | 4458088 | 4458088 | 0 | 0 | 0\n 5 | 534942 | 1298680 | 1298680 | 0 | 0 | 0\n 5 | 709283 | 7985464 | 7985464 | 0 | 0 | 0\n 5 | 718693 | 8809240 | 8612504 | 196736 | 0 | 0\n 5 | 752113 | 25803192 | 25803192 | 0 | 0 | 0\n 5 | 659886 | 9042232 | 9042232 | 0 | 0 | 0\n | 534525 | 2491088 | 2491088 | 0 | 0 | 0\n | 534524 | 4465360 | 4465360 | 0 | 0 | 0\n | 534527 | 107216 | 107216 | 0 | 0 | 0\n(11 rows)\n\n\npostgres=# select ps.datid, ps.pid, state,application_name,backend_type, pa.* from pg_stat_activity ps join pg_stat_memory_allocation pa on (pa.pid = ps.pid) order by dsm_allocated_bytes, pa.pid;\n datid | pid | state | application_name | backend_type | datid | pid | allocated_bytes | aset_allocated_bytes | dsm_allocated_bytes | generation_allocated_bytes | slab_allocated_bytes\n-------+--------+--------+------------------+------------------------------+-------+--------+-----------------+----------------------+---------------------+----------------------------+----------------------\n | 534524 | | 
| checkpointer | | 534524 | 4465360 | 4465360 | 0 | 0 | 0\n | 534525 | | | background writer | | 534525 | 2491088 | 2491088 | 0 | 0 | 0\n | 534527 | | | walwriter | | 534527 | 107216 | 107216 | 0 | 0 | 0\n | 534528 | | | autovacuum launcher | | 534528 | 812472 | 812472 | 0 | 0 | 0\n | 534529 | | | logical replication launcher | | 534529 | 736696 | 736696 | 0 | 0 | 0\n 5 | 534942 | idle | psql | client backend | 5 | 534942 | 1298680 | 1298680 | 0 | 0 | 0\n 5 | 556271 | active | psql | client backend | 5 | 556271 | 4866576 | 4858536 | 0 | 8040 | 0\n 5 | 659886 | active | | autovacuum worker | 5 | 659886 | 8993080 | 8993080 | 0 | 0 | 0\n 5 | 709283 | active | | autovacuum worker | 5 | 709283 | 7928120 | 7928120 | 0 | 0 | 0\n 5 | 752113 | active | | autovacuum worker | 5 | 752113 | 27935608 | 27935608 | 0 | 0 | 0\n 5 | 718693 | active | psql | client backend | 5 | 718693 | 8669976 | 8473240 | 196736 | 0 | 0\n(11 rows)",
"msg_date": "Fri, 24 Mar 2023 12:19:10 -0400",
"msg_from": "reid.thompson@crunchydata.com",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Updated patches attached. \n\nRebased to current master.\nAdded additional columns to pg_stat_global_memory_allocation to summarize backend allocations by type.\nUpdated documentation.\nCorrected some issues noted in review by John Morris.\nAdded code re EXEC_BACKEND for dev-max-memory branch.",
"msg_date": "Thu, 06 Apr 2023 18:35:38 -0400",
"msg_from": "reid.thompson@crunchydata.com",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Thank you! I just tried our benchmark and got a performance degration of around 28 %, which is way better than the last patch.\n\nThe simple query select * from generate_series(0, 10000000) shows roughly 18.9 % degradation on my test server.\n\nBy raising initial_allocation_allowance and allocation_allowance_refill_qty I can get it to 16 % degradation. So most of the degradation seems to be independent from raising the allowance.\n\nI think we probably should investigate this further.\n\nRegards\nArne\n\n\n\n\n\n\n\n\nThank you! I just tried our benchmark and got a performance degration of around 28 %, which is way better than the last patch.\n\n\n\nThe simple query select * from generate_series(0, 10000000) shows roughly 18.9 % degradation on my test server.\n\n\nBy raising initial_allocation_allowance and allocation_allowance_refill_qty I can get it to 16 % degradation. So most of the degradation seems to be independent from raising the allowance.\n\n\n\nI think we probably should investigate this further.\n\n\nRegards\nArne",
"msg_date": "Wed, 19 Apr 2023 23:28:36 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Wed, 2023-04-19 at 23:28 +0000, Arne Roland wrote:\n> > Thank you! I just tried our benchmark and got a performance\n> > degration > of around 28 %, which is way better than the last\n> > patch.\n> > \n> > The simple query select * from generate_series(0, 10000000) shows >\n> > roughly 18.9 % degradation on my test server.\n> > \n> > By raising initial_allocation_allowance and >\n> > allocation_allowance_refill_qty I can get it to 16 % degradation.\n> > So > most of the degradation seems to be independent from raising\n> > the > allowance.\n> > \n> > I think we probably should investigate this further.\n> > \n> > Regards\n> > Arne\n> > \n\nHi Arne,\n\nThanks for the feedback.\n\nI'm plannning to look at this. \n\nIs your benchmark something that I could utilize? I.E. is it a set of\nscripts or a standard test from somewhere that I can duplicate?\n\nThanks,\nReid\n\n\n\n\n\n",
"msg_date": "Wed, 17 May 2023 23:07:03 -0400",
"msg_from": "reid.thompson@crunchydata.com",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Wed, 2023-05-17 at 23:07 -0400, reid.thompson@crunchydata.com wrote:\n> Thanks for the feedback.\n> \n> I'm plannning to look at this. \n> \n> Is your benchmark something that I could utilize? I.E. is it a set of\n> scripts or a standard test from somewhere that I can duplicate?\n> \n> Thanks,\n> Reid\n> \nHi Arne,\n\nFollowup to the above.\n\nI experimented on my system regarding\n\"The simple query select * from generate_series(0, 10000000) shows roughly 18.9 % degradation on my test server.\"\n\nMy laptop:\n32GB ram\n11th Gen Intel(R) Core(TM) i7-11850H 8 cores/16 threads @ 2.50GHz (Max Turbo Frequency. 4.80 GHz ; Cache. 24 MB)\nSSD -> Model: KXG60ZNV1T02 NVMe KIOXIA 1024GB (nvme)\n\nI updated to latest master and rebased my patch branches.\n\nI wrote a script to check out, build, install, init, and startup\nmaster, patch 1, patch 1+2, patch 1+2 as master, pg-stats-memory, \ndev-max-memory, and dev-max-memory-unset configured with\n\n../../configure --silent --prefix=/home/rthompso/src/git/postgres/install/${dir} --with-openssl --with-tcl --with-tclconfig=/usr/lib/tcl8.6 --with-perl --with-libxml --with-libxslt --with-python --with-gssapi --with-systemd --with-ldap --enable-nls\n\nwhere $dir in master, pg-stats-memory, and dev-max-memory,\ndev-max-memory-unset.\n\nThe only change made to the default postgresql.conf was to have the\nscript add to the dev-max-memory instance the line\n\"max_total_backend_memory = 2048\" before startup.\nI did find one change in patch 2 that I pushed back into patch 1, this\nshould only impact the pg-stats-memory instance.\n\nmy .psqlrc turns timing on\n\nI created a script where I can pass two instances to be compared.\nIt invokes\n psql -At -d postgres $connstr -P pager=off -c 'select * from generate_series(0, 10000000)'\n100 times on each of the 2 instances and calculates the AVG time and SD\nfor the 100 runs. 
It then uses the AVG from each instance to calculate\nthe percentage difference.\n\nDepending on the instance, my results differ from master from\nnegligible to ~5.5%. Comparing master to itself had up to a ~2%\nvariation. See below.\n\n------------------------\n12 runs comparing dev-max-memory 2048 VS master\nShows ~3% to 5.5% variation\n\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1307.14 -> VER dev-max-memory 2048\n1240.74 -> VER master\n5.21218% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1315.99 -> VER dev-max-memory 2048\n1245.64 -> VER master\n5.4926% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1317.39 -> VER dev-max-memory 2048\n1265.33 -> VER master\n4.03141% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1313.52 -> VER dev-max-memory 2048\n1256.69 -> VER master\n4.42221% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1329.98 -> VER dev-max-memory 2048\n1253.75 -> VER master\n5.90077% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1314.47 -> VER dev-max-memory 2048\n1245.6 -> VER master\n5.38032% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1309.7 -> VER dev-max-memory 2048\n1258.55 -> VER master\n3.98326% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1322.16 -> VER dev-max-memory 2048\n1248.94 -> VER master\n5.69562% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1320.15 -> VER dev-max-memory 2048\n1261.41 -> VER master\n4.55074% difference\n--\nCalculate average runtime percentage 
difference between VER dev-max-memory 2048 and VER master\n1345.22 -> VER dev-max-memory 2048\n1280.96 -> VER master\n4.8938% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1296.03 -> VER dev-max-memory 2048\n1257.06 -> VER master\n3.05277% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory 2048 and VER master\n1319.5 -> VER dev-max-memory 2048\n1252.34 -> VER master\n5.22272% difference\n\n----------------------------\n12 showing dev-max-memory-unset VS master\nShows ~2.5% to 5% variation\n\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1300.93 -> VER dev-max-memory unset\n1235.12 -> VER master\n5.18996% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1293.57 -> VER dev-max-memory unset\n1263.93 -> VER master\n2.31789% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1303.05 -> VER dev-max-memory unset\n1258.11 -> VER master\n3.50935% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1302.14 -> VER dev-max-memory unset\n1256.51 -> VER master\n3.56672% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1299.22 -> VER dev-max-memory unset\n1282.74 -> VER master\n1.27655% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1334.06 -> VER dev-max-memory unset\n1263.77 -> VER master\n5.41144% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1319.92 -> VER dev-max-memory unset\n1262.35 -> VER master\n4.45887% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1318.01 -> VER 
dev-max-memory unset\n1257.16 -> VER master\n4.7259% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1316.88 -> VER dev-max-memory unset\n1257.63 -> VER master\n4.60282% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1320.33 -> VER dev-max-memory unset\n1282.12 -> VER master\n2.93646% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1306.91 -> VER dev-max-memory unset\n1246.12 -> VER master\n4.76218% difference\n--\nCalculate average runtime percentage difference between VER dev-max-memory unset and VER master\n1320.65 -> VER dev-max-memory unset\n1258.78 -> VER master\n4.79718% difference\n-------------------------------\n\n12 showing pg-stat-activity-only VS master\nShows ~<1% to 2.5% variation\n\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1252.65 -> VER pg-stat-activity-backend-memory-allocated\n1245.36 -> VER master\n0.583665% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1294.75 -> VER pg-stat-activity-backend-memory-allocated\n1277.55 -> VER master\n1.33732% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1264.11 -> VER pg-stat-activity-backend-memory-allocated\n1257.57 -> VER master\n0.518702% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1267.44 -> VER pg-stat-activity-backend-memory-allocated\n1251.31 -> VER master\n1.28079% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1270.05 -> VER pg-stat-activity-backend-memory-allocated\n1250.1 -> VER 
master\n1.58324% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1298.92 -> VER pg-stat-activity-backend-memory-allocated\n1265.04 -> VER master\n2.64279% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1280.99 -> VER pg-stat-activity-backend-memory-allocated\n1263.51 -> VER master\n1.37394% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1273.23 -> VER pg-stat-activity-backend-memory-allocated\n1275.53 -> VER master\n-0.18048% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1261.2 -> VER pg-stat-activity-backend-memory-allocated\n1263.04 -> VER master\n-0.145786% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1289.73 -> VER pg-stat-activity-backend-memory-allocated\n1289.02 -> VER master\n0.0550654% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1287.57 -> VER pg-stat-activity-backend-memory-allocated\n1279.42 -> VER master\n0.634985% difference\n--\nCalculate average runtime percentage difference between VER pg-stat-activity-backend-memory-allocated and VER master\n1272.01 -> VER pg-stat-activity-backend-memory-allocated\n1259.22 -> VER master\n1.01058% difference\n----------------------------------\n\nI also did 12 runs master VS master\nShows, ~1% to 2% variation\n\nCalculate average runtime percentage difference between VER master and VER master\n1239.6 -> VER master\n1263.73 -> VER master\n-1.92783% difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1253.82 -> VER master\n1252.5 -> VER master\n0.105334% 
difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1256.05 -> VER master\n1258.97 -> VER master\n-0.232205% difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1264.8 -> VER master\n1248.94 -> VER master\n1.26186% difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1265.08 -> VER master\n1275.43 -> VER master\n-0.814797% difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1260.95 -> VER master\n1288.81 -> VER master\n-2.1853% difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1260.46 -> VER master\n1252.86 -> VER master\n0.604778% difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1253.49 -> VER master\n1255.25 -> VER master\n-0.140309% difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1277.5 -> VER master\n1267.42 -> VER master\n0.792166% difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1266.2 -> VER master\n1283.12 -> VER master\n-1.32741% difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1245.78 -> VER master\n1246.78 -> VER master\n-0.0802388% difference\n--\nCalculate average runtime percentage difference between VER master and VER master\n1255.15 -> VER master\n1276.73 -> VER master\n-1.70466% difference\n\n\n\n",
"msg_date": "Mon, 22 May 2023 08:42:51 -0400",
"msg_from": "reid.thompson@crunchydata.com",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
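A minimal sketch of the arithmetic behind the comparisons in the message above, assuming the percent difference is taken relative to the mean of the two run-time averages (an inference; it matches the quoted figures):

```python
def pct_difference(a, b):
    """Percent difference between two run-time averages, relative to
    their mean. This reproduces the figures posted in the thread; the
    helper itself is a sketch, not the script actually used."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

# First comparison above: dev-max-memory 2048 (1307.14) vs master (1240.74)
print(round(pct_difference(1307.14, 1240.74), 5))  # 5.21218
```

Running this against the other pairs in the message gives the same percentages, which suggests the denominator really is the mean rather than the baseline.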
{
"msg_contents": "On Mon, 2023-05-22 at 08:42 -0400, reid.thompson@crunchydata.com wrote:\n> On Wed, 2023-05-17 at 23:07 -0400, reid.thompson@crunchydata.com wrote:\n> > Thanks for the feedback.\n> > \n> > I'm plannning to look at this. \n> > \n> > Is your benchmark something that I could utilize? I.E. is it a set of\n> > scripts or a standard test from somewhere that I can duplicate?\n> > \n> > Thanks,\n> > Reid\n> > \n\nAttach patches updated to master.\nPulled from patch 2 back to patch 1 a change that was also pertinent to patch 1.",
"msg_date": "Mon, 22 May 2023 11:59:56 -0400",
"msg_from": "reid.thompson@crunchydata.com",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On Mon, 2023-05-22 at 08:42 -0400, reid.thompson@crunchydata.com wrote:\n \nMore followup to the above.\n> \n> I experimented on my system regarding\n> \"The simple query select * from generate_series(0, 10000000) shows roughly 18.9 % degradation on my test server.\"\n> \n> My laptop:\n> 32GB ram\n> 11th Gen Intel(R) Core(TM) i7-11850H 8 cores/16 threads @ 2.50GHz (Max Turbo Frequency. 4.80 GHz ; Cache. 24 MB)\n> SSD -> Model: KXG60ZNV1T02 NVMe KIOXIA 1024GB (nvme)\n\nHi\n\nRan through a few more tests on my system varying the\ninitial_allocation_allowance and allocation_allowance_refill_qty from the\ncurrent 1MB to 2, 4, 6, 8, 10 mb. Also realized that in my last tests/email I\nhad posted percent difference rather than percent change. Turns out for the\nnumbers that were being compared they're essentially the same, but I'm\nproviding both for this set of tests. Ten runs for each comparison. Compared\ndev-max-memory set, dev-max-memory unset, master, and pg-stat-activity-backend-memory-allocated\nagainst master at each allocation value;\n\nAgain, the test invokes\n psql -At -d postgres $connstr -P pager=off -c 'select * from generate_series(0, 10000000)'\n100 times on each of the 2 instances and calculates the AVG time and SD\nfor the 100 runs. It then uses the AVG from each instance to calculate\nthe percentage difference/change.\n\nThese tests contain one code change not yet pushed to pgsql-hackers. 
In\nAllocSetReset() do not enter pgstat_report_allocated_bytes_decrease if no\nmemory has been freed.\n\nWill format and post some pgbench test result in a separate email.\n\nPercent difference:\n\n───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n │ Results: difference-dev-max-memory-set VS master\n───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n 1 │ 1MB allocation 2MB allocation 4MB allocation 6MB allocation 8MB allocation 10MB allocation\n 2 │ 4.2263% difference 3.03961% difference 0.0585808% difference 2.92451% difference 3.34694% difference 2.67771% difference\n 3 │ 3.55709% difference 3.92339% difference 2.29144% difference 3.2156% difference 2.06153% difference 2.86217% difference\n 4 │ 2.04389% difference 2.91866% difference 3.73463% difference 2.86161% difference 3.60992% difference 3.07293% difference\n 5 │ 3.1306% difference 3.64773% difference 2.38063% difference 1.84845% difference 4.87375% difference 4.16953% difference\n 6 │ 3.12556% difference 3.34537% difference 2.99052% difference 2.60538% difference 2.14825% difference 1.95454% difference\n 7 │ 2.20615% difference 2.12861% difference 2.85282% difference 2.43336% difference 2.31389% difference 3.21563% difference\n 8 │ 1.9954% difference 3.61371% difference 3.35543% difference 3.49821% difference 3.41526% difference 8.25753% difference\n 9 │ 2.46845% difference 2.57784% difference 3.13067% difference 3.67681% difference 2.89139% difference 3.6067% difference\n 10 │ 3.60092% difference 2.16164% difference 3.9976% difference 2.6144% difference 4.27892% difference 2.68998% difference\n 11 │ 2.55454% difference 2.39073% difference 3.09631% difference 3.24292% difference 1.9107% difference 1.76182% difference\n 12 │\n 13 │ 28.9089/10 29.74729/10 27.888631/10 28.92125/10 30.85055/10 34.26854/10\n 14 
│ 2.89089 2.974729 2.7888631 2.892125 3.085055 3.426854\n───────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n │ Results: difference-dev-max-memory-unset VS master\n───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n 1 │ 1MB allocation 2MB allocation 4MB allocation 6MB allocation 8MB allocation 10MB allocation\n 2 │ 3.96616% difference 3.05528% difference 0.563267% difference 1.12075% difference 3.52398% difference 3.25641% difference\n 3 │ 3.11387% difference 3.12499% difference 1.1133% difference 4.86997% difference 2.11481% difference 1.11668% difference\n 4 │ 3.14506% difference 2.06193% difference 3.36034% difference 2.80644% difference 2.37822% difference 3.07669% difference\n 5 │ 2.81052% difference 3.18499% difference 2.70705% difference 2.27847% difference 2.78506% difference 3.02919% difference\n 6 │ 2.9765% difference 3.44165% difference 2.62039% difference 4.61596% difference 2.27937% difference 3.89676% difference\n 7 │ 3.201% difference 1.35838% difference 2.40578% difference 3.95695% difference 2.25983% difference 4.17585% difference\n 8 │ 5.35191% difference 3.96434% difference 4.32891% difference 3.62715% difference 2.17503% difference 0.620856% difference\n 9 │ 3.44241% difference 2.9754% difference 3.03765% difference 1.48104% difference 1.53958% difference 3.14598% difference\n 10 │ 10.1155% difference 4.21062% difference 1.64416% difference 1.51458% difference 2.92131% difference 2.95603% difference\n 11 │ 3.11011% difference 4.31318% difference 2.01991% difference 4.71192% difference 2.37039% difference 4.25241% difference\n 12 │\n 13 │ 41.23304/10 31.69076/10 23.800757/10 30.98323/10 
24.34758/10 29.526856/10\n 14 │ 4.123304 3.169076 2.3800757 3.098323 2.434758 2.9526856\n───────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n │ Results: difference-master VS master\n───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n 1 │ 1MB allocation 2MB allocation 4MB allocation 6MB allocation 8MB allocation 10MB allocation\n 2 │ 0.0734782% difference 0.0955457% difference 0.0521627% difference 2.32643% difference 0.286493% difference 1.26977% difference\n 3 │ 0.547862% difference 1.19087% difference 0.276915% difference 0.334332% difference 0.260545% difference 0.108956% difference\n 4 │ 0.0714666% difference 0.931605% difference 0.753996% difference 0.457174% difference 0.215904% difference 1.43979% difference\n 5 │ 0.269737% difference 0.848613% difference 0.222909% difference 0.315927% difference 0.290408% difference 0.248591% difference\n 6 │ 1.04231% difference 0.367444% difference 0.699571% difference 0.29266% difference 0.844548% difference 0.273776% difference\n 7 │ 0.0584984% difference 0.15094% difference 0.0721539% difference 0.594991% difference 1.80223% difference 0.500557% difference\n 8 │ 0.355129% difference 1.19517% difference 0.201835% difference 1.2351% difference 0.266004% difference 0.80893% difference\n 9 │ 0.0811794% difference 1.16184% difference 1.01913% difference 0.149087% difference 0.402931% difference 0.125788% difference\n 10 │ 0.950973% difference 0.154471% difference 0.42623% difference 0.874816% difference 0.157934% difference 0.225433% difference\n 11 │ 0.501783% difference 0.308357% difference 0.279147% difference 0.122458% difference 0.538141% difference 0.865846% 
difference\n 12 │\n 13 │ 3.952417/10 6.404856/10 4.00405/10 6.702975/10 5.065138/10 5.867437/10\n 14 │ 0.3952417 0.6404856 0.400405 0.6702975 0.5065138 0.5867437\n───────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n │ Results: difference-pg-stat-activity-backend-memory-allocated VS master\n───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n 1 │ 1MB allocation 2MB allocation 4MB allocation 6MB allocation 8MB allocation 10MB allocation\n 2 │ 2.04788% difference 0.50705% difference 0.504772% difference 0.136316% difference 0.590087% difference 1.33931% difference\n 3 │ 1.21173% difference 0.3309% difference 0.482685% difference 1.67956% difference 0.175478% difference 0.969286% difference\n 4 │ 0.0680972% difference 0.295211% difference 0.867547% difference 1.12959% difference 0.193756% difference 0.714178% difference\n 5 │ 0.91525% difference 1.42408% difference 1.49059% difference 0.641652% difference 1.34265% difference 0.378394% difference\n 6 │ 2.46448% difference 2.67081% difference 0.63824% difference 0.650301% difference 0.481858% difference 1.65711% difference\n 7 │ 1.31021% difference 0.0548831% difference 1.23217% difference 2.11691% difference 0.31629% difference 3.85858% difference\n 8 │ 1.61458% difference 0.46042% difference 0.724742% difference 0.172952% difference 1.33157% difference 0.556898% difference\n 9 │ 1.65063% difference 0.59815% difference 1.42473% difference 0.725576% difference 0.229639% difference 0.875489% difference\n 10 │ 1.78567% difference 1.45652% difference 0.6317% difference 1.99146% difference 0.999521% difference 1.85291% difference\n 11 │ 0.391318% difference 1.13216% difference 
0.138291% difference 0.531084% difference 0.680197% difference 1.63162% difference\n 12 │\n 13 │ 13.459845/10 8.930184/10 8.135467/10 9.775401/10 6.341046/10 13.83377/10\n 14 │ 1.3459845 0.8930184 0.8135467 0.9775401 0.6341046 1.3833775\n───────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n\n\nPercent change:\n\n───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n │ Results: change-dev-max-memory-set VS master\n───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n 1 │ 1MB allocation 2MB allocation 4MB allocation 6MB allocation 8MB allocation 10MB allocation\n 2 │ 4.13884% change 2.99411% change 0.0585636% change 2.88237% change 3.29185% change 2.64233% change\n 3 │ 3.49493% change 3.84791% change 2.26549% change 3.16472% change 2.0405% change 2.82179% change\n 4 │ 2.02322% change 2.87668% change 3.66617% change 2.82124% change 3.54592% change 3.02643% change\n 5 │ 3.08235% change 3.5824% change 2.35263% change 1.83153% change 4.75781% change 4.08438% change\n 6 │ 3.07746% change 3.29033% change 2.94646% change 2.57188% change 2.12542% change 1.93562% change\n 7 │ 2.18208% change 2.10619% change 2.8127% change 2.40411% change 2.28743% change 3.16474% change\n 8 │ 1.97569% change 3.54957% change 3.30007% change 3.43808% change 3.35792% change 7.93011% change\n 9 │ 2.43836% change 2.54504% change 3.08242% change 3.61044% change 2.85019% change 3.54281% change\n 10 │ 3.53724% change 2.13852% change 3.91926% change 2.58067% change 4.18929% change 2.65428% change\n 11 │ 2.52233% change 2.36249% change 3.0491% change 3.19118% change 1.89262% change 1.74644% change\n 12 │\n 13 │ 28.4725/10 29.29324/10 27.452864/10 28.49622/10 30.33895/10 33.54893/10\n 14 │ 2.84725 2.929324 
2.7452864 2.849622 3.033895 3.354893\n───────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n │ Results: change-dev-max-memory-unset VS master\n───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n 1 │ 1MB allocation 2MB allocation 4MB allocation 6MB allocation 8MB allocation 10MB allocation\n 2 │ 3.88903% change 3.00931% change 0.564858% change 1.11451% change 3.46296% change 3.20424% change\n 3 │ 3.06613% change 3.07691% change 1.10714% change 4.75421% change 2.09268% change 1.11048% change\n 4 │ 3.09637% change 2.04089% change 3.30482% change 2.7676% change 2.35028% change 3.03008% change\n 5 │ 2.77157% change 3.13506% change 2.6709% change 2.2528% change 2.74681% change 2.984% change\n 6 │ 2.93285% change 3.38343% change 2.5865% change 4.51183% change 2.25368% change 3.82229% change\n 7 │ 3.15057% change 1.34921% change 2.37719% change 3.88018% change 2.23458% change 4.09044% change\n 8 │ 5.21243% change 3.88728% change 4.23719% change 3.56254% change 2.15163% change 0.62279% change\n 9 │ 3.38416% change 2.93178% change 2.99221% change 1.47015% change 1.52782% change 3.09726% change\n 10 │ 10.6543% change 4.1238% change 1.63075% change 1.5032% change 2.87926% change 2.91298% change\n 11 │ 3.06248% change 4.22213% change 1.99972% change 4.60347% change 2.34263% change 4.16388% change\n 12 │\n 13 │ 41.21989/10 31.1598/10 23.471278/10 30.42049/10 24.04233/10 29.03844/10\n 14 │ 4.121989 3.11598 2.3471278 3.042049 2.404233 
2.903844\n───────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n │ Results: change-master VS master\n───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n 1 │ 1MB allocation 2MB allocation 4MB allocation 6MB allocation 8MB allocation 10MB allocation\n 2 │ 0.0734512% change 0.0955% change 0.0521763% change 2.35381% change 0.286904% change 1.27789% change\n 3 │ 0.549367% change 1.18382% change 0.276532% change 0.333774% change 0.260206% change 0.108897% change\n 4 │ 0.0714411% change 0.927286% change 0.751164% change 0.456132% change 0.216137% change 1.4295% change\n 5 │ 0.269374% change 0.845028% change 0.222661% change 0.315429% change 0.29083% change 0.2489% change\n 6 │ 1.0369% change 0.368121% change 0.702026% change 0.292232% change 0.840997% change 0.273402% change\n 7 │ 0.0584813% change 0.151054% change 0.07218% change 0.596766% change 1.78613% change 0.499307% change\n 8 │ 0.355761% change 1.18807% change 0.201631% change 1.22752% change 0.265651% change 0.805671% change\n 9 │ 0.0812124% change 1.16863% change 1.02435% change 0.149198% change 0.402121% change 0.125709% change\n 10 │ 0.955516% change 0.154351% change 0.425324% change 0.871006% change 0.158059% change 0.225179% change\n 11 │ 0.500527% change 0.307882% change 0.278758% change 0.122533% change 0.539593% change 0.862113% change\n 12 │\n 13 │ 3.952031/10 6.389742/10 4.006802/10 6.7184/10 5.046628/10 5.856568/10\n 14 │ 0.3952031 0.6389742 0.4006802 0.67184 0.5046628 
0.5856568\n───────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n │ Results: change-pg-stat-activity-backend-memory-allocated VS master\n───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n 1 │ 1MB allocation 2MB allocation 4MB allocation 6MB allocation 8MB allocation 10MB allocation\n 2 │ 2.02713% change 0.505768% change 0.506049% change 0.136223% change 0.591833% change 1.3304% change\n 3 │ 1.20444% change 0.331448% change 0.481523% change 1.66557% change 0.175325% change 0.974006% change\n 4 │ 0.068074% change 0.294776% change 0.8638% change 1.12325% change 0.193568% change 0.711637% change\n 5 │ 0.91108% change 1.41401% change 1.47956% change 0.6396% change 1.33369% change 0.377679% change\n 6 │ 2.43448% change 2.63562% change 0.636209% change 0.648194% change 0.4807% change 1.64349% change\n 7 │ 1.30168% change 0.054868% change 1.22463% change 2.09474% change 0.316791% change 3.93449% change\n 8 │ 1.60165% change 0.461483% change 0.722126% change 0.173102% change 1.32277% change 0.555352% change\n 9 │ 1.63712% change 0.599944% change 1.41466% change 0.722953% change 0.229375% change 0.871673% change\n 10 │ 1.76986% change 1.44599% change 0.629711% change 1.97183% change 0.99455% change 1.8359% change\n 11 │ 0.392085% change 1.12579% change 0.138195% change 0.532498% change 0.677892% change 1.61841% change\n 12 │\n 13 │ 13.347599/10 8.869697/10 8.096463/10 9.70796/10 6.316494/10 13.853037/10\n 14 │ 1.3347599 0.8869697 0.8096463 0.970796 0.6316494 
1.385303\n───────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n\n\n\n\n",
"msg_date": "Mon, 05 Jun 2023 14:33:28 -0400",
"msg_from": "reid.thompson@crunchydata.com",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
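The tables in the message above report the same comparisons both as "% difference" and "% change". From the paired figures, the change numbers appear to be taken relative to the larger of the two averages while the difference uses their mean; under that assumption the two are related by a simple formula. This is a sketch of that inferred relation, not the script actually used:

```python
def pct_change_from_difference(diff_pct):
    """Convert a percent *difference* (relative to the mean of two
    averages) into the percent *change* relative to the larger average.
    Derivation: with d = a - b and m = (a + b)/2, a = m * (1 + D/2),
    so C = d/a = D / (1 + D/2), with D expressed as a fraction."""
    return diff_pct / (1.0 + diff_pct / 200.0)

# Second row of the dev-max-memory-set tables:
# 3.55709% difference pairs with 3.49493% change
print(round(pct_change_from_difference(3.55709), 5))
```

Checking a few rows against the tables, the predicted change values match the posted ones to within rounding of the intermediate averages.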
{
"msg_contents": "On 22/5/2023 22:59, reid.thompson@crunchydata.com wrote:\n> Attach patches updated to master.\n> Pulled from patch 2 back to patch 1 a change that was also pertinent to patch 1.\n+1 to the idea, have doubts on the implementation.\n\nI have a question. I see the feature triggers ERROR on the exceeding of \nthe memory limit. The superior PG_CATCH() section will handle the error. \nAs I see, many such sections use memory allocations. What if some \nroutine, like the CopyErrorData(), exceeds the limit, too? In this case, \nwe could repeat the error until the top PG_CATCH(). Is this correct \nbehaviour? Maybe to check in the exceeds_max_total_bkend_mem() for \nrecursion and allow error handlers to slightly exceed this hard limit?\n\nAlso, the patch needs to be rebased.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Fri, 29 Sep 2023 09:52:47 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On 29/9/2023 09:52, Andrei Lepikhov wrote:\n> On 22/5/2023 22:59, reid.thompson@crunchydata.com wrote:\n>> Attach patches updated to master.\n>> Pulled from patch 2 back to patch 1 a change that was also pertinent \n>> to patch 1.\n> +1 to the idea, have doubts on the implementation.\n> \n> I have a question. I see the feature triggers ERROR on the exceeding of \n> the memory limit. The superior PG_CATCH() section will handle the error. \n> As I see, many such sections use memory allocations. What if some \n> routine, like the CopyErrorData(), exceeds the limit, too? In this case, \n> we could repeat the error until the top PG_CATCH(). Is this correct \n> behaviour? Maybe to check in the exceeds_max_total_bkend_mem() for \n> recursion and allow error handlers to slightly exceed this hard limit?\nBy the patch in attachment I try to show which sort of problems I'm \nworrying about. In some PП_CATCH() sections we do CopyErrorData \n(allocate some memory) before aborting the transaction. So, the \nallocation error can move us out of this section before aborting. We \nawait for soft ERROR message but will face more hard consequences.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Tue, 3 Oct 2023 18:33:37 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
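The failure mode described above — an allocation inside an error handler re-triggering the limit error — can be modeled in a few lines. This is a toy sketch of the recursion check Andrei suggests; only the name exceeds_max_total_bkend_mem comes from the patch, and everything else is a hypothetical stand-in, not PostgreSQL code:

```python
class Backend:
    """Toy model of per-backend allocation accounting with a hard limit."""
    def __init__(self, limit):
        self.limit = limit
        self.allocated = 0
        self.in_error_handler = False   # the "recursion" flag suggested above

    def exceeds_max_total_bkend_mem(self, size):
        if self.in_error_handler:
            return False                # let error cleanup exceed the hard limit
        return self.allocated + size > self.limit

    def alloc(self, size):
        if self.exceeds_max_total_bkend_mem(size):
            raise MemoryError("allocation would exceed backend limit")
        self.allocated += size

def run_query(backend):
    try:
        backend.alloc(900)
        backend.alloc(200)              # pushes past the limit -> ERROR
    except MemoryError:
        backend.in_error_handler = True # PG_CATCH(): e.g. CopyErrorData()
        try:
            backend.alloc(50)           # would re-raise without the flag
        finally:
            backend.in_error_handler = False
        return "soft error handled"

print(run_query(Backend(limit=1000)))   # soft error handled
```

With the flag removed, the alloc(50) in the handler raises again, which is the "repeat the error until the top PG_CATCH()" scenario the question describes.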
{
"msg_contents": "Greetings,\n\n* Andrei Lepikhov (a.lepikhov@postgrespro.ru) wrote:\n> On 29/9/2023 09:52, Andrei Lepikhov wrote:\n> > On 22/5/2023 22:59, reid.thompson@crunchydata.com wrote:\n> > > Attach patches updated to master.\n> > > Pulled from patch 2 back to patch 1 a change that was also pertinent\n> > > to patch 1.\n> > +1 to the idea, have doubts on the implementation.\n> > \n> > I have a question. I see the feature triggers ERROR on the exceeding of\n> > the memory limit. The superior PG_CATCH() section will handle the error.\n> > As I see, many such sections use memory allocations. What if some\n> > routine, like the CopyErrorData(), exceeds the limit, too? In this case,\n> > we could repeat the error until the top PG_CATCH(). Is this correct\n> > behaviour? Maybe to check in the exceeds_max_total_bkend_mem() for\n> > recursion and allow error handlers to slightly exceed this hard limit?\n\n> By the patch in attachment I try to show which sort of problems I'm worrying\n> about. In some PП_CATCH() sections we do CopyErrorData (allocate some\n> memory) before aborting the transaction. So, the allocation error can move\n> us out of this section before aborting. We await for soft ERROR message but\n> will face more hard consequences.\n\nWhile it's an interesting idea to consider making exceptions to the\nlimit, and perhaps we'll do that (or have some kind of 'reserve' for\nsuch cases), this isn't really any different than today, is it? We\nmight have a malloc() failure in the main path, end up in PG_CATCH() and\nthen try to do a CopyErrorData() and have another malloc() failure.\n\nIf we can rearrange the code to make this less likely to happen, by\ndoing a bit more work to free() resources used in the main path before\ntrying to do new allocations, then, sure, let's go ahead and do that,\nbut that's independent from this effort.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 18 Oct 2023 15:00:42 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
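One way the 'reserve' idea mentioned above could be sketched: normal allocations are capped below the full limit, and only error handling may dip into the remaining headroom. This is a hypothetical helper illustrating the idea, not the patch's API:

```python
def check_allocation(allocated, size, limit, reserve, handling_error):
    """Return True if an allocation of `size` bytes may proceed.
    Normal-path allocations must stay under (limit - reserve); error
    handling may also use the reserve. Sketch of the idea only."""
    budget = limit if handling_error else limit - reserve
    return allocated + size <= budget

# Normal path is capped below the full limit...
print(check_allocation(900, 50, limit=1000, reserve=100, handling_error=False))  # False
# ...error handling can dip into the reserve...
print(check_allocation(900, 50, limit=1000, reserve=100, handling_error=True))   # True
# ...but a large enough allocation still fails, as noted above.
print(check_allocation(900, 200, limit=1000, reserve=100, handling_error=True))  # False
```

The third call shows the caveat raised in the message: a reserve only shifts the problem, since an error-path allocation bigger than the headroom still has to fail.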
{
"msg_contents": "On 19/10/2023 02:00, Stephen Frost wrote:\n> Greetings,\n> \n> * Andrei Lepikhov (a.lepikhov@postgrespro.ru) wrote:\n>> On 29/9/2023 09:52, Andrei Lepikhov wrote:\n>>> On 22/5/2023 22:59, reid.thompson@crunchydata.com wrote:\n>>>> Attach patches updated to master.\n>>>> Pulled from patch 2 back to patch 1 a change that was also pertinent\n>>>> to patch 1.\n>>> +1 to the idea, have doubts on the implementation.\n>>>\n>>> I have a question. I see the feature triggers ERROR on the exceeding of\n>>> the memory limit. The superior PG_CATCH() section will handle the error.\n>>> As I see, many such sections use memory allocations. What if some\n>>> routine, like the CopyErrorData(), exceeds the limit, too? In this case,\n>>> we could repeat the error until the top PG_CATCH(). Is this correct\n>>> behaviour? Maybe to check in the exceeds_max_total_bkend_mem() for\n>>> recursion and allow error handlers to slightly exceed this hard limit?\n> \n>> By the patch in attachment I try to show which sort of problems I'm worrying\n>> about. In some PП_CATCH() sections we do CopyErrorData (allocate some\n>> memory) before aborting the transaction. So, the allocation error can move\n>> us out of this section before aborting. We await for soft ERROR message but\n>> will face more hard consequences.\n> \n> While it's an interesting idea to consider making exceptions to the\n> limit, and perhaps we'll do that (or have some kind of 'reserve' for\n> such cases), this isn't really any different than today, is it? 
We\n> might have a malloc() failure in the main path, end up in PG_CATCH() and\n> then try to do a CopyErrorData() and have another malloc() failure.\n> \n> If we can rearrange the code to make this less likely to happen, by\n> doing a bit more work to free() resources used in the main path before\n> trying to do new allocations, then, sure, let's go ahead and do that,\n> but that's independent from this effort.\n\nI agree that rearranging efforts can be made independently. The code in \nthe letter above was shown just as a demo of the case I'm worried about.\nIMO, the thing that should be implemented here is a recursion level for \nthe memory limit. If processing the error, we fall into recursion with \nthis limit - we should ignore it.\nI imagine custom extensions that use PG_CATCH() and allocate some data \nthere. At least we can raise the level of error to FATAL.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Thu, 19 Oct 2023 09:57:08 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Greetings,\n\n* Andrei Lepikhov (a.lepikhov@postgrespro.ru) wrote:\n> On 19/10/2023 02:00, Stephen Frost wrote:\n> > * Andrei Lepikhov (a.lepikhov@postgrespro.ru) wrote:\n> > > On 29/9/2023 09:52, Andrei Lepikhov wrote:\n> > > > On 22/5/2023 22:59, reid.thompson@crunchydata.com wrote:\n> > > > > Attach patches updated to master.\n> > > > > Pulled from patch 2 back to patch 1 a change that was also pertinent\n> > > > > to patch 1.\n> > > > +1 to the idea, have doubts on the implementation.\n> > > > \n> > > > I have a question. I see the feature triggers ERROR on the exceeding of\n> > > > the memory limit. The superior PG_CATCH() section will handle the error.\n> > > > As I see, many such sections use memory allocations. What if some\n> > > > routine, like the CopyErrorData(), exceeds the limit, too? In this case,\n> > > > we could repeat the error until the top PG_CATCH(). Is this correct\n> > > > behaviour? Maybe to check in the exceeds_max_total_bkend_mem() for\n> > > > recursion and allow error handlers to slightly exceed this hard limit?\n> > \n> > > By the patch in attachment I try to show which sort of problems I'm worrying\n> > > about. In some PП_CATCH() sections we do CopyErrorData (allocate some\n> > > memory) before aborting the transaction. So, the allocation error can move\n> > > us out of this section before aborting. We await for soft ERROR message but\n> > > will face more hard consequences.\n> > \n> > While it's an interesting idea to consider making exceptions to the\n> > limit, and perhaps we'll do that (or have some kind of 'reserve' for\n> > such cases), this isn't really any different than today, is it? 
We\n> > might have a malloc() failure in the main path, end up in PG_CATCH() and\n> > then try to do a CopyErrorData() and have another malloc() failure.\n> > \n> > If we can rearrange the code to make this less likely to happen, by\n> > doing a bit more work to free() resources used in the main path before\n> > trying to do new allocations, then, sure, let's go ahead and do that,\n> > but that's independent from this effort.\n> \n> I agree that rearranging efforts can be made independently. The code in the\n> letter above was shown just as a demo of the case I'm worried about.\n> IMO, the thing that should be implemented here is a recursion level for the\n> memory limit. If processing the error, we fall into recursion with this\n> limit - we should ignore it.\n> I imagine custom extensions that use PG_CATCH() and allocate some data\n> there. At least we can raise the level of error to FATAL.\n\nIgnoring such would defeat much of the point of this effort- which is to\nget to a position where we can say with some confidence that we're not\ngoing to go over some limit that the user has set and therefore not\nallow ourselves to end up getting OOM killed. These are all the same\nissues that already exist today on systems which don't allow overcommit\ntoo, there isn't anything new here in regards to these risks, so I'm not\nreally keen to complicate this to deal with issues that are already\nthere.\n\nPerhaps once we've got the basics in place then we could consider\nreserving some space for handling such cases.. but I don't think it'll\nactually be very clean and what if we have an allocation that goes\nbeyond what that reserved space is anyway? Then we're in the same spot\nagain where we have the choice of either failing the allocation in a\nless elegant way than we might like to handle that error, or risk\ngetting outright kill'd by the kernel. Of those choices, sure seems\nlike failing the allocation is the better way to go.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 19 Oct 2023 18:06:10 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\nOn 2023-10-19 18:06:10 -0400, Stephen Frost wrote:\n> Ignoring such would defeat much of the point of this effort- which is to\n> get to a position where we can say with some confidence that we're not\n> going to go over some limit that the user has set and therefore not\n> allow ourselves to end up getting OOM killed.\n\nI think that is a good medium to long term goal. I do however think that we'd\nbe better off merging the visibility of memory allocations soon-ish and\nimplement the limiting later. There's a lot of hairy details to get right for\nthe latter, and even just having visibility will be a huge improvement.\n\nI think even patch 1 is doing too much at once. I doubt the DSM stuff is\nquite right.\n\nI'm unconvinced it's a good idea to split the different types of memory\ncontexts out. That just exposes too much implementation detail stuff without a\ngood reason.\n\nI think the overhead even just the tracking implies right now is likely too\nhigh and needs to be optimized. It should be a single math operation, not\ntracking things in multiple fields. I don't think pg_sub_u64_overflow() should\nbe in the path either, that suddenly adds conditional branches. You really\nought to look at the difference in assembly for the hot functions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Oct 2023 15:22:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2023-10-19 18:06:10 -0400, Stephen Frost wrote:\n> > Ignoring such would defeat much of the point of this effort- which is to\n> > get to a position where we can say with some confidence that we're not\n> > going to go over some limit that the user has set and therefore not\n> > allow ourselves to end up getting OOM killed.\n> \n> I think that is a good medium to long term goal. I do however think that we'd\n> be better off merging the visibility of memory allocations soon-ish and\n> implement the limiting later. There's a lot of hairy details to get right for\n> the latter, and even just having visibility will be a huge improvement.\n\nI agree that having the visibility will be a great improvement and\nperhaps could go in separately, but I don't know that I agree that the\nlimits are going to be that much of an issue. In any case, there's been\nwork ongoing on this and that'll be posted soon. I was just trying to\naddress the general comment raised in this sub-thread here.\n\n> I think even patch 1 is doing too much at once. I doubt the DSM stuff is\n> quite right.\n\nGetting DSM right has certainly been tricky, along with other things,\nbut we've been working towards, and continue to work towards, getting\neverything to line up nicely between memory context allocations of\nvarious types and the amounts which are being seen as malloc'd/free'd.\nThere's been parts of this also reworked to allow us to see per-backend\nreservations as well as total reserved and to get those numbers able to\nbe matched up inside of a given transaction using the statistics system.\n\n> I'm unconvinced it's a good idea to split the different types of memory\n> contexts out. That just exposes too much implementation detail stuff without a\n> good reason.\n\nDSM needs to be independent anyway ... 
as for the others, perhaps we\ncould combine them, though that's pretty easily done later and for now\nit's been useful to see them split out as we've been working on the\npatch.\n\n> I think the overhead even just the tracking implies right now is likely too\n> high and needs to be optimized. It should be a single math operation, not\n> tracking things in multiple fields. I don't think pg_sub_u64_overflow() should\n> be in the path either, that suddenly adds conditional branches. You really\n> ought to look at the difference in assembly for the hot functions.\n\nThis has been improved in the most recent work and we'll have that\nposted soon, probably best to hold off from larger review of this right\nnow- as mentioned, I was just trying to address the specific question in\nthis sub-thread since a new patch is coming soon.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 19 Oct 2023 18:49:21 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On 20/10/2023 05:06, Stephen Frost wrote:\n> Greetings,\n> \n> * Andrei Lepikhov (a.lepikhov@postgrespro.ru) wrote:\n>> On 19/10/2023 02:00, Stephen Frost wrote:\n>>> * Andrei Lepikhov (a.lepikhov@postgrespro.ru) wrote:\n>>>> On 29/9/2023 09:52, Andrei Lepikhov wrote:\n>>>>> On 22/5/2023 22:59, reid.thompson@crunchydata.com wrote:\n>>>>>> Attach patches updated to master.\n>>>>>> Pulled from patch 2 back to patch 1 a change that was also pertinent\n>>>>>> to patch 1.\n>>>>> +1 to the idea, have doubts on the implementation.\n>>>>>\n>>>>> I have a question. I see the feature triggers ERROR on the exceeding of\n>>>>> the memory limit. The superior PG_CATCH() section will handle the error.\n>>>>> As I see, many such sections use memory allocations. What if some\n>>>>> routine, like the CopyErrorData(), exceeds the limit, too? In this case,\n>>>>> we could repeat the error until the top PG_CATCH(). Is this correct\n>>>>> behaviour? Maybe to check in the exceeds_max_total_bkend_mem() for\n>>>>> recursion and allow error handlers to slightly exceed this hard limit?\n>>>\n>>>> By the patch in attachment I try to show which sort of problems I'm worrying\n>>>> about. In some PП_CATCH() sections we do CopyErrorData (allocate some\n>>>> memory) before aborting the transaction. So, the allocation error can move\n>>>> us out of this section before aborting. We await for soft ERROR message but\n>>>> will face more hard consequences.\n>>>\n>>> While it's an interesting idea to consider making exceptions to the\n>>> limit, and perhaps we'll do that (or have some kind of 'reserve' for\n>>> such cases), this isn't really any different than today, is it? 
We\n>>> might have a malloc() failure in the main path, end up in PG_CATCH() and\n>>> then try to do a CopyErrorData() and have another malloc() failure.\n>>>\n>>> If we can rearrange the code to make this less likely to happen, by\n>>> doing a bit more work to free() resources used in the main path before\n>>> trying to do new allocations, then, sure, let's go ahead and do that,\n>>> but that's independent from this effort.\n>>\n>> I agree that rearranging efforts can be made independently. The code in the\n>> letter above was shown just as a demo of the case I'm worried about.\n>> IMO, the thing that should be implemented here is a recursion level for the\n>> memory limit. If processing the error, we fall into recursion with this\n>> limit - we should ignore it.\n>> I imagine custom extensions that use PG_CATCH() and allocate some data\n>> there. At least we can raise the level of error to FATAL.\n> \n> Ignoring such would defeat much of the point of this effort- which is to\n> get to a position where we can say with some confidence that we're not\n> going to go over some limit that the user has set and therefore not\n> allow ourselves to end up getting OOM killed. These are all the same\n> issues that already exist today on systems which don't allow overcommit\n> too, there isn't anything new here in regards to these risks, so I'm not\n> really keen to complicate this to deal with issues that are already\n> there.\n> \n> Perhaps once we've got the basics in place then we could consider\n> reserving some space for handling such cases.. but I don't think it'll\n> actually be very clean and what if we have an allocation that goes\n> beyond what that reserved space is anyway? Then we're in the same spot\n> again where we have the choice of either failing the allocation in a\n> less elegant way than we might like to handle that error, or risk\n> getting outright kill'd by the kernel. 
Of those choices, sure seems\n> like failing the allocation is the better way to go.\n\nI've got your point.\nThe only issue I worry about is the uncertainty and clutter that can be \ncreated by this feature. In the worst case, when we have a complex error \nstack (including the extension's CATCH sections, exceptions in stored \nprocedures, etc.), the backend will throw the memory limit error \nrepeatedly. Of course, one failed backend looks better than a \nsurprisingly killed postmaster, but the mix of different error reports \nand details looks terrible and challenging to debug in the case of \ntrouble. So, may we throw a FATAL error if we reach this limit while \nhandling an exception?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Fri, 20 Oct 2023 09:36:07 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Greetings,\n\n* Andrei Lepikhov (a.lepikhov@postgrespro.ru) wrote:\n> The only issue I worry about is the uncertainty and clutter that can be\n> created by this feature. In the worst case, when we have a complex error\n> stack (including the extension's CATCH sections, exceptions in stored\n> procedures, etc.), the backend will throw the memory limit error repeatedly.\n\nI'm not seeing what additional uncertainty or clutter there is- this is,\nagain, exactly the same as what happens today on a system with\novercommit disabled and I don't feel like we get a lot of complaints\nabout this today.\n\n> Of course, one failed backend looks better than a surprisingly killed\n> postmaster, but the mix of different error reports and details looks\n> terrible and challenging to debug in the case of trouble. So, may we throw a\n> FATAL error if we reach this limit while handling an exception?\n\nI don't see why we'd do that when we can do better- we just fail\nwhatever the ongoing query or transaction is and allow further requests\non the same connection. We already support exactly that and it works\nreally rather well and I don't see why we'd throw that away because\nthere's a different way to get an OOM error.\n\nIf you want to make the argument that we should throw FATAL on OOM when\nhandling an exception, that's something you could argue independently of\nthis effort already today, but I don't think you'll get agreement that\nit's an improvement.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 20 Oct 2023 08:39:33 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On 20/10/2023 19:39, Stephen Frost wrote:\nGreetings,\n> * Andrei Lepikhov (a.lepikhov@postgrespro.ru) wrote:\n>> The only issue I worry about is the uncertainty and clutter that can be\n>> created by this feature. In the worst case, when we have a complex error\n>> stack (including the extension's CATCH sections, exceptions in stored\n>> procedures, etc.), the backend will throw the memory limit error repeatedly.\n> \n> I'm not seeing what additional uncertainty or clutter there is- this is,\n> again, exactly the same as what happens today on a system with\n> overcommit disabled and I don't feel like we get a lot of complaints\n> about this today.\n\nMaybe I missed something or see this feature from an alternate point of \nview (as an extension developer), but overcommit is more useful so far: \nit kills a process.\nIt means that after restart, the backend/background worker will have an \ninitial internal state. With this limit enabled, we need to remember \nthat each function call can cause an error, and we have to remember it \nusing static PG_CATCH sections where we must rearrange local variables \nto the initial (?) state. So, it complicates development.\nOf course, this limit is a good feature, but from my point of view, it \nwould be better to kill a memory-consuming backend instead of throwing \nan error. At least for now, we don't have a technique to repeat query \nplanning with chances to build a more effective plan.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 24 Oct 2023 09:39:42 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\nOn 2023-10-24 09:39:42 +0700, Andrei Lepikhov wrote:\n> On 20/10/2023 19:39, Stephen Frost wrote:\n> Greetings,\n> > * Andrei Lepikhov (a.lepikhov@postgrespro.ru) wrote:\n> > > The only issue I worry about is the uncertainty and clutter that can be\n> > > created by this feature. In the worst case, when we have a complex error\n> > > stack (including the extension's CATCH sections, exceptions in stored\n> > > procedures, etc.), the backend will throw the memory limit error repeatedly.\n> > \n> > I'm not seeing what additional uncertainty or clutter there is- this is,\n> > again, exactly the same as what happens today on a system with\n> > overcommit disabled and I don't feel like we get a lot of complaints\n> > about this today.\n> \n> Maybe I missed something or see this feature from an alternate point of view\n> (as an extension developer), but overcommit is more useful so far: it kills\n> a process.\n\nIn case of postgres it doesn't just kill one postgres, it leads to *all*\nconnections being terminated.\n\n\n> It means that after restart, the backend/background worker will have an\n> initial internal state. With this limit enabled, we need to remember that\n> each function call can cause an error, and we have to remember it using\n> static PG_CATCH sections where we must rearrange local variables to the\n> initial (?) state. So, it complicates development.\n\nYou need to be aware of errors being thrown regardless this feature, as\nout-of-memory errors can be encountered today already. There also are many\nother kinds of errors that can be thrown.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 23 Oct 2023 19:44:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hello!\n\nEarlier in this thread, the pgbench results were published, where with a strong memory limit of 100MB\na significant, about 10%, decrease in TPS was observed [1].\n\nUsing dedicated server with 12GB RAM and methodology described in [3], i performed five series\nof measurements for the patches from the [2].\nThe series were like this:\n1) unpatched 16th version at the REL_16_BETA1 (e0b82fc8e83) as close to [2] in time.\n2) patched REL_16_BETA1 at e0b82fc8e83 with undefined max_total_backend_memory GUC (with default value = 0).\n3) patched REL_16_BETA1 with max_total_backend_memory = 16GB\n4) the same with max_total_backend_memory = 8GB\n5) and again with max_total_backend_memory = 200MB\n\nMeasurements with max_total_backend_memory = 100MB were not be carried out,\nwith limit 100MB the server gave an error on startup:\nFATAL: configured max_total_backend_memory 100MB is <= shared_memory_size 143MB\nSo i used 200MB to retain all other GUCs the same.\n\nPgbench gave the following results:\n1) and 2) almost the same: ~6350 TPS. See orange and green\ndistributions on the attached graph.png respectively.\n3) and 4) identical to each other (~6315 TPS) and a bit slower than 1) and 2) by ~0,6%.\nSee blue and yellow distributions respectively.\n5) is slightly slower (~6285 TPS) than 3) and 4) by another 0,5%. (grey distribution)\nThe standard error in all series was ~0.2%. There is a raw data in the raw_data.txt.\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n[1] https://www.postgresql.org/message-id/3178e9a1b7acbcf023fafed68ca48d76afc07907.camel%40crunchydata.com\n[2] https://www.postgresql.org/message-id/4edafedc0f8acb12a2979088ac1317bd7dd42145.camel%40crunchydata.com\n[3] https://www.postgresql.org/message-id/1d3a7d8f-cb7c-4468-a578-d8a1194ea2de%40postgrespro.ru",
"msg_date": "Tue, 26 Dec 2023 13:49:00 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On 12/26/23 11:49, Anton A. Melnikov wrote:\n> Hello!\n> \n> Earlier in this thread, the pgbench results were published, where with a\n> strong memory limit of 100MB\n> a significant, about 10%, decrease in TPS was observed [1].\n> \n> Using dedicated server with 12GB RAM and methodology described in [3], i\n> performed five series\n> of measurements for the patches from the [2].\n\nCan you share some info about the hardware? For example the CPU model,\nnumber of cores, and so on. 12GB RAM is not quite huge, so presumably it\nwas a small machine.\n\n> The series were like this:\n> 1) unpatched 16th version at the REL_16_BETA1 (e0b82fc8e83) as close to\n> [2] in time.\n> 2) patched REL_16_BETA1 at e0b82fc8e83 with undefined\n> max_total_backend_memory GUC (with default value = 0).\n> 3) patched REL_16_BETA1 with max_total_backend_memory = 16GB\n> 4) the same with max_total_backend_memory = 8GB\n> 5) and again with max_total_backend_memory = 200MB\n> \n\nOK\n\n> Measurements with max_total_backend_memory = 100MB were not be carried out,\n> with limit 100MB the server gave an error on startup:\n> FATAL: configured max_total_backend_memory 100MB is <=\n> shared_memory_size 143MB\n> So i used 200MB to retain all other GUCs the same.\n> \n\nI'm not very familiar with the patch yet, but this seems a bit strange.\nWhy should shared_buffers be included this limit?\n\n> Pgbench gave the following results:\n> 1) and 2) almost the same: ~6350 TPS. See orange and green\n> distributions on the attached graph.png respectively.\n> 3) and 4) identical to each other (~6315 TPS) and a bit slower than 1)\n> and 2) by ~0,6%.\n> See blue and yellow distributions respectively.\n> 5) is slightly slower (~6285 TPS) than 3) and 4) by another 0,5%. (grey\n> distribution)\n> The standard error in all series was ~0.2%. There is a raw data in the\n> raw_data.txt.\n> \n\nI think 6350 is a pretty terrible number, especially for scale 8, which\nis maybe 150MB of data. 
I think that's a pretty clear sign the system\nwas hitting some other bottleneck, which can easily mask regressions in\nthe memory allocation code. AFAICS the pgbench runs were regular r/w\nbenchmarks, so I'd bet it was hitting I/O, and possibly even subject to\nsome random effects at that level.\n\nI think what would be interesting are runs with\n\n pgbench -M prepared -S -c $N -j $N\n\ni.e. read-only tests (to not hit I/O), and $N being sufficiently large\nto maybe also show some concurrency/locking bottlenecks, etc.\n\nI may do some benchmarks if I happen to find a bit of time, but maybe\nyou could collect such numbers too?\n\nThe other benchmark that might be interesting is more OLAP, with low\nconcurrency but backends allocating a lot of memory.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 26 Dec 2023 18:28:43 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\nI wanted to take a look at the patch, and I noticed it's broken since\n3d51cb5197 renamed a couple pgstat functions in August. I plan to maybe\ndo some benchmarks etc. preferably on current master, so here's a\nversion fixing that minor bitrot.\n\nAs for the patch, I only skimmed through the thread so far, to get some\nidea of what the approach and goals are, etc. I didn't look at the code\nyet, so can't comment on that.\n\nHowever, at pgconf.eu a couple week ago I had quite a few discussions\nabout such \"backend memory limit\" could/should work in principle, and\nI've been thinking about ways to implement this. So let me share some\nthoughts about how this patch aligns with that ...\n\n(FWIW it's not my intent to hijack or derail this patch in any way, but\nthere's a couple things I think we should do differently.)\n\nI'm 100% on board with having a memory limit \"above\" work_mem. It's\nreally annoying that we have no way to restrict the amount of memory a\nbackend can allocate for complex queries, etc.\n\nBut I find it a bit strange that we aim to introduce a \"global\" memory\nlimit for all backends combined first. I'm not against having that too,\nbut it's not the feature I usually wish to have. I need some protection\nagainst runaway backends, that happen to allocate a lot memory.\n\nSimilarly, I'd like to be able to have different limits depending on\nwhat the backend does - a backend doing OLAP may naturally need more\nmemory, while a backend doing OLTP may have a much tighter limit.\n\nBut with a single global limit none of this is possible. It may help\nreducing the risk of unexpected OOM issues (not 100%, but useful), but\nit can't limit the impact to the one backend - if memory starts runnning\nout, it will affect all other backends a bit randomly (depending on the\norder in which the backends happen to allocate memory). 
And it does not\nconsider what workloads the backends execute.\n\nLet me propose a slightly different architecture that I imagined while\nthinking about this. It's not radically different from what the patch\ndoes, but it focuses on the local accounting first. I believe it's\npossible to extend this to enforce the global limit too.\n\nFWIW I haven't tried implementing this - I don't want to \"hijack\" this\nthread and do my own thing. I can take a stab at a PoC if needed.\n\nFirstly, I'm not quite happy with how all the memory contexts have to\ndo their own version of the accounting and memory checking. I think we\nshould move that into a new abstraction which I call \"memory pool\".\nIt's very close to \"memory context\" but it only deals with allocating\nblocks, not the chunks requested by palloc() etc. So when someone does\npalloc(), that may be AllocSetAlloc(). And instead of doing malloc()\nthat would do MemoryPoolAlloc(blksize), and then that would do all the\naccounting and checks, and then do malloc().\n\nThis may sound like an unnecessary indirection, but the idea is that a\nsingle memory pool would back many memory contexts (perhaps all for\na given backend). In principle we might even establish separate memory\npools for different parts of the memory context hierarchy, but I'm not\nsure we need that.\n\nI can imagine the pool could also cache blocks for cases when we create\nand destroy contexts very often, but glibc should already do that for\nus, I believe.\n\nFor me, the accounting and memory context is the primary goal. I wonder\nif we considered this context/pool split while working on the accounting\nfor hash aggregate, but I think we were too attached to doing all of it\nin the memory context hierarchy.\n\nOf course, this memory pool is per backend, and so would be the memory\naccounting and limit enforced by it. 
But I can imagine extending to do\na global limit similar to what the current patch does - using a counter\nin shared memory, or something. I haven't reviewed what's the overhead\nor how it handles cases when a backend terminates in some unexpected\nway. But whatever the current patch does, memory pool could do too.\n\n\nSecondly, I think there's an annoying issue with the accounting at the\nblock level - it makes it problematic to use low limit values. We double\nthe block size, so we may quickly end up with a block size a couple MBs,\nwhich means the accounting granularity gets very coarse.\n\nI think it'd be useful to introduce a \"backpressure\" between the memory\npool and the memory context, depending on how close we are to the limit.\nFor example if the limit is 128MB and the backend allocated 16MB so far,\nwe're pretty far away from the limit. So if the backend requests 8MB\nblock, that's fine and the memory pool should malloc() that. But if we\nalready allocated 100MB, maybe we should be careful and not allow 8MB\nblocks - the memory pool should be allowed to override this and return\njust 1MB block. Sure, this would have to be optional, and not all places\ncan accept a smaller block than requested (when the chunk would not fit\ninto the smaller block). It would require a suitable memory pool API\nand more work in the memory contexts, but it seems pretty useful.\nCertainly not something for v1.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 26 Dec 2023 22:52:06 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi!\n\nThanks for your interest and reply!\n\nOn 26.12.2023 20:28, Tomas Vondra wrote:\n\n> Can you share some info about the hardware? For example the CPU model,\n> number of cores, and so on. 12GB RAM is not quite huge, so presumably it\n> was a small machine.\n> \n\nIt is HP ProLanit 2x socket server. 2x6 cores Intel(R) Xeon(R) CPU X5675 @ 3.07GHz,\n2x12GB RAM, RAID from SSD drives.\nLinux 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/Linux\n\nOne cpu was disabled and some tweaks was made as Andres advised to avoid\nNUMA and other side effects.\n\nFull set of the configuration commands for server was like that:\nnumactl --cpunodebind=0 --membind=0 --physcpubind=1,3,5,7,9,11 bash\nsudo cpupower frequency-set -g performance\nsudo cpupower idle-set -D0\necho 3059000 | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq\n(Turbo Boost and hyperthreading was disabled in BIOS.)\n\n> I think what would be interesting are runs with\n> \n> pgbench -M prepared -S -c $N -j $N\n> \n> i.e. read-only tests (to not hit I/O), and $N being sufficiently large\n> to maybe also show some concurrency/locking bottlenecks, etc.\n> \n> I may do some benchmarks if I happen to find a bit of time, but maybe\n> you could collect such numbers too?\n\nFirstly, i repeated the same -c and -j values in read-only mode as you advised.\nAs one can see in the read-only.png, the absolute TPS value has increased\nsignificantly, by about 13 times.\nPatched version with limit 200Mb was slightly slower than with limit 0 by ~2%.\nThe standard error in all series was ~0.5%.\nSince the deviation has increased in comparison with rw test\nthe difference between unpatched version and patched ones with\nlimits 0, 8Gb and 16Gb is not sensible.\nThere is a raw data in the raw_data-read-only.txt.\n\n\n> I think 6350 is a pretty terrible number, especially for scale 8, which\n> is maybe 150MB of data. 
I think that's a pretty clear sign the system\n> was hitting some other bottleneck, which can easily mask regressions in\n> the memory allocation code. AFAICS the pgbench runs were regular r/w\n> benchmarks, so I'd bet it was hitting I/O, and possibly even subject to\n> some random effects at that level.\n>\n\nTo avoid a possible I/O bottleneck I followed these steps:\n- gave all 24G mem to cpu 0 rather than 12G as in [1];\n- created a ramdisk of 12G size;\n- disabled swap like that:\nnumactl --cpunodebind=0 --physcpubind=1,3,5,7,9,11 bash\nsudo swapoff -a\nsudo mkdir /mnt/ramdisk\nsudo mount -t tmpfs -o rw,size=12G tmpfs /mnt/ramdisk\n \nThe inst dir, data dir and log file were all on ramdisk.\n\nPgbench in rw mode gave the following results:\n- the unpatched version and the patched ones with\nlimits 0 and 16GB are almost the same: ~7470+-0.2% TPS.\n (orange, green and blue distributions on the RW-ramdisk.png respectively)\n- the patched version with limit 8GB is slightly slower than the three above;\n (yellow distribution)\n- the patched version with limit 200MB is slower than the first three\n by a measurable value of ~0.4% (~7440 TPS);\n (black distribution)\n The standard error in all series was ~0.2%. The raw data is in\n raw_data-rw-ramdisk.txt.\n\n\nFor the sake of completeness I'm going to repeat the read-only measurements\nwith ramdisk, and perform some tests with increased -c and -j values\nas you advised to find the possible point where concurrency/blocking\nbottlenecks start to play a role. And do this, of course, for the last\nversion of the patch. Thanks for rebasing it!\n\nIn general, I don't observe any considerable degradation in performance\nfrom this patch of several percent or even 10%, which was mentioned in [2].\n\n\nWith the best regards,\n\n-- \nAnton A. 
Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n[1] https://www.postgresql.org/message-id/1d3a7d8f-cb7c-4468-a578-d8a1194ea2de%40postgrespro.ru\n[2] https://www.postgresql.org/message-id/3178e9a1b7acbcf023fafed68ca48d76afc07907.camel%40crunchydata.com",
"msg_date": "Tue, 23 Jan 2024 14:47:18 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\nI took a closer look at this patch over the last couple days, and I did\na bunch of benchmarks to see how expensive the accounting is. The good\nnews is that I haven't observed any overhead - I did two simple tests,\nthat I think would be particularly sensitive to this:\n\n1) regular \"pgbench -S\" with scale 10, so tiny and not hitting I/O\n\n2) \"select count(*) from generate_series(1,N)\" where N is either 10k or\n1M, which should be enough to cause a fair number of allocations\n\nAnd I tested this on three branches - master (no patches applied),\npatched (but limit=0, so just accounting) and patched-limit (with limit\nset to 4.5GB, so high enough not to be hit).\n\nAttached are script and raw results for both benchmarks from two\nmachines, i5 (small, 4C) and xeon (bigger, 16C/32C), and a PDF showing\nthe results as candlestick charts (with 1 and 2 sigma intervals). AFAICS\nthere's no measurable difference between master and patched builds.\nWhich is good, clearly the local allowance makes the overhead negligible\n(at least in these benchmarks).\n\n\nNow, for the patch itself - I already said some of the stuff about a\nmonth ago [1], but I'll repeat some of it for the sake of completeness\nbefore I get to comments about the code etc.\n\nFirstly, I agree with the goal of having a way to account for memory\nused by the backends, and also ability to enforce some sort of limit.\nIt's difficult to track the memory at the OS level (interpreting RSS\nvalues is not trivial), and work_mem is not sufficient to enforce a\nbackend-level limit, not even talking about a global limit.\n\nBut as I said earlier, it seems quite strange to start by introducing\nsome sort of global limit, combining memory for all backends. I do\nunderstand that the intent is to have such global limit in order to\nprevent issues with the OOM killer and/or not to interfere with other\nstuff running on the same machine. 
And while I'm not saying we should\nnot have such limit, every time I wished to have a memory limit it was a\nbackend-level one. Ideally a workmem-like limit that would \"adjust\" the\nwork_mem values used by the optimizer (but that's not what this patch\naims to do), or at least a backstop in case something goes wrong (say, a\nmemory leak, OLTP application issuing complex queries, etc.).\n\nThe accounting and infrastructure introduced by the patch seems to be\nsuitable for both types of limits (global and per-backend), but then it\ngoes for the global limit first. It may seem simpler to implement, but\nin practice there's a bunch of problems mentioned earlier, but ignored.\n\nFor example, I really don't believe it's OK to just report the error in\nthe first backend that happens to hit the limit. Bertrand mentioned that\n[2], asking about a situation when a runaway process allocates 99% of\nmemory. The response to that was\n\n> Initially, we believe that punishing the detector is reasonable if we\n> can help administrators avoid the OOM killer/resource starvation. But\n> we can and should expand on this idea.\n\nwhich I find rather unsatisfactory. Belief is not an argument, and it\nrelies on the assumption that this helps the administrator to avoid the\nOOM killer etc. ISTM it can easily mean the administrator can't even\nconnect to the database, run a query (to inspect the new system views),\netc. because any of that would hit the memory limit. That seems more\nlike a built-in DoS facility ...\n\nIf this was up to me, I'd probably start with the per-backend limit. But\nthat's my personal opinion.\n\nNow, some comments about the code etc. Stephen was promising a new patch\nversion in October [6], but that didn't happen yet, so I looked at the\npatches I rebased in December.\n\n\n0001\n----\n\n1) I don't see why we should separate memory by context type - without\nknowing which exact backends are using the memory, this seems pretty\nirrelevant/useless. 
Either it's private or shared memory, I don't think\nthe accounting should break this into more counters. Also, while I'm not\naware of anyone proposing a new memory context type, I doubt we'd want\nto add more and more counters if that happens.\n\nsuggestion: only two counters, for local and shared memory\n\nIf we absolutely want to have per-type counters, we should not mix the\nprivate memory and shared memory ones.\n\n\n2) I have my doubts about the tracking of shared memory. It seems weird\nto count them only into the backend that allocated them, and transfer\nthem to \"global\" on process exit. Surely we should know which shared\nmemory is meant to be long-lived from the start, no?\n\nFor example, it's not uncommon that an extension allocates a chunk of\nshared memory to store some sort of global state (e.g. BDR/pglogical\ndoes that, I'm sure other extensions do that). But the process that\nallocates the shared memory keeps running. AFAICS this would be tracked\nas backend DSM memory, which I'm not sure this is what we want ...\n\nAnd I'm not alone in thinking this should work differently - [3], [4].\n\n\n3) We limit the floor of allocation counters to zero.\n\nThis seems really weird. Why would it be necessary? I mean, how could we\nget a negative amount of memory? Seems like it might easily mask issues\nwith incorrect accounting, or something.\n\n\n4) pg_stat_memory_allocation\n\nMaybe pg_stat_memory_usage would be a better name ...\n\n\n5) The comments/docs repeatedly talk about \"dynamic nature\" and that it\nmakes the values not exact:\n\n> Due to the dynamic nature of memory allocations the allocated bytes\n> values may not be exact but should be sufficient for the intended\n> purposes.\n\nI don't understand what \"dynamic nature\" refers to, and why would it\nmake the values not exact. 
Presumably this refers to the \"allowance\" but\nhow is that \"dynamic nature\"?\n\nThis definitely needs to explain this better, with some basic estimate\nhow accurate the values are expected to be.\n\n\n6) The SGML docs keep recommending to use pg_size_pretty(). I find that\nunnecessary, no other docs reporting values in bytes do that.\n\n\n7) I see no reason to include shared_memory_size_mb in the view, and\nsame for shared_memory_size_in_huge_pages. It has little to do with\n\"allocated\" memory, IMO. And a value in \"MB\" goes directly against the\nsuggestion to use pg_size_pretty().\n\nsuggestion: drop this from the view\n\n\n8) In a lot of places we do\n\n context->mem_allocated += blksize;\n pgstat_report_allocated_bytes_increase(blksize, PG_ALLOC_ASET);\n\nMaybe we should integrate the two types of accounting, wrap them into a\nsingle function call, or something? This makes it very simple to forget\nupdating one of those places. AFAIC the patch tries to minimize the\nnumber of updates of the new shared counters, but with the allowance\nthat should not be an issue I think.\n\n\n9) This seems wrong. Why would it be OK to ever overflow? Isn't that a\nsign of incorrect accounting?\n\n /* Avoid allocated_bytes unsigned integer overflow on decrease */\n if (pg_sub_u64_overflow(*my_allocated_bytes, proc_allocated_bytes,\n&temp))\n {\n /* On overflow, set allocated bytes and allocator type bytes to\nzero */\n *my_allocated_bytes = 0;\n *my_aset_allocated_bytes = 0;\n *my_dsm_allocated_bytes = 0;\n *my_generation_allocated_bytes = 0;\n *my_slab_allocated_bytes = 0;\n }\n\n10) How could any of these values be negative? It's all capped to 0 and\nalso stored in uint64. Seems pretty useless.\n\n+SELECT\n+ datid > 0, pg_size_bytes(shared_memory_size) >= 0,\nshared_memory_size_in_huge_pages >= -1, global_dsm_allocated_bytes >= 0\n+FROM\n+ pg_stat_global_memory_allocation;\n+ ?column? | ?column? 
| ?column?\n+----------+----------+----------+----------\n+ t | t | t | t\n+(1 row)\n+\n+-- ensure that pg_stat_memory_allocation view exists\n+SELECT\n+ pid > 0, allocated_bytes >= 0, aset_allocated_bytes >= 0,\ndsm_allocated_bytes >= 0, generation_allocated_bytes >= 0,\nslab_allocated_bytes >= 0\n\n\n\n0003\n----\n\n1) The commit message says:\n\n> Further requests will not be allocated until dropping below the limit.\n> Keep this in mind when setting this value.\n\nThe SGML docs have a similar \"keep this in mind\" suggestion in relation\nto the 1MB local allowance. I find this pretty useless, as it doesn't\nreally say what to do with the information / what it means. I mean,\nshould I set the value higher, or what am I supposed to do? This needs\nto be understandable for the average user reading the SGML docs, who is\nunlikely to know the implementation details.\n\n\n2) > This limit does not affect auxiliary backend processes.\n\nThis seems pretty unfortunate, because in the cases where I actually saw\nthe OOM killer intervene, it was often because of auxiliary processes\nallocating a lot of memory (say, autovacuum with maintenance_work_mem\nset very high, etc.).\n\nIn a way, excluding these auxiliary processes from the limit seems\nto go against the idea of preventing the OOM killer.\n\n\n3) I don't think the patch ever explains what the \"local allowance\" is,\nhow the refill works, why it's done this way, etc. I do think I\nunderstand that now, but I had to go through the thread. That's not\nreally how it should be. There should be an explanation of how this\nworks somewhere (comment in mcxt.c? 
separate README?).\n\n\n4) > doc/src/sgml/monitoring.sgml\n\nNot sure why this removes the part about DSM being included only in the\nbackend that created it.\n\n\n5) > total_bkend_mem_bytes_available\n\nThe \"bkend\" name is strange (to save 2 characters), and the fact that it\ncombines local and shared memory seems confusing too.\n\n\n6)\n+ /*\n+ * Local counter to manage shared memory allocations. At backend\nstartup, set to\n+ * initial_allocation_allowance via pgstat_init_allocated_bytes().\nDecrease as\n+ * memory is malloc'd. When exhausted, atomically refill if available from\n+ * ProcGlobal->max_total_bkend_mem via exceeds_max_total_bkend_mem().\n+ */\n+uint64\t\tallocation_allowance = 0;\n\nBut this allowance is for shared memory too, and shared memory is not\nallocated using malloc.\n\n\n7) There's a lot of new global variables. Maybe it'd be better to group\nthem into a struct, or something?\n\n\n8) Why does pgstat_set_allocated_bytes_storage need the new return?\n\n\n9) Unnecessary changes in src/backend/utils/hash/dynahash.c (whitespace,\nnew comment that seems not very useful)\n\n\n10) Shouldn't we have a new malloc wrapper that does the check? That way\nwe wouldn't need to have the call in every memory context place calling\nmalloc.\n\n\n11) I'm not sure about the ereport() calls after hitting the limit.\nAndres thinks it might lead to recursion [4], but Stephen [5] seems to\nthink this does not really make the situation worse. 
I'm not sure about\nthat, though - I agree the ENOMEM can already happen, but maybe\nhaving a limit (which clearly needs to be stricter than the limit\nused by kernel for OOM) would be more likely to hit?\n\n12) In general, I agree with Andres [4] that we'd be better off to focus\non the accounting part, see how it works in practice, and then add some\nability to limit memory after a while.\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/4fb99fb7-8a6a-2828-dd77-e2f1d75c7dd0%40enterprisedb.com\n\n[2]\nhttps://www.postgresql.org/message-id/3b9b90c6-f4ae-a7df-6519-847ea9d5fe1e%40amazon.com\n\n[3]\nhttps://www.postgresql.org/message-id/20230110023118.qqbjbtyecofh3uvd%40awork3.anarazel.de\n\n[4]\nhttps://www.postgresql.org/message-id/20230113180411.rdqbrivz5ano2uat%40awork3.anarazel.de\n\n[5]\nhttps://www.postgresql.org/message-id/ZTArWsctGn5fEVPR%40tamriel.snowman.net\n\n[6]\nhttps://www.postgresql.org/message-id/ZTGycSYuFrsixv6q%40tamriel.snowman.net\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 28 Jan 2024 20:11:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hi,\n\n> I took a closer look at this patch over the last couple days, and I did\n> a bunch of benchmarks to see how expensive the accounting is. The good\n> news is that I haven't observed any overhead - I did two simple tests,\n> that I think would be particularly sensitive to this:\n>\n> [...]\n\nJust wanted to let you know that v20231226 doesn't apply. The patch\nneeds love from somebody interested in it.\n\nBest regards,\nAleksander Alekseev (wearing a co-CFM hat)\n\n\n",
"msg_date": "Tue, 12 Mar 2024 16:30:23 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On 12.03.2024 16:30, Aleksander Alekseev wrote:\n\n> Just wanted to let you know that v20231226 doesn't apply. The patch\n> needs love from somebody interested in it.\n\nThanks for pointing to this!\nHere is a version updated for the current master.\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 13 Mar 2024 10:41:45 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "On 13.03.2024 10:41, Anton A. Melnikov wrote:\n\n> Here is a version updated for the current master.\n>\n\nDuring patch updating I mistakenly added double counting of deallocated blocks.\nThat's why the tests in the patch tester failed.\nFixed it and squashed fix 0002 with 0001.\nHere is the fixed version.\n\nWith the best wishes!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 14 Mar 2024 23:36:12 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
},
{
"msg_contents": "Hello Anton,\n\n14.03.2024 23:36, Anton A. Melnikov wrote:\n> On 13.03.2024 10:41, Anton A. Melnikov wrote:\n>\n>> Here is a version updated for the current master.\n>>\n>\n> During patch updating i mistakenly added double counting of deallocatated blocks.\n> That's why the tests in the patch tester failed.\n> Fixed it and squashed fix 0002 with 0001.\n> Here is fixed version.\n\nPlease try the following with the patches applied:\necho \"shared_buffers = '1MB'\nmax_total_backend_memory = '10MB'\" > /tmp/extra.config\n\nCPPFLAGS=\"-Og\" ./configure --enable-tap-tests --enable-debug --enable-cassert ...\nTEMP_CONFIG=/tmp/extra.config make check\n\nIt fails for me as follows:\n...\n# postmaster did not respond within 60 seconds, examine \".../src/test/regress/log/postmaster.log\" for the reason\n...\nsrc/test/regress/log/postmaster.log contains:\n...\nTRAP: failed Assert(\"ret != NULL\"), File: \"mcxt.c\", Line: 1327, PID: 4109270\nTRAP: failed Assert(\"ret != NULL\"), File: \"mcxt.c\", Line: 1327, PID: 4109271\npostgres: autovacuum launcher (ExceptionalCondition+0x69)[0x55ce441fcc6e]\npostgres: autovacuum launcher (palloc0+0x0)[0x55ce4422eb67]\npostgres: logical replication launcher (ExceptionalCondition+0x69)[0x55ce441fcc6e]\npostgres: autovacuum launcher (InitDeadLockChecking+0xa6)[0x55ce4408a6f0]\npostgres: logical replication launcher (palloc0+0x0)[0x55ce4422eb67]\npostgres: logical replication launcher (InitDeadLockChecking+0x45)[0x55ce4408a68f]\npostgres: autovacuum launcher (InitProcess+0x600)[0x55ce4409c6f2]\npostgres: logical replication launcher (InitProcess+0x600)[0x55ce4409c6f2]\npostgres: autovacuum launcher (+0x44b4e2)[0x55ce43ff24e2]\n...\ngrep TRAP src/test/regress/log/postmaster.log | wc -l\n445\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 15 Mar 2024 10:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends."
}
] |
[
{
"msg_contents": "I noticed that buildfarm member margay, which just recently started\nrunning tests on REL_12_STABLE, is failing the plpython tests in that\nbranch [1], though it's happy in v13 and later. The failures appear\ndue to syntax errors in Python \"except\" statements, and it's visible\nin some of the tests that we are failing to convert ancient Python\n\"except\" syntax to what Python 3 wants:\n\ndiff -U3 /home/marcel/build-farm-14/buildroot/REL_12_STABLE/pgsql.build/src/pl/plpython/expected/python3/plpython_types_3.out /home/marcel/build-farm-14/buildroot/REL_12_STABLE/pgsql.build/src/pl/plpython/results/python3/plpython_types.out\n--- /home/marcel/build-farm-14/buildroot/REL_12_STABLE/pgsql.build/src/pl/plpython/expected/python3/plpython_types_3.out\t2022-08-31 22:29:51.070597370 +0200\n+++ /home/marcel/build-farm-14/buildroot/REL_12_STABLE/pgsql.build/src/pl/plpython/results/python3/plpython_types.out\t2022-08-31 22:29:53.053544840 +0200\n@@ -400,15 +400,16 @@\n import marshal\n try:\n return marshal.loads(x)\n-except ValueError as e:\n+except ValueError, e:\n return 'FAILED: ' + str(e)\n $$ LANGUAGE plpython3u;\n+ERROR: could not compile PL/Python function \"test_type_unmarshal\"\n+DETAIL: SyntaxError: invalid syntax (<string>, line 6)\n\nSo it would seem that regress-python3-mangle.mk's sed command to\nperform this transformation isn't working on margay's sed.\n\nWe've had to hack regress-python3-mangle.mk for Solaris \"sed\" before\n(cf commit c3556f6fa). This seems to be another instance of that\nsame crummy implementation of '*' patterns. I suppose Noah missed\nthis angle at the time because the problematic pattern had already\nbeen removed in v13 and up (45223fd9c). 
But it's still there in v12.\n\nI am not completely sure why buildfarm member wrasse isn't failing\nsimilarly, but a likely theory is that Noah has got some other sed\nin his search path there.\n\nI confirmed on the gcc farm's Solaris 11 box that the pattern\ndoesn't work as expected with /usr/bin/sed:\n\ntgl@gcc-solaris11:~$ echo except foo, bar: | sed -e 's/except \\([[:alpha:]][[:alpha:].]*\\), *\\([[:alpha:]][[:alpha:]]*\\):/except \\1 as \\2:/g'\nexcept foo, bar:\n\nWe could perhaps do this instead:\n\n$ echo except foo, bar: | sed -e '/^ *except.*,.*: *$/s/, / as /g'\nexcept foo as bar:\n\nIt's a little bit more brittle, but Python doesn't allow any other\ncommands on the same line does it?\n\nAnother idea is to try to find some other sed to use. I see that\nconfigure already does that on most platforms, because ax_pthread.m4\nhas \"AC_REQUIRE([AC_PROG_SED])\". But if there's still someone\nout there using --disable-thread-safety, they might think this is\nan annoying movement of the build-requirements goalposts.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2022-08-31%2020%3A00%3A05\n\n\n",
"msg_date": "Wed, 31 Aug 2022 18:25:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Solaris \"sed\" versus pre-v13 plpython tests"
},
{
"msg_contents": "I wrote:\n> I confirmed on the gcc farm's Solaris 11 box that the pattern\n> doesn't work as expected with /usr/bin/sed:\n\n> tgl@gcc-solaris11:~$ echo except foo, bar: | sed -e 's/except \\([[:alpha:]][[:alpha:].]*\\), *\\([[:alpha:]][[:alpha:]]*\\):/except \\1 as \\2:/g'\n> except foo, bar:\n\n> We could perhaps do this instead:\n\n> $ echo except foo, bar: | sed -e '/^ *except.*,.*: *$/s/, / as /g'\n> except foo as bar:\n\n> It's a little bit more brittle, but Python doesn't allow any other\n> commands on the same line does it?\n\nOh ... after a bit more experimentation, there's an easier way.\nApparently the real problem is that Solaris' sed doesn't handle\n[[:alpha:]] (I wonder if this is locale-dependent?). I get\ncorrect results after expanding it manually, eg\n\ntgl@gcc-solaris11:~$ echo except foo, bar: | sed -e 's/except \\([a-z][a-z.]*\\), *\\([a-z][a-zA-Z]*\\):/except \\1 as \\2:/g'\nexcept foo as bar:\n\nWe aren't likely to need anything beyond a-zA-Z and maybe 0-9,\nso I'll go fix it that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Aug 2022 20:57:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Solaris \"sed\" versus pre-v13 plpython tests"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 06:25:01PM -0400, Tom Lane wrote:\n> I am not completely sure why buildfarm member wrasse isn't failing\n> similarly\n\nwrasse disabled plpython in v12-, from day one, due to this and a crash bug\nthat I shelved. I will be interested to see how margay reacts to your fix.\n\n\n",
"msg_date": "Wed, 31 Aug 2022 21:25:55 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Solaris \"sed\" versus pre-v13 plpython tests"
},
{
"msg_contents": "On 2022-Aug-31, Noah Misch wrote:\n\n> On Wed, Aug 31, 2022 at 06:25:01PM -0400, Tom Lane wrote:\n> > I am not completely sure why buildfarm member wrasse isn't failing\n> > similarly\n> \n> wrasse disabled plpython in v12-, from day one, due to this and a crash bug\n> that I shelved. I will be interested to see how margay reacts to your fix.\n\nIt turned green two hours ago. Yay :-)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 1 Sep 2022 11:10:35 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Solaris \"sed\" versus pre-v13 plpython tests"
},
{
"msg_contents": "On Thu, Sep 01, 2022 at 11:10:35AM +0200, Alvaro Herrera wrote:\n> On 2022-Aug-31, Noah Misch wrote:\n> \n> > On Wed, Aug 31, 2022 at 06:25:01PM -0400, Tom Lane wrote:\n> > > I am not completely sure why buildfarm member wrasse isn't failing\n> > > similarly\n> > \n> > wrasse disabled plpython in v12-, from day one, due to this and a crash bug\n> > that I shelved. I will be interested to see how margay reacts to your fix.\n> \n> It turned green two hours ago. Yay :-)\n\nExcellent. I can no longer reproduce the crash bug, so I enabled plpython on\nwrasse v10,v11,v12.\n\n\n",
"msg_date": "Sat, 3 Sep 2022 08:43:57 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Solaris \"sed\" versus pre-v13 plpython tests"
}
] |
[
{
"msg_contents": "Hi, when I’m trying to access values of my custom enum type I created with\n\ncreate type colors as enum ('red', 'green', 'brown', 'yellow', 'blue');\n\nI’m getting oid as 16387 and I can see it stored as a chars\n\nis number 16387 is always OID for enum type?\n\nif not how I can get information about type of the result if it’s custom enum type\n\nthanks in advance\n\ndm\n\n\n\n",
"msg_date": "Thu, 1 Sep 2022 00:28:07 -0400",
"msg_from": "Dmitry Markman <dmarkman@mac.com>",
"msg_from_op": true,
"msg_subject": "question about access custom enum type from C"
},
{
"msg_contents": "(I think this is a better question for the general mailing list)\n\nOn Thu, 1 Sept 2022 at 16:28, Dmitry Markman <dmarkman@mac.com> wrote:\n>\n> Hi, when I’m trying to access values of my custom enum type I created with\n>\n> create type colors as enum ('red', 'green', 'brown', 'yellow', 'blue');\n>\n> I’m getting oid as 16387 and I can see it stored as a chars\n\nYou might see the names if you query the table, but all that's stored\nin the table is the numerical value.\n\nhttps://www.postgresql.org/docs/current/datatype-enum.html states \"An\nenum value occupies four bytes on disk.\".\n\n> is number 16387 is always OID for enum type?\n\nI'm not sure where you got that number from. Perhaps it's the oid for\nthe pg_type record? The following would show it.\n\nselect oid,typname from pg_type where typname = 'colors';\n\n> if not how I can get information about type of the result if it’s custom enum type\n\nI'm not sure what you mean by \"the result\". Maybe pg_typeof(column)\nmight be what you want? You can do: SELECT pg_typeof(myenumcol) FROM\nmytable;\n\nDavid\n\n\n",
"msg_date": "Thu, 1 Sep 2022 16:49:33 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: question about access custom enum type from C"
},
{
"msg_contents": "Hi David, thanks a lot for your answer\n\nI got that number from\n\nPQparamtype\n\nI already see that 16387 is not a ‘constant’, if I have few custom types I got different numbers for them\n\nthanks\n\ndm\n\n\n\n> On Sep 1, 2022, at 12:49 AM, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n> (I think this is a better question for the general mailing list)\n> \n> On Thu, 1 Sept 2022 at 16:28, Dmitry Markman <dmarkman@mac.com> wrote:\n>> \n>> Hi, when I’m trying to access values of my custom enum type I created with\n>> \n>> create type colors as enum ('red', 'green', 'brown', 'yellow', 'blue');\n>> \n>> I’m getting oid as 16387 and I can see it stored as a chars\n> \n> You might see the names if you query the table, but all that's stored\n> in the table is the numerical value.\n> \n> https://www.postgresql.org/docs/current/datatype-enum.html states \"An\n> enum value occupies four bytes on disk.\".\n> \n>> is number 16387 is always OID for enum type?\n> \n> I'm not sure where you got that number from. Perhaps it's the oid for\n> the pg_type record? The following would show it.\n> \n> select oid,typname from pg_type where typname = 'colors';\n> \n>> if not how I can get information about type of the result if it’s custom enum type\n> \n> I'm not sure what you mean by \"the result\". Maybe pg_typeof(column)\n> might be what you want? 
You can do: SELECT pg_typeof(myenumcol) FROM\n> mytable;\n> \n> David",
"msg_date": "Thu, 1 Sep 2022 01:03:10 -0400",
"msg_from": "Dmitry Markman <dmarkman@mac.com>",
"msg_from_op": true,
"msg_subject": "Re: question about access custom enum type from C"
},
{
"msg_contents": "Hi David\n\nas you suggested\n\ncreate type first_type as enum ('red', 'green', 'brown', 'yellow', 'blue');\nSELECT oid,typname,typlen,typtype from pg_type where typname='first_type'\n\nreturns everything I was looking for\n\nthanks again, I think I’m all set\n\ndm\n\n\n> On Sep 1, 2022, at 12:49 AM, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n> (I think this is a better question for the general mailing list)\n> \n> On Thu, 1 Sept 2022 at 16:28, Dmitry Markman <dmarkman@mac.com> wrote:\n>> \n>> Hi, when I’m trying to access values of my custom enum type I created with\n>> \n>> create type colors as enum ('red', 'green', 'brown', 'yellow', 'blue');\n>> \n>> I’m getting oid as 16387 and I can see it stored as a chars\n> \n> You might see the names if you query the table, but all that's stored\n> in the table is the numerical value.\n> \n> https://www.postgresql.org/docs/current/datatype-enum.html states \"An\n> enum value occupies four bytes on disk.\".\n> \n>> is number 16387 is always OID for enum type?\n> \n> I'm not sure where you got that number from. Perhaps it's the oid for\n> the pg_type record? The following would show it.\n> \n> select oid,typname from pg_type where typname = 'colors';\n> \n>> if not how I can get information about type of the result if it’s custom enum type\n> \n> I'm not sure what you mean by \"the result\". Maybe pg_typeof(column)\n> might be what you want? You can do: SELECT pg_typeof(myenumcol) FROM\n> mytable;\n> \n> David\n\n\n\n",
"msg_date": "Thu, 1 Sep 2022 01:31:27 -0400",
"msg_from": "Dmitry Markman <dmarkman@mac.com>",
"msg_from_op": true,
"msg_subject": "Re: question about access custom enum type from C"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on something else, I noticed that commit 487e9861d added\na new field to struct Trigger, but failed to update $SUBJECT to match.\nAttached is a small patch for that.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Thu, 1 Sep 2022 15:18:38 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "struct Trigger definition in trigger.sgml"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 2:18 PM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> While working on something else, I noticed that commit 487e9861d added\n> a new field to struct Trigger, but failed to update $SUBJECT to match.\n> Attached is a small patch for that.\n\n\n+1. Good catch.\n\nThanks\nRichard",
"msg_date": "Thu, 1 Sep 2022 15:29:48 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: struct Trigger definition in trigger.sgml"
},
{
"msg_contents": "Hi Richard,\n\nOn Thu, Sep 1, 2022 at 4:29 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Thu, Sep 1, 2022 at 2:18 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> While working on something else, I noticed that commit 487e9861d added\n>> a new field to struct Trigger, but failed to update $SUBJECT to match.\n>> Attached is a small patch for that.\n\n> +1. Good catch.\n\nPushed.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 2 Sep 2022 17:02:21 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: struct Trigger definition in trigger.sgml"
}
] |
[
{
"msg_contents": "Hi\r\n\r\nI found some data that are badly formatted in psql\r\n\r\ncreate table foo(a varchar);\r\ninsert into foo values('Dětská šperkovnice Goki 15545');\r\ninsert into foo values('Tlakoměr Omron Evolv s Bluetooth připojením');\r\ninsert into foo values('Řetěz KMC BE08SEP22 stříbrný');\r\n\r\npsql older than 12 shows this table correctly\r\n\r\n(2022-09-01 08:42:44) postgres=# select * from foo;\r\n┌─────────────────────────────────────────────┐\r\n│ a │\r\n╞═════════════════════════════════════════════╡\r\n│ Dětská šperkovnice Goki 15545 │\r\n│ Tlakoměr Omron Evolv s Bluetooth připojením │\r\n│ Řetěz KMC BE08SEP22 stříbrný │\r\n└─────────────────────────────────────────────┘\r\n(3 rows)\r\n\r\npsql 12 and later breaks border little bit\r\n\r\n(2022-09-01 08:42:49) postgres=# select * from foo;\r\n┌─────────────────────────────────────────────┐\r\n│ a │\r\n╞═════════════════════════════════════════════╡\r\n│ Dětská šperkovnice Goki 15545 │\r\n│ Tlakoměr Omron Evolv s Bluetooth připojením │\r\n│ Řetěz KMC BE08SEP22 stříbrný │\r\n└─────────────────────────────────────────────┘\r\n(3 rows)\r\n\r\nproblem is in bad width of invisible char 200E\r\n\r\nhttps://unicodeplus.com/U+200E\r\n\r\n(2022-09-01 09:10:05) postgres=# select e'Ahoj\\u200eNazdar';\r\n┌─────────────┐\r\n│ ?column? 
│\r\n╞═════════════╡\r\n│ Ahoj‎Nazdar │\r\n└─────────────┘\r\n(1 row)\r\n\r\nRegards\r\n\r\nPavel",
"msg_date": "Thu, 1 Sep 2022 09:12:19 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "broken table formatting in psql"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 2:13 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> problem is in bad width of invisible char 200E\n\nI removed this comment in bab982161e since it didn't match the code.\nI'd be interested to see what happened after v12.\n\n- * - Other format characters (general category code Cf in the Unicode\n- * database) and ZERO WIDTH SPACE (U+200B) have a column\nwidth of 0.\n\nUnicodeData.txt has this:\n\n200B;ZERO WIDTH SPACE;Cf;0;BN;;;;;N;;;;;\n200C;ZERO WIDTH NON-JOINER;Cf;0;BN;;;;;N;;;;;\n200D;ZERO WIDTH JOINER;Cf;0;BN;;;;;N;;;;;\n200E;LEFT-TO-RIGHT MARK;Cf;0;L;;;;;N;;;;;\n200F;RIGHT-TO-LEFT MARK;Cf;0;R;;;;;N;;;;;\n\nSo maybe we need to take Cf characters in this file into account, in\naddition to Me and Mn (combining characters).\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Sep 2022 15:00:38 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: broken table formatting in psql"
},
{
"msg_contents": "At Thu, 1 Sep 2022 15:00:38 +0700, John Naylor <john.naylor@enterprisedb.com> wrote in \n> On Thu, Sep 1, 2022 at 2:13 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > problem is in bad width of invisible char 200E\n> \n> I removed this comment in bab982161e since it didn't match the code.\n> I'd be interested to see what happened after v12.\n> \n> - * - Other format characters (general category code Cf in the Unicode\n> - * database) and ZERO WIDTH SPACE (U+200B) have a column\n> width of 0.\n> \n> UnicodeData.txt has this:\n> \n> 200B;ZERO WIDTH SPACE;Cf;0;BN;;;;;N;;;;;\n> 200C;ZERO WIDTH NON-JOINER;Cf;0;BN;;;;;N;;;;;\n> 200D;ZERO WIDTH JOINER;Cf;0;BN;;;;;N;;;;;\n> 200E;LEFT-TO-RIGHT MARK;Cf;0;L;;;;;N;;;;;\n> 200F;RIGHT-TO-LEFT MARK;Cf;0;R;;;;;N;;;;;\n> \n> So maybe we need to take Cf characters in this file into account, in\n> addition to Me and Mn (combining characters).\n\nIncluding them into unicode_combining_table.h actually worked, but I'm\nnot sure it is valid to include Cf's among Mn/Me's..\n\n> diff --git a/src/common/unicode/generate-unicode_combining_table.pl b/src/common/unicode/generate-unicode_combining_table.pl\n> index 8177c20260..7030bc637b 100644\n> --- a/src/common/unicode/generate-unicode_combining_table.pl\n> +++ b/src/common/unicode/generate-unicode_combining_table.pl\n> @@ -25,7 +25,7 @@ foreach my $line (<ARGV>)\n> my @fields = split ';', $line;\n> $codepoint = hex $fields[0];\n> \n> - if ($fields[2] eq 'Me' || $fields[2] eq 'Mn')\n> + if ($fields[2] eq 'Me' || $fields[2] eq 'Mn' || $fields[2] eq 'Cf')\n> {\n> # combining character, save for start of range\n> if (!defined($range_start))\n\nBy the way I was super annoyed that it was super-hard to reflect the\nchanges under src/common to the final binary. There are two hops of\nmissing dependencies and finally ccache stood in my way.. 
I find that\nAndres once meant to try that using --dependency-files but I hope we\nmake that reflection automated even if we do define the dependencies\nmanually..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 01 Sep 2022 18:22:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: broken table formatting in psql"
},
{
"msg_contents": "At Thu, 01 Sep 2022 18:22:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 1 Sep 2022 15:00:38 +0700, John Naylor <john.naylor@enterprisedb.com> wrote in \n> > UnicodeData.txt has this:\n> > \n> > 200B;ZERO WIDTH SPACE;Cf;0;BN;;;;;N;;;;;\n> > 200C;ZERO WIDTH NON-JOINER;Cf;0;BN;;;;;N;;;;;\n> > 200D;ZERO WIDTH JOINER;Cf;0;BN;;;;;N;;;;;\n> > 200E;LEFT-TO-RIGHT MARK;Cf;0;L;;;;;N;;;;;\n> > 200F;RIGHT-TO-LEFT MARK;Cf;0;R;;;;;N;;;;;\n> > \n> > So maybe we need to take Cf characters in this file into account, in\n> > addition to Me and Mn (combining characters).\n> \n> Including them into unicode_combining_table.h actually worked, but I'm\n> not sure it is valid to include Cf's among Mn/Me's..\n\n\nUnicodeData.txt\n 174:00AD;SOFT HYPHEN;Cf;0;BN;;;;;N;;;;;\n\nSoft-hyphen seems like not zero-width.. usually...\n\n\n 0600;ARABIC NUMBER SIGN;Cf;0;AN;;;;;N;;;;;\n110BD;KAITHI NUMBER SIGN;Cf;0;L;;;;;N;;;;;\n\nMmm. These looks like not zero-width?\n\n\nHowever, it seems like basically a win if we include \"Cf\"s to the\n\"combining\" table?\n\n====\n 174:00AD;SOFT HYPHEN;Cf;0;BN;;;;;N;;;;;\n 1499:0600;ARABIC NUMBER SIGN;Cf;0;AN;;;;;N;;;;;\n 1500:0601;ARABIC SIGN SANAH;Cf;0;AN;;;;;N;;;;;\n 1501:0602;ARABIC FOOTNOTE MARKER;Cf;0;AN;;;;;N;;;;;\n 1502:0603;ARABIC SIGN SAFHA;Cf;0;AN;;;;;N;;;;;\n 1503:0604;ARABIC SIGN SAMVAT;Cf;0;AN;;;;;N;;;;;\n 1504:0605;ARABIC NUMBER MARK ABOVE;Cf;0;AN;;;;;N;;;;;\n 1527:061C;ARABIC LETTER MARK;Cf;0;AL;;;;;N;;;;;\n 1720:06DD;ARABIC END OF AYAH;Cf;0;AN;;;;;N;;;;;\n 1769:070F;SYRIAC ABBREVIATION MARK;Cf;0;AL;;;;;N;;;;;\n 2124:0890;ARABIC POUND MARK ABOVE;Cf;0;AN;;;;;N;;;;;\n 2125:0891;ARABIC PIASTRE MARK ABOVE;Cf;0;AN;;;;;N;;;;;\n 2200:08E2;ARABIC DISPUTED END OF AYAH;Cf;0;AN;;;;;N;;;;;\n 5517:180E;MONGOLIAN VOWEL SEPARATOR;Cf;0;BN;;;;;N;;;;;\n 7365:200B;ZERO WIDTH SPACE;Cf;0;BN;;;;;N;;;;;\n 7366:200C;ZERO WIDTH NON-JOINER;Cf;0;BN;;;;;N;;;;;\n 7367:200D;ZERO WIDTH JOINER;Cf;0;BN;;;;;N;;;;;\n 
7368:200E;LEFT-TO-RIGHT MARK;Cf;0;L;;;;;N;;;;;\n 7369:200F;RIGHT-TO-LEFT MARK;Cf;0;R;;;;;N;;;;;\n 7396:202A;LEFT-TO-RIGHT EMBEDDING;Cf;0;LRE;;;;;N;;;;;\n 7397:202B;RIGHT-TO-LEFT EMBEDDING;Cf;0;RLE;;;;;N;;;;;\n 7398:202C;POP DIRECTIONAL FORMATTING;Cf;0;PDF;;;;;N;;;;;\n 7399:202D;LEFT-TO-RIGHT OVERRIDE;Cf;0;LRO;;;;;N;;;;;\n 7400:202E;RIGHT-TO-LEFT OVERRIDE;Cf;0;RLO;;;;;N;;;;;\n 7450:2060;WORD JOINER;Cf;0;BN;;;;;N;;;;;\n 7451:2061;FUNCTION APPLICATION;Cf;0;BN;;;;;N;;;;;\n 7452:2062;INVISIBLE TIMES;Cf;0;BN;;;;;N;;;;;\n 7453:2063;INVISIBLE SEPARATOR;Cf;0;BN;;;;;N;;;;;\n 7454:2064;INVISIBLE PLUS;Cf;0;BN;;;;;N;;;;;\n 7455:2066;LEFT-TO-RIGHT ISOLATE;Cf;0;LRI;;;;;N;;;;;\n 7456:2067;RIGHT-TO-LEFT ISOLATE;Cf;0;RLI;;;;;N;;;;;\n 7457:2068;FIRST STRONG ISOLATE;Cf;0;FSI;;;;;N;;;;;\n 7458:2069;POP DIRECTIONAL ISOLATE;Cf;0;PDI;;;;;N;;;;;\n 7459:206A;INHIBIT SYMMETRIC SWAPPING;Cf;0;BN;;;;;N;;;;;\n 7460:206B;ACTIVATE SYMMETRIC SWAPPING;Cf;0;BN;;;;;N;;;;;\n 7461:206C;INHIBIT ARABIC FORM SHAPING;Cf;0;BN;;;;;N;;;;;\n 7462:206D;ACTIVATE ARABIC FORM SHAPING;Cf;0;BN;;;;;N;;;;;\n 7463:206E;NATIONAL DIGIT SHAPES;Cf;0;BN;;;;;N;;;;;\n 7464:206F;NOMINAL DIGIT SHAPES;Cf;0;BN;;;;;N;;;;;\n 16660:FEFF;ZERO WIDTH NO-BREAK SPACE;Cf;0;BN;;;;;N;BYTE ORDER MARK;;;;\n 16886:FFF9;INTERLINEAR ANNOTATION ANCHOR;Cf;0;ON;;;;;N;;;;;\n 16887:FFFA;INTERLINEAR ANNOTATION SEPARATOR;Cf;0;ON;;;;;N;;;;;\n 16888:FFFB;INTERLINEAR ANNOTATION TERMINATOR;Cf;0;ON;;;;;N;;;;;\n 19731:110BD;KAITHI NUMBER SIGN;Cf;0;L;;;;;N;;;;;\n 19737:110CD;KAITHI NUMBER SIGN ABOVE;Cf;0;L;;;;;N;;;;;\n 24043:13430;EGYPTIAN HIEROGLYPH VERTICAL JOINER;Cf;0;L;;;;;N;;;;;\n 24044:13431;EGYPTIAN HIEROGLYPH HORIZONTAL JOINER;Cf;0;L;;;;;N;;;;;\n 24045:13432;EGYPTIAN HIEROGLYPH INSERT AT TOP START;Cf;0;L;;;;;N;;;;;\n 24046:13433;EGYPTIAN HIEROGLYPH INSERT AT BOTTOM START;Cf;0;L;;;;;N;;;;;\n 24047:13434;EGYPTIAN HIEROGLYPH INSERT AT TOP END;Cf;0;L;;;;;N;;;;;\n 24048:13435;EGYPTIAN HIEROGLYPH INSERT AT BOTTOM END;Cf;0;L;;;;;N;;;;;\n 24049:13436;EGYPTIAN 
HIEROGLYPH OVERLAY MIDDLE;Cf;0;L;;;;;N;;;;;\n 24050:13437;EGYPTIAN HIEROGLYPH BEGIN SEGMENT;Cf;0;L;;;;;N;;;;;\n 24051:13438;EGYPTIAN HIEROGLYPH END SEGMENT;Cf;0;L;;;;;N;;;;;\n 27838:1BCA0;SHORTHAND FORMAT LETTER OVERLAP;Cf;0;BN;;;;;N;;;;;\n 27839:1BCA1;SHORTHAND FORMAT CONTINUING OVERLAP;Cf;0;BN;;;;;N;;;;;\n 27840:1BCA2;SHORTHAND FORMAT DOWN STEP;Cf;0;BN;;;;;N;;;;;\n 27841:1BCA3;SHORTHAND FORMAT UP STEP;Cf;0;BN;;;;;N;;;;;\n 28386:1D173;MUSICAL SYMBOL BEGIN BEAM;Cf;0;BN;;;;;N;;;;;\n 28387:1D174;MUSICAL SYMBOL END BEAM;Cf;0;BN;;;;;N;;;;;\n 28388:1D175;MUSICAL SYMBOL BEGIN TIE;Cf;0;BN;;;;;N;;;;;\n 28389:1D176;MUSICAL SYMBOL END TIE;Cf;0;BN;;;;;N;;;;;\n 28390:1D177;MUSICAL SYMBOL BEGIN SLUR;Cf;0;BN;;;;;N;;;;;\n 28391:1D178;MUSICAL SYMBOL END SLUR;Cf;0;BN;;;;;N;;;;;\n 28392:1D179;MUSICAL SYMBOL BEGIN PHRASE;Cf;0;BN;;;;;N;;;;;\n 28393:1D17A;MUSICAL SYMBOL END PHRASE;Cf;0;BN;;;;;N;;;;;\n 34286:E0001;LANGUAGE TAG;Cf;0;BN;;;;;N;;;;;\n 34287:E0020;TAG SPACE;Cf;0;BN;;;;;N;;;;;\n 34288:E0021;TAG EXCLAMATION MARK;Cf;0;BN;;;;;N;;;;;\n 34289:E0022;TAG QUOTATION MARK;Cf;0;BN;;;;;N;;;;;\n 34290:E0023;TAG NUMBER SIGN;Cf;0;BN;;;;;N;;;;;\n 34291:E0024;TAG DOLLAR SIGN;Cf;0;BN;;;;;N;;;;;\n 34292:E0025;TAG PERCENT SIGN;Cf;0;BN;;;;;N;;;;;\n 34293:E0026;TAG AMPERSAND;Cf;0;BN;;;;;N;;;;;\n 34294:E0027;TAG APOSTROPHE;Cf;0;BN;;;;;N;;;;;\n 34295:E0028;TAG LEFT PARENTHESIS;Cf;0;BN;;;;;N;;;;;\n 34296:E0029;TAG RIGHT PARENTHESIS;Cf;0;BN;;;;;N;;;;;\n 34297:E002A;TAG ASTERISK;Cf;0;BN;;;;;N;;;;;\n 34298:E002B;TAG PLUS SIGN;Cf;0;BN;;;;;N;;;;;\n 34299:E002C;TAG COMMA;Cf;0;BN;;;;;N;;;;;\n 34300:E002D;TAG HYPHEN-MINUS;Cf;0;BN;;;;;N;;;;;\n 34301:E002E;TAG FULL STOP;Cf;0;BN;;;;;N;;;;;\n 34302:E002F;TAG SOLIDUS;Cf;0;BN;;;;;N;;;;;\n 34303:E0030;TAG DIGIT ZERO;Cf;0;BN;;;;;N;;;;;\n 34304:E0031;TAG DIGIT ONE;Cf;0;BN;;;;;N;;;;;\n 34305:E0032;TAG DIGIT TWO;Cf;0;BN;;;;;N;;;;;\n 34306:E0033;TAG DIGIT THREE;Cf;0;BN;;;;;N;;;;;\n 34307:E0034;TAG DIGIT FOUR;Cf;0;BN;;;;;N;;;;;\n 34308:E0035;TAG DIGIT 
FIVE;Cf;0;BN;;;;;N;;;;;\n 34309:E0036;TAG DIGIT SIX;Cf;0;BN;;;;;N;;;;;\n 34310:E0037;TAG DIGIT SEVEN;Cf;0;BN;;;;;N;;;;;\n 34311:E0038;TAG DIGIT EIGHT;Cf;0;BN;;;;;N;;;;;\n 34312:E0039;TAG DIGIT NINE;Cf;0;BN;;;;;N;;;;;\n 34313:E003A;TAG COLON;Cf;0;BN;;;;;N;;;;;\n 34314:E003B;TAG SEMICOLON;Cf;0;BN;;;;;N;;;;;\n 34315:E003C;TAG LESS-THAN SIGN;Cf;0;BN;;;;;N;;;;;\n 34316:E003D;TAG EQUALS SIGN;Cf;0;BN;;;;;N;;;;;\n 34317:E003E;TAG GREATER-THAN SIGN;Cf;0;BN;;;;;N;;;;;\n 34318:E003F;TAG QUESTION MARK;Cf;0;BN;;;;;N;;;;;\n 34319:E0040;TAG COMMERCIAL AT;Cf;0;BN;;;;;N;;;;;\n 34320:E0041;TAG LATIN CAPITAL LETTER A;Cf;0;BN;;;;;N;;;;;\n 34321:E0042;TAG LATIN CAPITAL LETTER B;Cf;0;BN;;;;;N;;;;;\n 34322:E0043;TAG LATIN CAPITAL LETTER C;Cf;0;BN;;;;;N;;;;;\n 34323:E0044;TAG LATIN CAPITAL LETTER D;Cf;0;BN;;;;;N;;;;;\n 34324:E0045;TAG LATIN CAPITAL LETTER E;Cf;0;BN;;;;;N;;;;;\n 34325:E0046;TAG LATIN CAPITAL LETTER F;Cf;0;BN;;;;;N;;;;;\n 34326:E0047;TAG LATIN CAPITAL LETTER G;Cf;0;BN;;;;;N;;;;;\n 34327:E0048;TAG LATIN CAPITAL LETTER H;Cf;0;BN;;;;;N;;;;;\n 34328:E0049;TAG LATIN CAPITAL LETTER I;Cf;0;BN;;;;;N;;;;;\n 34329:E004A;TAG LATIN CAPITAL LETTER J;Cf;0;BN;;;;;N;;;;;\n 34330:E004B;TAG LATIN CAPITAL LETTER K;Cf;0;BN;;;;;N;;;;;\n 34331:E004C;TAG LATIN CAPITAL LETTER L;Cf;0;BN;;;;;N;;;;;\n 34332:E004D;TAG LATIN CAPITAL LETTER M;Cf;0;BN;;;;;N;;;;;\n 34333:E004E;TAG LATIN CAPITAL LETTER N;Cf;0;BN;;;;;N;;;;;\n 34334:E004F;TAG LATIN CAPITAL LETTER O;Cf;0;BN;;;;;N;;;;;\n 34335:E0050;TAG LATIN CAPITAL LETTER P;Cf;0;BN;;;;;N;;;;;\n 34336:E0051;TAG LATIN CAPITAL LETTER Q;Cf;0;BN;;;;;N;;;;;\n 34337:E0052;TAG LATIN CAPITAL LETTER R;Cf;0;BN;;;;;N;;;;;\n 34338:E0053;TAG LATIN CAPITAL LETTER S;Cf;0;BN;;;;;N;;;;;\n 34339:E0054;TAG LATIN CAPITAL LETTER T;Cf;0;BN;;;;;N;;;;;\n 34340:E0055;TAG LATIN CAPITAL LETTER U;Cf;0;BN;;;;;N;;;;;\n 34341:E0056;TAG LATIN CAPITAL LETTER V;Cf;0;BN;;;;;N;;;;;\n 34342:E0057;TAG LATIN CAPITAL LETTER W;Cf;0;BN;;;;;N;;;;;\n 34343:E0058;TAG LATIN CAPITAL LETTER 
X;Cf;0;BN;;;;;N;;;;;\n 34344:E0059;TAG LATIN CAPITAL LETTER Y;Cf;0;BN;;;;;N;;;;;\n 34345:E005A;TAG LATIN CAPITAL LETTER Z;Cf;0;BN;;;;;N;;;;;\n 34346:E005B;TAG LEFT SQUARE BRACKET;Cf;0;BN;;;;;N;;;;;\n 34347:E005C;TAG REVERSE SOLIDUS;Cf;0;BN;;;;;N;;;;;\n 34348:E005D;TAG RIGHT SQUARE BRACKET;Cf;0;BN;;;;;N;;;;;\n 34349:E005E;TAG CIRCUMFLEX ACCENT;Cf;0;BN;;;;;N;;;;;\n 34350:E005F;TAG LOW LINE;Cf;0;BN;;;;;N;;;;;\n 34351:E0060;TAG GRAVE ACCENT;Cf;0;BN;;;;;N;;;;;\n 34352:E0061;TAG LATIN SMALL LETTER A;Cf;0;BN;;;;;N;;;;;\n 34353:E0062;TAG LATIN SMALL LETTER B;Cf;0;BN;;;;;N;;;;;\n 34354:E0063;TAG LATIN SMALL LETTER C;Cf;0;BN;;;;;N;;;;;\n 34355:E0064;TAG LATIN SMALL LETTER D;Cf;0;BN;;;;;N;;;;;\n 34356:E0065;TAG LATIN SMALL LETTER E;Cf;0;BN;;;;;N;;;;;\n 34357:E0066;TAG LATIN SMALL LETTER F;Cf;0;BN;;;;;N;;;;;\n 34358:E0067;TAG LATIN SMALL LETTER G;Cf;0;BN;;;;;N;;;;;\n 34359:E0068;TAG LATIN SMALL LETTER H;Cf;0;BN;;;;;N;;;;;\n 34360:E0069;TAG LATIN SMALL LETTER I;Cf;0;BN;;;;;N;;;;;\n 34361:E006A;TAG LATIN SMALL LETTER J;Cf;0;BN;;;;;N;;;;;\n 34362:E006B;TAG LATIN SMALL LETTER K;Cf;0;BN;;;;;N;;;;;\n 34363:E006C;TAG LATIN SMALL LETTER L;Cf;0;BN;;;;;N;;;;;\n 34364:E006D;TAG LATIN SMALL LETTER M;Cf;0;BN;;;;;N;;;;;\n 34365:E006E;TAG LATIN SMALL LETTER N;Cf;0;BN;;;;;N;;;;;\n 34366:E006F;TAG LATIN SMALL LETTER O;Cf;0;BN;;;;;N;;;;;\n 34367:E0070;TAG LATIN SMALL LETTER P;Cf;0;BN;;;;;N;;;;;\n 34368:E0071;TAG LATIN SMALL LETTER Q;Cf;0;BN;;;;;N;;;;;\n 34369:E0072;TAG LATIN SMALL LETTER R;Cf;0;BN;;;;;N;;;;;\n 34370:E0073;TAG LATIN SMALL LETTER S;Cf;0;BN;;;;;N;;;;;\n 34371:E0074;TAG LATIN SMALL LETTER T;Cf;0;BN;;;;;N;;;;;\n 34372:E0075;TAG LATIN SMALL LETTER U;Cf;0;BN;;;;;N;;;;;\n 34373:E0076;TAG LATIN SMALL LETTER V;Cf;0;BN;;;;;N;;;;;\n 34374:E0077;TAG LATIN SMALL LETTER W;Cf;0;BN;;;;;N;;;;;\n 34375:E0078;TAG LATIN SMALL LETTER X;Cf;0;BN;;;;;N;;;;;\n 34376:E0079;TAG LATIN SMALL LETTER Y;Cf;0;BN;;;;;N;;;;;\n 34377:E007A;TAG LATIN SMALL LETTER Z;Cf;0;BN;;;;;N;;;;;\n 34378:E007B;TAG LEFT CURLY 
BRACKET;Cf;0;BN;;;;;N;;;;;\n 34379:E007C;TAG VERTICAL LINE;Cf;0;BN;;;;;N;;;;;\n 34380:E007D;TAG RIGHT CURLY BRACKET;Cf;0;BN;;;;;N;;;;;\n 34381:E007E;TAG TILDE;Cf;0;BN;;;;;N;;;;;\n 34382:E007F;CANCEL TAG;Cf;0;BN;;;;;N;;;;;\n\n====\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 02 Sep 2022 14:17:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: broken table formatting in psql"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 12:17 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 01 Sep 2022 18:22:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Thu, 1 Sep 2022 15:00:38 +0700, John Naylor <john.naylor@enterprisedb.com> wrote in\n> > > UnicodeData.txt has this:\n> > >\n> > > 200B;ZERO WIDTH SPACE;Cf;0;BN;;;;;N;;;;;\n> > > 200C;ZERO WIDTH NON-JOINER;Cf;0;BN;;;;;N;;;;;\n> > > 200D;ZERO WIDTH JOINER;Cf;0;BN;;;;;N;;;;;\n> > > 200E;LEFT-TO-RIGHT MARK;Cf;0;L;;;;;N;;;;;\n> > > 200F;RIGHT-TO-LEFT MARK;Cf;0;R;;;;;N;;;;;\n> > >\n> > > So maybe we need to take Cf characters in this file into account, in\n> > > addition to Me and Mn (combining characters).\n> >\n> > Including them into unicode_combining_table.h actually worked, but I'm\n> > not sure it is valid to include Cf's among Mn/Me's..\n\nLooking at the definition, Cf means \"other, format\" category, \"Format\ncharacter that affects the layout of text or the operation of text\nprocesses, but is not normally rendered\". [1]\n\n> UnicodeData.txt\n> 174:00AD;SOFT HYPHEN;Cf;0;BN;;;;;N;;;;;\n>\n> Soft-hyphen seems like not zero-width.. usually...\n\nI gather it only appears at line breaks, which I doubt we want to handle.\n\n> 0600;ARABIC NUMBER SIGN;Cf;0;AN;;;;;N;;;;;\n> 110BD;KAITHI NUMBER SIGN;Cf;0;L;;;;;N;;;;;\n>\n> Mmm. These looks like not zero-width?\n\nThere are glyphs, but there is something special about the first one:\n\nselect U&'\\0600';\n\nLooks like this in psql (substituting 'X' to avoid systemic differences):\n\n+----------+\n| ?column? |\n+----------+\n| X |\n+----------+\n(1 row)\n\nCopy from psql to vim or nano:\n\n+----------+\n| ?column? |\n+----------+\n| X |\n+----------+\n(1 row)\n\n...so it does mess up the border the same way. The second\n(U&'\\+0110bd') doesn't render for me.\n\n> However, it seems like basically a win if we include \"Cf\"s to the\n> \"combining\" table?\n\nThere seems to be a case for that. 
If we did include those, we should\nrename the table to match.\n\nI found this old document from 2002 on \"default ignorable\" characters\nthat normally have no visible glyph:\n\nhttps://unicode.org/L2/L2002/02368-default-ignorable.html\n\nIf there is any doubt about including all of Cf, we could also just\nadd a branch in wchar.c to hard-code the 200B-200F range.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Sep 2022 13:43:50 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: broken table formatting in psql"
},
{
"msg_contents": "At Fri, 2 Sep 2022 13:43:50 +0700, John Naylor <john.naylor@enterprisedb.com> wrote in \n> On Fri, Sep 2, 2022 at 12:17 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > Including them into unicode_combining_table.h actually worked, but I'm\n> > > not sure it is valid to include Cf's among Mn/Me's..\n> \n> Looking at the definition, Cf means \"other, format\" category, \"Format\n> character that affects the layout of text or the operation of text\n> processes, but is not normally rendered\". [1]\n> \n> > UnicodeData.txt\n> > 174:00AD;SOFT HYPHEN;Cf;0;BN;;;;;N;;;;;\n> >\n> > Soft-hyphen seems like not zero-width.. usually...\n> \n> I gather it only appears at line breaks, which I doubt we want to handle.\n\nYeah. Sounds reasonable. (Emacs always renders it, though..)\n\n> > 0600;ARABIC NUMBER SIGN;Cf;0;AN;;;;;N;;;;;\n> > 110BD;KAITHI NUMBER SIGN;Cf;0;L;;;;;N;;;;;\n> >\n> > Mmm. These looks like not zero-width?\n> \n> There are glyphs, but there is something special about the first one:\n> \n> select U&'\\0600';\n> \n> Looks like this in psql (substituting 'X' to avoid systemic differences):\n> \n> +----------+\n> | ?column? |\n> +----------+\n> | X |\n> +----------+\n> (1 row)\n> \n> Copy from psql to vim or nano:\n> \n> +----------+\n> | ?column? |\n> +----------+\n> | X |\n> +----------+\n> (1 row)\n> \n> ...so it does mess up the border the same way. The second\n> (U&'\\+0110bd') doesn't render for me.\n\nAnyway it is inevitably rendering-environment dependent.\n\n> > However, it seems like basically a win if we include \"Cf\"s to the\n> > \"combining\" table?\n>\n> There seems to be a case for that. If we did include those, we should\n> rename the table to match.\n\nAgreed:)\n\n> I found this old document from 2002 on \"default ignorable\" characters\n> that normally have no visible glyph:\n> \n> https://unicode.org/L2/L2002/02368-default-ignorable.html\n\nMmm. 
Too old?\n\n> If there is any doubt about including all of Cf, we could also just\n> add a branch in wchar.c to hard-code the 200B-200F range.\n\nIf every way has defect to the similar extent, I think we will choose\nto use authoritative data at least for the first step. We might want\nto have additional filtering on it but it would be another issue,\nmaybe.\n\nAttached is the first cut of that. (The commit messages is not great,\nthough.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 02 Sep 2022 17:19:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: broken table formatting in psql"
},
{
"msg_contents": "On 2022-Sep-02, Kyotaro Horiguchi wrote:\n\n> UnicodeData.txt\n> 174:00AD;SOFT HYPHEN;Cf;0;BN;;;;;N;;;;;\n> \n> Soft-hyphen seems like not zero-width.. usually...\n\nSoft-hyphen *is* zero width. It should not be displayed. It's just a\nmarker so that typesetting software knows where to add real hyphens in\ncase a word is too long to appear in a single line.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 2 Sep 2022 14:43:03 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: broken table formatting in psql"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 3:19 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 2 Sep 2022 13:43:50 +0700, John Naylor <john.naylor@enterprisedb.com> wrote in\n> > If there is any doubt about including all of Cf, we could also just\n> > add a branch in wchar.c to hard-code the 200B-200F range.\n>\n> If every way has defect to the similar extent, I think we will choose\n> to use authoritative data at least for the first step. We might want\n> to have additional filtering on it but it would be another issue,\n> maybe.\n>\n> Attached is the first cut of that. (The commit messages is not great,\n> though.)\n\nOkay, the patch looks good to me overall. Comparing releases, some\nother ranges were in v11 but left out in v12 with the transition to\nusing a script:\n\n0x070F\n{0x200B, 0x200F}\n{0x202A, 0x202E}\n{0x206A, 0x206F}\n0xFEFF\n{0xFFF9, 0xFFFB}\n\nDoes anyone want to advocate for backpatching these missing ranges to\nv12 and up? v12 still has a table in-line so trivial to remedy, but\nv13 and up use a script, so these exceptions would likely have to use\nhard-coded branches to keep from bringing in new changes.\n\nIf so, does anyone want to advocate for including this patch in v15?\nIt claims Unicode 14.0.0, and this would make that claim more\ntechnically correct as well as avoiding additional branches.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Sep 2022 12:39:19 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: broken table formatting in psql"
},
{
    "msg_contents": "On Thu, Sep 8, 2022 at 7:39 John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> On Fri, Sep 2, 2022 at 3:19 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Fri, 2 Sep 2022 13:43:50 +0700, John Naylor <\n> john.naylor@enterprisedb.com> wrote in\n> > > If there is any doubt about including all of Cf, we could also just\n> > > add a branch in wchar.c to hard-code the 200B-200F range.\n> >\n> > If every way has defect to the similar extent, I think we will choose\n> > to use authoritative data at least for the first step. We might want\n> > to have additional filtering on it but it would be another issue,\n> > maybe.\n> >\n> > Attached is the first cut of that. (The commit messages is not great,\n> > though.)\n>\n> Okay, the patch looks good to me overall. Comparing releases, some\n> other ranges were in v11 but left out in v12 with the transition to\n> using a script:\n>\n> 0x070F\n> {0x200B, 0x200F}\n> {0x202A, 0x202E}\n> {0x206A, 0x206F}\n> 0xFEFF\n> {0xFFF9, 0xFFFB}\n>\n> Does anyone want to advocate for backpatching these missing ranges to\n> v12 and up? v12 still has a table in-line so trivial to remedy, but\n> v13 and up use a script, so these exceptions would likely have to use\n> hard-coded branches to keep from bringing in new changes.\n>\n> If so, does anyone want to advocate for including this patch in v15?\n> It claims Unicode 14.0.0, and this would make that claim more\n> technically correct as well as avoiding additional branches.\n>\n\nI think it can be fixed just in v15 and master. This issue has no impact\non SQL.\n\nThank you for fixing this issue\n\nRegards\n\nPavel\n\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Thu, 8 Sep 2022 07:50:45 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: broken table formatting in psql"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 12:51 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>> Does anyone want to advocate for backpatching these missing ranges to\n>> v12 and up? v12 still has a table in-line so trivial to remedy, but\n>> v13 and up use a script, so these exceptions would likely have to use\n>> hard-coded branches to keep from bringing in new changes.\n>>\n>> If so, does anyone want to advocate for including this patch in v15?\n>> It claims Unicode 14.0.0, and this would make that claim more\n>> technically correct as well as avoiding additional branches.\n>\n>\n> I think it can be fixed just in v15 and master. This issue has no impact on SQL.\n\nWell, if the regressions from v11 are not important enough to\nbackpatch, there is not as much of a case to backpatch the full fix to\nv15 either.\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Sep 2022 12:37:47 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: broken table formatting in psql"
},
{
    "msg_contents": "On Mon, Sep 12, 2022 at 7:37 John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> On Thu, Sep 8, 2022 at 12:51 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >> Does anyone want to advocate for backpatching these missing ranges to\n> >> v12 and up? v12 still has a table in-line so trivial to remedy, but\n> >> v13 and up use a script, so these exceptions would likely have to use\n> >> hard-coded branches to keep from bringing in new changes.\n> >>\n> >> If so, does anyone want to advocate for including this patch in v15?\n> >> It claims Unicode 14.0.0, and this would make that claim more\n> >> technically correct as well as avoiding additional branches.\n> >\n> >\n> > I think it can be fixed just in v15 and master. This issue has no\n> impact on SQL.\n>\n> Well, if the regressions from v11 are not important enough to\n> backpatch, there is not as much of a case to backpatch the full fix to\n> v15 either.\n>\n\nThis is not a critical issue, really. On second thought, I don't see the\npoint in releasing fresh Postgres with this bug, where there is a known bugfix\n- and this bugfix should be compatible (at this moment) with 16.\n\nPostgreSQL 15 was not released yet.\n\nRegards\n\nPavel\n\n-- \n> John Naylor\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Mon, 12 Sep 2022 07:44:20 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: broken table formatting in psql"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 12:44 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> This is not a critical issue, really. On second thought, I don't see the point in releasing fresh Postgres with this bug, where there is know bugfix - and this bugfix should be compatible (at this moment) with 16.\n\nI agree the actual logic/data change is low-risk. The patch renames\ntwo files, which seems a bit much this late in the cycle. Maybe that's\nokay, but I'd like someone else to opine before doing so.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Sep 2022 15:28:49 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: broken table formatting in psql"
},
{
    "msg_contents": "On Mon, Sep 12, 2022 at 10:29 John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> On Mon, Sep 12, 2022 at 12:44 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > This is not a critical issue, really. On second thought, I don't see\n> the point in releasing fresh Postgres with this bug, where there is know\n> bugfix - and this bugfix should be compatible (at this moment) with 16.\n>\n> I agree the actual logic/data change is low-risk. The patch renames\n> two files, which seems a bit much this late in the cycle. Maybe that's\n> okay, but I'd like someone else to opine before doing so.\n>\n\nunderstand\n\nPavel\n\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Mon, 12 Sep 2022 11:30:02 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: broken table formatting in psql"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 12:39 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Fri, Sep 2, 2022 at 3:19 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Fri, 2 Sep 2022 13:43:50 +0700, John Naylor <john.naylor@enterprisedb.com> wrote in\n> > > If there is any doubt about including all of Cf, we could also just\n> > > add a branch in wchar.c to hard-code the 200B-200F range.\n> >\n> > If every way has defect to the similar extent, I think we will choose\n> > to use authoritative data at least for the first step. We might want\n> > to have additional filtering on it but it would be another issue,\n> > maybe.\n> >\n> > Attached is the first cut of that. (The commit messages is not great,\n> > though.)\n>\n> Okay, the patch looks good to me overall.\n\nAs discussed, I pushed to master only, with only one additional\ncomment in the perl script to describe Me/Mn/Cf.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Sep 2022 16:22:58 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: broken table formatting in psql"
}
] |
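For readers skimming the thread above: the misalignment can be sketched outside of PostgreSQL. The following is a minimal, hypothetical Python illustration — not the actual psql code, which is C in src/common/wchar.c driven by a table generated from UnicodeData.txt by a Perl script. It mimics the fix discussed here: treat Unicode format characters (general category Cf, which includes U+200E LEFT-TO-RIGHT MARK) as zero display width, the same way combining marks (Mn/Me) are treated.

```python
import unicodedata

# U+200E LEFT-TO-RIGHT MARK is a "format" character (general category Cf):
# it has no visible glyph, so counting it as one column makes psql pad the
# cell one character short and the right-hand border drifts left.
def char_width(ch):
    cat = unicodedata.category(ch)
    if cat in ('Mn', 'Me', 'Cf'):  # combining marks + format characters
        return 0
    return 1  # simplification: ignores double-width East Asian characters

def display_width(s):
    return sum(char_width(ch) for ch in s)

s = 'Ahoj\u200eNazdar'  # the string from Pavel's report
print(len(s), display_width(s))  # prints: 11 10
```

With only Mn/Me counted as zero-width (the pre-patch behavior), U+200E is billed one column even though nothing is drawn, which is exactly why the table border in the first message ends up one cell too wide.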
[
{
"msg_contents": "Hello,\n\nWhen using pg_dump (or pg_restore) with option \"--clean\", there is some \nSQL code to drop every objects at the beginning.\n\nThe DROP statement for a view involving circular dependencies is :\n\nCREATE OR REPLACE VIEW [...]\n\n(see commit message of d8c05aff for a much better explanation)\n\nIf the view is not in the \"public\" schema, and the target database is \nempty, this statement fails, because the schema hasn't been created yet.\n\nThe attached patches are a TAP test which can be used to reproduce the \nbug, and a proposed fix. They apply to the master branch.\n\nBest regards,\nFrédéric",
"msg_date": "Thu, 1 Sep 2022 09:13:10 +0200",
"msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] minor bug fix for pg_dump --clean"
},
{
"msg_contents": "Hi,\n\nGood catch! Here are several points for improvement:\n1. pg_dump.c:17380\nMaybe better to write simpler\n\n\nappendPQExpBuffer(delcmd, \"CREATE SCHEMA IF NOT EXISTS %s;\\n\",\ntbinfo->dobj.namespace->dobj.name);\n\nbecause there is a schema name inside the `tbinfo->dobj.namespace->dobj.name\n`\n\n2. pg_backup_archiver.c:588\nHere are no necessary spaces before and after braces, and no spaces around\nthe '+' sign.\n\n( strncmp(dropStmt, \"CREATE SCHEMA IF NOT EXISTS\", 27) == 0 &&\n strstr(dropStmt+29, \"CREATE OR REPLACE VIEW\") ))\n\n\nBest regards,\nViktoria Shepard\n\nOn Thu, Sep 1, 2022 at 12:13, Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n\n> Hello,\n>\n> When using pg_dump (or pg_restore) with option \"--clean\", there is some\n> SQL code to drop every objects at the beginning.\n>\n> The DROP statement for a view involving circular dependencies is :\n>\n> CREATE OR REPLACE VIEW [...]\n>\n> (see commit message of d8c05aff for a much better explanation)\n>\n> If the view is not in the \"public\" schema, and the target database is\n> empty, this statement fails, because the schema hasn't been created yet.\n>\n> The attached patches are a TAP test which can be used to reproduce the\n> bug, and a proposed fix. They apply to the master branch.\n>\n> Best regards,\n> Frédéric",
"msg_date": "Mon, 24 Oct 2022 05:12:46 +0500",
"msg_from": "Viktoria Shepard <we.viktory@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bug fix for pg_dump --clean"
},
{
"msg_contents": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com> writes:\n> When using pg_dump (or pg_restore) with option \"--clean\", there is some \n> SQL code to drop every objects at the beginning.\n\nYup ...\n\n> The DROP statement for a view involving circular dependencies is :\n> CREATE OR REPLACE VIEW [...]\n> (see commit message of d8c05aff for a much better explanation)\n> If the view is not in the \"public\" schema, and the target database is \n> empty, this statement fails, because the schema hasn't been created yet.\n> The attached patches are a TAP test which can be used to reproduce the \n> bug, and a proposed fix. They apply to the master branch.\n\nI am disinclined to accept this as a valid bug, because there's never\nbeen any guarantee that a --clean script would execute error-free in\na database that doesn't match what the source database contains.\n\n(The pg_dump documentation used to say that in so many words.\nI see that whoever added the --if-exists option was under the\nfond illusion that that fixes all cases, which it surely does not.\nWe need to back off the promises a bit there.)\n\nAn example of a case that won't execute error-free is if the view\nhaving a circular dependency includes a column of a non-built-in\ndata type. If you try to run that in an empty database, you'll\nget an error from the CREATE OR REPLACE VIEW's reference to that\ndata type. For instance, if I adjust your test case to make\nthe \"payload\" column be of type hstore, I get something like\n\npsql:dumpresult.sql:22: ERROR: type \"public.hstore\" does not exist\nLINE 4: NULL::public.hstore AS payload;\n ^\n\nThe same type of failure occurs for user-defined functions and\noperators that use a non-built-in type, and I'm sure there are\nmore cases in the same vein. 
But it gets *really* messy if\nthe target database isn't completely empty, but contains objects\nwith different properties than the dump script expects; for example,\nif the view being discussed here exists with a different column set\nthan the script thinks, or if the dependency chains aren't all the\nsame.\n\nIf this fix were cleaner I might be inclined to accept it anyway,\nbut it's not very clean at all --- for example, it's far from\nobvious to me what are the side-effects of changing the filter\nin RestoreArchive like that. Nor am I sure that the schema\nyou want to create is guaranteed to get dropped again later in\nevery use-case.\n\nSo I think mainly what we ought to do here is to adjust the\ndocumentation to make it clearer that --clean is not guaranteed\nto work without errors unless the target database has the same\nset of objects as the source. --if-exists can reduce the set\nof error cases, but not eliminate it. Possibly we should be\nmore enthusiastic about recommending --create --clean (ie,\ndrop and recreate the whole database) instead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 23 Oct 2022 21:01:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bug fix for pg_dump --clean"
},
{
"msg_contents": "On 10/24/22 03:01, Tom Lane wrote:\n> =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com> writes:\n>> When using pg_dump (or pg_restore) with option \"--clean\", there is some\n>> SQL code to drop every objects at the beginning.\n> \n> Yup ...\n> \n>> The DROP statement for a view involving circular dependencies is :\n>> CREATE OR REPLACE VIEW [...]\n>> (see commit message of d8c05aff for a much better explanation)\n>> If the view is not in the \"public\" schema, and the target database is\n>> empty, this statement fails, because the schema hasn't been created yet.\n>> The attached patches are a TAP test which can be used to reproduce the\n>> bug, and a proposed fix. They apply to the master branch.\n> \n> I am disinclined to accept this as a valid bug, because there's never\n> been any guarantee that a --clean script would execute error-free in\n> a database that doesn't match what the source database contains.\n> \n> (The pg_dump documentation used to say that in so many words.\n> I see that whoever added the --if-exists option was under the\n> fond illusion that that fixes all cases, which it surely does not.\n> We need to back off the promises a bit there.)\n> \n> An example of a case that won't execute error-free is if the view\n> having a circular dependency includes a column of a non-built-in\n> data type. If you try to run that in an empty database, you'll\n> get an error from the CREATE OR REPLACE VIEW's reference to that\n> data type. For instance, if I adjust your test case to make\n> the \"payload\" column be of type hstore, I get something like\n> \n> psql:dumpresult.sql:22: ERROR: type \"public.hstore\" does not exist\n> LINE 4: NULL::public.hstore AS payload;\n> ^\n> \n> The same type of failure occurs for user-defined functions and\n> operators that use a non-built-in type, and I'm sure there are\n> more cases in the same vein. 
But it gets *really* messy if\n> the target database isn't completely empty, but contains objects\n> with different properties than the dump script expects; for example,\n> if the view being discussed here exists with a different column set\n> than the script thinks, or if the dependency chains aren't all the\n> same.\n> \n> If this fix were cleaner I might be inclined to accept it anyway,\n> but it's not very clean at all --- for example, it's far from\n> obvious to me what are the side-effects of changing the filter\n> in RestoreArchive like that. Nor am I sure that the schema\n> you want to create is guaranteed to get dropped again later in\n> every use-case.\n> \n\nHi Tom, Viktoria,\n\nThank you for your review Viktoria!\n\nThank you for this detailed explanation, Tom! I didn't have great hope \nfor this patch. I thought that the TAP test could be accepted, but now I \ncan see that it is clearly useless.\n\n\n> So I think mainly what we ought to do here is to adjust the\n> documentation to make it clearer that --clean is not guaranteed\n> to work without errors unless the target database has the same\n> set of objects as the source. --if-exists can reduce the set\n> of error cases, but not eliminate it. Possibly we should be\n> more enthusiastic about recommending --create --clean (ie,\n> drop and recreate the whole database) instead.\n> \n\nI beleive a documentation patch would be useful, indeed.\n\nBest regards,\nFrédéric\n\n\n",
"msg_date": "Mon, 24 Oct 2022 09:02:46 +0200",
"msg_from": "Frédéric Yhuel <frederic.yhuel@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] minor bug fix for pg_dump --clean"
},
{
"msg_contents": "\nFYI, this was improved in a recent commit:\n\n\tcommit 75af0f401f\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\tDate: Fri Sep 29 13:13:54 2023 -0400\n\t\n\t Doc: improve description of dump/restore's --clean and --if-exists.\n\t\n\t Try to make these option descriptions a little clearer for novices.\n\t Per gripe from Attila Gulyás.\n\t\n\t Discussion: https://postgr.es/m/169590536647.3727336.11070254203649648453@wrigleys.postgresql.org\n\n\n---------------------------------------------------------------------------\n\nOn Mon, Oct 24, 2022 at 09:02:46AM +0200, Frédéric Yhuel wrote:\n> On 10/24/22 03:01, Tom Lane wrote:\n> > =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com> writes:\n> > > When using pg_dump (or pg_restore) with option \"--clean\", there is some\n> > > SQL code to drop every objects at the beginning.\n> > \n> > Yup ...\n> > \n> > > The DROP statement for a view involving circular dependencies is :\n> > > CREATE OR REPLACE VIEW [...]\n> > > (see commit message of d8c05aff for a much better explanation)\n> > > If the view is not in the \"public\" schema, and the target database is\n> > > empty, this statement fails, because the schema hasn't been created yet.\n> > > The attached patches are a TAP test which can be used to reproduce the\n> > > bug, and a proposed fix. 
They apply to the master branch.\n> > \n> > I am disinclined to accept this as a valid bug, because there's never\n> > been any guarantee that a --clean script would execute error-free in\n> > a database that doesn't match what the source database contains.\n> > \n> > (The pg_dump documentation used to say that in so many words.\n> > I see that whoever added the --if-exists option was under the\n> > fond illusion that that fixes all cases, which it surely does not.\n> > We need to back off the promises a bit there.)\n> > \n> > An example of a case that won't execute error-free is if the view\n> > having a circular dependency includes a column of a non-built-in\n> > data type. If you try to run that in an empty database, you'll\n> > get an error from the CREATE OR REPLACE VIEW's reference to that\n> > data type. For instance, if I adjust your test case to make\n> > the \"payload\" column be of type hstore, I get something like\n> > \n> > psql:dumpresult.sql:22: ERROR: type \"public.hstore\" does not exist\n> > LINE 4: NULL::public.hstore AS payload;\n> > ^\n> > \n> > The same type of failure occurs for user-defined functions and\n> > operators that use a non-built-in type, and I'm sure there are\n> > more cases in the same vein. But it gets *really* messy if\n> > the target database isn't completely empty, but contains objects\n> > with different properties than the dump script expects; for example,\n> > if the view being discussed here exists with a different column set\n> > than the script thinks, or if the dependency chains aren't all the\n> > same.\n> > \n> > If this fix were cleaner I might be inclined to accept it anyway,\n> > but it's not very clean at all --- for example, it's far from\n> > obvious to me what are the side-effects of changing the filter\n> > in RestoreArchive like that. 
Nor am I sure that the schema\n> > you want to create is guaranteed to get dropped again later in\n> > every use-case.\n> > \n> \n> Hi Tom, Viktoria,\n> \n> Thank you for your review Viktoria!\n> \n> Thank you for this detailed explanation, Tom! I didn't have great hope for\n> this patch. I thought that the TAP test could be accepted, but now I can see\n> that it is clearly useless.\n> \n> \n> > So I think mainly what we ought to do here is to adjust the\n> > documentation to make it clearer that --clean is not guaranteed\n> > to work without errors unless the target database has the same\n> > set of objects as the source. --if-exists can reduce the set\n> > of error cases, but not eliminate it. Possibly we should be\n> > more enthusiastic about recommending --create --clean (ie,\n> > drop and recreate the whole database) instead.\n> > \n> \n> I beleive a documentation patch would be useful, indeed.\n> \n> Best regards,\n> Frédéric\n> \n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 27 Oct 2023 18:55:12 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bug fix for pg_dump --clean"
}
] |
[
{
"msg_contents": "Inside *add_local_<>_reloption*, we should pass NoLock instead of\nthe magic 0 to init_<>_reloption, which makes more sense.\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 1 Sep 2022 16:18:49 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "use NoLock instead of the magic number 0"
}
] |
[
{
"msg_contents": "*TextDatumGetCString* calls palloc to alloc memory for the option\ntext datum, in some cases the the memory is allocated in\n*TopTransactionContext*, this may cause memory leak for a long\nrunning backend.\n---\n src/backend/access/common/reloptions.c | 1 +\n 1 file changed, 1 insertion(+)\n\ndiff --git a/src/backend/access/common/reloptions.c\nb/src/backend/access/common/reloptions.c\nindex 609329bb21..6076677aef 100644\n--- a/src/backend/access/common/reloptions.c\n+++ b/src/backend/access/common/reloptions.c\n@@ -1360,6 +1360,7 @@ untransformRelOptions(Datum options)\n val = (Node *) makeString(pstrdup(p));\n }\n result = lappend(result, makeDefElem(pstrdup(s), val, -1));\n+ pfree(s);\n }\n\n return result;\n-- \n2.33.0\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 1 Sep 2022 16:36:33 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH v1] fix potential memory leak in untransformRelOptions"
},
{
"msg_contents": "> On 1 Sep 2022, at 10:36, Junwang Zhao <zhjwpku@gmail.com> wrote:\n\n> *TextDatumGetCString* calls palloc to alloc memory for the option\n> text datum, in some cases the the memory is allocated in\n> *TopTransactionContext*, this may cause memory leak for a long\n> running backend.\n\nWouldn't that be a fairly small/contained leak in comparison to memory spent\nduring a long running transaction? Do you have any example of transforming\nreloptions in a loop into TopTransactionContext where it might add up?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 1 Sep 2022 14:14:52 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] fix potential memory leak in untransformRelOptions"
},
{
"msg_contents": "Junwang Zhao <zhjwpku@gmail.com> writes:\n> result = lappend(result, makeDefElem(pstrdup(s), val, -1));\n> + pfree(s);\n\nI wonder why it's pstrdup'ing s in the first place.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Sep 2022 10:10:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] fix potential memory leak in untransformRelOptions"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 10:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Junwang Zhao <zhjwpku@gmail.com> writes:\n> > result = lappend(result, makeDefElem(pstrdup(s), val, -1));\n> > + pfree(s);\n>\n> I wonder why it's pstrdup'ing s in the first place.\n>\nMaybe it's pstrdup'ing s so that the caller should take care of the free?\n\nI'm a little confused when we should call *pfree* and when we should not.\nA few lines before there is a call *text_to_cstring* in which it invokes\n*pfree* to free the unpacked text [0]. I'm just thinking that since *s* has\nbeen duplicated, we should free it, that's where the patch comes from.\n\n[0]:\n```\nchar *\ntext_to_cstring(const text *t)\n{\n /* must cast away the const, unfortunately */\n text *tunpacked = pg_detoast_datum_packed(unconstify(text *, t));\n int len = VARSIZE_ANY_EXHDR(tunpacked);\n char *result;\n\n result = (char *) palloc(len + 1);\n memcpy(result, VARDATA_ANY(tunpacked), len);\n result[len] = '\\0';\n\n if (tunpacked != t)\n pfree(tunpacked);\n\n return result;\n}\n```\n\n> regards, tom lane\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 1 Sep 2022 22:38:41 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] fix potential memory leak in untransformRelOptions"
},
{
"msg_contents": "Junwang Zhao <zhjwpku@gmail.com> writes:\n> I'm a little confused when we should call *pfree* and when we should not.\n> A few lines before there is a call *text_to_cstring* in which it invokes\n> *pfree* to free the unpacked text [0]. I'm just thinking that since *s* has\n> been duplicated, we should free it, that's where the patch comes from.\n\nBy and large, the server is designed so that small memory leaks don't\nmatter: the space will be reclaimed when the current memory context\nis deleted, and most code runs in reasonably short-lived contexts.\nIndividually pfree'ing such allocations is actually a net negative,\nbecause it costs cycles and code space.\n\nThere are places where a leak *does* matter, but unless you can\ndemonstrate that this is one, it's not worth changing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Sep 2022 13:13:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] fix potential memory leak in untransformRelOptions"
},
{
"msg_contents": "got it, thanks.\n\nOn Fri, Sep 2, 2022 at 01:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Junwang Zhao <zhjwpku@gmail.com> writes:\n> > I'm a little confused when we should call *pfree* and when we should not.\n> > A few lines before there is a call *text_to_cstring* in which it invokes\n> > *pfree* to free the unpacked text [0]. I'm just thinking that since *s*\n> has\n> > been duplicated, we should free it, that's where the patch comes from.\n>\n> By and large, the server is designed so that small memory leaks don't\n> matter: the space will be reclaimed when the current memory context\n> is deleted, and most code runs in reasonably short-lived contexts.\n> Individually pfree'ing such allocations is actually a net negative,\n> because it costs cycles and code space.\n>\n> There are places where a leak *does* matter, but unless you can\n> demonstrate that this is one, it's not worth changing.\n>\n> regards, tom lane\n>\n-- \nRegards\nJunwang Zhao",
"msg_date": "Fri, 2 Sep 2022 07:08:44 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] fix potential memory leak in untransformRelOptions"
},
{
"msg_contents": "On 2022-Sep-01, Tom Lane wrote:\n\n> Junwang Zhao <zhjwpku@gmail.com> writes:\n> > result = lappend(result, makeDefElem(pstrdup(s), val, -1));\n> > + pfree(s);\n> \n> I wonder why it's pstrdup'ing s in the first place.\n\nYeah, I think both the pstrdups in that function are useless. The\nDefElems can just point to the correct portion of the (already pstrdup'd\nby TextDatumGetCString) copy of optiondatums[i]. We modify that copy to\ninstall \\0 in the place where the = is, and that copy is not freed\nanywhere.\n\ndiff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c\nindex 609329bb21..0aa4b334ab 100644\n--- a/src/backend/access/common/reloptions.c\n+++ b/src/backend/access/common/reloptions.c\n@@ -1357,9 +1357,9 @@ untransformRelOptions(Datum options)\n \t\tif (p)\n \t\t{\n \t\t\t*p++ = '\\0';\n-\t\t\tval = (Node *) makeString(pstrdup(p));\n+\t\t\tval = (Node *) makeString(p);\n \t\t}\n-\t\tresult = lappend(result, makeDefElem(pstrdup(s), val, -1));\n+\t\tresult = lappend(result, makeDefElem(s, val, -1));\n \t}\n \n \treturn result;\n\nI think these pstrdups were already not necessary when the function was\nadded in 265f904d8f25, because textout() was already known to return a\npalloc'ed copy of its input; but later 220db7ccd8c8 made this contract\neven more explicit.\n\nKeeping 's' and removing the pstrdups better uses memory, because we\nhave a single palloc'ed chunk per option rather than two.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 9 Sep 2022 16:20:50 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] fix potential memory leak in untransformRelOptions"
},
{
"msg_contents": "On 2022-Sep-09, Alvaro Herrera wrote:\n\n> Keeping 's' and removing the pstrdups better uses memory, because we\n> have a single palloc'ed chunk per option rather than two.\n\nPushed. This is pretty much cosmetic, so no backpatch.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n\n",
"msg_date": "Tue, 13 Sep 2022 12:01:47 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] fix potential memory leak in untransformRelOptions"
}
] |
[
{
"msg_contents": "Hi,\n\nwalsenders currently read WAL data from disk to send it to all\nreplicas (standbys or subscribers connected via streaming or logical\nreplication respectively). This means that walsenders have to wait\nuntil the WAL data is flushed to the disk. There are a few issues with\nthis approach:\n\n1. IO saturation on the primary. The amount of read IO required for\nall walsenders combined can be huge given the sheer number of\nwalsenders typically present at any given point of time in production\nenvironments (e.g. for high availability, disaster recovery,\nread-replicas or subscribers) and life cycle of WAL senders is usually\nlonger (one maintains replicas for a long period of time in\nproduction). For example, a quick 30 minute pgbench run with 1\nprimary, 1 async standby, and 1 sync standby shows that 35 GB of WAL\nis read from disk on the primary with 3.3 million times for 2\nwalsenders [3].\n2. Increased query response times, particularly for synchronous\nstandbys, because of WAL flush at primary and standbys usually happen\nat different times.\n3. Increased replication lag, especially if the WAL data is read from\ndisk despite it being present in wal_buffers at times.\n\nTo improve these issues, I’m proposing that, whenever possible, to let\nwalsenders send WAL directly from wal_buffers to replicas before it is\nflushed to disk. This idea is also noted elsewhere [1]. Standbys can\nchoose to store the received WAL in wal_buffers (note that the\nwal_buffers in standbys are allocated but not used until the\npromotion) and flush if they are full OR store WAL directly to disk,\nbypassing wal_buffers, but replay only up the flush LSN sent by\nprimary. Logical subscribers can choose to not apply the WAL beyond\nthe flush LSN sent by the primary. This approach has the following\nadvantages:\n\n1. Reduces disk IO or read system calls on the primary.\n2. Reduces replication lag.\n3. Enables better use of allocated wal_buffers on the standbys.\n4. 
Enables parallel flushing of WAL to disks on both primary and standbys.\n5. Disallows async standbys or subscribers getting ahead of the sync\nstandbys, discussed in the thread at [1], reducing efforts required\nduring failovers.\n\nThis approach has a couple of challenges:\n\n1. Increases stress on wal_buffers - right now there are no readers\nfor wal_buffers on the primary. This could be problematic if there are\nboth many concurrent readers and concurrent writers.\n2. wal_buffers hit ratio can be low for write-heavy workloads. In this\ncase disk reads are inevitable.\n3. Requires a change to replication protocol. We might have to send\nflush LSN to replicas and receive their flush LSN as an\nacknowledgement.\n4. Requires careful design for replicas not to replay beyond the\nreceived flush LSN. For example, what happens if the wal_buffers get\nfull, should we write the WAL to disk? What happens if the primary or\nreplicas crash? Will they have to get the unwritten, lost WAL present\nin wal_buffers again?\n\nI would like to summarize the whole work into the following 3\nindependent items and focus individually on each of them:\n\n1. Allow walsenders to read WAL directly from wal_buffers when\npossible - initial patches and results will be posted soon. This has\nits own advantages, the comment [2] talks about it.\n2. Allow WAL writes and flush to disk happen nearly at the same time\nboth at primary and standbys.\n3. 
Disallow async standbys or subscribers getting ahead of the sync standbys.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/20220309020123.sneaoijlg3rszvst%40alap3.anarazel.de\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/transam/xlogreader.c;h=f17e80948d17ff0e2e92fd1677d1a0da06778fc7;hb=7fed801135bae14d63b11ee4a10f6083767046d8#l1457\n[3] shared_buffers = 8GB\nmax_wal_size = 32GB\ncheckpoint_timeout = 15min\ntrack_wal_io_timing = on\nwal_buffers = 16MB (auto-tuned values, not manually set)\nUbuntu VM: c5.4xlarge - AWS EC2 instance\nRAM: 32GB\nVCores: 16\nSSD: 512GB\n\n./pgbench —initialize —scale=300 postgres\n./pgbench —jobs=16 —progress=300 —client=32 —time=1800 —username=ubuntu postgres\n\n-[ RECORD 1 ]------------+---------------\napplication_name | async_standby1\nwal_read | 1685714\nwal_read_bytes | 17726209880\nwal_read_time | 7746.622\n-[ RECORD 2 ]------------+---------------\napplication_name | sync_standby1\nwal_read | 1685771\nwal_read_bytes | 17726209880\nwal_read_time | 6002.679\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Thu, 1 Sep 2022 17:41:44 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal: Allow walsenders to send WAL directly from wal_buffers to\n replicas"
}
] |
[
{
"msg_contents": "Hi,\nExcuse me for posting on this thread.\n\nCoverity has a complaints about aset.c\nCID 1497225 (#1 of 2): Out-of-bounds write (OVERRUN)3. overrun-local:\nOverrunning\narray set->freelist of 11 8-byte elements at element index 1073741823 (byte\noffset 8589934591) using index fidx (which evaluates to 1073741823).\n\nCID 1497225 (#2 of 2): Out-of-bounds write (OVERRUN)3. overrun-local:\nOverrunning\narray set->freelist of 11 8-byte elements at element index 1073741823 (byte\noffset 8589934591) using index fidx (which evaluates to 1073741823).\n\nI think that this is an oversight.\n\ndiff --git a/src/backend/utils/mmgr/aset.c b/src/backend/utils/mmgr/aset.c\nindex b6eeb8abab..8f709514b2 100644\n--- a/src/backend/utils/mmgr/aset.c\n+++ b/src/backend/utils/mmgr/aset.c\n@@ -1024,7 +1024,7 @@ AllocSetFree(void *pointer)\n }\n else\n {\n- int fidx = MemoryChunkGetValue(chunk);\n+ Size fidx = MemoryChunkGetValue(chunk);\n AllocBlock block = MemoryChunkGetBlock(chunk);\n AllocFreeListLink *link = GetFreeListLink(chunk);\n\n\nMemoryChunkGetValue return Size not int.\n\nNot sure if this fix is enough.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 1 Sep 2022 10:27:24 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nNow that we have some optimized linear search routines [0], I thought I'd\nquickly check whether we could use them elsewhere. To start, I took\nanother look at a previously posted patch [1] and noticed two potentially\nuseful applications of pg_lfind32(). The attached patches replace the\nopen-coded linear searches with calls to pg_lfind32(). I haven't done any\nperformance analysis with these patches yet, and the overall impact might\nbe limited, but it seemed like low-hanging fruit.\n\nI'm hoping to spend a bit more time looking for additional applications of\nthe pg_lfind*() suite of functions (and anywhere else where SIMD might be\nuseful, really). If you have any ideas in mind, I'm all ears.\n\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/include/port/pg_lfind.h\n[1] https://postgr.es/m/20220802221301.GA742739%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 1 Sep 2022 11:51:53 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "make additional use of optimized linear search routines"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 2:52 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> I'm hoping to spend a bit more time looking for additional applications of\n> the pg_lfind*() suite of functions (and anywhere else where SIMD might be\n> useful, really). If you have any ideas in mind, I'm all ears.\n\n\n+1 for the proposal. I did some simple grep work in the source codes but\nnot too much outputs besides the two places addressed in your patches.\n\nHere are some places I noticed that might be optimized with pg_lfind*().\nBut 1) these places have issues that arguments differ in signedness, and\n2) I'm not sure whether they are performance-sensitive or not.\n\nIn check_valid_internal_signature()\n\n for (int i = 0; i < nargs; i++)\n {\n if (declared_arg_types[i] == ret_type)\n return NULL; /* OK */\n }\n\n\nIn validateFkOnDeleteSetColumns()\n\n for (int j = 0; j < numfks; j++)\n {\n if (fkattnums[j] == setcol_attnum)\n {\n seen = true;\n break;\n }\n }\n\nIn pg_isolation_test_session_is_blocked()\n\n for (i = 0; i < num_blocking_pids; i++)\n for (j = 0; j < num_interesting_pids; j++)\n {\n if (blocking_pids[i] == interesting_pids[j])\n PG_RETURN_BOOL(true);\n }\n\nIn dotrim()\n\n for (i = 0; i < setlen; i++)\n {\n if (str_ch == set[i])\n break;\n }\n if (i >= setlen)\n break; /* no match here */\n\nAnd the function has_lock_conflicts().\n\nThanks\nRichard",
"msg_date": "Fri, 2 Sep 2022 20:15:46 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: make additional use of optimized linear search routines"
},
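The signedness caveat Richard raises is concrete at the `pg_isolation_test_session_is_blocked()` site: the PID arrays are `int`, so a conversion would have to reinterpret them as `uint32`. A hypothetical rewrite (not a proposed patch; the scalar `pg_lfind32` below again stands in for the real header) shows where the friction is:

```c
#include <stdbool.h>
#include <stdint.h>

/* Scalar stand-in for pg_lfind32(); see src/include/port/pg_lfind.h */
static bool
pg_lfind32(uint32_t key, const uint32_t *base, uint32_t nelem)
{
	for (uint32_t i = 0; i < nelem; i++)
		if (base[i] == key)
			return true;
	return false;
}

/*
 * Hypothetical rewrite of the nested-loop PID scan quoted above.  The
 * arrays are int in the backend, so a real patch would need to justify
 * viewing them as uint32_t -- that width/signedness mismatch is exactly
 * why these sites are less attractive to convert.
 */
static bool
any_pid_blocked(const int *blocking_pids, int num_blocking_pids,
				const int *interesting_pids, int num_interesting_pids)
{
	_Static_assert(sizeof(int) == sizeof(uint32_t), "int must be 32 bits");

	for (int i = 0; i < num_blocking_pids; i++)
	{
		if (pg_lfind32((uint32_t) blocking_pids[i],
					   (const uint32_t *) interesting_pids,
					   (uint32_t) num_interesting_pids))
			return true;
	}
	return false;
}
```

As the thread concludes, these lists are small and not performance-sensitive, so the cast gymnastics likely aren't worth it.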
{
"msg_contents": "On Fri, Sep 02, 2022 at 08:15:46PM +0800, Richard Guo wrote:\n> +1 for the proposal. I did some simple grep work in the source codes but\n> not too much outputs besides the two places addressed in your patches.\n\nThanks for taking a look!\n\n> Here are some places I noticed that might be optimized with pg_lfind*().\n> But 1) these places have issues that arguments differ in signedness, and\n> 2) I'm not sure whether they are performance-sensitive or not.\n\nYeah, I doubt that these typically deal with many elements or are\nperformance-sensitive enough to bother with.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 2 Sep 2022 15:16:11 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: make additional use of optimized linear search routines"
},
{
"msg_contents": "On Fri, Sep 02, 2022 at 03:16:11PM -0700, Nathan Bossart wrote:\n> On Fri, Sep 02, 2022 at 08:15:46PM +0800, Richard Guo wrote:\n>> +1 for the proposal. I did some simple grep work in the source codes but\n>> not too much outputs besides the two places addressed in your patches.\n> \n> Thanks for taking a look!\n\nOhoh. This sounds like a good idea to me, close to what John has\napplied lately. I'll take a closer look..\n--\nMichael",
"msg_date": "Sat, 3 Sep 2022 10:06:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: make additional use of optimized linear search routines"
},
{
"msg_contents": "On Sat, Sep 03, 2022 at 10:06:58AM +0900, Michael Paquier wrote:\n> Ohoh. This sounds like a good idea to me, close to what John has\n> applied lately. I'll take a closer look..\n\nSo, the two code paths patched here are rather isolated. The one in\nTransactionIdIsInProgress() requires an overflowed set of subxids\nstill running, something similar to what the isolation test\nsubxid-overflow does. XidIsConcurrent() is also kind of hard to\nreason about with a benchmark.\n\nAnyway, I did not know about the work done with SIMD instructions in\npg_lfind.h and after playing the API I have run some micro benchmarks\nwith on pg_lfind32() and I can see some improvements. With a range of\n100~10k elements in a fixed number of repeated calls with a for loop\nand lfind(), I could not get up to the 40% speedup. That was somewhat\ncloser to 15%~20% on x86 and 20%~25% with arm64. There is a trend \nwhere things got better with a higher number of elements with\nlfind().\n\nIn short, switching those code paths to use the linear search routines\nlooks like a good thing in the long-term, so I would like to apply\nthis patch. If you have any comments or objections, please feel\nfree.\n--\nMichael",
"msg_date": "Wed, 21 Sep 2022 14:40:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: make additional use of optimized linear search routines"
},
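A rough standalone harness in the spirit of the microbenchmark Michael describes — this is not his test program; the sizes and iteration counts are arbitrary. It fills an array and probes with a key that is never present, so every call scans all `nelem` elements, which is the regime the 15%~25% numbers are about:

```c
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

/* Scalar stand-in; the real pg_lfind32() vectorizes this loop */
static bool
pg_lfind32(uint32_t key, const uint32_t *base, uint32_t nelem)
{
	for (uint32_t i = 0; i < nelem; i++)
		if (base[i] == key)
			return true;
	return false;
}

/* Time `iters` guaranteed-miss probes over `nelem` elements; returns seconds. */
static double
bench_misses(uint32_t nelem, int iters)
{
	static uint32_t buf[10000];
	volatile bool	found = false;
	clock_t			start;

	if (nelem > 10000)
		nelem = 10000;
	for (uint32_t i = 0; i < nelem; i++)
		buf[i] = i;				/* keys 0..nelem-1 */

	start = clock();
	for (int i = 0; i < iters; i++)
		found |= pg_lfind32(UINT32_MAX, buf, nelem);	/* never present */
	(void) found;				/* volatile sink keeps the loop alive */
	return (double) (clock() - start) / CLOCKS_PER_SEC;
}
```

Swapping the scalar stand-in for the real header and comparing against an open-coded loop is how one would reproduce the "speedup grows with element count" trend reported here.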
{
    "msg_contents": "On Wed, Sep 21, 2022 at 1:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> In short, switching those code paths to use the linear search routines\n> looks like a good thing in the long-term, so I would like to apply\n> this patch. If you have any comments or objections, please feel\n> free.\n\n\nYeah, I agree that the changes in the patch are meaningful even if the\nperformance gain is limited.\n\nI wonder if there are other code paths we can replace with the linear\nsearch routines. I tried to search for them but no luck.\n\nThanks\nRichard",
"msg_date": "Wed, 21 Sep 2022 14:28:08 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: make additional use of optimized linear search routines"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 02:28:08PM +0800, Richard Guo wrote:\n> I wonder if there are other code paths we can replace with the linear\n> search routines. I tried to search for them but no luck.\n\nI have been looking at a couple of simple patterns across the tree but\nno luck here either. Well, if someone spots something, it could\nalways be done later. For now I have applied the bits discussed on\nthis thread.\n--\nMichael",
"msg_date": "Thu, 22 Sep 2022 09:52:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: make additional use of optimized linear search routines"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 09:52:57AM +0900, Michael Paquier wrote:\n> On Wed, Sep 21, 2022 at 02:28:08PM +0800, Richard Guo wrote:\n>> I wonder if there are other code paths we can replace with the linear\n>> search routines. I tried to search for them but no luck.\n> \n> I have been looking at a couple of simple patterns across the tree but\n> no luck here either. Well, if someone spots something, it could\n> always be done later. For now I have applied the bits discussed on\n> this thread.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Sep 2022 21:12:41 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: make additional use of optimized linear search routines"
}
] |
[
{
"msg_contents": "Revert SQL/JSON features\n\nThe reverts the following and makes some associated cleanups:\n\n commit f79b803dc: Common SQL/JSON clauses\n commit f4fb45d15: SQL/JSON constructors\n commit 5f0adec25: Make STRING an unreserved_keyword.\n commit 33a377608: IS JSON predicate\n commit 1a36bc9db: SQL/JSON query functions\n commit 606948b05: SQL JSON functions\n commit 49082c2cc: RETURNING clause for JSON() and JSON_SCALAR()\n commit 4e34747c8: JSON_TABLE\n commit fadb48b00: PLAN clauses for JSON_TABLE\n commit 2ef6f11b0: Reduce running time of jsonb_sqljson test\n commit 14d3f24fa: Further improve jsonb_sqljson parallel test\n commit a6baa4bad: Documentation for SQL/JSON features\n commit b46bcf7a4: Improve readability of SQL/JSON documentation.\n commit 112fdb352: Fix finalization for json_objectagg and friends\n commit fcdb35c32: Fix transformJsonBehavior\n commit 4cd8717af: Improve a couple of sql/json error messages\n commit f7a605f63: Small cleanups in SQL/JSON code\n commit 9c3d25e17: Fix JSON_OBJECTAGG uniquefying bug\n commit a79153b7a: Claim SQL standard compliance for SQL/JSON features\n commit a1e7616d6: Rework SQL/JSON documentation\n commit 8d9f9634e: Fix errors in copyfuncs/equalfuncs support for JSON node types.\n commit 3c633f32b: Only allow returning string types or bytea from json_serialize\n commit 67b26703b: expression eval: Fix EEOP_JSON_CONSTRUCTOR and EEOP_JSONEXPR size.\n\nThe release notes are also adjusted.\n\nBackpatch to release 15.\n\nDiscussion: https://postgr.es/m/40d2c882-bcac-19a9-754d-4299e1d87ac7@postgresql.org\n\nBranch\n------\nREL_15_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/96ef3237bf741c12390003e90a4d7115c0c854b7\n\nModified Files\n--------------\ndoc/src/sgml/func.sgml | 1065 +-----------\ndoc/src/sgml/keywords/sql2016-02-reserved.txt | 1 -\ndoc/src/sgml/release-15.sgml | 93 --\nsrc/backend/catalog/sql_features.txt | 30 +-\nsrc/backend/commands/explain.c | 8 
+-\nsrc/backend/executor/execExpr.c | 359 +----\nsrc/backend/executor/execExprInterp.c | 743 ---------\nsrc/backend/executor/nodeTableFuncscan.c | 23 +-\nsrc/backend/jit/llvm/llvmjit_expr.c | 18 -\nsrc/backend/jit/llvm/llvmjit_types.c | 3 -\nsrc/backend/nodes/copyfuncs.c | 566 -------\nsrc/backend/nodes/equalfuncs.c | 442 -----\nsrc/backend/nodes/makefuncs.c | 122 --\nsrc/backend/nodes/nodeFuncs.c | 486 ------\nsrc/backend/nodes/outfuncs.c | 175 --\nsrc/backend/nodes/readfuncs.c | 213 ---\nsrc/backend/optimizer/path/costsize.c | 3 +-\nsrc/backend/optimizer/util/clauses.c | 78 -\nsrc/backend/parser/Makefile | 1 -\nsrc/backend/parser/gram.y | 1115 +------------\nsrc/backend/parser/parse_clause.c | 12 +-\nsrc/backend/parser/parse_collate.c | 7 -\nsrc/backend/parser/parse_expr.c | 1502 -----------------\nsrc/backend/parser/parse_jsontable.c | 732 ---------\nsrc/backend/parser/parse_relation.c | 7 +-\nsrc/backend/parser/parse_target.c | 40 -\nsrc/backend/parser/parser.c | 16 -\nsrc/backend/utils/adt/format_type.c | 4 -\nsrc/backend/utils/adt/formatting.c | 45 +-\nsrc/backend/utils/adt/json.c | 553 +------\nsrc/backend/utils/adt/jsonb.c | 352 +---\nsrc/backend/utils/adt/jsonb_util.c | 39 +-\nsrc/backend/utils/adt/jsonfuncs.c | 71 +-\nsrc/backend/utils/adt/jsonpath.c | 257 ---\nsrc/backend/utils/adt/jsonpath_exec.c | 844 +---------\nsrc/backend/utils/adt/ruleutils.c | 719 +--------\nsrc/backend/utils/misc/queryjumble.c | 72 -\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_aggregate.dat | 22 -\nsrc/include/catalog/pg_proc.dat | 74 -\nsrc/include/executor/execExpr.h | 98 --\nsrc/include/executor/executor.h | 2 -\nsrc/include/nodes/makefuncs.h | 12 -\nsrc/include/nodes/nodes.h | 28 -\nsrc/include/nodes/parsenodes.h | 287 ----\nsrc/include/nodes/primnodes.h | 265 +--\nsrc/include/parser/kwlist.h | 26 -\nsrc/include/parser/parse_clause.h | 3 -\nsrc/include/utils/formatting.h | 4 -\nsrc/include/utils/json.h | 26 -\nsrc/include/utils/jsonb.h | 33 
-\nsrc/include/utils/jsonfuncs.h | 7 -\nsrc/include/utils/jsonpath.h | 37 -\nsrc/interfaces/ecpg/preproc/ecpg.trailer | 41 +-\nsrc/interfaces/ecpg/preproc/parse.pl | 2 -\nsrc/interfaces/ecpg/preproc/parser.c | 14 -\nsrc/test/regress/expected/json_sqljson.out | 24 -\nsrc/test/regress/expected/jsonb_sqljson.out | 2135 -------------------------\nsrc/test/regress/expected/opr_sanity.out | 6 +-\nsrc/test/regress/expected/sqljson.out | 1320 ---------------\nsrc/test/regress/parallel_schedule | 2 +-\nsrc/test/regress/sql/json_sqljson.sql | 15 -\nsrc/test/regress/sql/jsonb_sqljson.sql | 977 -----------\nsrc/test/regress/sql/opr_sanity.sql | 6 +-\nsrc/test/regress/sql/sqljson.sql | 471 ------\nsrc/tools/pgindent/typedefs.list | 15 -\n66 files changed, 350 insertions(+), 16420 deletions(-)",
"msg_date": "Thu, 01 Sep 2022 21:13:27 +0000",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "pgsql: Revert SQL/JSON features"
},
{
"msg_contents": "Re: Andrew Dunstan\n> Revert SQL/JSON features\n> \n> The reverts the following and makes some associated cleanups:\n\n-void\n+static void\n json_categorize_type(Oid typoid,\n JsonTypeCategory *tcategory,\n Oid *outfuncoid)\n\nThis chunk broke PostGIS 3.3.0 compiled with 15beta3, when used with\n15beta4:\n\npsql -Xc 'CREATE EXTENSION postgis'\nERROR: could not load library \"/usr/lib/postgresql/15/lib/postgis-3.so\": /usr/lib/postgresql/15/lib/postgis-3.so: undefined symbol: json_categorize_type\n\nThe PostGIS source has this comment:\n\n * The following code was all cut and pasted directly from\n * json.c from the Postgres source tree as of 2019-03-28.\n * It would be far better if these were exported from the\n * backend so we could just use them here. Maybe someday.\n * Sequel: 2022-04-04 That some day finally came in PG15\n...\n#if POSTGIS_PGSQL_VERSION < 170\nstatic void\njson_categorize_type(Oid typoid,\n JsonTypeCategory *tcategory,\n Oid *outfuncoid)\n\nThe \"< 17\" part was added on 2022-09-03, probably because of this\nbreakage.\n\nRecompiling the (unmodified) 3.3.0 against 15beta4 seems to fix the\nproblem.\n\nSo, there is probably no issue here, but I suggest this \"static\" might\nbe considered to be removed again so PostGIS can use it.\n\nChristoph\n\n\n",
"msg_date": "Wed, 7 Sep 2022 11:01:22 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "PostGIS and json_categorize_type (Re: pgsql: Revert SQL/JSON\n features)"
},
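The breakage mechanism here is worth spelling out: the revert flipped `json_categorize_type()` back to `static`, so PostGIS binaries built against beta3 (which exported it) reference a symbol beta4 no longer provides. The idiom PostGIS uses — a private fallback copy compiled only on server versions that don't export the helper — can be sketched generically. Names, the placeholder logic, and the version cutoff below are all illustrative, mirroring the `POSTGIS_PGSQL_VERSION < 170` guard quoted above:

```c
/*
 * Extension-side pattern: carry a private copy of a backend helper until
 * the first server version known to export it.  PG_VERSION_NUM_ASSUMED is
 * a stand-in for the real version macro an extension would test.
 */
#define PG_VERSION_NUM_ASSUMED 150000

#if PG_VERSION_NUM_ASSUMED < 170000
/* private fallback; compiled out once the server exports the symbol */
static int
categorize_type_fallback(unsigned int typoid)
{
	/* placeholder logic standing in for json_categorize_type() */
	return (int) (typoid % 4);
}
#define categorize_type(oid) categorize_type_fallback(oid)
#endif
```

The cost of the pattern, as this thread shows, is that the extension must be recompiled whenever the server's symbol visibility changes under it.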
{
"msg_contents": "Re: To Andrew Dunstan\n> The \"< 17\" part was added on 2022-09-03, probably because of this\n> breakage.\n> \n> Recompiling the (unmodified) 3.3.0 against 15beta4 seems to fix the\n> problem.\n\nErr sorry, my local build environment was still on beta3.\n\nPostGIS 3.3.0 is broken now with 15beta4:\n\n10:52:29 lwgeom_out_geojson.c:54:35: error: unknown type name ‘JsonTypeCategory’\n10:52:29 54 | JsonTypeCategory tcategory, Oid outfuncoid,\n10:52:29 | ^~~~~~~~~~~~~~~~\n...\n\n> So, there is probably no issue here, but I suggest this \"static\" might\n> be considered to be removed again so PostGIS can use it.\n\nI guess either PostgreSQL or PostGIS need to make a new release to fix that.\n\nChristoph\n\n\n",
"msg_date": "Wed, 7 Sep 2022 11:07:35 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGIS and json_categorize_type (Re: pgsql: Revert SQL/JSON\n features)"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 11:07:35AM +0200, Christoph Berg wrote:\n> I guess either PostgreSQL or PostGIS need to make a new release to fix that.\n\nPostgis is already planning on it.\nhttps://lists.osgeo.org/pipermail/postgis-devel/2022-September/thread.html\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 7 Sep 2022 04:11:09 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [postgis-devel] PostGIS and json_categorize_type (Re: pgsql:\n Revert SQL/JSON features)"
},
{
"msg_contents": "Re: Justin Pryzby\n> > I guess either PostgreSQL or PostGIS need to make a new release to fix that.\n> \n> Postgis is already planning on it.\n> https://lists.osgeo.org/pipermail/postgis-devel/2022-September/thread.html\n\nThanks. I was skimming the postgis-devel list, but did not read the\nsubjects carefully enough to spot it.\n\nChristoph\n\n\n",
"msg_date": "Wed, 7 Sep 2022 11:15:08 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: [postgis-devel] PostGIS and json_categorize_type (Re: pgsql:\n Revert SQL/JSON features)"
}
] |
[
{
"msg_contents": "find_my_exec() wants to obtain an absolute, symlink-free path\nto the program's own executable, for what seem to me good\nreasons. However, chasing down symlinks is left to its\nsubroutine resolve_symlinks(), which does this:\n\n * To resolve a symlink properly, we have to chdir into its directory and\n * then chdir to where the symlink points; otherwise we may fail to\n * resolve relative links correctly (consider cases involving mount\n * points, for example). After following the final symlink, we use\n * getcwd() to figure out where the heck we're at.\n\nand then afterwards it has to chdir back to the original cwd.\nThat last step is a bit of a sore spot, because sometimes\n(especially in sudo situations) we may not have the privileges\nnecessary to do that; I think this is the cause of the complaint\nat [1]. Anyway the whole thing seems a bit excessively Rube\nGoldbergian. I'm wondering why we couldn't just read the\nsymlink(s), concatenate them together, and use canonicalize_path()\nto clean up any mess.\n\nThis code was mine originally (336969e49), but I sure don't\nremember why I wrote it like that. I know we didn't have a\nrobust version of canonicalize_path() then, and that may have\nbeen the main issue, but that offhand comment about mount\npoints bothers me. But I can't reconstruct precisely what\nI was worried about there. The only contemporaneous discussion\nthread I can find is [2], which doesn't go into coding details.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAH8yC8kOj0pmHF1RbK2Gb2t4YCcNG-5h0TwZ7yxk3Hzw6C0Otg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/4973.1099605411%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 01 Sep 2022 19:39:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Can we avoid chdir'ing in resolve_symlinks() ?"
},
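The "read the symlinks, concatenate, and clean up" alternative Tom floats would lean on `canonicalize_path()`-style lexical cleanup. A minimal model of just the cleanup half (absolute paths only; buffer sizes arbitrary; the `readlink()` loop is omitted) — note that collapsing `..` lexically is exactly what can go wrong when an intermediate component is a symlink into another directory, which is plausibly what the mount-point comment was guarding against:

```c
#include <stdio.h>
#include <string.h>

/*
 * Collapse "." and ".." components in an absolute path, purely lexically,
 * in the spirit of canonicalize_path().  Lexical ".." removal silently
 * gives the wrong answer if a crossed component was a symlink.
 */
static void
canonicalize(char *path)
{
	char		buf[1024];
	char	   *comps[64];
	int			n = 0;

	snprintf(buf, sizeof(buf), "%s", path);
	for (char *tok = strtok(buf, "/"); tok; tok = strtok(NULL, "/"))
	{
		if (strcmp(tok, ".") == 0)
			continue;
		if (strcmp(tok, "..") == 0)
		{
			if (n > 0)
				n--;			/* pop the previous component */
			continue;
		}
		if (n < 64)
			comps[n++] = tok;
	}

	path[0] = '\0';
	for (int i = 0; i < n; i++)
	{
		strcat(path, "/");
		strcat(path, comps[i]);
	}
	if (path[0] == '\0')
		strcpy(path, "/");
}
```

The output never grows past the input, so rewriting in place is safe here.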
{
    "msg_contents": "On Thu, 1 Sept 2022 at 19:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nThis code was mine originally (336969e49), but I sure don't\n> remember why I wrote it like that. I know we didn't have a\n> robust version of canonicalize_path() then, and that may have\n> been the main issue, but that offhand comment about mount\n> points bothers me. But I can't reconstruct precisely what\n> I was worried about there. The only contemporaneous discussion\n> thread I can find is [2], which doesn't go into coding details.\n>\n\nDoes this happen in a context where we need to worried about the directory\nstructure changing under us, either accidentally or maliciously?\n\nI'm wondering because I understand cd'ing through the structure can avoid\nsome of the related problems and might be the reason for doing it that way\noriginally. My impression is that the modern equivalent would be to use\nopenat() with O_PATH to step through the hierarchy. But then I'm not clear\non how to get back to the absolute path, given a file descriptor for the\nfinal directory.",
"msg_date": "Thu, 1 Sep 2022 21:27:12 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> On Thu, 1 Sept 2022 at 19:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This code was mine originally (336969e49), but I sure don't\n>> remember why I wrote it like that.\n\n> Does this happen in a context where we need to worried about the directory\n> structure changing under us, either accidentally or maliciously?\n\nWell, one of the reasons it'd be a good idea to not change cwd is\nthat then you don't have to worry about that moving while you're\nmessing around. But everything else that we're considering here is\neither a component of PATH or a directory/symlink associated with\nthe PG installation. If $badguy has control of any of that,\nyou've already lost, so I'm not excited about worrying about it.\n\n> I'm wondering because I understand cd'ing through the structure can avoid\n> some of the related problems and might be the reason for doing it that way\n> originally.\n\nPretty sure I was not thinking about that. I might have been\nthinking about AFS installations, which IIRC often have two nominal\npaths associated with them. But I don't recall any details about how\nthat works, and anyway the comment says nothing about AFS.\n\n> My impression is that the modern equivalent would be to use\n> openat() with O_PATH to step through the hierarchy. But then I'm not clear\n> on how to get back to the absolute path, given a file descriptor for the\n> final directory.\n\nYeah. The point here is not to open a particular file, but to derive\na pathname string for where the file is.\n\nWhat I'm thinking right at the moment is that we don't necessarily\nhave to have the exact path that getcwd() would report. We need\n*some* path-in-absolute-form that works. This leads me to think\nthat both the AFS case and the mount-point case are red herrings.\nBut I can't shake the feeling that I'm missing something.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Sep 2022 22:48:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "\nOn 2022-09-01 Th 19:39, Tom Lane wrote:\n> find_my_exec() wants to obtain an absolute, symlink-free path\n> to the program's own executable, for what seem to me good\n> reasons. However, chasing down symlinks is left to its\n> subroutine resolve_symlinks(), which does this:\n>\n> * To resolve a symlink properly, we have to chdir into its directory and\n> * then chdir to where the symlink points; otherwise we may fail to\n> * resolve relative links correctly (consider cases involving mount\n> * points, for example). After following the final symlink, we use\n> * getcwd() to figure out where the heck we're at.\n>\n> and then afterwards it has to chdir back to the original cwd.\n> That last step is a bit of a sore spot, because sometimes\n> (especially in sudo situations) we may not have the privileges\n> necessary to do that; I think this is the cause of the complaint\n> at [1]. Anyway the whole thing seems a bit excessively Rube\n> Goldbergian. I'm wondering why we couldn't just read the\n> symlink(s), concatenate them together, and use canonicalize_path()\n> to clean up any mess.\n>\n> This code was mine originally (336969e49), but I sure don't\n> remember why I wrote it like that. I know we didn't have a\n> robust version of canonicalize_path() then, and that may have\n> been the main issue, but that offhand comment about mount\n> points bothers me. But I can't reconstruct precisely what\n> I was worried about there. The only contemporaneous discussion\n> thread I can find is [2], which doesn't go into coding details.\n>\n> Thoughts?\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/flat/CAH8yC8kOj0pmHF1RbK2Gb2t4YCcNG-5h0TwZ7yxk3Hzw6C0Otg%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/flat/4973.1099605411%40sss.pgh.pa.us\n>\n>\n\nThese days there seem to be library functions that do this, realpath(3)\nand canonicalize_file_name(3). The latter is what seems to be called by\nreadlink(1). 
Should we be using one of those? I don't know how portable\nthey are. I don't see them here :-(\n<https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/crt-alphabetical-function-reference?view=msvc-170>\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 3 Sep 2022 11:06:54 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-09-01 Th 19:39, Tom Lane wrote:\n>> find_my_exec() wants to obtain an absolute, symlink-free path\n>> to the program's own executable, for what seem to me good\n>> reasons. However, chasing down symlinks is left to its\n>> subroutine resolve_symlinks(), which does this:\n\n> These days there seem to be library functions that do this, realpath(3)\n> and canonicalize_file_name(3). The latter is what seems to be called by\n> readlink(1). Should we be using one of those?\n\nOh! I see realpath() in POSIX, but not canonicalize_file_name().\nIt does look like realpath() would be helpful here, although if\nit's not present on Windows that's a problem.\n\nQuick googling suggests that _fullpath() could be used as a substitute.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Sep 2022 11:21:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "I wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> These days there seem to be library functions that do this, realpath(3)\n>> and canonicalize_file_name(3). The latter is what seems to be called by\n>> readlink(1). Should we be using one of those?\n\n> Oh! I see realpath() in POSIX, but not canonicalize_file_name().\n> It does look like realpath() would be helpful here, although if\n> it's not present on Windows that's a problem.\n\nAfter some surveying of man pages, I conclude that\n\n(1) realpath() exists on all platforms of interest except Windows,\nwhere it looks like we can use _fullpath() instead.\n\n(2) AIX and Solaris 10 only implement the SUSv2 semantics,\nwhere the caller must supply a buffer that it has no good way\nto determine a safe size for. Annoying.\n\n(3) The Solaris 10 man page has this interesting disclaimer:\n\n The realpath() function might fail to return to the current\n directory if an error occurs.\n\nwhich implies that on that platform it's basically implemented\nin the same way as our current code. Sigh.\n\nI think we can ignore (3) though. Solaris 11 seems to have an\nup-to-speed implementation of realpath(), and 10 will be EOL\nin January 2024 according to Wikipedia.\n\nAs for (2), both systems promise to report EINVAL for a null\npointer, which is also what SUSv2 says. So I think what we\ncan do is approximately\n\n\tptr = realpath(fname, NULL);\n\tif (ptr == NULL && errno == EINVAL)\n\t{\n\t\tptr = pg_malloc(MAXPGPATH);\n\t\tptr = realpath(fname, ptr);\n\t}\n\nand just take it on faith that MAXPGPATH is enough on those\nplatforms.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Sep 2022 15:15:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Here's a draft patch for this. It seems to work on Linux,\nbut the Windows code is just speculation. In particular,\nI did\n\n\tpath = _fullpath(NULL, fname, 0);\n\tif (path == NULL)\n\t\t_dosmaperr(GetLastError());\n\nbut I'm not really sure that the _dosmaperr bit is needed,\nbecause the _fullpath man page I found makes reference to\nsetting \"errno\" [1]. It's likely to be hard to test, because\nmost of the possible error cases should be nigh unreachable\nin our usage; we already know the input is a valid reference\nto an executable file.\n\nBTW, I noticed what seems a flat-out bug in validate_exec:\n\n /* Win32 requires a .exe suffix for stat() */\n- if (strlen(path) >= strlen(\".exe\") &&\n+ if (strlen(path) < strlen(\".exe\") ||\n pg_strcasecmp(path + strlen(path) - strlen(\".exe\"), \".exe\") != 0)\n\nNobody's noticed because none of our executables have base names\nshorter than 4 characters, but it's still a bug.\n\n\t\t\tregards, tom lane\n\n[1] https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/fullpath-wfullpath?view=msvc-170",
"msg_date": "Sat, 03 Sep 2022 22:41:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
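The `validate_exec` bug Tom flags is easy to check in isolation: with the original `&&` form, a path shorter than ".exe" skipped the suffix comparison entirely and was treated as having the suffix. The corrected predicate, as a standalone model (`strcasecmp` stands in for `pg_strcasecmp`):

```c
#include <stdbool.h>
#include <string.h>
#include <strings.h>			/* strcasecmp, standing in for pg_strcasecmp */

/* True when path does NOT end in ".exe" (case-insensitively).
 * The corrected || form also rejects paths shorter than the suffix. */
static bool
lacks_exe_suffix(const char *path)
{
	return strlen(path) < strlen(".exe") ||
		strcasecmp(path + strlen(path) - strlen(".exe"), ".exe") != 0;
}
```

As noted, none of PostgreSQL's executables have base names shorter than four characters, which is why the inverted test never misfired in practice.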
{
"msg_contents": "On 02.09.22 01:39, Tom Lane wrote:\n> find_my_exec() wants to obtain an absolute, symlink-free path\n> to the program's own executable, for what seem to me good\n> reasons.\n\nI still think they are bad reasons, and we should kill all that code. \nJust sayin' ...\n\n\n",
"msg_date": "Mon, 12 Sep 2022 17:28:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 02.09.22 01:39, Tom Lane wrote:\n>> find_my_exec() wants to obtain an absolute, symlink-free path\n>> to the program's own executable, for what seem to me good\n>> reasons.\n\n> I still think they are bad reasons, and we should kill all that code. \n> Just sayin' ...\n\nAre you proposing we give up the support for relocatable installations?\nI'm not here to defend that feature, but I bet somebody will. (And\ndoesn't \"make check\" depend on it?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Sep 2022 11:33:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "On 12.09.22 17:33, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 02.09.22 01:39, Tom Lane wrote:\n>>> find_my_exec() wants to obtain an absolute, symlink-free path\n>>> to the program's own executable, for what seem to me good\n>>> reasons.\n> \n>> I still think they are bad reasons, and we should kill all that code.\n>> Just sayin' ...\n> \n> Are you proposing we give up the support for relocatable installations?\n> I'm not here to defend that feature, but I bet somebody will. (And\n> doesn't \"make check\" depend on it?)\n\nI'm complaining specifically about the resolving of symlinks. Why does\n\n$ /usr/local/opt/postgresql@13/bin/pg_config --bindir\n\nprint\n\n/usr/local/Cellar/postgresql@13/13.8/bin\n\nwhen it clearly should print\n\n/usr/local/opt/postgresql@13/bin\n\nThis is unrelated to the support for relocatable installations, AFAICT.\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 21:48:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 12.09.22 17:33, Tom Lane wrote:\n>> Are you proposing we give up the support for relocatable installations?\n>> I'm not here to defend that feature, but I bet somebody will. (And\n>> doesn't \"make check\" depend on it?)\n\n> I'm complaining specifically about the resolving of symlinks. Why does\n\n> $ /usr/local/opt/postgresql@13/bin/pg_config --bindir\n> print\n> /usr/local/Cellar/postgresql@13/13.8/bin\n> when it clearly should print\n> /usr/local/opt/postgresql@13/bin\n\nI'm not sure about your setup there, but if you mean that\n/usr/local/opt/postgresql@13/bin is a symlink reading more or less\n\"./13.8/bin\", I doubt that failing to canonicalize that is a good idea.\nThe point of finding the bindir is mainly to be able to navigate to its\nsibling directories such as lib/, etc/, share/. There's no certainty\nthat a symlink leading to the bin directory will have sibling symlinks\nto those other directories.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Sep 2022 16:07:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "\nOn 2022-09-12 Mo 16:07, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 12.09.22 17:33, Tom Lane wrote:\n>>> Are you proposing we give up the support for relocatable installations?\n>>> I'm not here to defend that feature, but I bet somebody will. (And\n>>> doesn't \"make check\" depend on it?)\n>> I'm complaining specifically about the resolving of symlinks. Why does\n>> $ /usr/local/opt/postgresql@13/bin/pg_config --bindir\n>> print\n>> /usr/local/Cellar/postgresql@13/13.8/bin\n>> when it clearly should print\n>> /usr/local/opt/postgresql@13/bin\n> I'm not sure about your setup there, but if you mean that\n> /usr/local/opt/postgresql@13/bin is a symlink reading more or less\n> \"./13.8/bin\", I doubt that failing to canonicalize that is a good idea.\n> The point of finding the bindir is mainly to be able to navigate to its\n> sibling directories such as lib/, etc/, share/. There's no certainty\n> that a symlink leading to the bin directory will have sibling symlinks\n> to those other directories.\n>\n> \t\t\t\n\n\nI think the discussion here is a bit tangential to the original topic.\n\nThe point you make is reasonable, but it seems a bit more likely that in\nthe case Peter cites the symlink is one level higher in the tree, in\nwhich case there's probably little value in resolving the symlink. Maybe\nwe could compromise and check if a path exists and only resolve symlinks\nif it does not?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 18:52:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I think the discussion here is a bit tangential to the original topic.\n\nIndeed, because I just wanted to reimplement *how* we resolve the\nexecutable path to absolute, not question whether we should do it at all.\n\n> The point you make is reasonable, but it seems a bit more likely that in\n> the case Peter cites the symlink is one level higher in the tree, in\n> which case there's probably little value in resolving the symlink. Maybe\n> we could compromise and check if a path exists and only resolve symlinks\n> if it does not?\n\nIt's non-negotiable that we apply realpath() or a handmade equivalent\nif the path we find to the executable turns out to be relative, ie\nyou did \"../../postgres/bin/psql\" or the equivalent. In the case of\nthe server, we *will* chdir to someplace else, rendering the original\npath useless. psql might chdir in response to a user command, so it\nlikewise had better resolve the installation location while it can.\n\nWe could maybe skip realpath() if we find what appears to be an\nabsolute path to the executable. However, I think that fails in\ntoo many situations. As an example, if I do\n\tln -s /path/to/psql ~/bin\nand then invoke psql using that symlink, we're not going to be\nable to find any of the installation's supporting files unless\nwe resolve the symlink. 
The executable path we'd deduce after\nexamining PATH is /home/tgl/bin/psql, which is plenty absolute,\nbut it doesn't help us find the rest of the PG installation.\nThat case works today, and I think a lot of people will be\nsad if we break it.\n\nI'm not familiar with how homebrew sets up the installation\nlayout, but I'm suspicious that the situation Peter refers to\nhas a similar problem, only with a symlink for the bin directory\nnot the individual executable.\n\nI think the only potentially-workable alternative design is\nto forget about relocatable installations and insist that the\nsupporting files be found at the installation path designated\nat configure time. But, again, that seems likely to break a\nlot of setups that work today. And I've still not heard a\npositive reason why we should change it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Sep 2022 19:26:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "On 13.09.22 01:26, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I think the discussion here is a bit tangential to the original topic.\n> \n> Indeed, because I just wanted to reimplement *how* we resolve the\n> executable path to absolute, not question whether we should do it at all.\n\nWell, if we decided not to do it, then we could just delete the code and \nnot have to think about how to change it.\n\n> I'm not familiar with how homebrew sets up the installation\n> layout, but I'm suspicious that the situation Peter refers to\n> has a similar problem, only with a symlink for the bin directory\n> not the individual executable.\n\nI think the two contradicting use cases are:\n\n1) You configure and install with prefix=/usr/local/pgsql, and then \nsymlink ~/bin/pg_ctl -> /usr/local/pgsql/bin/pg_ctl; hoping that that \nwill allow pg_ctl to find the other programs it needs in \n/usr/local/pgsql/bin. This is what we currently support.\n\n2) You configure and install with prefix=/usr/local/pgsql-14, and then \nsymlink /usr/local/pgsql -> /usr/local/pgsql-14; hoping that you can \nthen use /usr/local/pgsql as if that's where it actually is. We don't \ncurrently support that. (Note that it would work if you made a copy of \nthe tree instead of using the symlink.)\n\nI don't know if anyone uses #1 or what the details of such use are.\n\n#2 is how Homebrew and some other packaging systems work.\n\n\n\n",
"msg_date": "Tue, 13 Sep 2022 16:37:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> 2) You configure and install with prefix=/usr/local/pgsql-14, and then \n> symlink /usr/local/pgsql -> /usr/local/pgsql-14; hoping that you can \n> then use /usr/local/pgsql as if that's where it actually is. We don't \n> currently support that. (Note that it would work if you made a copy of \n> the tree instead of using the symlink.)\n\nWhat about it does not work?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Sep 2022 11:16:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "On 13.09.22 17:16, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> 2) You configure and install with prefix=/usr/local/pgsql-14, and then\n>> symlink /usr/local/pgsql -> /usr/local/pgsql-14; hoping that you can\n>> then use /usr/local/pgsql as if that's where it actually is. We don't\n>> currently support that. (Note that it would work if you made a copy of\n>> the tree instead of using the symlink.)\n> \n> What about it does not work?\n\nThe problem is if another package or extension uses pg_config to find, \nsay, libdir, includedir, or bindir and integrates it into its own build \nsystem or its own build products. If those directories point to \n/usr/local/pgsql/{bin,include,lib}, then there is no problem. But if \nthey point to /usr/local/pgsql-14.5/{bin,include,lib}, then the next \nminor update will break those other packages.\n\n\n\n",
"msg_date": "Thu, 15 Sep 2022 16:22:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 13.09.22 17:16, Tom Lane wrote:\n>> What about it does not work?\n\n> The problem is if another package or extension uses pg_config to find, \n> say, libdir, includedir, or bindir and integrates it into its own build \n> system or its own build products. If those directories point to \n> /usr/local/pgsql/{bin,include,lib}, then there is no problem. But if \n> they point to /usr/local/pgsql-14.5/{bin,include,lib}, then the next \n> minor update will break those other packages.\n\nThat seems ... a tad far-fetched, and even more to the point,\nit'd be the other package's fault not ours. We have never promised\nthat those directories point to anyplace that's not PG-specific.\nI certainly do not buy that that's a good argument for breaking\nPostgres installation setups that work today.\n\nAlso, there is nothing in that scenario that is in any way dependent\non the use of symlinks, or even absolute paths, so I don't quite\nsee the relevance to the current discussion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Sep 2022 10:43:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "On Sun, Sep 4, 2022 at 2:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's a draft patch for this. It seems to work on Linux,\n> but the Windows code is just speculation. In particular,\n> I did\n>\n> path = _fullpath(NULL, fname, 0);\n> if (path == NULL)\n> _dosmaperr(GetLastError());\n>\n> but I'm not really sure that the _dosmaperr bit is needed,\n> because the _fullpath man page I found makes reference to\n> setting \"errno\" [1]. It's likely to be hard to test, because\n> most of the possible error cases should be nigh unreachable\n> in our usage; we already know the input is a valid reference\n> to an executable file.\n\nI tried lots of crazy stuff[1] to try to get an error out of this\nthing, but came up empty handed. Unlike realpath(), _fullpath()\ndoesn't resolve symlinks (or junctions), so I guess there's less to go\nwrong. It still needs the present working directory, which is a\nper-drive concept on this OS, but even bogus drives don't seem to\nproduce an error (despite what the manual says).\n\nI'd still lean towards assuming errno is set, given that the manual\nreferences errno and not GetLastError(). Typical manual pages\nexplicitly tell you when GetLastError() has the error (example:\nGetFullPathName(), for which this might be intended as a more Unix-y\nwrapper, but even if so there's nothing to say that _fullpath() can't\nset errno directly itself, in which case you might clobber it that\nway).\n\n[1] https://cirrus-ci.com/task/4935917730267136?logs=main\n\n\n",
"msg_date": "Tue, 27 Sep 2022 14:58:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I tried lots of crazy stuff[1] to try to get an error out of this\n> thing, but came up empty handed. Unlike realpath(), _fullpath()\n> doesn't resolve symlinks (or junctions), so I guess there's less to go\n> wrong. It still needs the present working directory, which is a\n> per-drive concept on this OS, but even bogus drives don't seem to\n> produce an error (despite what the manual says).\n\nInteresting.\n\n> I'd still lean towards assuming errno is set, given that the manual\n> references errno and not GetLastError().\n\nAgreed. In the attached, I drop the _dosmaperr() step and instead\njust do \"errno = 0\" before the call. That way, if we ever do manage\nto hit a _fullpath() failure, we can at least tell whether the errno\nthat's reported is real or not.\n\nIn this version I've attempted to resolve Peter's complaint by only\napplying realpath() when the executable path we've obtained is relative\nor has a symlink as the last component. Things will definitely not\nwork right if either of those is true and we make no effort to get\na more trustworthy path. I concede that things will usually work okay\nwithout resolving a symlink that's two or more levels up the path,\nbut I wonder how much we want to trust that. Suppose somebody changes\nsuch a symlink while the server is running --- nothing very good is\nlikely to happen if it suddenly starts consulting some other libdir\nor sharedir. Maybe we need to add a flag telling whether we want\nthis behavior? TBH I think that pg_config is the only place I'd\nbe comfortable with doing it like this. Peter, would your concerns\nbe satisfied if we just made pg_config do it?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 04 Oct 2022 14:07:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "On 15.09.22 16:43, Tom Lane wrote:\n> That seems ... a tad far-fetched, and even more to the point,\n> it'd be the other package's fault not ours. We have never promised\n> that those directories point to anyplace that's not PG-specific.\n> I certainly do not buy that that's a good argument for breaking\n> Postgres installation setups that work today.\n> \n> Also, there is nothing in that scenario that is in any way dependent\n> on the use of symlinks, or even absolute paths, so I don't quite\n> see the relevance to the current discussion.\n\nHere is another variant of the same problem:\n\nI have\n\n$ which meson\n/usr/local/bin/meson\n\nMeson records its own path (somewhere under meson-info/ AFAICT), so it \ncan re-run itself when any of the meson.build files change. But since \nthe above is a symlink, it records its own location as \n\"/usr/local/Cellar/meson/0.63.1/bin/meson\". So now, whenever the meson \npackage updates (even if it's just 0.63.0 -> 0.63.1), my build tree is \nbroken.\n\nTo clarify, this instance is not at all the fault of any code in \nPostgreSQL. But it's another instance where resolving symlinks just \nbecause we can causing problems.\n\n\n\n",
"msg_date": "Wed, 5 Oct 2022 08:23:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> To clarify, this instance is not at all the fault of any code in \n> PostgreSQL. But it's another instance where resolving symlinks just \n> because we can causing problems.\n\n[ shrug... ] *Not* resolving symlinks when we can causes its\nown set of problems, which maybe we don't see very clearly\nbecause we have been doing it like that for a couple of decades.\nI remain pretty hesitant to change this behavior.\n\nWhat did you think of the compromise proposal to change only\nthe paths that pg_config outputs? I've not tried to code that,\nbut I think it should be feasible.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Oct 2022 09:59:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "On 05.10.22 15:59, Tom Lane wrote:\n> What did you think of the compromise proposal to change only\n> the paths that pg_config outputs? I've not tried to code that,\n> but I think it should be feasible.\n\nI don't think I understand what this proposal actually means. What \nwould be the behavior of pg_config and how would it be different from \nbefore?\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:28:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 05.10.22 15:59, Tom Lane wrote:\n>> What did you think of the compromise proposal to change only\n>> the paths that pg_config outputs? I've not tried to code that,\n>> but I think it should be feasible.\n\n> I don't think I understand what this proposal actually means. What \n> would be the behavior of pg_config and how would it be different from \n> before?\n\nWhat I had in mind was:\n\n* server and most frontend programs keep the same behavior, that is\nfully resolve their executable's path to an absolute path (and then\nnavigate to the rest of the installation from there); but now they'll\nuse realpath() to avoid chdir'ing while they do that.\n\n* pg_config applies realpath() if its initial PATH search produces a\nrelative path to the executable, or if the last component of that path\nis a symlink. Otherwise leave it alone, which would have the effect of\nnot expanding directory-level symlinks.\n\nI think that changing pg_config's behavior would be enough to resolve\nthe complaints you listed, but perhaps I'm missing some fine points.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Mar 2023 15:52:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
},
{
"msg_contents": "I wrote:\n> I think that changing pg_config's behavior would be enough to resolve\n> the complaints you listed, but perhaps I'm missing some fine points.\n\nMeanwhile I've gone ahead and pushed my v1 patch (plus Munro's\nrecommendation about _fullpath error handling), so we can see if\nthe buildfarm blows up. The question of whether we can sometimes\nskip replacement of symlinks seems like material for a second patch\nin any case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Mar 2023 18:20:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Can we avoid chdir'ing in resolve_symlinks() ?"
}
]
[
{
"msg_contents": "Previously, the automatically generated names were entirely undocumented. That\nis not a very good state of affairs: although it's possibly inconvenient to\nrigidly specify what it is since it's an implementation detail, these names are\nuser-visible, and it would be good to have documentation at all.\n\n\n\n\n",
"msg_date": "Thu, 1 Sep 2022 17:47:10 -0700",
"msg_from": "jadel@mercury.com",
"msg_from_op": true,
"msg_subject": "[PATCH] docs: Document the automatically generated names for indices"
},
{
"msg_contents": "From: Jade Lovelace <jadel@mercury.com>\n\nI have intentionally been careful to not guarantee that the\nautomatically generated name *is* what's documented, but instead give\nthe general idea.\n\nKnowing the format is useful for people writing migrations/ORM tools\nwhich need to name their unique indices and may wish to use the\nautomatic naming system.\n\nSigned-off-by: Jade Lovelace <jadel@mercury.com>\n---\n doc/src/sgml/ref/create_index.sgml | 9 +++++++--\n doc/src/sgml/ref/create_table.sgml | 30 ++++++++++++++++++++++++++++++\n 2 files changed, 37 insertions(+), 2 deletions(-)\n\ndiff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml\nindex a5bac9f..7354267 100644\n--- a/doc/src/sgml/ref/create_index.sgml\n+++ b/doc/src/sgml/ref/create_index.sgml\n@@ -206,9 +206,14 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] <replaceable class=\n table. The name of the index must be distinct from the name of any\n other relation (table, sequence, index, view, materialized view, or\n foreign table) in that schema.\n+ </para>\n+ <para>\n If the name is omitted, <productname>PostgreSQL</productname> chooses a\n- suitable name based on the parent table's name and the indexed column\n- name(s).\n+ suitable name based on the parent table's name and the indexed\n+ column name(s). 
Generally, teh generated name will be something of the\n+ shape <literal>tablename_columnname_idx</literal>, but this may vary in\n+ the event of long table names, where it is truncated, or if the index\n+ already exists, where a number is appended.\n </para>\n </listitem>\n </varlistentry>\ndiff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml\nindex 6bbf15e..bdabd86 100644\n--- a/doc/src/sgml/ref/create_table.sgml\n+++ b/doc/src/sgml/ref/create_table.sgml\n@@ -796,6 +796,36 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n (Double-quotes are needed to specify constraint names that contain spaces.)\n If a constraint name is not specified, the system generates a name.\n </para>\n+ <para>\n+ Generally, the generated name will be something of the shape\n+ <literal>tablename_columnname_constrainttype</literal>, but this may vary in the\n+ event of long table names, where it is truncated, or if the index\n+ already exists, where a number is appended.\n+ <literal>constrainttype</literal> is one of the following:\n+\n+ <itemizedlist>\n+ <listitem>\n+ <para><literal>pkey</literal> in the case of primary keys</para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <literal>excl</literal> in the case of\n+ <link linkend=\"sql-createtable-exclude\">\n+ <literal>EXCLUDE</literal>\n+ </link> constraints.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <literal>key</literal> in the case of\n+ <link linkend=\"sql-createtrigger\"><firstterm>constraint triggers</firstterm></link>.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para><literal>idx</literal> in other cases.</para>\n+ </listitem>\n+ </itemizedlist>\n+ </para>\n </listitem>\n </varlistentry>\n \n-- \n2.37.1\n\n\n\n",
"msg_date": "Thu, 1 Sep 2022 17:47:11 -0700",
"msg_from": "jadel@mercury.com",
"msg_from_op": true,
"msg_subject": "[PATCH] docs: Document the automatically generated names for indices"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 05:47:11PM -0700, jadel@mercury.com wrote:\n> From: Jade Lovelace <jadel@mercury.com>\n> \n> I have intentionally been careful to not guarantee that the\n> automatically generated name *is* what's documented, but instead give\n> the general idea.\n> \n> Knowing the format is useful for people writing migrations/ORM tools\n> which need to name their unique indices and may wish to use the\n> automatic naming system.\n\nUh, I always that if people didn't want to specify an index name, that\nthey didn't care. I think there are enough concurrency issues listed\nbelow that I don't see the value of documenting this. If they really\ncare, they can look at the source code.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 11:46:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] docs: Document the automatically generated names for\n indices"
}
]
[
{
"msg_contents": "Hi, hackers\n\n\nI've met an assertion failure of logical decoding with below scenario on HEAD.\n\n---\n<preparation>\ncreate table tab1 (val integer);\nselect 'init' from pg_create_logical_replication_slot('regression_slot', 'test_decoding');\n\n<session1>\nbegin;\nsavepoint sp1;\ninsert into tab1 values (1);\n\n<session2>\ncheckpoint; -- for RUNNING_XACT\nselect data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n\n<session1>\ntruncate tab1; -- for NEW_CID\ncommit;\nbegin;\ninsert into tab1 values (3);\n\n<session2>\ncheckpoint;\nselect data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n\n<session1>\ncommit;\n\n<session2>\n\nselect data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n---\n\n\nHere, it's not a must but is advisable to make LOG_SNAPSHOT_INTERVAL_MS bigger so that\nwe can issue RUNNING_XACT according to our checkpoint commands explicitly.\n\nIn the above scenario, the first checkpoint generates RUNNING_XACT after the wal record\n(for ReorderBufferAssignChild) that associates sub transaction with its top transaction.\nThis means that once we restart from RUNNING_XACT, we lose the association between top\ntransaction and sub transaction and then we can't mark the top transaction as catalog\nmodifying transaction by decoding NEW_CID (written after RUNNING_XACT), if the\nsub transaction changes the catalog.\n\nTherefore, this leads to the failure for the assert that can check\nthe consistency that when one sub transaction modifies the catalog,\nits top transaction should be marked so as well.\n\nI feel we need to remember the relationship between top transaction and sub transaction\nin the serialized snapshot even before changing catalog at decoding RUNNING_XACT,\nso that we can keep track of the association after the restart. 
What do you think ?\n\n\nThe stack call of this failure and related information is below.\n\n(gdb) bt\n#0 0x00007f2632588387 in raise () from /lib64/libc.so.6\n#1 0x00007f2632589a78 in abort () from /lib64/libc.so.6\n#2 0x0000000000b3eba1 in ExceptionalCondition (conditionName=0xd137e0 \"!needs_snapshot || needs_timetravel\",\n errorType=0xd130c5 \"FailedAssertion\", fileName=0xd130b9 \"snapbuild.c\", lineNumber=1116) at assert.c:69\n#3 0x0000000000911257 in SnapBuildCommitTxn (builder=0x23f0638, lsn=22386632, xid=728, nsubxacts=1,\n subxacts=0x2bfcc88, xinfo=79) at snapbuild.c:1116\n#4 0x00000000008fa420 in DecodeCommit (ctx=0x23e0108, buf=0x7fff4a1f9220, parsed=0x7fff4a1f9020, xid=728,\n two_phase=false) at decode.c:630\n#5 0x00000000008f9953 in xact_decode (ctx=0x23e0108, buf=0x7fff4a1f9220) at decode.c:216\n#6 0x00000000008f967d in LogicalDecodingProcessRecord (ctx=0x23e0108, record=0x23e04a0) at decode.c:119\n#7 0x0000000000900b63 in pg_logical_slot_get_changes_guts (fcinfo=0x23d80a8, confirm=true, binary=false)\n at logicalfuncs.c:271\n#8 0x0000000000900ca0 in pg_logical_slot_get_changes (fcinfo=0x23d80a8) at logicalfuncs.c:338\n...\n(gdb) frame 3\n#3 0x0000000000911257 in SnapBuildCommitTxn (builder=0x23f0638, lsn=22386632, xid=728, nsubxacts=1,\n subxacts=0x2bfcc88, xinfo=79) at snapbuild.c:1116\n1116 Assert(!needs_snapshot || needs_timetravel);\n(gdb) list\n1111 {\n1112 /* record that we cannot export a general snapshot anymore */\n1113 builder->committed.includes_all_transactions = false;\n1114 }\n1115\n1116 Assert(!needs_snapshot || needs_timetravel);\n1117\n1118 /*\n1119 * Adjust xmax of the snapshot builder, we only do that for committed,\n1120 * catalog modifying, transactions, everything else isn't interesting for\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 00:56:43 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "test_decoding assertion failure for the loss of top-sub transaction\n relationship"
},
{
"msg_contents": "Hi Hackers,\n\n> Therefore, this leads to the failure for the assert that can check\n> the consistency that when one sub transaction modifies the catalog,\n> its top transaction should be marked so as well.\n> \n> I feel we need to remember the relationship between top transaction and sub\n> transaction\n> in the serialized snapshot even before changing catalog at decoding\n> RUNNING_XACT,\n> so that we can keep track of the association after the restart. What do you think ?\n\nPSA patch that fixes the failure.\nThis adds pairs of sub-top transactions to the SnapBuild, and it will be serialized and restored.\nThe pair will be checked when we mark the ReorderBufferTXN as RBTXN_HAS_CATALOG_CHANGES.\n\nThanks to off-list discussion with Osumi-san.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Fri, 2 Sep 2022 01:08:04 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "Good catch, and thanks for the patch!\n\nAt Fri, 2 Sep 2022 01:08:04 +0000, \"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> wrote in \n> PSA patch that fixes the failure.\n> This adds pairs of sub-top transactions to the SnapBuild, and it will be serialized and restored.\n> The pair will be checked when we mark the ReorderBufferTXN as RBTXN_HAS_CATALOG_CHANGES.\n\nA commit record has all subtransaction ids and SnapBuildCommitTxn()\nalready checks if every one has catalog changes before checking the\ntop transaction's catalog changes. So, no need to record top-sub\ntransaction relationship to serialized snapshots. If any of the\nsubtransactions has catalog changes, the commit contains catalog\nchanges.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 02 Sep 2022 13:16:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 6:38 AM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Hi Hackers,\n>\n> > Therefore, this leads to the failure for the assert that can check\n> > the consistency that when one sub transaction modifies the catalog,\n> > its top transaction should be marked so as well.\n> >\n> > I feel we need to remember the relationship between top transaction and sub\n> > transaction\n> > in the serialized snapshot even before changing catalog at decoding\n> > RUNNING_XACT,\n> > so that we can keep track of the association after the restart. What do you think ?\n>\n> PSA patch that fixes the failure.\n> This adds pairs of sub-top transactions to the SnapBuild, and it will be serialized and restored.\n> The pair will be checked when we mark the ReorderBufferTXN as RBTXN_HAS_CATALOG_CHANGES.\n\nIt seems that SnapBuildCommitTxn() is already taking care of adding\nthe top transaction to the committed transaction if any subtransaction\nhas the catalog changes, it has just missed setting the flag so I\nthink just setting the flag like this should be sufficient no?\n\ndiff --git a/src/backend/replication/logical/snapbuild.c\nb/src/backend/replication/logical/snapbuild.c\nindex 1ff2c12..ee3f695 100644\n--- a/src/backend/replication/logical/snapbuild.c\n+++ b/src/backend/replication/logical/snapbuild.c\n@@ -1086,6 +1086,7 @@ SnapBuildCommitTxn(SnapBuild *builder,\nXLogRecPtr lsn, TransactionId xid,\n else if (sub_needs_timetravel)\n {\n /* track toplevel txn as well, subxact alone isn't meaningful */\n+ needs_timetravel = true;\n SnapBuildAddCommittedTxn(builder, xid);\n }\n else if (needs_timetravel)\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Sep 2022 10:59:56 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "At Fri, 2 Sep 2022 10:59:56 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Fri, Sep 2, 2022 at 6:38 AM kuroda.hayato@fujitsu.com\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Hi Hackers,\n> >\n> > > Therefore, this leads to the failure for the assert that can check\n> > > the consistency that when one sub transaction modifies the catalog,\n> > > its top transaction should be marked so as well.\n> > >\n> > > I feel we need to remember the relationship between top transaction and sub\n> > > transaction\n> > > in the serialized snapshot even before changing catalog at decoding\n> > > RUNNING_XACT,\n> > > so that we can keep track of the association after the restart. What do you think ?\n> >\n> > PSA patch that fixes the failure.\n> > This adds pairs of sub-top transactions to the SnapBuild, and it will be serialized and restored.\n> > The pair will be checked when we mark the ReorderBufferTXN as RBTXN_HAS_CATALOG_CHANGES.\n> \n> It seems that SnapBuildCommitTxn() is already taking care of adding\n> the top transaction to the committed transaction if any subtransaction\n> has the catalog changes, it has just missed setting the flag so I\n> think just setting the flag like this should be sufficient no?\n\nOops! That's right.\n\n> diff --git a/src/backend/replication/logical/snapbuild.c\n> b/src/backend/replication/logical/snapbuild.c\n> index 1ff2c12..ee3f695 100644\n> --- a/src/backend/replication/logical/snapbuild.c\n> +++ b/src/backend/replication/logical/snapbuild.c\n> @@ -1086,6 +1086,7 @@ SnapBuildCommitTxn(SnapBuild *builder,\n> XLogRecPtr lsn, TransactionId xid,\n> else if (sub_needs_timetravel)\n> {\n> /* track toplevel txn as well, subxact alone isn't meaningful */\n> + needs_timetravel = true;\n> SnapBuildAddCommittedTxn(builder, xid);\n> }\n> else if (needs_timetravel)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 02 Sep 2022 14:46:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "Dear Horiguchi-san, Dilip,\n\nThank you for replying!\n\n> > It seems that SnapBuildCommitTxn() is already taking care of adding\n> > the top transaction to the committed transaction if any subtransaction\n> > has the catalog changes, it has just missed setting the flag so I\n> > think just setting the flag like this should be sufficient no?\n> \n> Oops! That's right.\n\nBasically I agreed, but I was not sure the message \"found top level transaction...\"\nshould be output or not. It may be useful even if one of sub transactions contains the change.\n\nHow about following?\n\ndiff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c\nindex bf72ad45ec..a630522907 100644\n--- a/src/backend/replication/logical/snapbuild.c\n+++ b/src/backend/replication/logical/snapbuild.c\n@@ -1086,8 +1086,17 @@ SnapBuildCommitTxn(SnapBuild *builder, XLogRecPtr lsn, TransactionId xid,\n }\n }\n \n- /* if top-level modified catalog, it'll need a snapshot */\n- if (SnapBuildXidHasCatalogChanges(builder, xid, xinfo))\n+ /*\n+ * if top-level or one of sub modified catalog, it'll need a snapshot.\n+ *\n+ * Normally the second check is not needed because the relation between\n+ * top-sub transactions is tracked on the ReorderBuffer layer, and the top\n+ * transaction is marked as containing catalog changes if its children are.\n+ * But in some cases the relation may be missed, in which case only the sub\n+ * transaction may be marked as containing catalog changes.\n+ */\n+ if (SnapBuildXidHasCatalogChanges(builder, xid, xinfo)\n+ || sub_needs_timetravel)\n {\n elog(DEBUG2, \"found top level transaction %u, with catalog changes\",\n xid);\n@@ -1095,11 +1104,6 @@ SnapBuildCommitTxn(SnapBuild *builder, XLogRecPtr lsn, TransactionId xid,\n needs_timetravel = true;\n SnapBuildAddCommittedTxn(builder, xid);\n }\n- else if (sub_needs_timetravel)\n- {\n- /* track toplevel txn as well, subxact alone isn't meaningful */\n- 
SnapBuildAddCommittedTxn(builder, xid);\n- }\n else if (needs_timetravel)\n {\n elog(DEBUG2, \"forced transaction %u to do timetravel\", xid);\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 05:54:58 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 11:25 AM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Horiguchi-san, Dilip,\n>\n> Thank you for replying!\n>\n> > > It seems that SnapBuildCommitTxn() is already taking care of adding\n> > > the top transaction to the committed transaction if any subtransaction\n> > > has the catalog changes, it has just missed setting the flag so I\n> > > think just setting the flag like this should be sufficient no?\n> >\n> > Oops! That's right.\n>\n> Basically I agreed, but I was not sure the message \"found top level transaction...\"\n> should be output or not. It may be useful even if one of sub transactions contains the change.\n>\n> How about following?\n>\n> diff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c\n> index bf72ad45ec..a630522907 100644\n> --- a/src/backend/replication/logical/snapbuild.c\n> +++ b/src/backend/replication/logical/snapbuild.c\n> @@ -1086,8 +1086,17 @@ SnapBuildCommitTxn(SnapBuild *builder, XLogRecPtr lsn, TransactionId xid,\n> }\n> }\n>\n> - /* if top-level modified catalog, it'll need a snapshot */\n> - if (SnapBuildXidHasCatalogChanges(builder, xid, xinfo))\n> + /*\n> + * if top-level or one of sub modified catalog, it'll need a snapshot.\n> + *\n> + * Normally the second check is not needed because the relation between\n> + * top-sub transactions is tracked on the ReorderBuffer layer, and the top\n> + * transaction is marked as containing catalog changes if its children are.\n> + * But in some cases the relation may be missed, in which case only the sub\n> + * transaction may be marked as containing catalog changes.\n> + */\n> + if (SnapBuildXidHasCatalogChanges(builder, xid, xinfo)\n> + || sub_needs_timetravel)\n> {\n> elog(DEBUG2, \"found top level transaction %u, with catalog changes\",\n> xid);\n> @@ -1095,11 +1104,6 @@ SnapBuildCommitTxn(SnapBuild *builder, XLogRecPtr lsn, TransactionId xid,\n> needs_timetravel = true;\n> 
SnapBuildAddCommittedTxn(builder, xid);\n> }\n> - else if (sub_needs_timetravel)\n> - {\n> - /* track toplevel txn as well, subxact alone isn't meaningful */\n> - SnapBuildAddCommittedTxn(builder, xid);\n> - }\n> else if (needs_timetravel)\n> {\n> elog(DEBUG2, \"forced transaction %u to do timetravel\", xid);\n\nYeah, I am fine with this as well.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Sep 2022 11:27:23 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "At Fri, 2 Sep 2022 11:27:23 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Fri, Sep 2, 2022 at 11:25 AM kuroda.hayato@fujitsu.com\n> <kuroda.hayato@fujitsu.com> wrote:\n> > How about following?\n> >\n> > diff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c\n> > index bf72ad45ec..a630522907 100644\n> > --- a/src/backend/replication/logical/snapbuild.c\n> > +++ b/src/backend/replication/logical/snapbuild.c\n> > @@ -1086,8 +1086,17 @@ SnapBuildCommitTxn(SnapBuild *builder, XLogRecPtr lsn, TransactionId xid,\n> > }\n> > }\n> >\n> > - /* if top-level modified catalog, it'll need a snapshot */\n> > - if (SnapBuildXidHasCatalogChanges(builder, xid, xinfo))\n> > + /*\n> > + * if top-level or one of sub modified catalog, it'll need a snapshot.\n> > + *\n> > + * Normally the second check is not needed because the relation between\n> > + * top-sub transactions is tracked on the ReorderBuffer layer, and the top\n> > + * transaction is marked as containing catalog changes if its children are.\n> > + * But in some cases the relation may be missed, in which case only the sub\n> > + * transaction may be marked as containing catalog changes.\n> > + */\n> > + if (SnapBuildXidHasCatalogChanges(builder, xid, xinfo)\n> > + || sub_needs_timetravel)\n> > {\n> > elog(DEBUG2, \"found top level transaction %u, with catalog changes\",\n> > xid);\n> > @@ -1095,11 +1104,6 @@ SnapBuildCommitTxn(SnapBuild *builder, XLogRecPtr lsn, TransactionId xid,\n> > needs_timetravel = true;\n> > SnapBuildAddCommittedTxn(builder, xid);\n> > }\n> > - else if (sub_needs_timetravel)\n> > - {\n> > - /* track toplevel txn as well, subxact alone isn't meaningful */\n> > - SnapBuildAddCommittedTxn(builder, xid);\n> > - }\n> > else if (needs_timetravel)\n> > {\n> > elog(DEBUG2, \"forced transaction %u to do timetravel\", xid);\n> \n> Yeah, I am fine with this as well.\n\nI'm basically fine, too. 
But this is a bug that needs back-patching\nback to 10. This change changes the condition for the DEBUG2 message.\nSo we need to add an awkward if() condition for the DEBUG2 message.\nGiven that the messages have different debug levels, I doubt there\nhas ever been a chance they were useful. If we remove the two DEBUGx\nmessages, I'm fine with the change.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 02 Sep 2022 15:54:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 12:25 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 2 Sep 2022 11:27:23 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Fri, Sep 2, 2022 at 11:25 AM kuroda.hayato@fujitsu.com\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > > How about following?\n> > >\n> > > diff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c\n> > > index bf72ad45ec..a630522907 100644\n> > > --- a/src/backend/replication/logical/snapbuild.c\n> > > +++ b/src/backend/replication/logical/snapbuild.c\n> > > @@ -1086,8 +1086,17 @@ SnapBuildCommitTxn(SnapBuild *builder, XLogRecPtr lsn, TransactionId xid,\n> > > }\n> > > }\n> > >\n> > > - /* if top-level modified catalog, it'll need a snapshot */\n> > > - if (SnapBuildXidHasCatalogChanges(builder, xid, xinfo))\n> > > + /*\n> > > + * if top-level or one of sub modified catalog, it'll need a snapshot.\n> > > + *\n> > > + * Normally the second check is not needed because the relation between\n> > > + * top-sub transactions is tracked on the ReorderBuffer layer, and the top\n> > > + * transaction is marked as containing catalog changes if its children are.\n> > > + * But in some cases the relation may be missed, in which case only the sub\n> > > + * transaction may be marked as containing catalog changes.\n> > > + */\n> > > + if (SnapBuildXidHasCatalogChanges(builder, xid, xinfo)\n> > > + || sub_needs_timetravel)\n> > > {\n> > > elog(DEBUG2, \"found top level transaction %u, with catalog changes\",\n> > > xid);\n> > > @@ -1095,11 +1104,6 @@ SnapBuildCommitTxn(SnapBuild *builder, XLogRecPtr lsn, TransactionId xid,\n> > > needs_timetravel = true;\n> > > SnapBuildAddCommittedTxn(builder, xid);\n> > > }\n> > > - else if (sub_needs_timetravel)\n> > > - {\n> > > - /* track toplevel txn as well, subxact alone isn't meaningful */\n> > > - SnapBuildAddCommittedTxn(builder, xid);\n> > > - }\n> > > else if (needs_timetravel)\n> > > {\n> > > 
elog(DEBUG2, \"forced transaction %u to do timetravel\", xid);\n> >\n> > Yeah, I am fine with this as well.\n>\n> I'm basically fine, too. But this is a bug that needs back-patching\n> back to 10.\n>\n\nI have not verified but I think we need to backpatch this till 14\nbecause prior to that in DecodeCommit, we use to set the top-level txn\nas having catalog changes based on the if there are invalidation\nmessages in the commit record. So, in the current scenario shared by\nOsumi-San, before SnapBuildCommitTxn(), the top-level txn will be\nmarked as having catalog changes.\n\n> This change changes the condition for the DEBUG2 message.\n> So we need to add an awkward if() condition for the DEBUG2 message.\n> Looking that the messages have different debug-level, I doubt there\n> have been a chance they are useful. If we remove the two DEBUGx\n> messages, I'm fine with the change.\n>\n\nI think these DEBUG2 messages could be useful, so instead of removing\nthese, I suggest we should follow Dilip's proposed fix and maybe add a\nnew DEBUG2 message on the lines of ((\"forced transaction %u to do\ntimetravel due to one of its subtransaction\", xid) in the else if\n(sub_needs_timetravel) condition if we think that will be useful too\nbut I am fine leaving the addition of new DEBUG2 message.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 2 Sep 2022 15:57:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "> > I'm basically fine, too. But this is a bug that needs back-patching\r\n> > back to 10.\r\n> >\r\n> \r\n> I have not verified but I think we need to backpatch this till 14\r\n> because prior to that in DecodeCommit, we use to set the top-level txn\r\n> as having catalog changes based on the if there are invalidation\r\n> messages in the commit record. So, in the current scenario shared by\r\n> Osumi-San, before SnapBuildCommitTxn(), the top-level txn will be\r\n> marked as having catalog changes.\r\n\r\nI and Osumi-san are now investigating that, so please wait further reports and patches.\r\n\r\n> > This change changes the condition for the DEBUG2 message.\r\n> > So we need to add an awkward if() condition for the DEBUG2 message.\r\n> > Looking that the messages have different debug-level, I doubt there\r\n> > have been a chance they are useful. If we remove the two DEBUGx\r\n> > messages, I'm fine with the change.\r\n> >\r\n> \r\n> I think these DEBUG2 messages could be useful, so instead of removing\r\n> these, I suggest we should follow Dilip's proposed fix and maybe add a\r\n> new DEBUG2 message on the lines of ((\"forced transaction %u to do\r\n> timetravel due to one of its subtransaction\", xid) in the else if\r\n> (sub_needs_timetravel) condition if we think that will be useful too\r\n> but I am fine leaving the addition of new DEBUG2 message.\r\n\r\nI agreed both that DEBUG2 messages are still useful but we should not\r\nchange the condition for output. So I prefer the idea suggested by Amit.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 2 Sep 2022 11:06:17 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 6:26 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n>\n> I've met an assertion failure of logical decoding with below scenario on HEAD.\n>\n> ---\n> <preparation>\n> create table tab1 (val integer);\n> select 'init' from pg_create_logical_replication_slot('regression_slot', 'test_decoding');\n>\n> <session1>\n> begin;\n> savepoint sp1;\n> insert into tab1 values (1);\n>\n> <session2>\n> checkpoint; -- for RUNNING_XACT\n> select data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n>\n> <session1>\n> truncate tab1; -- for NEW_CID\n> commit;\n> begin;\n> insert into tab1 values (3);\n>\n\nBTW, if I just change the truncate statement to \"Analyze tab1\" in your\nentire test then I am getting a different assertion failure:\n\npostgres.exe!ExceptionalCondition(const char * conditionName, const\nchar * errorType, const char * fileName, int lineNumber) Line 70 C\npostgres.exe!AssertTXNLsnOrder(ReorderBuffer * rb) Line 902 C\npostgres.exe!ReorderBufferTXNByXid(ReorderBuffer * rb, unsigned int\nxid, bool create, bool * is_new, unsigned __int64 lsn, bool\ncreate_as_top) Line 681 C\npostgres.exe!ReorderBufferAddNewTupleCids(ReorderBuffer * rb, unsigned\nint xid, unsigned __int64 lsn, RelFileLocator locator, ItemPointerData\ntid, unsigned int cmin, unsigned int cmax, unsigned int combocid) Line\n3188 C\npostgres.exe!SnapBuildProcessNewCid(SnapBuild * builder, unsigned int\nxid, unsigned __int64 lsn, xl_heap_new_cid * xlrec) Line 823 C\npostgres.exe!heap2_decode(LogicalDecodingContext * ctx,\nXLogRecordBuffer * buf) Line 408 C\npostgres.exe!LogicalDecodingProcessRecord(LogicalDecodingContext *\nctx, XLogReaderState * record) Line 119 C\npostgres.exe!pg_logical_slot_get_changes_guts(FunctionCallInfoBaseData\n* fcinfo, bool confirm, bool binary) Line 274 C\npostgres.exe!pg_logical_slot_get_changes(FunctionCallInfoBaseData *\nfcinfo) Line 339 C\n\nThis 
matches the call stack we see intermittently in the BF\n[1][2]. The difference with your scenario is that the Truncate\nstatement generates an additional WAL record (XLOG_STANDBY_LOCK) prior to\nXLOG_HEAP2_NEW_CID.\n\nI think we can fix this in the below ways:\na. Assert(prev_first_lsn <= cur_txn->first_lsn); -- Explain in\ncomments that it is possible when the subtransaction and transaction are\nnot previously logged, as happened in this scenario\nb. track the txn of prev_first_lsn (say as prev_txn) and check: if\nprev_txn's toptxn is the same as cur_txn, or cur_txn's toptxn is the\nsame as prev_txn, then perform the assert mentioned in (a); else, keep\nthe current Assert.\n\nIt seems (b) will be more robust.\n\nThoughts?\n\nNote: I added Sawada-San as sometime back we had an offlist discussion\non this intermittent BF failure but we were not able to reach the\nexact test which can show this failure.\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-08-20%2002%3A45%3A34\n[2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-25%2018%3A50%3A09\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 3 Sep 2022 10:13:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "Dear hackers,\r\n\r\n> I agreed both that DEBUG2 messages are still useful but we should not\r\n> change the condition for output. So I prefer the idea suggested by Amit.\r\n\r\nPSA newer patch, which contains the fix and test.\r\n\r\n> > I have not verified but I think we need to backpatch this till 14\r\n> > because prior to that in DecodeCommit, we use to set the top-level txn\r\n> > as having catalog changes based on the if there are invalidation\r\n> > messages in the commit record. So, in the current scenario shared by\r\n> > Osumi-San, before SnapBuildCommitTxn(), the top-level txn will be\r\n> > marked as having catalog changes.\r\n> \r\n> I and Osumi-san are now investigating that, so please wait further reports and\r\n> patches.\r\n\r\nWe investigated it about older versions, and in some versions *another stack-trace* has been found.\r\n\r\n\r\nAbout PG10-13, indeed, the failure was not occurred.\r\nIn these versions transactions are regarded as\r\nthat have catalog changes when the commit record has XACT_XINFO_HAS_INVALS flag.\r\nThis flag will be set if the transaction has invalidation messages.\r\n\r\nWhen sub transaction changes system catalogs and user commits,\r\nall invalidation messages allocated in sub transaction will be transferred to top transaction.\r\nTherefore both transactions will be marked as containing catalog changes.\r\n\r\n\r\nAbout PG14 and 15, however, another stack-trace has been found.\r\nWhile executing the same workload, we got followings at the same SQL statement;\r\n\r\n```\r\n(gdb) backtrace\r\n#0 0x00007fa78c6dc387 in raise () from /lib64/libc.so.6\r\n#1 0x00007fa78c6dda78 in abort () from /lib64/libc.so.6\r\n#2 0x0000000000b16680 in ExceptionalCondition (conditionName=0xcd3ab0 \"txn->ninvalidations == 0\", errorType=0xcd3284 \"FailedAssertion\", \r\n fileName=0xcd32d0 \"reorderbuffer.c\", lineNumber=2936) at assert.c:69\r\n#3 0x00000000008e9e70 in ReorderBufferForget (rb=0x12b5b10, xid=735, lsn=24125384) 
at reorderbuffer.c:2936\r\n#4 0x00000000008d9493 in DecodeCommit (ctx=0x12a2d20, buf=0x7ffe08b236b0, parsed=0x7ffe08b23510, xid=734, two_phase=false) at decode.c:733\r\n#5 0x00000000008d8962 in DecodeXactOp (ctx=0x12a2d20, buf=0x7ffe08b236b0) at decode.c:279\r\n#6 0x00000000008d85e2 in LogicalDecodingProcessRecord (ctx=0x12a2d20, record=0x12a30e0) at decode.c:142\r\n#7 0x00000000008dfef2 in pg_logical_slot_get_changes_guts (fcinfo=0x129acb0, confirm=true, binary=false) at logicalfuncs.c:296\r\n#8 0x00000000008e002f in pg_logical_slot_get_changes (fcinfo=0x129acb0) at logicalfuncs.c:365\r\n...\r\n(gdb) frame 4\r\n#4 0x00000000008d9493 in DecodeCommit (ctx=0x14cfd20, buf=0x7ffc638b0ca0, parsed=0x7ffc638b0b00, xid=734, two_phase=false) at decode.c:733\r\n733 ReorderBufferForget(ctx->reorder, parsed->subxacts[i], buf->origptr);\r\n(gdb) list\r\n728 */\r\n729 if (DecodeTXNNeedSkip(ctx, buf, parsed->dbId, origin_id))\r\n730 {\r\n731 for (i = 0; i < parsed->nsubxacts; i++)\r\n732 {\r\n733 ReorderBufferForget(ctx->reorder, parsed->subxacts[i], buf->origptr);\r\n734 }\r\n735 ReorderBufferForget(ctx->reorder, xid, buf->origptr);\r\n736 \r\n737 return;\r\n(gdb) frame 3\r\n#3 0x00000000008e9e70 in ReorderBufferForget (rb=0x14e2b10, xid=735, lsn=24125152) at reorderbuffer.c:2936\r\n2936 Assert(txn->ninvalidations == 0);\r\n(gdb) list\r\n2931 */\r\n2932 if (txn->base_snapshot != NULL && txn->ninvalidations > 0)\r\n2933 ReorderBufferImmediateInvalidation(rb, txn->ninvalidations,\r\n2934 txn->invalidations);\r\n2935 else\r\n2936 Assert(txn->ninvalidations == 0);\r\n2937 \r\n2938 /* remove potential on-disk data, and deallocate */\r\n2939 ReorderBufferCleanupTXN(rb, txn);\r\n2940 }\r\n(gdb) print *txn\r\n$1 = {txn_flags = 3, xid = 735, toplevel_xid = 734, gid = 0x0, first_lsn = 24113488, final_lsn = 24125152, end_lsn = 0, toptxn = 0x14ecb98, \r\n restart_decoding_lsn = 24113304, origin_id = 0, origin_lsn = 0, commit_time = 0, base_snapshot = 0x0, base_snapshot_lsn = 0, \r\n 
base_snapshot_node = {prev = 0x14ecc00, next = 0x14e2b28}, snapshot_now = 0x0, command_id = 4294967295, nentries = 5, nentries_mem = 5, \r\n changes = {head = {prev = 0x14eecf8, next = 0x14eeb18}}, tuplecids = {head = {prev = 0x14ecb10, next = 0x14ecb10}}, ntuplecids = 0, \r\n tuplecid_hash = 0x0, toast_hash = 0x0, subtxns = {head = {prev = 0x14ecb38, next = 0x14ecb38}}, nsubtxns = 0, ninvalidations = 3, \r\n invalidations = 0x14e2d28, node = {prev = 0x14ecc68, next = 0x14ecc68}, size = 452, total_size = 452, concurrent_abort = false, \r\n output_plugin_private = 0x0}\r\n```\r\n\r\nIn these versions DecodeCommit() said OK. However, we have met another failure\r\nbecause the ReorderBufferTXN of the sub transaction had invalidation messages but it did not have a base_snapshot.\r\n\r\nI thought that this failure occurred because only the base_snapshot of the sub transaction is transferred via ReorderBufferTransferSnapToParent()\r\nwhen a transaction is assigned as a child, but its invalidation messages are not.\r\n\r\nI was not sure what's the proper way to fix it.\r\nThe solution I thought of at first was transporting all invalidations from sub to top like ReorderBufferTransferSnapToParent(),\r\nbut I do not know its side effects. Moreover, how do we deal with ReorderBufferChange?\r\nShould we transfer them too? If so, how about the ordering of changes?\r\nAn alternative solution was just removing the assertion, but would that be OK?\r\n\r\nWhat do you think? I want to hear your comments and suggestions...\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 6 Sep 2022 07:47:21 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "> I was not sure what's the proper way to fix it.\r\n> The solution I've thought at first was transporting all invalidations from sub to top\r\n> like ReorderBufferTransferSnapToParent(),\r\n> but I do not know its side effect. Moreover, how do we deal with\r\n> ReorderBufferChange?\r\n> Should we transfer them too? If so, how about the ordering of changes?\r\n> Alternative solustion was just remove the assertion, but was it OK?\r\n\r\nPSA the PoC patch for discussion. In this patch only invalidation messages are transported,\r\nchanges hold by subtxn are ignored. This can be passed the reported workload.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 6 Sep 2022 09:16:56 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 1:17 PM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear hackers,\n>\n> > I agreed both that DEBUG2 messages are still useful but we should not\n> > change the condition for output. So I prefer the idea suggested by Amit.\n>\n> PSA newer patch, which contains the fix and test.\n>\n> > > I have not verified but I think we need to backpatch this till 14\n> > > because prior to that in DecodeCommit, we use to set the top-level txn\n> > > as having catalog changes based on the if there are invalidation\n> > > messages in the commit record. So, in the current scenario shared by\n> > > Osumi-San, before SnapBuildCommitTxn(), the top-level txn will be\n> > > marked as having catalog changes.\n> >\n> > I and Osumi-san are now investigating that, so please wait further reports and\n> > patches.\n>\n> We investigated it about older versions, and in some versions *another stack-trace* has been found.\n>\n>\n> About PG10-13, indeed, the failure was not occurred.\n> In these versions transactions are regarded as\n> that have catalog changes when the commit record has XACT_XINFO_HAS_INVALS flag.\n> This flag will be set if the transaction has invalidation messages.\n>\n> When sub transaction changes system catalogs and user commits,\n> all invalidation messages allocated in sub transaction will be transferred to top transaction.\n> Therefore both transactions will be marked as containing catalog changes.\n>\n>\n> About PG14 and 15, however, another stack-trace has been found.\n> While executing the same workload, we got followings at the same SQL statement;\n>\n\nDid you get this new assertion failure after you applied the patch for\nthe first failure? 
Because otherwise, how can you reach it with the\nsame test case?\n\nAbout patch:\nelse if (sub_needs_timetravel)\n {\n- /* track toplevel txn as well, subxact alone isn't meaningful */\n+ elog(DEBUG2, \"forced transaction %u to do timetravel due to one of\nits subtransaction\",\n+ xid);\n+ needs_timetravel = true;\n SnapBuildAddCommittedTxn(builder, xid);\n\nWhy did you remove the above comment? I think it still makes sense to retain it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Sep 2022 15:55:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThanks for giving comments!\r\n\r\n> Did you get this new assertion failure after you applied the patch for\r\n> the first failure? Because otherwise, how can you reach it with the\r\n> same test case?\r\n\r\nThe first failure is occurred only in the HEAD, so I did not applied the first patch\r\nto REL14 and REL15.\r\nThis difference is caused because the commit [Fix catalog lookup...] in REL15(272248a) and older is different\r\nfrom the HEAD one.\r\nIn order versions SnapBuildXidSetCatalogChanges() has been added. In the function\r\na transaction will be marked as containing catalog changes if the transaction is in InitialRunningXacts,\r\nand after that the relation between sub-top transactions is assigned based on the parsed->subxact.\r\nThe marking avoids the first failure, but the assignment triggers new failure.\r\n\r\n\r\n> About patch:\r\n> else if (sub_needs_timetravel)\r\n> {\r\n> - /* track toplevel txn as well, subxact alone isn't meaningful */\r\n> + elog(DEBUG2, \"forced transaction %u to do timetravel due to one of\r\n> its subtransaction\",\r\n> + xid);\r\n> + needs_timetravel = true;\r\n> SnapBuildAddCommittedTxn(builder, xid);\r\n> \r\n> Why did you remove the above comment? I think it still makes sense to retain it.\r\n\r\nFixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 7 Sep 2022 02:06:02 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 11:06 AM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit,\n>\n> Thanks for giving comments!\n>\n> > Did you get this new assertion failure after you applied the patch for\n> > the first failure? Because otherwise, how can you reach it with the\n> > same test case?\n>\n> The first failure is occurred only in the HEAD, so I did not applied the first patch\n> to REL14 and REL15.\n> This difference is caused because the commit [Fix catalog lookup...] in REL15(272248a) and older is different\n> from the HEAD one.\n> In order versions SnapBuildXidSetCatalogChanges() has been added. In the function\n> a transaction will be marked as containing catalog changes if the transaction is in InitialRunningXacts,\n> and after that the relation between sub-top transactions is assigned based on the parsed->subxact.\n> The marking avoids the first failure, but the assignment triggers new failure.\n>\n>\n> > About patch:\n> > else if (sub_needs_timetravel)\n> > {\n> > - /* track toplevel txn as well, subxact alone isn't meaningful */\n> > + elog(DEBUG2, \"forced transaction %u to do timetravel due to one of\n> > its subtransaction\",\n> > + xid);\n> > + needs_timetravel = true;\n> > SnapBuildAddCommittedTxn(builder, xid);\n> >\n> > Why did you remove the above comment? I think it still makes sense to retain it.\n>\n> Fixed.\n\nHere are some review comments for v2 patch:\n\n+# Test that we can force the top transaction to do timetravel when one of sub\n+# transactions needs that. This is necessary when we restart decoding\nfrom RUNNING_XACT\n+# without the wal to associate subtransaction to its top transaction.\n\nI don't think the second sentence is necessary.\n\n---\nThe last decoding\n+# starts from the first checkpoint and NEW_CID of \"s0_truncate\"\ndoesn't mark the top\n+# transaction as catalog modifying transaction. 
In this scenario, the\nenforcement sets\n+# needs_timetravel to true even if the top transaction is regarded as\nthat it does not\n+# have catalog changes and thus the decoding works without a\ncontradition that one\n+# subtransaction needed timetravel while its top transaction didn't.\n\nI don't understand the last sentence, probably it's a long sentence.\n\nHow about the following description?\n\n# Test that we can handle the case where only subtransaction is marked\nas containing\n# catalog changes. The last decoding starts from NEW_CID generated by\n\"s0_truncate\" and\n# marks only the subtransaction as containing catalog changes but we\ndon't create the\n# association between top-level transaction and subtransaction yet.\nWhen decoding the\n# commit record of the top-level transaction, we must force the\ntop-level transaction\n# to do timetravel since one of its subtransactions is marked as\ncontaining catalog changes.\n\n---\n+ elog(DEBUG2, \"forced transaction %u to do timetravel due to one of\nits subtransaction\",\n+ xid);\n+ needs_timetravel = true;\n\nI think \"one of its subtransaction\" should be \"one of its subtransactions\".\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n",
"msg_date": "Wed, 12 Oct 2022 12:29:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "Dear Sawada-san,\n\nThank you for reviewing HEAD patch! PSA v3 patch.\n\n> +# Test that we can force the top transaction to do timetravel when one of sub\n> +# transactions needs that. This is necessary when we restart decoding\n> from RUNNING_XACT\n> +# without the wal to associate subtransaction to its top transaction.\n> \n> I don't think the second sentence is necessary.\n> \n> ---\n> The last decoding\n> +# starts from the first checkpoint and NEW_CID of \"s0_truncate\"\n> doesn't mark the top\n> +# transaction as catalog modifying transaction. In this scenario, the\n> enforcement sets\n> +# needs_timetravel to true even if the top transaction is regarded as\n> that it does not\n> +# have catalog changes and thus the decoding works without a\n> contradition that one\n> +# subtransaction needed timetravel while its top transaction didn't.\n> \n> I don't understand the last sentence, probably it's a long sentence.\n> \n> How about the following description?\n> \n> # Test that we can handle the case where only subtransaction is marked\n> as containing\n> # catalog changes. The last decoding starts from NEW_CID generated by\n> \"s0_truncate\" and\n> # marks only the subtransaction as containing catalog changes but we\n> don't create the\n> # association between top-level transaction and subtransaction yet.\n> When decoding the\n> # commit record of the top-level transaction, we must force the\n> top-level transaction\n> # to do timetravel since one of its subtransactions is marked as\n> containing catalog changes.\n\nSeems good, I replaced all of comments to yours.\n\n> + elog(DEBUG2, \"forced transaction %u to do timetravel due to one of\n> its subtransaction\",\n> + xid);\n> + needs_timetravel = true;\n> \n> I think \"one of its subtransaction\" should be \"one of its subtransactions\".\n\nFixed. \n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 12 Oct 2022 05:56:02 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 11:06 AM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit,\n>\n> Thanks for giving comments!\n>\n> > Did you get this new assertion failure after you applied the patch for\n> > the first failure? Because otherwise, how can you reach it with the\n> > same test case?\n>\n> The first failure is occurred only in the HEAD, so I did not applied the first patch\n> to REL14 and REL15.\n> This difference is caused because the commit [Fix catalog lookup...] in REL15(272248a) and older is different\n> from the HEAD one.\n> In order versions SnapBuildXidSetCatalogChanges() has been added. In the function\n> a transaction will be marked as containing catalog changes if the transaction is in InitialRunningXacts,\n> and after that the relation between sub-top transactions is assigned based on the parsed->subxact.\n> The marking avoids the first failure, but the assignment triggers new failure.\n>\n\nFYI, as I just replied to the related thread[1], the assertion failure\nin REL14 and REL15 can be fixed by the patch proposed there. So I'd\nlike to see how the discussion goes. Regardless of this proposed fix,\nthe patch proposed by Kuroda-san is required for HEAD, REL14, and\nREL15, in order to fix the assertion failure in SnapBuildCommitTxn().\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoA1gV9pfu8hoXpTQBWH8uEMRg_F_MKM%2BU3Sr0HnyH4AUQ%40mail.gmail.com\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 12 Oct 2022 15:10:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "Dear Sawada-san,\r\n\r\n> FYI, as I just replied to the related thread[1], the assertion failure\r\n> in REL14 and REL15 can be fixed by the patch proposed there. So I'd\r\n> like to see how the discussion goes. Regardless of this proposed fix,\r\n> the patch proposed by Kuroda-san is required for HEAD, REL14, and\r\n> REL15, in order to fix the assertion failure in SnapBuildCommitTxn().\r\n\r\nI understood that my patches for REL14 and REL15 might be not needed.\r\nI will check the thread later. Thanks!\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 12 Oct 2022 06:35:32 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 3:35 PM kuroda.hayato@fujitsu.com\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Sawada-san,\n>\n> > FYI, as I just replied to the related thread[1], the assertion failure\n> > in REL14 and REL15 can be fixed by the patch proposed there. So I'd\n> > like to see how the discussion goes. Regardless of this proposed fix,\n> > the patch proposed by Kuroda-san is required for HEAD, REL14, and\n> > REL15, in order to fix the assertion failure in SnapBuildCommitTxn().\n>\n> I understood that my patches for REL14 and REL15 might be not needed.\n\nNo, sorry for confusing you. I meant that even if we agreed with the\npatch I proposed there, your patch is still required to fix the issue.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 12 Oct 2022 16:19:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: test_decoding assertion failure for the loss of top-sub\n transaction relationship"
}
] |
[
{
"msg_contents": "Over on [1], there was a question about why it wasn't possible to\ncreate the following table:\n\nCREATE TABLE foobar(\n id BIGINT NOT NULL PRIMARY KEY,\n baz VARCHAR NULL DEFAULT NULL\n) PARTITION BY HASH(my_func(id));\n\nThe above is disallowed by 2 checks in DefineIndex().\n\n1. If the partitioned key contains an expression we disallow the\naddition of the constraint, per:\n\n/*\n* It may be possible to support UNIQUE constraints when partition\n* keys are expressions, but is it worth it? Give up for now.\n*/\nif (key->partattrs[i] == 0)\n ereport(ERROR,\n\n2. We insist that the primary key / unique constraint contain all of\nthe columns that the partitioned key does.\n\nWe only mention #2 in the docs [2], but we don't mention anything\nabout if the columns can be part of a function call or expression or\nnot, per:\n\n\"Unique constraints (and hence primary keys) on partitioned tables\nmust include all the partition key columns. This limitation exists\nbecause the individual indexes making up the constraint can only\ndirectly enforce uniqueness within their own partitions; therefore,\nthe partition structure itself must guarantee that there are not\nduplicates in different partitions.\"\n\nThe attached attempts to clarify these restrictions more accurately\nbased on the current code's restrictions.\n\nIf there's no objections or suggestions for better wording, I'd like\nto commit the attached.\n\nDavid\n\n\n[1] https://www.postgresql.org/message-id/CAH7vdhNF0EdYZz3GLpgE3RSJLwWLhEk7A_fiKS9dPBT3Dz_3eA@mail.gmail.com\n[2] https://www.postgresql.org/docs/devel/ddl-partitioning.html",
"msg_date": "Fri, 2 Sep 2022 21:44:08 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Clarify restriction on partitioned tables primary key / unique\n indexes"
},
{
    "msg_contents": "\nOn 02-09-2022 at 11:44, David Rowley wrote:\n> Over on [1], there was a question about why it wasn't possible to\n> create the following table:\n> \n> CREATE TABLE foobar(\n> id BIGINT NOT NULL PRIMARY KEY,\n> baz VARCHAR NULL DEFAULT NULL\n> ) PARTITION BY HASH(my_func(id));\n> \n> \n> The attached attempts to clarify these restrictions more accurately\n> based on the current code's restrictions.\n> \n> If there's no objections or suggestions for better wording, I'd like\n> to commit the attached.\n\nMinimal changes:\n\n'To create a unique or primary key constraints on partitioned table'\n\nshould be\n\n'To create unique or primary key constraints on partitioned tables'\n\n\nErik\n\n\n",
"msg_date": "Fri, 2 Sep 2022 12:01:09 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Clarify restriction on partitioned tables primary key / unique\n indexes"
},
{
"msg_contents": "On Fri, 2 Sept 2022 at 22:01, Erik Rijkers <er@xs4all.nl> wrote:\n> Minimal changes:\n>\n> 'To create a unique or primary key constraints on partitioned table'\n>\n> should be\n>\n> 'To create unique or primary key constraints on partitioned tables'\n\nThanks. I ended up adjusting it to:\n\n\"To create a unique or primary key constraint on a partitioned table,\"\n\nDavid",
"msg_date": "Fri, 2 Sep 2022 22:06:48 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clarify restriction on partitioned tables primary key / unique\n indexes"
},
{
"msg_contents": "On Fri, 2 Sept 2022 at 22:06, David Rowley <dgrowleyml@gmail.com> wrote:\n> Thanks. I ended up adjusting it to:\n>\n> \"To create a unique or primary key constraint on a partitioned table,\"\n\nand pushed.\n\nThanks for having a look at this Erik.\n\nDavid\n\n\n",
"msg_date": "Mon, 5 Sep 2022 18:46:48 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clarify restriction on partitioned tables primary key / unique\n indexes"
}
] |
[
{
"msg_contents": "Commit 4232c4b40 introduced userspace access vector cache in sepgsql, and\nremoved all callers of sepgsql_check_perms. Searching the usual repos for\nusage in 3rd party code comes up blank. Is there any reason not to remove it\nas per the attached?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Fri, 2 Sep 2022 11:56:31 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Remove dead code from sepgsql"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 5:56 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Commit 4232c4b40 introduced userspace access vector cache in sepgsql, and\n> removed all callers of sepgsql_check_perms. Searching the usual repos for\n> usage in 3rd party code comes up blank. Is there any reason not to remove it\n> as per the attached?\n\nNot to my knowledge.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Sep 2022 11:55:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove dead code from sepgsql"
},
{
"msg_contents": "> On 2 Sep 2022, at 17:55, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Sep 2, 2022 at 5:56 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Commit 4232c4b40 introduced userspace access vector cache in sepgsql, and\n>> removed all callers of sepgsql_check_perms. Searching the usual repos for\n>> usage in 3rd party code comes up blank. Is there any reason not to remove it\n>> as per the attached?\n> \n> Not to my knowledge.\n\nThanks for confirming, done that way.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 20:52:49 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Remove dead code from sepgsql"
}
] |
[
{
"msg_contents": "In funcs.sgml, the value fed into jsonb_path_exists_tz was wrong; fixed \nas attached.\n(was inadvertently reverted with the big JSON revert)\n\nErik Rijkers",
"msg_date": "Fri, 2 Sep 2022 16:25:38 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "json docs fix jsonb_path_exists_tz again"
},
{
"msg_contents": "On Fri, Sep 02, 2022 at 04:25:38PM +0200, Erik Rijkers wrote:\n> In funcs.sgml, the value fed into jsonb_path_exists_tz was wrong; fixed as\n> attached.\n>\n> (was inadvertently reverted with the big JSON revert)\n\nYeah, good catch. This comes from 2f2b18b. There is a second\ninconsistency with jsonb_set_lax(). I'll go fix both.\n--\nMichael",
"msg_date": "Sat, 3 Sep 2022 09:59:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: json docs fix jsonb_path_exists_tz again"
},
{
"msg_contents": "\nOn 2022-09-02 Fr 20:59, Michael Paquier wrote:\n> On Fri, Sep 02, 2022 at 04:25:38PM +0200, Erik Rijkers wrote:\n>> In funcs.sgml, the value fed into jsonb_path_exists_tz was wrong; fixed as\n>> attached.\n>>\n>> (was inadvertently reverted with the big JSON revert)\n> Yeah, good catch. This comes from 2f2b18b. There is a second\n> inconsistency with jsonb_set_lax(). I'll go fix both.\n\n\nThanks for fixing, you beat me to it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 3 Sep 2022 10:01:25 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: json docs fix jsonb_path_exists_tz again"
},
{
"msg_contents": "On Sat, Sep 03, 2022 at 10:01:25AM -0400, Andrew Dunstan wrote:\n> Thanks for fixing, you beat me to it.\n\nNo problem, I was just passing by :)\n--\nMichael",
"msg_date": "Sun, 4 Sep 2022 14:40:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: json docs fix jsonb_path_exists_tz again"
}
] |
[
{
"msg_contents": "Hi,\n\nWe have a number of places in the system where we are using\nobject-oriented design patterns. For example, a foreign data wrapper\nreturns a table of function pointers which are basically methods for\noperating on a planner or executor node that corresponds to a foreign\ntable that uses that foreign data wrapper. More simply, a\nTupleTableSlot or TableAmRoutine or bbstreamer or bbsink object\ncontains a pointer to a table of callbacks which are methods that can\nbe applied to that object. walmethods.c/h also try to do something\nsort of like this, but I find the way that they do it really weird,\nbecause while Create{Directory|Tar}WalMethod() does return a table of\ncallbacks, those callbacks aren't tied to any specific object.\nInstead, each set of callbacks refers to the one and only object of\nthat type that can ever exist, and the pointer to that object is\nstored in a global variable managed by walmethods.c. So whereas in\nother cases we give you the object and then a way to get the\ncorresponding set of callbacks, here we only give you the callbacks,\nand we therefore have to impose the artificial restriction that there\ncan only ever be one object.\n\nI think it would be better to structure things so that Walfile and\nWalWriteMethod function as abstract base classes; that is, each is a\nstruct containing those members that are common to all\nimplementations, and then each implementation extends that struct with\nwhatever additional members it needs. One advantage of this is that it\nwould allow us to simplify the communication between receivelog.c and\nwalmethods.c. Right now, for example, there's a get_current_pos()\nmethod in WalWriteMethods. 
The way that works is that\nWalDirectoryMethod has a struct where it stores a 'currpos' value that\nis returned by this method, and WalTarMethod has a different struct\nthat also stores a 'currpos' value that is returned by this method.\nThere is no real benefit in having the same variable in two different\nstructs and having to access it via a callback when we could just put\nit into a common struct and access it directly. There's also a\ncompression_algorithm() method which has exactly the same issue,\nthough that is an overall property of the WalWriteMethod rather than a\nper-Walfile property. There's also a getlasterr callback which is\nbasically just duplicate code across the two implementations; we could\nunify that code. There's also a global variable current_walfile_name[]\nin receivelog.c which only needs to exist because the file name is\ninconveniently hidden inside the WalWriteMethod abstraction layer; we\ncan just make it visible.\n\nAttached are a couple of hastily-written patches implementing this.\nThere might be good arguments for more thoroughly renaming some of the\nthings these patches touch, but I thought that doing any more renaming\nwould make it less clear what the core of the change is, so I'm\nposting it like this for now. One thing I noticed while writing these\npatches is that the existing code isn't very clear about whether\n\"Walfile\" is supposed to be an abstraction for a pointer to the\nimplementation-specific struct, or the struct itself. From looking at\nwalmethods.h, you'd think it's a pointer to the struct, because we\ndeclare typedef void *Walfile. walmethods.c agrees, but receivelog.c\ntakes a different view, declaring all of its variables as type\n\"Walfile *\". 
This doesn't cause a compiler error because void * is\njust as interchangeable with void ** as it is with DirectoryMethodFile\n* or TarMethodFile *, but I think it is clearly a mistake, and the\napproach I'm proposing here makes such mistakes more difficult to\nmake.\n\nAside from the stuff that I am complaining about here which is mostly\nstylistic, I think that the division of labor between receivelog.c and\nwalmethods.c is questionable in a number of ways. There are things\nwhich are specific to one walmethod or the other that are handled in\nthe common code (receivelog.c) rather than the type-specific code\n(walmethod.c), and in general it feels like receivelog.c knows way too\nmuch about what is really happening beneath the abstraction layer that\nwalmethods.c supposedly creates. This comment is one of the clearer\nexamples of this:\n\n /*\n * When streaming to files, if an existing file exists we verify that it's\n * either empty (just created), or a complete WalSegSz segment (in which\n * case it has been created and padded). Anything else indicates a corrupt\n * file. Compressed files have no need for padding, so just ignore this\n * case.\n *\n * When streaming to tar, no file with this name will exist before, so we\n * never have to verify a size.\n */\n\nThere's nothing generic here. We're not describing an algorithm that\ncould be used with any walmethod that might exist now or in the\nfuture. We're describing something that will produce the right result\ngiven the two walmethods we actually have and the actual behavior of\nthe callbacks of each one. I don't really know what to do about this\npart of the problem; these pieces of code are deeply intertwined in\ncomplex ways that don't seem simple to untangle. Maybe I'll have a\nbetter idea later, or perhaps someone else will. 
For now, I'd like to\nget some thoughts on the attached refactoring patches that deal with\nsome more superficial aspects of the problem.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 2 Sep 2022 11:52:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "walmethods.c/h are doing some strange things"
},
{
"msg_contents": "On 02.09.22 17:52, Robert Haas wrote:\n> Attached are a couple of hastily-written patches implementing this.\n> There might be good arguments for more thoroughly renaming some of the\n> things these patches touch, but I thought that doing any more renaming\n> would make it less clear what the core of the change is, so I'm\n> posting it like this for now. One thing I noticed while writing these\n> patches is that the existing code isn't very clear about whether\n> \"Walfile\" is supposed to be an abstraction for a pointer to the\n> implementation-specific struct, or the struct itself. From looking at\n> walmethods.h, you'd think it's a pointer to the struct, because we\n> declare typedef void *Walfile. walmethods.c agrees, but receivelog.c\n> takes a different view, declaring all of its variables as type\n> \"Walfile *\". This doesn't cause a compiler error because void * is\n> just as interchangeable with void ** as it is with DirectoryMethodFile\n> * or TarMethodFile *, but I think it is clearly a mistake, and the\n> approach I'm proposing here makes such mistakes more difficult to\n> make.\n\nThis direction does make sense IMO.\n\n\n",
"msg_date": "Mon, 12 Sep 2022 17:24:32 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: walmethods.c/h are doing some strange things"
},
{
"msg_contents": "From: Robert Haas <robertmhaas@gmail.com>\nDate: Friday, 2 September 2022 at 9:23 PM\nTo: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nSubject: walmethods.c/h are doing some strange things\nHi,\n\nWe have a number of places in the system where we are using\nobject-oriented design patterns. For example, a foreign data wrapper\nreturns a table of function pointers which are basically methods for\noperating on a planner or executor node that corresponds to a foreign\ntable that uses that foreign data wrapper. More simply, a\nTupleTableSlot or TableAmRoutine or bbstreamer or bbsink object\ncontains a pointer to a table of callbacks which are methods that can\nbe applied to that object. walmethods.c/h also try to do something\nsort of like this, but I find the way that they do it really weird,\nbecause while Create{Directory|Tar}WalMethod() does return a table of\ncallbacks, those callbacks aren't tied to any specific object.\nInstead, each set of callbacks refers to the one and only object of\nthat type that can ever exist, and the pointer to that object is\nstored in a global variable managed by walmethods.c. So whereas in\nother cases we give you the object and then a way to get the\ncorresponding set of callbacks, here we only give you the callbacks,\nand we therefore have to impose the artificial restriction that there\ncan only ever be one object.\n\nI think it would be better to structure things so that Walfile and\nWalWriteMethod function as abstract base classes; that is, each is a\nstruct containing those members that are common to all\nimplementations, and then each implementation extends that struct with\nwhatever additional members it needs. One advantage of this is that it\nwould allow us to simplify the communication between receivelog.c and\nwalmethods.c. Right now, for example, there's a get_current_pos()\nmethod in WalWriteMethods. 
The way that works is that\nWalDirectoryMethod has a struct where it stores a 'curpos' value that\nis returned by this method, and WalTrMethod has a different struct\nthat also stores a 'currpos' value that is returned by this method.\nThere is no real benefit in having the same variable in two different\nstructs and having to access it via a callback when we could just put\nit into a common struct and access it directly. There's also a\ncompression_algorithm() method which has exactly the same issue,\nthough that is an overall property of the WalWriteMethod rather than a\nper-Walfile property. There's also a getlasterr callback which is\nbasically just duplicate code across the two implementations; we could\nunify that code. There's also a global variable current_walfile_name[]\nin receivelog.c which only needs to exist because the file name is\ninconveniently hidden inside the WalWriteMethod abstraction layer; we\ncan just make it visible.\n\n\nAttached are a couple of hastily-written patches implementing this.\nThere might be good arguments for more thoroughly renaming some of the\nthings these patches touch, but I thought that doing any more renaming\nwould make it less clear what the core of the change is, so I'm\nposting it like this for now. One thing I noticed while writing these\npatches is that the existing code isn't very clear about whether\n\"Walfile\" is supposed to be an abstraction for a pointer to the\nimplementation-specific struct, or the struct itself. From looking at\nwalmethods.h, you'd think it's a pointer to the struct, because we\ndeclare typedef void *Walfile. walmethods.c agrees, but receivelog.c\ntakes a different view, declaring all of its variables as type\n\"Walfile *\". 
This doesn't cause a compiler error because void * is\njust as interchangeable with void ** as it is with DirectoryMethodFile\n* or TarMethodFile *, but I think it is clearly a mistake, and the\napproach I'm proposing here makes such mistakes more difficult to\nmake.\n\n+ 1 on being able to restrict making such mistakes. I had a quick look at the patch\nand the refactoring makes sense.\n\nThanks,\nSravan Kumar\nwww.enterprisedb.com\n\n\nAside from the stuff that I am complaining about here which is mostly\nstylistic, I think that the division of labor between receivelog.c and\nwalmethods.c is questionable in a number of ways. There are things\nwhich are specific to one walmethod or the other that are handled in\nthe common code (receivelog.c) rather than the type-specific code\n(walmethod.c), and in general it feels like receivelog.c knows way too\nmuch about what is really happening beneath the abstraction layer that\nwalmethods.c supposedly creates. This comment is one of the clearer\nexamples of this:\n\n /*\n * When streaming to files, if an existing file exists we verify that it's\n * either empty (just created), or a complete WalSegSz segment (in which\n * case it has been created and padded). Anything else indicates a corrupt\n * file. Compressed files have no need for padding, so just ignore this\n * case.\n *\n * When streaming to tar, no file with this name will exist before, so we\n * never have to verify a size.\n */\n\nThere's nothing generic here. We're not describing an algorithm that\ncould be used with any walmethod that might exist now or in the\nfuture. We're describing something that will produce the right result\ngiven the two walmethods we actually have and the actual behavior of\nthe callbacks of each one. I don't really know what to do about this\npart of the problem; these pieces of code are deeply intertwined in\ncomplex ways that don't seem simple to untangle. Maybe I'll have a\nbetter idea later, or perhaps someone else will. 
For now, I'd like to\nget some thoughts on the attached refactoring patches that deal with\nsome more superficial aspects of the problem.\n\nThanks,\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 15 Sep 2022 08:52:33 +0000",
"msg_from": "velagandula sravan kumar <sravan_velag@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: walmethods.c/h are doing some strange things"
},
{
"msg_contents": "At Fri, 2 Sep 2022 11:52:38 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> that type that can ever exist, and the pointer to that object is\n> stored in a global variable managed by walmethods.c. So whereas in\n> other cases we give you the object and then a way to get the\n> corresponding set of callbacks, here we only give you the callbacks,\n> and we therefore have to impose the artificial restriction that there\n> can only ever be one object.\n\nMakes sense to me.\n\n> There is no real benefit in having the same variable in two different\n> structs and having to access it via a callback when we could just put\n> it into a common struct and access it directly. There's also a\n> compression_algorithm() method which has exactly the same issue,\n..\n> though that is an overall property of the WalWriteMethod rather than a\n> per-Walfile property. There's also a getlasterr callback which is\n> basically just duplicate code across the two implementations; we could\n> unify that code. There's also a global variable current_walfile_name[]\n> in receivelog.c which only needs to exist because the file name is\n> inconveniently hidden inside the WalWriteMethod abstraction layer; we\n> can just make it visible.\n\nSounds sensible.\n\n> Attached are a couple of hastily-written patches implementing this.\n\n> patches is that the existing code isn't very clear about whether\n> \"Walfile\" is supposed to be an abstraction for a pointer to the\n> implementation-specific struct, or the struct itself. From looking at\n> walmethods.h, you'd think it's a pointer to the struct, because we\n> declare typedef void *Walfile. walmethods.c agrees, but receivelog.c\n> takes a different view, declaring all of its variables as type\n> \"Walfile *\". 
This doesn't cause a compiler error because void * is\n> just as interchangeable with void ** as it is with DirectoryMethodFile\n> * or TarMethodFile *, but I think it is clearly a mistake, and the\n> approach I'm proposing here makes such mistakes more difficult to\n> make.\n\n+1. I remember I thought the same thing when I was faced with the\ncode before.\n\n> Aside from the stuff that I am complaining about here which is mostly\n> stylistic, I think that the division of labor between receivelog.c and\n> walmethods.c is questionable in a number of ways. There are things\n> which are specific to one walmethod or the other that are handled in\n> the common code (receivelog.c) rather than the type-specific code\n> (walmethod.c), and in general it feels like receivelog.c knows way too\n> much about what is really happening beneath the abstraction layer that\n> walmethods.c supposedly creates. This comment is one of the clearer\n> examples of this:\n> \n> /*\n> * When streaming to files, if an existing file exists we verify that it's\n> * either empty (just created), or a complete WalSegSz segment (in which\n> * case it has been created and padded). Anything else indicates a corrupt\n> * file. Compressed files have no need for padding, so just ignore this\n> * case.\n> *\n> * When streaming to tar, no file with this name will exist before, so we\n> * never have to verify a size.\n> */\n> \n> There's nothing generic here. We're not describing an algorithm that\n> could be used with any walmethod that might exist now or in the\n> future. We're describing something that will produce the right result\n> given the two walmethods we actually have and the actual behavior of\n> the callbacks of each one. I don't really know what to do about this\n> part of the problem; these pieces of code are deeply intertwined in\n> complex ways that don't seem simple to untangle. Maybe I'll have a\n\nI agree to the view. That part seems to be a part of\nopen_for_write()'s body functions. 
But, I'm not sure how we untangle\nthem at a glance, too. In the first place, I'm not sure why we need\nto do that despite the file going to be overwritten from the\nbeginning, though..\n\n> better idea later, or perhaps someone else will. For now, I'd like to\n> get some thoughts on the attached refactoring patches that deal with\n> some more superficial aspects of the problem.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 16 Sep 2022 10:39:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: walmethods.c/h are doing some strange things"
},
{
"msg_contents": "On Thu, Sep 15, 2022 at 9:39 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Fri, 2 Sep 2022 11:52:38 -0400, Robert Haas <robertmhaas@gmail.com> wrote in\n> > that type that can ever exist, and the pointer to that object is\n> > stored in a global variable managed by walmethods.c. So whereas in\n> > other cases we give you the object and then a way to get the\n> > corresponding set of callbacks, here we only give you the callbacks,\n> > and we therefore have to impose the artificial restriction that there\n> > can only ever be one object.\n>\n> Makes sense to me.\n\nOK, I have committed the patches. Before doing that, I fixed a couple\nof bugs in the first one, and did a little bit of rebasing of the\nsecond one.\n\n> I agree to the view. That part seems to be a part of\n> open_for_write()'s body functions. But, I'm not sure how we untangle\n> them at a glance, too. In the first place, I'm not sure why we need\n> to do that despite the file going to be overwritten from the\n> beginning, though..\n\nI suspect that we're going to have to untangle things bit by bit.\n\nOne place where I think we might be able to improve things is with the\nhandling of compression suffixes (e.g. .gz, .lz4) and temp suffixes\n(e.g. .partial, .tmp). At present, responsibility for adding these\nsuffixes to pathnames is spread across receivelog.c and walmethods.c\nin a way that, to me, looks pretty random. It's not exactly clear to\nme what the best design is here right now, but I think either (A)\nreceivelog.c should take full responsibility for computing the exact\nfilename and walmethods.c should just blindly write the data into the\nexact filename it's given, or else (B) receivelog.c should take no\nresponsibility for pathname construction and the fact that there is\npathname munging happening should be hidden inside walmethods.c. 
Right\nnow, walmethods.c is doing pathname munging, but the munging is\nvisible from receivelog.c, so the responsibility is spread across the\ntwo files rather than being the sole responsibility of either.\n\nWhat's also a little bit aggravating about this is that it doesn't\nfeel like we're accomplishing all that much code reuse here.\nwalmethods.c and receivelog.c are shared only between pg_basebackup\nand pg_receivewal, but the tar method is used only by pg_basebackup.\nThe directory mode is shared, but actually the two programs need a\nbunch of different things. pg_receivewal needs a bunch of logic\nto deal with the possibility that it is receiving data into an existing\ndirectory and that this directory might at any time start to be used\nto feed a standby, or used for PITR. That's not an issue for\npg_basebackup: if it fails, the whole directory will be removed, so\nthere is no need to worry about fsyncs or padding or overwriting\nexisting files. On the other hand, pg_basebackup needs a bunch of\nlogic to create a .done file for each WAL segment which is not\nrequired in the case of pg_receivewal. It feels like we've got as much\nconditional logic as we do common logic...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Sep 2022 14:02:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: walmethods.c/h are doing some strange things"
}
] |
[
{
"msg_contents": "Per the meson thread, we also need to have a conversation about\nwhat's the oldest bison and flex versions still worth supporting.\nThe ones that had been our reference points are no longer\nrepresented in the buildfarm, and it seems not likely to be\nworth resurrecting copies of those.\n\nAs fodder for discussion, here's a scraping of the currently-tested\nversions. (This counts only animals running configure, ie not MSVC.\nAlso, my query looked back a few months, so some recently-dead\nanimals are still here.)\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 02 Sep 2022 15:08:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Minimum bison and flex versions"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-02 15:08:01 -0400, Tom Lane wrote:\n> As fodder for discussion, here's a scraping of the currently-tested\n> versions. (This counts only animals running configure, ie not MSVC.\n> Also, my query looked back a few months, so some recently-dead\n> animals are still here.)\n\nIf we also count older branches, there's a few alive cases of old bison:\n\n REL_10_STABLE | {2,4,1} | 2022-09-01 12:30:05 | {castoroides,frogmouth}\n REL_10_STABLE | {2,4,2} | 2022-09-01 08:30:15 | {brolga}\n\nAll the other animals using bison < 3.0.2 are dead. There's two live animals\nusing 3.0.2:\n HEAD | {3,0,2} | 2022-09-01 19:23:26 | {chipmunk,topminnow}\n\nSo it looks like we could trivialy go to 3.0 as the minimum. The number of\nanimals using some version of 3.0 is quite large:\n\nchipmunk,topminnow,parula,perch,buri,cotinga,tern,elasmobranch,trilobite,urocryon,vulpes,avocet,wobbegong,ayu,batfish,cavefish,bichir,grison,hippopotamus,hornet,hoverfly,chimaera,jay,chub,clam,blossomcrown,lorikeet,mandrill,mantid,massasauga,cuon,curculio,demoiselle,quokka,rhinoceros,mussurana,dhole,butterflyfish,snakefly,spurfowl,sungazer,tadarida,bonito\n\nSo I don't think we could easily go to something newer.\n\nThere's nothing in 3.1-3.5 release notes [1] that looks particularly helpful\nfor us to require on a quick glance. 2.6 would be nice to have as noted\ne.g. in\nhttps://postgr.es/m/CAFBsxsEospoUX%3DQYkfC%3DWcJqNB%2BiZtBf%3DBaRwn-zbHa48X0NKQ%40mail.gmail.com\nbut as noted in Tom's followup, apple still ships 2.3.\n\n2.3 is the last bison version using GPLv2, so it's unlikely that apple will\never update. Given that I'm not sure how much we should feel beholden to\nsupport that, given that we'll eventually have to bite the bullet.\n\n\nFor flex, the minimum after prariedog's demise seems to be 2.5.35, with a\ndecent population. Skimming the release notes [2] between 2.5.31 and 2.5.35\ndoesn't show anything particularly interesting. 
But given that we don't have\ncoverage and that 2.5.35 was released in 2008, it seems we could just update\nour requirements so that we have test coverage?\n\nGreetings,\n\nAndres Freund\n\n[1] https://savannah.gnu.org/news/?group_id=56\n[2] https://github.com/westes/flex/blob/master/NEWS\n\n\n",
"msg_date": "Fri, 2 Sep 2022 13:12:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Minimum bison and flex versions"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-02 15:08:01 -0400, Tom Lane wrote:\n>> As fodder for discussion, here's a scraping of the currently-tested\n>> versions. (This counts only animals running configure, ie not MSVC.\n>> Also, my query looked back a few months, so some recently-dead\n>> animals are still here.)\n\n> All the other animals using bison < 3.0.2 are dead.\n\nUh, what?\n\n longfin | 2022-09-02 16:09:42 | configure: using bison (GNU Bison) 2.3\n sifaka | 2022-09-02 16:02:05 | configure: using bison (GNU Bison) 2.3\n frogfish | 2022-08-21 17:59:26 | configure: using bison (GNU Bison) 2.5\n lapwing | 2022-09-02 16:40:12 | configure: using bison (GNU Bison) 2.5\n skate | 2022-09-02 07:27:10 | configure: using bison (GNU Bison) 2.5\n snapper | 2022-09-02 13:38:22 | configure: using bison (GNU Bison) 2.5\n prion | 2022-09-02 16:03:16 | configure: using bison (GNU Bison) 2.7\n shelduck | 2022-09-02 06:42:13 | configure: using bison (GNU Bison) 2.7\n\nI'm not sure why frogfish hasn't reported in for a few days, but these\nothers are certainly still live.\n\nlongfin and sifaka are using the Apple-provided copy of bison.\nIf we move the minimum version above 2.3, that will cause some\npain for all Mac-based developers. Maybe not much, because most\nprobably can get it from homebrew or macports, but some.\n\n> 2.3 is the last bison version using GPLv2, so it's unlikely that apple will\n> ever update. Given that I'm not sure how much we should feel beholden to\n> support that, given that we'll eventually have to bite the bullet.\n\nSeeing that they're au courant on flex (2.6.4), it certainly looks like\na license problem rather than that they just don't care about these tools\nat all. Nonetheless, I want to see a pretty solid benefit from breaking\ncompatibility with 2.3, and I'm not convinced we're there yet.\n\n> For flex, the minimum after prariedog's demise seems to be 2.5.35, with a\n> decent population. 
Skimming the release notes [2] between 2.5.31 and 2.5.35\n> doesn't show anything particularly interesting. But given that we don't have\n> coverage and that 2.5.35 was released in 2008, it seems we could just update\n> our requirements so that we have test coverage?\n\nYeah, I think setting the minimum to 2.5.35 is a no-brainer there.\n(If memory serves, the only major difference between 2.5.33 and 2.5.35\nwas a security fix that LTS distros cherry-picked without changing\ntheir version numbers, so that anything claiming to be 2.5.33 today\nis probably effectively 2.5.35 anyway.) There aren't a huge number\nof animals still on 2.5.35:\n\n frogfish | 2022-08-21 17:59:26 | configure: using flex 2.5.35\n hoverfly | 2022-09-02 16:02:01 | configure: using flex 2.5.35\n lapwing | 2022-09-02 16:40:12 | configure: using flex 2.5.35\n skate | 2022-09-02 07:27:10 | configure: using flex 2.5.35\n snapper | 2022-09-02 13:38:22 | configure: using flex 2.5.35\n\nbut on the other hand I don't know that we'd gain anything by making\nthem update.\n\nI'd be content for now to set the minimums at 2.3 and 2.5.35.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Sep 2022 16:29:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Minimum bison and flex versions"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-02 16:29:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-09-02 15:08:01 -0400, Tom Lane wrote:\n> >> As fodder for discussion, here's a scraping of the currently-tested\n> >> versions. (This counts only animals running configure, ie not MSVC.\n> >> Also, my query looked back a few months, so some recently-dead\n> >> animals are still here.)\n> \n> > All the other animals using bison < 3.0.2 are dead.\n> \n> Uh, what?\n\nArgh, regex fail. I looked for a version with 3 components :/.\n\n\n> I'd be content for now to set the minimums at 2.3 and 2.5.35.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Sep 2022 14:04:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Minimum bison and flex versions"
},
{
"msg_contents": "On Sat, Sep 3, 2022 at 4:04 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-09-02 16:29:22 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2022-09-02 15:08:01 -0400, Tom Lane wrote:\n> > I'd be content for now to set the minimums at 2.3 and 2.5.35.\n>\n> +1\n\nHere are autoconf-only patches to that effect.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 6 Sep 2022 13:32:32 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Minimum bison and flex versions"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-06 13:32:32 +0700, John Naylor wrote:\n> Here are autoconf-only patches to that effect.\n\nLooks like you did actually include src/tools/msvc as well :)\n\n\n> Subject: [PATCH v2 2/2] Bump minimum version of Flex to 2.5.35\n\nLGTM.\n\n\n> From b7f35ae5e0fd55f8dceb2a4a546be3b50065a09c Mon Sep 17 00:00:00 2001\n> From: John Naylor <john.naylor@postgresql.org>\n> Date: Tue, 6 Sep 2022 11:41:58 +0700\n> Subject: [PATCH v2 1/2] Bump minimum version of Bison to 2.3\n> \n> Since the retirement of some older buildfarm members, the oldest Bison\n> that gets regular testing is 2.3. MacOS ships that version, and will\n> continue doing so for the forseeable future because of Apple's policy\n> regarding GPLv3. While Mac users could use a package manager to install\n> a newer version, there is no compelling reason to do so at this time.\n\ns/to do so/to force them to do so/?\n\n\n> --- a/src/pl/plpgsql/src/pl_gram.y\n> +++ b/src/pl/plpgsql/src/pl_gram.y\n> @@ -39,10 +39,7 @@\n> /*\n> * Bison doesn't allocate anything that needs to live across parser calls,\n> * so we can easily have it use palloc instead of malloc. This prevents\n> - * memory leaks if we error out during parsing. Note this only works with\n> - * bison >= 2.0. However, in bison 1.875 the default is to use alloca()\n> - * if possible, so there's not really much problem anyhow, at least if\n> - * you're building with gcc.\n> + * memory leaks if we error out during parsing.\n> */\n> #define YYMALLOC palloc\n> #define YYFREE pfree\n\nGetting rid of all that copy-pasted stuff alone seems worth doing this :)\n\n\nLGTM.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Sep 2022 10:07:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Minimum bison and flex versions"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 12:07 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-09-06 13:32:32 +0700, John Naylor wrote:\n> > Here are autoconf-only patches to that effect.\n>\n> Looks like you did actually include src/tools/msvc as well :)\n\nAh, in my head I meant \"no patches for the Meson branch\". :-)\n\nCI fails on MSVC pg_upgrade, but that shows up in some existing CF bot\nfailures and in any case doesn't have a grammar, so I have pushed,\nthanks for looking!\n\n> > Since the retirement of some older buildfarm members, the oldest Bison\n> > that gets regular testing is 2.3. MacOS ships that version, and will\n> > continue doing so for the forseeable future because of Apple's policy\n> > regarding GPLv3. While Mac users could use a package manager to install\n> > a newer version, there is no compelling reason to do so at this time.\n>\n> s/to do so/to force them to do so/?\n\nThere are good reasons for a dev to install a newer Bison, like better\ndiagnostics, so used this language.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 12:59:09 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Minimum bison and flex versions"
}
] |
[
{
"msg_contents": "Hi,\n\nI think that this is a typo.\n\nAt function circle_same the second isnan test is wrong.\n\nAttached fix patch.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 2 Sep 2022 16:08:34 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typo function circle_same (src/backend/utils/adt/geo_ops.c)"
},
{
"msg_contents": "> On 2 Sep 2022, at 21:08, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> At function circle_same the second isnan test is wrong.\n\nYeah, that seems pretty wrong. Did you attempt to procure a test for when this\nyields the wrong result?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 21:15:03 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo function circle_same (src/backend/utils/adt/geo_ops.c)"
},
{
"msg_contents": "Em sex., 2 de set. de 2022 às 16:15, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n> > On 2 Sep 2022, at 21:08, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> > At function circle_same the second isnan test is wrong.\n>\n> Yeah, that seems pretty wrong. Did you attempt to procure a test for when\n> this\n> yields the wrong result?\n>\nHi Daniel,\nUnfortunately not.\n\nregards,\nRanier Vilela\n\n>\n>\n",
"msg_date": "Fri, 2 Sep 2022 16:22:25 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo function circle_same (src/backend/utils/adt/geo_ops.c)"
},
{
"msg_contents": "> On 2 Sep 2022, at 21:22, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> \n> Em sex., 2 de set. de 2022 às 16:15, Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> escreveu:\n> > On 2 Sep 2022, at 21:08, Ranier Vilela <ranier.vf@gmail.com <mailto:ranier.vf@gmail.com>> wrote:\n> \n> > At function circle_same the second isnan test is wrong.\n> \n> Yeah, that seems pretty wrong. Did you attempt to procure a test for when this\n> yields the wrong result?\n> Hi Daniel,\n> Unfortunately not.\n\nOn HEAD, the below query yields what seems to be the wrong result for the \"same\nas\" operator:\n\npostgres=# select '<(0,0),NaN>'::circle ~= '<(0,0),1>'::circle;\n ?column?\n----------\n t\n(1 row)\n\nWith the patch applied, it returns the expected:\n\npostgres=# select '<(0,0),NaN>'::circle ~= '<(0,0),1>'::circle;\n ?column?\n----------\n f\n(1 row)\n\nThere seems to be surprisingly few tests around these geo operators?\n\nThis was introduced in c4c340088 so any fix needs to be backpatched to v12. I\nwill do some more testing and digging to verify and will take care of it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 21:55:15 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo function circle_same (src/backend/utils/adt/geo_ops.c)"
},
{
"msg_contents": "Hi,\n\nOn Fri, Sep 01, 2022 at 09:55:15PM +0200, Daniel Gustafsson wrote:\n> > On 2 Sep 2022, at 21:22, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Em sex., 2 de set. de 2022 às 16:15, Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> escreveu:\n> > > On 2 Sep 2022, at 21:08, Ranier Vilela <ranier.vf@gmail.com <mailto:ranier.vf@gmail.com>> wrote:\n> >\n> > > At function circle_same the second isnan test is wrong.\n> >\n> > Yeah, that seems pretty wrong. Did you attempt to procure a test for when this\n> > yields the wrong result?\n> > Hi Daniel,\n> > Unfortunately not.\n>\n> On HEAD, the below query yields what seems to be the wrong result for the \"same\n> as\" operator:\n>\n> postgres=# select '<(0,0),NaN>'::circle ~= '<(0,0),1>'::circle;\n> ?column?\n> ----------\n> t\n> (1 row)\n>\n> With the patch applied, it returns the expected:\n>\n> postgres=# select '<(0,0),NaN>'::circle ~= '<(0,0),1>'::circle;\n> ?column?\n> ----------\n> f\n> (1 row)\n>\n> There seems to be surprisingly few tests around these geo operators?\n\nYeah, there are unfortunately a lot of problems around those and NaN, with\nmultiple reports in the past (I recall [1] and [2] but there were others).\nThere was a CF entry that tried to improve things [3], part of it was committed\nbut not all [4], and clearly some more work is needed.\n\n[1] https://www.postgresql.org/message-id/flat/CAGf+fX70rWFOk5cd00uMfa__0yP+vtQg5ck7c2Onb-Yczp0URA@mail.gmail.com\n[2] https://www.postgresql.org/message-id/20210330095751.x5hnqbqcxilzwjlm@nol\n[3] https://commitfest.postgresql.org/38/2710/\n[4] https://www.postgresql.org/message-id/3558828.1659045056@sss.pgh.pa.us\n\n\n",
"msg_date": "Sat, 3 Sep 2022 15:36:17 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo function circle_same (src/backend/utils/adt/geo_ops.c)"
},
{
"msg_contents": "> On 3 Sep 2022, at 09:36, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Yeah, there are unfortunately a lot of problems around those and NaN, with\n> multiple reports in the past (I recall [1] and [2] but there were others).\n\nNaNs are indeed incredibly complicated, but I think we are sort of in a good\nplace here given it's testing for equality in floats. The commit message of\nc4c34008854654279ec30067d72fc5d174d2f42f carries an explanation:\n\n\tThe float datatypes consider NaNs values to be equal and greater than\n\tall non-NaN values. This change considers NaNs equal only for equality\n\toperators. The placement operators, contains, overlaps, left/right of\n\tetc. continue to return false when NaNs are involved.\n\nFrom testing and reading I believe the fix in this thread is correct, but since\nNaNs are involved I will take another look at this with fresh eyes before going\nahead.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sun, 4 Sep 2022 00:39:20 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo function circle_same (src/backend/utils/adt/geo_ops.c)"
},
{
"msg_contents": "Em sáb., 3 de set. de 2022 às 19:39, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n> > On 3 Sep 2022, at 09:36, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> > Yeah, there are unfortunately a lot of problems around those and NaN,\n> with\n> > multiple reports in the past (I recall [1] and [2] but there were\n> others).\n>\n> NaNs are indeed incredibly complicated, but I think we are sort of in a\n> good\n> place here given it's testing for equality in floats. The commit message\n> of\n> c4c34008854654279ec30067d72fc5d174d2f42f carries an explanation:\n>\n> The float datatypes consider NaNs values to be equal and greater\n> than\n> all non-NaN values. This change considers NaNs equal only for\n> equality\n> operators. The placement operators, contains, overlaps,\n> left/right of\n> etc. continue to return false when NaNs are involved.\n>\n> From testing and reading I believe the fix in this thread is correct, but\n> since\n> NaNs are involved I will take another look at this with fresh eyes before\n> going\n> ahead.\n>\nYeah, the fix is correct.\n\nBut with Windows 10 build, I got this diff result:\n\ndiff -w -U3\nC:/dll/postgres_dev/postgres_master/src/test/regress/expected/geometry.out\nC:/dll/postgres_dev/postgres_master/src/test/regress/results/geometry.out\n---\nC:/dll/postgres_dev/postgres_master/src/test/regress/expected/geometry.out\n2022-09-01 08:05:03.685931000 -0300\n+++\nC:/dll/postgres_dev/postgres_master/src/test/regress/results/geometry.out\n2022-09-04 09:27:47.133617800 -0300\n@@ -4380,9 +4380,8 @@\n <(100,200),10> | <(100,200),10>\n <(100,1),115> | <(100,1),115>\n <(3,5),0> | <(3,5),0>\n- <(3,5),NaN> | <(3,5),0>\n <(3,5),NaN> | <(3,5),NaN>\n-(9 rows)\n+(8 rows)\n\n -- Overlap with circle\n SELECT c1.f1, c2.f1 FROM CIRCLE_TBL c1, CIRCLE_TBL c2 WHERE c1.f1 && c2.f1;\n\nNot sure why.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 4 Sep 2022 09:39:25 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo function circle_same (src/backend/utils/adt/geo_ops.c)"
},
{
"msg_contents": "> On 4 Sep 2022, at 14:39, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> But with Windows 10 build, I got this diff result:\n> \n> diff -w -U3 C:/dll/postgres_dev/postgres_master/src/test/regress/expected/geometry.out C:/dll/postgres_dev/postgres_master/src/test/regress/results/geometry.out\n> --- C:/dll/postgres_dev/postgres_master/src/test/regress/expected/geometry.out 2022-09-01 08:05:03.685931000 -0300\n> +++ C:/dll/postgres_dev/postgres_master/src/test/regress/results/geometry.out 2022-09-04 09:27:47.133617800 -0300\n> @@ -4380,9 +4380,8 @@\n> <(100,200),10> | <(100,200),10>\n> <(100,1),115> | <(100,1),115>\n> <(3,5),0> | <(3,5),0>\n> - <(3,5),NaN> | <(3,5),0>\n> <(3,5),NaN> | <(3,5),NaN>\n> -(9 rows)\n> +(8 rows)\n> \n> -- Overlap with circle\n> SELECT c1.f1, c2.f1 FROM CIRCLE_TBL c1, CIRCLE_TBL c2 WHERE c1.f1 && c2.f1;\n> \n> Not sure why.\n\nThat's not just on Windows, and it makes total sense as a radius of NaN isn't\nthe same as a radius of zero.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 5 Sep 2022 10:24:25 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo function circle_same (src/backend/utils/adt/geo_ops.c)"
},
{
"msg_contents": "Created a CF entry.\nhttps://commitfest.postgresql.org/40/3883/\n\nAttached a patch with a fix correction to regress output.\n\nI think this needs to be backpatched until version 12.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 10 Sep 2022 07:58:23 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo function circle_same (src/backend/utils/adt/geo_ops.c)"
},
{
"msg_contents": "Thank you for the commit.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 12 Sep 2022 08:14:21 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo function circle_same (src/backend/utils/adt/geo_ops.c)"
}
] |
[
{
"msg_contents": "Postgres 14 commit 5b861baa55 added hardening to nbtree page deletion.\nThis had the effect of making nbtree VACUUM robust against misbehaving\noperator classes -- we just LOG the problem and move on, without\nthrowing an error. In practice a \"misbehaving operator class\" is often\na problem with collation versioning.\n\nI think that this should be backpatched now, to protect users from\nparticularly nasty problems that hitting the error eventually leads\nto.\n\nAn error ends the whole VACUUM operation. If VACUUM cannot delete the\npage the first time, there is no reason to think that it'll be any\ndifferent on the second or the tenth attempt. The eventual result\n(absent user/DBA intervention) is that no antiwraparound autovacuum\nwill ever complete, leading to an outage when the system hits\nxidStopLimit. (Actually this scenario won't result in the system\nhitting xidStopLimit where the failsafe is available, but that's\nanother thing that is only in 14, so that's not any help.)\n\nThis seems low risk. The commit in question is very simple. It just\ndowngrades an old 9.4-era ereport() from ERROR to LOG, and adds a\n\"return false;\" immediately after that. The function in question is\nfundamentally structured in a way that allows it to back out of page\ndeletion because of problems that are far removed from where the\ncaller starts from. When and why we back out of page deletion is\nalready opaque to the caller, so it's very hard to imagine a new\nproblem caused by backpatching. Besides all this, 14 has been out for\na while now.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 2 Sep 2022 14:13:15 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Backpatching nbtree VACUUM (page deletion) hardening"
},
{
"msg_contents": "On Fri, Sep 02, 2022 at 02:13:15PM -0700, Peter Geoghegan wrote:\n> I think that this should be backpatched now, to protect users from\n> particularly nasty problems that hitting the error eventually leads\n> to.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 2 Sep 2022 15:45:15 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Backpatching nbtree VACUUM (page deletion) hardening"
},
{
"msg_contents": "On Fri, Sep 02, 2022 at 02:13:15PM -0700, Peter Geoghegan wrote:\n> Postgres 14 commit 5b861baa55 added hardening to nbtree page deletion.\n> This had the effect of making nbtree VACUUM robust against misbehaving\n> operator classes -- we just LOG the problem and move on, without\n> throwing an error. In practice a \"misbehaving operator class\" is often\n> a problem with collation versioning.\n\nThis has been a problem for years, and still for years to come with\nlibc updates. I am not much into this stuff, but does running VACUUM\nin this case help with the state of the index that used a past,\nnow-invalid, collation (be it libc or ICU) to get a bit cleaned up?\n\n> An error ends the whole VACUUM operation. If VACUUM cannot delete the\n> page the first time, there is no reason to think that it'll be any\n> different on the second or the tenth attempt. The eventual result\n> (absent user/DBA intervention) is that no antiwraparound autovacuum\n> will ever complete, leading to an outage when the system hits\n> xidStopLimit. (Actually this scenario won't result in the system\n> hitting xidStopLimit where the failsafe is available, but that's\n> another thing that is only in 14, so that's not any help.)\n\nWhen written like that, this surely sounds extremely bad and this\nwould need more complex chirurgy (or just running with a build that\nincludes this patch?).\n\n> This seems low risk. The commit in question is very simple. It just\n> downgrades an old 9.4-era ereport() from ERROR to LOG, and adds a\n> \"return false;\" immediately after that. The function in question is\n> fundamentally structured in a way that allows it to back out of page\n> deletion because of problems that are far removed from where the\n> caller starts from. When and why we back out of page deletion is\n> already opaque to the caller, so it's very hard to imagine a new\n> problem caused by backpatching. 
Besides all this, 14 has been out for\n> a while now.\n\nYeah, I can take it that we would have seen reports if this was an\nissue, and I don't recall seeing one on the community lists, at\nleast.\n--\nMichael",
"msg_date": "Sat, 3 Sep 2022 10:14:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Backpatching nbtree VACUUM (page deletion) hardening"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 6:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n> This has been a problem for years, and still for years to come with\n> libc updates. I am not much into this stuff, but does running VACUUM\n> in this case help with the state of the index that used a past,\n> now-invalid, collation (be it libc or ICU) to get a bit cleaned up?\n\nYes -- nbtree VACUUM generally can cope quite well, even when the\nindex is corrupt. It should mostly manage to do what is expected here,\neven with a misbehaving opclass, because it relies as little as\npossible on user-defined opclass code.\n\nEven without the hardening in place, nbtree VACUUM will still do a\n*surprisingly* good job of recovering when the opclass is broken in\nsome way: VACUUM just needs the insertion scankey operator class code\nto initially determine roughly where to look for the to-be-deleted\npage's downlink, one level up in the tree. Even when an operator class\nis wildly broken (e.g. the comparator gives a result that it\ndetermines at random), we still won't see problems in nbtree VACUUM\nmost of the time -- because even being roughly correct is good enough\nin practice!\n\nYou have to be quite unlucky to hit this, even when the opclass is\nwildly broken (which is probably much less common than \"moderately\nbroken\").\n\n> When written like that, this surely sounds extremely bad and this\n> would need more complex chirurgy (or just running with a build that\n> includes this patch?).\n\nThe patch will fix the case in question, which I have seen internal\nAWS reports about -- though the initial fix that went into 14 wasn't\ndriven by any complaint from any user. I just happened to notice that\nwe were throwing an ERROR in nbtree VACUUM for no good reason, which\nis something that should be avoided on general principle.\n\nIn theory there could be other ways in which you'd run into the same\nbasic problem (in any index AM). 
The important point is that we're\nbetter off not throwing any errors in the first place, but if we must\nthen they had better not be errors that will be repeated again and\nagain, without any chance of the problem going away naturally. (Not\nthat it never makes sense to just throw an error; there are meaningful\ngradations of \"totally unacceptable problem\".)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 2 Sep 2022 18:51:27 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Backpatching nbtree VACUUM (page deletion) hardening"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 6:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Yes -- nbtree VACUUM generally can cope quite well, even when the\n> index is corrupt. It should mostly manage to do what is expected here,\n> even with a misbehaving opclass, because it relies as little as\n> possible on user-defined opclass code.\n\nI just backpatched the hardening commit from 14 to every supported branch.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 5 Sep 2022 11:22:36 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Backpatching nbtree VACUUM (page deletion) hardening"
}
] |
[
{
"msg_contents": "Hi,\n\nbuilding PG with meson on windows I occasionally got weird errors around\nscanners. Sometimes scanner generation would fail with\n\n win_flex.exe: error deleting file C:\\Users\\myadmin\\AppData\\Local\\Temp\\~flex_out_main_2\n\nsometimes the generated scanner would just be corrupted.\n\n\nI was confused by only hitting this in local VM, not in CI, but after finally\nhunting it down it made more sense:\nhttps://github.com/lexxmark/winflexbison/issues/86\n\nhttps://github.com/lexxmark/winflexbison/blob/master/flex/src/main.c#L1051\n\nIt uses a temporary file name without any concurrency protection. Looks like\nwindows' _tempnam is pretty darn awful and returns a predictable name as long\nas no conflicting file exists.\n\nOur documentation doesn't point to winflexbison, but recommends using\nflex/bison from msys. But I've certainly read about others using winflexbison,\ne.g. [1] [2]. The flex/bison in 'chocolatey', which I've also seen referenced,\nis afaics winflexbison.\n\n\nAfaict the issue also exists in our traditional windows build - but I've not\nseen anybody report this as an issue.\n\n\nI started this thread to document the issue, in case developers using visual\nstudio are hitting this today.\n\n\nIt looks like a similar issue exists for the windows bison port:\nhttps://github.com/lexxmark/winflexbison/blob/b2a94ad5fd82cf4a54690a879e14ff9511b77273/bison/src/output.c#L816\n\nI've not observed that failure, presumably because the window for it is much\nshorter.\n\n\nFor the meson build it is trivial to \"address\" this by setting FLEX_TMP_DIR to\na private directory (which we already need to deal with lex.backup), something\nsimilar could be done for src/tools/msvc.\n\n\nUnless we think it'd be better to just refuse to work with winflexbison?\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/ae44b04a-2087-fa9d-c6b1-b1dcbbacf4ae%40dunslane.net\n[2] 
https://www.postgresql.org/message-id/CAGRY4nxJNqEjr1NtdB8%3DdcOwwsWTqQfykyvgp1i_udCtw--BkQ%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 2 Sep 2022 15:43:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "win_flex.exe (and likely win_bison.exe) isn't concurrency safe"
},
{
"msg_contents": "On Sat, Sep 3, 2022 at 4:13 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> building PG with meson on windows I occasionally got weird errors around\n> scanners. Sometimes scanner generation would fail with\n>\n> win_flex.exe: error deleting file C:\\Users\\myadmin\\AppData\\Local\\Temp\\~flex_out_main_2\n>\n> sometimes the generated scanner would just be corrupted.\n>\n>\n> I was confused by only hitting this in local VM, not in CI, but after finally\n> hunting it down it made more sense:\n> https://github.com/lexxmark/winflexbison/issues/86\n>\n> https://github.com/lexxmark/winflexbison/blob/master/flex/src/main.c#L1051\n>\n> It uses a temporary file name without any concurrency protection. Looks like\n> windows' _tempnam is pretty darn awful and returns a predictable name as long\n> as no conflicting file exists.\n>\n> Our documentation doesn't point to winflexbison, but recommends using\n> flex/bison from msys. But I've certainly read about others using winflexbison,\n> e.g. [1] [2]. 
The flex/bison in 'chocolatey', which I've also seen referenced,\n> is afaics winflexbison.\n>\n>\n> Afaict the issue also exists in our traditional windows build - but I've not\n> seen anybody report this as an issue.\n>\n>\n> I started this thread to document the issue, in case developers using visual\n> studio are hitting this today.\n>\n\nI regularly use visual studio for the development work but never hit\nthis issue probably because I am using flex/bison from msys.\n\n>\n> It looks like a similar issue exists for the windows bison port:\n> https://github.com/lexxmark/winflexbison/blob/b2a94ad5fd82cf4a54690a879e14ff9511b77273/bison/src/output.c#L816\n>\n> I've not observed that failure, presumably because the window for it is much\n> shorter.\n>\n>\n> For the meson build it is trivial to \"address\" this by setting FLEX_TMP_DIR to\n> a private directory (which we already need to deal with lex.backup), something\n> similar could be done for src/tools/msvc.\n>\n>\n> Unless we think it'd be better to just refuse to work with winflexbison?\n>\n\nPersonally, I have never used this but I feel it would be good to keep\nit working especially if others are using it.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 3 Sep 2022 10:33:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: win_flex.exe (and likely win_bison.exe) isn't concurrency safe"
},
{
"msg_contents": "On Sat, Sep 03, 2022 at 10:33:43AM +0530, Amit Kapila wrote:\n> On Sat, Sep 3, 2022 at 4:13 AM Andres Freund <andres@anarazel.de> wrote:\n> > For the meson build it is trivial to \"address\" this by setting FLEX_TMP_DIR to\n> > a private directory (which we already need to deal with lex.backup), something\n> > similar could be done for src/tools/msvc.\n> >\n> >\n> > Unless we think it'd be better to just refuse to work with winflexbison?\n> >\n> \n> Personally, I have never used this but I feel it would be good to keep\n> it working especially if others are using it.\n\nMy windows environments rely on winflexbison so I'd prefer if we keep the\ncompatibility (or improve it). I personally never hit that problem, but I\ndon't work much on Windows either.\n\n\n",
"msg_date": "Sat, 3 Sep 2022 13:21:51 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: win_flex.exe (and likely win_bison.exe) isn't concurrency safe"
},
{
"msg_contents": "\nOn 2022-09-02 Fr 18:43, Andres Freund wrote:\n> Hi,\n>\n> building PG with meson on windows I occasionally got weird errors around\n> scanners. Sometimes scanner generation would fail with\n>\n> win_flex.exe: error deleting file C:\\Users\\myadmin\\AppData\\Local\\Temp\\~flex_out_main_2\n>\n> sometimes the generated scanner would just be corrupted.\n>\n>\n> I was confused by only hitting this in local VM, not in CI, but after finally\n> hunting it down it made more sense:\n> https://github.com/lexxmark/winflexbison/issues/86\n>\n> https://github.com/lexxmark/winflexbison/blob/master/flex/src/main.c#L1051\n>\n> It uses a temporary file name without any concurrency protection. Looks like\n> windows' _tempnam is pretty darn awful and returns a predictable name as long\n> as no conflicting file exists.\n>\n> Our documentation doesn't point to winflexbison, but recommends using\n> flex/bison from msys. But I've certainly read about others using winflexbison,\n> e.g. [1] [2]. The flex/bison in 'chocolatey', which I've also seen referenced,\n> is afaics winflexbison.\n>\n>\n> Afaict the issue also exists in our traditional windows build - but I've not\n> seen anybody report this as an issue.\n>\n>\n> I started this thread to document the issue, in case developers using visual\n> studio are hitting this today.\n>\n>\n> It looks like a similar issue exists for the windows bison port:\n> https://github.com/lexxmark/winflexbison/blob/b2a94ad5fd82cf4a54690a879e14ff9511b77273/bison/src/output.c#L816\n>\n> I've not observed that failure, presumably because the window for it is much\n> shorter.\n>\n>\n> For the meson build it is trivial to \"address\" this by setting FLEX_TMP_DIR to\n> a private directory (which we already need to deal with lex.backup), something\n> similar could be done for src/tools/msvc.\n>\n>\n> Unless we think it'd be better to just refuse to work with winflexbison?\n\n\nNo, I think your workaround is better if it works. 
I don't want to\nrequire installation of msys to build with MSVC if that's avoidable.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 3 Sep 2022 09:31:50 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: win_flex.exe (and likely win_bison.exe) isn't concurrency safe"
}
] |
[
{
"msg_contents": "Hi\n\nI got fresh warnings when I build an extension\n\nIn file included from\n/usr/local/pgsql/master/include/server/mb/pg_wchar.h:22,\n from src/format.c:17:\n/usr/local/pgsql/master/include/server/port/simd.h: In function\n‘vector8_has’:\n/usr/local/pgsql/master/include/server/port/simd.h:168:27: warning:\ncomparison of integer expressions of different signedness: ‘int’ and ‘long\nunsigned int’ [-Wsign-compare]\n 168 | for (int i = 0; i < sizeof(Vector8); i++)\n | ^\n/usr/local/pgsql/master/include/server/port/simd.h: In function\n‘vector8_has_le’:\n/usr/local/pgsql/master/include/server/port/simd.h:219:27: warning:\ncomparison of integer expressions of different signedness: ‘int’ and ‘long\nunsigned int’ [-Wsign-compare]\n 219 | for (int i = 0; i < sizeof(Vector8); i++)\n | ^\n\n[pavel@localhost plpgsql_check]$ uname -a\nLinux localhost.localdomain 5.18.19-200.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC\nSun Aug 21 15:52:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux\n[pavel@localhost plpgsql_check]$ gcc --version\ngcc (GCC) 12.2.1 20220819 (Red Hat 12.2.1-1)\nCopyright (C) 2022 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. 
There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\nRegards\n\nPavel",
"msg_date": "Sat, 3 Sep 2022 07:30:03 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "warning: comparison of integer expressions of different signedness\n related to simd.h"
},
{
"msg_contents": "On Sat, Sep 3, 2022 at 12:30 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Hi\n>\n> I got fresh warnings when I build an extension\n>\n> In file included from /usr/local/pgsql/master/include/server/mb/pg_wchar.h:22,\n> from src/format.c:17:\n> /usr/local/pgsql/master/include/server/port/simd.h: In function ‘vector8_has’:\n> /usr/local/pgsql/master/include/server/port/simd.h:168:27: warning: comparison of integer expressions of different signedness: ‘int’ and ‘long unsigned int’ [-Wsign-compare]\n> 168 | for (int i = 0; i < sizeof(Vector8); i++)\n> | ^\n\n\"int\" should probably be \"Size\" -- does that remove the warning?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 3 Sep 2022 12:50:24 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: warning: comparison of integer expressions of different\n signedness related to simd.h"
},
{
"msg_contents": "so 3. 9. 2022 v 7:50 odesílatel John Naylor <john.naylor@enterprisedb.com>\nnapsal:\n\n> On Sat, Sep 3, 2022 at 12:30 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > Hi\n> >\n> > I got fresh warnings when I build an extension\n> >\n> > In file included from\n> /usr/local/pgsql/master/include/server/mb/pg_wchar.h:22,\n> > from src/format.c:17:\n> > /usr/local/pgsql/master/include/server/port/simd.h: In function\n> ‘vector8_has’:\n> > /usr/local/pgsql/master/include/server/port/simd.h:168:27: warning:\n> comparison of integer expressions of different signedness: ‘int’ and ‘long\n> unsigned int’ [-Wsign-compare]\n> > 168 | for (int i = 0; i < sizeof(Vector8); i++)\n> > | ^\n>\n> \"int\" should probably be \"Size\" -- does that remove the warning?\n>\n\nyes, it removes warnings\n\nPavel\n\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Sat, 3 Sep 2022 07:53:52 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: warning: comparison of integer expressions of different\n signedness related to simd.h"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Sat, Sep 3, 2022 at 12:30 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> /usr/local/pgsql/master/include/server/port/simd.h: In function ‘vector8_has’:\n>> /usr/local/pgsql/master/include/server/port/simd.h:168:27: warning: comparison of integer expressions of different signedness: ‘int’ and ‘long unsigned int’ [-Wsign-compare]\n>> 168 | for (int i = 0; i < sizeof(Vector8); i++)\n\n> \"int\" should probably be \"Size\" -- does that remove the warning?\n\nAgreed, should be Size or size_t, or else cast the sizeof() result.\nBut I wonder why none of the buildfarm is showing such a warning.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Sep 2022 01:57:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning: comparison of integer expressions of different\n signedness related to simd.h"
},
{
"msg_contents": "so 3. 9. 2022 v 7:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Sat, Sep 3, 2022 at 12:30 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >> /usr/local/pgsql/master/include/server/port/simd.h: In function\n> ‘vector8_has’:\n> >> /usr/local/pgsql/master/include/server/port/simd.h:168:27: warning:\n> comparison of integer expressions of different signedness: ‘int’ and ‘long\n> unsigned int’ [-Wsign-compare]\n> >> 168 | for (int i = 0; i < sizeof(Vector8); i++)\n>\n> > \"int\" should probably be \"Size\" -- does that remove the warning?\n>\n> Agreed, should be Size or size_t, or else cast the sizeof() result.\n> But I wonder why none of the buildfarm is showing such a warning.\n>\n\nI got this warning when I compiled plgsql_check against master with enabled\nasserts\n\nhttps://github.com/okbob/plpgsql_check\n\nIn file included from\n/usr/local/pgsql/master/include/server/mb/pg_wchar.h:22,\n from src/format.c:17:\n/usr/local/pgsql/master/include/server/port/simd.h: In function\n‘vector8_has_le’:\n/usr/local/pgsql/master/include/server/port/simd.h:219:27: warning:\ncomparison of integer expressions of different signedness: ‘int’ and ‘long\nunsigned int’ [-Wsign-compare]\n 219 | for (int i = 0; i < sizeof(Vector8); i++)\n | ^\n\n\n\n\n\n> regards, tom lane\n>",
"msg_date": "Sat, 3 Sep 2022 08:02:02 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: warning: comparison of integer expressions of different\n signedness related to simd.h"
},
{
"msg_contents": "On Sat, Sep 3, 2022 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Sat, Sep 3, 2022 at 12:30 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >> /usr/local/pgsql/master/include/server/port/simd.h: In function ‘vector8_has’:\n> >> /usr/local/pgsql/master/include/server/port/simd.h:168:27: warning: comparison of integer expressions of different signedness: ‘int’ and ‘long unsigned int’ [-Wsign-compare]\n> >> 168 | for (int i = 0; i < sizeof(Vector8); i++)\n>\n> > \"int\" should probably be \"Size\" -- does that remove the warning?\n>\n> Agreed, should be Size or size_t, or else cast the sizeof() result.\n> But I wonder why none of the buildfarm is showing such a warning.\n\nIf I add -Wsign-compare to CPPFLAGS, I get dozens of warnings all over\nthe place. It's probably unreasonable for extensions to expect to\ncompile cleanly with warnings that the core server doesn't use, but\nthis header is clearly wrong and easy to remedy, so I've pushed a\npatch.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 4 Sep 2022 09:32:49 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: warning: comparison of integer expressions of different\n signedness related to simd.h"
}
] |
[
{
"msg_contents": "Hi hackers,\nI write a tiny patch about vacuumlo to improve test coverage.\nI hope my work is meaningful.\n\n---\nRegards,\nDongWook Lee.",
"msg_date": "Sat, 3 Sep 2022 17:27:39 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "vacuumlo: add test to vacuumlo for test coverage"
},
{
"msg_contents": "> On 3 Sep 2022, at 10:27, Dong Wook Lee <sh95119@gmail.com> wrote:\n\n> I write a tiny patch about vacuumlo to improve test coverage.\n\nIf we are paying for setting up a cluster we might as well test more scenarios\nthan just the one. Perhaps some other low-hanging fruit like calling vacuumlo\non a non-existing database, on one where no LO have been made etc?\n\nOne thing about the patch:\n\n+IPC::Run::run [ 'vacuumlo', '-v', '-n', '-p', $port, 'postgres' ], '>', \\$stdout;\n\nThis should use run_command() which provides facilities for running commands\nand capturing STDOUT. With this the test can be rewritten something like:\n\nmy ($out, $err) = run_command(['vacuumlo', .. ]);\nlike($out, ..);\n\nrun_command() is defined in PostgreSQL::Test::Utils.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 8 Sep 2022 14:53:32 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: vacuumlo: add test to vacuumlo for test coverage"
},
{
"msg_contents": "2022年9月3日(土) 17:28 Dong Wook Lee <sh95119@gmail.com>:\n>\n> Hi hackers,\n> I write a tiny patch about vacuumlo to improve test coverage.\n> I hope my work is meaningful.\n\nHi\n\nWhile reviewing the patch backlog, we have determined that this patch adds\none or more TAP tests but has not added the test to the \"meson.build\" file.\n\nTo do this, locate the relevant \"meson.build\" file for each test and add it\nin the 'tests' dictionary, which will look something like this:\n\n 'tap': {\n 'tests': [\n 't/001_basic.pl',\n ],\n },\n\nFor some additional details please see this Wiki article:\n\n https://wiki.postgresql.org/wiki/Meson_for_patch_authors\n\nFor more information on the meson build system for PostgreSQL see:\n\n https://wiki.postgresql.org/wiki/Meson\n\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Wed, 16 Nov 2022 13:48:30 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumlo: add test to vacuumlo for test coverage"
},
{
"msg_contents": "On Wed, 16 Nov 2022 at 10:18, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> 2022年9月3日(土) 17:28 Dong Wook Lee <sh95119@gmail.com>:\n> >\n> > Hi hackers,\n> > I write a tiny patch about vacuumlo to improve test coverage.\n> > I hope my work is meaningful.\n>\n> Hi\n>\n> While reviewing the patch backlog, we have determined that this patch adds\n> one or more TAP tests but has not added the test to the \"meson.build\" file.\n>\n> To do this, locate the relevant \"meson.build\" file for each test and add it\n> in the 'tests' dictionary, which will look something like this:\n>\n> 'tap': {\n> 'tests': [\n> 't/001_basic.pl',\n> ],\n> },\n>\n> For some additional details please see this Wiki article:\n>\n> https://wiki.postgresql.org/wiki/Meson_for_patch_authors\n>\n> For more information on the meson build system for PostgreSQL see:\n>\n> https://wiki.postgresql.org/wiki/Meson\n\nHi DongWook Lee,\n\nPlease plan to work on the comment and provide a patch. As CommitFest\n2023-01 is currently underway, this would be an excellent time to\nupdate the patch and get the patch in a better shape.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 17 Jan 2023 17:10:45 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumlo: add test to vacuumlo for test coverage"
},
{
"msg_contents": "On Tue, 17 Jan 2023 at 17:10, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 16 Nov 2022 at 10:18, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> >\n> > 2022年9月3日(土) 17:28 Dong Wook Lee <sh95119@gmail.com>:\n> > >\n> > > Hi hackers,\n> > > I write a tiny patch about vacuumlo to improve test coverage.\n> > > I hope my work is meaningful.\n> >\n> > Hi\n> >\n> > While reviewing the patch backlog, we have determined that this patch adds\n> > one or more TAP tests but has not added the test to the \"meson.build\" file.\n> >\n> > To do this, locate the relevant \"meson.build\" file for each test and add it\n> > in the 'tests' dictionary, which will look something like this:\n> >\n> > 'tap': {\n> > 'tests': [\n> > 't/001_basic.pl',\n> > ],\n> > },\n> >\n> > For some additional details please see this Wiki article:\n> >\n> > https://wiki.postgresql.org/wiki/Meson_for_patch_authors\n> >\n> > For more information on the meson build system for PostgreSQL see:\n> >\n> > https://wiki.postgresql.org/wiki/Meson\n>\n> Hi DongWook Lee,\n>\n> Please plan to work on the comment and provide a patch. As CommitFest\n> 2023-01 is currently underway, this would be an excellent time to\n> update the patch and get the patch in a better shape.\n\nThere has been no updates on this thread for some time, so this has\nbeen switched as Returned with Feedback. Feel free to open it in the\nnext commitfest if you plan to continue on this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 31 Jan 2023 23:20:36 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuumlo: add test to vacuumlo for test coverage"
}
] |
[
{
"msg_contents": "Hi hackers,\nI try to add to psql test about --help, \\e, and the encoding option.\n\n---\nRegards,\nDongWook Lee.\n\n\n",
"msg_date": "Sat, 3 Sep 2022 18:08:38 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "add test to psql for coverage with --help, \\e option, encoding option"
},
{
"msg_contents": "I confirmed that I missed the patch file.\nAnd the current code is different from when I wrote the patch, so I\ndon't think my patch will be meaningful anymore.\n\nOn Sat, Sep 3, 2022 at 6:08 PM Dong Wook Lee <sh95119@gmail.com> wrote:\n>\n> Hi hackers,\n> I try to add to psql test about --help, \\e, and the encoding option.\n>\n> ---\n> Regards,\n> DongWook Lee.",
"msg_date": "Sat, 3 Sep 2022 18:35:39 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add test to psql for coverage with --help, \\e option,\n encoding option"
}
] |
[
{
"msg_contents": "I've been trying to figure out why my new buildfarm animal mamba\noccasionally fails the pg_basebackup tests [1][2]. I've not run\nthat to ground yet, but one thing I've found that's consistently\nreproducible everywhere is that pg_basebackup's --gzip switch\nmisbehaves. The manual says, and the principle of least astonishment\nagrees, that that should invoke gzip with the default compression\nlevel. However, the three test cases beginning at about line 810 of\n010_pg_basebackup.pl produce these output file sizes on my x86_64\nLinux machine:\n\nbackup_gzip (\"--compress 1\"):\ntotal 3672\n-rw-r-----. 1 postgres postgres 137756 Sep 2 23:38 backup_manifest\n-rw-r-----. 1 postgres postgres 3538992 Sep 2 23:38 base.tar.gz\n-rw-r-----. 1 postgres postgres 73991 Sep 2 23:38 pg_wal.tar.gz\n\nbackup_gzip2 (\"--gzip\"):\ntotal 19544\n-rw-r-----. 1 postgres postgres 137756 Sep 2 23:38 backup_manifest\n-rw-r-----. 1 postgres postgres 3086972 Sep 2 23:38 base.tar.gz\n-rw-r-----. 1 postgres postgres 16781399 Sep 2 23:38 pg_wal.tar.gz\n\nbackup_gzip3 (\"--compress gzip:1\"):\ntotal 3672\n-rw-r-----. 1 postgres postgres 137756 Sep 2 23:38 backup_manifest\n-rw-r-----. 1 postgres postgres 3539006 Sep 2 23:38 base.tar.gz\n-rw-r-----. 1 postgres postgres 73989 Sep 2 23:38 pg_wal.tar.gz\n\nIt makes sense that base.tar.gz is compressed a little better with\n--gzip than with level-1 compression, but why is pg_wal.tar.gz not\ncompressed at all? It looks like the problem probably boils down to\nwhich of \"-1\" and \"0\" means \"default behavior\" vs \"no compression\",\nwith different code layers interpreting that differently. 
I can't\nfind exactly where that's happening, but I did manage to stop the\nfailures with this crude hack:\n\ndiff --git a/src/bin/pg_basebackup/walmethods.c b/src/bin/pg_basebackup/walmethods.c\nindex e90aa0ba37..edddd9b578 100644\n--- a/src/bin/pg_basebackup/walmethods.c\n+++ b/src/bin/pg_basebackup/walmethods.c\n@@ -1358,7 +1358,7 @@ CreateWalTarMethod(const char *tarbase,\n \tsprintf(tar_data->tarfilename, \"%s%s\", tarbase, suffix);\n \ttar_data->fd = -1;\n \ttar_data->compression_algorithm = compression_algorithm;\n-\ttar_data->compression_level = compression_level;\n+\ttar_data->compression_level = compression_level > 0 ? compression_level : Z_DEFAULT_COMPRESSION;\n \ttar_data->sync = sync;\n #ifdef HAVE_LIBZ\n \tif (compression_algorithm == PG_COMPRESSION_GZIP)\n\nThat's not right as a real fix, because it would have the effect\nthat \"--compress gzip:0\" would also invoke default compression,\nwhereas what it should do is produce the uncompressed output\nwe're actually getting. Both cases have compression_level == 0\nby the time we reach here, though.\n\nI suspect that there are related bugs in other code paths in this\nrat's nest of undocumented functions and dubious API abstractions;\nbut since it's all undocumented, who can say which places are wrong\nand which are not?\n\nI might not ding this code quite this hard, if I hadn't had\nequally-unpleasant encounters with it previously (eg 248c3a937).\nIt's a mess, and I do not find it to be up to project standards.\n\nA vaguely-related matter is that the deflateParams calls all pass \"0\"\nas the third parameter:\n\n if (deflateParams(tar_data->zp, tar_data->compression_level, 0) != Z_OK)\n\nAside from being unreadable, that's entirely unwarranted familiarity\nwith the innards of libz. 
zlib.h says you should be writing a named\nconstant, probably Z_DEFAULT_STRATEGY.\n\nBTW, I'm fairly astonished that anyone would have thought that three\ncomplete pg_basebackup cycles testing essentially-identical options\nwere a good use of developer time and buildfarm cycles from here to\neternity. Even if digging into it did expose a bug, the test case\ndeserves little credit for that, because it entirely failed to call\nattention to the problem. I had to whack the script pretty hard\njust to get it to not delete the evidence.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-09-01%2018%3A38%3A27\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-08-31%2011%3A46%3A09\n\n\n",
"msg_date": "Sat, 03 Sep 2022 11:11:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Sat, Sep 03, 2022 at 11:11:29AM -0400, Tom Lane wrote:\n> It makes sense that base.tar.gz is compressed a little better with\n> --gzip than with level-1 compression, but why is pg_wal.tar.gz not\n> compressed at all? It looks like the problem probably boils down to\n> which of \"-1\" and \"0\" means \"default behavior\" vs \"no compression\",\n> with different code layers interpreting that differently. I can't\n> find exactly where that's happening, but I did manage to stop the\n> failures with this crude hack:\n\nThere is a distinction coming in pg_basebackup.c from the way we\ndeparse the compression specification and the default compression\nlevel that should be assigned if there is no level directly specified\nby the user. It seems to me that the error comes from this code in\nBaseBackup() when we are under STREAM_WAL (default):\n\n if (client_compress->algorithm == PG_COMPRESSION_GZIP)\n {\n wal_compress_algorithm = PG_COMPRESSION_GZIP;\n wal_compress_level =\n (client_compress->options & PG_COMPRESSION_OPTION_LEVEL)\n != 0 ? client_compress->level : 0;\n\nffd5365 has missed that wal_compress_level should be set to\nZ_DEFAULT_COMPRESSION if there is nothing set in the compression\nspec for a zlib build. pg_receivewal.c enforces that already.\n\n> That's not right as a real fix, because it would have the effect\n> that \"--compress gzip:0\" would also invoke default compression,\n> whereas what it should do is produce the uncompressed output\n> we're actually getting. Both cases have compression_level == 0\n> by the time we reach here, though.\n\nNope, that would not be right.\n\n> BTW, I'm fairly astonished that anyone would have thought that three\n> complete pg_basebackup cycles testing essentially-identical options\n> were a good use of developer time and buildfarm cycles from here to\n> eternity. 
Even if digging into it did expose a bug, the test case\n> deserves little credit for that, because it entirely failed to call\n> attention to the problem. I had to whack the script pretty hard\n> just to get it to not delete the evidence.\n\nThe introduction of the compression specification has introduced a lot\nof patterns where we expect or not expect compression to happen, and\non top of that this needs to be careful about backward-compatibility.\n--\nMichael",
"msg_date": "Sun, 4 Sep 2022 14:20:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Sun, Sep 04, 2022 at 02:20:52PM +0900, Michael Paquier wrote:\n> ffd5365 has missed that wal_compress_level should be set to\n> Z_DEFAULT_COMPRESSION if there is nothing set in the compression\n> spec for a zlib build. pg_receivewal.c enforces that already.\n\nSo, I have looked at this one. And it seems to me that the confusion\ncomes down to the existence of PG_COMPRESSION_OPTION_LEVEL. I have\nconsidered a couple of approaches here, like introducing an extra\nroutine in compression.c to assign a default compression level, but\nmy conclusion is at the end simpler: we always finish by setting up a\nlevel even if the caller wants nothing, in which case we can just use\neach library's default. And lz4, zstd and zlib are able to handle the\ncase where a default is given down to their internal routines just\nfine.\n\nAttached is the patch I am finishing with, consisting of:\n- the removal of PG_COMPRESSION_OPTION_LEVEL.\n- assigning a default compression level when nothing is specified in\nthe spec.\n- a couple of simplifications in pg_receivewal, pg_basebackup and the\nbackend code as there is no need to worry about the compression\nlevel.\n\nA nice effect of this approach is that we can centralize the checks on\nlz4, zstd and zlib when a build does not support any of these\noptions, as well as centralize the place where the default compression\nlevels are set. This passes all the regression tests, and it fixes\nthe issue reported. (Note that I have yet to run tests with all the\nlibraries disabled in ./configure.)\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 13 Sep 2022 16:13:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Attached is the patch I am finishing with, consisting of:\n> - the removal of PG_COMPRESSION_OPTION_LEVEL.\n> - assigning a default compression level when nothing is specified in\n> the spec.\n> - a couple of complifications in pg_receivewal, pg_basebackup and the\n> backend code as there is no need to worry about the compression\n> level.\n\nThis looks good to me. It seems simpler, and I concur that it\nfixes the described problem. I now see\n\ntmp_check/backup_gzip:\ntotal 3668\n-rw-r-----. 1 postgres postgres 137756 Sep 13 17:29 backup_manifest\n-rw-r-----. 1 postgres postgres 3537499 Sep 13 17:29 base.tar.gz\n-rw-r-----. 1 postgres postgres 73989 Sep 13 17:29 pg_wal.tar.gz\n\ntmp_check/backup_gzip2:\ntotal 3168\n-rw-r-----. 1 postgres postgres 137756 Sep 13 17:29 backup_manifest\n-rw-r-----. 1 postgres postgres 3083516 Sep 13 17:29 base.tar.gz\n-rw-r-----. 1 postgres postgres 17069 Sep 13 17:29 pg_wal.tar.gz\n\ntmp_check/backup_gzip3:\ntotal 3668\n-rw-r-----. 1 postgres postgres 137756 Sep 13 17:29 backup_manifest\n-rw-r-----. 1 postgres postgres 3537517 Sep 13 17:29 base.tar.gz\n-rw-r-----. 
1 postgres postgres 73988 Sep 13 17:29 pg_wal.tar.gz\n\nwhich looks sane: the gzip2 case should, and does, have better\ncompression than the other two.\n\nBTW, this bit:\n\ndiff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\nindex 3d1a4ddd5c..40f1d3f7e2 100644\n--- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n+++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n@@ -860,9 +860,6 @@ SKIP:\n \tmy $gzip_is_valid =\n \t system_log($gzip, '--test', @zlib_files, @zlib_files2, @zlib_files3);\n \tis($gzip_is_valid, 0, \"gzip verified the integrity of compressed data\");\n-\trmtree(\"$tempdir/backup_gzip\");\n-\trmtree(\"$tempdir/backup_gzip2\");\n-\trmtree(\"$tempdir/backup_gzip3\");\n }\n \n # Test background stream process terminating before the basebackup has\n\nis something I tried along the way to diagnosing the problem, and\nit turns out to have exactly zero effect. The $tempdir is some\ntemporary subdirectory of tmp_check that will get nuked at the end\nof the TAP test no matter what. So these rmtrees are merely making\nthe evidence disappear a bit faster; it will anyway.\n\nWhat I did to diagnose the problem was this:\n\n@@ -860,9 +860,9 @@ SKIP:\n my $gzip_is_valid =\n system_log($gzip, '--test', @zlib_files, @zlib_files2, @zlib_files3);\n is($gzip_is_valid, 0, \"gzip verified the integrity of compressed data\");\n- rmtree(\"$tempdir/backup_gzip\");\n- rmtree(\"$tempdir/backup_gzip2\");\n- rmtree(\"$tempdir/backup_gzip3\");\n+ system_log('mv', \"$tempdir/backup_gzip\", \"tmp_check\");\n+ system_log('mv', \"$tempdir/backup_gzip2\", \"tmp_check\");\n+ system_log('mv', \"$tempdir/backup_gzip3\", \"tmp_check\");\n }\n \n # Test background stream process terminating before the basebackup has\n\nwhich is not real clean, since then the files get left behind even\non success, which I doubt we want either.\n\nAnyway, I have no objection to dropping the rmtrees, since they're\npretty useless as the code stands. 
Just wanted to mention this\nissue for the archives.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Sep 2022 17:38:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 05:38:47PM -0400, Tom Lane wrote:\n> is something I tried along the way to diagnosing the problem, and\n> it turns out to have exactly zero effect. The $tempdir is some\n> temporary subdirectory of tmp_check that will get nuked at the end\n> of the TAP test no matter what. So these rmtrees are merely making\n> the evidence disappear a bit faster; it will anyway.\n\nFWIW, I just stick a die() in the middle of the code path when I want\nto look at specific results. Similar method, same result.\n\n> # Test background stream process terminating before the basebackup has\n> \n> which is not real clean, since then the files get left behind even\n> on success, which I doubt we want either.\n\nAnother thing that could be done here is to use the same base location\nas the cluster nodes aka $PostgreSQL::Test::Utils::tmp_check. That\nwould mean storing in a repo more data associated to the base backups\nafter a fresh run, though. I am not sure that small machine would\nlike this accumulation in a single run even if disk space is cheap\nthese days.\n\n> Anyway, I have no objection to dropping the rmtrees, since they're\n> pretty useless as the code stands. Just wanted to mention this\n> issue for the archives.\n\nI see more ways to change the existing behavior, so for now I have\nleft that untouched.\n\nAnd so, I have spent a couple of hours torturing the patch, applying\nit after a few tweaks and CI runs:\n- --without-zlib was causing a failure in the pg_basebackup tests as\nwe have a few tests that parse and validate a set of invalid specs for\nthe client-side and server-side compression. With zlib around, the\ntests and their expected results are unchanged, that's just a \nconsequence of moving the assignment of a default level much earlier.\n- pg_basebackup was triggering an assertion when using client-lz4 or\nclient-zstd as we use the directory method of walmethods.c. 
In this\ncase, we just support zlib as compression and enforce no compression\nwhen we are under lz4 or zstd. This came from an over-simplification\nof the code. There is a gap in the testing of pg_basebackup,\nactually, because we have zero tests for LZ4 and zstd there.\n- The documentation of the replication protocol needed some\nadjustments for the default compression levels.\n\nThe buildfarm is green so I think that we are good. I have closed the\nopen item.\n\nYou have mentioned upthread an extra thing about the fact that we pass\ndown 0 to deflateParams(). This is indeed wrong and we are lucky that\nZ_DEFAULT_STRATEGY maps to 0. Better to fix and backpatch this one\ndown to where gzip compression has been added to walmethods.c.. I'll\njust do that in a bit after double-checking the area and the other\nroutines.\n--\nMichael",
"msg_date": "Wed, 14 Sep 2022 13:59:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> And so, I have spent a couple of hours torturing the patch, applying\n> it after a few tweaks and CI runs:\n> ...\n> The buildfarm is green so I think that we are good. I have closed the\n> open item.\n\n+1, thanks for taking care of that.\n\nAs far as my original complaint about mamba goes, I've not quite\nbeen able to run it to ground. However, I found that NetBSD\nseems to be shipping unmodified zlib 1.2.11, which contains a\nnumber of known bugs in deflate_stored() --- that is, the code\npath implementing compression level 0. Red Hat for one is\ncarrying several back-patched fixes in that part of zlib.\nSo for the moment I'm willing to write it off as \"not our bug\".\nWe aren't intentionally testing compression level 0, and hardly\nanybody would intentionally use it in the field, so it's not\nreally a thing worth worrying about IMO. But if mamba continues\nto show failures in that test then it will be worth looking closer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Sep 2022 01:18:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "> On 13 Sep 2022, at 23:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> The $tempdir is some temporary subdirectory of tmp_check that will get nuked at\n> the end of the TAP test no matter what. So these rmtrees are merely making the\n> evidence disappear a bit faster; it will anyway.\n\n\nMaybe the creation of $tempdir should take PG_TEST_NOCLEAN into account and not\nregister CLEANUP if set?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 14 Sep 2022 10:26:42 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 10:26:42AM +0200, Daniel Gustafsson wrote:\n> Maybe the creation of $tempdir should take PG_TEST_NOCLEAN into account and not\n> register CLEANUP if set?\n\nAgreed. It sounds like a good idea to me to extend that to temporary\npaths, and then check those rmtree() calls where the tests would not\nretain too much data for small-ish machines.\n\nBy the way, should we document PG_TEST_TIMEOUT_DEFAULT and\nPG_TEST_NOCLEAN not only in src/test/perl/README but also doc/?. We\nprovide something in the docs about PROVE_FLAGS and PROVE_TESTS, for\nexample. \n--\nMichael",
"msg_date": "Fri, 16 Sep 2022 11:22:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 04:13:20PM +0900, Michael Paquier wrote:\n> diff --git a/src/common/compression.c b/src/common/compression.c\n> index da3c291c0f..ac26287d54 100644\n> --- a/src/common/compression.c\n> +++ b/src/common/compression.c\n> @@ -249,36 +299,49 @@ expect_integer_value(char *keyword, char *value, pg_compress_specification *resu\n> char *\n> validate_compress_specification(pg_compress_specification *spec)\n> {\n> +\tint\t\t\tmin_level = 1;\n> +\tint\t\t\tmax_level = 1;\n> +\tint\t\t\tdefault_level = 0;\n> +\n> \t/* If it didn't even parse OK, it's definitely no good. */\n> \tif (spec->parse_error != NULL)\n> \t\treturn spec->parse_error;\n> \n> \t/*\n> -\t * If a compression level was specified, check that the algorithm expects\n> -\t * a compression level and that the level is within the legal range for\n> -\t * the algorithm.\n> +\t * Check that the algorithm expects a compression level and it is\n> +\t * is within the legal range for the algorithm.\n> \t */\n> -\tif ((spec->options & PG_COMPRESSION_OPTION_LEVEL) != 0)\n> +\tswitch (spec->algorithm)\n> \t{\n> -\t\tint\t\t\tmin_level = 1;\n> -\t\tint\t\t\tmax_level;\n> -\n> -\t\tif (spec->algorithm == PG_COMPRESSION_GZIP)\n> +\t\tcase PG_COMPRESSION_GZIP:\n> \t\t\tmax_level = 9;\n> -\t\telse if (spec->algorithm == PG_COMPRESSION_LZ4)\n> +#ifdef HAVE_LIBZ\n> +\t\t\tdefault_level = Z_DEFAULT_COMPRESSION;\n> +#endif\n> +\t\t\tbreak;\n> +\t\tcase PG_COMPRESSION_LZ4:\n> \t\t\tmax_level = 12;\n> -\t\telse if (spec->algorithm == PG_COMPRESSION_ZSTD)\n> +\t\t\tdefault_level = 0;\t/* fast mode */\n> +\t\t\tbreak;\n> +\t\tcase PG_COMPRESSION_ZSTD:\n> \t\t\tmax_level = 22;\n\nI should've suggested to add:\n\n> \t\t\tmin_level = -7;\n\nwhich has been supported since zstd 1.3.4 (and postgres requires 1.4.0).\n\nI think at some point (maybe before releasing 1.3.4) the range was\nincreased to very large(small), negative levels. 
It's possible to query\nthe library about the lowest supported compression level, but then\nthere's a complication regarding the client-side library version vs the\nserver-side version. So it seems better to just use -7.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 21 Sep 2022 19:31:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 07:31:48PM -0500, Justin Pryzby wrote:\n> I think at some point (maybe before releasing 1.3.4) the range was\n> increased to very large(small), negative levels. It's possible to query\n> the library about the lowest supported compression level, but then\n> there's a complication regarding the client-side library version vs the\n> server-side version. So it seems better to just use -7.\n\nIndeed. Contrary to the default level, there are no variables for the\nminimum and maximum levels. As you are pointing out, a lookup at \nzstd_compress.c shows that we have ZSTD_minCLevel() and\nZSTD_maxCLevel() that assign the bounds. Both are available since\n1.4.0. We still need a backend-side check as the level passed with a\nBASE_BACKUP command would be only validated there. It seems to me \nthat this is going to be less of a headache in the long-term if we\njust use those routines at runtime, as zstd wants to keep some freedom\nwith the min and max bounds for the compression level, at least that's\nthe flexibility that this gives the library. So I would tweak things\nas the attached.\n--\nMichael",
"msg_date": "Thu, 22 Sep 2022 10:25:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 10:25:11AM +0900, Michael Paquier wrote:\n> On Wed, Sep 21, 2022 at 07:31:48PM -0500, Justin Pryzby wrote:\n> > I think at some point (maybe before releasing 1.3.4) the range was\n> > increased to very large(small), negative levels. It's possible to query\n> > the library about the lowest supported compression level, but then\n> > there's a complication regarding the client-side library version vs the\n> > server-side version. So it seems better to just use -7.\n> \n> Indeed. Contrary to the default level, there are no variables for the\n> minimum and maximum levels. As you are pointing out, a lookup at \n> zstd_compress.c shows that we have ZSTD_minCLevel() and\n> ZSTD_maxCLevel() that assign the bounds. Both are available since\n> 1.4.0. We still need a backend-side check as the level passed with a\n> BASE_BACKUP command would be only validated there. It seems to me \n> that this is going to be less of a headache in the long-term if we\n> just use those routines at runtime, as zstd wants to keep some freedom\n> with the min and max bounds for the compression level, at least that's\n> the flexibility that this gives the library. So I would tweak things\n> as the attached.\n\nOkay. Will that complicate tests at all? It looks like it's not an\nissue for the tests currently proposed in the CF APP.\nhttps://commitfest.postgresql.org/39/3835/\n\nHowever the patch ends up, +0.75 to backpatch it to v15 rather than\ncalling it a new feature in v16.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 21 Sep 2022 22:37:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> However the patch ends up, +0.75 to backpatch it to v15 rather than\n> calling it a new feature in v16.\n\nI don't have any opinion on the concrete merits of this change,\nbut I want to note that 15rc1 wraps on Monday, and we don't like\npeople pushing noncritical changes shortly before a wrap. There\nis not a lot of time for fooling around here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Sep 2022 23:43:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 11:43:56PM -0400, Tom Lane wrote:\n> I don't have any opinion on the concrete merits of this change,\n> but I want to note that 15rc1 wraps on Monday, and we don't like\n> people pushing noncritical changes shortly before a wrap. There\n> is not a lot of time for fooling around here.\n\nIf I were to do it in the next couple of hours, we'd still have quite\na couple of days of coverage, which is plenty as far as I understand?\n\nSaying that, it is not a critical change. Just to give some numbers,\nfor a fresh initdb's instance base.tar.zst is at:\n- 3.6MB at level 0.\n- 3.8MB at level 1.\n- 3.6MB at level 2.\n- 4.3MB at level -1.\n- 4.6MB at level -2.\n- 6.1MB at level -7.\n\nI am not sure if there would be a huge demand for this much control\nover the current [1,22], but the library wants to control dynamically\nthe bounds and has the APIs to allow that.\n--\nMichael",
"msg_date": "Thu, 22 Sep 2022 13:34:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Sep 21, 2022 at 11:43:56PM -0400, Tom Lane wrote:\n>> I don't have any opinion on the concrete merits of this change,\n>> but I want to note that 15rc1 wraps on Monday, and we don't like\n>> people pushing noncritical changes shortly before a wrap. There\n>> is not a lot of time for fooling around here.\n\n> If I were to do it in the next couple of hours, we'd still have quite\n> a couple of days of coverage, which is plenty as far as I understand?\n\nSure. I'd say we have 48 hours to choose whether to put this in v15.\nBut not more than that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Sep 2022 00:47:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 12:47:34AM -0400, Tom Lane wrote:\n> Sure. I'd say we have 48 hours to choose whether to put this in v15.\n> But not more than that.\n\nI have a window to be able to look at the buildfarm today, tomorrow\nbeing harder, so I have adjusted that now on both HEAD and\nREL_15_STABLE for consistency.\n--\nMichael",
"msg_date": "Thu, 22 Sep 2022 20:21:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "> On 16 Sep 2022, at 04:22, Michael Paquier <michael@paquier.xyz> wrote:\n\n> By the way, should we document PG_TEST_TIMEOUT_DEFAULT and\n> PG_TEST_NOCLEAN not only in src/test/perl/README but also doc/?. We\n> provide something in the docs about PROVE_FLAGS and PROVE_TESTS, for\n> example. \n\nI think that's a good idea, not everyone running tests will read the internals\ndocumentation (or even know about it). How about the attached?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Wed, 2 Nov 2022 21:42:12 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Wed, Nov 02, 2022 at 09:42:12PM +0100, Daniel Gustafsson wrote:\n> I think that's a good idea, not everyone running tests will read the internals\n> documentation (or even know abou it even). How about the attached?\n\nThanks for the patch. Perhaps this should be mentioned additionally\nin install-windows.sgml? I have not tested, but as long as these\nvariables are configured with a \"set\" command in a command prompt,\nthey would be passed down to the processes triggered by vcregress.pl\n(see for example TESTLOGDIR and TESTDATADIR).\n--\nMichael",
"msg_date": "Thu, 3 Nov 2022 20:49:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "> On 3 Nov 2022, at 12:49, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Nov 02, 2022 at 09:42:12PM +0100, Daniel Gustafsson wrote:\n>> I think that's a good idea, not everyone running tests will read the internals\n>> documentation (or even know about it). How about the attached?\n> \n> Thanks for the patch. Perhaps this should be mentioned additionally\n> in install-windows.sgml? I have not tested, but as long as these\n> variables are configured with a \"set\" command in a command prompt,\n> they would be passed down to the processes triggered by vcregress.pl\n> (see for example TESTLOGDIR and TESTDATADIR).\n\nThat's probably a good idea, I've amended the patch with that and also made the\nCPAN mention of IPC::Run into a ulink like how it is in the Windows section in\npassing. To avoid duplicating the info in the docs I made it into a sect2\nwhich can be linked to. How about this version?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Mon, 14 Nov 2022 13:36:56 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> How about this version?\n\nThis isn't correct shell syntax is it?\n\n+PG_TEST_NOCLEAN make -C src/bin/pg_dump check\n\nI think you meant\n\n+PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check\n\nor the like.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 14 Nov 2022 09:23:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "> On 14 Nov 2022, at 15:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> How about this version?\n> \n> This isn't correct shell syntax is it?\n> \n> +PG_TEST_NOCLEAN make -C src/bin/pg_dump check\n> \n> I think you meant\n> \n> +PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check\n> \n> or the like.\n\nUgh, yes, that's what it should say.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 14 Nov 2022 15:27:14 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 03:27:14PM +0100, Daniel Gustafsson wrote:\n> Ugh, yes, that's what it should say.\n\nA split sounds fine by me. On top of what Tom has mentioned, I have\nspotted two small-ish things.\n\n- This module is available from CPAN or an operating system package.\n+ This module is available from\n+ <ulink url=\"https://metacpan.org/release/IPC-Run\">CPAN</ulink>\n+ or an operating system package.\n\nIt looks like there is a second one in install-windows.sgml.\n\n+ Many operations in the test suites use a 180 second timeout, which on slow\nNit: s/180 second/180-second/?\n--\nMichael",
"msg_date": "Tue, 15 Nov 2022 08:58:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "> On 15 Nov 2022, at 00:58, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Nov 14, 2022 at 03:27:14PM +0100, Daniel Gustafsson wrote:\n>> Ugh, yes, that's what it should say.\n> \n> A split sounds fine by me. On top of what Tom has mentioned, I have\n> spotted two small-ish things.\n> \n> - This module is available from CPAN or an operating system package.\n> + This module is available from\n> + <ulink url=\"https://metacpan.org/release/IPC-Run\">CPAN</ulink>\n> + or an operating system package.\n> \n> It looks like there is a second one in install-windows.sgml.\n\nNot sure I follow. IPC::Run is already linked to with a ulink from that page\n(albeit with an empty tag rendering the URL instead).\n\nA related nitpick I found though is that metacpan has changed their URL\nstructure and these links now 301 redirect. The attached 0001 fixes that first\nbefore applying the other part.\n\n> + Many operations in the test suites use a 180 second timeout, which on slow\n> Nit: s/180 second/180-second/?\n\nOk.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 15 Nov 2022 11:09:54 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 11:09:54AM +0100, Daniel Gustafsson wrote:\n>> On 15 Nov 2022, at 00:58, Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>> On Mon, Nov 14, 2022 at 03:27:14PM +0100, Daniel Gustafsson wrote:\n>>> Ugh, yes, that's what it should say.\n>> \n>> A split sounds fine by me. On top of what Tom has mentioned, I have\n>> spotted two small-ish things.\n>> \n>> - This module is available from CPAN or an operating system package.\n>> + This module is available from\n>> + <ulink url=\"https://metacpan.org/release/IPC-Run\">CPAN</ulink>\n>> + or an operating system package.\n>> \n>> It looks like there is a second one in install-windows.sgml.\n> \n> Not sure I follow. IPC::Run is already linked to with a ulink from that page\n> (albeit with an empty tag rendering the URL instead).\n\nAh, I did not notice that there was already a link to that with\nIPC::Run. Anyway, shouldn't CPAN be marked at least as an <acronym>\nif we are not going to use a link on it? acronyms.sgml lists it, just\nsaying.\n\n> A related nitpick I found though is that metacpan has changed their URL\n> structure and these links now 301 redirect. The attached 0001 fixes that first\n> before applying the other part.\n\nWFM.\n--\nMichael",
"msg_date": "Wed, 16 Nov 2022 10:02:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
},
{
"msg_contents": "> On 16 Nov 2022, at 02:02, Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Nov 15, 2022 at 11:09:54AM +0100, Daniel Gustafsson wrote:\n>>> On 15 Nov 2022, at 00:58, Michael Paquier <michael@paquier.xyz> wrote:\n\n>>> It looks like there is a second one in install-windows.sgml.\n>> \n>> Not sure I follow. IPC::Run is already linked to with a ulink from that page\n>> (albeit with an empty tag rendering the URL instead).\n> \n> Ah, I did not notice that there was already a link to that with\n> IPC::Run. Anyway, shouldn't CPAN be marked at least as an <acronym>\n> if we are not going to use a link on it? acronyms.sgml lists it, just\n> saying.\n\nFair enough. I fixed that and applied this to HEAD.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 16 Nov 2022 10:34:58 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup's --gzip switch misbehaves"
}
] |
[
{
"msg_contents": "Hi,\nIn CheckLDAPAuth(), around line 2606:\n\n    if (r != LDAP_SUCCESS)\n    {\n        ereport(LOG,\n                (errmsg(\"could not search LDAP for filter \\\"%s\\\" on\nserver \\\"%s\\\": %s\",\n\nIt seems that the call to ldap_msgfree() is missing in the above case.\nAccording to\nhttps://www.openldap.org/software//man.cgi?query=ldap_search_s&sektion=3&apropos=0&manpath=OpenLDAP+2.4-Release\n:\n\n   Note that res parameter of ldap_search_ext_s()\nand ldap_search_s()\n   should be freed with ldap_msgfree() regardless of return\nvalue of these\n   functions.\n\nPlease see the attached patch which frees the search_message in the above case.\n\n\nThanks",
"msg_date": "Sat, 3 Sep 2022 17:00:30 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "freeing LDAPMessage in CheckLDAPAuth"
},
{
"msg_contents": "On Sat, Sep 03, 2022 at 05:00:30PM -0700, Zhihong Yu wrote:\n> Note that res parameter of ldap_search_ext_s()\n> and ldap_search_s()\n> should be freed with ldap_msgfree() regardless of return\n> value of these\n> functions.\n> \n> Please see the attached patch which frees the search_message in the above case.\n\nYep, nice catch, I am reading the same thing as you do. I can see\nthat we already do that after a failing ldap_search_st() call in\nfe-connect.c for libpq. Hence, similarly, we'd better call\nldap_msgfree() on search_message when it is not NULL after a search\nfailure, no? The patch you are proposing does not do that.\n--\nMichael",
"msg_date": "Sun, 4 Sep 2022 14:40:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: freeing LDAPMessage in CheckLDAPAuth"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sat, Sep 03, 2022 at 05:00:30PM -0700, Zhihong Yu wrote:\n>> Please see the attached patch which frees the search_message in the above case.\n\n> Yep, nice catch, I am reading the same thing as you do. I can see\n> that we already do that after a failing ldap_search_st() call in\n> fe-connect.c for libpq. Hence, similarly, we'd better call\n> ldap_msgfree() on search_message when it is not NULL after a search\n> failure, no? The patch you are proposing does not do that.\n\nI can't get too excited about this. All of the error exit paths in\nbackend authentication code will lead immediately to process exit, so\nthe possibility of some memory being leaked really has no consequences\nworth worrying about. If we *were* worried about it, sprinkling a few\nmore ldap_msgfree() calls into the existing code would hardly make it\nmore bulletproof. There's lots of psprintf() and other\nPostgres-universe calls in that code that could potentially fail and\nforce an elog exit without reaching ldap_msgfree. So if you wanted to\nmake this completely clean you'd need to resort to doing the freeing\nin PG_CATCH blocks ... and I don't see any value in hacking it to that\nextent.\n\nWhat might be worth inspecting is the code paths in frontend libpq\nthat call ldap_msgfree(), because on the client side we don't get to\nassume that an error will lead to immediate process exit. If we've\nmissed any cleanups over there, that *would* be worth fixing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Sep 2022 01:52:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: freeing LDAPMessage in CheckLDAPAuth"
},
{
"msg_contents": "On Sun, Sep 04, 2022 at 01:52:10AM -0400, Tom Lane wrote:\n> I can't get too excited about this. All of the error exit paths in\n> backend authentication code will lead immediately to process exit, so\n> the possibility of some memory being leaked really has no consequences\n> worth worrying about. If we *were* worried about it, sprinkling a few\n> more ldap_msgfree() calls into the existing code would hardly make it\n> more bulletproof.\n\nEven if this is not critical in the backend for this authentication\npath, I'd like to think that it is still a good practice for future\ncode so as anything code-pasted around would get the call. So I see\nno reason to not put smth on HEAD at least.\n\n> There's lots of psprintf() and other\n> Postgres-universe calls in that code that could potentially fail and\n> force an elog exit without reaching ldap_msgfree. So if you wanted to\n> make this completely clean you'd need to resort to doing the freeing\n> in PG_CATCH blocks ... and I don't see any value in hacking it to that\n> extent.\n\nAgreed. I cannot get excited about going down to that in this case.\n\n> What might be worth inspecting is the code paths in frontend libpq\n> that call ldap_msgfree(), because on the client side we don't get to\n> assume that an error will lead to immediate process exit. If we've\n> missed any cleanups over there, that *would* be worth fixing.\n\nFWIW, I have looked at the frontend while writing my previous message\nand did not notice anything.\n--\nMichael",
"msg_date": "Sun, 4 Sep 2022 16:25:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: freeing LDAPMessage in CheckLDAPAuth"
},
{
"msg_contents": "On Sun, Sep 4, 2022 at 12:25 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Sep 04, 2022 at 01:52:10AM -0400, Tom Lane wrote:\n> > I can't get too excited about this. All of the error exit paths in\n> > backend authentication code will lead immediately to process exit, so\n> > the possibility of some memory being leaked really has no consequences\n> > worth worrying about. If we *were* worried about it, sprinkling a few\n> > more ldap_msgfree() calls into the existing code would hardly make it\n> > more bulletproof.\n>\n> Even if this is not critical in the backend for this authentication\n> path, I'd like to think that it is still a good practice for future\n> code so as anything code-pasted around would get the call. So I see\n> no reason to not put smth on HEAD at least.\n>\nHi,\nHere is updated patch as you suggested in your previous email.\n\nThanks",
"msg_date": "Sun, 4 Sep 2022 03:58:06 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: freeing LDAPMessage in CheckLDAPAuth"
},
{
"msg_contents": "On Sun, Sep 4, 2022 at 3:58 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Sun, Sep 4, 2022 at 12:25 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>> On Sun, Sep 04, 2022 at 01:52:10AM -0400, Tom Lane wrote:\n>> > I can't get too excited about this. All of the error exit paths in\n>> > backend authentication code will lead immediately to process exit, so\n>> > the possibility of some memory being leaked really has no consequences\n>> > worth worrying about. If we *were* worried about it, sprinkling a few\n>> > more ldap_msgfree() calls into the existing code would hardly make it\n>> > more bulletproof.\n>>\n>> Even if this is not critical in the backend for this authentication\n>> path, I'd like to think that it is still a good practice for future\n>> code so as anything code-pasted around would get the call. So I see\n>> no reason to not put smth on HEAD at least.\n>>\n> Hi,\n> Here is updated patch as you suggested in your previous email.\n>\n> Thanks\n>\nHi,\nPlease take a look at patch v3.\n\nThanks",
"msg_date": "Sun, 4 Sep 2022 06:52:37 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: freeing LDAPMessage in CheckLDAPAuth"
},
{
"msg_contents": "On Sun, Sep 04, 2022 at 06:52:37AM -0700, Zhihong Yu wrote:\n> Please take a look at patch v3.\n\nFine as far as it goes. I would have put the initialization of\nsearch_message closer to ldap_search_s() for consistency with libpq.\nThat's what we do with ldap_search_st().\n--\nMichael",
"msg_date": "Mon, 5 Sep 2022 14:37:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: freeing LDAPMessage in CheckLDAPAuth"
},
{
"msg_contents": "On Sun, Sep 4, 2022 at 10:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Sep 04, 2022 at 06:52:37AM -0700, Zhihong Yu wrote:\n> > Please take a look at patch v3.\n>\n> Fine as far as it goes. I would have put the initialization of\n> search_message closer to ldap_search_s() for consistency with libpq.\n> That's what we do with ldap_search_st().\n> --\n> Michael\n>\nHi,\nHere is patch v4.",
"msg_date": "Mon, 5 Sep 2022 02:50:09 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: freeing LDAPMessage in CheckLDAPAuth"
},
{
"msg_contents": "On Mon, Sep 05, 2022 at 02:50:09AM -0700, Zhihong Yu wrote:\n> Here is patch v4.\n\nFWIW, I am fine with what you are basically doing with v4, so I'd like\nto apply that on HEAD on the basis of consistency with libpq. As Tom\nsaid, this authentication path will fail, but I'd like to think that\nthis is a good practice anyway. I'll wait a few days first, in case\nothers have comments.\n--\nMichael",
"msg_date": "Wed, 7 Sep 2022 08:24:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: freeing LDAPMessage in CheckLDAPAuth"
}
] |
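The pattern fixed in the thread above — an out-parameter that may point at an allocated result even when the call reports failure — can be modeled in isolation. The following is a sketch with stand-in names (`do_search`, `search_checked`, `struct result` are illustrative, not the libldap API); in the real code the corresponding calls are `ldap_search_s()` and `ldap_msgfree()`:

```c
#include <stdlib.h>

/* Stand-in for an API like ldap_search_s(): it may allocate a (partial)
 * result into *out even when it returns an error code. */
struct result { int nentries; };

static int
do_search(int simulate_failure, struct result **out)
{
    *out = malloc(sizeof(struct result));
    if (*out == NULL)
        return -1;
    (*out)->nentries = 0;
    return simulate_failure ? -1 : 0;
}

/* The corrected calling pattern: free the result on the error path too,
 * mirroring the ldap_msgfree() added after a failing search. */
static int
search_checked(int simulate_failure)
{
    struct result *res = NULL;
    int rc = do_search(simulate_failure, &res);

    if (rc != 0)
    {
        if (res != NULL)
            free(res);          /* without this, the error path leaks */
        return -1;
    }

    /* ... inspect res->nentries ... */
    free(res);
    return 0;
}
```

As Tom notes, in the backend the leak is harmless because the process exits; the pattern matters on the client side, where the caller keeps running after a failed search.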
[
{
"msg_contents": "Hi, The ECPG preprocessor converts the code\"static VARCHAR str1[10], str2[20], str3[30];\"into\"static struct varchar_1 { int len; char arr[ 10 ]; } str1 ; struct varchar_2 { int len; char arr[ 20 ]; } str2 ; struct varchar_3 { int len; char arr[ 30 ]; } str3 ;\".Storage declaration applies only to the first structure. The patch in the attachment fixes the bug. Storage declaration will be repeated before each structure.The patch is on github too: https://github.com/andr-sokolov/postgresql/commit/c8f8fc7a211938569e7d46c91a428d8cb25b6f9c --Andrey SokolovArenadata https://arenadata.tech/",
"msg_date": "Sun, 04 Sep 2022 13:49:53 +0300",
"msg_from": "Andrey Sokolov <a.sokolov@arenadata.io>",
"msg_from_op": true,
"msg_subject": "[BUG] Storage declaration in ECPG "
},
{
"msg_contents": "At Sun, 04 Sep 2022 13:49:53 +0300, Andrey Sokolov <a.sokolov@arenadata.io> wrote in \n> Hi,\n> \n> The ECPG preprocessor converts the code\n> \"static VARCHAR str1[10], str2[20], str3[30];\"\n> into\n> \"static struct varchar_1 { int len; char arr[ 10 ]; } str1 ;\n> struct varchar_2 { int len; char arr[ 20 ]; } str2 ;\n> struct varchar_3 { int len; char arr[ 30 ]; } str3 ;\".\n> Storage declaration applies only to the first structure.\n\nGood catch!\n\n> The patch in the attachment fixes the bug. Storage declaration will be\n> repeated before each structure.\n> The patch is on github too:\n> https://github.com/andr-sokolov/postgresql/commit/c8f8fc7a211938569e7d46c91a428d8cb25b6f9c\n\nAnd the code looks good to me.\n\nAbout the test, don't we need the test for non-varchar/bytea static\nvariables like \"static int inta, intb, intc;\"?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 05 Sep 2022 17:12:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Storage declaration in ECPG "
},
{
"msg_contents": "05.09.2022, 11:12, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com>: About the test, don't we need the test for non-varchar/bytea staticvariables like \"static int inta, intb, intc;\"?Good idea, thanks. I have added tests for static int and bytea. The new patch is in the attachment and here https://github.com/andr-sokolov/postgresql/commit/5a4adc1b5a2a0adfc152debcaf825e7a95a47450",
"msg_date": "Wed, 07 Sep 2022 23:57:47 +0300",
"msg_from": "Andrey Sokolov <a.sokolov@arenadata.io>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Storage declaration in ECPG"
},
{
"msg_contents": "Andrey Sokolov <a.sokolov@arenadata.io> writes:\n> [ v2-0001-Fix-storage-declaration-in-ECPG.patch ]\n\nPushed. I didn't think a whole new test case was appropriate,\neither from the patch-footprint or test-runtime standpoint,\nso I just added a couple of declarations to preproc/variable.pgc.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 15:36:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Storage declaration in ECPG"
}
] |
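The post-fix expansion discussed in this thread — the storage class repeated before each generated struct — compiles to what the original declaration intends. A minimal sketch (the `varchar_N` tag names follow the generated form quoted in the first message; the filler function is illustrative):

```c
#include <string.h>

/* Expansion the fixed preprocessor is described to produce for
 * "static VARCHAR str1[10], str2[20], str3[30];" -- note that "static"
 * now appears before every struct, not only the first one. */
static struct varchar_1 { int len; char arr[10]; } str1;
static struct varchar_2 { int len; char arr[20]; } str2;
static struct varchar_3 { int len; char arr[30]; } str3;

static int
fill(void)
{
    memcpy(str1.arr, "hello", 5);
    str1.len = 5;
    str2.len = 0;
    str3.len = 0;
    return str1.len;
}
```

With the pre-fix output, `str2` and `str3` would have external linkage instead of internal, which is exactly the declaration mismatch the patch repairs.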
[
{
"msg_contents": "Hi,\n\nI've been running some valgrind tests on rpi4/aarch64, and I get a crash\nin test_decoding ddl test in ~50% runs. I don't see the same failure\nwithout valgrind or on 32-bit system (hundreds of runs, no crashes), so\nI suspect this is a race condition, and with valgrind the timing changes\nin a way to make it more likely.\n\nThe crash always happens in the \"ddl\" test. The backtrace always looks\nlike this:\n\n (ExceptionalCondition+0x98)[0x8f6f7c]\n (+0x57a7ec)[0x6827ec]\n (+0x579edc)[0x681edc]\n (ReorderBufferAddNewTupleCids+0x60)[0x686758]\n (SnapBuildProcessNewCid+0x94)[0x68b920]\n (heap2_decode+0x17c)[0x671584]\n (LogicalDecodingProcessRecord+0xbc)[0x670cd0]\n (+0x570f88)[0x678f88]\n (pg_logical_slot_get_changes+0x1c)[0x6790fc]\n (ExecMakeTableFunctionResult+0x29c)[0x4a92c0]\n (+0x3be638)[0x4c6638]\n (+0x3a2c14)[0x4aac14]\n (ExecScan+0x8c)[0x4aaca8]\n (+0x3bea14)[0x4c6a14]\n (+0x39ea60)[0x4a6a60]\n (+0x392378)[0x49a378]\n (+0x39520c)[0x49d20c]\n (standard_ExecutorRun+0x214)[0x49aad8]\n (ExecutorRun+0x64)[0x49a8b8]\n (+0x62f53c)[0x73753c]\n (PortalRun+0x27c)[0x737198]\n (+0x627e78)[0x72fe78]\n (PostgresMain+0x9a0)[0x73512c]\n (+0x547be8)[0x64fbe8]\n (+0x547540)[0x64f540]\n (+0x542d30)[0x64ad30]\n (PostmasterMain+0x1460)[0x64a574]\n (+0x418888)[0x520888]\n\nI'm unable to get a better backtrace from the valgrind-produces core\nusign gdb, for some reason.\n\nHowever, I've modified AssertTXNLsnOrder() - which is where the assert\nis checked - to also dump toplevel_by_lsn instead of just triggering the\nassert, and the result is always like this:\n\n WARNING: ==============================================\n WARNING: txn xid 849 top 0 first 30264752 0/1CDCDB0 final 0 0/0\n WARNING: txn xid 848 top 0 first 30264752 0/1CDCDB0 final 0 0/0\n WARNING: ==============================================\n\nThe LSNs change a bit between the runs, but the failing transactions are\nalways 848 and 849. 
Also, both transactions have exactly the same info.\n\nBut the very first WAL record for 849 is\n\n ASSIGNMENT xtop 848: subxacts: 849\n\nso it's strange 849 is in the toplevel_by_lsn list at all, because it\nclearly is a subxact of 848.\n\nFurthermore, the WAL is almost exactly the same in both cases. Attached\nare two dumps from a failed and successful run (only the part related to\nthese two xids is included). There are very few differences - there is a\nPRUNE in the failed case, and a LOCK / RUNNING_XACTS moved a bit.\n\n\nAny ideas?\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 4 Sep 2022 13:04:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
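The failing assertion checks a simple invariant over the toplevel-by-LSN list: strictly increasing first_lsn. A stand-alone model of that check (illustrative types, not the actual reorderbuffer.c code) shows why two entries sharing a first_lsn — as xids 848 and 849 do in the dump above — trip it:

```c
#include <stdint.h>

typedef uint64_t XLogRecPtr;

struct txn
{
    uint32_t    xid;
    XLogRecPtr  first_lsn;
};

/* Model of the check in AssertTXNLsnOrder(): each toplevel txn must
 * start strictly after the previous one.  Returns 1 if ordering holds. */
static int
lsn_order_ok(const struct txn *txns, int ntxns)
{
    for (int i = 1; i < ntxns; i++)
    {
        /* prev_first_lsn < cur_txn->first_lsn, per the failing assert */
        if (!(txns[i - 1].first_lsn < txns[i].first_lsn))
            return 0;
    }
    return 1;
}
```

Two distinct transactions legitimately starting at different records always satisfy this; equal first_lsn values can only arise when one xact is created twice for the same record, which is the symptom investigated below.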
{
"msg_contents": "On Sun, Sep 4, 2022 at 4:34 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I've been running some valgrind tests on rpi4/aarch64, and I get a crash\n> in test_decoding ddl test in ~50% runs. I don't see the same failure\n> without valgrind or on 32-bit system (hundreds of runs, no crashes), so\n> I suspect this is a race condition, and with valgrind the timing changes\n> in a way to make it more likely.\n>\n> The crash always happens in the \"ddl\" test. The backtrace always looks\n> like this:\n>\n> (ExceptionalCondition+0x98)[0x8f6f7c]\n> (+0x57a7ec)[0x6827ec]\n> (+0x579edc)[0x681edc]\n> (ReorderBufferAddNewTupleCids+0x60)[0x686758]\n> (SnapBuildProcessNewCid+0x94)[0x68b920]\n> (heap2_decode+0x17c)[0x671584]\n> (LogicalDecodingProcessRecord+0xbc)[0x670cd0]\n> (+0x570f88)[0x678f88]\n> (pg_logical_slot_get_changes+0x1c)[0x6790fc]\n> (ExecMakeTableFunctionResult+0x29c)[0x4a92c0]\n> (+0x3be638)[0x4c6638]\n> (+0x3a2c14)[0x4aac14]\n> (ExecScan+0x8c)[0x4aaca8]\n> (+0x3bea14)[0x4c6a14]\n> (+0x39ea60)[0x4a6a60]\n> (+0x392378)[0x49a378]\n> (+0x39520c)[0x49d20c]\n> (standard_ExecutorRun+0x214)[0x49aad8]\n> (ExecutorRun+0x64)[0x49a8b8]\n> (+0x62f53c)[0x73753c]\n> (PortalRun+0x27c)[0x737198]\n> (+0x627e78)[0x72fe78]\n> (PostgresMain+0x9a0)[0x73512c]\n> (+0x547be8)[0x64fbe8]\n> (+0x547540)[0x64f540]\n> (+0x542d30)[0x64ad30]\n> (PostmasterMain+0x1460)[0x64a574]\n> (+0x418888)[0x520888]\n>\n> I'm unable to get a better backtrace from the valgrind-produces core\n> usign gdb, for some reason.\n>\n> However, I've modified AssertTXNLsnOrder() - which is where the assert\n> is checked - to also dump toplevel_by_lsn instead of just triggering the\n> assert, and the result is always like this:\n>\n> WARNING: ==============================================\n> WARNING: txn xid 849 top 0 first 30264752 0/1CDCDB0 final 0 0/0\n> WARNING: txn xid 848 top 0 first 30264752 0/1CDCDB0 final 0 0/0\n> WARNING: ==============================================\n>\n> The 
LSNs change a bit between the runs, but the failing transactions are\n> always 848 and 849. Also, both transactions have exactly the same info.\n>\n> But the very first WAL record for 849 is\n>\n> ASSIGNMENT xtop 848: subxacts: 849\n>\n> so it's strange 849 is in the toplevel_by_lsn list at all, because it\n> clearly is a subxact of 848.\n>\n\nThere is no guarantee that toplevel_by_lsn won't have subxact. As per\nmy understanding, the problem I reported in the email [1] is the same\nand we have seen this in BF failures as well. I posted a way to\nreproduce it in that email. It seems this is possible if the decoding\ngets XLOG_HEAP2_NEW_CID as the first record (belonging to a\nsubtransaction) after XLOG_RUNNING_XACTS.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LK1nxOTL32OP%3DejhPoBsUP4Bvwb3Ly%3DfethyJ-KbaXyw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 4 Sep 2022 17:19:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "\n\nOn 9/4/22 13:49, Amit Kapila wrote:\n> On Sun, Sep 4, 2022 at 4:34 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> I've been running some valgrind tests on rpi4/aarch64, and I get a crash\n>> in test_decoding ddl test in ~50% runs. I don't see the same failure\n>> without valgrind or on 32-bit system (hundreds of runs, no crashes), so\n>> I suspect this is a race condition, and with valgrind the timing changes\n>> in a way to make it more likely.\n>>\n>> The crash always happens in the \"ddl\" test. The backtrace always looks\n>> like this:\n>>\n>> (ExceptionalCondition+0x98)[0x8f6f7c]\n>> (+0x57a7ec)[0x6827ec]\n>> (+0x579edc)[0x681edc]\n>> (ReorderBufferAddNewTupleCids+0x60)[0x686758]\n>> (SnapBuildProcessNewCid+0x94)[0x68b920]\n>> (heap2_decode+0x17c)[0x671584]\n>> (LogicalDecodingProcessRecord+0xbc)[0x670cd0]\n>> (+0x570f88)[0x678f88]\n>> (pg_logical_slot_get_changes+0x1c)[0x6790fc]\n>> (ExecMakeTableFunctionResult+0x29c)[0x4a92c0]\n>> (+0x3be638)[0x4c6638]\n>> (+0x3a2c14)[0x4aac14]\n>> (ExecScan+0x8c)[0x4aaca8]\n>> (+0x3bea14)[0x4c6a14]\n>> (+0x39ea60)[0x4a6a60]\n>> (+0x392378)[0x49a378]\n>> (+0x39520c)[0x49d20c]\n>> (standard_ExecutorRun+0x214)[0x49aad8]\n>> (ExecutorRun+0x64)[0x49a8b8]\n>> (+0x62f53c)[0x73753c]\n>> (PortalRun+0x27c)[0x737198]\n>> (+0x627e78)[0x72fe78]\n>> (PostgresMain+0x9a0)[0x73512c]\n>> (+0x547be8)[0x64fbe8]\n>> (+0x547540)[0x64f540]\n>> (+0x542d30)[0x64ad30]\n>> (PostmasterMain+0x1460)[0x64a574]\n>> (+0x418888)[0x520888]\n>>\n>> I'm unable to get a better backtrace from the valgrind-produces core\n>> usign gdb, for some reason.\n>>\n>> However, I've modified AssertTXNLsnOrder() - which is where the assert\n>> is checked - to also dump toplevel_by_lsn instead of just triggering the\n>> assert, and the result is always like this:\n>>\n>> WARNING: ==============================================\n>> WARNING: txn xid 849 top 0 first 30264752 0/1CDCDB0 final 0 0/0\n>> WARNING: txn xid 848 top 0 first 
30264752 0/1CDCDB0 final 0 0/0\n>> WARNING: ==============================================\n>>\n>> The LSNs change a bit between the runs, but the failing transactions are\n>> always 848 and 849. Also, both transactions have exactly the same info.\n>>\n>> But the very first WAL record for 849 is\n>>\n>> ASSIGNMENT xtop 848: subxacts: 849\n>>\n>> so it's strange 849 is in the toplevel_by_lsn list at all, because it\n>> clearly is a subxact of 848.\n>>\n> \n> There is no guarantee that toplevel_by_lsn won't have subxact.\n\nI don't think that's quite true - toplevel_by_lsn should not contain any\n*known* subxacts. Yes, we may initially add a subxact to the list, but\nas soon as we get assignment record, it should be removed. See what\nReorderBufferAssignChild does.\n\nAnd in this case the ASSIGNMENT is the first WAL record we get for 849\n(in fact, isn't that guaranteed since 7259736a6e?), so we know from the\nvery beginning 849 is a subxact.\n\n\n> As per\n> my understanding, the problem I reported in the email [1] is the same\n> and we have seen this in BF failures as well. I posted a way to\n> reproduce it in that email. It seems this is possible if the decoding\n> gets XLOG_HEAP2_NEW_CID as the first record (belonging to a\n> subtransaction) after XLOG_RUNNING_XACTS.\n> \n\nInteresting. That's certainly true for WAL in the crashing case:\n\nrmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n0/01CDCD70, prev 0/01CDCD10, desc: RUNNING_XACTS nextXid 850\nlatestCompletedXid 847 oldestRunningXid 848; 1 xacts: 848\nrmgr: Heap2 len (rec/tot): 60/ 60, tx: 849, lsn:\n0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel 1663/16384/1249; tid\n58/38; cmin: 1, cmax: 14, combo: 6\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 4 Sep 2022 14:24:15 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On 9/4/22 14:24, Tomas Vondra wrote:\n> \n> \n> On 9/4/22 13:49, Amit Kapila wrote:\n>> On Sun, Sep 4, 2022 at 4:34 PM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> I've been running some valgrind tests on rpi4/aarch64, and I get a crash\n>>> in test_decoding ddl test in ~50% runs. I don't see the same failure\n>>> without valgrind or on 32-bit system (hundreds of runs, no crashes), so\n>>> I suspect this is a race condition, and with valgrind the timing changes\n>>> in a way to make it more likely.\n>>>\n>>> The crash always happens in the \"ddl\" test. The backtrace always looks\n>>> like this:\n>>>\n>>> (ExceptionalCondition+0x98)[0x8f6f7c]\n>>> (+0x57a7ec)[0x6827ec]\n>>> (+0x579edc)[0x681edc]\n>>> (ReorderBufferAddNewTupleCids+0x60)[0x686758]\n>>> (SnapBuildProcessNewCid+0x94)[0x68b920]\n>>> (heap2_decode+0x17c)[0x671584]\n>>> (LogicalDecodingProcessRecord+0xbc)[0x670cd0]\n>>> (+0x570f88)[0x678f88]\n>>> (pg_logical_slot_get_changes+0x1c)[0x6790fc]\n>>> (ExecMakeTableFunctionResult+0x29c)[0x4a92c0]\n>>> (+0x3be638)[0x4c6638]\n>>> (+0x3a2c14)[0x4aac14]\n>>> (ExecScan+0x8c)[0x4aaca8]\n>>> (+0x3bea14)[0x4c6a14]\n>>> (+0x39ea60)[0x4a6a60]\n>>> (+0x392378)[0x49a378]\n>>> (+0x39520c)[0x49d20c]\n>>> (standard_ExecutorRun+0x214)[0x49aad8]\n>>> (ExecutorRun+0x64)[0x49a8b8]\n>>> (+0x62f53c)[0x73753c]\n>>> (PortalRun+0x27c)[0x737198]\n>>> (+0x627e78)[0x72fe78]\n>>> (PostgresMain+0x9a0)[0x73512c]\n>>> (+0x547be8)[0x64fbe8]\n>>> (+0x547540)[0x64f540]\n>>> (+0x542d30)[0x64ad30]\n>>> (PostmasterMain+0x1460)[0x64a574]\n>>> (+0x418888)[0x520888]\n>>>\n>>> I'm unable to get a better backtrace from the valgrind-produces core\n>>> usign gdb, for some reason.\n>>>\n>>> However, I've modified AssertTXNLsnOrder() - which is where the assert\n>>> is checked - to also dump toplevel_by_lsn instead of just triggering the\n>>> assert, and the result is always like this:\n>>>\n>>> WARNING: ==============================================\n>>> WARNING: txn 
xid 849 top 0 first 30264752 0/1CDCDB0 final 0 0/0\n>>> WARNING: txn xid 848 top 0 first 30264752 0/1CDCDB0 final 0 0/0\n>>> WARNING: ==============================================\n>>>\n>>> The LSNs change a bit between the runs, but the failing transactions are\n>>> always 848 and 849. Also, both transactions have exactly the same info.\n>>>\n>>> But the very first WAL record for 849 is\n>>>\n>>> ASSIGNMENT xtop 848: subxacts: 849\n>>>\n>>> so it's strange 849 is in the toplevel_by_lsn list at all, because it\n>>> clearly is a subxact of 848.\n>>>\n>>\n>> There is no guarantee that toplevel_by_lsn won't have subxact.\n> \n> I don't think that's quite true - toplevel_by_lsn should not contain any\n> *known* subxacts. Yes, we may initially add a subxact to the list, but\n> as soon as we get assignment record, it should be removed. See what\n> ReorderBufferAssignChild does.\n> \n> And in this case the ASSIGNMENT is the first WAL record we get for 849\n> (in fact, isn't that guaranteed since 7259736a6e?), so we know from the\n> very beginning 849 is a subxact.\n> \n> \n>> As per\n>> my understanding, the problem I reported in the email [1] is the same\n>> and we have seen this in BF failures as well. I posted a way to\n>> reproduce it in that email. It seems this is possible if the decoding\n>> gets XLOG_HEAP2_NEW_CID as the first record (belonging to a\n>> subtransaction) after XLOG_RUNNING_XACTS.\n>>\n> \n> Interesting. 
That's certainly true for WAL in the crashing case:\n> \n> rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n> 0/01CDCD70, prev 0/01CDCD10, desc: RUNNING_XACTS nextXid 850\n> latestCompletedXid 847 oldestRunningXid 848; 1 xacts: 848\n> rmgr: Heap2 len (rec/tot): 60/ 60, tx: 849, lsn:\n> 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel 1663/16384/1249; tid\n> 58/38; cmin: 1, cmax: 14, combo: 6\n> \n\nI investigated using the pgdata from the crashed run (can provide, if\nyou have rpi4 or some other aarch64 machine), and the reason is pretty\nsimple - the restart_lsn for the slot is 0/1CDCD70, which is looong\nafter the subxact assignment, so we add both xids as toplevel.\n\nThat seems broken - if we skip the assignment like this, doesn't that\nbreak spill-to-disk and/or streaming? IIRC that's exactly why we had to\nstart logging assignments immediately with wal_level=logical.\n\nOr maybe we're not dealing with the restart_lsn properly, and we should\nhave ignored those records. Both xacts started long before the restart\nLSN, so we're not seeing the whole xact anyway.\n\nHowever, when processing the NEW_CID record:\n\ntx: 849, lsn: 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel\n1663/16384/1249; tid 58/38; cmin: 1, cmax: 14, combo: 6\n\nwe ultimately do this in SnapBuildProcessNewCid:\n\n#1 0x0000005566cccdb4 in ReorderBufferAddNewTupleCids (rb=0x559dd64218,\nxid=848, lsn=30264752, locator=..., tid=..., cmin=1, cmax=14,\ncombocid=6) at reorderbuffer.c:3218\n#2 0x0000005566cd1f7c in SnapBuildProcessNewCid (builder=0x559dd6a248,\nxid=849, lsn=30264752, xlrec=0x559dd6e1e0) at snapbuild.c:818\n\nso in fact we *know* 849 is a subxact of 848, but we don't call\nReorderBufferAssignChild in this case. 
In fact we can't even do the\nassignment easily in this case, because we create the subxact first, so\nthat the crash happens right when we attempt to create the toplevel one,\nand we never even get a chance to do the assignment:\n\n1) process the NEW_CID record, logged for 849 (subxact)\n2) process CIDs in the WAL record, which has topleve_xid 848\n\n\nSo IMHO we need to figure out what to do for WAL records that create\nboth the toplevel and subxact - either we need to skip them, or rethink\nhow we create the ReorderBufferTXN structs.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 4 Sep 2022 16:08:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On 9/4/22 16:08, Tomas Vondra wrote:\n> ...\n> \n> so in fact we *know* 849 is a subxact of 848, but we don't call\n> ReorderBufferAssignChild in this case. In fact we can't even do the\n> assignment easily in this case, because we create the subxact first, so\n> that the crash happens right when we attempt to create the toplevel one,\n> and we never even get a chance to do the assignment:\n> \n> 1) process the NEW_CID record, logged for 849 (subxact)\n> 2) process CIDs in the WAL record, which has topleve_xid 848\n> \n> \n> So IMHO we need to figure out what to do for WAL records that create\n> both the toplevel and subxact - either we need to skip them, or rethink\n> how we create the ReorderBufferTXN structs.\n> \n\nThis fixes the crash for me, by adding a ReorderBufferAssignChild call\nto SnapBuildProcessNewCid, and tweaking ReorderBufferAssignChild to\nensure we don't try to create the top xact before updating the subxact\nand removing it from the toplevel_by_lsn list.\n\nEssentially, what's happening is this:\n\n1) We read the NEW_CID record, which is logged with XID 849, i.e. the\nsubxact. But we don't know it's a subxact, so we create it as a\ntop-level xact with the LSN.\n\n2) We start processing contents of the NEW_CID, which however has info\nthat 849 is subxact of 848, calls ReorderBufferAddNewTupleCids which\npromptly does ReorderBufferTXNByXid() with the top-level XID, which\ncreates it with the same LSN, and crashes because of the assert.\n\nI'm not sure what's the right/proper way to fix this ...\n\nThe problem is ReorderBufferAssignChild was coded in a way that did not\nexpect the subxact to be created first (as a top-level xact). And\nindeed, if I add Assert(false) to the (!new_sub) branch that converts\ntop-level xact to subxact, check-world still passes. So we never test\nthis case, but the NEW_CID breaks this assumption and creates them in\nthe opposite order (i.e. 
subxact first).\n\nSo the patch \"fixes\" this by\n\n(a) tweaking ReorderBufferAssignChild to first remove the subxact from\nthe list of top-level transactions\n\n(b) call ReorderBufferAssignChild when processing NEW_CID\n\nHowever, I wonder whether we even have to process these records? If the\nrestart_lsn is half-way through the xact, so can we even decode it?\nMaybe we can just skip all of this, somehow? We'd still need to remember\n849 is a subxact of 848, at least, so that we know to skip it too.\n\n\nThread [1] suggested to relax the assert to allow the same LSN, provided\nit's xact and it's subxact. That goes directly against the expectation\nthe toplevel_by_lsn list contains no known subxacts, and I don't think\nwe should be relaxing that. After all, just tweaking the LSN does not\nreally fix the issue, because not remembering it's xact+subxact is part\nof the issue. In principle, I think the issue is exactly the opposite,\ni.e. that we don't realize 849 is a subxact, and leave it in the list.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 4 Sep 2022 19:40:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
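The ordering constraint behind the fix sketched above can be modeled with a toy list: if a subxact was first entered as a toplevel transaction, it must be unlinked from the toplevel list before the real toplevel xact is created at the same LSN, or the strict-ordering invariant breaks. All names and structures here are illustrative, not the reorderbuffer.c implementation:

```c
#include <stdint.h>

#define MAX_TOPLEVEL 8

struct toy_txn { uint32_t xid; uint64_t first_lsn; };

static struct toy_txn toplevel[MAX_TOPLEVEL];
static int ntoplevel = 0;

static void
add_toplevel(uint32_t xid, uint64_t lsn)
{
    toplevel[ntoplevel].xid = xid;
    toplevel[ntoplevel].first_lsn = lsn;
    ntoplevel++;
}

/* Unlink a txn from the toplevel list once it is known to be a subxact,
 * mirroring what the assignment step must do *before* the toplevel xact
 * gets created at the same LSN. */
static void
remove_toplevel(uint32_t xid)
{
    for (int i = 0; i < ntoplevel; i++)
    {
        if (toplevel[i].xid == xid)
        {
            toplevel[i] = toplevel[--ntoplevel];
            return;
        }
    }
}

/* Sequence from the failing run: subxact 849 is seen first and entered
 * as toplevel; the NEW_CID contents then reveal toplevel xid 848 at the
 * same LSN.  Unlinking the subxact first keeps the list consistent. */
static int
assign_then_create(void)
{
    add_toplevel(849, 30264752);   /* subxact mistaken for toplevel */
    remove_toplevel(849);          /* assignment: now a known subxact */
    add_toplevel(848, 30264752);   /* real toplevel at the same LSN */
    return ntoplevel;
}
```

Performing the creation of 848 before the unlink — the order the crash exposes — would leave both entries in the list at the same LSN.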
{
"msg_contents": "On Sun, Sep 4, 2022 at 7:38 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 9/4/22 14:24, Tomas Vondra wrote:\n> >\n> >> As per\n> >> my understanding, the problem I reported in the email [1] is the same\n> >> and we have seen this in BF failures as well. I posted a way to\n> >> reproduce it in that email. It seems this is possible if the decoding\n> >> gets XLOG_HEAP2_NEW_CID as the first record (belonging to a\n> >> subtransaction) after XLOG_RUNNING_XACTS.\n> >>\n> >\n> > Interesting. That's certainly true for WAL in the crashing case:\n> >\n> > rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n> > 0/01CDCD70, prev 0/01CDCD10, desc: RUNNING_XACTS nextXid 850\n> > latestCompletedXid 847 oldestRunningXid 848; 1 xacts: 848\n> > rmgr: Heap2 len (rec/tot): 60/ 60, tx: 849, lsn:\n> > 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel 1663/16384/1249; tid\n> > 58/38; cmin: 1, cmax: 14, combo: 6\n> >\n>\n> I investigated using the pgdata from the crashed run (can provide, if\n> you have rpi4 or some other aarch64 machine), and the reason is pretty\n> simple - the restart_lsn for the slot is 0/1CDCD70, which is looong\n> after the subxact assignment, so we add both xids as toplevel.\n>\n> That seems broken - if we skip the assignment like this, doesn't that\n> break spill-to-disk and/or streaming? IIRC that's exactly why we had to\n> start logging assignments immediately with wal_level=logical.\n>\n\nWe had started logging assignments immediately in commit 0bead9af48\nfor streaming transactions in PG14. This issue exists prior to that. I\nhave tried and reproduced it in PG13 but I think it will be there even\nbefore that. So, I am not sure if the spilling behavior is broken due\nto this. I think if we don't get assignment recording before\nprocessing changes during decoding commit then we could miss sending\nthe changes which won't be the case here. 
Do you see any other\nproblem?\n\n> Or maybe we're not dealing with the restart_lsn properly, and we should\n> have ignored those records. Both xacts started long before the restart\n> LSN, so we're not seeing the whole xact anyway.\n>\n\nRight, but is that problematic? The restart LSN will be used as a\npoint to start reading the WAL and that helps in building a consistent\nsnapshot. However, for decoding to send the commit, we use\nstart_decoding_at point which will ensure that we send complete\ntransactions.\n\n> However, when processing the NEW_CID record:\n>\n> tx: 849, lsn: 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel\n> 1663/16384/1249; tid 58/38; cmin: 1, cmax: 14, combo: 6\n>\n> we ultimately do this in SnapBuildProcessNewCid:\n>\n> #1 0x0000005566cccdb4 in ReorderBufferAddNewTupleCids (rb=0x559dd64218,\n> xid=848, lsn=30264752, locator=..., tid=..., cmin=1, cmax=14,\n> combocid=6) at reorderbuffer.c:3218\n> #2 0x0000005566cd1f7c in SnapBuildProcessNewCid (builder=0x559dd6a248,\n> xid=849, lsn=30264752, xlrec=0x559dd6e1e0) at snapbuild.c:818\n>\n> so in fact we *know* 849 is a subxact of 848, but we don't call\n> ReorderBufferAssignChild in this case. In fact we can't even do the\n> assignment easily in this case, because we create the subxact first, so\n> that the crash happens right when we attempt to create the toplevel one,\n> and we never even get a chance to do the assignment:\n>\n> 1) process the NEW_CID record, logged for 849 (subxact)\n> 2) process CIDs in the WAL record, which has topleve_xid 848\n>\n>\n> So IMHO we need to figure out what to do for WAL records that create\n> both the toplevel and subxact - either we need to skip them, or rethink\n> how we create the ReorderBufferTXN structs.\n>\n\nAs per my understanding, we can't skip them as they are used to build\nthe snapshot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 5 Sep 2022 10:02:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Sun, Sep 4, 2022 at 11:10 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 9/4/22 16:08, Tomas Vondra wrote:\n> > ...\n> >\n> > so in fact we *know* 849 is a subxact of 848, but we don't call\n> > ReorderBufferAssignChild in this case. In fact we can't even do the\n> > assignment easily in this case, because we create the subxact first, so\n> > that the crash happens right when we attempt to create the toplevel one,\n> > and we never even get a chance to do the assignment:\n> >\n> > 1) process the NEW_CID record, logged for 849 (subxact)\n> > 2) process CIDs in the WAL record, which has topleve_xid 848\n> >\n> >\n> > So IMHO we need to figure out what to do for WAL records that create\n> > both the toplevel and subxact - either we need to skip them, or rethink\n> > how we create the ReorderBufferTXN structs.\n> >\n>\n> This fixes the crash for me, by adding a ReorderBufferAssignChild call\n> to SnapBuildProcessNewCid, and tweaking ReorderBufferAssignChild to\n> ensure we don't try to create the top xact before updating the subxact\n> and removing it from the toplevel_by_lsn list.\n>\n> Essentially, what's happening is this:\n>\n> 1) We read the NEW_CID record, which is logged with XID 849, i.e. the\n> subxact. 
But we don't know it's a subxact, so we create it as a\n> top-level xact with the LSN.\n>\n> 2) We start processing contents of the NEW_CID, which however has info\n> that 849 is subxact of 848, calls ReorderBufferAddNewTupleCids which\n> promptly does ReorderBufferTXNByXid() with the top-level XID, which\n> creates it with the same LSN, and crashes because of the assert.\n>\n> I'm not sure what's the right/proper way to fix this ...\n>\n> The problem is ReorderBufferAssignChild was coded in a way that did not\n> expect the subxact to be created first (as a top-level xact).\n>\n\nI think there was a previously hard-coded way to detect that and we\nhave removed it in commit\n(https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e3ff789acfb2754cd7b5e87f6f4463fd08e35996).\nI think it is possible that subtransaction gets logged without\nprevious top-level txn record as shown in the commit shared.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 5 Sep 2022 12:05:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "\n\nOn 9/5/22 06:32, Amit Kapila wrote:\n> On Sun, Sep 4, 2022 at 7:38 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 9/4/22 14:24, Tomas Vondra wrote:\n>>>\n>>>> As per\n>>>> my understanding, the problem I reported in the email [1] is the same\n>>>> and we have seen this in BF failures as well. I posted a way to\n>>>> reproduce it in that email. It seems this is possible if the decoding\n>>>> gets XLOG_HEAP2_NEW_CID as the first record (belonging to a\n>>>> subtransaction) after XLOG_RUNNING_XACTS.\n>>>>\n>>>\n>>> Interesting. That's certainly true for WAL in the crashing case:\n>>>\n>>> rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n>>> 0/01CDCD70, prev 0/01CDCD10, desc: RUNNING_XACTS nextXid 850\n>>> latestCompletedXid 847 oldestRunningXid 848; 1 xacts: 848\n>>> rmgr: Heap2 len (rec/tot): 60/ 60, tx: 849, lsn:\n>>> 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel 1663/16384/1249; tid\n>>> 58/38; cmin: 1, cmax: 14, combo: 6\n>>>\n>>\n>> I investigated using the pgdata from the crashed run (can provide, if\n>> you have rpi4 or some other aarch64 machine), and the reason is pretty\n>> simple - the restart_lsn for the slot is 0/1CDCD70, which is looong\n>> after the subxact assignment, so we add both xids as toplevel.\n>>\n>> That seems broken - if we skip the assignment like this, doesn't that\n>> break spill-to-disk and/or streaming? IIRC that's exactly why we had to\n>> start logging assignments immediately with wal_level=logical.\n>>\n> \n> We had started logging assignments immediately in commit 0bead9af48\n> for streaming transactions in PG14. This issue exists prior to that. I\n> have tried and reproduced it in PG13 but I think it will be there even\n> before that. So, I am not sure if the spilling behavior is broken due\n> to this. I think if we don't get assignment recording before\n> processing changes during decoding commit then we could miss sending\n> the changes which won't be the case here. 
Do you see any other\n> problem?\n> \n\nI can't, but that's hardly a proof of anything. You're right spilling to\ndisk may not be broken by this, though. I forgot it precedes assignments\nbeing logged immediately, so it does not rely on that.\n\n>> Or maybe we're not dealing with the restart_lsn properly, and we should\n>> have ignored those records. Both xacts started long before the restart\n>> LSN, so we're not seeing the whole xact anyway.\n>>\n> \n> Right, but is that problematic? The restart LSN will be used as a\n> point to start reading the WAL and that helps in building a consistent\n> snapshot. However, for decoding to send the commit, we use\n> start_decoding_at point which will ensure that we send complete\n> transactions.\n> \n\nWhich part would not be problematic? There's some sort of a bug, that's\nfor sure.\n\nI think it's mostly clear we won't output this transaction, because the\nrestart LSN is half-way through. We can either ignore it at commit time,\nand then we have to make everything work in case we miss assignments (or\nany other part of the transaction).\n\nOr we can ignore stuff early, and not even process some of the changes.\nFor example in this case do we need to process the NEW_CID contents for\ntransaction 848? 
If we can skip that bit, the problem will disappear.\n\nBut maybe this is futile and there are other similar issues ...\n\n>> However, when processing the NEW_CID record:\n>>\n>> tx: 849, lsn: 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel\n>> 1663/16384/1249; tid 58/38; cmin: 1, cmax: 14, combo: 6\n>>\n>> we ultimately do this in SnapBuildProcessNewCid:\n>>\n>> #1 0x0000005566cccdb4 in ReorderBufferAddNewTupleCids (rb=0x559dd64218,\n>> xid=848, lsn=30264752, locator=..., tid=..., cmin=1, cmax=14,\n>> combocid=6) at reorderbuffer.c:3218\n>> #2 0x0000005566cd1f7c in SnapBuildProcessNewCid (builder=0x559dd6a248,\n>> xid=849, lsn=30264752, xlrec=0x559dd6e1e0) at snapbuild.c:818\n>>\n>> so in fact we *know* 849 is a subxact of 848, but we don't call\n>> ReorderBufferAssignChild in this case. In fact we can't even do the\n>> assignment easily in this case, because we create the subxact first, so\n>> that the crash happens right when we attempt to create the toplevel one,\n>> and we never even get a chance to do the assignment:\n>>\n>> 1) process the NEW_CID record, logged for 849 (subxact)\n>> 2) process CIDs in the WAL record, which has topleve_xid 848\n>>\n>>\n>> So IMHO we need to figure out what to do for WAL records that create\n>> both the toplevel and subxact - either we need to skip them, or rethink\n>> how we create the ReorderBufferTXN structs.\n>>\n> \n> As per my understanding, we can't skip them as they are used to build\n> the snapshot.\n> \n\nDon't we know 848 (the top-level xact) won't be decoded? In that case we\nwon't need the snapshot, so why build it? Of course, the NEW_CID is\nlogged with xid 849 and we don't know it's subxact of 848, but when\nprocessing the NEW_CID content we can realize that (xl_heap_new_cid does\ninclude top_xid).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Sep 2022 08:44:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Mon, Sep 5, 2022 at 12:14 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 9/5/22 06:32, Amit Kapila wrote:\n> > On Sun, Sep 4, 2022 at 7:38 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 9/4/22 14:24, Tomas Vondra wrote:\n> >>>\n> >>>> As per\n> >>>> my understanding, the problem I reported in the email [1] is the same\n> >>>> and we have seen this in BF failures as well. I posted a way to\n> >>>> reproduce it in that email. It seems this is possible if the decoding\n> >>>> gets XLOG_HEAP2_NEW_CID as the first record (belonging to a\n> >>>> subtransaction) after XLOG_RUNNING_XACTS.\n> >>>>\n> >>>\n> >>> Interesting. That's certainly true for WAL in the crashing case:\n> >>>\n> >>> rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n> >>> 0/01CDCD70, prev 0/01CDCD10, desc: RUNNING_XACTS nextXid 850\n> >>> latestCompletedXid 847 oldestRunningXid 848; 1 xacts: 848\n> >>> rmgr: Heap2 len (rec/tot): 60/ 60, tx: 849, lsn:\n> >>> 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel 1663/16384/1249; tid\n> >>> 58/38; cmin: 1, cmax: 14, combo: 6\n> >>>\n> >>\n> >> I investigated using the pgdata from the crashed run (can provide, if\n> >> you have rpi4 or some other aarch64 machine), and the reason is pretty\n> >> simple - the restart_lsn for the slot is 0/1CDCD70, which is looong\n> >> after the subxact assignment, so we add both xids as toplevel.\n> >>\n> >> That seems broken - if we skip the assignment like this, doesn't that\n> >> break spill-to-disk and/or streaming? IIRC that's exactly why we had to\n> >> start logging assignments immediately with wal_level=logical.\n> >>\n> >\n> > We had started logging assignments immediately in commit 0bead9af48\n> > for streaming transactions in PG14. This issue exists prior to that. I\n> > have tried and reproduced it in PG13 but I think it will be there even\n> > before that. So, I am not sure if the spilling behavior is broken due\n> > to this. 
I think if we don't get assignment recording before\n> > processing changes during decoding commit then we could miss sending\n> > the changes which won't be the case here. Do you see any other\n> > problem?\n> >\n>\n> I can't, but that's hardly a proof of anything. You're right spilling to\n> disk may not be broken by this, though. I forgot it precedes assignments\n> being logged immediately, so it does not rely on that.\n>\n> >> Or maybe we're not dealing with the restart_lsn properly, and we should\n> >> have ignored those records. Both xacts started long before the restart\n> >> LSN, so we're not seeing the whole xact anyway.\n> >>\n> >\n> > Right, but is that problematic? The restart LSN will be used as a\n> > point to start reading the WAL and that helps in building a consistent\n> > snapshot. However, for decoding to send the commit, we use\n> > start_decoding_at point which will ensure that we send complete\n> > transactions.\n> >\n>\n> Which part would not be problematic? There's some sort of a bug, that's\n> for sure.\n>\n\nIt is possible that there is some other problem here that I am\nmissing. But at this stage, I don't see anything wrong other than the\nassertion you have reported.\n\n> I think it's mostly clear we won't output this transaction, because the\n> restart LSN is half-way through. We can either ignore it at commit time,\n> and then we have to make everything work in case we miss assignments (or\n> any other part of the transaction).\n>\n\nNote, traditionally, we only form these assignments at commit time\nafter deciding whether to skip such commits. So, ideally, there\nshouldn't be any fundamental problem with not making these\nassociations before deciding whether we need to replay (send\ndownstream) any particular transaction.\n\n> Or we can ignore stuff early, and not even process some of the changes.\n> For example in this case do we need to process the NEW_CID contents for\n> transaction 848? 
If we can skip that bit, the problem will disappear.\n>\n> But maybe this is futile and there are other similar issues ...\n>\n> >> However, when processing the NEW_CID record:\n> >>\n> >> tx: 849, lsn: 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel\n> >> 1663/16384/1249; tid 58/38; cmin: 1, cmax: 14, combo: 6\n> >>\n> >> we ultimately do this in SnapBuildProcessNewCid:\n> >>\n> >> #1 0x0000005566cccdb4 in ReorderBufferAddNewTupleCids (rb=0x559dd64218,\n> >> xid=848, lsn=30264752, locator=..., tid=..., cmin=1, cmax=14,\n> >> combocid=6) at reorderbuffer.c:3218\n> >> #2 0x0000005566cd1f7c in SnapBuildProcessNewCid (builder=0x559dd6a248,\n> >> xid=849, lsn=30264752, xlrec=0x559dd6e1e0) at snapbuild.c:818\n> >>\n> >> so in fact we *know* 849 is a subxact of 848, but we don't call\n> >> ReorderBufferAssignChild in this case. In fact we can't even do the\n> >> assignment easily in this case, because we create the subxact first, so\n> >> that the crash happens right when we attempt to create the toplevel one,\n> >> and we never even get a chance to do the assignment:\n> >>\n> >> 1) process the NEW_CID record, logged for 849 (subxact)\n> >> 2) process CIDs in the WAL record, which has topleve_xid 848\n> >>\n> >>\n> >> So IMHO we need to figure out what to do for WAL records that create\n> >> both the toplevel and subxact - either we need to skip them, or rethink\n> >> how we create the ReorderBufferTXN structs.\n> >>\n> >\n> > As per my understanding, we can't skip them as they are used to build\n> > the snapshot.\n> >\n>\n> Don't we know 848 (the top-level xact) won't be decoded? In that case we\n> won't need the snapshot, so why build it?\n>\n\nBut this transaction id can be part of committed.xip array if it has\nmade any catalog changes. We add the transaction/subtransaction to\nthis array before deciding whether to skip decoding/replay of its\ncommit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 5 Sep 2022 15:42:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "\n\nOn 9/5/22 08:35, Amit Kapila wrote:\n> On Sun, Sep 4, 2022 at 11:10 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 9/4/22 16:08, Tomas Vondra wrote:\n>>> ...\n>>>\n>>> so in fact we *know* 849 is a subxact of 848, but we don't call\n>>> ReorderBufferAssignChild in this case. In fact we can't even do the\n>>> assignment easily in this case, because we create the subxact first, so\n>>> that the crash happens right when we attempt to create the toplevel one,\n>>> and we never even get a chance to do the assignment:\n>>>\n>>> 1) process the NEW_CID record, logged for 849 (subxact)\n>>> 2) process CIDs in the WAL record, which has topleve_xid 848\n>>>\n>>>\n>>> So IMHO we need to figure out what to do for WAL records that create\n>>> both the toplevel and subxact - either we need to skip them, or rethink\n>>> how we create the ReorderBufferTXN structs.\n>>>\n>>\n>> This fixes the crash for me, by adding a ReorderBufferAssignChild call\n>> to SnapBuildProcessNewCid, and tweaking ReorderBufferAssignChild to\n>> ensure we don't try to create the top xact before updating the subxact\n>> and removing it from the toplevel_by_lsn list.\n>>\n>> Essentially, what's happening is this:\n>>\n>> 1) We read the NEW_CID record, which is logged with XID 849, i.e. the\n>> subxact. 
But we don't know it's a subxact, so we create it as a\n>> top-level xact with the LSN.\n>>\n>> 2) We start processing contents of the NEW_CID, which however has info\n>> that 849 is subxact of 848, calls ReorderBufferAddNewTupleCids which\n>> promptly does ReorderBufferTXNByXid() with the top-level XID, which\n>> creates it with the same LSN, and crashes because of the assert.\n>>\n>> I'm not sure what's the right/proper way to fix this ...\n>>\n>> The problem is ReorderBufferAssignChild was coded in a way that did not\n>> expect the subxact to be created first (as a top-level xact).\n>>\n> \n> I think there was a previously hard-coded way to detect that and we\n> have removed it in commit\n> (https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e3ff789acfb2754cd7b5e87f6f4463fd08e35996).\n> I think it is possible that subtransaction gets logged without\n> previous top-level txn record as shown in the commit shared.\n> \n\nWell, yes and no.\n\nThis wouldn't detect the issue, because the assert happens in the first\nReorderBufferTXNByXid(), so it's still crash (in assert-enabled build,\nat least).\n\nMaybe removing the assumption was the wrong thing, and we should have\nchanged the code so that we don't violate it? That's kinda what my \"fix\"\ndoes, in a way.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Sep 2022 12:49:55 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "\n\nOn 9/5/22 12:12, Amit Kapila wrote:\n> On Mon, Sep 5, 2022 at 12:14 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 9/5/22 06:32, Amit Kapila wrote:\n>>> On Sun, Sep 4, 2022 at 7:38 PM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> On 9/4/22 14:24, Tomas Vondra wrote:\n>>>>>\n>>>>>> As per\n>>>>>> my understanding, the problem I reported in the email [1] is the same\n>>>>>> and we have seen this in BF failures as well. I posted a way to\n>>>>>> reproduce it in that email. It seems this is possible if the decoding\n>>>>>> gets XLOG_HEAP2_NEW_CID as the first record (belonging to a\n>>>>>> subtransaction) after XLOG_RUNNING_XACTS.\n>>>>>>\n>>>>>\n>>>>> Interesting. That's certainly true for WAL in the crashing case:\n>>>>>\n>>>>> rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n>>>>> 0/01CDCD70, prev 0/01CDCD10, desc: RUNNING_XACTS nextXid 850\n>>>>> latestCompletedXid 847 oldestRunningXid 848; 1 xacts: 848\n>>>>> rmgr: Heap2 len (rec/tot): 60/ 60, tx: 849, lsn:\n>>>>> 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel 1663/16384/1249; tid\n>>>>> 58/38; cmin: 1, cmax: 14, combo: 6\n>>>>>\n>>>>\n>>>> I investigated using the pgdata from the crashed run (can provide, if\n>>>> you have rpi4 or some other aarch64 machine), and the reason is pretty\n>>>> simple - the restart_lsn for the slot is 0/1CDCD70, which is looong\n>>>> after the subxact assignment, so we add both xids as toplevel.\n>>>>\n>>>> That seems broken - if we skip the assignment like this, doesn't that\n>>>> break spill-to-disk and/or streaming? IIRC that's exactly why we had to\n>>>> start logging assignments immediately with wal_level=logical.\n>>>>\n>>>\n>>> We had started logging assignments immediately in commit 0bead9af48\n>>> for streaming transactions in PG14. This issue exists prior to that. I\n>>> have tried and reproduced it in PG13 but I think it will be there even\n>>> before that. 
So, I am not sure if the spilling behavior is broken due\n>>> to this. I think if we don't get assignment recording before\n>>> processing changes during decoding commit then we could miss sending\n>>> the changes which won't be the case here. Do you see any other\n>>> problem?\n>>>\n>>\n>> I can't, but that's hardly a proof of anything. You're right spilling to\n>> disk may not be broken by this, though. I forgot it precedes assignments\n>> being logged immediately, so it does not rely on that.\n>>\n>>>> Or maybe we're not dealing with the restart_lsn properly, and we should\n>>>> have ignored those records. Both xacts started long before the restart\n>>>> LSN, so we're not seeing the whole xact anyway.\n>>>>\n>>>\n>>> Right, but is that problematic? The restart LSN will be used as a\n>>> point to start reading the WAL and that helps in building a consistent\n>>> snapshot. However, for decoding to send the commit, we use\n>>> start_decoding_at point which will ensure that we send complete\n>>> transactions.\n>>>\n>>\n>> Which part would not be problematic? There's some sort of a bug, that's\n>> for sure.\n>>\n> \n> It is possible that there is some other problem here that I am\n> missing. But at this stage, I don't see anything wrong other than the\n> assertion you have reported.\n> \n\nI'm not sure I agree with that. I'm not convinced the assert is at\nfault, it might easily be that it hints there's a logic bug somewhere.\n\n>> I think it's mostly clear we won't output this transaction, because the\n>> restart LSN is half-way through. We can either ignore it at commit time,\n>> and then we have to make everything work in case we miss assignments (or\n>> any other part of the transaction).\n>>\n> \n> Note, traditionally, we only form these assignments at commit time\n> after deciding whether to skip such commits. 
So, ideally, there\n> shouldn't be any fundamental problem with not making these\n> associations before deciding whether we need to replay (send\n> downstream) any particular transaction.\n> \n\nIsn't that self-contradictory? Either we form these assignments at\ncommit time, or we support streaming (in which case it clearly can't\nhappen at commit time). AFAICS that's exactly why we started logging\n(and processing) assignments immediately, no?\n\n>> Or we can ignore stuff early, and not even process some of the changes.\n>> For example in this case do we need to process the NEW_CID contents for\n>> transaction 848? If we can skip that bit, the problem will disappear.\n>>\n>> But maybe this is futile and there are other similar issues ...\n>>\n>>>> However, when processing the NEW_CID record:\n>>>>\n>>>> tx: 849, lsn: 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel\n>>>> 1663/16384/1249; tid 58/38; cmin: 1, cmax: 14, combo: 6\n>>>>\n>>>> we ultimately do this in SnapBuildProcessNewCid:\n>>>>\n>>>> #1 0x0000005566cccdb4 in ReorderBufferAddNewTupleCids (rb=0x559dd64218,\n>>>> xid=848, lsn=30264752, locator=..., tid=..., cmin=1, cmax=14,\n>>>> combocid=6) at reorderbuffer.c:3218\n>>>> #2 0x0000005566cd1f7c in SnapBuildProcessNewCid (builder=0x559dd6a248,\n>>>> xid=849, lsn=30264752, xlrec=0x559dd6e1e0) at snapbuild.c:818\n>>>>\n>>>> so in fact we *know* 849 is a subxact of 848, but we don't call\n>>>> ReorderBufferAssignChild in this case. 
In fact we can't even do the\n>>>> assignment easily in this case, because we create the subxact first, so\n>>>> that the crash happens right when we attempt to create the toplevel one,\n>>>> and we never even get a chance to do the assignment:\n>>>>\n>>>> 1) process the NEW_CID record, logged for 849 (subxact)\n>>>> 2) process CIDs in the WAL record, which has topleve_xid 848\n>>>>\n>>>>\n>>>> So IMHO we need to figure out what to do for WAL records that create\n>>>> both the toplevel and subxact - either we need to skip them, or rethink\n>>>> how we create the ReorderBufferTXN structs.\n>>>>\n>>>\n>>> As per my understanding, we can't skip them as they are used to build\n>>> the snapshot.\n>>>\n>>\n>> Don't we know 848 (the top-level xact) won't be decoded? In that case we\n>> won't need the snapshot, so why build it?\n>>\n> \n> But this transaction id can be part of committed.xip array if it has\n> made any catalog changes. We add the transaction/subtransaction to\n> this array before deciding whether to skip decoding/replay of its\n> commit.\n> \n\nHmm, yeah. It's been a while since I last looked into how we build\nsnapshots and how we share them between the transactions :-( If we share\nthe snapshots between transactions, you're probably right we can't just\nskip these changes.\n\nHowever, doesn't that pretty much mean we *have* to do something about\nthe assignment? I mean, suppose we miss the assignment (like now), so\nthat we end up with two TXNs that we think are top-level. And then we\nget the commit for the actual top-level transaction. AFAICS that won't\nclean-up the subxact, and we end up with a lingering TXN.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Sep 2022 13:54:24 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Mon, Sep 5, 2022 at 5:24 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 9/5/22 12:12, Amit Kapila wrote:\n> > On Mon, Sep 5, 2022 at 12:14 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > It is possible that there is some other problem here that I am\n> > missing. But at this stage, I don't see anything wrong other than the\n> > assertion you have reported.\n> >\n>\n> I'm not sure I agree with that. I'm not convinced the assert is at\n> fault, it might easily be that it hints there's a logic bug somewhere.\n>\n\nIt is possible but let's try to prove it. I am also keen to know if\nthis hints at a logic bug somewhere.\n\n> >> I think it's mostly clear we won't output this transaction, because the\n> >> restart LSN is half-way through. We can either ignore it at commit time,\n> >> and then we have to make everything work in case we miss assignments (or\n> >> any other part of the transaction).\n> >>\n> >\n> > Note, traditionally, we only form these assignments at commit time\n> > after deciding whether to skip such commits. So, ideally, there\n> > shouldn't be any fundamental problem with not making these\n> > associations before deciding whether we need to replay (send\n> > downstream) any particular transaction.\n> >\n>\n> Isn't that self-contradictory? Either we form these assignments at\n> commit time, or we support streaming (in which case it clearly can't\n> happen at commit time).\n>\n\nI was talking about non-streaming cases which also have this assert\nproblem as seen in this thread. 
I am intentionally keeping streaming\ncases out of this discussion as it happens without those and by\nincluding streaming in the discussion, we will add another angle to\nthis problem which may not be required.\n\n> AFAICS that's exactly why we started logging\n> (and processing) assignments immediately, no?\n>\n> >> Or we can ignore stuff early, and not even process some of the changes.\n> >> For example in this case do we need to process the NEW_CID contents for\n> >> transaction 848? If we can skip that bit, the problem will disappear.\n> >>\n> >> But maybe this is futile and there are other similar issues ...\n> >>\n> >>>> However, when processing the NEW_CID record:\n> >>>>\n> >>>> tx: 849, lsn: 0/01CDCDB0, prev 0/01CDCD70, desc: NEW_CID rel\n> >>>> 1663/16384/1249; tid 58/38; cmin: 1, cmax: 14, combo: 6\n> >>>>\n> >>>> we ultimately do this in SnapBuildProcessNewCid:\n> >>>>\n> >>>> #1 0x0000005566cccdb4 in ReorderBufferAddNewTupleCids (rb=0x559dd64218,\n> >>>> xid=848, lsn=30264752, locator=..., tid=..., cmin=1, cmax=14,\n> >>>> combocid=6) at reorderbuffer.c:3218\n> >>>> #2 0x0000005566cd1f7c in SnapBuildProcessNewCid (builder=0x559dd6a248,\n> >>>> xid=849, lsn=30264752, xlrec=0x559dd6e1e0) at snapbuild.c:818\n> >>>>\n> >>>> so in fact we *know* 849 is a subxact of 848, but we don't call\n> >>>> ReorderBufferAssignChild in this case. 
In fact we can't even do the\n> >>>> assignment easily in this case, because we create the subxact first, so\n> >>>> that the crash happens right when we attempt to create the toplevel one,\n> >>>> and we never even get a chance to do the assignment:\n> >>>>\n> >>>> 1) process the NEW_CID record, logged for 849 (subxact)\n> >>>> 2) process CIDs in the WAL record, which has topleve_xid 848\n> >>>>\n> >>>>\n> >>>> So IMHO we need to figure out what to do for WAL records that create\n> >>>> both the toplevel and subxact - either we need to skip them, or rethink\n> >>>> how we create the ReorderBufferTXN structs.\n> >>>>\n> >>>\n> >>> As per my understanding, we can't skip them as they are used to build\n> >>> the snapshot.\n> >>>\n> >>\n> >> Don't we know 848 (the top-level xact) won't be decoded? In that case we\n> >> won't need the snapshot, so why build it?\n> >>\n> >\n> > But this transaction id can be part of committed.xip array if it has\n> > made any catalog changes. We add the transaction/subtransaction to\n> > this array before deciding whether to skip decoding/replay of its\n> > commit.\n> >\n>\n> Hmm, yeah. It's been a while since I last looked into how we build\n> snapshots and how we share them between the transactions :-( If we share\n> the snapshots between transactions, you're probably right we can't just\n> skip these changes.\n>\n> However, doesn't that pretty much mean we *have* to do something about\n> the assignment? I mean, suppose we miss the assignment (like now), so\n> that we end up with two TXNs that we think are top-level. And then we\n> get the commit for the actual top-level transaction. AFAICS that won't\n> clean-up the subxact, and we end up with a lingering TXN.\n>\n\nI think we will clean up such a subxact. Such a xact should be skipped\nvia DecodeTXNNeedSkip() and then it will call ReorderBufferForget()\nfor each of the subxacts and that will make sure that we clean up each\nof subtxn's.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Sep 2022 11:29:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 6, 2022 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 5, 2022 at 5:24 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 9/5/22 12:12, Amit Kapila wrote:\n> > > On Mon, Sep 5, 2022 at 12:14 PM Tomas Vondra\n> > > <tomas.vondra@enterprisedb.com> wrote:\n> > >\n> > > It is possible that there is some other problem here that I am\n> > > missing. But at this stage, I don't see anything wrong other than the\n> > > assertion you have reported.\n> > >\n> >\n> > I'm not sure I agree with that. I'm not convinced the assert is at\n> > fault, it might easily be that it hints there's a logic bug somewhere.\n> >\n>\n> It is possible but let's try to prove it. I am also keen to know if\n> this hints at a logic bug somewhere.\n>\n> > >> I think it's mostly clear we won't output this transaction, because the\n> > >> restart LSN is half-way through. We can either ignore it at commit time,\n> > >> and then we have to make everything work in case we miss assignments (or\n> > >> any other part of the transaction).\n> > >>\n> > >\n> > > Note, traditionally, we only form these assignments at commit time\n> > > after deciding whether to skip such commits. So, ideally, there\n> > > shouldn't be any fundamental problem with not making these\n> > > associations before deciding whether we need to replay (send\n> > > downstream) any particular transaction.\n\nAgreed.\n\nSummarizing this issue, the assertion check in AssertTXNLsnOrder()\nfails as reported because the current logical decoding cannot properly\nhandle the case where the decoding restarts from NEW_CID. Since we\ndon't make the association between top-level transaction and its\nsubtransaction while decoding NEW_CID (ie, in\nSnapBuildProcessNewCid()), two transactions are created in\nReorderBuffer as top-txn and have the same LSN. 
This failure happens\non all supported versions.\n\nTo fix the problem, one idea is that we make the association between\ntop-txn and sub-txn during that by calling ReorderBufferAssignChild(),\nas Tomas proposed. On the other hand, since we don't guarantee to make\nthe association between the top-level transaction and its\nsub-transactions until we try to decode the actual contents of the\ntransaction, it makes sense to me that instead of trying to solve by\nmaking association, we need to change the code which are assuming that\nit is associated.\n\nI've attached the patch for this idea. With the patch, we skip the\nassertion checks in AssertTXNLsnOrder() until we reach the LSN at\nwhich we start decoding the contents of transaction, ie.\nstart_decoding_at in SnapBuild. The minor concern is other way that\nthe assertion check could miss some faulty cases where two unrelated\ntop-transactions could have same LSN. With this patch, it will pass\nfor such a case. Therefore, for transactions that we skipped checking,\nwe do the check when we reach the LSN.\n\nPlease note that to pass the new regression tests, the fix proposed in\na related thread[1] is required. Particularly, we need:\n\n@@ -1099,6 +1099,9 @@ SnapBuildCommitTxn(SnapBuild *builder,\nXLogRecPtr lsn, TransactionId xid,\n else if (sub_needs_timetravel)\n {\n /* track toplevel txn as well, subxact alone isn't meaningful */\n+ elog(DEBUG2, \"forced transaction %u to do timetravel\ndue to one of its subtransaction\",\n+ xid);\n+ needs_timetravel = true;\n SnapBuildAddCommittedTxn(builder, xid);\n }\n else if (needs_timetravel)\n\nA side benefit of this approach is that we can fix another assertion\nfailure too that happens on REL14 and REL15 and reported here[2]. In\nthe commits 68dcce247f1a(REL14) and 272248a0c1(REL15), the reason why\nwe make the association between sub-txns to top-txn in\nSnapBuildXidSetCatalogChanges() is just to avoid the assertion failure\nin AssertTXNLsnOrder(). 
However, since the invalidation messages are\nnot transported from sub-txn to top-txn during the assignment, another\nassertion check in ReorderBufferForget() fails when forgetting the\nsubtransaction. If we apply this idea of skipping the assertion\nchecks, we no longer need to make such an association in\nSnapBuildXidSetCatalogChanges(), which resolves this issue as well.\n\n> > >>>\n> > >>\n> > >> Don't we know 848 (the top-level xact) won't be decoded? In that case we\n> > >> won't need the snapshot, so why build it?\n> > >>\n> > >\n> > > But this transaction id can be part of committed.xip array if it has\n> > > made any catalog changes. We add the transaction/subtransaction to\n> > > this array before deciding whether to skip decoding/replay of its\n> > > commit.\n> > >\n> >\n> > Hmm, yeah. It's been a while since I last looked into how we build\n> > snapshots and how we share them between the transactions :-( If we share\n> > the snapshots between transactions, you're probably right we can't just\n> > skip these changes.\n> >\n> > However, doesn't that pretty much mean we *have* to do something about\n> > the assignment? I mean, suppose we miss the assignment (like now), so\n> > that we end up with two TXNs that we think are top-level. And then we\n> > get the commit for the actual top-level transaction. AFAICS that won't\n> > clean-up the subxact, and we end up with a lingering TXN.\n> >\n>\n> I think we will clean up such a subxact. 
Such a xact should be skipped\n> via DecodeTXNNeedSkip() and then it will call ReorderBufferForget()\n> for each of the subxacts and that will make sure that we clean up each\n> of subtxn's.\n>\n\nRight.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/TYAPR01MB58666BD6BE24853269624282F5419%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n[2] https://www.postgresql.org/message-id/TYAPR01MB58660803BCAA7849C8584AA4F57E9%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n\n--\nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 12 Oct 2022 14:48:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 11:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Summarizing this issue, the assertion check in AssertTXNLsnOrder()\n> fails as reported because the current logical decoding cannot properly\n> handle the case where the decoding restarts from NEW_CID. Since we\n> don't make the association between top-level transaction and its\n> subtransaction while decoding NEW_CID (ie, in\n> SnapBuildProcessNewCid()), two transactions are created in\n> ReorderBuffer as top-txn and have the same LSN. This failure happens\n> on all supported versions.\n>\n> To fix the problem, one idea is that we make the association between\n> top-txn and sub-txn during that by calling ReorderBufferAssignChild(),\n> as Tomas proposed. On the other hand, since we don't guarantee to make\n> the association between the top-level transaction and its\n> sub-transactions until we try to decode the actual contents of the\n> transaction, it makes sense to me that instead of trying to solve by\n> making association, we need to change the code which are assuming that\n> it is associated.\n>\n> I've attached the patch for this idea. With the patch, we skip the\n> assertion checks in AssertTXNLsnOrder() until we reach the LSN at\n> which we start decoding the contents of transaction, ie.\n> start_decoding_at in SnapBuild. The minor concern is other way that\n> the assertion check could miss some faulty cases where two unrelated\n> top-transactions could have same LSN. With this patch, it will pass\n> for such a case. Therefore, for transactions that we skipped checking,\n> we do the check when we reach the LSN.\n>\n\n>\n--- a/src/backend/replication/logical/decode.c\n+++ b/src/backend/replication/logical/decode.c\n@@ -113,6 +113,15 @@\nLogicalDecodingProcessRecord(LogicalDecodingContext *ctx,\nXLogReaderState *recor\n buf.origptr);\n }\n\n+#ifdef USE_ASSERT_CHECKING\n+ /*\n+ * Check the order of transaction LSNs when we reached the start decoding\n+ * LSN. 
See the comments in AssertTXNLsnOrder() for details.\n+ */\n+ if (SnapBuildGetStartDecodingAt(ctx->snapshot_builder) == buf.origptr)\n+ AssertTXNLsnOrder(ctx->reorder);\n+#endif\n+\n rmgr = GetRmgr(XLogRecGetRmid(record));\n>\n\nI am not able to think how/when this check will be useful. Because we\nskipped assert checking only for records that are prior to\nstart_decoding_at point, I think for those records ordering should\nhave been checked before the restart. start_decoding_at point will be\neither (a) confirmed_flush location, or (b) lsn sent by client, and\nany record prior to that must have been processed before restart.\n\nNow, say we have commit records for multiple transactions which are\nafter start_decoding_at but all their changes are before\nstart_decoding_at, then we won't check their ordering at commit time\nbut OTOH, we would have checked their ordering before restart. Isn't\nthat sufficient?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 13 Oct 2022 12:38:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Thu, Oct 13, 2022 at 4:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 12, 2022 at 11:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Summarizing this issue, the assertion check in AssertTXNLsnOrder()\n> > fails as reported because the current logical decoding cannot properly\n> > handle the case where the decoding restarts from NEW_CID. Since we\n> > don't make the association between top-level transaction and its\n> > subtransaction while decoding NEW_CID (ie, in\n> > SnapBuildProcessNewCid()), two transactions are created in\n> > ReorderBuffer as top-txn and have the same LSN. This failure happens\n> > on all supported versions.\n> >\n> > To fix the problem, one idea is that we make the association between\n> > top-txn and sub-txn during that by calling ReorderBufferAssignChild(),\n> > as Tomas proposed. On the other hand, since we don't guarantee to make\n> > the association between the top-level transaction and its\n> > sub-transactions until we try to decode the actual contents of the\n> > transaction, it makes sense to me that instead of trying to solve by\n> > making association, we need to change the code which are assuming that\n> > it is associated.\n> >\n> > I've attached the patch for this idea. With the patch, we skip the\n> > assertion checks in AssertTXNLsnOrder() until we reach the LSN at\n> > which we start decoding the contents of transaction, ie.\n> > start_decoding_at in SnapBuild. The minor concern is other way that\n> > the assertion check could miss some faulty cases where two unrelated\n> > top-transactions could have same LSN. With this patch, it will pass\n> > for such a case. 
Therefore, for transactions that we skipped checking,\n> > we do the check when we reach the LSN.\n> >\n>\n> >\n> --- a/src/backend/replication/logical/decode.c\n> +++ b/src/backend/replication/logical/decode.c\n> @@ -113,6 +113,15 @@\n> LogicalDecodingProcessRecord(LogicalDecodingContext *ctx,\n> XLogReaderState *recor\n> buf.origptr);\n> }\n>\n> +#ifdef USE_ASSERT_CHECKING\n> + /*\n> + * Check the order of transaction LSNs when we reached the start decoding\n> + * LSN. See the comments in AssertTXNLsnOrder() for details.\n> + */\n> + if (SnapBuildGetStartDecodingAt(ctx->snapshot_builder) == buf.origptr)\n> + AssertTXNLsnOrder(ctx->reorder);\n> +#endif\n> +\n> rmgr = GetRmgr(XLogRecGetRmid(record));\n> >\n>\n> I am not able to think how/when this check will be useful. Because we\n> skipped assert checking only for records that are prior to\n> start_decoding_at point, I think for those records ordering should\n> have been checked before the restart. start_decoding_at point will be\n> either (a) confirmed_flush location, or (b) lsn sent by client, and\n> any record prior to that must have been processed before restart.\n\nGood point. I was considering the case where the client sets a far-ahead\nLSN but it's not worth considering this case in this context. I've\nupdated the patch accordingly.\n\nRegards,\n\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 17 Oct 2022 10:34:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 11:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Please note that to pass the new regression tests, the fix proposed in\n> a related thread[1] is required. Particularly, we need:\n>\n> @@ -1099,6 +1099,9 @@ SnapBuildCommitTxn(SnapBuild *builder,\n> XLogRecPtr lsn, TransactionId xid,\n> else if (sub_needs_timetravel)\n> {\n> /* track toplevel txn as well, subxact alone isn't meaningful */\n> + elog(DEBUG2, \"forced transaction %u to do timetravel\n> due to one of its subtransaction\",\n> + xid);\n> + needs_timetravel = true;\n> SnapBuildAddCommittedTxn(builder, xid);\n> }\n> else if (needs_timetravel)\n>\n> A side benefit of this approach is that we can fix another assertion\n> failure too that happens on REL14 and REL15 and reported here[2]. In\n> the commits 68dcce247f1a(REL14) and 272248a0c1(REL15), the reason why\n> we make the association between sub-txns to top-txn in\n> SnapBuildXidSetCatalogChanges() is just to avoid the assertion failure\n> in AssertTXNLsnOrder(). However, since the invalidation messages are\n> not transported from sub-txn to top-txn during the assignment, another\n> assertion check in ReorderBufferForget() fails when forgetting the\n> subtransaction. If we apply this idea of skipping the assertion\n> checks, we no longer need to make the such association in\n> SnapBuildXidSetCatalogChanges() and resolve this issue as well.\n>\n\nIIUC, here you are speaking of three different changes. Change-1: Add\na check in AssertTXNLsnOrder() to skip assert checking till we reach\nstart_decoding_at. Change-2: Set needs_timetravel to true in one of\nthe else if branches in SnapBuildCommitTxn(). Change-3: Remove the\ncall to ReorderBufferAssignChild() from SnapBuildXidSetCatalogChanges\nin PG-14/15 as that won't be required after Change-1.\n\nAFAIU, Change-1 is required till v10; Change-2 and Change-3 are\nrequired in HEAD/v15/v14 to fix the problem. 
Now, the second and third\nchanges are not required in branches prior to v14 because we don't\nrecord invalidations via XLOG_XACT_INVALIDATIONS record. However, if\nwe want, we can even back-patch Change-2 and Change-3 to keep the code\nconsistent or maybe just Change-3.\n\nIs my understanding correct?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 Oct 2022 13:09:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Mon, Oct 17, 2022 at 4:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 12, 2022 at 11:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Please note that to pass the new regression tests, the fix proposed in\n> > a related thread[1] is required. Particularly, we need:\n> >\n> > @@ -1099,6 +1099,9 @@ SnapBuildCommitTxn(SnapBuild *builder,\n> > XLogRecPtr lsn, TransactionId xid,\n> > else if (sub_needs_timetravel)\n> > {\n> > /* track toplevel txn as well, subxact alone isn't meaningful */\n> > + elog(DEBUG2, \"forced transaction %u to do timetravel\n> > due to one of its subtransaction\",\n> > + xid);\n> > + needs_timetravel = true;\n> > SnapBuildAddCommittedTxn(builder, xid);\n> > }\n> > else if (needs_timetravel)\n> >\n> > A side benefit of this approach is that we can fix another assertion\n> > failure too that happens on REL14 and REL15 and reported here[2]. In\n> > the commits 68dcce247f1a(REL14) and 272248a0c1(REL15), the reason why\n> > we make the association between sub-txns to top-txn in\n> > SnapBuildXidSetCatalogChanges() is just to avoid the assertion failure\n> > in AssertTXNLsnOrder(). However, since the invalidation messages are\n> > not transported from sub-txn to top-txn during the assignment, another\n> > assertion check in ReorderBufferForget() fails when forgetting the\n> > subtransaction. If we apply this idea of skipping the assertion\n> > checks, we no longer need to make the such association in\n> > SnapBuildXidSetCatalogChanges() and resolve this issue as well.\n> >\n>\n> IIUC, here you are speaking of three different changes. Change-1: Add\n> a check in AssertTXNLsnOrder() to skip assert checking till we reach\n> start_decoding_at. Change-2: Set needs_timetravel to true in one of\n> the else if branches in SnapBuildCommitTxn(). 
Change-3: Remove the\n> call to ReorderBufferAssignChild() from SnapBuildXidSetCatalogChanges\n> in PG-14/15 as that won't be required after Change-1.\n\nYes.\n\n>\n> AFAIU, Change-1 is required till v10; Change-2 and Change-3 are\n> required in HEAD/v15/v14 to fix the problem.\n\nIIUC Change-2 is required in v16 and HEAD but not mandatory in v15 and\nv14. The reason why we need Change-2 is that there is a case where we\nmark only subtransactions as containing catalog change while not doing\nthat for its top-level transaction. In v15 and v14, since we mark both\nsubtransactions and top-level transaction in\nSnapBuildXidSetCatalogChanges() as containing catalog changes, we\ndon't get the assertion failure at \"Assert(!needs_snapshot ||\nneeds_timetravel)\".\n\nRegarding Change-3, it's required in v15 and v14 but not in HEAD and\nv16. Since we didn't add SnapBuildXidSetCatalogChanges() to v16 and\nHEAD, Change-3 cannot be applied to the two branches.\n\n> Now, the second and third\n> changes are not required in branches prior to v14 because we don't\n> record invalidations via XLOG_XACT_INVALIDATIONS record. However, if\n> we want, we can even back-patch Change-2 and Change-3 to keep the code\n> consistent or maybe just Change-3.\n\nRight. I don't think it's a good idea to back-patch Change-2 in\nbranches prior to v14 as it's not a relevant issue. Regarding\nback-patching Change-3 to branches prior 14, I think it may be okay\ntil v11, but I'd be hesitant for v10 as the final release comes in a\nmonth.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 18 Oct 2022 09:58:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "Dear Sawada-san, Amit,\r\n\r\n> IIUC Change-2 is required in v16 and HEAD but not mandatory in v15 and\r\n> v14. The reason why we need Change-2 is that there is a case where we\r\n> mark only subtransactions as containing catalog change while not doing\r\n> that for its top-level transaction. In v15 and v14, since we mark both\r\n> subtransactions and top-level transaction in\r\n> SnapBuildXidSetCatalogChanges() as containing catalog changes, we\r\n> don't get the assertion failure at \"Assert(!needs_snapshot ||\r\n> needs_timetravel)\".\r\n\r\nIncidentally, I agreed that Change-2 is needed for HEAD (and v16), not v15 and v14.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 18 Oct 2022 01:30:57 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 6:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Oct 17, 2022 at 4:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > IIUC, here you are speaking of three different changes. Change-1: Add\n> > a check in AssertTXNLsnOrder() to skip assert checking till we reach\n> > start_decoding_at. Change-2: Set needs_timetravel to true in one of\n> > the else if branches in SnapBuildCommitTxn(). Change-3: Remove the\n> > call to ReorderBufferAssignChild() from SnapBuildXidSetCatalogChanges\n> > in PG-14/15 as that won't be required after Change-1.\n>\n> Yes.\n>\n> >\n> > AFAIU, Change-1 is required till v10; Change-2 and Change-3 are\n> > required in HEAD/v15/v14 to fix the problem.\n>\n> IIUC Change-2 is required in v16 and HEAD\n>\n\nWhy are you referring v16 and HEAD separately?\n\n> but not mandatory in v15 and\n> v14. The reason why we need Change-2 is that there is a case where we\n> mark only subtransactions as containing catalog change while not doing\n> that for its top-level transaction. In v15 and v14, since we mark both\n> subtransactions and top-level transaction in\n> SnapBuildXidSetCatalogChanges() as containing catalog changes, we\n> don't get the assertion failure at \"Assert(!needs_snapshot ||\n> needs_timetravel)\".\n>\n> Regarding Change-3, it's required in v15 and v14 but not in HEAD and\n> v16. Since we didn't add SnapBuildXidSetCatalogChanges() to v16 and\n> HEAD, Change-3 cannot be applied to the two branches.\n>\n> > Now, the second and third\n> > changes are not required in branches prior to v14 because we don't\n> > record invalidations via XLOG_XACT_INVALIDATIONS record. However, if\n> > we want, we can even back-patch Change-2 and Change-3 to keep the code\n> > consistent or maybe just Change-3.\n>\n> Right. 
I don't think it's a good idea to back-patch Change-2 in\n> branches prior to v14 as it's not a relevant issue.\n>\n\nFair enough but then why to even backpatch it to v15 and v14?\n\n> Regarding\n> back-patching Change-3 to branches prior 14, I think it may be okay\n> til v11, but I'd be hesitant for v10 as the final release comes in a\n> month.\n>\n\nSo to fix the issue in all branches, what we need to do is to\nbackpatch change-1: in all branches till v10, change-2: in HEAD, and\nchange-3: in V15 and V14. Additionally, we think, it is okay to\nbackpatch change-3 till v11 as it is mainly done to avoid the problem\nfixed by change-1 and it makes code consistent in back branches.\n\nI think because the test case proposed needs all three changes, we can\npush the change-1 without a test case and then as a second patch have\nchange-2 for HEAD and change-3 for back branches with the test case.\nDo you have any other ideas to proceed here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 18 Oct 2022 09:37:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 1:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 18, 2022 at 6:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Oct 17, 2022 at 4:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > IIUC, here you are speaking of three different changes. Change-1: Add\n> > > a check in AssertTXNLsnOrder() to skip assert checking till we reach\n> > > start_decoding_at. Change-2: Set needs_timetravel to true in one of\n> > > the else if branches in SnapBuildCommitTxn(). Change-3: Remove the\n> > > call to ReorderBufferAssignChild() from SnapBuildXidSetCatalogChanges\n> > > in PG-14/15 as that won't be required after Change-1.\n> >\n> > Yes.\n> >\n> > >\n> > > AFAIU, Change-1 is required till v10; Change-2 and Change-3 are\n> > > required in HEAD/v15/v14 to fix the problem.\n> >\n> > IIUC Change-2 is required in v16 and HEAD\n> >\n>\n> Why are you referring v16 and HEAD separately?\n\nSorry, my wrong, I was confused.\n\n>\n> > but not mandatory in v15 and\n> > v14. The reason why we need Change-2 is that there is a case where we\n> > mark only subtransactions as containing catalog change while not doing\n> > that for its top-level transaction. In v15 and v14, since we mark both\n> > subtransactions and top-level transaction in\n> > SnapBuildXidSetCatalogChanges() as containing catalog changes, we\n> > don't get the assertion failure at \"Assert(!needs_snapshot ||\n> > needs_timetravel)\".\n> >\n> > Regarding Change-3, it's required in v15 and v14 but not in HEAD and\n> > v16. Since we didn't add SnapBuildXidSetCatalogChanges() to v16 and\n> > HEAD, Change-3 cannot be applied to the two branches.\n> >\n> > > Now, the second and third\n> > > changes are not required in branches prior to v14 because we don't\n> > > record invalidations via XLOG_XACT_INVALIDATIONS record. 
However, if\n> > > we want, we can even back-patch Change-2 and Change-3 to keep the code\n> > > consistent or maybe just Change-3.\n> >\n> > Right. I don't think it's a good idea to back-patch Change-2 in\n> > branches prior to v14 as it's not a relevant issue.\n> >\n>\n> Fair enough but then why to even backpatch it to v15 and v14?\n\nOops, it's a typo. I wanted to say Change-2 should be back-patched only to HEAD.\n\n>\n> > Regarding\n> > back-patching Change-3 to branches prior 14, I think it may be okay\n> > til v11, but I'd be hesitant for v10 as the final release comes in a\n> > month.\n> >\n>\n> So to fix the issue in all branches, what we need to do is to\n> backpatch change-1: in all branches till v10, change-2: in HEAD, and\n> change-3: in V15 and V14. Additionally, we think, it is okay to\n> backpatch change-3 till v11 as it is mainly done to avoid the problem\n> fixed by change-1 and it makes code consistent in back branches.\n\nRight.\n\n>\n> I think because the test case proposed needs all three changes, we can\n> push the change-1 without a test case and then as a second patch have\n> change-2 for HEAD and change-3 for back branches with the test case.\n> Do you have any other ideas to proceed here?\n\nI found another test case that causes the assertion failure at\n\"Assert(!needs_snapshot || needs_timetravel);\" on all branches. I've\nattached the patch for the test case. In this test case, I modified a\nuser-catalog table instead of system-catalog table. That way, we don't\ngenerate invalidation messages while generating NEW_CID records. As a\nresult, we mark only the subtransactions as containing catalog change\nand don't make association between top-level and sub transactions. The\nassertion failure happens on all supported branches. 
If we need to fix\nthis (I believe so), Change-2 needs to be backpatched to all supported\nbranches.\n\nThere are three changes as Amit mentioned, and regarding the test\ncase, we have three test cases I've attached: truncate_testcase.patch,\nanalyze_testcase.patch, uesr_catalog_testcase.patch. The relationship\nbetween assertion failures and test cases is very complex. I could\nnot find any test case that causes only one assertion failure on all\nbranches. One idea to proceed is:\n\nPatch-1 includes Change-1 and is applied to all branches.\n\nPatch-2 includes Change-2 and the user_catalog test case, and is\napplied to all branches.\n\nPatch-3 includes Change-3 and the truncate test case (or the analyze\ntest case), and is applied to v14 and v15 (also till v11 if we\nprefer).\n\nThe patch-1 doesn't include any test case but the user_catalog test\ncase can test both Change-1 and Change-2 on all branches. In v15 and\nv14, the analyze test case causes both the assertions at\n\"Assert(txn->ninvalidations == 0);\" and \"Assert(prev_first_lsn <\ncur_txn->first_lsn);\" whereas the truncate test case causes the\nassertion only at \"Assert(txn->ninvalidations == 0);\". Since the\npatch-2 is applied on top of the patch-1, there is no difference in\nterms of testing Change-2.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 18 Oct 2022 17:14:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 1:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > I think because the test case proposed needs all three changes, we can\n> > push the change-1 without a test case and then as a second patch have\n> > change-2 for HEAD and change-3 for back branches with the test case.\n> > Do you have any other ideas to proceed here?\n>\n> I found another test case that causes the assertion failure at\n> \"Assert(!needs_snapshot || needs_timetravel);\" on all branches. I've\n> attached the patch for the test case. In this test case, I modified a\n> user-catalog table instead of system-catalog table. That way, we don't\n> generate invalidation messages while generating NEW_CID records. As a\n> result, we mark only the subtransactions as containing catalog change\n> and don't make association between top-level and sub transactions. The\n> assertion failure happens on all supported branches. If we need to fix\n> this (I believe so), Change-2 needs to be backpatched to all supported\n> branches.\n>\n> There are three changes as Amit mentioned, and regarding the test\n> case, we have three test cases I've attached: truncate_testcase.patch,\n> analyze_testcase.patch, uesr_catalog_testcase.patch. The relationship\n> between assertion failures and test cases are very complex. I could\n> not find any test case to cause only one assertion failure on all\n> branches. One idea to proceed is:\n>\n> Patch-1 includes Change-1 and is applied to all branches.\n>\n> Patch-2 includes Change-2 and the user_catalog test case, and is\n> applied to all branches.\n>\n> Patch-3 includes Change-3 and the truncate test case (or the analyze\n> test case), and is applied to v14 and v15 (also till v11 if we\n> prefer).\n>\n> The patch-1 doesn't include any test case but the user_catalog test\n> case can test both Change-1 and Change-2 on all branches.\n>\n\nI was wondering if it makes sense to commit both Change-1 and Change-2\ntogether as one patch? 
Both assertions are caused by a single test\ncase and are related to the general problem that the association of\ntop and sub transaction is only guaranteed to be formed before we\ndecode transaction changes. Also, it would be good to fix the problem\nwith a test case that can cause it. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 18 Oct 2022 16:19:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Mon, Oct 17, 2022 at 7:05 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 13, 2022 at 4:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > --- a/src/backend/replication/logical/decode.c\n> > +++ b/src/backend/replication/logical/decode.c\n> > @@ -113,6 +113,15 @@\n> > LogicalDecodingProcessRecord(LogicalDecodingContext *ctx,\n> > XLogReaderState *recor\n> > buf.origptr);\n> > }\n> >\n> > +#ifdef USE_ASSERT_CHECKING\n> > + /*\n> > + * Check the order of transaction LSNs when we reached the start decoding\n> > + * LSN. See the comments in AssertTXNLsnOrder() for details.\n> > + */\n> > + if (SnapBuildGetStartDecodingAt(ctx->snapshot_builder) == buf.origptr)\n> > + AssertTXNLsnOrder(ctx->reorder);\n> > +#endif\n> > +\n> > rmgr = GetRmgr(XLogRecGetRmid(record));\n> > >\n> >\n> > I am not able to think how/when this check will be useful. Because we\n> > skipped assert checking only for records that are prior to\n> > start_decoding_at point, I think for those records ordering should\n> > have been checked before the restart. start_decoding_at point will be\n> > either (a) confirmed_flush location, or (b) lsn sent by client, and\n> > any record prior to that must have been processed before restart.\n>\n> Good point. I was considering the case where the client sets far ahead\n> LSN but it's not worth considering this case in this context. I've\n> updated the patch accoringly.\n>\n\nOne minor comment:\nCan we slightly change the comment: \". The ordering of the records\nprior to the LSN, we should have been checked before the restart.\" to\n\". The ordering of the records prior to the start_decoding_at LSN\nshould have been checked before the restart.\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 18 Oct 2022 16:25:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 7:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 18, 2022 at 1:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > >\n> > > I think because the test case proposed needs all three changes, we can\n> > > push the change-1 without a test case and then as a second patch have\n> > > change-2 for HEAD and change-3 for back branches with the test case.\n> > > Do you have any other ideas to proceed here?\n> >\n> > I found another test case that causes the assertion failure at\n> > \"Assert(!needs_snapshot || needs_timetravel);\" on all branches. I've\n> > attached the patch for the test case. In this test case, I modified a\n> > user-catalog table instead of system-catalog table. That way, we don't\n> > generate invalidation messages while generating NEW_CID records. As a\n> > result, we mark only the subtransactions as containing catalog change\n> > and don't make association between top-level and sub transactions. The\n> > assertion failure happens on all supported branches. If we need to fix\n> > this (I believe so), Change-2 needs to be backpatched to all supported\n> > branches.\n> >\n> > There are three changes as Amit mentioned, and regarding the test\n> > case, we have three test cases I've attached: truncate_testcase.patch,\n> > analyze_testcase.patch, uesr_catalog_testcase.patch. The relationship\n> > between assertion failures and test cases are very complex. I could\n> > not find any test case to cause only one assertion failure on all\n> > branches. 
One idea to proceed is:\n> >\n> > Patch-1 includes Change-1 and is applied to all branches.\n> >\n> > Patch-2 includes Change-2 and the user_catalog test case, and is\n> > applied to all branches.\n> >\n> > Patch-3 includes Change-3 and the truncate test case (or the analyze\n> > test case), and is applied to v14 and v15 (also till v11 if we\n> > prefer).\n> >\n> > The patch-1 doesn't include any test case but the user_catalog test\n> > case can test both Change-1 and Change-2 on all branches.\n> >\n>\n> I was wondering if it makes sense to commit both Change-1 and Change-2\n> together as one patch? Both assertions are caused by a single test\n> case and are related to the general problem that the association of\n> top and sub transaction is only guaranteed to be formed before we\n> decode transaction changes. Also, it would be good to fix the problem\n> with a test case that can cause it. What do you think?\n\nYeah, it makes sense to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 18 Oct 2022 21:53:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 7:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 17, 2022 at 7:05 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Oct 13, 2022 at 4:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > --- a/src/backend/replication/logical/decode.c\n> > > +++ b/src/backend/replication/logical/decode.c\n> > > @@ -113,6 +113,15 @@\n> > > LogicalDecodingProcessRecord(LogicalDecodingContext *ctx,\n> > > XLogReaderState *recor\n> > > buf.origptr);\n> > > }\n> > >\n> > > +#ifdef USE_ASSERT_CHECKING\n> > > + /*\n> > > + * Check the order of transaction LSNs when we reached the start decoding\n> > > + * LSN. See the comments in AssertTXNLsnOrder() for details.\n> > > + */\n> > > + if (SnapBuildGetStartDecodingAt(ctx->snapshot_builder) == buf.origptr)\n> > > + AssertTXNLsnOrder(ctx->reorder);\n> > > +#endif\n> > > +\n> > > rmgr = GetRmgr(XLogRecGetRmid(record));\n> > > >\n> > >\n> > > I am not able to think how/when this check will be useful. Because we\n> > > skipped assert checking only for records that are prior to\n> > > start_decoding_at point, I think for those records ordering should\n> > > have been checked before the restart. start_decoding_at point will be\n> > > either (a) confirmed_flush location, or (b) lsn sent by client, and\n> > > any record prior to that must have been processed before restart.\n> >\n> > Good point. I was considering the case where the client sets far ahead\n> > LSN but it's not worth considering this case in this context. I've\n> > updated the patch accoringly.\n> >\n>\n> One minor comment:\n> Can we slightly change the comment: \". The ordering of the records\n> prior to the LSN, we should have been checked before the restart.\" to\n> \". The ordering of the records prior to the start_decoding_at LSN\n> should have been checked before the restart.\"?\n\nAgreed. 
I'll update the patch accordingly.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 18 Oct 2022 21:54:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 9:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Oct 18, 2022 at 7:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 18, 2022 at 1:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > >\n> > > > I think because the test case proposed needs all three changes, we can\n> > > > push the change-1 without a test case and then as a second patch have\n> > > > change-2 for HEAD and change-3 for back branches with the test case.\n> > > > Do you have any other ideas to proceed here?\n> > >\n> > > I found another test case that causes the assertion failure at\n> > > \"Assert(!needs_snapshot || needs_timetravel);\" on all branches. I've\n> > > attached the patch for the test case. In this test case, I modified a\n> > > user-catalog table instead of system-catalog table. That way, we don't\n> > > generate invalidation messages while generating NEW_CID records. As a\n> > > result, we mark only the subtransactions as containing catalog change\n> > > and don't make association between top-level and sub transactions. The\n> > > assertion failure happens on all supported branches. If we need to fix\n> > > this (I believe so), Change-2 needs to be backpatched to all supported\n> > > branches.\n> > >\n> > > There are three changes as Amit mentioned, and regarding the test\n> > > case, we have three test cases I've attached: truncate_testcase.patch,\n> > > analyze_testcase.patch, uesr_catalog_testcase.patch. The relationship\n> > > between assertion failures and test cases are very complex. I could\n> > > not find any test case to cause only one assertion failure on all\n> > > branches. 
One idea to proceed is:\n> > >\n> > > Patch-1 includes Change-1 and is applied to all branches.\n> > >\n> > > Patch-2 includes Change-2 and the user_catalog test case, and is\n> > > applied to all branches.\n> > >\n> > > Patch-3 includes Change-3 and the truncate test case (or the analyze\n> > > test case), and is applied to v14 and v15 (also till v11 if we\n> > > prefer).\n> > >\n> > > The patch-1 doesn't include any test case but the user_catalog test\n> > > case can test both Change-1 and Change-2 on all branches.\n> > >\n> >\n> > I was wondering if it makes sense to commit both Change-1 and Change-2\n> > together as one patch? Both assertions are caused by a single test\n> > case and are related to the general problem that the association of\n> > top and sub transaction is only guaranteed to be formed before we\n> > decode transaction changes. Also, it would be good to fix the problem\n> > with a test case that can cause it. What do you think?\n>\n> Yeah, it makes sense to me.\n>\n\nI've attached two patches that need to be back-patched to all branches\nand includes Change-1, Change-2, and a test case for them. FYI this\npatch resolves the assertion failure reported in this thread as well\nas one reported in another thread[2]. So I borrowed some of the\nchanges from the patch[2] Osumi-san recently proposed.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/TYCPR01MB83733C6CEAE47D0280814D5AED7A9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n[2] https://www.postgresql.org/message-id/TYAPR01MB5866B30A1439043B1FC3F21EF5229%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 19 Oct 2022 11:58:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 11:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Oct 18, 2022 at 9:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Oct 18, 2022 at 7:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 18, 2022 at 1:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > >\n> > > > > I think because the test case proposed needs all three changes, we can\n> > > > > push the change-1 without a test case and then as a second patch have\n> > > > > change-2 for HEAD and change-3 for back branches with the test case.\n> > > > > Do you have any other ideas to proceed here?\n> > > >\n> > > > I found another test case that causes the assertion failure at\n> > > > \"Assert(!needs_snapshot || needs_timetravel);\" on all branches. I've\n> > > > attached the patch for the test case. In this test case, I modified a\n> > > > user-catalog table instead of system-catalog table. That way, we don't\n> > > > generate invalidation messages while generating NEW_CID records. As a\n> > > > result, we mark only the subtransactions as containing catalog change\n> > > > and don't make association between top-level and sub transactions. The\n> > > > assertion failure happens on all supported branches. If we need to fix\n> > > > this (I believe so), Change-2 needs to be backpatched to all supported\n> > > > branches.\n> > > >\n> > > > There are three changes as Amit mentioned, and regarding the test\n> > > > case, we have three test cases I've attached: truncate_testcase.patch,\n> > > > analyze_testcase.patch, uesr_catalog_testcase.patch. The relationship\n> > > > between assertion failures and test cases are very complex. I could\n> > > > not find any test case to cause only one assertion failure on all\n> > > > branches. 
One idea to proceed is:\n> > > >\n> > > > Patch-1 includes Change-1 and is applied to all branches.\n> > > >\n> > > > Patch-2 includes Change-2 and the user_catalog test case, and is\n> > > > applied to all branches.\n> > > >\n> > > > Patch-3 includes Change-3 and the truncate test case (or the analyze\n> > > > test case), and is applied to v14 and v15 (also till v11 if we\n> > > > prefer).\n> > > >\n> > > > The patch-1 doesn't include any test case but the user_catalog test\n> > > > case can test both Change-1 and Change-2 on all branches.\n> > > >\n> > >\n> > > I was wondering if it makes sense to commit both Change-1 and Change-2\n> > > together as one patch? Both assertions are caused by a single test\n> > > case and are related to the general problem that the association of\n> > > top and sub transaction is only guaranteed to be formed before we\n> > > decode transaction changes. Also, it would be good to fix the problem\n> > > with a test case that can cause it. What do you think?\n> >\n> > Yeah, it makes sense to me.\n> >\n>\n> I've attached two patches that need to be back-patched to all branches\n> and includes Change-1, Change-2, and a test case for them. FYI this\n> patch resolves the assertion failure reported in this thread as well\n> as one reported in another thread[2]. So I borrowed some of the\n> changes from the patch[2] Osumi-san recently proposed.\n>\n\nI've attached patches for Change-3 with a test case. Please review them as well.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 19 Oct 2022 13:09:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 11:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Oct 18, 2022 at 9:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Oct 18, 2022 at 7:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 18, 2022 at 1:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > >\n> > > > > I think because the test case proposed needs all three changes, we can\n> > > > > push the change-1 without a test case and then as a second patch have\n> > > > > change-2 for HEAD and change-3 for back branches with the test case.\n> > > > > Do you have any other ideas to proceed here?\n> > > >\n> > > > I found another test case that causes the assertion failure at\n> > > > \"Assert(!needs_snapshot || needs_timetravel);\" on all branches. I've\n> > > > attached the patch for the test case. In this test case, I modified a\n> > > > user-catalog table instead of system-catalog table. That way, we don't\n> > > > generate invalidation messages while generating NEW_CID records. As a\n> > > > result, we mark only the subtransactions as containing catalog change\n> > > > and don't make association between top-level and sub transactions. The\n> > > > assertion failure happens on all supported branches. If we need to fix\n> > > > this (I believe so), Change-2 needs to be backpatched to all supported\n> > > > branches.\n> > > >\n> > > > There are three changes as Amit mentioned, and regarding the test\n> > > > case, we have three test cases I've attached: truncate_testcase.patch,\n> > > > analyze_testcase.patch, uesr_catalog_testcase.patch. The relationship\n> > > > between assertion failures and test cases are very complex. I could\n> > > > not find any test case to cause only one assertion failure on all\n> > > > branches. 
One idea to proceed is:\n> > > >\n> > > > Patch-1 includes Change-1 and is applied to all branches.\n> > > >\n> > > > Patch-2 includes Change-2 and the user_catalog test case, and is\n> > > > applied to all branches.\n> > > >\n> > > > Patch-3 includes Change-3 and the truncate test case (or the analyze\n> > > > test case), and is applied to v14 and v15 (also till v11 if we\n> > > > prefer).\n> > > >\n> > > > The patch-1 doesn't include any test case but the user_catalog test\n> > > > case can test both Change-1 and Change-2 on all branches.\n> > > >\n> > >\n> > > I was wondering if it makes sense to commit both Change-1 and Change-2\n> > > together as one patch? Both assertions are caused by a single test\n> > > case and are related to the general problem that the association of\n> > > top and sub transaction is only guaranteed to be formed before we\n> > > decode transaction changes. Also, it would be good to fix the problem\n> > > with a test case that can cause it. What do you think?\n> >\n> > Yeah, it makes sense to me.\n> >\n>\n> I've attached two patches that need to be back-patched to all branches\n> and includes Change-1, Change-2, and a test case for them. FYI this\n> patch resolves the assertion failure reported in this thread as well\n> as one reported in another thread[2]. So I borrowed some of the\n> changes from the patch[2] Osumi-san recently proposed.\n>\n\nAmit pointed out offlist that the changes in reorderbuffer.c is not\npgindent'ed. I've run pgindent and attached updated patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 19 Oct 2022 16:38:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 1:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 19, 2022 at 11:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > I've attached two patches that need to be back-patched to all branches\n> > and includes Change-1, Change-2, and a test case for them. FYI this\n> > patch resolves the assertion failure reported in this thread as well\n> > as one reported in another thread[2]. So I borrowed some of the\n> > changes from the patch[2] Osumi-san recently proposed.\n> >\n>\n> Amit pointed out offlist that the changes in reorderbuffer.c is not\n> pgindent'ed. I've run pgindent and attached updated patches.\n>\n\nThanks, I have tested these across all branches till v10 and it works\nas expected. I am planning to push this tomorrow unless I see any\nfurther comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 19 Oct 2022 16:47:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 9:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached patches for Change-3 with a test case. Please review them as well.\n>\n\nThe patch looks mostly good to me apart from few minor comments which\nare as follows:\n1.\n+# The last decoding restarts from the first checkpoint, and add\ninvalidation messages\n+# generated by \"s0_truncate\" to the subtransaction. When decoding the\ncommit record of\n+# the top-level transaction, we mark both top-level transaction and\nits subtransactions\n+# as containing catalog changes. However, we check if we don't create\nthe association\n+# between top-level and subtransactions at this time. Otherwise, we\nmiss executing\n+# invalidation messages when forgetting the transaction.\n+permutation \"s0_init\" \"s0_begin\" \"s0_savepoint\" \"s0_insert\"\n\"s1_checkpoint\" \"s1_get_changes\" \"s0_truncate\" \"s0_commit\" \"s0_begin\"\n\"s0_insert\" \"s1_checkpoint\" \"s1_get_changes\" \"s0_commit\"\n\"s1_get_changes\"\n\nThe second part of this comment seems to say things more than required\nwhich makes it less clear. How about something like: \"The last\ndecoding restarts from the first checkpoint and adds invalidation\nmessages generated by \"s0_truncate\" to the subtransaction. While\nprocessing the commit record for the top-level transaction, we decide\nto skip this xact but ensure that corresponding invalidation messages\nget processed.\"?\n\n2.\n+ /*\n+ * We will assign subtransactions to the top transaction before\n+ * replaying the contents of the transaction.\n+ */\n\nI don't think we need this comment.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 20 Oct 2022 15:27:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 4:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 19, 2022 at 1:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 19, 2022 at 11:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > >\n> > > I've attached two patches that need to be back-patched to all branches\n> > > and includes Change-1, Change-2, and a test case for them. FYI this\n> > > patch resolves the assertion failure reported in this thread as well\n> > > as one reported in another thread[2]. So I borrowed some of the\n> > > changes from the patch[2] Osumi-san recently proposed.\n> > >\n> >\n> > Amit pointed out offlist that the changes in reorderbuffer.c is not\n> > pgindent'ed. I've run pgindent and attached updated patches.\n> >\n>\n> Thanks, I have tested these across all branches till v10 and it works\n> as expected. I am planning to push this tomorrow unless I see any\n> further comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 20 Oct 2022 16:39:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 8:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 19, 2022 at 4:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Oct 19, 2022 at 1:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 19, 2022 at 11:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > >\n> > > > I've attached two patches that need to be back-patched to all branches\n> > > > and includes Change-1, Change-2, and a test case for them. FYI this\n> > > > patch resolves the assertion failure reported in this thread as well\n> > > > as one reported in another thread[2]. So I borrowed some of the\n> > > > changes from the patch[2] Osumi-san recently proposed.\n> > > >\n> > >\n> > > Amit pointed out offlist that the changes in reorderbuffer.c is not\n> > > pgindent'ed. I've run pgindent and attached updated patches.\n> > >\n> >\n> > Thanks, I have tested these across all branches till v10 and it works\n> > as expected. I am planning to push this tomorrow unless I see any\n> > further comments.\n> >\n>\n> Pushed.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Oct 2022 11:26:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 6:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 19, 2022 at 9:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached patches for Change-3 with a test case. Please review them as well.\n> >\n>\n> The patch looks mostly good to me apart from few minor comments which\n> are as follows:\n> 1.\n> +# The last decoding restarts from the first checkpoint, and add\n> invalidation messages\n> +# generated by \"s0_truncate\" to the subtransaction. When decoding the\n> commit record of\n> +# the top-level transaction, we mark both top-level transaction and\n> its subtransactions\n> +# as containing catalog changes. However, we check if we don't create\n> the association\n> +# between top-level and subtransactions at this time. Otherwise, we\n> miss executing\n> +# invalidation messages when forgetting the transaction.\n> +permutation \"s0_init\" \"s0_begin\" \"s0_savepoint\" \"s0_insert\"\n> \"s1_checkpoint\" \"s1_get_changes\" \"s0_truncate\" \"s0_commit\" \"s0_begin\"\n> \"s0_insert\" \"s1_checkpoint\" \"s1_get_changes\" \"s0_commit\"\n> \"s1_get_changes\"\n>\n> The second part of this comment seems to say things more than required\n> which makes it less clear. How about something like: \"The last\n> decoding restarts from the first checkpoint and adds invalidation\n> messages generated by \"s0_truncate\" to the subtransaction. While\n> processing the commit record for the top-level transaction, we decide\n> to skip this xact but ensure that corresponding invalidation messages\n> get processed.\"?\n>\n> 2.\n> + /*\n> + * We will assign subtransactions to the top transaction before\n> + * replaying the contents of the transaction.\n> + */\n>\n> I don't think we need this comment.\n>\n\nThank you for the comment! I agreed with all comments and I've updated\npatches accordingly.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 21 Oct 2022 11:31:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Fri, Oct 21, 2022 at 8:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Thank you for the comment! I agreed with all comments and I've updated\n> patches accordingly.\n>\n\nPushed after removing the test case from v11-13 branches as it is not\nrelevant to those branches and the test-1 in\ncatalog_change_snapshot.spec already tests the same case for those\nbranches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 21 Oct 2022 11:19:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "Hello,\n\n21.10.2022 08:49, Amit Kapila wrote:\n> On Fri, Oct 21, 2022 at 8:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> Thank you for the comment! I agreed with all comments and I've updated\n>> patches accordingly.\n>>\n> Pushed after removing the test case from v11-13 branches as it is not\n> relevant to those branches and the test-1 in\n> catalog_change_snapshot.spec already tests the same case for those\n> branches.\n\nI've managed to get that assertion failure again (on master) while playing\nwith the concurrent installcheck. This can be easily reproduced with the\nfollowing script:\nnumclients=5\nfor ((c=1;c<=numclients;c++)); do\n cp -r contrib/test_decoding contrib/test_decoding_$c\n sed \"s/isolation_slot/isolation_slot_$c/\" -i contrib/test_decoding_$c/specs/catalog_change_snapshot.spec # Use \nindependent slots\n sed \"$(printf '$p; %.0s' `seq 50`)\" -i contrib/test_decoding_$c/specs/catalog_change_snapshot.spec # Repeat the last \npermutation 50 times\ndone\nfor ((c=1;c<=numclients;c++)); do\n EXTRA_REGRESS_OPTS=\"--dbname=regress_$c\" make -s installcheck-force -C contrib/test_decoding_$c USE_MODULE_DB=1 \n >\"installcheck-$c.log\" 2>&1 &\ndone\nwait\ngrep 'TRAP:' server.log\n\nProduces for me:\nTRAP: failed Assert(\"prev_first_lsn < cur_txn->first_lsn\"), File: \"reorderbuffer.c\", Line: 942, PID: 3794105\nTRAP: failed Assert(\"prev_first_lsn < cur_txn->first_lsn\"), File: \"reorderbuffer.c\", Line: 942, PID: 3794104\nTRAP: failed Assert(\"prev_first_lsn < cur_txn->first_lsn\"), File: \"reorderbuffer.c\", Line: 942, PID: 3794099\nTRAP: failed Assert(\"prev_first_lsn < cur_txn->first_lsn\"), File: \"reorderbuffer.c\", Line: 942, PID: 3794105\nTRAP: failed Assert(\"prev_first_lsn < cur_txn->first_lsn\"), File: \"reorderbuffer.c\", Line: 942, PID: 3794104\nTRAP: failed Assert(\"prev_first_lsn < cur_txn->first_lsn\"), File: \"reorderbuffer.c\", Line: 942, PID: 3794099\n\nWith the debug logging added inside AssertTXNLsnOrder() 
I see:\nctx->snapshot_builder->start_decoding_at: 209807224, ctx->reader->EndRecPtr: 210043072,\nSnapBuildXactNeedsSkip(ctx->snapshot_builder, ctx->reader->EndRecPtr): 0\nand inside the loop:\ncur_txn->first_lsn: 209792872\ncur_txn->first_lsn: 209975744\ncur_txn->first_lsn: 210043008\ncur_txn->first_lsn: 210043008\nand it triggers the Assert.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 6 Jun 2023 12:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On 6/6/23 11:00, Alexander Lakhin wrote:\n> Hello,\n> ...> With the debug logging added inside AssertTXNLsnOrder() I see:\n> ctx->snapshot_builder->start_decoding_at: 209807224,\n> ctx->reader->EndRecPtr: 210043072,\n> SnapBuildXactNeedsSkip(ctx->snapshot_builder, ctx->reader->EndRecPtr): 0\n> and inside the loop:\n> cur_txn->first_lsn: 209792872\n> cur_txn->first_lsn: 209975744\n> cur_txn->first_lsn: 210043008\n> cur_txn->first_lsn: 210043008\n> and it triggers the Assert.\n> \n\nSo what's the prev_first_lsn value for these first_lsn values? How does\nit change over time? Did you try looking at the pg_waldump for these\npositions?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 6 Jun 2023 11:56:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "Hello Tomas,\n\n06.06.2023 12:56, Tomas Vondra wrote:\n> On 6/6/23 11:00, Alexander Lakhin wrote:\n>> Hello,\n>> ...> With the debug logging added inside AssertTXNLsnOrder() I see:\n>> ctx->snapshot_builder->start_decoding_at: 209807224,\n>> ctx->reader->EndRecPtr: 210043072,\n>> SnapBuildXactNeedsSkip(ctx->snapshot_builder, ctx->reader->EndRecPtr): 0\n>> and inside the loop:\n>> cur_txn->first_lsn: 209792872\n>> cur_txn->first_lsn: 209975744\n>> cur_txn->first_lsn: 210043008\n>> cur_txn->first_lsn: 210043008\n>> and it triggers the Assert.\n>>\n> So what's the prev_first_lsn value for these first_lsn values? How does\n> it change over time? Did you try looking at the pg_waldump for these\n> positions?\n\nWith more logging I've got (for another run):\nReorderBufferTXNByXid| xid: 3397, lsn: c1fbc80\n\nctx->snapshot_builder->start_decoding_at: c1f2cc0, ctx->reader->EndRecPtr: c1fbcc0, \nSnapBuildXactNeedsSkip(ctx->snapshot_builder, ctx->reader->EndRecPtr): 0\nprev_first_lsn: 0, cur_txn->first_lsn: c1fbc80\nprev_first_lsn: c1fbc80, cur_txn->first_lsn: c1fbc80\nTRAP: failed Assert(\"prev_first_lsn < cur_txn->first_lsn\") ...\n\nwaldump for 00000001000000000000000C shows:\ngrep c1fbc80:\nrmgr: Heap2 len (rec/tot): 60/ 60, tx: 3398, lsn: 0/0C1FBC80, prev 0/0C1FBC50, desc: NEW_CID rel: \n1663/18763/19987, tid: 0/1, cmin: 1, cmax: 4294967295, combo: 4294967295\nrmgr: Heap len (rec/tot): 59/ 59, tx: 3398, lsn: 0/0C1FBCC0, prev 0/0C1FBC80, desc: INSERT+INIT off: \n1, flags: 0x08, blkref #0: rel 1663/18763/19987 blk 0\n\ngrep '( 3397| 3398)'\nrmgr: Transaction len (rec/tot): 43/ 43, tx: 3398, lsn: 0/0C1F2B20, prev 0/0C1F2688, desc: ASSIGNMENT xtop \n3397: subxacts: 3398\nrmgr: Heap len (rec/tot): 59/ 59, tx: 3398, lsn: 0/0C1F2B50, prev 0/0C1F2B20, desc: INSERT+INIT off: \n1, flags: 0x08, blkref #0: rel 1663/18763/19981 blk 0\nrmgr: Standby len (rec/tot): 62/ 62, tx: 0, lsn: 0/0C1F2BD0, prev 0/0C1F2B90, desc: RUNNING_XACTS \nnextXid 3400 latestCompletedXid 
3396 oldestRunningXid 3397; 2 xacts: 3399 3397; 1 subxacts: 3398\nrmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn: 0/0C1F2C80, prev 0/0C1F2C50, desc: RUNNING_XACTS \nnextXid 3400 latestCompletedXid 3399 oldestRunningXid 3397; 1 xacts: 3397; 1 subxacts: 3398\nrmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn: 0/0C1F2CC0, prev 0/0C1F2C80, desc: \nCHECKPOINT_ONLINE redo 0/C1F2C10; tli 1; prev tli 1; fpw true; xid 0:3400; oid 24576; multi 13; offset 29; oldest xid \n722 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 3397; online\nrmgr: Standby len (rec/tot): 62/ 62, tx: 0, lsn: 0/0C1FBAD0, prev 0/0C1FBAA0, desc: RUNNING_XACTS \nnextXid 3401 latestCompletedXid 3399 oldestRunningXid 3397; 2 xacts: 3400 3397; 1 subxacts: 3398\nrmgr: Heap2 len (rec/tot): 60/ 60, tx: 3398, lsn: 0/0C1FBC80, prev 0/0C1FBC50, desc: NEW_CID rel: \n1663/18763/19987, tid: 0/1, cmin: 1, cmax: 4294967295, combo: 4294967295\nrmgr: Heap len (rec/tot): 59/ 59, tx: 3398, lsn: 0/0C1FBCC0, prev 0/0C1FBC80, desc: INSERT+INIT off: \n1, flags: 0x08, blkref #0: rel 1663/18763/19987 blk 0\nrmgr: Transaction len (rec/tot): 54/ 54, tx: 3397, lsn: 0/0C1FBD00, prev 0/0C1FBCC0, desc: COMMIT \n2023-06-06 13:55:26.955268 MSK; subxacts: 3398\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 6 Jun 2023 15:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "\n\nOn 6/6/23 14:00, Alexander Lakhin wrote:\n> Hello Tomas,\n> \n> 06.06.2023 12:56, Tomas Vondra wrote:\n>> On 6/6/23 11:00, Alexander Lakhin wrote:\n>>> Hello,\n>>> ...> With the debug logging added inside AssertTXNLsnOrder() I see:\n>>> ctx->snapshot_builder->start_decoding_at: 209807224,\n>>> ctx->reader->EndRecPtr: 210043072,\n>>> SnapBuildXactNeedsSkip(ctx->snapshot_builder, ctx->reader->EndRecPtr): 0\n>>> and inside the loop:\n>>> cur_txn->first_lsn: 209792872\n>>> cur_txn->first_lsn: 209975744\n>>> cur_txn->first_lsn: 210043008\n>>> cur_txn->first_lsn: 210043008\n>>> and it triggers the Assert.\n>>>\n>> So what's the prev_first_lsn value for these first_lsn values? How does\n>> it change over time? Did you try looking at the pg_waldump for these\n>> positions?\n> \n> With more logging I've got (for another run):\n> ReorderBufferTXNByXid| xid: 3397, lsn: c1fbc80\n> \n> ctx->snapshot_builder->start_decoding_at: c1f2cc0,\n> ctx->reader->EndRecPtr: c1fbcc0,\n> SnapBuildXactNeedsSkip(ctx->snapshot_builder, ctx->reader->EndRecPtr): 0\n> prev_first_lsn: 0, cur_txn->first_lsn: c1fbc80\n> prev_first_lsn: c1fbc80, cur_txn->first_lsn: c1fbc80\n> TRAP: failed Assert(\"prev_first_lsn < cur_txn->first_lsn\") ...\n> \n> waldump for 00000001000000000000000C shows:\n> grep c1fbc80:\n> rmgr: Heap2 len (rec/tot): 60/ 60, tx: 3398, lsn:\n> 0/0C1FBC80, prev 0/0C1FBC50, desc: NEW_CID rel: 1663/18763/19987, tid:\n> 0/1, cmin: 1, cmax: 4294967295, combo: 4294967295\n> rmgr: Heap len (rec/tot): 59/ 59, tx: 3398, lsn:\n> 0/0C1FBCC0, prev 0/0C1FBC80, desc: INSERT+INIT off: 1, flags: 0x08,\n> blkref #0: rel 1663/18763/19987 blk 0\n> \n> grep '( 3397| 3398)'\n\nI've been able to reproduce this, after messing with the script a little\nbit (I had to skip the test_decoding regression tests, because that was\ncomplaining about slots already existing etc).\n\nAnyway, AssertTXNLsnOrder sees these two transactions (before aborting):\n\n 26662 0/6462E6F0 (first 0/0)\n 26661 
0/6462E6F0 (first 0/6462E6F0)\n\n\nwhere 26661 is the top xact, 26662 is a subxact of 26661. This is\nclearly a problem, because we really should not have subxact in this\nlist once the assignment gets applied.\n\nAnd the relevant WAL looks like this:\n\n---------------------------------------------------------------------\n26662, lsn: 0/645EDAA0, prev 0/645EDA60, desc: ASSIGNMENT xtop 26661:\nsubxacts: 26662\n26662, lsn: 0/645EDAD0, prev 0/645EDAA0, desc: INSERT+INIT off: 1,\nflags: 0x08, blkref #0: rel 1663/125447/126835 blk 0\n...\n 0, lsn: 0/6462E5D8, prev 0/6462E2A0, desc: RUNNING_XACTS nextXid\n26673 latestCompletedXid 26672 oldestRunningXid 26661; 3 xacts: 26667\n26661 26664; 3 subxacts: 26668 26662 26665\n...\n26662, lsn: 0/6462E6F0, prev 0/6462E678, desc: NEW_CID rel:\n1663/125447/126841, tid: 0/1, cmin: 1, cmax: 4294967295, combo: 4294967295\n26662, lsn: 0/6462E730, prev 0/6462E6F0, desc: INSERT+INIT off: 1,\nflags: 0x08, blkref #0: rel 1663/125447/126841 blk 0\n26661, lsn: 0/6462E770, prev 0/6462E730, desc: COMMIT 2023-06-06\n16:41:24.442870 CEST; subxacts: 26662\n---------------------------------------------------------------------\n\nso the assignment is the *first* thing that happens for these xacts.\n\nHowever, we skip the assignment, because the log for this call of\nget_changes says this:\n\n LOG: logical decoding found consistent point at 0/6462E5D8\n\nso we fail to realize the 26662 is a subxact.\n\nThen when processing the NEW_CID, SnapBuildProcessNewCid chimes in and\ndoes this:\n\n ReorderBufferXidSetCatalogChanges(builder->reorder, xid, lsn);\n\n ReorderBufferAddNewTupleCids(builder->reorder, xlrec->top_xid, lsn,\n xlrec->target_locator, xlrec->target_tid,\n xlrec->cmin, xlrec->cmax,\n xlrec->combocid);\n\nand ReorderBufferAddNewTupleCids() proceeds to enter an entry for the\npassed XID (which is xlrec->top_xid, 26661), but with LSN of the WAL\nrecord. 
But ReorderBufferXidSetCatalogChanges() already did the same\nthing for the subxact 26662, as it has no idea it's a subxact (due to\nthe skipped assignment).\n\nI haven't figured out what exactly is happening / what it should be\ndoing instead. But it seems wrong to skip the assignment - I wonder if\nSnapBuildProcessRunningXacts might be doing that too eagerly.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 6 Jun 2023 17:42:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "\n\nOn 6/6/23 17:42, Tomas Vondra wrote:\n> \n> \n> On 6/6/23 14:00, Alexander Lakhin wrote:\n>> Hello Tomas,\n>>\n>> 06.06.2023 12:56, Tomas Vondra wrote:\n>>> On 6/6/23 11:00, Alexander Lakhin wrote:\n>>>> Hello,\n>>>> ...> With the debug logging added inside AssertTXNLsnOrder() I see:\n>>>> ctx->snapshot_builder->start_decoding_at: 209807224,\n>>>> ctx->reader->EndRecPtr: 210043072,\n>>>> SnapBuildXactNeedsSkip(ctx->snapshot_builder, ctx->reader->EndRecPtr): 0\n>>>> and inside the loop:\n>>>> cur_txn->first_lsn: 209792872\n>>>> cur_txn->first_lsn: 209975744\n>>>> cur_txn->first_lsn: 210043008\n>>>> cur_txn->first_lsn: 210043008\n>>>> and it triggers the Assert.\n>>>>\n>>> So what's the prev_first_lsn value for these first_lsn values? How does\n>>> it change over time? Did you try looking at the pg_waldump for these\n>>> positions?\n>>\n>> With more logging I've got (for another run):\n>> ReorderBufferTXNByXid| xid: 3397, lsn: c1fbc80\n>>\n>> ctx->snapshot_builder->start_decoding_at: c1f2cc0,\n>> ctx->reader->EndRecPtr: c1fbcc0,\n>> SnapBuildXactNeedsSkip(ctx->snapshot_builder, ctx->reader->EndRecPtr): 0\n>> prev_first_lsn: 0, cur_txn->first_lsn: c1fbc80\n>> prev_first_lsn: c1fbc80, cur_txn->first_lsn: c1fbc80\n>> TRAP: failed Assert(\"prev_first_lsn < cur_txn->first_lsn\") ...\n>>\n>> waldump for 00000001000000000000000C shows:\n>> grep c1fbc80:\n>> rmgr: Heap2 len (rec/tot): 60/ 60, tx: 3398, lsn:\n>> 0/0C1FBC80, prev 0/0C1FBC50, desc: NEW_CID rel: 1663/18763/19987, tid:\n>> 0/1, cmin: 1, cmax: 4294967295, combo: 4294967295\n>> rmgr: Heap len (rec/tot): 59/ 59, tx: 3398, lsn:\n>> 0/0C1FBCC0, prev 0/0C1FBC80, desc: INSERT+INIT off: 1, flags: 0x08,\n>> blkref #0: rel 1663/18763/19987 blk 0\n>>\n>> grep '( 3397| 3398)'\n> \n> I've been able to reproduce this, after messing with the script a little\n> bit (I had to skip the test_decoding regression tests, because that was\n> complaining about slots already existing etc).\n> \n> Anyway, 
AssertTXNLsnOrder sees these two transactions (before aborting):\n> \n> 26662 0/6462E6F0 (first 0/0)\n> 26661 0/6462E6F0 (first 0/6462E6F0)\n> \n> \n> where 26661 is the top xact, 26662 is a subxact of 26661. This is\n> clearly a problem, because we really should not have subxact in this\n> list once the assignment gets applied.\n> \n> And the relevant WAL looks like this:\n> \n> ---------------------------------------------------------------------\n> 26662, lsn: 0/645EDAA0, prev 0/645EDA60, desc: ASSIGNMENT xtop 26661:\n> subxacts: 26662\n> 26662, lsn: 0/645EDAD0, prev 0/645EDAA0, desc: INSERT+INIT off: 1,\n> flags: 0x08, blkref #0: rel 1663/125447/126835 blk 0\n> ...\n> 0, lsn: 0/6462E5D8, prev 0/6462E2A0, desc: RUNNING_XACTS nextXid\n> 26673 latestCompletedXid 26672 oldestRunningXid 26661; 3 xacts: 26667\n> 26661 26664; 3 subxacts: 26668 26662 26665\n> ...\n> 26662, lsn: 0/6462E6F0, prev 0/6462E678, desc: NEW_CID rel:\n> 1663/125447/126841, tid: 0/1, cmin: 1, cmax: 4294967295, combo: 4294967295\n> 26662, lsn: 0/6462E730, prev 0/6462E6F0, desc: INSERT+INIT off: 1,\n> flags: 0x08, blkref #0: rel 1663/125447/126841 blk 0\n> 26661, lsn: 0/6462E770, prev 0/6462E730, desc: COMMIT 2023-06-06\n> 16:41:24.442870 CEST; subxacts: 26662\n> ---------------------------------------------------------------------\n> \n> so the assignment is the *first* thing that happens for these xacts.\n> \n> However, we skip the assignment, because the log for this call of\n> get_changes says this:\n> \n> LOG: logical decoding found consistent point at 0/6462E5D8\n> \n> so we fail to realize the 26662 is a subxact.\n> \n> Then when processing the NEW_CID, SnapBuildProcessNewCid chimes in and\n> does this:\n> \n> ReorderBufferXidSetCatalogChanges(builder->reorder, xid, lsn);\n> \n> ReorderBufferAddNewTupleCids(builder->reorder, xlrec->top_xid, lsn,\n> xlrec->target_locator, xlrec->target_tid,\n> xlrec->cmin, xlrec->cmax,\n> xlrec->combocid);\n> \n> and ReorderBufferAddNewTupleCids() proceeds 
to enter an entry for the\n> passed XID (which is xlrec->top_xid, 26661), but with LSN of the WAL\n> record. But ReorderBufferXidSetCatalogChanges() already did the same\n> thing for the subxact 26662, as it has no idea it's a subxact (due to\n> the skipped assignment).\n> \n> I haven't figured out what exactly is happening / what it should be\n> doing instead. But it seems wrong to skip the assignment - I wonder if\n> SnapBuildProcessRunningXacts might be doing that too eagerly.\n> \n\nI investigated this a bit more, and the problem actually seems to be\nmore like this:\n\n1) we create a new logical replication slot\n\n2) while building the initial snapshot, we start with current insert\nlocation, and then process records\n\n3) for RUNNING_XACTS we call SnapBuildProcessRunningXacts, which calls\nSnapBuildFindSnapshot\n\n4) SnapBuildFindSnapshot does this:\n\n else if (!builder->building_full_snapshot &&\n SnapBuildRestore(builder, lsn))\n {\n /* there won't be any state to cleanup */\n return false;\n }\n\n5) because create_logical_replication_slot and get_changes both call\nCreateInitDecodingContext with needs_full_snapshot=false, we end up\ncalling SnapBuildRestore()\n\n6) once in a while this likely hits a snapshot created by a concurrent\nsession (for another logical slot) with SNAPBUILD_CONSISTENT state\n\n\nI don't know what's the correct fix for this. Maybe we should set\nneeds_full_snapshot=true in create_logical_replication_slot when\ncreating the initial snapshot? Maybe we should use true even in\npg_logical_slot_get_changes_guts? This seems to fix the crashes ...\n\nThat'll prevent reading the serialized snapshots like this, but how\ncould that ever work? It seems pretty much guaranteed to ignore any\nassignments that happened right before the snapshot?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 7 Jun 2023 02:48:27 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 6:18 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 6/6/23 17:42, Tomas Vondra wrote:\n> >\n>\n> In investigated this a bit more, and the problem actually seems to be\n> more like this:\n>\n> 1) we create a new logical replication slot\n>\n> 2) while building the initial snapshot, we start with current insert\n> location, and then process records\n>\n> 3) for RUNNING_XACTS we call SnapBuildProcessRunningXacts, which calls\n> SnapBuildFindSnapshot\n>\n> 4) SnapBuildFindSnapshot does this:\n>\n> else if (!builder->building_full_snapshot &&\n> SnapBuildRestore(builder, lsn))\n> {\n> /* there won't be any state to cleanup */\n> return false;\n> }\n>\n> 5) because create_logical_replication_slot and get_changes both call\n> CreateInitDecodingContext with needs_full_snapshot=false, we end up\n> calling SnapBuildRestore()\n>\n> 6) once in a while this likely hits a snapshot created by a concurrent\n> session (for another logical slot) with SNAPBUILD_CONSISTENT state\n>\n\nI think this analysis doesn't seem to match what you mentioned in the\nprevious email which is as follows:\n> > However, we skip the assignment, because the log for this call of\n> > get_changes says this:\n> >\n> > LOG: logical decoding found consistent point at 0/6462E5D8\n> >\n> > so we fail to realize the 26662 is a subxact.\n\nThis is because the above LOG is printed when\n\"running->oldestRunningXid == running->nextXid\" not when we restore\nthe snapshot. Am I missing something?\n\n>\n> I don't know what's the correct fix for this. Maybe we should set\n> needs_full_snapshot=true in create_logical_replication_slot when\n> creating the initial snapshot? Maybe we should use true even in\n> pg_logical_slot_get_changes_guts? 
This seems to fix the crashes ...\n>\n\nI don't think that is advisable because setting \"needs_full_snapshot\"\nto true for decoding means the snapshot will start tracking\nnon-catalog committed xacts as well which is costly and is not\nrequired for this case.\n\n> That'll prevent reading the serialized snapshots like this, but how\n> could that ever work? It seems pretty much guaranteed to ignore any\n> assignments that happened right before the snapshot?\n>\n\nThis part needs some analysis/thoughts. BTW, do you mean that it skips\nthe assignment (a) because the assignment record is before we reach a\nconsistent point, or (b) because we start reading WAL after the\nassignment, or (c) something else? If you intend to say (a) then can\nyou please point me to the code you are referring to for the same?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Jun 2023 10:48:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "\n\nOn 6/7/23 07:18, Amit Kapila wrote:\n> On Wed, Jun 7, 2023 at 6:18 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 6/6/23 17:42, Tomas Vondra wrote:\n>>>\n>>\n>> In investigated this a bit more, and the problem actually seems to be\n>> more like this:\n>>\n>> 1) we create a new logical replication slot\n>>\n>> 2) while building the initial snapshot, we start with current insert\n>> location, and then process records\n>>\n>> 3) for RUNNING_XACTS we call SnapBuildProcessRunningXacts, which calls\n>> SnapBuildFindSnapshot\n>>\n>> 4) SnapBuildFindSnapshot does this:\n>>\n>> else if (!builder->building_full_snapshot &&\n>> SnapBuildRestore(builder, lsn))\n>> {\n>> /* there won't be any state to cleanup */\n>> return false;\n>> }\n>>\n>> 5) because create_logical_replication_slot and get_changes both call\n>> CreateInitDecodingContext with needs_full_snapshot=false, we end up\n>> calling SnapBuildRestore()\n>>\n>> 6) once in a while this likely hits a snapshot created by a concurrent\n>> session (for another logical slot) with SNAPBUILD_CONSISTENT state\n>>\n> \n> I think this analysis doesn't seem to match what you mentioned in the\n> previous email which is as follows:\n>>> However, we skip the assignment, because the log for this call of\n>>> get_changes says this:\n>>>\n>>> LOG: logical decoding found consistent point at 0/6462E5D8\n>>>\n>>> so we fail to realize the 26662 is a subxact.\n> \n> This is because the above LOG is printed when\n> \"running->oldestRunningXid == running->nextXid\" not when we restore\n> the snapshot. Am, I missing something?\n> \n\nThere are multiple places in snapbuild.c with the same message. Two in\nSnapBuildFindSnapshot (one of them being the one you mentioned) and one\nin SnapBuildRestore (which is the one actually triggered).\n\n>>\n>> I don't know what's the correct fix for this. 
Maybe we should set\n>> needs_full_snapshot=true in create_logical_replication_slot when\n>> creating the initial snapshot? Maybe we should use true even in\n>> pg_logical_slot_get_changes_guts? This seems to fix the crashes ...\n>>\n> \n> I don't think that is advisable because setting \"needs_full_snapshot\"\n> to true for decoding means the snapshot will start tracking\n> non-catalog committed xacts as well which is costly and is not\n> required for this case.\n> \n\nTrue. TBH I managed to forget most of these details, so I meant it more\nlike a data point that it seems to fix the issue for me.\n\n>> That'll prevent reading the serialized snapshots like this, but how\n>> could that ever work? It seems pretty much guaranteed to ignore any\n>> assignments that happened right before the snapshot?\n>>\n> \n> This part needs some analysis/thoughts. BTW, do you mean that it skips\n> the assignment (a) because the assignment record is before we reach a\n> consistent point, or (b) because we start reading WAL after the\n> assignment, or (c) something else? If you intend to say (a) then can\n> you please point me to the code you are referring to for the same?\n> \n\nWell, I think the issue is pretty clear - we end up with an initial\nsnapshot that's in between the ASSIGNMENT and NEW_CID, and because\nNEW_CID has both xact and subxact XID it fails because we add two TXNs\nwith the same LSN, not realizing one of them is subxact.\n\nThat's obviously wrong, although somewhat benign in production because\nit only fails because of hitting an assert. Regular builds are likely to\njust ignore it, although I haven't checked if the COMMIT cleanup (I\nwonder if we remove the subxact from the toplevel list on commit).\n\nI think the problem is we just grab an existing snapshot, before all\nrunning xacts complete. Maybe we should fix that, and leave the\nneeds_full_snapshot alone. 
Haven't tried that, though.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 7 Jun 2023 14:32:15 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 6:02 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n> Well, I think the issue is pretty clear - we end up with an initial\n> snapshot that's in between the ASSIGNMENT and NEW_CID, and because\n> NEW_CID has both xact and subxact XID it fails because we add two TXNs\n> with the same LSN, not realizing one of them is subxact.\n>\n> That's obviously wrong, although somewhat benign in production because\n> it only fails because of hitting an assert.\n>\n\nDoesn't this indicate that we can end up decoding a partial\ntransaction when we restore a snapshot? Won't that be a problem even\nfor production?\n\n> Regular builds are likely to\n> just ignore it, although I haven't checked if the COMMIT cleanup (I\n> wonder if we remove the subxact from the toplevel list on commit).\n>\n> I think the problem is we just grab an existing snapshot, before all\n> running xacts complete. Maybe we should fix that, and leave the\n> needs_full_snapshot alone.\n>\n\nIt is not clear what exactly you have in mind to fix this because if\nthere is no running xact, we don't even need to restore the snapshot\nbecause of a prior check \"if (running->oldestRunningXid ==\nrunning->nextXid)\". I think the main problem is that we started\ndecoding immediately from the point where we restored a snapshot as at\nthat point we could have some partial running xacts.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 13 Jun 2023 09:34:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Tuesday, June 13, 2023 12:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, Jun 7, 2023 at 6:02 PM Tomas Vondra\r\n> <tomas.vondra@enterprisedb.com> wrote:\r\n> >\r\n> >\r\n> > Well, I think the issue is pretty clear - we end up with an initial\r\n> > snapshot that's in between the ASSIGNMENT and NEW_CID, and because\r\n> > NEW_CID has both xact and subxact XID it fails because we add two TXNs\r\n> > with the same LSN, not realizing one of them is subxact.\r\n> >\r\n> > That's obviously wrong, although somewhat benign in production because\r\n> > it only fails because of hitting an assert.\r\n> >\r\n> \r\n> Doesn't this indicate that we can end up decoding a partial transaction when\r\n> we restore a snapshot? Won't that be a problem even for production?\r\n\r\nYes, I think it can cause the problem that only partial changes of a transaction are streamed.\r\nI tried to reproduce this and here are the steps. Note, to make sure the test\r\nwon't be affected by other running_xact WALs, I changed LOG_SNAPSHOT_INTERVAL_MS\r\nto a bigger number.\r\n\r\nsession 1:\r\n-----\r\ncreate table test(a int);\r\nSELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot_1', 'test_decoding');\r\n-----\r\n\r\nsession 2:\r\n-----\r\n- Start a transaction\r\nBEGIN;\r\nINSERT INTO test VALUES(1);\r\n-----\r\n\r\nsession 3:\r\n-----\r\n- Create another slot isolation_slot_2, it should choose a restart_point which is\r\n- after the changes that happened in session 2. 
Note, to let the current slot\r\n- restore another snapshot, we need to use gdb to block the current backend at\r\n- SnapBuildFindSnapshot(), the backend should have logged the running_xacts WAL\r\n- before reaching SnapBuildFindSnapshot.\r\n\r\nSELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot_2', 'test_decoding');\r\n-----\r\n\r\nsession 1:\r\n-----\r\n- Since there is a running_xacts which session 3 logged, the current backend will\r\n- serialize the snapshot when decoding the running_xacts WAL, and the snapshot\r\n- can be used by other slots (e.g. isolation_slot_2)\r\n\r\nSELECT data FROM pg_logical_slot_get_changes('isolation_slot_1', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');\r\n-----\r\n\r\nsession 2:\r\n-----\r\n- Insert some different data and commit the transaction.\r\n\r\nINSERT INTO test VALUES(2);\r\nINSERT INTO test VALUES(3);\r\nINSERT INTO test VALUES(4);\r\nCOMMIT\r\n-----\r\n\r\nsession 3:\r\n-----\r\n- Release the process and try to stream the changes, since the restart point is\r\n- at the middle of the transaction, it will stream partial changes of the\r\n- transaction which was committed in session 2:\r\n\r\nSELECT data FROM pg_logical_slot_get_changes('isolation_slot_2', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');\r\n-----\r\n\r\nResults (partial streamed changes):\r\npostgres=# SELECT data FROM pg_logical_slot_get_changes('isolation_slot_2', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');\r\n data\r\n-----------------------------------------\r\n BEGIN\r\n table public.test: INSERT: a[integer]:2\r\n table public.test: INSERT: a[integer]:3\r\n table public.test: INSERT: a[integer]:4\r\n COMMIT\r\n(5 rows)\r\n\r\n> \r\n> > Regular builds are likely to\r\n> > just ignore it, although I haven't checked if the COMMIT cleanup (I\r\n> > wonder if we remove the subxact from the toplevel list on commit).\r\n> >\r\n> > I think the problem is we just grab an existing snapshot, before 
all\r\n> > running xacts complete. Maybe we should fix that, and leave the\r\n> > needs_full_snapshot alone.\r\n> >\r\n> \r\n> It is not clear what exactly you have in mind to fix this because if there is no\r\n> running xact, we don't even need to restore the snapshot because of a prior\r\n> check \"if (running->oldestRunningXid ==\r\n> running->nextXid)\". I think the main problem is that we started\r\n> decoding immediately from the point where we restored a snapshot as at that\r\n> point we could have some partial running xacts.\r\n\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 13 Jun 2023 04:18:31 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Tuesday, June 13, 2023 12:19 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> On Tuesday, June 13, 2023 12:04 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Wed, Jun 7, 2023 at 6:02 PM Tomas Vondra\r\n> > <tomas.vondra@enterprisedb.com> wrote:\r\n> > >\r\n> > >\r\n> > > Well, I think the issue is pretty clear - we end up with an initial\r\n> > > snapshot that's in between the ASSIGNMENT and NEW_CID, and because\r\n> > > NEW_CID has both xact and subxact XID it fails because we add two\r\n> > > TXNs with the same LSN, not realizing one of them is subxact.\r\n> > >\r\n> > > That's obviously wrong, although somewhat benign in production\r\n> > > because it only fails because of hitting an assert.\r\n> > >\r\n> >\r\n> > Doesn't this indicate that we can end up decoding a partial\r\n> > transaction when we restore a snapshot? Won't that be a problem even for\r\n> production?\r\n> \r\n> Yes, I think it can cause the problem that only partial changes of a transaction\r\n> are streamed.\r\n> I tried to reproduce this and here are the steps. Note, to make sure the test\r\n> won't be affected by other running_xact WALs, I changed\r\n> LOG_SNAPSHOT_INTERVAL_MS to a bigger number.\r\n> \r\n> session 1:\r\n> -----\r\n> create table test(a int);\r\n> SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot_1',\r\n> 'test_decoding');\r\n> -----\r\n> \r\n> session 2:\r\n> -----\r\n> - Start a transaction\r\n> BEGIN;\r\n> INSERT INTO test VALUES(1);\r\n> -----\r\n> \r\n> session 3:\r\n> -----\r\n> - Create another slot isolation_slot_2, it should choose a restart_point which is\r\n> - after the changes that happened in session 2. 
Note, to let the current slot\r\n> - restore another snapshot, we need to use gdb to block the current backend\r\n> at\r\n> - SnapBuildFindSnapshot(), the backend should have logged the running_xacts\r\n> WAL\r\n> - before reaching SnapBuildFindSnapshot.\r\n> \r\n> SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot_2',\r\n> 'test_decoding');\r\n> -----\r\n> \r\n> session 1:\r\n> -----\r\n> - Since there is a running_xacts which session 3 logged, the current backend\r\n> will\r\n> - serialize the snapshot when decoding the running_xacts WAL, and the\r\n> snapshot\r\n> - can be used by other slots (e.g. isolation_slot_2)\r\n> \r\n> SELECT data FROM pg_logical_slot_get_changes('isolation_slot_1', NULL, NULL,\r\n> 'skip-empty-xacts', '1', 'include-xids', '0');\r\n> -----\r\n> \r\n> session 2:\r\n> -----\r\n> - Insert some different data and commit the transaction.\r\n> \r\n> INSERT INTO test VALUES(2);\r\n> INSERT INTO test VALUES(3);\r\n> INSERT INTO test VALUES(4);\r\n> COMMIT\r\n> -----\r\n> \r\n> session 3:\r\n> -----\r\n> - Release the process and try to stream the changes, since the restart point is\r\n> - at the middle of the transaction, it will stream partial changes of the\r\n> - transaction which was committed in session 2:\r\n> \r\n> SELECT data FROM pg_logical_slot_get_changes('isolation_slot_2', NULL, NULL,\r\n> 'skip-empty-xacts', '1', 'include-xids', '0');\r\n> -----\r\n> \r\n> Results (partial streamed changes):\r\n> postgres=# SELECT data FROM pg_logical_slot_get_changes('isolation_slot_2',\r\n> NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');\r\n> data\r\n> -----------------------------------------\r\n> BEGIN\r\n> table public.test: INSERT: a[integer]:2 table public.test: INSERT: a[integer]:3\r\n> table public.test: INSERT: a[integer]:4 COMMIT\r\n> (5 rows)\r\n> \r\n\r\nOne idea to fix the partial change stream problem would be that we record all\r\nthe running transaction's xid when restoring the snapshot 
in\r\nSnapBuildFindSnapshot(), and in the following decoding, we skip decoding\r\nchanges for the recorded transaction. Or we can do something similar to 7f13ac8 (serialize the\r\ninformation of running xacts if any)\r\n\r\nBut one point I am not very sure is that we might restore the snapshot in\r\nSnapBuildSerializationPoint() as well where we don't have running transactions\r\ninformation. Although SnapBuildSerializationPoint() is invoked for\r\nXLOG_END_OF_RECOVERY and XLOG_CHECKPOINT_SHUTDOWN records which seems no\r\nrunning transaction will be there when logging. But I am not 100% sure if the\r\nproblem can happen in this case as well.\r\n\r\nThoughts?\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Wed, 14 Jun 2023 03:15:30 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "\n\nOn 6/14/23 05:15, Zhijie Hou (Fujitsu) wrote:\n> On Tuesday, June 13, 2023 12:19 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com> wrote:\n>>\n>> On Tuesday, June 13, 2023 12:04 PM Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>>>\n>>> On Wed, Jun 7, 2023 at 6:02 PM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>>\n>>>> Well, I think the issue is pretty clear - we end up with an initial\n>>>> snapshot that's in between the ASSIGNMENT and NEW_CID, and because\n>>>> NEW_CID has both xact and subxact XID it fails because we add two\n>>>> TXNs with the same LSN, not realizing one of them is subxact.\n>>>>\n>>>> That's obviously wrong, although somewhat benign in production\n>>>> because it only fails because of hitting an assert.\n>>>>\n>>>\n>>> Doesn't this indicate that we can end up decoding a partial\n>>> transaction when we restore a snapshot? Won't that be a problem even for\n>> production?\n>>\n>> Yes, I think it can cause the problem that only partial changes of a transaction\n>> are streamed.\n>> I tried to reproduce this and here are the steps. Note, to make sure the test\n>> won't be affected by other running_xact WALs, I changed\n>> LOG_SNAPSHOT_INTERVAL_MS to a bigger number.\n>>\n>> session 1:\n>> -----\n>> create table test(a int);\n>> SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot_1',\n>> 'test_decoding');\n>> -----\n>>\n>> session 2:\n>> -----\n>> - Start a transaction\n>> BEGIN;\n>> INSERT INTO test VALUES(1);\n>> -----\n>>\n>> session 3:\n>> -----\n>> - Create another slot isolation_slot_2, it should choose a restart_point which is\n>> - after the changes that happened in session 2. 
Note, to let the current slot\n>> - restore another snapshot, we need to use gdb to block the current backend\n>> at\n>> - SnapBuildFindSnapshot(), the backend should have logged the running_xacts\n>> WAL\n>> - before reaching SnapBuildFindSnapshot.\n>>\n>> SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot_2',\n>> 'test_decoding');\n>> -----\n>>\n>> session 1:\n>> -----\n>> - Since there is a running_xacts which session 3 logged, the current backend\n>> will\n>> - serialize the snapshot when decoding the running_xacts WAL, and the\n>> snapshot\n>> - can be used by other slots (e.g. isolation_slot_2)\n>>\n>> SELECT data FROM pg_logical_slot_get_changes('isolation_slot_1', NULL, NULL,\n>> 'skip-empty-xacts', '1', 'include-xids', '0');\n>> -----\n>>\n>> session 2:\n>> -----\n>> - Insert some different data and commit the transaction.\n>>\n>> INSERT INTO test VALUES(2);\n>> INSERT INTO test VALUES(3);\n>> INSERT INTO test VALUES(4);\n>> COMMIT\n>> -----\n>>\n>> session 3:\n>> -----\n>> - Release the process and try to stream the changes, since the restart point is\n>> - at the middle of the transaction, it will stream partial changes of the\n>> - transaction which was committed in session 2:\n>>\n>> SELECT data FROM pg_logical_slot_get_changes('isolation_slot_2', NULL, NULL,\n>> 'skip-empty-xacts', '1', 'include-xids', '0');\n>> -----\n>>\n>> Results (partial streamed changes):\n>> postgres=# SELECT data FROM pg_logical_slot_get_changes('isolation_slot_2',\n>> NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');\n>> data\n>> -----------------------------------------\n>> BEGIN\n>> table public.test: INSERT: a[integer]:2 table public.test: INSERT: a[integer]:3\n>> table public.test: INSERT: a[integer]:4 COMMIT\n>> (5 rows)\n>>\n> \n> One idea to fix the partial change stream problem would be that we record all\n> the running transaction's xid when restoring the snapshot in\n> SnapBuildFindSnapshot(), and in the following decoding, we skip 
decoding\n> changes for the recorded transaction. Or we can do similar to 7f13ac8(serialize the\n> information of running xacts if any)\n> \n\nWe need to think about how to fix this in backbranches, and the idea\nwith serializing running transactions seems rather unbackpatchable (as\nit changes on-disk state).\n\n> But one point I am not very sure is that we might retore snapshot in\n> SnapBuildSerializationPoint() as well where we don't have running transactions\n> information. Although SnapBuildSerializationPoint() is invoked for\n> XLOG_END_OF_RECOVERY and XLOG_CHECKPOINT_SHUTDOWN records which seems no\n> running transaction will be there when logging. But I am not 100% sure if the\n> problem can happen in this case as well.\n> \n\nSo, is the problem that we grab an existing snapshot in SnapBuildRestore\nwhen called from SnapBuildFindSnapshot? If so, would it be possible to\njust skip this while building the initial snapshot?\n\nI tried that (by commenting out the block in SnapBuildFindSnapshot), but\nit causes some output changes in test_decoding regression tests. I\nhaven't investigated why exactly.\n\nAlso, can you try if we still stream the partial transaction with\ncreate_logical_replication_slot building a full snapshot?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 14 Jun 2023 11:05:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "On Wednesday, June 14, 2023 5:05 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\r\n> On 6/14/23 05:15, Zhijie Hou (Fujitsu) wrote:\r\n> > On Tuesday, June 13, 2023 12:19 PM Zhijie Hou (Fujitsu)\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >>\r\n> >> On Tuesday, June 13, 2023 12:04 PM Amit Kapila\r\n> >> <amit.kapila16@gmail.com>\r\n> >> wrote:\r\n> >>>\r\n> >>> On Wed, Jun 7, 2023 at 6:02 PM Tomas Vondra\r\n> >>> <tomas.vondra@enterprisedb.com> wrote:\r\n> >>>>\r\n> >>>>\r\n> >>>> Well, I think the issue is pretty clear - we end up with an initial\r\n> >>>> snapshot that's in between the ASSIGNMENT and NEW_CID, and because\r\n> >>>> NEW_CID has both xact and subxact XID it fails because we add two\r\n> >>>> TXNs with the same LSN, not realizing one of them is subxact.\r\n> >>>>\r\n> >>>> That's obviously wrong, although somewhat benign in production\r\n> >>>> because it only fails because of hitting an assert.\r\n> >>>>\r\n> >>>\r\n> >>> Doesn't this indicate that we can end up decoding a partial\r\n> >>> transaction when we restore a snapshot? Won't that be a problem even\r\n> >>> for\r\n> >> production?\r\n> >>\r\n> >> Yes, I think it can cause the problem that only partial changes of a\r\n> >> transaction are streamed.\r\n> >> I tried to reproduce this and here are the steps. 
Note, to make sure\r\n> >> the test won't be affected by other running_xact WALs, I changed\r\n> >> LOG_SNAPSHOT_INTERVAL_MS to a bigger number.\r\n> >>\r\n> >> session 1:\r\n> >> -----\r\n> >> create table test(a int);\r\n> >> SELECT 'init' FROM\r\n> >> pg_create_logical_replication_slot('isolation_slot_1',\r\n> >> 'test_decoding');\r\n> >> -----\r\n> >>\r\n> >> session 2:\r\n> >> -----\r\n> >> - Start a transaction\r\n> >> BEGIN;\r\n> >> INSERT INTO test VALUES(1);\r\n> >> -----\r\n> >>\r\n> >> session 3:\r\n> >> -----\r\n> >> - Create another slot isolation_slot_2, it should choose a\r\n> >> restart_point which is\r\n> >> - after the changes that happened in session 2. Note, to let the\r\n> >> current slot\r\n> >> - restore another snapshot, we need to use gdb to block the current\r\n> >> backend at\r\n> >> - SnapBuildFindSnapshot(), the backend should have logged the\r\n> >> running_xacts WAL\r\n> >> - before reaching SnapBuildFindSnapshot.\r\n> >>\r\n> >> SELECT 'init' FROM\r\n> >> pg_create_logical_replication_slot('isolation_slot_2',\r\n> >> 'test_decoding');\r\n> >> -----\r\n> >>\r\n> >> session 1:\r\n> >> -----\r\n> >> - Since there is a running_xacts which session 3 logged, the current\r\n> >> backend will\r\n> >> - serialize the snapshot when decoding the running_xacts WAL, and the\r\n> >> snapshot\r\n> >> - can be used by other slots (e.g. 
isolation_slot_2)\r\n> >>\r\n> >> SELECT data FROM pg_logical_slot_get_changes('isolation_slot_1',\r\n> >> NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');\r\n> >> -----\r\n> >>\r\n> >> session 2:\r\n> >> -----\r\n> >> - Insert some different data and commit the transaction.\r\n> >>\r\n> >> INSERT INTO test VALUES(2);\r\n> >> INSERT INTO test VALUES(3);\r\n> >> INSERT INTO test VALUES(4);\r\n> >> COMMIT\r\n> >> -----\r\n> >>\r\n> >> session 3:\r\n> >> -----\r\n> >> - Release the process and try to stream the changes, since the\r\n> >> restart point is\r\n> >> - at the middle of the transaction, it will stream partial changes of\r\n> >> the\r\n> >> - transaction which was committed in session 2:\r\n> >>\r\n> >> SELECT data FROM pg_logical_slot_get_changes('isolation_slot_2',\r\n> >> NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');\r\n> >> -----\r\n> >>\r\n> >> Results (partial streamed changes):\r\n> >> postgres=# SELECT data FROM\r\n> >> pg_logical_slot_get_changes('isolation_slot_2',\r\n> >> NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');\r\n> >> data\r\n> >> -----------------------------------------\r\n> >> BEGIN\r\n> >> table public.test: INSERT: a[integer]:2 table public.test: INSERT:\r\n> >> a[integer]:3 table public.test: INSERT: a[integer]:4 COMMIT\r\n> >> (5 rows)\r\n> >>\r\n> >\r\n> > One idea to fix the partial change stream problem would be that we\r\n> > record all the running transaction's xid when restoring the snapshot\r\n> > in SnapBuildFindSnapshot(), and in the following decoding, we skip\r\n> > decoding changes for the recorded transaction. 
Or we can do similar to\r\n> > 7f13ac8 (serialize the information of running xacts if any)\r\n> >\r\n> \r\n> We need to think about how to fix this in backbranches, and the idea with\r\n> serializing running transactions seems rather unbackpatchable (as it changes\r\n> on-disk state).\r\n> \r\n> > But one point I am not very sure about is that we might restore a snapshot in\r\n> > SnapBuildSerializationPoint() as well, where we don't have running\r\n> > transactions information. Although SnapBuildSerializationPoint() is\r\n> > invoked for XLOG_END_OF_RECOVERY and XLOG_CHECKPOINT_SHUTDOWN records\r\n> > which seems no running transaction will be there when logging. But I\r\n> > am not 100% sure if the problem can happen in this case as well.\r\n> >\r\n...\r\n> \r\n> Also, can you try if we still stream the partial transaction with\r\n> create_logical_replication_slot building a full snapshot?\r\n\r\nYes, it can fix this problem because forcing create_logical_replication_slot\r\nto build a full snapshot avoids restoring the snapshot. But I am not sure if\r\nthis is the best fix, for the same reason (it's costly) mentioned by\r\nAmit [1].\r\n\r\n[1] https://www.postgresql.org/message-id/CAA4eK1Kro0rej%3DZXMhcdjs%2BaYsZvNywu3-cqdRUtyAp4zqpVWw%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Wed, 14 Jun 2023 13:39:11 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
},
{
"msg_contents": "\n\nOn 6/14/23 15:39, Zhijie Hou (Fujitsu) wrote:\n> On Wednesday, June 14, 2023 5:05 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>> ...\n>>\n>> Also, can you try if we still stream the partial transaction with\n>> create_logical_replication_slot building a full snapshot?\n> \n> Yes, It can fix this problem because force create_logical_replication_slot\n> build a full snapshot can avoid restoring the snapshot. But I am not sure if\n> this is the best fix for this for the same reason(it's costly) mentioned by\n> Amit[1].\n> \n\nCostly compared to the current behavior? Sure, but considering the\ncurrent behavior is incorrect/broken, that seems rather irrelevant.\nIncorrect behavior can be infinitely faster.\n\nI doubt it's significantly costlier than just setting the \"full\nsnapshot\" flag when building the initial snapshot - sure, it will take\nmore time than now, but that's kinda the whole point. It seems to me the\nproblem is exactly that it *doesn't* wait long enough.\n\nI may be misunderstanding the solution you proposed, but this:\n\n One idea to fix the partial change stream problem would be that we\n record all the running transaction's xid when restoring the snapshot\n in SnapBuildFindSnapshot(), and in the following decoding, we skip\n decoding changes for the recorded transaction\n\nsounds pretty close to what building a correct snapshot actually does.\n\nBut maybe I'm wrong - ultimately, the way to compare those approaches\nseems to be to prototype this proposal, and do some tests.\n\nThere's also the question of back branches, and it seems way simpler to\njust flip a flag and disable broken optimization than doing fairly\ncomplex stuff to save it.\n\nI'd also point out that (a) this only affects the initial snapshot, not\nevery time we start the decoding context, and (b) the slots created from\nwalsender already do that with (unless when copy_data=false).\n\nSo if needs_full_snapshot=true fixes the issue, I'd just do that as 
the\nfirst step - in master and backpatches. And then if we want to salvage\nthe optimization, we can try fixing it in master.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 14 Jun 2023 17:27:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\",\n File: \"reorderbuffer.c\", Line: 927, PID: 568639)"
}
] |
[
{
"msg_contents": "Hi,\n\nThe latest build fails.\nhttps://github.com/postgres/postgres/runs/8176044869\n\nIn file included from /tmp/cpluspluscheck.ggpN3I/test.cpp:3:\n[11:12:13.290] /tmp/cirrus-ci-build/contrib/cube/cubeparse.h:77:19: error: ‘NDBOX’ was not declared in this scope\n[11:12:13.290] 77 | int cube_yyparse (NDBOX **result, Size scanbuflen);\n[11:12:13.290]    |                   ^~~~~\n[11:12:13.290] /tmp/cirrus-ci-build/contrib/cube/cubeparse.h:77:27: error: ‘result’ was not declared in this scope\n[11:12:13.290] 77 | int cube_yyparse (NDBOX **result, Size scanbuflen);\n[11:12:13.290]    |                           ^~~~~~\n[11:12:13.290] /tmp/cirrus-ci-build/contrib/cube/cubeparse.h:77:40: error: expected primary-expression before ‘scanbuflen’\n[11:12:13.290] 77 | int cube_yyparse (NDBOX **result, Size scanbuflen);\n[11:12:13.290]    |                                        ^~~~~~~~~~\n[11:12:13.290] /tmp/cirrus-ci-build/contrib/cube/cubeparse.h:77:50: error: expression list treated as compound expression in initializer [-fpermissive]\n[11:12:13.290] 77 | int cube_yyparse (NDBOX **result, Size scanbuflen);\n[11:12:13.290]    |                                                  ^\n[11:12:13.455] In file included from /tmp/cpluspluscheck.ggpN3I/test.cpp:3:\n[11:12:13.455] segparse.h:90:18: error: ‘SEG’ was not declared in this scope\n[11:12:13.456] segparse.h:90:23: error: ‘result’ was not declared in this scope\n[11:12:13.860] make: *** [GNUmakefile:141: cpluspluscheck] Error 1\n\nNow I have some trouble in c.h with one of my tools:\nWindows 10 64 bits\nmsvc 2019 64 bits\n#error must have a working 64-bit integer datatype\n\nStrange.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 4 Sep 2022 09:58:34 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Latest build fails"
},
{
"msg_contents": "On Sun, Sep 4, 2022 at 09:58, Ranier Vilela <ranier.vf@gmail.com>\nwrote:\n\n>\n> Now I have some trouble in c.h with one of my tools:\n> Windows 10 64 bits\n> msvc 2019 64 bits\n> #error must have a working 64-bit integer datatype\n>\nNevermind, found the cause.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 4 Sep 2022 10:08:01 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Latest build fails"
}
] |
[
{
"msg_contents": "Greetings!\n\nOne of our clients experienced a crash of startup process with an error\n\"invalid memory alloc request size 1073741824\" on a hot standby, which\nended in replica reinit.\n\nAccording to logs, startup process crashed while trying to replay\n\"Standby/LOCK\" record with a huge list of locks(see attached\nreplicalog_tail.tgz):\n\nFATAL: XX000: invalid memory alloc request size 1073741824\nCONTEXT: WAL redo at 7/327F9248 for Standby/LOCK: xid 1638575 db 7550635\nrel 8500880 xid 1638575 db 7550635 rel 10324499...\nLOCATION: repalloc, mcxt.c:1075\nBACKTRACE:\n postgres: startup recovering\n000000010000000700000033(repalloc+0x61) [0x8d7611]\n postgres: startup recovering 000000010000000700000033() [0x691c29]\n postgres: startup recovering 000000010000000700000033() [0x691c74]\n postgres: startup recovering 000000010000000700000033(lappend+0x16)\n[0x691e76]\n postgres: startup recovering\n000000010000000700000033(StandbyAcquireAccessExclusiveLock+0xdd) [0x7786bd]\n postgres: startup recovering\n000000010000000700000033(standby_redo+0x5d) [0x7789ed]\n postgres: startup recovering\n000000010000000700000033(StartupXLOG+0x1055) [0x51d7c5]\n postgres: startup recovering\n000000010000000700000033(StartupProcessMain+0xcd) [0x71d65d]\n postgres: startup recovering\n000000010000000700000033(AuxiliaryProcessMain+0x40c) [0x52c7cc]\n postgres: startup recovering 000000010000000700000033() [0x71a62e]\n postgres: startup recovering\n000000010000000700000033(PostmasterMain+0xe74) [0x71cf74]\n postgres: startup recovering 000000010000000700000033(main+0x70d)\n[0x4891ad]\n /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f0be2db3555]\nLOG: 00000: startup process (PID 2650) exited with exit code 1\n\nLooks like startup process at some point hits the MaxAllocSize\nlimit(memutils.h), which forbids allocation of more than 1gb-1 bytes.\n\nJudging by pg_waldump output, there was long running transaction on\nprimary, that sequentially locked and modified a lot of 
tables. Right\nbefore the crash there were about 85k exclusively locked tables.\n\nTrying to reproduce the issue, I found out that the problem is not so much\nin the number of locks, but in the duration of the transaction. Replaying\n\"Standby/LOCK\" records, the startup process eventually crashes with the\nmentioned error if a long transaction holds a large number of locks long\nenough.\n\nI managed to reproduce the situation on 13.7, 14.4, 15beta3 and master\nusing the following steps:\n1) get primary and replica with the following settings:\nmax_locks_per_transaction = '10000' and max_connections = '1000'\n2) create 950k tables\n3) lock them in AccessExclusive mode in a transaction and leave it in \"idle\nin transaction\" state\n4) make some side activity with pgbench (pgbench -R 100 -P 5 -T 7200 -c 1)\nIn about 20-30 minutes the startup process crashes with the same error.\n\nAs far as I understand, there is a fixed number of AccessExclusive locks in\nthis scenario: 950k exclusive locks acquired by the \"long running\" transaction\nand no additional exclusive locks held by pgbench. But startup consumes\nmore and more memory while replaying records that contain exactly the same\nlist of locks. Could it be a memory leak? If not, is there any way to\nimprove this behavior?\n\nIf you're going to reproduce it, get a primary and replica with enough RAM\nand simultaneously run on the primary:\n$ *pgbench -i && pgbench -R 100 -P 5 -T 7200 -c 1* in one terminal\n$ *psql -f 950k_locks.sql* in another terminal\nand observe startup memory usage and the replica's logs.\n\nBest regards,\nDmitry Kuzmin",
"msg_date": "Mon, 5 Sep 2022 20:19:58 +1000",
"msg_from": "Dmitriy Kuzmin <kuzmin.db4@gmail.com>",
"msg_from_op": true,
"msg_subject": "Startup process on a hot standby crashes with an error \"invalid\n memory alloc request size 1073741824\" while replaying \"Standby/LOCK\" records"
},
{
"msg_contents": "On Mon, 5 Sept 2022 at 22:38, Dmitriy Kuzmin <kuzmin.db4@gmail.com> wrote:\n> One of our clients experienced a crash of startup process with an error \"invalid memory alloc request size 1073741824\" on a hot standby, which ended in replica reinit.\n>\n> According to logs, startup process crashed while trying to replay \"Standby/LOCK\" record with a huge list of locks(see attached replicalog_tail.tgz):\n>\n> FATAL: XX000: invalid memory alloc request size 1073741824\n> CONTEXT: WAL redo at 7/327F9248 for Standby/LOCK: xid 1638575 db 7550635 rel 8500880 xid 1638575 db 7550635 rel 10324499...\n> LOCATION: repalloc, mcxt.c:1075\n> BACKTRACE:\n> postgres: startup recovering 000000010000000700000033(repalloc+0x61) [0x8d7611]\n> postgres: startup recovering 000000010000000700000033() [0x691c29]\n> postgres: startup recovering 000000010000000700000033() [0x691c74]\n> postgres: startup recovering 000000010000000700000033(lappend+0x16) [0x691e76]\n\nThis must be the repalloc() in enlarge_list(). 1073741824 / 8 is\n134,217,728 (2^27). That's quite a bit more than 1 lock per your 950k\ntables.\n\nI wonder why the RecoveryLockListsEntry.locks list is getting so long.\n\nfrom the file you attached, I see:\n$ cat replicalog_tail | grep -Eio \"rel\\s([0-9]+)\" | wc -l\n950000\n\nSo that confirms there were 950k relations in the xl_standby_locks.\nThe contents of that message seem to be produced by standby_desc().\nThat should be the same WAL record that's processed by standby_redo()\nwhich adds the 950k locks to the RecoveryLockListsEntry.\n\nI'm not seeing why 950k becomes 134m.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Sep 2022 00:13:13 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Startup process on a hot standby crashes with an error \"invalid\n memory alloc request size 1073741824\" while replaying \"Standby/LOCK\" records"
},
{
"msg_contents": "Thanks, David!\n\nLet me know if there's any additional information i could provide.\n\nBest regards,\nDmitry Kuzmin\n\nпн, 5 сент. 2022 г. в 22:13, David Rowley <dgrowleyml@gmail.com>:\n\n> On Mon, 5 Sept 2022 at 22:38, Dmitriy Kuzmin <kuzmin.db4@gmail.com> wrote:\n> > One of our clients experienced a crash of startup process with an error\n> \"invalid memory alloc request size 1073741824\" on a hot standby, which\n> ended in replica reinit.\n> >\n> > According to logs, startup process crashed while trying to replay\n> \"Standby/LOCK\" record with a huge list of locks(see attached\n> replicalog_tail.tgz):\n> >\n> > FATAL: XX000: invalid memory alloc request size 1073741824\n> > CONTEXT: WAL redo at 7/327F9248 for Standby/LOCK: xid 1638575 db\n> 7550635 rel 8500880 xid 1638575 db 7550635 rel 10324499...\n> > LOCATION: repalloc, mcxt.c:1075\n> > BACKTRACE:\n> > postgres: startup recovering\n> 000000010000000700000033(repalloc+0x61) [0x8d7611]\n> > postgres: startup recovering 000000010000000700000033()\n> [0x691c29]\n> > postgres: startup recovering 000000010000000700000033()\n> [0x691c74]\n> > postgres: startup recovering\n> 000000010000000700000033(lappend+0x16) [0x691e76]\n>\n> This must be the repalloc() in enlarge_list(). 1073741824 / 8 is\n> 134,217,728 (2^27). 
That's quite a bit more than 1 lock per your 950k\n> tables.\n>\n> I wonder why the RecoveryLockListsEntry.locks list is getting so long.\n>\n> from the file you attached, I see:\n> $ cat replicalog_tail | grep -Eio \"rel\\s([0-9]+)\" | wc -l\n> 950000\n>\n> So that confirms there were 950k relations in the xl_standby_locks.\n> The contents of that message seem to be produced by standby_desc().\n> That should be the same WAL record that's processed by standby_redo()\n> which adds the 950k locks to the RecoveryLockListsEntry.\n>\n> I'm not seeing why 950k becomes 134m.\n>\n> David\n>",
"msg_date": "Tue, 6 Sep 2022 16:38:43 +1000",
"msg_from": "Dmitriy Kuzmin <kuzmin.db4@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Startup process on a hot standby crashes with an error \"invalid\n memory alloc request size 1073741824\" while replaying \"Standby/LOCK\" records"
},
{
"msg_contents": "[ redirecting to -hackers because patch attached ]\n\nDavid Rowley <dgrowleyml@gmail.com> writes:\n> So that confirms there were 950k relations in the xl_standby_locks.\n> The contents of that message seem to be produced by standby_desc().\n> That should be the same WAL record that's processed by standby_redo()\n> which adds the 950k locks to the RecoveryLockListsEntry.\n\n> I'm not seeing why 950k becomes 134m.\n\nI figured out what the problem is. The standby's startup process\nretains knowledge of all these locks in standby.c's RecoveryLockLists\ndata structure, which *has no de-duplication capability*. It'll add\nanother entry to the per-XID list any time it's told about a given\nexclusive lock. And checkpoints cause us to regurgitate the entire\nset of currently-held exclusive locks into the WAL. So if you have\na process holding a lot of exclusive locks, and sitting on them\nacross multiple checkpoints, standby startup processes will bloat.\nIt's not a true leak, in that we know where the memory is and\nwe'll release it whenever we see that XID commit/abort. And I doubt\nthat this is a common usage pattern, which probably explains the\nlack of previous complaints. Still, bloat bad.\n\nPFA a quick-hack fix that solves this issue by making per-transaction\nsubsidiary hash tables. That's overkill perhaps; I'm a little worried\nabout whether this slows down normal cases more than it's worth.\nBut we ought to do something about this, because aside from the\nduplication aspect the current storage of these lists seems mighty\nspace-inefficient.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 04 Oct 2022 18:54:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Startup process on a hot standby crashes with an error \"invalid\n memory alloc request size 1073741824\" while replaying \"Standby/LOCK\" records"
},
{
"msg_contents": "I wrote:\n> PFA a quick-hack fix that solves this issue by making per-transaction\n> subsidiary hash tables. That's overkill perhaps; I'm a little worried\n> about whether this slows down normal cases more than it's worth.\n> But we ought to do something about this, because aside from the\n> duplication aspect the current storage of these lists seems mighty\n> space-inefficient.\n\nAfter further thought, maybe it'd be better to do it as attached,\nwith one long-lived hash table for all the locks. This is a shade\nless space-efficient than the current code once you account for\ndynahash overhead, but the per-transaction overhead should be lower\nthan the previous patch since we only need to create/destroy a hash\ntable entry not a whole hash table.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 04 Oct 2022 19:53:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Startup process on a hot standby crashes with an error \"invalid\n memory alloc request size 1073741824\" while replaying \"Standby/LOCK\" records"
},
{
"msg_contents": "On Tue, Oct 04, 2022 at 07:53:11PM -0400, Tom Lane wrote:\n> I wrote:\n>> PFA a quick-hack fix that solves this issue by making per-transaction\n>> subsidiary hash tables. That's overkill perhaps; I'm a little worried\n>> about whether this slows down normal cases more than it's worth.\n>> But we ought to do something about this, because aside from the\n>> duplication aspect the current storage of these lists seems mighty\n>> space-inefficient.\n> \n> After further thought, maybe it'd be better to do it as attached,\n> with one long-lived hash table for all the locks. This is a shade\n> less space-efficient than the current code once you account for\n> dynahash overhead, but the per-transaction overhead should be lower\n> than the previous patch since we only need to create/destroy a hash\n> table entry not a whole hash table.\n\nThis feels like a natural way to solve this problem. I saw several cases\nof the issue that was fixed with 6301c3a, so I'm inclined to believe this\nusage pattern is actually somewhat common.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 4 Oct 2022 17:15:31 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Startup process on a hot standby crashes with an error \"invalid\n memory alloc request size 1073741824\" while replaying \"Standby/LOCK\" records"
},
{
"msg_contents": "At Tue, 4 Oct 2022 17:15:31 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Tue, Oct 04, 2022 at 07:53:11PM -0400, Tom Lane wrote:\n> > I wrote:\n> >> PFA a quick-hack fix that solves this issue by making per-transaction\n> >> subsidiary hash tables. That's overkill perhaps; I'm a little worried\n> >> about whether this slows down normal cases more than it's worth.\n> >> But we ought to do something about this, because aside from the\n> >> duplication aspect the current storage of these lists seems mighty\n> >> space-inefficient.\n> > \n> > After further thought, maybe it'd be better to do it as attached,\n> > with one long-lived hash table for all the locks. This is a shade\n> > less space-efficient than the current code once you account for\n> > dynahash overhead, but the per-transaction overhead should be lower\n> > than the previous patch since we only need to create/destroy a hash\n> > table entry not a whole hash table.\n\nThe first one is a straightforward outcome of the current implementation, but I\nlike the new one. I agree that it is natural and that the expected\noverhead per (typical) transaction is lower than both the first one\nand doing the same operation on a list. I don't think space\ninefficiency of that extent matters, since it is the startup\nprocess.\n\n> This feels like a natural way to solve this problem. I saw several cases\n> of the issue that was fixed with 6301c3a, so I'm inclined to believe this\n> usage pattern is actually somewhat common.\n\nSo releasing locks becomes somewhat slower? But it seems to still be\nfar faster than massively repetitive head-removal in a list.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 05 Oct 2022 10:41:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Startup process on a hot standby crashes with an error\n \"invalid memory alloc request size 1073741824\" while replaying\n \"Standby/LOCK\" records"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Tue, 4 Oct 2022 17:15:31 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n>> On Tue, Oct 04, 2022 at 07:53:11PM -0400, Tom Lane wrote:\n>>> After further thought, maybe it'd be better to do it as attached,\n>>> with one long-lived hash table for all the locks.\n\n> First one is straight forward outcome from the current implement but I\n> like the new one. I agree that it is natural and that the expected\n> overhead per (typical) transaction is lower than both the first one\n> and doing the same operation on a list. I don't think that space\n> inefficiency in that extent doesn't matter since it is the startup\n> process.\n\nTo get some hard numbers about this, I made a quick hack to collect\ngetrusage() numbers for the startup process (patch attached for\ndocumentation purposes). I then ran the recovery/t/027_stream_regress.pl\ntest a few times and collected the stats (also attached). This seems\nlike a reasonably decent baseline test, since the core regression tests\ncertainly take lots of AccessExclusiveLocks what with all the DDL\ninvolved, though they shouldn't ever take large numbers at once. Also\nthey don't run long enough for any lock list bloat to occur, so these\nnumbers don't reflect a case where the patches would provide benefit.\n\nIf you look hard, there's maybe about a 1% user-CPU penalty for patch 2,\nalthough that's well below the run-to-run variation so it's hard to be\nsure that it's real. The same comments apply to the max resident size\nstats. So I'm comforted that there's not a significant penalty here.\n\nI'll go ahead with patch 2 if there's not objection.\n\nOne other point to discuss: should we consider back-patching? I've\ngot mixed feelings about that myself. I don't think that cases where\nthis helps significantly are at all mainstream, so I'm kind of leaning\nto \"patch HEAD only\".\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 05 Oct 2022 11:30:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Startup process on a hot standby crashes with an error \"invalid\n memory alloc request size 1073741824\" while replaying \"Standby/LOCK\" records"
},
{
"msg_contents": "On Wed, Oct 05, 2022 at 11:30:22AM -0400, Tom Lane wrote:\n> One other point to discuss: should we consider back-patching? I've\n> got mixed feelings about that myself. I don't think that cases where\n> this helps significantly are at all mainstream, so I'm kind of leaning\n> to \"patch HEAD only\".\n\n+1. It can always be back-patched in the future if there are additional\nreports.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 5 Oct 2022 12:00:55 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Startup process on a hot standby crashes with an error \"invalid\n memory alloc request size 1073741824\" while replaying \"Standby/LOCK\" records"
},
{
"msg_contents": "On Wed, 5 Oct 2022 at 16:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Tue, 4 Oct 2022 17:15:31 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in\n> >> On Tue, Oct 04, 2022 at 07:53:11PM -0400, Tom Lane wrote:\n> >>> After further thought, maybe it'd be better to do it as attached,\n> >>> with one long-lived hash table for all the locks.\n>\n> > First one is straight forward outcome from the current implement but I\n> > like the new one. I agree that it is natural and that the expected\n> > overhead per (typical) transaction is lower than both the first one\n> > and doing the same operation on a list. I don't think that space\n> > inefficiency in that extent doesn't matter since it is the startup\n> > process.\n>\n> To get some hard numbers about this, I made a quick hack to collect\n> getrusage() numbers for the startup process (patch attached for\n> documentation purposes). I then ran the recovery/t/027_stream_regress.pl\n> test a few times and collected the stats (also attached). This seems\n> like a reasonably decent baseline test, since the core regression tests\n> certainly take lots of AccessExclusiveLocks what with all the DDL\n> involved, though they shouldn't ever take large numbers at once. Also\n> they don't run long enough for any lock list bloat to occur, so these\n> numbers don't reflect a case where the patches would provide benefit.\n>\n> If you look hard, there's maybe about a 1% user-CPU penalty for patch 2,\n> although that's well below the run-to-run variation so it's hard to be\n> sure that it's real. The same comments apply to the max resident size\n> stats. So I'm comforted that there's not a significant penalty here.\n>\n> I'll go ahead with patch 2 if there's not objection.\n\nHappy to see this change.\n\n> One other point to discuss: should we consider back-patching? I've\n> got mixed feelings about that myself. 
I don't think that cases where\n> this helps significantly are at all mainstream, so I'm kind of leaning\n> to \"patch HEAD only\".\n\nIt looks fine to eventually backpatch, since StandbyReleaseLockTree()\nwas optimized to only be called when the transaction had actually done\nsome AccessExclusiveLocks.\n\nSo the performance loss is minor and isolated to the users of such\nlocks, so I see no problems with it.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 10 Oct 2022 13:24:34 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Startup process on a hot standby crashes with an error \"invalid\n memory alloc request size 1073741824\" while replaying \"Standby/LOCK\" records"
},
{
"msg_contents": "> On Wed, 5 Oct 2022 at 16:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > One other point to discuss: should we consider back-patching? I've\n> > got mixed feelings about that myself. I don't think that cases where\n> > this helps significantly are at all mainstream, so I'm kind of leaning\n> > to \"patch HEAD only\".\n\nAt Mon, 10 Oct 2022 13:24:34 +0100, Simon Riggs <simon.riggs@enterprisedb.com> wrote in \n> It looks fine to eventually backpatch, since StandbyReleaseLockTree()\n> was optimized to only be called when the transaction had actually done\n> some AccessExclusiveLocks.\n> \n> So the performance loss is minor and isolated to the users of such\n> locks, so I see no problems with it.\n\nAt Wed, 5 Oct 2022 12:00:55 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> +1. It can always be back-patched in the future if there are additional\n> reports.\n\nThe third +1 from me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 11 Oct 2022 15:48:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Startup process on a hot standby crashes with an error\n \"invalid memory alloc request size 1073741824\" while replaying\n \"Standby/LOCK\" records"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nDue to experiments with columnar data storage I've decided to revive this\nthread -\nTable AM modifications to accept column projection lists\n<https://www.postgresql.org/message-id/flat/CAE-ML+9RmTNzKCNTZPQf8O3b-UjHWGFbSoXpQa3Wvuc8YBbEQw@mail.gmail.com>\n\nTo remind:\n\nThis patch introduces a set of changes to the table AM APIs, making them\naccept a column projection list. That helps columnar table AMs, so that\nthey don't need to fetch all columns from disk, but only the ones\nactually needed.\n\nThe set of changes in this patch is not exhaustive -\nthere are many more opportunities that are discussed in the TODO section\nbelow. Before digging deeper, we want to elicit early feedback on the\nAPI changes and the column extraction logic.\n\nTableAM APIs that have been modified are:\n\n1. Sequential scan APIs\n2. Index scan APIs\n3. API to lock and return a row\n4. API to fetch a single row\n\nWe have seen performance benefits in Zedstore for many of the optimized\noperations [0]. This patch is extracted from the larger patch shared in\n[0].\n\n\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Mon, 5 Sep 2022 17:38:51 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "On Mon, Sep 05, 2022 at 05:38:51PM +0300, Nikita Malakhov wrote:\n> Due to experiments with columnar data storage I've decided to revive this\n> thread - Table AM modifications to accept column projection lists\n> <https://www.postgresql.org/message-id/flat/CAE-ML+9RmTNzKCNTZPQf8O3b-UjHWGFbSoXpQa3Wvuc8YBbEQw@mail.gmail.com>\n> \n> To remind:\n> \n> This patch introduces a set of changes to the table AM APIs, making them\n> accept a column projection list. That helps columnar table AMs, so that\n> they don't need to fetch all columns from disk, but only the ones\n> actually needed.\n> \n> The set of changes in this patch is not exhaustive -\n> there are many more opportunities that are discussed in the TODO section\n> below. Before digging deeper, we want to elicit early feedback on the\n> API changes and the column extraction logic.\n> \n> TableAM APIs that have been modified are:\n> \n> 1. Sequential scan APIs\n> 2. Index scan APIs\n> 3. API to lock and return a row\n> 4. API to fetch a single row\n> \n> We have seen performance benefits in Zedstore for many of the optimized\n> operations [0]. This patch is extracted from the larger patch shared in\n> [0].\n\nWhat parts of the original patch were left out ? This seems to be the\nsame size as the original.\n\nWith some special build options like -DWRITE_READ_PARSE_PLAN_TREES, this\ncurrently fails with:\n\nWARNING: outfuncs/readfuncs failed to produce equal parse tree\n\nThere's poor code coverage in PopulateNeededColumnsForScan()\nIndexNext(), check_default_partition_contents() and nodeSeqscan.c.\nhttps://cirrus-ci.com/task/5516554904272896\nhttps://api.cirrus-ci.com/v1/artifact/task/5516554904272896/coverage/coverage/00-index.html\n\nIs it currently possible to hit those code paths in postgres ? 
If not,\nyou may need to invent a minimal columnar extension to allow exercising\nthat.\n\nNote that the cirrusci link is on top of my branch which runs \"extended\"\nchecks in cirrusci, but you can also run code coverage report locally\nwith --enable-coverage.\n\nWhen you mail next, please run pgindent first (BTW there's a debian\npackage in PGDG for pgindent).\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 5 Sep 2022 11:36:11 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "Hi hackers!\n\nThis is the original patch rebased onto v15 master with conflicts resolved.\nI'm currently\nstudying it and latest comments in the original thread, and would try go\nthe way that\nwas mentioned in the thread (last message) -\n[1]\nhttps://stratos.seas.harvard.edu/files/stratos/files/columnstoresfntdbs.pdf\n[2] https://github.com/zhihuiFan/postgres/tree/lazy_material_v2\nI agree it is not in the state for review, so I've decided not to change\npatch status,\njust revive the thread because we found that Pluggable Storage API is not\nsomewhat\nnot sufficient.\nThanks for the recommendations, I'll check that out.\n\nOn Mon, Sep 5, 2022 at 7:36 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Sep 05, 2022 at 05:38:51PM +0300, Nikita Malakhov wrote:\n> > Due to experiments with columnar data storage I've decided to revive this\n> > thread - Table AM modifications to accept column projection lists\n> > <\n> https://www.postgresql.org/message-id/flat/CAE-ML+9RmTNzKCNTZPQf8O3b-UjHWGFbSoXpQa3Wvuc8YBbEQw@mail.gmail.com\n> >\n> >\n> > To remind:\n> >\n> > This patch introduces a set of changes to the table AM APIs, making them\n> > accept a column projection list. That helps columnar table AMs, so that\n> > they don't need to fetch all columns from disk, but only the ones\n> > actually needed.\n> >\n> > The set of changes in this patch is not exhaustive -\n> > there are many more opportunities that are discussed in the TODO section\n> > below. Before digging deeper, we want to elicit early feedback on the\n> > API changes and the column extraction logic.\n> >\n> > TableAM APIs that have been modified are:\n> >\n> > 1. Sequential scan APIs\n> > 2. Index scan APIs\n> > 3. API to lock and return a row\n> > 4. API to fetch a single row\n> >\n> > We have seen performance benefits in Zedstore for many of the optimized\n> > operations [0]. This patch is extracted from the larger patch shared in\n> > [0].\n>\n> What parts of the original patch were left out ? This seems to be the\n> same size as the original.\n>\n> With some special build options like -DWRITE_READ_PARSE_PLAN_TREES, this\n> currently fails with:\n>\n> WARNING: outfuncs/readfuncs failed to produce equal parse tree\n>\n> There's poor code coverage in PopulateNeededColumnsForScan()\n> IndexNext(), check_default_partition_contents() and nodeSeqscan.c.\n> https://cirrus-ci.com/task/5516554904272896\n>\n> https://api.cirrus-ci.com/v1/artifact/task/5516554904272896/coverage/coverage/00-index.html\n>\n> Is it currently possible to hit those code paths in postgres ? If not,\n> you may need to invent a minimal columnar extension to allow excercising\n> that.\n>\n> Note that the cirrusci link is on top of my branch which runs \"extended\"\n> checks in cirrusci, but you can also run code coverage report locally\n> with --enable-coverage.\n>\n> When you mail next, please run pgindent first (BTW there's a debian\n> package in PGDG for pgindent).\n>\n> --\n> Justin\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Mon, 5 Sep 2022 19:51:32 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "On Mon, Sep 5, 2022 at 9:51 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n>\n> This is the original patch rebased onto v15 master with conflicts\n> resolved. I'm currently\n> studying it and latest comments in the original thread, and would try go\n> the way that\n> was mentioned in the thread (last message) -\n> [1]\n> https://stratos.seas.harvard.edu/files/stratos/files/columnstoresfntdbs.pdf\n> [2] https://github.com/zhihuiFan/postgres/tree/lazy_material_v2\n> I agree it is not in the state for review, so I've decided not to change\n> patch status,\n> just revive the thread because we found that Pluggable Storage API is not\n> somewhat\n> not sufficient.\n> Thanks for the recommendations, I'll check that out.\n>\n> Hi,\nbq. is not somewhat not sufficient.\n\nI am a bit confused by the double negation.\nI guess you meant insufficient.\n\nCheers",
"msg_date": "Mon, 5 Sep 2022 12:40:46 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
}
] |
[
{
"msg_contents": "Hi,\n\nMacro exec_subplan_get_plan is not used anymore.\nAttach a patch to remove it.\n\nRegards,\nZhang Mingli",
"msg_date": "Tue, 6 Sep 2022 00:39:30 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove dead macro exec_subplan_get_plan"
},
{
"msg_contents": "Zhang Mingli <zmlpostgres@gmail.com> writes:\n> Macro exec_subplan_get_plan is not used anymore.\n> Attach a patch to remove it.\n\nHm, I wonder why it's not used anymore. Maybe we no longer need\nthat list at all? If we do, should use of the macro be\nre-introduced in the accessors?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Sep 2022 13:18:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove dead macro exec_subplan_get_plan"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 1:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhang Mingli <zmlpostgres@gmail.com> writes:\n> > Macro exec_subplan_get_plan is not used anymore.\n> > Attach a patch to remove it.\n>\n> Hm, I wonder why it's not used anymore. Maybe we no longer need\n> that list at all? If we do, should use of the macro be\n> re-introduced in the accessors?\n\n\nSeems nowadays no one fetches the Plan from PlannedStmt->subplans with a\ncertain plan_id any more. Previously back in eab6b8b2 where this macro\nwas introduced, it was used in explain_outNode and ExecInitSubPlan.\n\nI find a similar macro, planner_subplan_get_plan, who fetches the Plan\nfrom glob->subplans. We can use it in the codes where needed. For\nexample, in the new function SS_make_multiexprs_unique.\n\n /* Found one, get the associated subplan */\n- plan = (Plan *) list_nth(root->glob->subplans, splan->plan_id - 1);\n+ plan = planner_subplan_get_plan(root, splan);\n\nThanks\nRichard",
"msg_date": "Tue, 6 Sep 2022 10:21:52 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove dead macro exec_subplan_get_plan"
},
{
"msg_contents": "Hi,all\n\nRegards,\nZhang Mingli\nOn Sep 6, 2022, 10:22 +0800, Richard Guo <guofenglinux@gmail.com>, wrote:\n>\n> On Tue, Sep 6, 2022 at 1:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Zhang Mingli <zmlpostgres@gmail.com> writes:\n> > > Macro exec_subplan_get_plan is not used anymore.\n> > > Attach a patch to remove it.\n> >\n> > Hm, I wonder why it's not used anymore. Maybe we no longer need\n> > that list at all? If we do, should use of the macro be\n> > re-introduced in the accessors?\n\nThe PlannedStmt->subplans list is still used at several places.\n\n> Seems nowadays no one fetches the Plan from PlannedStmt->subplans with a\n> certain plan_id any more. Previously back in eab6b8b2 where this macro\n> was introduced, it was used in explain_outNode and ExecInitSubPlan.\n>\n> I find a similar macro, planner_subplan_get_plan, who fetches the Plan\n> from glob->subplans. We can use it in the codes where needed. For\n> example, in the new function SS_make_multiexprs_unique.\n>\n> /* Found one, get the associated subplan */\n> - plan = (Plan *) list_nth(root->glob->subplans, splan->plan_id - 1);\n> + plan = planner_subplan_get_plan(root, splan);\n>\n> Thanks\n> Richard\n\nYeah, searched on history and found:\nexec_subplan_get_plan was once used in ExecInitSubPlan() to create planstate.\n\n```\nPlan\t *plan = exec_subplan_get_plan(estate->es_plannedstmt, subplan);\n...\nnode->planstate = ExecInitNode(plan, sp_estate, eflags);\n```\n\nAnd now in ExecInitSubPlan(), planstate comes from es_subplanstates.\n\n```\n/* Link the SubPlanState to already-initialized subplan */\nsstate->planstate = (PlanState *) list_nth(estate->es_subplanstates, subplan->plan_id - 1);\n```\n\nAnd estate->es_subplanstates is evaluated through a for-range of subplans list at some functions.\n\n```\nforeach(l, plannedstmt->subplans)\n{\n ...\n estate->es_subplanstates = lappend(estate->es_subplanstates, subplanstate);\n}\n```",
"msg_date": "Tue, 6 Sep 2022 15:50:08 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove dead macro exec_subplan_get_plan"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 12:39 AM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n\n> Macro exec_subplan_get_plan is not used anymore.\n> Attach a patch to remove it.\n>\n\nHow about add it to the CF to not lose track of it?\n\nThanks\nRichard",
"msg_date": "Fri, 16 Sep 2022 14:47:36 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove dead macro exec_subplan_get_plan"
},
{
"msg_contents": "On Sep 16, 2022, 14:47 +0800, Richard Guo <guofenglinux@gmail.com>, wrote:\n>\n> On Tue, Sep 6, 2022 at 12:39 AM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n> > Macro exec_subplan_get_plan is not used anymore.\n> > Attach a patch to remove it.\n>\n> How about add it to the CF to not lose track of it?\nWill add it, thanks~\n\nRegards,\nZhang Mingli",
"msg_date": "Fri, 16 Sep 2022 15:16:41 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove dead macro exec_subplan_get_plan"
},
{
"msg_contents": "On Fri, 16 Sept 2022 at 03:33, Zhang Mingli <zmlpostgres@gmail.com> wrote:\n>\n> On Sep 16, 2022, 14:47 +0800, Richard Guo <guofenglinux@gmail.com>, wrote:\n>\n> How about add it to the CF to not lose track of it?\n>\n> Will add it, thanks~\n\nI guess not losing track of it is only helpful if we do eventually\ncommit it. Otherwise we would rather lose track of it :)\n\nI think the conclusion here was that the actual list is still used and\ncleaning up unused macros isn't worth the hassle unless it's part of\nsome larger patch? I mean, it doesn't seem like a lot of hassle but\nnobody seems to have been interested in pursuing it since 2022 so I\nguess it's not going to happen.\n\nI don't want to keep moving patches forward release to release that\nnobody's interested in committing. So I'm going to mark this one\nrejected for now. We can always update that if it comes up again.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Tue, 4 Apr 2023 14:03:45 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove dead macro exec_subplan_get_plan"
}
] |
[
{
"msg_contents": "Attached is a patch series that attempts to modernize our GUC\ninfrastructure, in particular removing the performance bottlenecks\nit has when there are lots of GUC variables. I wrote this because\nI am starting to question the schema-variables patch [1] --- that's\ngetting to be quite a large patch and I grow less and less sure\nthat it's solving a problem our users want solved. I think what\npeople actually want is better support of the existing mechanism\nfor ad-hoc session variables, namely abusing custom GUCs for that\npurpose. One of the big reasons we have been resistant to formally\nsupporting that is fear of the poor performance guc.c would have\nwith lots of variables. But we already have quite a lot of them:\n\nregression=# select count(*) from pg_settings;\n count \n-------\n 353\n(1 row)\n\nand more are getting added all the time. I think this patch series\ncould likely be justified just in terms of positive effect on core\nperformance, never mind user-added GUCs.\n\n0001 and 0002 below are concerned with converting guc.c to store its\ndata in a dedicated memory context (GUCMemoryContext) instead of using\nraw malloc(). This is not directly a performance improvement, and\nI'm prepared to drop the idea if there's a lot of pushback, but I\nthink it would be a good thing to do. The only hard reason for using\nmalloc() there was the lack of ability to avoid throwing elog(ERROR)\non out-of-memory in palloc(). But mcxt.c grew that ability years ago.\nSwitching to a dedicated context would greatly improve visibility and\naccountability of GUC-related allocations. Also, the 0003 patch will\nswitch guc.c to relying on a palloc-based hashtable, and it seems a\nbit silly to have part of the data structure in palloc and part in\nmalloc. However 0002 is a bit invasive, in that it forces code\nchanges in GUC check callbacks, if they want to reallocate the new\nvalue or create an \"extra\" data structure. 
My feeling is that not\nenough external modules use those facilities for this to pose a big\nproblem. However, the ones that are subject to it will have a\nnon-fun time tracking down why their code is crashing. (The recent\ncontext-header changes mean that you get a very obscure failure when\ntrying to pfree() a malloc'd chunk -- for me, that typically ends\nin an assertion failure in generation.c. Can we make that less\nconfusing?)\n\n0003 replaces guc.c's bsearch-a-sorted-array lookup infrastructure\nwith a dynahash hash table. (I also looked at simplehash, but it\nhas no support for not elog'ing on OOM, and it increases the size\nof guc.o by 10KB or so.) This fixes the worse-than-O(N^2) time\nneeded to create N new GUCs, as in\n\ndo $$\nbegin\n for i in 1..10000 loop\n perform set_config('foo.bar' || i::text, i::text, false);\n end loop;\nend $$;\n\nOn my machine, this takes about 4700 ms in HEAD, versus 23 ms\nwith this patch. However, the places that were linearly scanning\nthe array now need to use hash_seq_search, so some other things\nlike transaction shutdown (AtEOXact_GUC) get slower.\n\nTo address that, 0004 adds some auxiliary lists that link together\njust the variables that are interesting for specific purposes.\nThis is helpful even without considering the possibility of a\nlot of user-added GUCs: in a typical session, for example, not\nmany of those 353 GUCs have non-default values, and even fewer\nget modified in any one transaction (typically, anyway).\n\nAs an example of the speedup from 0004, these DO loops:\n\ncreate or replace function func_with_set(int) returns int\nstrict immutable language plpgsql as\n$$ begin return $1; end $$\nset enable_seqscan = false;\n\ndo $$\nbegin\n for i in 1..100000 loop\n perform func_with_set(i);\n end loop;\nend $$;\n\ndo $$\nbegin\n for i in 1..100000 loop\n begin\n perform func_with_set(i);\n exception when others then raise;\n end;\n end loop;\nend $$;\n\ntake about 260 and 320 ms respectively for me, in 
HEAD with\njust the stock set of variables. But after creating 10000\nGUCs with the previous DO loop, they're up to about 3200 ms.\n0004 brings that back down to being indistinguishable from the\nspeed with few GUCs.\n\nSo I think this is good cleanup in its own right, plus it\nremoves one major objection to considering user-defined GUCs\nas a supported feature.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAFj8pRD053CY_N4%3D6SvPe7ke6xPbh%3DK50LUAOwjC3jm8Me9Obg%40mail.gmail.com",
"msg_date": "Mon, 05 Sep 2022 18:27:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Modernizing our GUC infrastructure"
},
{
"msg_contents": "Hi,\n\n> I wrote this because I am starting to question the schema-variables patch\n> [1] --- that's getting to be quite a large patch and I grow less and less\n> sure that it's solving a problem our users want solved. --- that's getting\n> to be quite a large patch and I grow less and less sure that it's solving a\n> problem our users want solved. I think what people actually want is better\n> support of the existing mechanism for ad-hoc session variables, namely\n> abusing custom GUCs for that purpose.\n\nI don't really have an opinion on the highlevel directional question, yet\nanyway. But the stuff you're talking about changing in guc.c seem like a good\nidea independently.\n\n\nOn 2022-09-05 18:27:46 -0400, Tom Lane wrote:\n> 0001 and 0002 below are concerned with converting guc.c to store its\n> data in a dedicated memory context (GUCMemoryContext) instead of using\n> raw malloc(). This is not directly a performance improvement, and\n> I'm prepared to drop the idea if there's a lot of pushback, but I\n> think it would be a good thing to do.\n\n+1 - I've been annoyed at this a couple times, even just because it makes it\nharder to identify memory leaks etc.\n\n\n> The only hard reason for using\n> malloc() there was the lack of ability to avoid throwing elog(ERROR)\n> on out-of-memory in palloc(). But mcxt.c grew that ability years ago.\n> Switching to a dedicated context would greatly improve visibility and\n> accountability of GUC-related allocations. Also, the 0003 patch will\n> switch guc.c to relying on a palloc-based hashtable, and it seems a\n> bit silly to have part of the data structure in palloc and part in\n> malloc. However 0002 is a bit invasive, in that it forces code\n> changes in GUC check callbacks, if they want to reallocate the new\n> value or create an \"extra\" data structure. My feeling is that not\n> enough external modules use those facilities for this to pose a big\n> problem. 
However, the ones that are subject to it will have a\n> non-fun time tracking down why their code is crashing.\n\nThat sucks, but I think it's a bullet we're going to have to bite at some\npoint.\n\nPerhaps we could do something like checking MemoryContextContains() and assert\nif not allocated in the right context? That way the crash is at least\nobvious. Or perhaps even transparently reallocate in that case? It does look\nlike MemoryContextContains() currently is broken, I've raised that in the\nother thread.\n\n\n> (The recent context-header changes mean that you get a very obscure failure\n> when trying to pfree() a malloc'd chunk -- for me, that typically ends in an\n> assertion failure in generation.c. Can we make that less confusing?)\n\nHm. We can do better in assert builds, but I'm not sure we want to add the\noverhead of explicit checks in normal builds, IIRC David measured the overhead\nof additional branches in pfree, and it was noticable.\n\n\n> 0003 replaces guc.c's bsearch-a-sorted-array lookup infrastructure\n> with a dynahash hash table. (I also looked at simplehash, but it\n> has no support for not elog'ing on OOM, and it increases the size\n> of guc.o by 10KB or so.)\n\nDynahash seems reasonable here. Hard to believe raw lookup speed is a relevant\nbottleneck and due to the string names the key would be pretty wide (could\nobviously just be done via pointer, but then the locality benefits aren't as\nbig).\n\n\n> However, the places that were linearly scanning the array now need to use\n> hash_seq_search, so some other things like transaction shutdown\n> (AtEOXact_GUC) get slower.\n>\n> To address that, 0004 adds some auxiliary lists that link together\n> just the variables that are interesting for specific purposes.\n\nSeems sane.\n\n\nIt's only half related, but since we're talking about renovating guc.c: I\nthink it'd be good if we split the list of GUCs from the rest of the guc\nmachinery. 
Both for humans and compilers it's getting pretty large. And\ncommonly one either wants to edit the definition of GUCs or wants to edit the\nGUC machinery.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Sep 2022 16:32:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It's only half related, but since we're talking about renovating guc.c: I\n> think it'd be good if we split the list of GUCs from the rest of the guc\n> machinery. Both for humans and compilers it's getting pretty large. And\n> commonly one either wants to edit the definition of GUCs or wants to edit the\n> GUC machinery.\n\nI don't mind doing that, but it seems like an independent patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Sep 2022 19:50:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "Hi Tom,\n\n@@ -5836,74 +5865,106 @@ build_guc_variables(void)\n }\n\n /*\n- * Create table with 20% slack\n+ * Create hash table with 20% slack\n */\n size_vars = num_vars + num_vars / 4;\n\nShould we change 20% to 25%, I thought that might be\na typo.\n\nOn Tue, Sep 6, 2022 at 6:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Attached is a patch series that attempts to modernize our GUC\n> infrastructure, in particular removing the performance bottlenecks\n> it has when there are lots of GUC variables. I wrote this because\n> I am starting to question the schema-variables patch [1] --- that's\n> getting to be quite a large patch and I grow less and less sure\n> that it's solving a problem our users want solved. I think what\n> people actually want is better support of the existing mechanism\n> for ad-hoc session variables, namely abusing custom GUCs for that\n> purpose. One of the big reasons we have been resistant to formally\n> supporting that is fear of the poor performance guc.c would have\n> with lots of variables. But we already have quite a lot of them:\n>\n> regression=# select count(*) from pg_settings;\n> count\n> -------\n> 353\n> (1 row)\n>\n> and more are getting added all the time. I think this patch series\n> could likely be justified just in terms of positive effect on core\n> performance, never mind user-added GUCs.\n>\n> 0001 and 0002 below are concerned with converting guc.c to store its\n> data in a dedicated memory context (GUCMemoryContext) instead of using\n> raw malloc(). This is not directly a performance improvement, and\n> I'm prepared to drop the idea if there's a lot of pushback, but I\n> think it would be a good thing to do. The only hard reason for using\n> malloc() there was the lack of ability to avoid throwing elog(ERROR)\n> on out-of-memory in palloc(). But mcxt.c grew that ability years ago.\n> Switching to a dedicated context would greatly improve visibility and\n> accountability of GUC-related allocations. 
Also, the 0003 patch will\n> switch guc.c to relying on a palloc-based hashtable, and it seems a\n> bit silly to have part of the data structure in palloc and part in\n> malloc. However 0002 is a bit invasive, in that it forces code\n> changes in GUC check callbacks, if they want to reallocate the new\n> value or create an \"extra\" data structure. My feeling is that not\n> enough external modules use those facilities for this to pose a big\n> problem. However, the ones that are subject to it will have a\n> non-fun time tracking down why their code is crashing. (The recent\n> context-header changes mean that you get a very obscure failure when\n> trying to pfree() a malloc'd chunk -- for me, that typically ends\n> in an assertion failure in generation.c. Can we make that less\n> confusing?)\n>\n> 0003 replaces guc.c's bsearch-a-sorted-array lookup infrastructure\n> with a dynahash hash table. (I also looked at simplehash, but it\n> has no support for not elog'ing on OOM, and it increases the size\n> of guc.o by 10KB or so.) This fixes the worse-than-O(N^2) time\n> needed to create N new GUCs, as in\n>\n> do $$\n> begin\n> for i in 1..10000 loop\n> perform set_config('foo.bar' || i::text, i::text, false);\n> end loop;\n> end $$;\n>\n> On my machine, this takes about 4700 ms in HEAD, versus 23 ms\n> with this patch. 
However, the places that were linearly scanning\n> the array now need to use hash_seq_search, so some other things\n> like transaction shutdown (AtEOXact_GUC) get slower.\n>\n> To address that, 0004 adds some auxiliary lists that link together\n> just the variables that are interesting for specific purposes.\n> This is helpful even without considering the possibility of a\n> lot of user-added GUCs: in a typical session, for example, not\n> many of those 353 GUCs have non-default values, and even fewer\n> get modified in any one transaction (typically, anyway).\n>\n> As an example of the speedup from 0004, these DO loops:\n>\n> create or replace function func_with_set(int) returns int\n> strict immutable language plpgsql as\n> $$ begin return $1; end $$\n> set enable_seqscan = false;\n>\n> do $$\n> begin\n> for i in 1..100000 loop\n> perform func_with_set(i);\n> end loop;\n> end $$;\n>\n> do $$\n> begin\n> for i in 1..100000 loop\n> begin\n> perform func_with_set(i);\n> exception when others then raise;\n> end;\n> end loop;\n> end $$;\n>\n> take about 260 and 320 ms respectively for me, in HEAD with\n> just the stock set of variables. But after creating 10000\n> GUCs with the previous DO loop, they're up to about 3200 ms.\n> 0004 brings that back down to being indistinguishable from the\n> speed with few GUCs.\n>\n> So I think this is good cleanup in its own right, plus it\n> removes one major objection to considering user-defined GUCs\n> as a supported feature.\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/flat/CAFj8pRD053CY_N4%3D6SvPe7ke6xPbh%3DK50LUAOwjC3jm8Me9Obg%40mail.gmail.com\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Tue, 6 Sep 2022 10:45:47 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "Junwang Zhao <zhjwpku@gmail.com> writes:\n> /*\n> - * Create table with 20% slack\n> + * Create hash table with 20% slack\n> */\n> size_vars = num_vars + num_vars / 4;\n\n> Should we change 20% to 25%, I thought that might be\n> a typo.\n\nNo ... 20% of the allocated space is spare.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Sep 2022 22:48:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "ah, yes, that makes sense ;)\n\nOn Tue, Sep 6, 2022 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Junwang Zhao <zhjwpku@gmail.com> writes:\n> > /*\n> > - * Create table with 20% slack\n> > + * Create hash table with 20% slack\n> > */\n> > size_vars = num_vars + num_vars / 4;\n>\n> > Should we change 20% to 25%, I thought that might be\n> > a typo.\n>\n> No ... 20% of the allocated space is spare.\n>\n> regards, tom lane\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Tue, 6 Sep 2022 11:02:30 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "Hi\n\nút 6. 9. 2022 v 0:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Attached is a patch series that attempts to modernize our GUC\n> infrastructure, in particular removing the performance bottlenecks\n> it has when there are lots of GUC variables. I wrote this because\n> I am starting to question the schema-variables patch [1] --- that's\n> getting to be quite a large patch and I grow less and less sure\n> that it's solving a problem our users want solved. I think what\n> people actually want is better support of the existing mechanism\n> for ad-hoc session variables, namely abusing custom GUCs for that\n> purpose. One of the big reasons we have been resistant to formally\n> supporting that is fear of the poor performance guc.c would have\n> with lots of variables. But we already have quite a lot of them:\n>\n>\nThe bad performance is not the main reason for implementing session\nvariables (and in almost all cases the performance of GUC is not a problem,\nbecause it is not a bottleneck, and in some terrible cases, I can save the\nGUC to a variable). There are other differences:\n\n1. Session variables can be persistent - so the usage of session variables\ncan be checked by static analyze like plpgsql_check\n\n2. Session variables supports not atomic data types - so the work with row\ntypes or arrays is much more comfortable and faster, because there is no\nconversion string <-> binary\n\n3. Session variables allows to set access rights\n\n4. Session variables are nullable and allowed to specify default values.\n\nI don't think so users have ten thousand GUC and the huge count of GUC is\nthe main performance problem. The source of the performance problem is\nstoring the value only as string.\n\nRegards\n\nPavel\n\nHiút 6. 9. 
2022 v 0:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Attached is a patch series that attempts to modernize our GUC\ninfrastructure, in particular removing the performance bottlenecks\nit has when there are lots of GUC variables. I wrote this because\nI am starting to question the schema-variables patch [1] --- that's\ngetting to be quite a large patch and I grow less and less sure\nthat it's solving a problem our users want solved. I think what\npeople actually want is better support of the existing mechanism\nfor ad-hoc session variables, namely abusing custom GUCs for that\npurpose. One of the big reasons we have been resistant to formally\nsupporting that is fear of the poor performance guc.c would have\nwith lots of variables. But we already have quite a lot of them:\nThe bad performance is not the main reason for implementing session variables (and in almost all cases the performance of GUC is not a problem, because it is not a bottleneck, and in some terrible cases, I can save the GUC to a variable). There are other differences:1. Session variables can be persistent - so the usage of session variables can be checked by static analyze like plpgsql_check2. Session variables supports not atomic data types - so the work with row types or arrays is much more comfortable and faster, because there is no conversion string <-> binary3. Session variables allows to set access rights 4. Session variables are nullable and allowed to specify default values.I don't think so users have ten thousand GUC and the huge count of GUC is the main performance problem. The source of the performance problem is storing the value only as string.RegardsPavel",
"msg_date": "Tue, 6 Sep 2022 06:32:21 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "út 6. 9. 2022 v 6:32 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> út 6. 9. 2022 v 0:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Attached is a patch series that attempts to modernize our GUC\n>> infrastructure, in particular removing the performance bottlenecks\n>> it has when there are lots of GUC variables. I wrote this because\n>> I am starting to question the schema-variables patch [1] --- that's\n>> getting to be quite a large patch and I grow less and less sure\n>> that it's solving a problem our users want solved. I think what\n>> people actually want is better support of the existing mechanism\n>> for ad-hoc session variables, namely abusing custom GUCs for that\n>> purpose. One of the big reasons we have been resistant to formally\n>> supporting that is fear of the poor performance guc.c would have\n>> with lots of variables. But we already have quite a lot of them:\n>>\n>>\n> The bad performance is not the main reason for implementing session\n> variables (and in almost all cases the performance of GUC is not a problem,\n> because it is not a bottleneck, and in some terrible cases, I can save the\n> GUC to a variable). There are other differences:\n>\n> 1. Session variables can be persistent - so the usage of session variables\n> can be checked by static analyze like plpgsql_check\n>\n\nmore precious - metadata of session variables are persistent\n\n\n>\n> 2. Session variables supports not atomic data types - so the work with row\n> types or arrays is much more comfortable and faster, because there is no\n> conversion string <-> binary\n>\n> 3. Session variables allows to set access rights\n>\n> 4. Session variables are nullable and allowed to specify default values.\n>\n> I don't think so users have ten thousand GUC and the huge count of GUC is\n> the main performance problem. The source of the performance problem is\n> storing the value only as string.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n\nút 6. 
9. 2022 v 6:32 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:Hiút 6. 9. 2022 v 0:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Attached is a patch series that attempts to modernize our GUC\ninfrastructure, in particular removing the performance bottlenecks\nit has when there are lots of GUC variables. I wrote this because\nI am starting to question the schema-variables patch [1] --- that's\ngetting to be quite a large patch and I grow less and less sure\nthat it's solving a problem our users want solved. I think what\npeople actually want is better support of the existing mechanism\nfor ad-hoc session variables, namely abusing custom GUCs for that\npurpose. One of the big reasons we have been resistant to formally\nsupporting that is fear of the poor performance guc.c would have\nwith lots of variables. But we already have quite a lot of them:\nThe bad performance is not the main reason for implementing session variables (and in almost all cases the performance of GUC is not a problem, because it is not a bottleneck, and in some terrible cases, I can save the GUC to a variable). There are other differences:1. Session variables can be persistent - so the usage of session variables can be checked by static analyze like plpgsql_checkmore precious - metadata of session variables are persistent 2. Session variables supports not atomic data types - so the work with row types or arrays is much more comfortable and faster, because there is no conversion string <-> binary3. Session variables allows to set access rights 4. Session variables are nullable and allowed to specify default values.I don't think so users have ten thousand GUC and the huge count of GUC is the main performance problem. The source of the performance problem is storing the value only as string.RegardsPavel",
"msg_date": "Tue, 6 Sep 2022 06:35:56 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 06, 2022 at 06:32:21AM +0200, Pavel Stehule wrote:\n> Hi\n>\n> �t 6. 9. 2022 v 0:28 odes�latel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n> > Attached is a patch series that attempts to modernize our GUC\n> > infrastructure, in particular removing the performance bottlenecks\n> > it has when there are lots of GUC variables. I wrote this because\n> > I am starting to question the schema-variables patch [1] --- that's\n> > getting to be quite a large patch and I grow less and less sure\n> > that it's solving a problem our users want solved. I think what\n> > people actually want is better support of the existing mechanism\n> > for ad-hoc session variables, namely abusing custom GUCs for that\n> > purpose. One of the big reasons we have been resistant to formally\n> > supporting that is fear of the poor performance guc.c would have\n> > with lots of variables. But we already have quite a lot of them:\n> >\n> >\n> The bad performance is not the main reason for implementing session\n> variables (and in almost all cases the performance of GUC is not a problem,\n> because it is not a bottleneck, and in some terrible cases, I can save the\n> GUC to a variable). There are other differences:\n>\n> 1. Session variables metadata can be persistent - so the usage of session\n> variables can be checked by static analyze like plpgsql_check\n>\n> 2. Session variables supports not atomic data types - so the work with row\n> types or arrays is much more comfortable and faster, because there is no\n> conversion string <-> binary\n>\n> 3. Session variables allows to set access rights\n>\n> 4. Session variables are nullable and allowed to specify default values.\n>\n> I don't think so users have ten thousand GUC and the huge count of GUC is\n> the main performance problem. 
The source of the performance problem is\n> storing the value only as string.\n\nI think we can also mention those differences with the proposed schema\nvariables:\n\n- schema variables have normal SQL integration, having to use current_setting()\n isn't ideal (on top of only supporting text) and doesn't really play nice\n with pg_stat_statements for instance\n\n- schema variables implement stability in a single SQL statement (not in\n plpgsql), while current_setting always report the latest set value. This one\n may or may not be wanted, and maybe the discrepancy with procedural languages\n would be too problematic, but it's still something proposed\n\n\n",
"msg_date": "Tue, 6 Sep 2022 12:58:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> The bad performance is not the main reason for implementing session\n> variables (and in almost all cases the performance of GUC is not a problem,\n> because it is not a bottleneck, and in some terrible cases, I can save the\n> GUC to a variable). There are other differences:\n\nWell, yeah, the schema-variables patch offers a bunch of other features.\nWhat I'm not sure about is whether there's actually much field demand\nfor those. I think if we fix guc.c's performance issues and add some\nsimple features on top of that, like the ability to declare bool, int,\nfloat data types not just string for a user-defined GUC, we'd have\nexactly what a lot of people want --- not least because it'd be\nupwards-compatible with what they are already doing.\n\nHowever, that's probably a debate to have on the other thread not here.\nThis patch doesn't foreclose pushing forward with the schema-variables\npatch, if people want that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Sep 2022 01:25:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 6. 9. 2022 v 6:32 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>> 1. Session variables can be persistent - so the usage of session variables\n>> can be checked by static analyze like plpgsql_check\n\n> more precious - metadata of session variables are persistent\n\nRight ... so the question is, is that a feature or a bug?\n\nI think there's a good analogy here to temporary tables. The SQL\nspec says that temp-table schemas are persistent and database-wide,\nbut what we actually have is that they are session-local. People\noccasionally propose that we implement the SQL semantics for that,\nbut in the last twenty-plus years no one has bothered to write a\ncommittable patch to support it ... much less remove the existing\nbehavior in favor of that, which I'm pretty sure no one would think\nis a good idea.\n\nSo, is it actually a good idea to have persistent metadata for\nsession variables? I'd say that the issue is at best debatable,\nand at worst proven wrong by a couple of decades of experience.\nIn what way are session variables less mutable than temp tables?\n\nStill, this discussion would be better placed on the other thread.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Sep 2022 01:42:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "út 6. 9. 2022 v 7:42 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > út 6. 9. 2022 v 6:32 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> > napsal:\n> >> 1. Session variables can be persistent - so the usage of session\n> variables\n> >> can be checked by static analyze like plpgsql_check\n>\n> > more precious - metadata of session variables are persistent\n>\n> Right ... so the question is, is that a feature or a bug?\n>\n> I think there's a good analogy here to temporary tables. The SQL\n> spec says that temp-table schemas are persistent and database-wide,\n> but what we actually have is that they are session-local. People\n> occasionally propose that we implement the SQL semantics for that,\n> but in the last twenty-plus years no one has bothered to write a\n> committable patch to support it ... much less remove the existing\n> behavior in favor of that, which I'm pretty sure no one would think\n> is a good idea.\n>\n> So, is it actually a good idea to have persistent metadata for\n> session variables? I'd say that the issue is at best debatable,\n> and at worst proven wrong by a couple of decades of experience.\n> In what way are session variables less mutable than temp tables?\n>\n\nThe access pattern is very different. The session variable is like the temp\ntable with exactly one row. It reduces a lot of overheads with storage (for\nreading, for writing).\n\nFor example, the minimum size of an empty temp table is 8KB. You can store\nall \"like\" session values to one temp table, but then there will be brutal\noverhead with reading.\n\n\n>\n> Still, this discussion would be better placed on the other thread.\n>\n\nsure - faster GUC is great - there are a lot of applications that overuse\nGUC, because there are no other solutions now. But I don't think so it is\ngood solution when somebody need some like global variables in procedural\ncode. 
And the design of session variables is more wide.\n\nRegards\n\nPavel\n\n>\n>\n> Still, this discussion would be better placed on the other thread.\n>\n\n> regards, tom lane\n>",
"msg_date": "Tue, 6 Sep 2022 08:09:42 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 1:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think there's a good analogy here to temporary tables. The SQL\n> spec says that temp-table schemas are persistent and database-wide,\n> but what we actually have is that they are session-local. People\n> occasionally propose that we implement the SQL semantics for that,\n> but in the last twenty-plus years no one has bothered to write a\n> committable patch to support it ... much less remove the existing\n> behavior in favor of that, which I'm pretty sure no one would think\n> is a good idea.\n\nWell, I've thought about doing this a few times, but it's a real pain\nin the neck, primarily because we store metadata that needs to be\nper-instantiation in the catalog rows: relfrozenxid, relminmxid, and\nthe relation statistics. So I'm not sure \"no one has bothered\" is\nquite the right way to characterize it. \"no one has been able to\nadequately untangle the mess\" might be more accurate.\n\n> So, is it actually a good idea to have persistent metadata for\n> session variables? I'd say that the issue is at best debatable,\n> and at worst proven wrong by a couple of decades of experience.\n> In what way are session variables less mutable than temp tables?\n\nI haven't looked at that patch at all, but I would assume that\nvariables would have SQL types, and that we would never add GUCs with\nSQL types, which seems like a pretty major semantic difference.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 08:21:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Sep 6, 2022 at 1:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think there's a good analogy here to temporary tables. The SQL\n>> spec says that temp-table schemas are persistent and database-wide,\n>> but what we actually have is that they are session-local.\n\n> Well, I've thought about doing this a few times, but it's a real pain\n> in the neck, primarily because we store metadata that needs to be\n> per-instantiation in the catalog rows: relfrozenxid, relminmxid, and\n> the relation statistics. So I'm not sure \"no one has bothered\" is\n> quite the right way to characterize it. \"no one has been able to\n> adequately untangle the mess\" might be more accurate.\n\nI could agree on \"no one has thought it was worth the work\". It could\nbe made to happen if we were sufficiently motivated, but we aren't.\nI believe a big chunk of the reason is that the SQL semantics are not\nobviously better than what we have. And some of the advantages they\ndo have, like less catalog thrashing, wouldn't apply in the session\nvariable case.\n\n> I haven't looked at that patch at all, but I would assume that\n> variables would have SQL types, and that we would never add GUCs with\n> SQL types, which seems like a pretty major semantic difference.\n\nYeah, I do not think we'd want to extend GUCs beyond the existing\nbool/int/float/string cases, since they have to be readable under\nnon-transactional circumstances. Having said that, that covers\nan awful lot of practical territory. Schema variables of\narbitrary SQL types sound cool, sure, but how many real use cases\nare there that can't be met with the GUC types?\n\nI think a large part of the reason the schema-variables patch\nhas gone sideways for so many years is that it's an ambitious\noverdesign.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Sep 2022 10:05:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "Hi\n\n\n> I think a large part of the reason the schema-variables patch\n> has gone sideways for so many years is that it's an ambitious\n> overdesign.\n>\n\nLast two weeks this patch is shorter and shorter. I removed a large part\nrelated to check of type consistency, because I can do this check more\neasily - and other work is done by dependencies.\n\nBig thanks to Julien - it does a lot of work and he shows me a lot of\nissues and possibilities on how to fix it. With Julien work this patch\nmoved forward. Years before it was just a prototype.\n\nThis patch is not too complex - important part is session_variable.c with\n1500 lines , and it is almost simple code - store value to hashtab, and\ncleaning hash tab on sinval or on transaction end or abort + debug routine.\n\n[pavel@localhost commands]$ cloc session_variable.c\n 1 text file.\n 1 unique file.\n 0 files ignored.\n\ngithub.com/AlDanial/cloc v 1.90 T=0.02 s (50.0 files/s, 77011.1 lines/s)\n-------------------------------------------------------------------------------\nLanguage files blank comment\ncode\n-------------------------------------------------------------------------------\nC 1 257 463\n 820\n-------------------------------------------------------------------------------\n\nIn other files there are +/- mechanical code\n\n\n\n\n\n>\n> regards, tom lane\n>\n>\n>\n\nHi\n\nI think a large part of the reason the schema-variables patch\nhas gone sideways for so many years is that it's an ambitious\noverdesign.Last two weeks this patch is shorter and shorter. I removed a large part related to check of type consistency, because I can do this check more easily - and other work is done by dependencies.Big thanks to Julien - it does a lot of work and he shows me a lot of issues and possibilities on how to fix it. With Julien work this patch moved forward. 
Years before it was just a prototype.This patch is not too complex - important part is session_variable.c with 1500 lines , and it is almost simple code - store value to hashtab, and cleaning hash tab on sinval or on transaction end or abort + debug routine.[pavel@localhost commands]$ cloc session_variable.c 1 text file. 1 unique file. 0 files ignored.github.com/AlDanial/cloc v 1.90 T=0.02 s (50.0 files/s, 77011.1 lines/s)-------------------------------------------------------------------------------Language files blank comment code-------------------------------------------------------------------------------C 1 257 463 820-------------------------------------------------------------------------------In other files there are +/- mechanical code \n\n regards, tom lane",
"msg_date": "Tue, 6 Sep 2022 16:33:36 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 10:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I haven't looked at that patch at all, but I would assume that\n> > variables would have SQL types, and that we would never add GUCs with\n> > SQL types, which seems like a pretty major semantic difference.\n>\n> Yeah, I do not think we'd want to extend GUCs beyond the existing\n> bool/int/float/string cases, since they have to be readable under\n> non-transactional circumstances. Having said that, that covers\n> an awful lot of practical territory. Schema variables of\n> arbitrary SQL types sound cool, sure, but how many real use cases\n> are there that can't be met with the GUC types?\n\nWell, if you use an undefined custom GUC, you're just going to get a\nstring data type, I believe, which is pretty well equivalent to not\nhaving any type checking at all. You could extend that in some way to\nallow users to create dummy GUCs of any type supported by the\nmechanism, but I think that's mostly stacking one hack on top of\nanother. I believe there's good evidence that users want variables\nbased on SQL data types, whereas I can't see any reason why users\nwould variables based on GUC data types. It is of course true that the\nGUC data types cover the cases people are mostly likely to want, but\nthat's just because it covers the most generally useful data types. If\nyou can want to pass an integer between one part of your application\nand another, why can't you want to pass a numeric or a bytea? I think\nyou can, and I think people do.\n\nThis is not really an endorsement of the SQL variables patch, which I\nhaven't studied and which for all I know may have lots of problems,\neither as to design or as to implementation. But I think it's a little\ncrazy to pretend that the ability to store strings - or even values of\nany GUC type - into a fictional GUC is an adequate substitute for SQL\nvariables. 
Honestly, the fact that you can do that in the first place\nseems more like an undesirable wart necessitated by the way loadable\nmodules interact with the GUC system than a feature -- but even if it\nwere a feature, it's not the same feature.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 11:13:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "I wrote:\n> Attached is a patch series that attempts to modernize our GUC\n> infrastructure, in particular removing the performance bottlenecks\n> it has when there are lots of GUC variables.\n\nRebased over 0a20ff54f.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 13 Sep 2022 12:17:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Modernizing our GUC infrastructure"
},
{
"msg_contents": "I wrote:\n>> Attached is a patch series that attempts to modernize our GUC\n>> infrastructure, in particular removing the performance bottlenecks\n>> it has when there are lots of GUC variables.\n\n> Rebased over 0a20ff54f.\n\nHere's a v3 rebased up to HEAD. The only real change is that I added\na couple of \"Assert(GetMemoryChunkContext(ptr) == GUCMemoryContext)\"\nchecks in hopes of improving detection of not-updated code that is\nstill using malloc/free where it should be using guc_malloc/guc_free.\nThis is per the nearby discussion of whether the mcxt.c infrastructure\ncould recognize that [1]. I experimented a bit with leaving out parts\nof the 0002 patch to simulate such mistakes, and at least on a Linux\nbox that seems to produce fairly intelligible errors now. In the case\nof free'ing a palloc'd pointer, what you get is a message from glibc\nfollowed by abort(), so their error detection is pretty solid too.\n\nI'm feeling pretty good about this patchset now. Does anyone want\nto review it further?\n\n\t\t\tregards, tom lane\n\n[1] https://postgr.es/m/2910981.1665080361%40sss.pgh.pa.us",
"msg_date": "Fri, 07 Oct 2022 15:31:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Modernizing our GUC infrastructure"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nJust trying the new column/row filter on v15, I found this issue that\ncould be replicated very easily.\n\n\"\"\"\npostgres=# create table t1(i serial primary key);\nCREATE TABLE\npostgres=# alter table t1 drop i;\nALTER TABLE\npostgres=# alter table t1 add id serial primary key;\nALTER TABLE\npostgres=# create publication pub_t1 for table t1;\nCREATE PUBLICATION\n\npostgres=# select * from pg_publication_tables where pubname = 'pub_t1' \\gx\n-[ RECORD 1 ]---------------------------------\npubname | pub_t1\nschemaname | public\ntablename | t1\nattnames | {........pg.dropped.1........,id}\nrowfilter |\n\"\"\"\n\nThis could be solved by adding a \"NOT attisdropped\", simple patch\nattached.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL",
"msg_date": "Mon, 5 Sep 2022 21:49:45 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "pg_publication_tables show dropped columns"
},
{
"msg_contents": "Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> Just trying the new column/row filter on v15, I found this issue that\n> could be replicated very easily.\n\nBleah. Post-beta4 catversion bump, here we come.\n\n> This could be solved by adding a \"NOT attisdropped\", simple patch\n> attached.\n\nThat view seems quite inefficient as written --- I wonder if we\ncan't do better by nuking the join-to-unnest business and putting\nthe restriction in a WHERE clause on the pg_attribute scan.\nThe query plan that you get for it right now is certainly awful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Sep 2022 23:13:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_publication_tables show dropped columns"
},
{
"msg_contents": "On Tuesday, September 6, 2022 11:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> > Just trying the new column/row filter on v15, I found this issue that\n> > could be replicated very easily.\n> \n> Bleah. Post-beta4 catversion bump, here we come.\n\nOh, sorry for the miss.\n\n> > This could be solved by adding a \"NOT attisdropped\", simple patch\n> > attached.\n> \n> That view seems quite inefficient as written --- I wonder if we can't do better by\n> nuking the join-to-unnest business and putting the restriction in a WHERE\n> clause on the pg_attribute scan.\n> The query plan that you get for it right now is certainly awful.\n\nI agree and tried to improve the query as suggested.\n\nHere is the new version of the patch.\nI think the query plan and cost look better after applying the patch.\n\nBest regards,\nHou zj",
"msg_date": "Tue, 6 Sep 2022 07:57:43 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_publication_tables show dropped columns"
},
{
"msg_contents": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com> writes:\n> Here is the new version patch.\n> I think the query plan and cost looks better after applying the patch.\n\nLGTM, pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Sep 2022 18:01:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_publication_tables show dropped columns"
},
{
"msg_contents": "On Wednesday, September 7, 2022 6:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Subject: Re: pg_publication_tables show dropped columns\n> \n> \"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com> writes:\n> > Here is the new version patch.\n> > I think the query plan and cost looks better after applying the patch.\n> \n> LGTM, pushed.\n\nThanks for pushing.\n\nBest regards,\nHou zj\n\n\n",
"msg_date": "Wed, 7 Sep 2022 04:51:48 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_publication_tables show dropped columns"
}
] |
[
{
"msg_contents": "I've noticed that some callers of PathNameOpenFile()\n(e.g. bbsink_server_begin_archive()) consider the call failed even if the\nfunction returned zero, while other ones do check whether the file descriptor\nis strictly negative. Since the file descriptor is actually returned by the\nopen() system call, I assume that zero is a valid result, isn't it?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 06 Sep 2022 09:26:08 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Return value of PathNameOpenFile()"
},
{
"msg_contents": "> On 6 Sep 2022, at 09:26, Antonin Houska <ah@cybertec.at> wrote:\n> \n> I've noticed that some callers of PathNameOpenFile()\n> (e.g. bbsink_server_begin_archive()) consider the call failed even if the\n> function returned zero, while other ones do check whether the file descriptor\n> is strictly negative. Since the file descriptor is actually returned by the\n> open() system call, I assume that zero is a valid result, isn't it?\n\nAgreed, zero should be valid as it's a non-negative integer. However, callers\nin fd.c are themselves checking for (fd <= 0) in some cases, and some have done\nso since the very early days of the codebase, so I wonder if there historically\nused to be a platform which considered 0 an invalid fd?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 6 Sep 2022 09:51:43 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Return value of PathNameOpenFile()"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Agreed, zero should be valid as it's a non-negative integer. However, callers\n> in fd.c are themselves checking for (fd <= 0) in some cases, and some have done\n> so since the very early days of the codebase, so I wonder if there historically\n> used to be a platform which considered 0 an invalid fd?\n\nI'm betting it's a thinko that never got caught because 0 would\nalways be taken up by stdin. Maybe you'd notice if you tried to\nclose-and-reopen stdin, but that's not something the server ever does.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Sep 2022 10:12:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Return value of PathNameOpenFile()"
},
{
"msg_contents": "> On 6 Sep 2022, at 16:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Agreed, zero should be valid as it's a non-negative integer. However, callers\n>> in fd.c are themselves checking for (fd <= 0) in some cases, and some have done\n>> so since the very early days of the codebase, so I wonder if there historically\n>> used to be a platform which considered 0 an invalid fd?\n> \n> I'm betting it's a thinko that never got caught because 0 would\n> always be taken up by stdin. Maybe you'd notice if you tried to\n> close-and-reopen stdin, but that's not something the server ever does.\n\nDoh, of course. The attached is a quick (lightly make check tested) take on\nallowing 0, but I'm not sure that's what we want?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 6 Sep 2022 20:47:49 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Return value of PathNameOpenFile()"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Doh, of course. The attached is a quick (lightly make check tested) take on\n> allowing 0, but I'm not sure that's what we want?\n\nActually, wait a second. At least some of these are not dealing\nwith kernel FDs but with our \"virtual FD\" abstraction. For those,\nzero is indeed an invalid value. There might well be some \"<= 0\"\nversus \"< 0\" errors in this area, but it requires closer inspection.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Sep 2022 15:44:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Return value of PathNameOpenFile()"
},
{
"msg_contents": "> On 6 Sep 2022, at 21:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Doh, of course. The attached is a quick (lightly make check tested) take on\n>> allowing 0, but I'm not sure that's what we want?\n> \n> Actually, wait a second. At least some of these are not dealing\n> with kernel FDs but with our \"virtual FD\" abstraction. For those,\n> zero is indeed an invalid value.\n\nYes and no, I think; PathNameOpenFile kind of returns a Vfd on success but a\nkernel fd on failure which makes this a bit confusing. Abbreviated for space,\nthe code looks like this:\n\n file = AllocateVfd();\n vfdP = &VfdCache[file];\n ...\n\n vfdP->fd = BasicOpenFilePerm(fileName, fileFlags, fileMode);\n\n if (vfdP->fd < 0)\n {\n ...\n return -1;\n }\n\n ...\n return file;\n\nSo if the underlying BasicOpenFilePerm fails then open(2) failed and we return\n-1, which is what open(2) returned. If it succeeds, then we return the Vfd\nreturned by AllocateVfd which can never be zero, as thats the VFD_CLOSED\nringbuffer anchor. Since AllocateVfd doesn't return on error, it's easy to\nconfuse oneself on exactly which error is propagated.\n\nChecking for (fd <= 0) on an fd returned from PathNameOpenFile is thus not\nwrong, albeit wearing both belts and suspenders since it can never return 0.\nChanging them seem like codechurn for no reason even if the check for 0 is\nsuperfluous. The callsites that only check for (fd < 0) are equally correct,\nand need not be changed.\n\nCallers of BasicOpenFile need however allow for zero since they get the kernel\nfd back, which AFAICT from scanning that they all do.\n\nSo in summary, I can't spot a callsite which isn't safe.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 7 Sep 2022 12:32:05 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Return value of PathNameOpenFile()"
}
] |
[
{
"msg_contents": "\nIn commit dc7420c2c9, the RecentGlobalXmin variable is removed, however,\nthere are some places that reference it.\n\n$ grep 'RecentGlobalXmin' -rn src/\nsrc/backend/replication/logical/launcher.c:101: * the secondary effect that it sets RecentGlobalXmin. (This is critical\nsrc/backend/utils/init/postinit.c:790: * interested in the secondary effect that it sets RecentGlobalXmin. (This\nsrc/backend/postmaster/autovacuum.c:1898: * the secondary effect that it sets RecentGlobalXmin. (This is critical\n\nIt's out-of-date, isn't it? I'm not sure s/RecentGlobalXmin/RecentXmin/g\nis right. Any thoughts?\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 06 Sep 2022 16:03:32 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Out-of-date comments about RecentGlobalXmin?"
},
{
"msg_contents": "Hi,\n\nRegards,\nZhang Mingli\nOn Sep 6, 2022, 16:03 +0800, Japin Li <japinli@hotmail.com>, wrote:\n>\n> In commit dc7420c2c9, the RecentGlobalXmin variable is removed, however,\n> there are some places that reference it.\n>\n> $ grep 'RecentGlobalXmin' -rn src/\n> src/backend/replication/logical/launcher.c:101: * the secondary effect that it sets RecentGlobalXmin. (This is critical\n> src/backend/utils/init/postinit.c:790: * interested in the secondary effect that it sets RecentGlobalXmin. (This\n> src/backend/postmaster/autovacuum.c:1898: * the secondary effect that it sets RecentGlobalXmin. (This is critical\n>\nYeah, RecentGlobalXmin is removed.\n> It's out-of-date, doesn't it? I'm not sure s/RecentGlobalXmin/RecentXmin/g\n> is right. Any thoughts?\nI’m afraid not, RecentGlobalXmin is split to several GlobalVis* variables.\nNeed to check one by one.",
"msg_date": "Tue, 6 Sep 2022 16:10:18 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-date comments about RecentGlobalXmin?"
},
{
"msg_contents": "> On 6 Sep 2022, at 10:10, Zhang Mingli <zmlpostgres@gmail.com> wrote:\n> On Sep 6, 2022, 16:03 +0800, Japin Li <japinli@hotmail.com>, wrote:\n\n> It's out-of-date, doesn't it? I'm not sure s/RecentGlobalXmin/RecentXmin/g\n> is right. Any thoughts?\n> I’m afraid not, RecentGlobalXmin is split to several GlobalVis* variables.\n> Need to check one by one.\n\nIt's a set of functions actually and not variables IIRC.\n\nIt's worth looking at the entire comment and not just the grep output though,\nas these three places share the exact same comment. Note the second paragraph:\n\n /*\n * Start a new transaction here before first access to db, and get a\n * snapshot. We don't have a use for the snapshot itself, but we're\n * interested in the secondary effect that it sets RecentGlobalXmin. (This\n * is critical for anything that reads heap pages, because HOT may decide\n * to prune them even if the process doesn't attempt to modify any\n * tuples.)\n *\n * FIXME: This comment is inaccurate / the code buggy. A snapshot that is\n * not pushed/active does not reliably prevent HOT pruning (->xmin could\n * e.g. be cleared when cache invalidations are processed).\n */\n\nThis was added in dc7420c2c92 which removed RecentGlobalXmin, addressing that\nFIXME would of course be very welcome.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 6 Sep 2022 10:17:37 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-date comments about RecentGlobalXmin?"
},
{
"msg_contents": "\nOn Tue, 06 Sep 2022 at 16:17, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 6 Sep 2022, at 10:10, Zhang Mingli <zmlpostgres@gmail.com> wrote:\n>> On Sep 6, 2022, 16:03 +0800, Japin Li <japinli@hotmail.com>, wrote:\n>\n>> It's out-of-date, doesn't it? I'm not sure s/RecentGlobalXmin/RecentXmin/g\n>> is right. Any thoughts?\n>> I’m afraid not, RecentGlobalXmin is split to several GlobalVis* variables.\n>> Need to check one by one.\n>\n> It's a set of functions actually and not variables IIRC.\n>\n> It's worth looking at the entire comment and not just the grep output though,\n> as these three places share the exact same comment. Note the second paragraph:\n>\n> /*\n> * Start a new transaction here before first access to db, and get a\n> * snapshot. We don't have a use for the snapshot itself, but we're\n> * interested in the secondary effect that it sets RecentGlobalXmin. (This\n> * is critical for anything that reads heap pages, because HOT may decide\n> * to prune them even if the process doesn't attempt to modify any\n> * tuples.)\n> *\n> * FIXME: This comment is inaccurate / the code buggy. A snapshot that is\n> * not pushed/active does not reliably prevent HOT pruning (->xmin could\n> * e.g. be cleared when cache invalidations are processed).\n> */\n>\n> This was added in dc7420c2c92 which removed RecentGlobalXmin, addressing that\n> FIXME would of course be very welcome.\n\nMy bad! Thanks for pointing this out.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 06 Sep 2022 16:22:54 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Out-of-date comments about RecentGlobalXmin?"
}
] |
[
{
"msg_contents": "Hello!\n\nFound a periodic spike growth of the checkpoint_req counter on a replica by 20-30 units\nafter a large insert (~350Mb) on the master.\nReproduction on master and replica with default conf:\n1) Execute the command \"insert into test values (generate_series(1,1E7));\".\nThis leads to the table growing by about 350Mb during about 15 secs (on my PC).\n2) The WAL records start coming to the replica, and when their number exceeds a certain limit, a request is emitted to the checkpointer process to create a restartpoint on the replica and checkpoint_req is incremented. With default settings, this limit is 42 segments.\n3) Restartpoint creation fails because a new restartpoint can only be created if the replica has received new WAL records about a checkpoint since the moment of the previous restartpoint. But there were no such records.\n4) When the next WAL segment is received by the replica, the next request to create a restartpoint on the replica is generated, and so on.\n5) Finally, a WAL record about the checkpoint arrives on the replica, a restartpoint is created and the growth of checkpoint_req stops.\nThe described process can be observed in the log with additional debugging. See insert_1E7_once.log attached. This\nlog is for v13 but master has the same behavior.\n\nCan we treat such behavior as a bug?\nIf so, it seems possible to check whether creating a restartpoint is obviously impossible before sending the request, and not send it at all if so.\n\nThe attached patch tries to fix it.\n\nWith best regards.\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 6 Sep 2022 14:02:53 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "May be BUG. Periodic burst growth of the checkpoint_req counter on\n replica."
},
{
"msg_contents": "At Tue, 6 Sep 2022 14:02:53 +0300, \"Anton A. Melnikov\" <aamelnikov@inbox.ru> wrote in \n> Can we treat such behavior as a bug?\n\nDepends on how we see the counter value. I think this can be annoying\nbut not a bug. CreateRestartPoint already handles that case.\n\nWhile the standby is catching up well, startup may make requests once per\nsegment switch while the primary is running the latest checkpoint, since the\nstandby won't receive a checkpoint record until the primary ends the\nlast checkpoint.\n\n> If so it seems possible to check if a creating of restartpoint is\n> obviously impossible before sending request and don't send it at all\n> if so.\n> \n> The patch applied tries to fix it.\n\nIt lets XLogPageRead run the same check with what CreateRestartPoint\ndoes, so it basically works (it is forgetting a lock, though. The\nreason for omitting the lock in CreateRestartPoint is that it knows\nthe checkpointer is the only updater of the shared variable.). I'm not\nsure I like that for the code duplication.\n\n\nI'm not sure we need to fix that but if we do that, I would implement\nIsNewCheckPointWALRecs() using XLogCtl->RedoRecPtr and\nXLogCtl->lastCheckPoint.redo instead since they are protected by the\nsame lock, and they work in a more correct way, that is, they can avoid\nrestartpoint requests while the last checkpoint is running. And I\nwould rename it as RestartPointAvailable() or something like that.\n\nOr I might want to add XLogRestartpointNeeded(readSegNo) to reduce the\nrequired number of info_lck by reading XLogCtl members at once.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Sep 2022 16:39:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req\n counter on replica."
},
{
"msg_contents": "Hello!\n\nThank you very much for your feedback and essential remarks.\n\nOn 07.09.2022 10:39, Kyotaro Horiguchi wrote:\n> \n> It lets XLogPageRead run the same check with what CreateRestartPoint\n> does, so it basically works (it is forgetting a lock, though. The\n> reason for omitting the lock in CreateRestartPoint is that it knows\n> the checkpointer is the only updater of the shared variable.). I'm not\n> sure I like that for the code duplication.\n> \n> I'm not sure we need to fix that but if we do that, I would implement\n> IsNewCheckPointWALRecs() using XLogCtl->RedoRecPtr and\n> XLogCtl->lastCheckPoint.redo instead since they are protected by the\n> same lock, and they work in a more correct way, that is, they can avoid\n> restartpoint requests while the last checkpoint is running. And I\n> would rename it as RestartPointAvailable() or something like that.\n\nThe corrected patch is attached (v2-0001-Fix-burst-checkpoint_req-growth.patch).\nThe access to ControlFile was removed so the lwlock seems to be not needed.\nSome logic duplication is still present and I'm not quite sure if\nit's possible to get rid of it. Would be glad for any suggestions.\n\n> Or I might want to add XLogRestartpointNeeded(readSegNo) to reduce the\n> required number of info_lck by reading XLogCtl members at once.\n\nIf we place this check into XLogCheckpointNeeded(), this will lead to a double\ntake of info_lck in XLogPageRead() when the restartpoint request is forming.\nAs it's done now, info_lck will be taken more rarely.\nIt seems I probably didn't understand your idea, please clarify it for me.\n\n> Depends on how we see the counter value. I think this can be annoying\n> but not a bug. CreateRestartPoint already handles that case.\n\nYes! It is in fact annoying, as the docs say that checkpoint_req counts\n\"the number of requested checkpoints that have been performed\".\nBut really checkpoints_req counts both the number of checkpoint requests\nand restartpoint ones, which may not be performed, so resources are not spent.\nThe second frightening factor is the several times faster growth\nof the checkpoints_timed counter on the replica vs the primary due to retries\nscheduled in 15 seconds if an attempt to create the restartpoint failed.\n\nHere is a patch that leaves all logic as is, but adds stats about\nrestartpoints. (v1-0001-Add-restartpoint-stats.patch)\n\nFor instance, for the same period on the primary with this patch:\n# SELECT CURRENT_TIME; select * from pg_stat_bgwriter \\\\gx\n current_time\n--------------------\n 00:19:15.794561+03\n(1 row)\n\n-[ RECORD 1 ]---------+-----------------------------\ncheckpoints_timed | 4\ncheckpoints_req | 10\nrestartpoints_timed | 0\nrestartpoints_req | 0\nrestartpoints_done | 0\n\nOn the replica:\n# SELECT CURRENT_TIME; select * from pg_stat_bgwriter \\\\gx\n current_time\n--------------------\n 00:19:11.363009+03\n(1 row)\n\n-[ RECORD 1 ]---------+------------------------------\ncheckpoints_timed | 0\ncheckpoints_req | 0\nrestartpoints_timed | 42\nrestartpoints_req | 67\nrestartpoints_done | 10\n\nOnly the counters checkpoints_timed, checkpoints_req and restartpoints_done give\nan indication of resource-intensive operations.\nWithout this patch, the user would see on the replica something like this:\n\ncheckpoints_timed | 42\ncheckpoints_req | 109\n\n\nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 19 Sep 2022 01:29:21 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "Hi,\n\nOn 2022-09-19 01:29:21 +0300, Anton A. Melnikov wrote:\n> Corrected patch is attached (v2-0001-Fix-burst-checkpoint_req-growth.patch).\n\nThis patch doesn't pass the main regression tests successfully:\n\nhttps://cirrus-ci.com/task/5502700019253248\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/rules.out /tmp/cirrus-ci-build/build/testrun/regress/regress/results/rules.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/rules.out\t2022-12-06 05:49:53.687424000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/regress/regress/results/rules.out\t2022-12-06 05:53:04.642690000 +0000\n@@ -1816,6 +1816,9 @@\n FROM pg_stat_get_archiver() s(archived_count, last_archived_wal, last_archived_time, failed_count, last_failed_wal, last_failed_time, stats_reset);\n pg_stat_bgwriter| SELECT pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints_timed,\n pg_stat_get_bgwriter_requested_checkpoints() AS checkpoints_req,\n+ pg_stat_get_bgwriter_timed_restartpoints() AS restartpoints_timed,\n+ pg_stat_get_bgwriter_requested_restartpoints() AS restartpoints_req,\n+ pg_stat_get_bgwriter_performed_restartpoints() AS restartpoints_done,\n pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 10:44:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "Hello!\n\nOn 06.12.2022 21:44, Andres Freund wrote:\n> Hi,\n> \n> On 2022-09-19 01:29:21 +0300, Anton A. Melnikov wrote:\n>> Corrected patch is attached (v2-0001-Fix-burst-checkpoint_req-growth.patch).\n> \n> This patch doesn't pass the main regression tests successfully:\n> \n> https://cirrus-ci.com/task/5502700019253248\n> \n> diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/rules.out /tmp/cirrus-ci-build/build/testrun/regress/regress/results/rules.out\n> --- /tmp/cirrus-ci-build/src/test/regress/expected/rules.out\t2022-12-06 05:49:53.687424000 +0000\n> +++ /tmp/cirrus-ci-build/build/testrun/regress/regress/results/rules.out\t2022-12-06 05:53:04.642690000 +0000\n> @@ -1816,6 +1816,9 @@\n> FROM pg_stat_get_archiver() s(archived_count, last_archived_wal, last_archived_time, failed_count, last_failed_wal, last_failed_time, stats_reset);\n> pg_stat_bgwriter| SELECT pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints_timed,\n> pg_stat_get_bgwriter_requested_checkpoints() AS checkpoints_req,\n> + pg_stat_get_bgwriter_timed_restartpoints() AS restartpoints_timed,\n> + pg_stat_get_bgwriter_requested_restartpoints() AS restartpoints_req,\n> + pg_stat_get_bgwriter_performed_restartpoints() AS restartpoints_done,\n> pg_stat_get_checkpoint_write_time() AS checkpoint_write_time,\n> pg_stat_get_checkpoint_sync_time() AS checkpoint_sync_time,\n> pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,\n> \n> Greetings,\n> \n> Andres Freund\n\nThank you for pointing that out!\n\nI didn't think that the patch tester would apply both patch variants simultaneously,\nassuming that these are two different possible solutions to the problem.\nBut it's even good, let it check both at once!\n\nThere was an error in the second variant (Add-restartpoint-stats); I forgot to correct the rules.out.\nSo I fixed the second variant and rebased the first one (Fix-burst-checkpoint_req-growth)\nonto the current master.\n\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 7 Dec 2022 11:03:45 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "Hello!\n\n\nOn 15.03.2023 21:29, Gregory Stark (as CFM) wrote:\n\n> These patches that are \"Needs Review\" and have received no comments at\n> all since before March 1st are these. If your patch is amongst this\n> list I would suggest any of:\n> \n> 1) Move it yourself to the next CF (or withdraw it)\n> 2) Post to the list with any pending questions asking for specific\n> feedback -- it's much more likely to get feedback than just a generic\n> \"here's a patch plz review\"...\n> 3) Mark it Ready for Committer and possibly post explaining the\n> resolution to any earlier questions to make it easier for a committer\n> to understand the state\n>\n\nThere are two different patch variants and some discussion is expected,\nso I moved them to the next CF.\n\nWith the best wishes!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 16 Mar 2023 15:39:00 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "Hi, Anton!\n\nOn Thu, Mar 16, 2023 at 2:39 PM Anton A. Melnikov <aamelnikov@inbox.ru> wrote:\n> On 15.03.2023 21:29, Gregory Stark (as CFM) wrote:\n>\n> > These patches that are \"Needs Review\" and have received no comments at\n> > all since before March 1st are these. If your patch is amongst this\n> > list I would suggest any of:\n> >\n> > 1) Move it yourself to the next CF (or withdraw it)\n> > 2) Post to the list with any pending questions asking for specific\n> > feedback -- it's much more likely to get feedback than just a generic\n> > \"here's a patch plz review\"...\n> > 3) Mark it Ready for Committer and possibly post explaining the\n> > resolution to any earlier questions to make it easier for a committer\n> > to understand the state\n> >\n>\n> There are two different patch variants and some discussion expected.\n> So moved them to the next CF.\n\nThank you for your detailed observation regarding the spike growth of\nthe checkpoint_req counter on the replica following a large insert\noperation on the master. After reviewing your description and the\nlog, I agree with Kyotaro Horiguchi that the behavior you've outlined,\nthough potentially perceived as annoying, does not constitute a bug in\nPostgreSQL.\n\nAfter examining the second patch\n(\"v2-0001-Add-restartpoint-stats.patch\"), it appears that adding\nadditional statistics as outlined in the patch is the most suitable\napproach to address the concerns raised. This solution provides more\nvisibility into the system's behavior without altering its core\nmechanics. However, it's essential that this additional functionality\nis accompanied by comprehensive documentation to ensure clear\nunderstanding and ease of use by the PostgreSQL community.\n\nPlease consider expanding the documentation to include detailed\nexplanations of the new statistics and their implications in various\nscenarios.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 28 Nov 2023 20:34:22 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "Thanks for the remarks!\n\nOn 28.11.2023 21:34, Alexander Korotkov wrote:\n> After examining the second patch\n> (\"v2-0001-Add-restartpoint-stats.patch\"), it appears that adding\n> additional statistics as outlined in the patch is the most suitable\n> approach to address the concerns raised. This solution provides more\n> visibility into the system's behavior without altering its core\n> mechanics.\n\nAgreed. I left only this variant of the patch and reworked it due to commit 96f05261.\nSo the new counters are in the pg_stat_checkpointer view now.\nPlease see the v3-0001-add-restartpoints-stats.patch attached.\n\n\n> However, it's essential that this additional functionality\n> is accompanied by comprehensive documentation to ensure clear\n> understanding and ease of use by the PostgreSQL community.\n> \n> Please consider expanding the documentation to include detailed\n> explanations of the new statistics and their implications in various\n> scenarios.\n\nIn the separate v3-0002-doc-for-restartpoints-stats.patch I added the definitions\nof the new counters into the \"28.2.15. pg_stat_checkpointer\" section\nand an explanation of them with examples into the \"30.5. WAL Configuration\" one.\n\nWould be glad for any comments and concerns.\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 4 Dec 2023 04:49:59 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "Hi, Anton!\n\nOn Mon, Dec 4, 2023 at 3:50 AM Anton A. Melnikov <a.melnikov@postgrespro.ru>\nwrote:\n\n> Thanks for remarks!\n>\n> On 28.11.2023 21:34, Alexander Korotkov wrote:\n> > After examining the second patch\n> > (\"v2-0001-Add-restartpoint-stats.patch\"), it appears that adding\n> > additional statistics as outlined in the patch is the most suitable\n> > approach to address the concerns raised. This solution provides more\n> > visibility into the system's behavior without altering its core\n> > mechanics.\n>\n> Agreed. I left only this variant of the patch and rework it due to commit\n> 96f05261.\n> So the new counters is in the pg_stat_checkpointer view now.\n> Please see the v3-0001-add-restartpoints-stats.patch attached.\n>\n>\n> > However, it's essential that this additional functionality\n> > is accompanied by comprehensive documentation to ensure clear\n> > understanding and ease of use by the PostgreSQL community.\n> >\n> > Please consider expanding the documentation to include detailed\n> > explanations of the new statistics and their implications in various\n> > scenarios.\n>\n> In the separate v3-0002-doc-for-restartpoints-stats.patch i added the\n> definitions\n> of the new counters into the \"28.2.15. pg_stat_checkpointer\" section\n> and explanation of them with examples into the \"30.5.WAL Configuration\"\n> one.\n>\n> Would be glad for any comments and and concerns.\n>\n\nI made some grammar corrections to the docs and have written the commit\nmessage.\n\nI think this patch now looks good. I'm going to push this if there are no\nobjections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sat, 23 Dec 2023 00:04:12 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 12:04 AM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> On Mon, Dec 4, 2023 at 3:50 AM Anton A. Melnikov <a.melnikov@postgrespro.ru> wrote:\n>>\n>> Thanks for remarks!\n>>\n>> On 28.11.2023 21:34, Alexander Korotkov wrote:\n>> > After examining the second patch\n>> > (\"v2-0001-Add-restartpoint-stats.patch\"), it appears that adding\n>> > additional statistics as outlined in the patch is the most suitable\n>> > approach to address the concerns raised. This solution provides more\n>> > visibility into the system's behavior without altering its core\n>> > mechanics.\n>>\n>> Agreed. I left only this variant of the patch and rework it due to commit 96f05261.\n>> So the new counters is in the pg_stat_checkpointer view now.\n>> Please see the v3-0001-add-restartpoints-stats.patch attached.\n>>\n>>\n>> > However, it's essential that this additional functionality\n>> > is accompanied by comprehensive documentation to ensure clear\n>> > understanding and ease of use by the PostgreSQL community.\n>> >\n>> > Please consider expanding the documentation to include detailed\n>> > explanations of the new statistics and their implications in various\n>> > scenarios.\n>>\n>> In the separate v3-0002-doc-for-restartpoints-stats.patch i added the definitions\n>> of the new counters into the \"28.2.15. pg_stat_checkpointer\" section\n>> and explanation of them with examples into the \"30.5.WAL Configuration\" one.\n>>\n>> Would be glad for any comments and and concerns.\n>\n>\n> I made some grammar corrections to the docs and have written the commit message.\n>\n> I think this patch now looks good. I'm going to push this if there are no objections.\n\nPushed!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 25 Dec 2023 01:38:13 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On 25.12.2023 02:38, Alexander Korotkov wrote:\n\n> Pushed!\n\nThanks a lot!\n\nWith the best regards!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 25 Dec 2023 11:23:27 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 11:04 PM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n>\n> Hi, Anton!\n>\n> On Mon, Dec 4, 2023 at 3:50 AM Anton A. Melnikov <a.melnikov@postgrespro.ru> wrote:\n>>\n>> Thanks for remarks!\n>>\n>> On 28.11.2023 21:34, Alexander Korotkov wrote:\n>> > After examining the second patch\n>> > (\"v2-0001-Add-restartpoint-stats.patch\"), it appears that adding\n>> > additional statistics as outlined in the patch is the most suitable\n>> > approach to address the concerns raised. This solution provides more\n>> > visibility into the system's behavior without altering its core\n>> > mechanics.\n>>\n>> Agreed. I left only this variant of the patch and rework it due to commit 96f05261.\n>> So the new counters is in the pg_stat_checkpointer view now.\n>> Please see the v3-0001-add-restartpoints-stats.patch attached.\n>>\n>>\n>> > However, it's essential that this additional functionality\n>> > is accompanied by comprehensive documentation to ensure clear\n>> > understanding and ease of use by the PostgreSQL community.\n>> >\n>> > Please consider expanding the documentation to include detailed\n>> > explanations of the new statistics and their implications in various\n>> > scenarios.\n>>\n>> In the separate v3-0002-doc-for-restartpoints-stats.patch i added the definitions\n>> of the new counters into the \"28.2.15. pg_stat_checkpointer\" section\n>> and explanation of them with examples into the \"30.5.WAL Configuration\" one.\n>>\n>> Would be glad for any comments and and concerns.\n>\n>\n> I made some grammar corrections to the docs and have written the commit message.\n>\n> I think this patch now looks good. I'm going to push this if there are no objections.\n\nPer the docs, the sync_time, write_time and buffers_written only apply\nto checkpoints, not restartpoints. Is this correct? 
AFAICT from a\nquick look at the code they include both checkpoints and restartpoints\nin which case I think the docs should be clarified to indicate that?\n(Or if I'm wrong, and it doesn't include them, then shouldn't we have\nseparate counters for them?)\n\n//Magnus\n\n\n",
"msg_date": "Sat, 9 Mar 2024 15:38:00 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On Sat, Mar 9, 2024 at 4:38 PM Magnus Hagander <magnus@hagander.net> wrote:\n> Per the docs, the sync_time, write_time and buffers_written only apply\n> to checkpoints, not restartpoints. Is this correct? AFAICT from a\n> quick look at the code they include both checkpoints and restartpoints\n> in which case I think the docs should be clarified to indicate that?\n\nRight, these fields work as before reflecting both checkpoints and\nrestartpoints. Documentation said checkpoints implying restartpoints\nas well. Now that we distinguish stats for checkpoints and\nrestartpoints, we need to update the docs. Please, check the patch\nattached.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 11 Mar 2024 02:39:37 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On 11.03.2024 03:39, Alexander Korotkov wrote:\n> Now that we distinguish stats for checkpoints and\n> restartpoints, we need to update the docs. Please, check the patch\n> attached.\n\nMaybe bring the pg_stat_get_checkpointer_buffers_written() description in consistent with these changes?\nLike that:\n\n--- a/src/include/catalog/pg_proc.dat\n+++ b/src/include/catalog/pg_proc.dat\n@@ -5740 +5740 @@\n- descr => 'statistics: number of buffers written by the checkpointer',\n+ descr => 'statistics: number of buffers written during checkpoints and restartpoints',\n\nAnd after i took a fresh look at this patch i noticed a simple way to extract\nwrite_time and sync_time counters for restartpoints too.\n\nWhat do you think, is there a sense to do this?\n\n\nWith the best wishes,\n\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 11 Mar 2024 06:43:54 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 5:43 AM Anton A. Melnikov\n<a.melnikov@postgrespro.ru> wrote:\n> On 11.03.2024 03:39, Alexander Korotkov wrote:\n> > Now that we distinguish stats for checkpoints and\n> > restartpoints, we need to update the docs. Please, check the patch\n> > attached.\n>\n> Maybe bring the pg_stat_get_checkpointer_buffers_written() description in consistent with these changes?\n> Like that:\n>\n> --- a/src/include/catalog/pg_proc.dat\n> +++ b/src/include/catalog/pg_proc.dat\n> @@ -5740 +5740 @@\n> - descr => 'statistics: number of buffers written by the checkpointer',\n> + descr => 'statistics: number of buffers written during checkpoints and restartpoints',\n\nThis makes sense. I've included this into the revised patch.\n\n> And after i took a fresh look at this patch i noticed a simple way to extract\n> write_time and sync_time counters for restartpoints too.\n>\n> What do you think, is there a sense to do this?\n\nI'm not sure we need this. The ways we trigger checkpoints and\nrestartpoints are different. This is why we needed separate\nstatistics. But the process of writing buffers is the same.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 11 Mar 2024 11:48:25 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 11:48 AM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n>\n> On Mon, Mar 11, 2024 at 5:43 AM Anton A. Melnikov\n> <a.melnikov@postgrespro.ru> wrote:\n> > On 11.03.2024 03:39, Alexander Korotkov wrote:\n> > > Now that we distinguish stats for checkpoints and\n> > > restartpoints, we need to update the docs. Please, check the patch\n> > > attached.\n> >\n> > Maybe bring the pg_stat_get_checkpointer_buffers_written() description in consistent with these changes?\n> > Like that:\n> >\n> > --- a/src/include/catalog/pg_proc.dat\n> > +++ b/src/include/catalog/pg_proc.dat\n> > @@ -5740 +5740 @@\n> > - descr => 'statistics: number of buffers written by the checkpointer',\n> > + descr => 'statistics: number of buffers written during checkpoints and restartpoints',\n>\n> This makes sense. I've included this into the revised patch.\n\nPushed.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 14 Mar 2024 02:19:03 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On 14.03.2024 03:19, Alexander Korotkov wrote:\n> \n> Pushed.\n> \n\nThanks!\n\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 14 Mar 2024 10:08:50 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "\n\nOn 2024/03/14 9:19, Alexander Korotkov wrote:\n> On Mon, Mar 11, 2024 at 11:48 AM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n>>\n>> On Mon, Mar 11, 2024 at 5:43 AM Anton A. Melnikov\n>> <a.melnikov@postgrespro.ru> wrote:\n>>> On 11.03.2024 03:39, Alexander Korotkov wrote:\n>>>> Now that we distinguish stats for checkpoints and\n>>>> restartpoints, we need to update the docs. Please, check the patch\n>>>> attached.\n>>>\n>>> Maybe bring the pg_stat_get_checkpointer_buffers_written() description in consistent with these changes?\n>>> Like that:\n>>>\n>>> --- a/src/include/catalog/pg_proc.dat\n>>> +++ b/src/include/catalog/pg_proc.dat\n>>> @@ -5740 +5740 @@\n>>> - descr => 'statistics: number of buffers written by the checkpointer',\n>>> + descr => 'statistics: number of buffers written during checkpoints and restartpoints',\n>>\n>> This makes sense. I've included this into the revised patch.\n> \n> Pushed.\n\nIf I understand correctly, restartpoints_timed and restartpoints_done were\nseparated because a restartpoint can be skipped. restartpoints_timed counts\nwhen a restartpoint is triggered by a timeout, whether it runs or not,\nwhile restartpoints_done only tracks completed restartpoints.\n\nSimilarly, I believe checkpoints should be handled the same way.\nCheckpoints can also be skipped when the system is idle, but currently,\nnum_timed counts even the skipped ones, despite its documentation stating\nit's the \"Number of scheduled checkpoints that have been performed.\"\n\nWhy not separate num_timed into something like checkpoints_timed and\ncheckpoints_done to reflect these different counters?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Sat, 14 Sep 2024 00:20:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "Hi!\n\nOn 13.09.2024 18:20, Fujii Masao wrote:\n> \n> If I understand correctly, restartpoints_timed and restartpoints_done were\n> separated because a restartpoint can be skipped. restartpoints_timed counts\n> when a restartpoint is triggered by a timeout, whether it runs or not,\n> while restartpoints_done only tracks completed restartpoints.\n> \n> Similarly, I believe checkpoints should be handled the same way.\n> Checkpoints can also be skipped when the system is idle, but currently,\n> num_timed counts even the skipped ones, despite its documentation stating\n> it's the \"Number of scheduled checkpoints that have been performed.\"\n> \n> Why not separate num_timed into something like checkpoints_timed and\n> checkpoints_done to reflect these different counters?\n\n+1\nThis idea seems quite tenable to me.\n\nThere is a small clarification. Now if there were no skipped restartpoints then\nrestartpoints_done will be equal to restartpoints_timed + restartpoints_req.\nSimilar for checkpoints.\nSo i tried to introduce num_done counter for checkpoints in the patch attached.\n\nI'm not sure should we include testing for the case when num_done is less than\nnum_timed + num_requested to the regress tests. I haven't been able to get it in a short time yet.\n\nE.g. such a case may be obtained when an a error \"checkpoints are\noccurring too frequently\" as follows:\n-set checkpoint_timeout = 30 and checkpoint_warning = 40 in the postgresql.conf\n-start server\n-do periodically bulk insertions in the 1st client (e.g. 
insert into test values (generate_series(1,1E7));)\n-watch for pg_stat_checkpointer in the 2nd one:\n# SELECT CURRENT_TIME; select * from pg_stat_checkpointer;\n# \\watch\n\nAfter some time, in the log will appear:\n2024-09-16 16:38:47.888 MSK [193733] LOG: checkpoints are occurring too frequently (13 seconds apart)\n2024-09-16 16:38:47.888 MSK [193733] HINT: Consider increasing the configuration parameter \"max_wal_size\".\n\nAnd num_timed + num_requested will become greater than num_done.\n\nWould be nice to find some simpler and faster way.\n\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 16 Sep 2024 17:30:35 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "\n\nOn 2024/09/16 23:30, Anton A. Melnikov wrote:\n> +1\n> This idea seems quite tenable to me.\n> \n> There is a small clarification. Now if there were no skipped restartpoints then\n> restartpoints_done will be equal to restartpoints_timed + restartpoints_req.\n> Similar for checkpoints.\n> So i tried to introduce num_done counter for checkpoints in the patch attached.\n\nThanks for the patch! I believe this change is targeted for v18. For v17, however,\nwe should update the description of num_timed in the documentation. Thought?\nHere's a suggestion:\n\n\"Number of scheduled checkpoints due to timeout. Note that checkpoints may be\nskipped if the server has been idle since the last one, and this value counts\nboth completed and skipped checkpoints.\"\n\nRegarding the patch:\n \t\t\t\tif (do_restartpoint)\n \t\t\t\t\tPendingCheckpointerStats.restartpoints_performed++;\n+\t\t\t\telse\n+\t\t\t\t\tPendingCheckpointerStats.num_performed++;\n\nI expected the counter not to be incremented when a checkpoint is skipped,\nbut in this code, when a checkpoint is skipped, ckpt_performed is set to true,\ntriggering the counter increment. This seems wrong.\n\n\n> I'm not sure should we include testing for the case when num_done is less than\n> num_timed + num_requested to the regress tests. I haven't been able to get it in a short time yet.\n\nI'm not sure if that test is really necessary...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Tue, 17 Sep 2024 11:47:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On 2024/09/17 11:47, Fujii Masao wrote:\n> \n> \n> On 2024/09/16 23:30, Anton A. Melnikov wrote:\n>> +1\n>> This idea seems quite tenable to me.\n>>\n>> There is a small clarification. Now if there were no skipped restartpoints then\n>> restartpoints_done will be equal to restartpoints_timed + restartpoints_req.\n>> Similar for checkpoints.\n>> So i tried to introduce num_done counter for checkpoints in the patch attached.\n> \n> Thanks for the patch! I believe this change is targeted for v18. For v17, however,\n> we should update the description of num_timed in the documentation. Thought?\n> Here's a suggestion:\n> \n> \"Number of scheduled checkpoints due to timeout. Note that checkpoints may be\n> skipped if the server has been idle since the last one, and this value counts\n> both completed and skipped checkpoints.\"\n\nPatch attached.\nUnless there are any objections, I plan to commit this and back-patch it to v17.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 18 Sep 2024 19:21:10 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 1:21 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2024/09/17 11:47, Fujii Masao wrote:\n> >\n> >\n> > On 2024/09/16 23:30, Anton A. Melnikov wrote:\n> >> +1\n> >> This idea seems quite tenable to me.\n> >>\n> >> There is a small clarification. Now if there were no skipped restartpoints then\n> >> restartpoints_done will be equal to restartpoints_timed + restartpoints_req.\n> >> Similar for checkpoints.\n> >> So i tried to introduce num_done counter for checkpoints in the patch attached.\n> >\n> > Thanks for the patch! I believe this change is targeted for v18. For v17, however,\n> > we should update the description of num_timed in the documentation. Thought?\n> > Here's a suggestion:\n> >\n> > \"Number of scheduled checkpoints due to timeout. Note that checkpoints may be\n> > skipped if the server has been idle since the last one, and this value counts\n> > both completed and skipped checkpoints.\"\n>\n> Patch attached.\n> Unless there are any objections, I plan to commit this and back-patch it to v17.\n\nI've checked this patch, it looks good to me.\n\nGenerally, it looks like I should be in charge for this, given I've\ncommitted previous patch by Anton. Thank you for reacting here faster\nthan me. Please, go ahead with the patch.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Wed, 18 Sep 2024 15:22:57 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "Fujii, Alexander thanks a lot!\n\nOn 17.09.2024 05:47, Fujii Masao wrote:\n> \n> Regarding the patch:\n> if (do_restartpoint)\n> PendingCheckpointerStats.restartpoints_performed++;\n> + else\n> + PendingCheckpointerStats.num_performed++;\n> \n> I expected the counter not to be incremented when a checkpoint is skipped,\n> but in this code, when a checkpoint is skipped, ckpt_performed is set to true,\n> triggering the counter increment. This seems wrong.\n\nTried to fix it via returning bool value from the CreateCheckPoint()\nsimilarly to the CreateRestartPoint().\n\nAnd slightly adjusted the patch so that it could be applied after yours.\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 18 Sep 2024 17:35:26 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "\n\nOn 2024/09/18 21:22, Alexander Korotkov wrote:\n>> Patch attached.\n>> Unless there are any objections, I plan to commit this and back-patch it to v17.\n> \n> I've checked this patch, it looks good to me.\n> \n> Generally, it looks like I should be in charge for this, given I've\n> committed previous patch by Anton. Thank you for reacting here faster\n> than me. Please, go ahead with the patch.\n\nThanks for the review! Pushed!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Thu, 19 Sep 2024 02:21:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "\n\nOn 2024/09/18 23:35, Anton A. Melnikov wrote:\n> Fujii, Alexander thanks a lot!\n> \n> On 17.09.2024 05:47, Fujii Masao wrote:\n>>\n>> Regarding the patch:\n>> if (do_restartpoint)\n>> PendingCheckpointerStats.restartpoints_performed++;\n>> + else\n>> + PendingCheckpointerStats.num_performed++;\n>>\n>> I expected the counter not to be incremented when a checkpoint is skipped,\n>> but in this code, when a checkpoint is skipped, ckpt_performed is set to true,\n>> triggering the counter increment. This seems wrong.\n> \n> Tried to fix it via returning bool value from the CreateCheckPoint()\n> similarly to the CreateRestartPoint().\n> \n> And slightly adjusted the patch so that it could be applied after yours.\n\nThanks for updating the patch!\n\n-void\n+bool\n CreateCheckPoint(int flags)\n\nIt would be helpful to explain the new return value in the comment\nat the top of this function.\n\n-\t\t\t\tCreateCheckPoint(flags);\n-\t\t\t\tckpt_performed = true;\n+\t\t\t\tckpt_performed = CreateCheckPoint(flags);\n\nThis change could result in the next scheduled checkpoint being\ntriggered in 15 seconds if a checkpoint is skipped, which isn’t\nthe intended behavior.\n\n-{ oid => '2769',\n+{ oid => '6347',\n\nI don't think that the existing functions need to be reassigned new OIDs.\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Thu, 19 Sep 2024 03:04:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On 18.09.2024 21:04, Fujii Masao wrote:\n> \n> - CreateCheckPoint(flags);\n> - ckpt_performed = true;\n> + ckpt_performed = CreateCheckPoint(flags);\n> \n> This change could result in the next scheduled checkpoint being\n> triggered in 15 seconds if a checkpoint is skipped, which isn’t\n> the intended behavior.\n\nThanks for pointing this out! This is really bug.\nRearranged the logic a bit to save the previous behavior\nin the v3 attached.\n\n> -void\n> +bool\n> CreateCheckPoint(int flags)\n> \n> It would be helpful to explain the new return value in the comment\n> at the top of this function.\n\nSure. Added an info about return value to the comment.\n\n> -{ oid => '2769',\n> +{ oid => '6347',\n> \n> I don't think that the existing functions need to be reassigned new OIDs.\n\nOk. Left oids as is in the v3. Just added a new one for\npg_stat_get_checkpointer_num_performed().\n\n\nWith the best regards!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 19 Sep 2024 13:16:38 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On 2024/09/19 19:16, Anton A. Melnikov wrote:\n> \n> On 18.09.2024 21:04, Fujii Masao wrote:\n>>\n>> - CreateCheckPoint(flags);\n>> - ckpt_performed = true;\n>> + ckpt_performed = CreateCheckPoint(flags);\n>>\n>> This change could result in the next scheduled checkpoint being\n>> triggered in 15 seconds if a checkpoint is skipped, which isn’t\n>> the intended behavior.\n> \n> Thanks for pointing this out! This is really bug.\n> Rearranged the logic a bit to save the previous behavior\n> in the v3 attached.\n\nThanks for updating the patch!\n\nI've attached the updated version (0001.patch). I made some cosmetic changes,\nincluding reverting the switch in the entries for pg_stat_get_checkpointer_write_time\nand pg_stat_get_checkpointer_sync_time in pg_proc.dat, as I didn’t think\nthat change was necessary. Could you please review the latest version?\n\nAfter we commit 0001.patch, how about applying 0002.patch, which updates\nthe documentation for the pg_stat_checkpointer view to clarify what types\nof checkpoints and restartpoints each counter tracks?\n\nIn 0002.patch, I also modified the description of num_requested from\n\"Number of backend requested checkpoints\" to remove \"backend,\" as it can\nbe confusing since num_requested includes requests from sources other than\nthe backend. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Sat, 21 Sep 2024 01:19:32 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "On 20.09.2024 19:19, Fujii Masao wrote:\n> I've attached the updated version (0001.patch). I made some cosmetic changes,\n> including reverting the switch in the entries for pg_stat_get_checkpointer_write_time\n> and pg_stat_get_checkpointer_sync_time in pg_proc.dat, as I didn’t think\n> that change was necessary. Could you please review the latest version?\n\nThanks for corrections!\nAll looks good for me.\nAs for switching in the pg_proc.dat entries the idea was to put them in order\nso that the pg_stat_get_checkpointer* functions were grouped together.\nI don't know if this is the common and accepted practice. Simply i like it better this way.\nSure, if you think it's unnecessary, let it stay as is with minimal diff.\n\n\n> After we commit 0001.patch, how about applying 0002.patch, which updates\n> the documentation for the pg_stat_checkpointer view to clarify what types\n> of checkpoints and restartpoints each counter tracks?\n\nI liked that the short definitions of the counters are now separated from\nthe description of its work features which are combined into one paragraph.\nIt seems to me that is much more logical and easier to understand.\n\nIn addition, checkpoints may be skipped due to \"checkpoints are occurring\ntoo frequently\" error. Not sure, but maybe add this information to\nthe new description?\n\n> In 0002.patch, I also modified the description of num_requested from\n> \"Number of backend requested checkpoints\" to remove \"backend,\" as it can\n> be confusing since num_requested includes requests from sources other than\n> the backend. Thought?\n\nAgreed. E.g. from xlog. Then maybe changed it also in the function\ndescriptions in the pg_proc.dat? For pg_stat_get_checkpointer_num_requested()\nand pg_stat_get_checkpointer_restartpoints_requested().\n\n\nAlso checked v4 with the travis patch-tester. All is ok.\n\nWith the best wishes!\n\n-- \nAnton A. 
Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sun, 22 Sep 2024 07:55:11 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "\n\nOn 2024/09/22 13:55, Anton A. Melnikov wrote:\n> On 20.09.2024 19:19, Fujii Masao wrote:\n>> I've attached the updated version (0001.patch). I made some cosmetic changes,\n>> including reverting the switch in the entries for pg_stat_get_checkpointer_write_time\n>> and pg_stat_get_checkpointer_sync_time in pg_proc.dat, as I didn’t think\n>> that change was necessary. Could you please review the latest version?\n> \n> Thanks for corrections!\n> All looks good for me.\n\nThanks for the review! I've pushed the 0001 patch.\n\n\n> As for switching in the pg_proc.dat entries the idea was to put them in order\n> so that the pg_stat_get_checkpointer* functions were grouped together.\n> I don't know if this is the common and accepted practice. Simply i like it better this way.\n> Sure, if you think it's unnecessary, let it stay as is with minimal diff.\n\nI understand your point, but I didn't made that change to keep the diff minimal,\nwhich should make future back-patching easier.\n\n\n>> After we commit 0001.patch, how about applying 0002.patch, which updates\n>> the documentation for the pg_stat_checkpointer view to clarify what types\n>> of checkpoints and restartpoints each counter tracks?\n> \n> I liked that the short definitions of the counters are now separated from\n> the description of its work features which are combined into one paragraph.\n> It seems to me that is much more logical and easier to understand.\n\nThanks for the review!\n\n\n> In addition, checkpoints may be skipped due to \"checkpoints are occurring\n> too frequently\" error. Not sure, but maybe add this information to\n> the new description?\n\n From what I can see in the code, that error message doesn’t seem to indicate\nthe checkpoint is being skipped. In fact, checkpoints are still happening\nactually when that message appears. 
Am I misunderstanding something?\n\n\n>> In 0002.patch, I also modified the description of num_requested from\n>> \"Number of backend requested checkpoints\" to remove \"backend,\" as it can\n>> be confusing since num_requested includes requests from sources other than\n>> the backend. Thought?\n> \n> Agreed. E.g. from xlog. Then maybe changed it also in the function\n> descriptions in the pg_proc.dat? For pg_stat_get_checkpointer_num_requested()\n> and pg_stat_get_checkpointer_restartpoints_requested().\n\nYes, good catch!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Mon, 30 Sep 2024 12:26:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "\nOn 30.09.2024 06:26, Fujii Masao wrote:\n> Thanks for the review! I've pushed the 0001 patch.\n\nThanks a lot!\n\n>> As for switching in the pg_proc.dat entries the idea was to put them in order\n>> so that the pg_stat_get_checkpointer* functions were grouped together.\n>> I don't know if this is the common and accepted practice. Simply i like it better this way.\n>> Sure, if you think it's unnecessary, let it stay as is with minimal diff.\n> \n> I understand your point, but I didn't made that change to keep the diff minimal,\n> which should make future back-patching easier.\n\nAgreed. Its quite reasonable. I've not take into account the backporting\npossibility at all. This is of course wrong.\n\n>> In addition, checkpoints may be skipped due to \"checkpoints are occurring\n>> too frequently\" error. Not sure, but maybe add this information to\n>> the new description?\n> \n> From what I can see in the code, that error message doesn’t seem to indicate\n> the checkpoint is being skipped. In fact, checkpoints are still happening\n> actually when that message appears. Am I misunderstanding something?\n\nNo, you are right! This is my oversight. I didn't notice that elevel is just a log\nnot a error. Thanks!\n\n\nWith the best wishes,\n \n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 30 Sep 2024 10:00:00 +0300",
"msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
},
{
"msg_contents": "\n\nOn 2024/09/30 16:00, Anton A. Melnikov wrote:\n> \n> On 30.09.2024 06:26, Fujii Masao wrote:\n>> Thanks for the review! I've pushed the 0001 patch.\n> \n> Thanks a lot!\n> \n>>> As for switching in the pg_proc.dat entries the idea was to put them in order\n>>> so that the pg_stat_get_checkpointer* functions were grouped together.\n>>> I don't know if this is the common and accepted practice. Simply i like it better this way.\n>>> Sure, if you think it's unnecessary, let it stay as is with minimal diff.\n>>\n>> I understand your point, but I didn't made that change to keep the diff minimal,\n>> which should make future back-patching easier.\n> \n> Agreed. Its quite reasonable. I've not take into account the backporting\n> possibility at all. This is of course wrong.\n> \n>>> In addition, checkpoints may be skipped due to \"checkpoints are occurring\n>>> too frequently\" error. Not sure, but maybe add this information to\n>>> the new description?\n>>\n>> From what I can see in the code, that error message doesn’t seem to indicate\n>> the checkpoint is being skipped. In fact, checkpoints are still happening\n>> actually when that message appears. Am I misunderstanding something?\n> \n> No, you are right! This is my oversight. I didn't notice that elevel is just a log\n> not a error. Thanks!\n\nOk, so I pushed 0002.patch. Thanks for the review!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Tue, 1 Oct 2024 02:09:31 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: May be BUG. Periodic burst growth of the checkpoint_req counter\n on replica."
}
] |
[
{
"msg_contents": "Hi,\n\nI didn't understand the current wording of the NOTES section in\npsql(1) on which major versions psql is compatible with, so here's a\npatch to make that more explicit.\n\nChristoph",
"msg_date": "Tue, 6 Sep 2022 13:30:41 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "(doc patch) psql version compatibility"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> I didn't understand the current wording of the NOTES section in\n> psql(1) on which major versions psql is compatible with, so here's a\n> patch to make that more explicit.\n\nThis seems both very repetitive and incorrect in detail.\nSome operations will work fine with older servers, some won't.\nI don't think that we want to undertake documenting exactly\nwhich commands work how far back, so my inclination is to\nadd a nonspecific disclaimer along the lines of\n\n Some commands may not work with very old servers (before 9.2).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Sep 2022 10:21:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: (doc patch) psql version compatibility"
},
{
"msg_contents": "Re: Tom Lane\n> Christoph Berg <myon@debian.org> writes:\n> > I didn't understand the current wording of the NOTES section in\n> > psql(1) on which major versions psql is compatible with, so here's a\n> > patch to make that more explicit.\n> \n> This seems both very repetitive and incorrect in detail.\n\nMeh, I tried to preserve as much of the original text as reasonable,\nbut we can strip it down further.\n\n> Some operations will work fine with older servers, some won't.\n> I don't think that we want to undertake documenting exactly\n> which commands work how far back, so my inclination is to\n> add a nonspecific disclaimer along the lines of\n> \n> Some commands may not work with very old servers (before 9.2).\n\nI'd like it do say \"it works with 9.2\" when that's what the code is\nimplementing.\n\nHow about this?\n\n <para><application>psql</application> works with servers of the same\n or an older major version, back to 9.2. The general\n functionality of running SQL commands and displaying query results\n should also work with servers of other major versions, but\n backslash commands are particularly likely to fail.\n </para>\n <para>\n If you want to use <application>psql</application> to connect to several\n servers of different major versions, it is recommended that you use the\n newest version of <application>psql</application>. (To connect to pre-9.2 servers,\n use <application>psql</application> 14 or earlier.)\n </para>\n\n\nChristoph",
"msg_date": "Tue, 6 Sep 2022 16:40:37 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: (doc patch) psql version compatibility"
}
] |
[
{
"msg_contents": "The pg_upgrade manpage in PG 14 and earlier claims that upgrades from\n8.4 are supported, but that doesn't work:\n\n/usr/lib/postgresql/14/bin/pg_upgrade -b /usr/lib/postgresql/8.4/bin -B /usr/lib/postgresql/14/bin -p 5432 -P 5433 -d /var/lib/postgresql/8.4/upgr -o -D /etc/postgresql/8.4/upgr -D /etc/postgresql/14/upgr\nFinding the real data directory for the target cluster ok\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\nThe source cluster lacks some required control information:\n latest checkpoint oldestXID\n\nCannot continue without required control information, terminating\nFailure, exiting\n\n8.4 -> 14/13/12/11/10/9.6 are all broken in the same way (using the\ntarget version's pg_upgrade of course)\n\n9.0 -> 14 and 8.4 -> 9.5 work.\n\n8.4 -> 15 \"works\" in the sense of that the non-support is correctly\ndocumented in the manpage and in the pg_upgrade output:\n\n/usr/lib/postgresql/15/bin/pg_upgrade -b /usr/lib/postgresql/8.4/bin -B /usr/lib/postgresql/15/bin -p 5432 -P 5433 -d /var/lib/postgresql/8.4/upgr -o -D /etc/postgresql/8.4/upgr -D /etc/postgresql/15/upgr\nFinding the real data directory for the target cluster ok\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions\nThis utility can only upgrade from PostgreSQL version 9.2 and later.\nFailure, exiting\n\n\nIs that failure intentional, and just not documented properly, or is\nthat a bug?\n\nChristoph\n\n\n",
"msg_date": "Tue, 6 Sep 2022 13:50:10 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "pg_upgrade major version compatibility"
},
{
"msg_contents": "On Tue, Sep 06, 2022 at 01:50:10PM +0200, Christoph Berg wrote:\n> The pg_upgrade manpage in PG 14 and earlier claims that upgrades from\n> 8.4 are supported, but that doesn't work:\n\nTom discovered 2 months ago that this was broken since a year prior.\n\nhttps://www.postgresql.org/message-id/1973418.1657040382%40sss.pgh.pa.us\n\nEvidently ever the docs still aren't updated to say or, nor the tool to\nfail gracefully.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 6 Sep 2022 17:10:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade major version compatibility"
}
] |
[
{
"msg_contents": "New chapter on transaction management, plus a few related changes.\n\nMarkup and links are not polished yet, so please comment initially on\nthe topics, descriptions and wording.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 6 Sep 2022 16:16:02 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "Op 06-09-2022 om 17:16 schreef Simon Riggs:\n> New chapter on transaction management, plus a few related changes.\n> \n> Markup and links are not polished yet, so please comment initially on\n> the topics, descriptions and wording.\n\n> [xact_docs.v2.patch] \n\nVery clear explanations, thank you.\n\nTwo typos:\n\n'not yet yet part' should be\n'not yet part'\n\n'extenal' should be\n'external'\n\n\nErik\n\n\n",
"msg_date": "Tue, 6 Sep 2022 18:19:35 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 17:19, Erik Rijkers <er@xs4all.nl> wrote:\n>\n> Op 06-09-2022 om 17:16 schreef Simon Riggs:\n> > New chapter on transaction management, plus a few related changes.\n> >\n> > Markup and links are not polished yet, so please comment initially on\n> > the topics, descriptions and wording.\n>\n> > [xact_docs.v2.patch]\n>\n> Very clear explanations, thank you.\n>\n> Two typos:\n>\n> 'not yet yet part' should be\n> 'not yet part'\n>\n> 'extenal' should be\n> 'external'\n\nThanks, new version attached.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 6 Sep 2022 18:20:06 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Tue, Sep 06, 2022 at 04:16:02PM +0100, Simon Riggs wrote:\n> New chapter on transaction management, plus a few related changes.\n> \n> Markup and links are not polished yet, so please comment initially on\n> the topics, descriptions and wording.\n\nThis is useful. Nitpicks follow.\n\n\n+ If executed in a subtransaction this will return the top-level xid.\n\nI'd prefer it with a comma after \"subtransaction\"\n\n+SQL Standard. PostgreSQL supports SAVEPOINTs through the implementation\n+defined mechanism of subtransactions, which offer a superset of the required\n\nimplementation-defined\n\n+features. Prepared transactions allow PostgreSQL to implement what SQL\n\nwhat *the\n\n+<para>\n+All transactions are identified by a unique VirtualTransactionId (virtualxid or\n+vxid), e.g. 4/12532, which is comprised of the BackendId (in this example, 4)\n+and a sequentially assigned number unique to that backend, known as LocalXid\n\nsequentially-assigned\n\n+<para>\n+If a transaction attempts to write to the database it will be assigned a\n\ndatabase comma\n\n+property is used by the transaction system to say that if one xid is earlier\n+than another xid then the earlier transaction attempted to write before the\n\nxid comma then\n\n+<para>\n+Currently executing transactions are shown in the pg_locks view in columns\n\ncurrently-executing\n\n+\"virtualxid\" (text) and \"transactionid\" (xid), if an xid has been assigned.\n\nmaybe remove the \"if assigned\" part, since it's described next?\n\n+Read transactions will have a virtualxid but a NULL xid, while write\n+transactions will have both a virtualxid and an xid assigned.\n+</para>\n\n+<para>\n+Row-level read locks may require the assignment of a multixact ID (mxid), which\n+are recorded in the pg_multixact directory.\n\nwhich *is ?\n\n+top-level transaction. Subtransactions can also be started from other\n+subtransactions. 
As a result, the arrangement of transaction and subtransactions\n\ntransactions (plural) ?\n\n+form a hierarchy or tree. Thus, each subtransaction has one parent transaction.\n\n+At present in PostgreSQL, only one transaction or subtransaction can be active at\n+one time.\n\none time per command/query/request.\n\n+Subtransactions may end via a commit or abort without affecting their parent\n\nmay end either by committing or aborting, without ..\n\n+transaction, allowing the parent transaction to continue.\n\n+also be started in other ways, such as PL/pgSQL's EXCEPTION clause. PL/Python and\n+PL/TCL also support explicit subtransactions. Working with C API, users may also\n+call BeginInternalSubTransaction().\n\n*the C API ?\n\n+If a subtransaction is assigned an xid, we refer to this as a subxid. Read-only\n+subtransactions are not assigned a subxid, but when a subtransaction attempts to\n+write it will be assigned a subxid. We ensure that all of a subxid's parents, up\n\nwrite comma.\nOr say: \"subxid is not assigned until the subtransaction attempts to\nwrite\" ?\n\n+<para>\n+The more subtransactions each transaction uses, the greater the overhead for\n+transaction management. Up to 64 subxids are cached in shmem for each backend,\n\nbackend semicolon\n\n+Those commands are extensions to the SQL Standard, since the SQL syntax is not yet\n+yet part of the standard, though the Standard does refer to encompassing\n\nyet yet\n\n+Information relating to these is stored in pg_twophase. Currently prepared\n\ns/these/two-phase commits/\n\n+transactions can be inspected using pg_prepared_xacts view.\n+</para>\n\ncurrently-prepared ?\n\n\n",
"msg_date": "Tue, 6 Sep 2022 15:33:21 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 21:33, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Sep 06, 2022 at 04:16:02PM +0100, Simon Riggs wrote:\n> > New chapter on transaction management, plus a few related changes.\n> >\n> > Markup and links are not polished yet, so please comment initially on\n> > the topics, descriptions and wording.\n>\n> This is useful. Nitpicks follow.\n\nCool, thanks.\n\n\n> +At present in PostgreSQL, only one transaction or subtransaction can be active at\n> +one time.\n>\n> one time per command/query/request.\n\nApart from that comment, all points accepted, thank you.\n\nI've also added further notes about prepared transactions.\n\nI attach a diff against the original patch, plus a new clean copy.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Wed, 7 Sep 2022 13:04:46 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On 2022-Sep-06, Simon Riggs wrote:\n\n> On Tue, 6 Sept 2022 at 17:19, Erik Rijkers <er@xs4all.nl> wrote:\n> >\n> > Op 06-09-2022 om 17:16 schreef Simon Riggs:\n> > > New chapter on transaction management, plus a few related changes.\n> > >\n> > > Markup and links are not polished yet, so please comment initially on\n> > > the topics, descriptions and wording.\n\nI think the concept of XID epoch should be described also, in or near\nthe paragraph that talks about wrapping around and switching between\nint32 and int64. Right now it's a bit unclear why/how that works.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"\n\n\n",
"msg_date": "Thu, 8 Sep 2022 09:42:44 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Thu, 8 Sept 2022 at 08:42, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-06, Simon Riggs wrote:\n>\n> > On Tue, 6 Sept 2022 at 17:19, Erik Rijkers <er@xs4all.nl> wrote:\n> > >\n> > > Op 06-09-2022 om 17:16 schreef Simon Riggs:\n> > > > New chapter on transaction management, plus a few related changes.\n> > > >\n> > > > Markup and links are not polished yet, so please comment initially on\n> > > > the topics, descriptions and wording.\n>\n> I think the concept of XID epoch should be described also, in or near\n> the paragraph that talks about wrapping around and switching between\n> int32 and int64. Right now it's a bit unclear why/how that works.\n\nWill do\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 9 Sep 2022 12:53:32 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 8:05 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Tue, 6 Sept 2022 at 21:33, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Tue, Sep 06, 2022 at 04:16:02PM +0100, Simon Riggs wrote:\n> > > New chapter on transaction management, plus a few related changes.\n> > >\n> > > Markup and links are not polished yet, so please comment initially on\n> > > the topics, descriptions and wording.\n> >\n> I've also added further notes about prepared transactions.\n>\n> I attach a diff against the original patch, plus a new clean copy.\n>\n\nSome feedback on the v4 patch, hopefully useful.\n\n+<para>\n+Transactions may be started explicitly using BEGIN and COMMIT\ncommands, known as\n+a transaction block. A transaction will also be started and ended\nimplicitly for\n+each request when outside of a transaction block.\n+</para>\n\nTransactions may be managed explicitly using BEGIN and COMMIT commands, known as\na transaction block.\n\n\n+Committed subtransactions are recorded as committed if the main transaction\n+commits. The word subtransaction is often abbreviated to \"subxact\".\n+</para>\n\nCommitted subtransactions are only recorded as committed if the main\ntransaction commits,\notherwise any work done in a subtransaction will be rolled back or\naborted. The word subtransaction is\noften abbreviated as \"subxact\".\n\n+<para>\n+Subtransactions may be started explicitly by using the SAVEPOINT\ncommand, but may\n+also be started in other ways, such as PL/pgSQL's EXCEPTION clause.\nPL/Python and\n+PL/TCL also support explicit subtransactions. 
Working with the C API, users may\n+also call BeginInternalSubTransaction().\n+</para>\n\nI think this paragraph (or something similar) should be the opening\nparagraph for this section, so that readers are immediately given\ncontext for what PostgreSQL considers to be a subtransaction.\n\n\n+prepared transactions that were prepared before the last checkpoint.\nIn the typical\n+case, shorter-lived prepared transactions are stored only in shared\nmemory and WAL.\n+Currently-prepared transactions can be inspected using the\npg_prepared_xacts view.\n+</para>\n\nTransactions that are currently prepared can be inspected using the\npg_prepated_xacts view.\n\n* I thought the hyphenated wording looked odd, though I understand why\nyou used it. We don't use it elsewhere though (just `currently\nprepared` san hyphen) so re-worded to match the other wording.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Sun, 11 Sep 2022 16:35:13 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Sun, 11 Sept 2022 at 21:35, Robert Treat <rob@xzilla.net> wrote:\n>\n> On Wed, Sep 7, 2022 at 8:05 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > On Tue, 6 Sept 2022 at 21:33, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Tue, Sep 06, 2022 at 04:16:02PM +0100, Simon Riggs wrote:\n> > > > New chapter on transaction management, plus a few related changes.\n> > > >\n> > > > Markup and links are not polished yet, so please comment initially on\n> > > > the topics, descriptions and wording.\n> > >\n> > I've also added further notes about prepared transactions.\n> >\n> > I attach a diff against the original patch, plus a new clean copy.\n> >\n>\n> Some feedback on the v4 patch, hopefully useful.\n>\n> +<para>\n> +Transactions may be started explicitly using BEGIN and COMMIT\n> commands, known as\n> +a transaction block. A transaction will also be started and ended\n> implicitly for\n> +each request when outside of a transaction block.\n> +</para>\n>\n> Transactions may be managed explicitly using BEGIN and COMMIT commands, known as\n> a transaction block.\n>\n>\n> +Committed subtransactions are recorded as committed if the main transaction\n> +commits. The word subtransaction is often abbreviated to \"subxact\".\n> +</para>\n>\n> Committed subtransactions are only recorded as committed if the main\n> transaction commits,\n> otherwise any work done in a subtransaction will be rolled back or\n> aborted. The word subtransaction is\n> often abbreviated as \"subxact\".\n>\n> +<para>\n> +Subtransactions may be started explicitly by using the SAVEPOINT\n> command, but may\n> +also be started in other ways, such as PL/pgSQL's EXCEPTION clause.\n> PL/Python and\n> +PL/TCL also support explicit subtransactions. 
Working with the C API, users may\n> +also call BeginInternalSubTransaction().\n> +</para>\n>\n> I think this paragraph (or something similar) should be the opening\n> paragraph for this section, so that readers are immediately given\n> context for what PostgreSQL considers to be a subtransaction.\n>\n>\n> +prepared transactions that were prepared before the last checkpoint.\n> In the typical\n> +case, shorter-lived prepared transactions are stored only in shared\n> memory and WAL.\n> +Currently-prepared transactions can be inspected using the\n> pg_prepared_xacts view.\n> +</para>\n>\n> Transactions that are currently prepared can be inspected using the\n> pg_prepated_xacts view.\n>\n> * I thought the hyphenated wording looked odd, though I understand why\n> you used it. We don't use it elsewhere though (just `currently\n> prepared` san hyphen) so re-worded to match the other wording.\n\nThanks Robert. I've tried to accommodate all of your thoughts, plus Alvaro's.\n\nNew v5 attached.\n\nHappy to receive further comments.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 13 Sep 2022 15:02:34 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 03:02:34PM +0100, Simon Riggs wrote:\n> Thanks Robert. I've tried to accommodate all of your thoughts, plus Alvaro's.\n> \n> New v5 attached.\n> \n> Happy to receive further comments.\n\nI liked this patch very much. It gives details on a lot of the\ninternals we expose to users. Some of my changes were:\n\n* tightening the wording\n* restructuring the flow\n* splitting out user-visible details (prepared transactions) from\n internals, e.g., xid, vxid, subtransactions\n* adding references from places in our docs to these new sections\n\nI plan to apply this and backpatch it to all supported versions since\nthese details apply to all versions. These docs should enable our users\nto much better understand and monitor Postgres.\n\nUpdated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Thu, 13 Oct 2022 17:28:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "Op 13-10-2022 om 23:28 schreef Bruce Momjian:\n> On Tue, Sep 13, 2022 at 03:02:34PM +0100, Simon Riggs wrote:\n>> Thanks Robert. I've tried to accommodate all of your thoughts, plus Alvaro's.\n>>\n>> New v5 attached.\n>>\n>> Happy to receive further comments.\n> \n> I liked this patch very much. It gives details on a lot of the\n> internals we expose to users. Some of my changes were:\n> \n> * tightening the wording\n> * restructuring the flow\n> * splitting out user-visible details (prepared transactions) from\n> internals, e.g., xid, vxid, subtransactions\n> * adding references from places in our docs to these new sections\n\n> [xact.diff]\n\nI think that\n 'This chapter explains how the control the reliability of'\n\nshould be:\n'This chapter explains how to control the reliability of'\n\n\nAnd in these lines:\n+ together in a transactional manner. The commands <command>PREPARE\n+ TRANSACTION</command>, <command>COMMIT PREPARED</command> and\n+ <command>ROLLBACK PREPARED</command>. Two-phase transactions\n\n'The commands'\n\nshould be\n'The commands are'\n\n\nthanks,\n\nErik Rijkers\n\n\n> I plan to apply this and backpatch it to all supported versions since\n> these details apply to all versions. These docs should enable our users\n> to much better understand and monitor Postgres.\n> \n> Updated patch attached.\n\n\n",
"msg_date": "Thu, 13 Oct 2022 23:54:51 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Thu, Oct 13, 2022 at 11:54:51PM +0200, Erik Rijkers wrote:\n> > [xact.diff]\n> \n> I think that\n> 'This chapter explains how the control the reliability of'\n> \n> should be:\n> 'This chapter explains how to control the reliability of'\n> \n> \n> And in these lines:\n> + together in a transactional manner. The commands <command>PREPARE\n> + TRANSACTION</command>, <command>COMMIT PREPARED</command> and\n> + <command>ROLLBACK PREPARED</command>. Two-phase transactions\n> \n> 'The commands'\n> \n> should be\n> 'The commands are'\n\nThanks, updated patch attached. You can see the output at:\n\n\thttps://momjian.us/tmp/pgsql/\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Thu, 13 Oct 2022 18:06:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Thu, 13 Oct 2022 at 23:06, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Thanks, updated patch attached. You can see the output at:\n\nThank you for your work to tighten and cleanup this patch, much appreciated.\n\nI had two minor typos, plus a slight rewording to avoid using the word\n\"global\" in the last section, since that is associated with\ndistributed or 2PC transactions. For those changes, I provide a\npatch-on-patch so you can see clearly.\n\nIn related changes, I've also done some major rewording of the RELEASE\nSAVEPOINT command, since it was incorrectly described as having \"no\nother user visible behavior\". A complex example is given to explain,\nusing the terminology established in the main patch.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Fri, 14 Oct 2022 08:55:05 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "+1 for this new chapter. This latest patch looks pretty good. I think\nthat introducing the concept of \"sub-commit\" as in Simon's follow-up\npatch clarifies things, though the word itself looks very odd. Maybe\nit's okay. The addition of the savepoint example looks good also.\n\nOn 2022-Oct-13, Bruce Momjian wrote:\n\n> + <para>\n> + <productname>PostgreSQL</productname> supports a two-phase commit (2PC)\n> + protocol that allows multiple distributed systems to work together\n> + in a transactional manner. The commands are <command>PREPARE\n> + TRANSACTION</command>, <command>COMMIT PREPARED</command> and\n\nI suggest/request that we try to avoid breaking tagged constants in\nDocBook; doing so makes it much easier to miss them later when grepping\nfor them (don't laugh, it has happened to me). Also, it breaks\nformatting in some weird cases. I know this makes editing a bit harder\nbecause you can't just reflow with your editor like you would normal\ntext. So this'd be:\n\n+ in a transactional manner. The commands are <command>PREPARE TRANSACTION</command>,\n+ <command>COMMIT PREPARED</command> and\n\nwith whatever word wrapping you like, except breaking between PREPARE\nand TRANSACTION.\n\n> + <para>\n> + In addition to <literal>vxid</literal> and <type>xid</type>,\n> + when a transaction is prepared it is also identified by a Global\n> + Transaction Identifier (<acronym>GID</acronym>). 
GIDs\n> + are string literals up to 200 bytes long, which must be\n> + unique amongst other currently prepared transactions.\n> + The mapping of GID to xid is shown in <link\n> + linkend=\"view-pg-prepared-xacts\"><structname>pg_prepared_xacts</structname></link>.\n> + </para>\n\nMaybe say \"is prepared for two-phase commit\", to make the topic of this\nparagraph more obvious?\n\n> + <para>\n> + Lock waits on table-level locks are shown waiting for\n> + <structfield>virtualxid</structfield>, while lock waits on row-level\n> + locks are shown waiting for <structfield>transactionid</structfield>.\n> + Row-level read and write locks are recorded directly in locked\n> + rows and can be inspected using the <xref linkend=\"pgrowlocks\"/>\n> + extension. Row-level read locks might also require the assignment\n> + of multixact IDs (<literal>mxid</literal>). Mxids are recorded in\n> + the <filename>pg_multixact</filename> directory.\n> + </para>\n\nHmm, ok.\n\n> + <para>\n> + The parent xid of each subxid is recorded in the\n> + <filename>pg_subtrans</filename> directory. No entry is made for\n> + top-level xids since they do not have a parent, nor is an entry made\n> + for read-only subtransactions.\n> + </para>\n\nMaybe say \"the immediate parent xid of each ...\", or is it too obvious?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Las cosas son buenas o malas segun las hace nuestra opinión\" (Lisias)\n\n\n",
"msg_date": "Fri, 14 Oct 2022 10:46:15 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, 14 Oct 2022 at 09:46, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> +1 for this new chapter. This latest patch looks pretty good. I think\n> that introducing the concept of \"sub-commit\" as in Simon's follow-up\n> patch clarifies things, though the word itself looks very odd. Maybe\n> it's okay. The addition of the savepoint example looks good also.\n>\n> On 2022-Oct-13, Bruce Momjian wrote:\n>\n> > + <para>\n> > + <productname>PostgreSQL</productname> supports a two-phase commit (2PC)\n> > + protocol that allows multiple distributed systems to work together\n> > + in a transactional manner. The commands are <command>PREPARE\n> > + TRANSACTION</command>, <command>COMMIT PREPARED</command> and\n>\n> I suggest/request that we try to avoid breaking tagged constants in\n> DocBook; doing so makes it much easier to miss them later when grepping\n> for them (don't laugh, it has happened to me). Also, it breaks\n> formatting in some weird cases. I know this makes editing a bit harder\n> because you can't just reflow with your editor like you would normal\n> text. So this'd be:\n>\n> + in a transactional manner. The commands are <command>PREPARE TRANSACTION</command>,\n> + <command>COMMIT PREPARED</command> and\n>\n> with whatever word wrapping you like, except breaking between PREPARE\n> and TRANSACTION.\n>\n> > + <para>\n> > + In addition to <literal>vxid</literal> and <type>xid</type>,\n> > + when a transaction is prepared it is also identified by a Global\n> > + Transaction Identifier (<acronym>GID</acronym>). 
GIDs\n> > + are string literals up to 200 bytes long, which must be\n> > + unique amongst other currently prepared transactions.\n> > + The mapping of GID to xid is shown in <link\n> > + linkend=\"view-pg-prepared-xacts\"><structname>pg_prepared_xacts</structname></link>.\n> > + </para>\n>\n> Maybe say \"is prepared for two-phase commit\", to make the topic of this\n> paragraph more obvious?\n>\n> > + <para>\n> > + Lock waits on table-level locks are shown waiting for\n> > + <structfield>virtualxid</structfield>, while lock waits on row-level\n> > + locks are shown waiting for <structfield>transactionid</structfield>.\n> > + Row-level read and write locks are recorded directly in locked\n> > + rows and can be inspected using the <xref linkend=\"pgrowlocks\"/>\n> > + extension. Row-level read locks might also require the assignment\n> > + of multixact IDs (<literal>mxid</literal>). Mxids are recorded in\n> > + the <filename>pg_multixact</filename> directory.\n> > + </para>\n>\n> Hmm, ok.\n>\n> > + <para>\n> > + The parent xid of each subxid is recorded in the\n> > + <filename>pg_subtrans</filename> directory. No entry is made for\n> > + top-level xids since they do not have a parent, nor is an entry made\n> > + for read-only subtransactions.\n> > + </para>\n>\n> Maybe say \"the immediate parent xid of each ...\", or is it too obvious?\n\n+1 to all of those suggestions\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 14 Oct 2022 12:22:35 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, 14 Oct 2022 at 08:55, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> In related changes, I've also done some major rewording of the RELEASE\n> SAVEPOINT command, since it was incorrectly described as having \"no\n> other user visible behavior\". A complex example is given to explain,\n> using the terminology established in the main patch.\n\nROLLBACK TO SAVEPOINT also needs some clarification, patch attached.\n\n(Commentary: It's confusing to me that ROLLBACK TO SAVEPOINT starts a\nnew subtransaction, whereas RELEASE SAVEPOINT does not. You might\nimagine they would both start a new subtransaction, but that is not\nthe case.)\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Fri, 14 Oct 2022 13:05:14 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 08:55:05AM +0100, Simon Riggs wrote:\n> On Thu, 13 Oct 2022 at 23:06, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > Thanks, updated patch attached. You can see the output at:\n> \n> Thank you for your work to tighten and cleanup this patch, much appreciated.\n> \n> I had two minor typos, plus a slight rewording to avoid using the word\n> \"global\" in the last section, since that is associated with\n> distributed or 2PC transactions. For those changes, I provide a\n> patch-on-patch so you can see clearly.\n\nYes, I didn't like global either --- I like your wording. I added your\nother changes too, with slight rewording. Merged patch to be posted in\na later email.\n\n> In related changes, I've also done some major rewording of the RELEASE\n> SAVEPOINT command, since it was incorrectly described as having \"no\n> other user visible behavior\". A complex example is given to explain,\n> using the terminology established in the main patch.\n\nOkay, added.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 14 Oct 2022 15:48:54 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 01:05:14PM +0100, Simon Riggs wrote:\n> On Fri, 14 Oct 2022 at 08:55, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> \n> > In related changes, I've also done some major rewording of the RELEASE\n> > SAVEPOINT command, since it was incorrectly described as having \"no\n> > other user visible behavior\". A complex example is given to explain,\n> > using the terminology established in the main patch.\n> \n> ROLLBACK TO SAVEPOINT also needs some clarification, patch attached.\n> \n> (Commentary: It's confusing to me that ROLLBACK TO SAVEPOINT starts a\n> new subtransaction, whereas RELEASE SAVEPOINT does not. You might\n> imagine they would both start a new subtransaction, but that is not\n> the case.)\n\nAgreed, added.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 14 Oct 2022 15:49:04 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 10:46:15AM +0200, Álvaro Herrera wrote:\n> +1 for this new chapter. This latest patch looks pretty good. I think\n> that introducing the concept of \"sub-commit\" as in Simon's follow-up\n> patch clarifies things, though the word itself looks very odd. Maybe\n> it's okay. The addition of the savepoint example looks good also.\n\nYes, I like that term since it isn't a permament commit.\n\n> On 2022-Oct-13, Bruce Momjian wrote:\n> \n> > + <para>\n> > + <productname>PostgreSQL</productname> supports a two-phase commit (2PC)\n> > + protocol that allows multiple distributed systems to work together\n> > + in a transactional manner. The commands are <command>PREPARE\n> > + TRANSACTION</command>, <command>COMMIT PREPARED</command> and\n> \n> I suggest/request that we try to avoid breaking tagged constants in\n> DocBook; doing so makes it much easier to miss them later when grepping\n> for them (don't laugh, it has happened to me). Also, it breaks\n> formatting in some weird cases. I know this makes editing a bit harder\n> because you can't just reflow with your editor like you would normal\n> text. So this'd be:\n> \n> + in a transactional manner. The commands are <command>PREPARE TRANSACTION</command>,\n> + <command>COMMIT PREPARED</command> and\n> \n> with whatever word wrapping you like, except breaking between PREPARE\n> and TRANSACTION.\n\nUh, I do a lot of word wraps and I don't think I can reaonably avoid\nthese splits.\n\n> \n> > + <para>\n> > + In addition to <literal>vxid</literal> and <type>xid</type>,\n> > + when a transaction is prepared it is also identified by a Global\n> > + Transaction Identifier (<acronym>GID</acronym>). 
GIDs\n> > + are string literals up to 200 bytes long, which must be\n> > + unique amongst other currently prepared transactions.\n> > + The mapping of GID to xid is shown in <link\n> > + linkend=\"view-pg-prepared-xacts\"><structname>pg_prepared_xacts</structname></link>.\n> > + </para>\n> \n> Maybe say \"is prepared for two-phase commit\", to make the topic of this\n> paragraph more obvious?\n\nAgreed.\n\n> > + <para>\n> > + The parent xid of each subxid is recorded in the\n> > + <filename>pg_subtrans</filename> directory. No entry is made for\n> > + top-level xids since they do not have a parent, nor is an entry made\n> > + for read-only subtransactions.\n> > + </para>\n> \n> Maybe say \"the immediate parent xid of each ...\", or is it too obvious?\n\nAgreed with your wording.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 14 Oct 2022 15:50:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 12:22:35PM +0100, Simon Riggs wrote:\n> > > + <para>\n> > > + The parent xid of each subxid is recorded in the\n> > > + <filename>pg_subtrans</filename> directory. No entry is made for\n> > > + top-level xids since they do not have a parent, nor is an entry made\n> > > + for read-only subtransactions.\n> > > + </para>\n> >\n> > Maybe say \"the immediate parent xid of each ...\", or is it too obvious?\n> \n> +1 to all of those suggestions\n\nAttached is the merged patch from all the great comments I received. I\nhave also rebuilt the docs with the updated patch:\n\n\thttps://momjian.us/tmp/pgsql/\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Fri, 14 Oct 2022 15:51:16 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 3:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Attached is the merged patch from all the great comments I received. I\n> have also rebuilt the docs with the updated patch:\n>\n> https://momjian.us/tmp/pgsql/\n>\n\n+ <command>RELEASE SAVEPOINT</command> also subcommits and destroys\n+ all savepoints that were established after the named savepoint was\n+ established. This means that any subtransactions of the named savepoint\n+ will also be subcommitted and destroyed.\n\nWonder if we should be more explicit that data changes are preserved,\nnot destroyed... something like:\n\"This means that any changes within subtransactions of the named\nsavepoint will be subcommitted and those subtransactions will be\ndestroyed.\"\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Fri, 14 Oct 2022 17:46:55 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 05:46:55PM -0400, Robert Treat wrote:\n> On Fri, Oct 14, 2022 at 3:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Attached is the merged patch from all the great comments I received. I\n> > have also rebuilt the docs with the updated patch:\n> >\n> > https://momjian.us/tmp/pgsql/\n> >\n> \n> + <command>RELEASE SAVEPOINT</command> also subcommits and destroys\n> + all savepoints that were established after the named savepoint was\n> + established. This means that any subtransactions of the named savepoint\n> + will also be subcommitted and destroyed.\n> \n> Wonder if we should be more explicit that data changes are preserved,\n> not destroyed... something like:\n> \"This means that any changes within subtransactions of the named\n> savepoint will be subcommitted and those subtransactions will be\n> destroyed.\"\n\nGood point. I reread the section and there was just too much confusion\nover subtransactions, partly because the behavior just doesn't map\neasily to subtransaction. I therefore merged all three paragraphs into\none and tried to make the text saner; release_savepoint.sgml diff\nattached, URL content updated.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Sat, 15 Oct 2022 21:08:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Sun, 16 Oct 2022 at 02:08, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Oct 14, 2022 at 05:46:55PM -0400, Robert Treat wrote:\n> > On Fri, Oct 14, 2022 at 3:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > Attached is the merged patch from all the great comments I received. I\n> > > have also rebuilt the docs with the updated patch:\n> > >\n> > > https://momjian.us/tmp/pgsql/\n> > >\n> >\n> > + <command>RELEASE SAVEPOINT</command> also subcommits and destroys\n> > + all savepoints that were established after the named savepoint was\n> > + established. This means that any subtransactions of the named savepoint\n> > + will also be subcommitted and destroyed.\n> >\n> > Wonder if we should be more explicit that data changes are preserved,\n> > not destroyed... something like:\n> > \"This means that any changes within subtransactions of the named\n> > savepoint will be subcommitted and those subtransactions will be\n> > destroyed.\"\n>\n> Good point. I reread the section and there was just too much confusion\n> over subtransactions, partly because the behavior just doesn't map\n> easily to subtransaction. I therefore merged all three paragraphs into\n> one and tried to make the text saner; release_savepoint.sgml diff\n> attached, URL content updated.\n\nJust got around to reading this, thanks for changes.\n\nThe rewording doesn't work for me. The use of the word \"destroy\" is\nvery misleading, since the effect is to commit.\n\nSo I think we must avoid use of the word destroy completely. Possible\nrewording:\n\n<command>RELEASE SAVEPOINT</command> will subcommit the subtransaction\nestablished by the named savepoint, releasing any resources held by\nit. 
If there were any subtransactions created by the named savepoint,\nthese will also be subcommitted.\n\nThe point is that savepoints create subtransactions, but they are not\nthe only way to create them, so we cannot equate savepoint and\nsubtransaction completely.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 24 Oct 2022 16:01:51 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 11:02 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Sun, 16 Oct 2022 at 02:08, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Fri, Oct 14, 2022 at 05:46:55PM -0400, Robert Treat wrote:\n> > > On Fri, Oct 14, 2022 at 3:51 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > Attached is the merged patch from all the great comments I received. I\n> > > > have also rebuilt the docs with the updated patch:\n> > > >\n> > > > https://momjian.us/tmp/pgsql/\n> > > >\n> > >\n> > > + <command>RELEASE SAVEPOINT</command> also subcommits and destroys\n> > > + all savepoints that were established after the named savepoint was\n> > > + established. This means that any subtransactions of the named savepoint\n> > > + will also be subcommitted and destroyed.\n> > >\n> > > Wonder if we should be more explicit that data changes are preserved,\n> > > not destroyed... something like:\n> > > \"This means that any changes within subtransactions of the named\n> > > savepoint will be subcommitted and those subtransactions will be\n> > > destroyed.\"\n> >\n> > Good point. I reread the section and there was just too much confusion\n> > over subtransactions, partly because the behavior just doesn't map\n> > easily to subtransaction. I therefore merged all three paragraphs into\n> > one and tried to make the text saner; release_savepoint.sgml diff\n> > attached, URL content updated.\n>\n> Just got around to reading this, thanks for changes.\n>\n> The rewording doesn't work for me. The use of the word \"destroy\" is\n> very misleading, since the effect is to commit.\n>\n> So I think we must avoid use of the word destroy completely. Possible\n> rewording:\n>\n> <command>RELEASE SAVEPOINT</command> will subcommit the subtransaction\n> established by the named savepoint, releasing any resources held by\n> it. 
If there were any subtransactions created by the named savepoint,\n> these will also be subcommitted.\n>\n\nI think it should be \"If there were any subtransactions of the named\nsavepoint, these will also be subcommitted\", but otherwise I think\nthis wording should work.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Mon, 24 Oct 2022 12:42:44 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Sat, 2022-10-15 at 21:08 -0400, Bruce Momjian wrote:\n> I therefore merged all three paragraphs into\n> one and tried to make the text saner; release_savepoint.sgml diff\n> attached, URL content updated.\n\nI wanted to have a look at this, but I am confused. The original patch\nwas much bigger. Is this just an incremental patch? If yes, it would\nbe nice to have a \"grand total\" patch, so that I can read it all\nin one go.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 04 Nov 2022 16:17:28 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, 4 Nov 2022 at 15:17, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Sat, 2022-10-15 at 21:08 -0400, Bruce Momjian wrote:\n> > I therefore merged all three paragraphs into\n> > one and tried to make the text saner; release_savepoint.sgml diff\n> > attached, URL content updated.\n>\n> I wanted to have a look at this, but I am confused. The original patch\n> was much bigger. Is this just an incremental patch? If yes, it would\n> be nice to have a \"grand total\" patch, so that I can read it all\n> in one go.\n\nAgreed; new compilation patch attached, including mine and then\nRobert's suggested rewordings.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Sat, 5 Nov 2022 10:08:09 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, Nov 4, 2022 at 04:17:28PM +0100, Laurenz Albe wrote:\n> On Sat, 2022-10-15 at 21:08 -0400, Bruce Momjian wrote:\n> > I therefore merged all three paragraphs into\n> > one and tried to make the text saner; release_savepoint.sgml diff\n> > attached, URL content updated.\n> \n> I wanted to have a look at this, but I am confused. The original patch\n> was much bigger. Is this just an incremental patch? If yes, it would\n> be nice to have a \"grand total\" patch, so that I can read it all\n> in one go.\n\nYeah, I said:\n\n\tYes, I didn't like global either --- I like your wording. I added your\n\tother changes too, with slight rewording. Merged patch to be posted in\n\t -------------------------\n\ta later email.\n\nbut that was unclear. Let me post one now, and Simon posted one too.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 7 Nov 2022 02:43:00 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Mon, 7 Nov 2022 at 07:43, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Nov 4, 2022 at 04:17:28PM +0100, Laurenz Albe wrote:\n> > On Sat, 2022-10-15 at 21:08 -0400, Bruce Momjian wrote:\n> > > I therefore merged all three paragraphs into\n> > > one and tried to make the text saner; release_savepoint.sgml diff\n> > > attached, URL content updated.\n> >\n> > I wanted to have a look at this, but I am confused. The original patch\n> > was much bigger. Is this just an incremental patch? If yes, it would\n> > be nice to have a \"grand total\" patch, so that I can read it all\n> > in one go.\n>\n> Yeah, I said:\n>\n> Yes, I didn't like global either --- I like your wording. I added your\n> other changes too, with slight rewording. Merged patch to be posted in\n> -------------------------\n> a later email.\n>\n> but that was unclear. Let me post one now, and Simon posted one too.\n\nWhat I've posted is the merged patch, i.e. your latest patch, plus\nchanges to RELEASE SAVEPOINT from you on Oct 16, plus changes based on\nthe later comments from Robert and I.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 7 Nov 2022 10:58:05 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Sat, 2022-11-05 at 10:08 +0000, Simon Riggs wrote:\n> Agreed; new compilation patch attached, including mine and then\n> Robert's suggested rewordings.\n\nThanks. There is clearly a lot of usefule information in this.\n\nSome comments:\n\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -24673,7 +24673,10 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> <para>\n> Returns the current transaction's ID. It will assign a new one if the\n> current transaction does not have one already (because it has not\n> - performed any database updates).\n> + performed any database updates); see <xref\n> + linkend=\"transaction-id\"/> for details. If executed in a\n> + subtransaction this will return the top-level xid; see <xref\n> + linkend=\"subxacts\"/> for details.\n> </para></entry>\n> </row>\n\nI would use a comma after \"subtransaction\", and I think it would be better to write\n\"transaction ID\" instead of \"xid\".\n\n> @@ -24690,6 +24693,7 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> ID is assigned yet. 
(It's best to use this variant if the transaction\n> might otherwise be read-only, to avoid unnecessary consumption of an\n> XID.)\n> + If executed in a subtransaction this will return the top-level xid.\n> </para></entry>\n> </row>\n\nSame as above.\n\n> @@ -24733,6 +24737,8 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> <para>\n> Returns a current <firstterm>snapshot</firstterm>, a data structure\n> showing which transaction IDs are now in-progress.\n> + Only top-level xids are included in the snapshot; subxids are not\n> + shown; see <xref linkend=\"subxacts\"/> for details.\n> </para></entry>\n> </row>\n\nAgain, I would avoid \"xid\" and \"subxid\", or at least use \"transaction ID (xid)\"\nand similar.\n\n> --- a/doc/src/sgml/ref/release_savepoint.sgml\n> +++ b/doc/src/sgml/ref/release_savepoint.sgml\n> @@ -34,23 +34,16 @@ RELEASE [ SAVEPOINT ] <replaceable>savepoint_name</replaceable>\n> <title>Description</title>\n> \n> <para>\n> - <command>RELEASE SAVEPOINT</command> destroys a savepoint previously defined\n> - in the current transaction.\n> + <command>RELEASE SAVEPOINT</command> will subcommit the subtransaction\n> + established by the named savepoint, if one exists. This will release\n> + any resources held by the subtransaction. If there were any\n> + subtransactions of the named savepoint, these will also be subcommitted.\n> </para>\n> \n> <para>\n\n\"Subtransactions of the named savepoint\" is somewhat confusing; how about\n\"subtransactions of the subtransaction established by the named savepoint\"?\n\nIf that is too long and explicit, perhaps \"subtransactions of that subtransaction\".\n\n> @@ -78,7 +71,7 @@ RELEASE [ SAVEPOINT ] <replaceable>savepoint_name</replaceable>\n> \n> <para>\n> It is not possible to release a savepoint when the transaction is in\n> - an aborted state.\n> + an aborted state, to do that use <xref linkend=\"sql-rollback-to\"/>.\n> </para>\n> \n> <para>\n\nI think the following is more English:\n\"It is not possible ... 
state; to do that, use <xref .../>.\"\n\n> --- a/doc/src/sgml/ref/rollback.sgml\n> +++ b/doc/src/sgml/ref/rollback.sgml\n> @@ -56,11 +56,14 @@ ROLLBACK [ WORK | TRANSACTION ] [ AND [ NO ] CHAIN ]\n> <term><literal>AND CHAIN</literal></term>\n> <listitem>\n> <para>\n> - If <literal>AND CHAIN</literal> is specified, a new transaction is\n> + If <literal>AND CHAIN</literal> is specified, a new unaborted transaction is\n> immediately started with the same transaction characteristics (see <xref\n> linkend=\"sql-set-transaction\"/>) as the just finished one. Otherwise,\n> no new transaction is started.\n\nI don't think that is an improvement. \"Unaborted\" is an un-word. A new transaction\nis always \"unaborted\", isn't it?\n\n> --- a/doc/src/sgml/wal.sgml\n> +++ b/doc/src/sgml/wal.sgml\n> @@ -909,4 +910,36 @@\n> seem to be a problem in practice.\n> </para>\n> </sect1>\n> +\n> + <sect1 id=\"two-phase\">\n> +\n> + <title>Two-Phase Transactions</title>\n> +\n> + <para>\n> + <productname>PostgreSQL</productname> supports a two-phase commit (2PC)\n[...]\n> + <filename>pg_twophase</filename> directory. Currently-prepared\n> + transactions can be inspected using <link\n> + linkend=\"view-pg-prepared-xacts\"><structname>pg_prepared_xacts</structname></link>.\n> + </para>\n> + </sect1>\n> +\n> </chapter>\n\nI don't like \"currently-prepared\". How about:\n\"Transaction that are currently prepared can be inspected...\"\n\nThis is clearly interesting information, but I don't think the WAL chapter is the right\nplace for this. \"pg_twophase\" is already mentioned in \"storage.sgml\", and details about\nwhen exactly a prepared transaction is persisted may exceed the details level needed by\nthe end user.\n\nI'd look for that information in the reference page for PREPARE TRANSACTION; perhaps\nthat would be a better place. 
Or, even better, the new \"xact.sgml\" chapter.\n\n> --- /dev/null\n> +++ b/doc/src/sgml/xact.sgml\n\n+ <title>Transaction Management</title>\n\n+ The word transaction is often abbreviated as \"xact\".\n\nShould use <quote> here.\n\n> + <title>Transactions and Identifiers</title>\n\n> + <para>\n> + Once a transaction writes to the database, it is assigned a\n> + non-virtual <literal>TransactionId</literal> (or <type>xid</type>),\n> + e.g., <literal>278394</literal>. Xids are assigned sequentially\n> + using a global counter used by all databases within the\n> + <productname>PostgreSQL</productname> cluster. This property is used by\n> + the transaction system to order transactions by their first database\n> + write, i.e., lower-numbered xids started writing before higher-numbered\n> + xids. Of course, transactions might start in a different order.\n> + </para>\n\n\"This property\"? How about:\n\"Because transaction IDs are assigned sequentially, the transaction system can\nuse them to order transactions by their first database write\"\n\nI would want some additional information here: why does the transaction system have\nto order transactions by their first database write?\n\n\"Of course, transactions might start in a different order.\"\n\nNow that confuses me. Are you saying that BEGIN could be in a different order\nthan the first database write? 
Perhaps like this:\n\n\"Note that the order in which transactions perform their first database write\nmight be different from the order in which the transactions started.\"\n\n> + The internal transaction ID type <type>xid</type> is 32-bits wide\n\nThere should be no hyphen in \"32 bits wide\", just as in \"3 years old\".\n\n> + A 32-bit epoch is incremented during each\n> + wrap around.\n\nWe usually call this \"wraparound\" without a space.\n\n> + There is also a 64-bit type <type>xid8</type> which\n> + includes this epoch and therefore does not wrap around during the\n> + life of an installation and can be converted to xid by casting.\n\nRunning \"and\"s. Better:\n\n\"There is also ... and does not wrap ... life of an installation.\n <type>xid8</type> can be converted to <type>xid</type> by casting.\"\n\n> + Xids are used as the\n> + basis for <productname>PostgreSQL</productname>'s <link\n> + linkend=\"mvcc\">MVCC</link> concurrency mechanism, <link\n> + linkend=\"hot-standby\">Hot Standby</link>, and Read Replica servers.\n\nWhat is the difference between a hot standby and a read replica? 
I think\none of these terms is sufficient.\n\n> + In addition to <literal>vxid</literal> and <type>xid</type>,\n> + when a transaction is prepared for two-phase commit it\n> + is also identified by a Global Transaction Identifier\n> + (<acronym>GID</acronym>).\n\nBetter:\n\n\"In addition to <literal>vxid</literal> and <type>xid</type>,\n prepared transactions also have a Global Transaction Identifier\n (<acronym>GID</acronym>) that is assigned when the transaction is\n prepared for two-phase commit.\"\n\n> + <sect1 id=\"xact-locking\">\n> +\n> + <title>Transactions and Locking</title>\n> +\n> + <para>\n> + Currently-executing transactions are shown in <link\n> + linkend=\"view-pg-locks\"><structname>pg_locks</structname></link>\n> + in columns <structfield>virtualxid</structfield> and\n> + <structfield>transactionid</structfield>.\n\nBetter:\n\n\"The transaction IDs of currently executing transactions are shown in <link\n linkend=\"view-pg-locks\"><structname>pg_locks</structname></link>\n in the columns <structfield>virtualxid</structfield> and\n <structfield>transactionid</structfield>.\"\n\n> + Lock waits on table-level locks are shown waiting for\n> + <structfield>virtualxid</structfield>, while lock waits on row-level\n> + locks are shown waiting for <structfield>transactionid</structfield>.\n\nThat's not true. Transactions waiting for table-level locks are shown\nwaiting for a \"relation\" lock in both \"pg_stat_activity\" and \"pg_locks\".\n\n> + Row-level read and write locks are recorded directly in locked\n> + rows and can be inspected using the <xref linkend=\"pgrowlocks\"/>\n> + extension. Row-level read locks might also require the assignment\n> + of multixact IDs (<literal>mxid</literal>). Mxids are recorded in\n> + the <filename>pg_multixact</filename> directory.\n\n\"are recorded directly in *the* locked rows\"\n\nI think the mention of multixacts should link to\n<xref linkend=\"vacuum-for-multixact-wraparound\"/>. 
Again, I would not\nspecifically mention the directory, since it is already described in\n\"storage.sgml\", but I have no strong optinion there.\n\n> + <sect1 id=\"subxacts\">\n> +\n> + <title>Subtransactions</title>\n\n> + The word subtransaction is often abbreviated as\n> + <literal>subxact</literal>.\n\nI'd use <quote>, not <literal>.\n\n> + If a subtransaction is assigned a non-virtual transaction ID,\n> + its transaction ID is referred to as a <literal>subxid</literal>.\n\nAgain, I would use <quote>, since we don't <literal> \"subxid\"\nelsewhere.\n\n+ Up to\n+ 64 open subxids are cached in shared memory for each backend; after\n+ that point, the overhead increases significantly since we must look\n+ up subxid entries in <filename>pg_subtrans</filename>.\n\nComma before \"since\". Perhaps you should mention that this means disk I/O.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 07 Nov 2022 23:04:46 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Mon, 2022-11-07 at 23:04 +0100, Laurenz Albe wrote:\n> On Sat, 2022-11-05 at 10:08 +0000, Simon Riggs wrote:\n> > Agreed; new compilation patch attached, including mine and then\n> > Robert's suggested rewordings.\n> \n> Thanks. There is clearly a lot of usefule information in this.\n> \n> Some comments: [...]\n\nI thought some more about the patch, and I don't like the title\n\"Transaction Management\" for the new chapter. I'd expect some more\nfrom a chapter titled \"Internals\" / \"Transaction Management\".\n\nIn reality, the new chapter is about transaction IDs. So perhaps the\nname should reflect that, so that it does not mislead the reader.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 08 Nov 2022 04:37:59 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 10:58:05AM +0000, Simon Riggs wrote:\n> What I've posted is the merged patch, i.e. your latest patch, plus\n> changes to RELEASE SAVEPOINT from you on Oct 16, plus changes based on\n> the later comments from Robert and I.\n\nThanks. I have two changes to your patch. First, I agree \"destroy\" is\nthe wrong word for this, but I don't think \"subcommit\" is good, for\nthree reasons:\n\n1. Release merges the non-aborted changes into the previous transaction\n_and_ frees their resources --- \"subcommit\" doesn't have both meanings,\nwhich I think means if we need a single word, we should use \"release\"\nand later define what that means.\n\n2. The \"subcommit\" concept doesn't closely match the user-visible\nbehavior, even though we use subtransactions to accomplish this. Release\nis more of a rollup/merge into the previously-active\ntransaction/savepoint.\n\n3. \"subcommit\" is an implementation detail that I don't think we should\nexpose to users in the manual pages.\n\nI adjusted the first paragraph of RELEASE SAVEPOINT to highlight the\nabove issues. My original patch had similar wording.\n\nThe first attachment shows my changes to your patch, and the second\nattachment is my full patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Mon, 7 Nov 2022 22:41:07 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Tue, Nov 8, 2022 at 04:37:59AM +0100, Laurenz Albe wrote:\n> On Mon, 2022-11-07 at 23:04 +0100, Laurenz Albe wrote:\n> > On Sat, 2022-11-05 at 10:08 +0000, Simon Riggs wrote:\n> > > Agreed; new compilation patch attached, including mine and then\n> > > Robert's suggested rewordings.\n> > \n> > Thanks. There is clearly a lot of usefule information in this.\n> > \n> > Some comments: [...]\n> \n> I thought some more about the patch, and I don't like the title\n> \"Transaction Management\" for the new chapter. I'd expect some more\n> from a chapter titled \"Internals\" / \"Transaction Management\".\n> \n> In reality, the new chapter is about transaction IDs. So perhaps the\n> name should reflect that, so that it does not mislead the reader.\n\nI renamed it to \"Transaction Processing\" since we also cover locking and\nsubtransactions. How is that?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 7 Nov 2022 22:43:56 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Mon, 2022-11-07 at 22:43 -0500, Bruce Momjian wrote:\n> > I thought some more about the patch, and I don't like the title\n> > \"Transaction Management\" for the new chapter. I'd expect some more\n> > from a chapter titled \"Internals\" / \"Transaction Management\".\n> > \n> > In reality, the new chapter is about transaction IDs. So perhaps the\n> > name should reflect that, so that it does not mislead the reader.\n> \n> I renamed it to \"Transaction Processing\" since we also cover locking and\n> subtransactions. How is that?\n\nIt is better. Did you take my suggestions from [1] into account in your\nlatest cumulative patch in [2]? Otherwise, it will be difficult to\nintegrate both.\n\nYours,\nLaurenz Albe\n\n [1]: https://postgr.es/m/3603e6e85544daa5300c7106c31bc52673711cd0.camel%40cybertec.at\n [2]: https://postgr.es/m/Y2nP04/3BHQOviVB%40momjian.us\n\n\n",
"msg_date": "Wed, 09 Nov 2022 09:12:43 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 5:04 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Sat, 2022-11-05 at 10:08 +0000, Simon Riggs wrote:\n> > Agreed; new compilation patch attached, including mine and then\n> > Robert's suggested rewordings.\n>\n> Thanks. There is clearly a lot of usefule information in this.\n>\n> Some comments:\n>\n<snip>\n> > --- a/doc/src/sgml/ref/release_savepoint.sgml\n> > +++ b/doc/src/sgml/ref/release_savepoint.sgml\n> > @@ -34,23 +34,16 @@ RELEASE [ SAVEPOINT ] <replaceable>savepoint_name</replaceable>\n> > <title>Description</title>\n> >\n> > <para>\n> > - <command>RELEASE SAVEPOINT</command> destroys a savepoint previously defined\n> > - in the current transaction.\n> > + <command>RELEASE SAVEPOINT</command> will subcommit the subtransaction\n> > + established by the named savepoint, if one exists. This will release\n> > + any resources held by the subtransaction. If there were any\n> > + subtransactions of the named savepoint, these will also be subcommitted.\n> > </para>\n> >\n> > <para>\n>\n> \"Subtransactions of the named savepoint\" is somewhat confusing; how about\n> \"subtransactions of the subtransaction established by the named savepoint\"?\n>\n> If that is too long and explicit, perhaps \"subtransactions of that subtransaction\".\n>\n\nPersonally, I think these are more confusing.\n\n> > --- a/doc/src/sgml/ref/rollback.sgml\n> > +++ b/doc/src/sgml/ref/rollback.sgml\n> > @@ -56,11 +56,14 @@ ROLLBACK [ WORK | TRANSACTION ] [ AND [ NO ] CHAIN ]\n> > <term><literal>AND CHAIN</literal></term>\n> > <listitem>\n> > <para>\n> > - If <literal>AND CHAIN</literal> is specified, a new transaction is\n> > + If <literal>AND CHAIN</literal> is specified, a new unaborted transaction is\n> > immediately started with the same transaction characteristics (see <xref\n> > linkend=\"sql-set-transaction\"/>) as the just finished one. Otherwise,\n> > no new transaction is started.\n>\n> I don't think that is an improvement. 
\"Unaborted\" is an un-word. A new transaction\n> is always \"unaborted\", isn't it?\n>\n\nI thought about this as well when reviewing it, but I do think\nsomething is needed for the case where you have a transaction which\nhas suffered an error and then you issue \"rollback and chain\"; if you\njust say \"a new transaction is immediately started with the same\ntransaction characteristics\" it might imply to some the new\ntransaction has some kind of carry over of the previous broken\ntransaction... the use of the word unaborted makes it clear that the\nnew transaction is 100% functional.\n\n> > --- a/doc/src/sgml/wal.sgml\n> > +++ b/doc/src/sgml/wal.sgml\n> > @@ -909,4 +910,36 @@\n> > seem to be a problem in practice.\n> > </para>\n> > </sect1>\n> > +\n> > + <sect1 id=\"two-phase\">\n> > +\n> > + <title>Two-Phase Transactions</title>\n> > +\n> > + <para>\n> > + <productname>PostgreSQL</productname> supports a two-phase commit (2PC)\n> [...]\n> > + <filename>pg_twophase</filename> directory. Currently-prepared\n> > + transactions can be inspected using <link\n> > + linkend=\"view-pg-prepared-xacts\"><structname>pg_prepared_xacts</structname></link>.\n> > + </para>\n> > + </sect1>\n> > +\n> > </chapter>\n>\n> I don't like \"currently-prepared\". How about:\n> \"Transaction that are currently prepared can be inspected...\"\n>\n\nThis seems to align with other usage, so +1\n\n> This is clearly interesting information, but I don't think the WAL chapter is the right\n> place for this. \"pg_twophase\" is already mentioned in \"storage.sgml\", and details about\n> when exactly a prepared transaction is persisted may exceed the details level needed by\n> the end user.\n>\n> I'd look for that information in the reference page for PREPARE TRANSACTION; perhaps\n> that would be a better place. 
Or, even better, the new \"xact.sgml\" chapter.\n>\n> > --- /dev/null\n> > +++ b/doc/src/sgml/xact.sgml\n>\n> + <title>Transaction Management</title>\n>\n> + The word transaction is often abbreviated as \"xact\".\n>\n> Should use <quote> here.\n>\n> > + <title>Transactions and Identifiers</title>\n>\n> > + <para>\n> > + Once a transaction writes to the database, it is assigned a\n> > + non-virtual <literal>TransactionId</literal> (or <type>xid</type>),\n> > + e.g., <literal>278394</literal>. Xids are assigned sequentially\n> > + using a global counter used by all databases within the\n> > + <productname>PostgreSQL</productname> cluster. This property is used by\n> > + the transaction system to order transactions by their first database\n> > + write, i.e., lower-numbered xids started writing before higher-numbered\n> > + xids. Of course, transactions might start in a different order.\n> > + </para>\n>\n> \"This property\"? How about:\n> \"Because transaction IDs are assigned sequentially, the transaction system can\n> use them to order transactions by their first database write\"\n>\n\n+1\n\n> I would want some additional information here: why does the transaction system have\n> to order transactions by their first database write?\n>\n> \"Of course, transactions might start in a different order.\"\n>\n> Now that confuses me. Are you saying that BEGIN could be in a different order\n> than the first database write? 
Perhaps like this:\n>\n> \"Note that the order in which transactions perform their first database write\n> might be different from the order in which the transactions started.\"\n>\n\n+1\n\n> > + The internal transaction ID type <type>xid</type> is 32-bits wide\n>\n> There should be no hyphen in \"32 bits wide\", just as in \"3 years old\".\n>\n\nMinor aside, we should clean up glossary.sgml as well.\n\n> > + Xids are used as the\n> > + basis for <productname>PostgreSQL</productname>'s <link\n> > + linkend=\"mvcc\">MVCC</link> concurrency mechanism, <link\n> > + linkend=\"hot-standby\">Hot Standby</link>, and Read Replica servers.\n>\n> What is the difference between a hot standby and a read replica? I think\n> one of these terms is sufficient.\n>\n\nAgreed the distinction is not clear, although you could replace Hot\nStandby with Warm Standby.\n\n> > + In addition to <literal>vxid</literal> and <type>xid</type>,\n> > + when a transaction is prepared for two-phase commit it\n> > + is also identified by a Global Transaction Identifier\n> > + (<acronym>GID</acronym>).\n>\n> Better:\n>\n> \"In addition to <literal>vxid</literal> and <type>xid</type>,\n> prepared transactions also have a Global Transaction Identifier\n> (<acronym>GID</acronym>) that is assigned when the transaction is\n> prepared for two-phase commit.\"\n>\n\n+1\n\n> > + <sect1 id=\"xact-locking\">\n> > +\n> > + <title>Transactions and Locking</title>\n> > +\n> > + <para>\n> > + Currently-executing transactions are shown in <link\n> > + linkend=\"view-pg-locks\"><structname>pg_locks</structname></link>\n> > + in columns <structfield>virtualxid</structfield> and\n> > + <structfield>transactionid</structfield>.\n>\n> Better:\n>\n> \"The transaction IDs of currently executing transactions are shown in <link\n> linkend=\"view-pg-locks\"><structname>pg_locks</structname></link>\n> in the columns <structfield>virtualxid</structfield> and\n> <structfield>transactionid</structfield>.\"\n>\n> > + Lock waits on 
table-level locks are shown waiting for\n> > + <structfield>virtualxid</structfield>, while lock waits on row-level\n> > + locks are shown waiting for <structfield>transactionid</structfield>.\n>\n> That's not true. Transactions waiting for table-level locks are shown\n> waiting for a \"relation\" lock in both \"pg_stat_activity\" and \"pg_locks\".\n>\n> > + Row-level read and write locks are recorded directly in locked\n> > + rows and can be inspected using the <xref linkend=\"pgrowlocks\"/>\n> > + extension. Row-level read locks might also require the assignment\n> > + of multixact IDs (<literal>mxid</literal>). Mxids are recorded in\n> > + the <filename>pg_multixact</filename> directory.\n>\n> \"are recorded directly in *the* locked rows\"\n>\n> I think the mention of multixacts should link to\n> <xref linkend=\"vacuum-for-multixact-wraparound\"/>. Again, I would not\n> specifically mention the directory, since it is already described in\n> \"storage.sgml\", but I have no strong optinion there.\n>\n> > + <sect1 id=\"subxacts\">\n> > +\n> > + <title>Subtransactions</title>\n>\n> > + The word subtransaction is often abbreviated as\n> > + <literal>subxact</literal>.\n>\n> I'd use <quote>, not <literal>.\n>\n> > + If a subtransaction is assigned a non-virtual transaction ID,\n> > + its transaction ID is referred to as a <literal>subxid</literal>.\n>\n> Again, I would use <quote>, since we don't <literal> \"subxid\"\n> elsewhere.\n>\n> + Up to\n> + 64 open subxids are cached in shared memory for each backend; after\n> + that point, the overhead increases significantly since we must look\n> + up subxid entries in <filename>pg_subtrans</filename>.\n>\n> Comma before \"since\". 
Perhaps you should mention that this means disk I/O.\n>\n\nISTR that you only use a comma before since in cases where the\npreceding thought contains a negative.\n\nIn any case, are you thinking something like this:\n\n\" 64 open subxids are cached in shared memory for each backend; after\n that point the overhead increases significantly due to additional disk I/O\n from looking up subxid entries in <filename>pg_subtrans</filename>.\"\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Wed, 9 Nov 2022 09:16:18 -0500",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, 2022-11-09 at 09:16 -0500, Robert Treat wrote:\n> On Mon, Nov 7, 2022 at 5:04 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > Some comments:\n> > \n> <snip>\n> > > --- a/doc/src/sgml/ref/release_savepoint.sgml\n> > > +++ b/doc/src/sgml/ref/release_savepoint.sgml\n> > > @@ -34,23 +34,16 @@ RELEASE [ SAVEPOINT ] <replaceable>savepoint_name</replaceable>\n> > > <title>Description</title>\n> > > \n> > > <para>\n> > > - <command>RELEASE SAVEPOINT</command> destroys a savepoint previously defined\n> > > - in the current transaction.\n> > > + <command>RELEASE SAVEPOINT</command> will subcommit the subtransaction\n> > > + established by the named savepoint, if one exists. This will release\n> > > + any resources held by the subtransaction. If there were any\n> > > + subtransactions of the named savepoint, these will also be subcommitted.\n> > > </para>\n> > > \n> > > <para>\n> > \n> > \"Subtransactions of the named savepoint\" is somewhat confusing; how about\n> > \"subtransactions of the subtransaction established by the named savepoint\"?\n> > \n> > If that is too long and explicit, perhaps \"subtransactions of that subtransaction\".\n> \n> Personally, I think these are more confusing.\n\nPerhaps. I was worried because everywhere else, the wording makes a clear distinction\nbetween a savepoint and the subtransaction created by a savepoint. 
But perhaps some\nsloppiness is better to avoid such word cascades.\n\n> > > --- a/doc/src/sgml/ref/rollback.sgml\n> > > +++ b/doc/src/sgml/ref/rollback.sgml\n> > > @@ -56,11 +56,14 @@ ROLLBACK [ WORK | TRANSACTION ] [ AND [ NO ] CHAIN ]\n> > > <term><literal>AND CHAIN</literal></term>\n> > > <listitem>\n> > > <para>\n> > > - If <literal>AND CHAIN</literal> is specified, a new transaction is\n> > > + If <literal>AND CHAIN</literal> is specified, a new unaborted transaction is\n> > > immediately started with the same transaction characteristics (see <xref\n> > > linkend=\"sql-set-transaction\"/>) as the just finished one. Otherwise,\n> > > no new transaction is started.\n> > \n> > I don't think that is an improvement. \"Unaborted\" is an un-word. A new transaction\n> > is always \"unaborted\", isn't it?\n> \n> I thought about this as well when reviewing it, but I do think\n> something is needed for the case where you have a transaction which\n> has suffered an error and then you issue \"rollback and chain\"; if you\n> just say \"a new transaction is immediately started with the same\n> transaction characteristics\" it might imply to some the new\n> transaction has some kind of carry over of the previous broken\n> transaction... the use of the word unaborted makes it clear that the\n> new transaction is 100% functional.\n\nA new transaction is never aborted in my understanding. 
Being aborted is not a\ncharacteristic of a transaction, but a state.\n\n> > \n> \n> \n> > > + The internal transaction ID type <type>xid</type> is 32-bits wide\n> > \n> > There should be no hyphen in \"32 bits wide\", just as in \"3 years old\".\n> \n> Minor aside, we should clean up glossary.sgml as well.\n\nRight, it has this:\n\n The numerical, unique, sequentially-assigned identifier that each\n transaction receives when it first causes a database modification.\n Frequently abbreviated as <firstterm>xid</firstterm>.\n When stored on disk, xids are only 32-bits wide, so only\n approximately four billion write transaction IDs can be generated;\n to permit the system to run for longer than that,\n <firstterm>epochs</firstterm> are used, also 32 bits wide.\n\nWhich reminds me that I should have suggested <firstterm> rather than\n<quote> where I complained about the use of <literal>.\n\n> > \n> > + Up to\n> > + 64 open subxids are cached in shared memory for each backend; after\n> > + that point, the overhead increases significantly since we must look\n> > + up subxid entries in <filename>pg_subtrans</filename>.\n> > \n> > Comma before \"since\". Perhaps you should mention that this means disk I/O.\n> \n> ISTR that you only use a comma before since in cases where the\n> preceding thought contains a negative.\n\nNot being a native speaker, I'll leave that to those who are; I went by feeling.\n\n> In any case, are you thinking something like this:\n> \n> \" 64 open subxids are cached in shared memory for each backend; after\n> that point the overhead increases significantly due to additional disk I/O\n> from looking up subxid entries in <filename>pg_subtrans</filename>.\"\n\nYes.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 10 Nov 2022 08:39:29 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On 2022-Nov-10, Laurenz Albe wrote:\n\n> On Wed, 2022-11-09 at 09:16 -0500, Robert Treat wrote:\n\n> > > > - If <literal>AND CHAIN</literal> is specified, a new transaction is\n> > > > + If <literal>AND CHAIN</literal> is specified, a new unaborted transaction is\n> > > > immediately started with the same transaction characteristics (see <xref\n> > > > linkend=\"sql-set-transaction\"/>) as the just finished one. Otherwise,\n> > > > no new transaction is started.\n> > > \n> > > I don't think that is an improvement. \"Unaborted\" is an un-word. A new transaction\n> > > is always \"unaborted\", isn't it?\n> > \n> > I thought about this as well when reviewing it, but I do think\n> > something is needed for the case where you have a transaction which\n> > has suffered an error and then you issue \"rollback and chain\"; if you\n> > just say \"a new transaction is immediately started with the same\n> > transaction characteristics\" it might imply to some the new\n> > transaction has some kind of carry over of the previous broken\n> > transaction... the use of the word unaborted makes it clear that the\n> > new transaction is 100% functional.\n> \n> A new transaction is never aborted in my understanding. Being aborted\n> is not a characteristic of a transaction, but a state.\n\nI agree, but maybe it's good to make the point explicit, because it\ndoesn't seem obvious. Perhaps something like\n\n\"If X is specified, a new transaction (never in aborted state) is\nimmediately started with the same transaction characteristics (see X) as\nthe just finished one. Otherwise ...\"\n\nGetting the wording of that parenthical comment right is tricky, though.\nWhat I propose above is not great, but I don't know how to make it\nbetter. Other ideas that seem slightly worse but may inspire someone:\n\n ... a new transaction (which is never in aborted state) is ...\n ... a new transaction (not in aborted state) is ...\n ... 
a new transaction (never aborted, even if the previous is) is ...\n ... a new (not-aborted) transaction is ...\n ... a new (never aborted) transaction is ...\n ... a new (never aborted, even if the previous is) transaction is ...\n ... a new (never aborted, regardless of the status of the previous one) transaction is ...\n\n\nMaybe there's a way to reword the entire phrase that leads to a better\nformulation of the idea.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 10 Nov 2022 12:17:57 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Mon, 7 Nov 2022 at 22:04, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Sat, 2022-11-05 at 10:08 +0000, Simon Riggs wrote:\n> > Agreed; new compilation patch attached, including mine and then\n> > Robert's suggested rewordings.\n>\n> Thanks. There is clearly a lot of usefule information in this.\n>\n> Some comments:\n>\n> > --- a/doc/src/sgml/func.sgml\n> > +++ b/doc/src/sgml/func.sgml\n> > @@ -24673,7 +24673,10 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> > <para>\n> > Returns the current transaction's ID. It will assign a new one if the\n> > current transaction does not have one already (because it has not\n> > - performed any database updates).\n> > + performed any database updates); see <xref\n> > + linkend=\"transaction-id\"/> for details. If executed in a\n> > + subtransaction this will return the top-level xid; see <xref\n> > + linkend=\"subxacts\"/> for details.\n> > </para></entry>\n> > </row>\n>\n> I would use a comma after \"subtransaction\", and I think it would be better to write\n> \"transaction ID\" instead of \"xid\".\n>\n> > @@ -24690,6 +24693,7 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> > ID is assigned yet. 
(It's best to use this variant if the transaction\n> > might otherwise be read-only, to avoid unnecessary consumption of an\n> > XID.)\n> > + If executed in a subtransaction this will return the top-level xid.\n> > </para></entry>\n> > </row>\n>\n> Same as above.\n>\n> > @@ -24733,6 +24737,8 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> > <para>\n> > Returns a current <firstterm>snapshot</firstterm>, a data structure\n> > showing which transaction IDs are now in-progress.\n> > + Only top-level xids are included in the snapshot; subxids are not\n> > + shown; see <xref linkend=\"subxacts\"/> for details.\n> > </para></entry>\n> > </row>\n>\n> Again, I would avoid \"xid\" and \"subxid\", or at least use \"transaction ID (xid)\"\n> and similar.\n>\n> > --- a/doc/src/sgml/ref/release_savepoint.sgml\n> > +++ b/doc/src/sgml/ref/release_savepoint.sgml\n> > @@ -34,23 +34,16 @@ RELEASE [ SAVEPOINT ] <replaceable>savepoint_name</replaceable>\n> > <title>Description</title>\n> >\n> > <para>\n> > - <command>RELEASE SAVEPOINT</command> destroys a savepoint previously defined\n> > - in the current transaction.\n> > + <command>RELEASE SAVEPOINT</command> will subcommit the subtransaction\n> > + established by the named savepoint, if one exists. This will release\n> > + any resources held by the subtransaction. 
If there were any\n> > + subtransactions of the named savepoint, these will also be subcommitted.\n> > </para>\n> >\n> > <para>\n>\n> \"Subtransactions of the named savepoint\" is somewhat confusing; how about\n> \"subtransactions of the subtransaction established by the named savepoint\"?\n>\n> If that is too long and explicit, perhaps \"subtransactions of that subtransaction\".\n>\n> > @@ -78,7 +71,7 @@ RELEASE [ SAVEPOINT ] <replaceable>savepoint_name</replaceable>\n> >\n> > <para>\n> > It is not possible to release a savepoint when the transaction is in\n> > - an aborted state.\n> > + an aborted state, to do that use <xref linkend=\"sql-rollback-to\"/>.\n> > </para>\n> >\n> > <para>\n>\n> I think the following is more English:\n> \"It is not possible ... state; to do that, use <xref .../>.\"\n>\n> > --- a/doc/src/sgml/ref/rollback.sgml\n> > +++ b/doc/src/sgml/ref/rollback.sgml\n> > @@ -56,11 +56,14 @@ ROLLBACK [ WORK | TRANSACTION ] [ AND [ NO ] CHAIN ]\n> > <term><literal>AND CHAIN</literal></term>\n> > <listitem>\n> > <para>\n> > - If <literal>AND CHAIN</literal> is specified, a new transaction is\n> > + If <literal>AND CHAIN</literal> is specified, a new unaborted transaction is\n> > immediately started with the same transaction characteristics (see <xref\n> > linkend=\"sql-set-transaction\"/>) as the just finished one. Otherwise,\n> > no new transaction is started.\n>\n> I don't think that is an improvement. \"Unaborted\" is an un-word. A new transaction\n> is always \"unaborted\", isn't it?\n>\n> > --- a/doc/src/sgml/wal.sgml\n> > +++ b/doc/src/sgml/wal.sgml\n> > @@ -909,4 +910,36 @@\n> > seem to be a problem in practice.\n> > </para>\n> > </sect1>\n> > +\n> > + <sect1 id=\"two-phase\">\n> > +\n> > + <title>Two-Phase Transactions</title>\n> > +\n> > + <para>\n> > + <productname>PostgreSQL</productname> supports a two-phase commit (2PC)\n> [...]\n> > + <filename>pg_twophase</filename> directory. 
Currently-prepared\n> > + transactions can be inspected using <link\n> > + linkend=\"view-pg-prepared-xacts\"><structname>pg_prepared_xacts</structname></link>.\n> > + </para>\n> > + </sect1>\n> > +\n> > </chapter>\n>\n> I don't like \"currently-prepared\". How about:\n> \"Transaction that are currently prepared can be inspected...\"\n>\n> This is clearly interesting information, but I don't think the WAL chapter is the right\n> place for this. \"pg_twophase\" is already mentioned in \"storage.sgml\", and details about\n> when exactly a prepared transaction is persisted may exceed the details level needed by\n> the end user.\n>\n> I'd look for that information in the reference page for PREPARE TRANSACTION; perhaps\n> that would be a better place. Or, even better, the new \"xact.sgml\" chapter.\n>\n> > --- /dev/null\n> > +++ b/doc/src/sgml/xact.sgml\n>\n> + <title>Transaction Management</title>\n>\n> + The word transaction is often abbreviated as \"xact\".\n>\n> Should use <quote> here.\n>\n> > + <title>Transactions and Identifiers</title>\n>\n> > + <para>\n> > + Once a transaction writes to the database, it is assigned a\n> > + non-virtual <literal>TransactionId</literal> (or <type>xid</type>),\n> > + e.g., <literal>278394</literal>. Xids are assigned sequentially\n> > + using a global counter used by all databases within the\n> > + <productname>PostgreSQL</productname> cluster. This property is used by\n> > + the transaction system to order transactions by their first database\n> > + write, i.e., lower-numbered xids started writing before higher-numbered\n> > + xids. Of course, transactions might start in a different order.\n> > + </para>\n>\n> \"This property\"? 
How about:\n> \"Because transaction IDs are assigned sequentially, the transaction system can\n> use them to order transactions by their first database write\"\n>\n> I would want some additional information here: why does the transaction system have\n> to order transactions by their first database write?\n>\n> \"Of course, transactions might start in a different order.\"\n>\n> Now that confuses me. Are you saying that BEGIN could be in a different order\n> than the first database write? Perhaps like this:\n>\n> \"Note that the order in which transactions perform their first database write\n> might be different from the order in which the transactions started.\"\n>\n> > + The internal transaction ID type <type>xid</type> is 32-bits wide\n>\n> There should be no hyphen in \"32 bits wide\", just as in \"3 years old\".\n>\n> > + A 32-bit epoch is incremented during each\n> > + wrap around.\n>\n> We usually call this \"wraparound\" without a space.\n>\n> > + There is also a 64-bit type <type>xid8</type> which\n> > + includes this epoch and therefore does not wrap around during the\n> > + life of an installation and can be converted to xid by casting.\n>\n> Running \"and\"s. Better:\n>\n> \"There is also ... and does not wrap ... life of an installation.\n> <type>xid8</type> can be converted to <type>xid</type> by casting.\"\n>\n> > + Xids are used as the\n> > + basis for <productname>PostgreSQL</productname>'s <link\n> > + linkend=\"mvcc\">MVCC</link> concurrency mechanism, <link\n> > + linkend=\"hot-standby\">Hot Standby</link>, and Read Replica servers.\n>\n> What is the difference between a hot standby and a read replica? 
I think\n> one of these terms is sufficient.\n>\n> > + In addition to <literal>vxid</literal> and <type>xid</type>,\n> > + when a transaction is prepared for two-phase commit it\n> > + is also identified by a Global Transaction Identifier\n> > + (<acronym>GID</acronym>).\n>\n> Better:\n>\n> \"In addition to <literal>vxid</literal> and <type>xid</type>,\n> prepared transactions also have a Global Transaction Identifier\n> (<acronym>GID</acronym>) that is assigned when the transaction is\n> prepared for two-phase commit.\"\n>\n> > + <sect1 id=\"xact-locking\">\n> > +\n> > + <title>Transactions and Locking</title>\n> > +\n> > + <para>\n> > + Currently-executing transactions are shown in <link\n> > + linkend=\"view-pg-locks\"><structname>pg_locks</structname></link>\n> > + in columns <structfield>virtualxid</structfield> and\n> > + <structfield>transactionid</structfield>.\n>\n> Better:\n>\n> \"The transaction IDs of currently executing transactions are shown in <link\n> linkend=\"view-pg-locks\"><structname>pg_locks</structname></link>\n> in the columns <structfield>virtualxid</structfield> and\n> <structfield>transactionid</structfield>.\"\n>\n> > + Lock waits on table-level locks are shown waiting for\n> > + <structfield>virtualxid</structfield>, while lock waits on row-level\n> > + locks are shown waiting for <structfield>transactionid</structfield>.\n>\n> That's not true. Transactions waiting for table-level locks are shown\n> waiting for a \"relation\" lock in both \"pg_stat_activity\" and \"pg_locks\".\n\nAgreed.\n\nLock waits on table-locks are shown waiting for a lock type of\n<literal>relation</literal>,\nwhile lock waits on row-level locks are shown waiting for a lock type\nof <literal>transactionid</literal>.\nTable-level locks require only a virtualxid when the lock is less than an\nAccessExclusiveLock; in other cases an xid must be allocated.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 10 Nov 2022 11:31:25 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Thu, 2022-11-10 at 12:17 +0100, Alvaro Herrera wrote:\n> On 2022-Nov-10, Laurenz Albe wrote:\n> > On Wed, 2022-11-09 at 09:16 -0500, Robert Treat wrote:\n> > > > > - If <literal>AND CHAIN</literal> is specified, a new transaction is\n> > > > > + If <literal>AND CHAIN</literal> is specified, a new unaborted transaction is\n> > > > > immediately started with the same transaction characteristics (see <xref\n> > > > > linkend=\"sql-set-transaction\"/>) as the just finished one. Otherwise,\n> > > > > no new transaction is started.\n> > \n> > A new transaction is never aborted in my understanding. Being aborted\n> > is not a characteristic of a transaction, but a state.\n> \n> I agree, but maybe it's good to make the point explicit, because it\n> doesn't seem obvious. Perhaps something like\n> \n> \"If X is specified, a new transaction (never in aborted state) is\n> immediately started with the same transaction characteristics (see X) as\n> the just finished one. Otherwise ...\"\n> \n> Getting the wording of that parenthical comment right is tricky, though.\n> What I propose above is not great, but I don't know how to make it\n> better. Other ideas that seem slightly worse but may inspire someone:\n> \n> ... a new transaction (which is never in aborted state) is ...\n> ... a new transaction (not in aborted state) is ...\n> ... a new transaction (never aborted, even if the previous is) is ...\n> ... a new (not-aborted) transaction is ...\n> ... a new (never aborted) transaction is ...\n> ... a new (never aborted, even if the previous is) transaction is ...\n> ... 
a new (never aborted, regardless of the status of the previous one) transaction is ...\n> \n> \n> Maybe there's a way to reword the entire phrase that leads to a better\n> formulation of the idea.\n\nAny of your auggestions is better than \"unaborted\".\n\nHow about:\n\n If <literal>AND CHAIN</literal> is specified, a new transaction is\n immediately started with the same transaction characteristics (see <xref\n linkend=\"sql-set-transaction\"/>) as the just finished one.\n This new transaction won't be in the <quote>aborted</quote> state, even\n if the old transaction was aborted.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sun, 13 Nov 2022 12:56:30 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Tue, 8 Nov 2022 at 03:41, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Nov 7, 2022 at 10:58:05AM +0000, Simon Riggs wrote:\n> > What I've posted is the merged patch, i.e. your latest patch, plus\n> > changes to RELEASE SAVEPOINT from you on Oct 16, plus changes based on\n> > the later comments from Robert and I.\n>\n> Thanks. I have two changes to your patch. First, I agree \"destroy\" is\n> the wrong word for this, but I don't think \"subcommit\" is good, for\n> three reasons:\n>\n> 1. Release merges the non-aborted changes into the previous transaction\n> _and_ frees their resources --- \"subcommit\" doesn't have both meanings,\n> which I think means if we need a single word, we should use \"release\"\n> and later define what that means.\n>\n> 2. The \"subcommit\" concept doesn't closely match the user-visible\n> behavior, even though we use subtransactions to accomplish this. Release\n> is more of a rollup/merge into the previously-active\n> transaction/savepoint.\n>\n> 3. \"subcommit\" is an implementation detail that I don't think we should\n> expose to users in the manual pages.\n\nI don't understand this - you seem to be presuming that \"subcommit\"\nmeans something different and then objecting to that difference.\n\nFor me, \"Subcommit\" exactly matches what is happening because the code\ncomments and details already use Subcommit in exactly this way.\n\nThe main purpose of this patch is to talk about what is happening\nusing the same language as we do in the code. The gap between the code\nand the docs isn't helping anyone.\n\n> I adjusted the first paragraph of RELEASE SAVEPOINT to highlight the\n> above issues. My original patch had similar wording.\n>\n> The first attachment shows my changes to your patch, and the second\n> attachment is my full patch.\n\nOK, though this makes the patch tester look like this doesn't apply.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 15 Nov 2022 10:16:44 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 11:04:46PM +0100, Laurenz Albe wrote:\n> On Sat, 2022-11-05 at 10:08 +0000, Simon Riggs wrote:\n> > Agreed; new compilation patch attached, including mine and then\n> > Robert's suggested rewordings.\n> \n> Thanks. There is clearly a lot of usefule information in this.\n\nSorry again for the long delay in replying to this.\n\n> Some comments:\n> \n> > --- a/doc/src/sgml/func.sgml\n> > +++ b/doc/src/sgml/func.sgml\n> > @@ -24673,7 +24673,10 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> > <para>\n> > Returns the current transaction's ID. It will assign a new one if the\n> > current transaction does not have one already (because it has not\n> > - performed any database updates).\n> > + performed any database updates); see <xref\n> > + linkend=\"transaction-id\"/> for details. If executed in a\n> > + subtransaction this will return the top-level xid; see <xref\n> > + linkend=\"subxacts\"/> for details.\n> > </para></entry>\n> > </row>\n> \n> I would use a comma after \"subtransaction\", and I think it would be better to write\n> \"transaction ID\" instead of \"xid\".\n\nAgreed.\n\n> > @@ -24690,6 +24693,7 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> > ID is assigned yet. 
(It's best to use this variant if the transaction\n> > might otherwise be read-only, to avoid unnecessary consumption of an\n> > XID.)\n> > + If executed in a subtransaction this will return the top-level xid.\n> > </para></entry>\n> > </row>\n> \n> Same as above.\n\nAgreed.\n\n> > @@ -24733,6 +24737,8 @@ SELECT collation for ('foo' COLLATE \"de_DE\");\n> > <para>\n> > Returns a current <firstterm>snapshot</firstterm>, a data structure\n> > showing which transaction IDs are now in-progress.\n> > + Only top-level xids are included in the snapshot; subxids are not\n> > + shown; see <xref linkend=\"subxacts\"/> for details.\n> > </para></entry>\n> > </row>\n> \n> Again, I would avoid \"xid\" and \"subxid\", or at least use \"transaction ID (xid)\"\n> and similar.\n\nDone.\n\n> > --- a/doc/src/sgml/ref/release_savepoint.sgml\n> > +++ b/doc/src/sgml/ref/release_savepoint.sgml\n> > @@ -34,23 +34,16 @@ RELEASE [ SAVEPOINT ] <replaceable>savepoint_name</replaceable>\n> > <title>Description</title>\n> > \n> > <para>\n> > - <command>RELEASE SAVEPOINT</command> destroys a savepoint previously defined\n> > - in the current transaction.\n> > + <command>RELEASE SAVEPOINT</command> will subcommit the subtransaction\n> > + established by the named savepoint, if one exists. This will release\n> > + any resources held by the subtransaction. If there were any\n> > + subtransactions of the named savepoint, these will also be subcommitted.\n> > </para>\n> > \n> > <para>\n> \n> \"Subtransactions of the named savepoint\" is somewhat confusing; how about\n> \"subtransactions of the subtransaction established by the named savepoint\"?\n> \n> If that is too long and explicit, perhaps \"subtransactions of that subtransaction\".\n\nThis paragraph has been rewritten to:\n\n <command>RELEASE SAVEPOINT</command> releases the named savepoint and\n all active savepoints that were created after the named savepoint,\n and frees their resources. 
All changes made since the creation of the\n savepoint, excluding rolled back savepoints changes, are merged into\n the transaction or savepoint that was active when the named savepoint\n was created. Changes made after <command>RELEASE SAVEPOINT</command>\n will also be part of this active transaction or savepoint.\n\n> > @@ -78,7 +71,7 @@ RELEASE [ SAVEPOINT ] <replaceable>savepoint_name</replaceable>\n> > \n> > <para>\n> > It is not possible to release a savepoint when the transaction is in\n> > - an aborted state.\n> > + an aborted state, to do that use <xref linkend=\"sql-rollback-to\"/>.\n> > </para>\n> > \n> > <para>\n> \n> I think the following is more English:\n> \"It is not possible ... state; to do that, use <xref .../>.\"\n\nChanged to:\n\n It is not possible to release a savepoint when the transaction is in\n an aborted state; to do that, use <xref linkend=\"sql-rollback-to\"/>.\n\n> > --- a/doc/src/sgml/ref/rollback.sgml\n> > +++ b/doc/src/sgml/ref/rollback.sgml\n> > @@ -56,11 +56,14 @@ ROLLBACK [ WORK | TRANSACTION ] [ AND [ NO ] CHAIN ]\n> > <term><literal>AND CHAIN</literal></term>\n> > <listitem>\n> > <para>\n> > - If <literal>AND CHAIN</literal> is specified, a new transaction is\n> > + If <literal>AND CHAIN</literal> is specified, a new unaborted transaction is\n> > immediately started with the same transaction characteristics (see <xref\n> > linkend=\"sql-set-transaction\"/>) as the just finished one. Otherwise,\n> > no new transaction is started.\n> \n> I don't think that is an improvement. \"Unaborted\" is an un-word. 
A new transaction\n> is always \"unaborted\", isn't it?\n\nAgreed.\n\n> > --- a/doc/src/sgml/wal.sgml\n> > +++ b/doc/src/sgml/wal.sgml\n> > @@ -909,4 +910,36 @@\n> > seem to be a problem in practice.\n> > </para>\n> > </sect1>\n> > +\n> > + <sect1 id=\"two-phase\">\n> > +\n> > + <title>Two-Phase Transactions</title>\n> > +\n> > + <para>\n> > + <productname>PostgreSQL</productname> supports a two-phase commit (2PC)\n> [...]\n> > + <filename>pg_twophase</filename> directory. Currently-prepared\n> > + transactions can be inspected using <link\n> > + linkend=\"view-pg-prepared-xacts\"><structname>pg_prepared_xacts</structname></link>.\n> > + </para>\n> > + </sect1>\n> > +\n> > </chapter>\n> \n> I don't like \"currently-prepared\". How about:\n> \"Transaction that are currently prepared can be inspected...\"\n\nYes, now:\n\n Transactions that are currently prepared can be inspected using <link\n linkend=\"view-pg-prepared-xacts\"><structname>pg_prepared_xacts</structname></link>.\n\n> This is clearly interesting information, but I don't think the WAL chapter is the right\n> place for this. \"pg_twophase\" is already mentioned in \"storage.sgml\", and details about\n> when exactly a prepared transaction is persisted may exceed the details level needed by\n> the end user.\n> \n> I'd look for that information in the reference page for PREPARE TRANSACTION; perhaps\n> that would be a better place. Or, even better, the new \"xact.sgml\" chapter.\n\nAgreed, moved to xact.sgml.\n\n> > --- /dev/null\n> > +++ b/doc/src/sgml/xact.sgml\n> \n> + <title>Transaction Management</title>\n> \n> + The word transaction is often abbreviated as \"xact\".\n> \n> Should use <quote> here.\n\nDone.\n\n> > + <title>Transactions and Identifiers</title>\n> \n> > + <para>\n> > + Once a transaction writes to the database, it is assigned a\n> > + non-virtual <literal>TransactionId</literal> (or <type>xid</type>),\n> > + e.g., <literal>278394</literal>. 
Xids are assigned sequentially\n> > + using a global counter used by all databases within the\n> > + <productname>PostgreSQL</productname> cluster. This property is used by\n> > + the transaction system to order transactions by their first database\n> > + write, i.e., lower-numbered xids started writing before higher-numbered\n> > + xids. Of course, transactions might start in a different order.\n> > + </para>\n> \n> \"This property\"? How about:\n> \"Because transaction IDs are assigned sequentially, the transaction system can\n> use them to order transactions by their first database write\"\n> \n> I would want some additional information here: why does the transaction system have\n> to order transactions by their first database write?\n> \n> \"Of course, transactions might start in a different order.\"\n> \n> Now that confuses me. Are you saying that BEGIN could be in a different order\n> than the first database write? Perhaps like this:\n> \n> \"Note that the order in which transactions perform their first database write\n> might be different from the order in which the transactions started.\"\n\nI rewrote the paragraph to be:\n\n Non-virtual <literal>TransactionId</literal> (or <type>xid</type>),\n e.g., <literal>278394</literal>, are assigned sequentially to\n transactions from a global counter used by all databases within\n the <productname>PostgreSQL</productname> cluster. This assignment\n happens when a transaction first writes to the database. 
This means\n lower-numbered xids started writing before higher-numbered xids.\n Note that the order in which transactions perform their first database\n write might be different from the order in which the transactions\n started, particularly if the transaction started with statements that\n only performed database reads.\n\n> > + The internal transaction ID type <type>xid</type> is 32-bits wide\n> \n> There should be no hyphen in \"32 bits wide\", just as in \"3 years old\".\n\nDone.\n\n> > + A 32-bit epoch is incremented during each\n> > + wrap around.\n> \n> We usually call this \"wraparound\" without a space.\n\nFixed.\n\n> > + There is also a 64-bit type <type>xid8</type> which\n> > + includes this epoch and therefore does not wrap around during the\n> > + life of an installation and can be converted to xid by casting.\n> \n> Running \"and\"s. Better:\n> \n> \"There is also ... and does not wrap ... life of an installation.\n> <type>xid8</type> can be converted to <type>xid</type> by casting.\"\n\nI went with:\n\n There is also a 64-bit type <type>xid8</type> which\n includes this epoch and therefore does not wrap around during the\n life of an installation; it can be converted to xid by casting.\n\n> > + Xids are used as the\n> > + basis for <productname>PostgreSQL</productname>'s <link\n> > + linkend=\"mvcc\">MVCC</link> concurrency mechanism, <link\n> > + linkend=\"hot-standby\">Hot Standby</link>, and Read Replica servers.\n> \n> What is the difference between a hot standby and a read replica? 
I think\n> one of these terms is sufficient.\n\nAgreed, I went with:\n\n Xids are used as the\n basis for <productname>PostgreSQL</productname>'s <link\n linkend=\"mvcc\">MVCC</link> concurrency mechanism and streaming\n replication.\n\n> > + In addition to <literal>vxid</literal> and <type>xid</type>,\n> > + when a transaction is prepared for two-phase commit it\n> > + is also identified by a Global Transaction Identifier\n> > + (<acronym>GID</acronym>).\n> \n> Better:\n> \n> \"In addition to <literal>vxid</literal> and <type>xid</type>,\n> prepared transactions also have a Global Transaction Identifier\n> (<acronym>GID</acronym>) that is assigned when the transaction is\n> prepared for two-phase commit.\"\n\nI went with:\n\n In addition to <literal>vxid</literal> and <type>xid</type>,\n prepared transactions are also assigned Global Transaction\n Identifiers (<acronym>GID</acronym>).\n\n> > + <sect1 id=\"xact-locking\">\n> > +\n> > + <title>Transactions and Locking</title>\n> > +\n> > + <para>\n> > + Currently-executing transactions are shown in <link\n> > + linkend=\"view-pg-locks\"><structname>pg_locks</structname></link>\n> > + in columns <structfield>virtualxid</structfield> and\n> > + <structfield>transactionid</structfield>.\n> \n> Better:\n> \n> \"The transaction IDs of currently executing transactions are shown in <link\n> linkend=\"view-pg-locks\"><structname>pg_locks</structname></link>\n> in the columns <structfield>virtualxid</structfield> and\n> <structfield>transactionid</structfield>.\"\n\nDone.\n\n> > + Lock waits on table-level locks are shown waiting for\n> > + <structfield>virtualxid</structfield>, while lock waits on row-level\n> > + locks are shown waiting for <structfield>transactionid</structfield>.\n> \n> That's not true. Transactions waiting for table-level locks are shown\n> waiting for a \"relation\" lock in both \"pg_stat_activity\" and \"pg_locks\".\n\nI tested and you are right. 
I went with more generic wording:\n\n Some lock types wait on <structfield>virtualxid</structfield>,\n while other types wait on <structfield>transactionid</structfield>.\n\n> > + Row-level read and write locks are recorded directly in locked\n> > + rows and can be inspected using the <xref linkend=\"pgrowlocks\"/>\n> > + extension. Row-level read locks might also require the assignment\n> > + of multixact IDs (<literal>mxid</literal>). Mxids are recorded in\n> > + the <filename>pg_multixact</filename> directory.\n> \n> \"are recorded directly in *the* locked rows\"\n\nDone.\n\n> I think the mention of multixacts should link to\n> <xref linkend=\"vacuum-for-multixact-wraparound\"/>. Again, I would not\n> specifically mention the directory, since it is already described in\n> \"storage.sgml\", but I have no strong optinion there.\n\nDone with:\n\n Row-level read locks might also require the assignment\n of multixact IDs (<literal>mxid</literal>; see <xref\n linkend=\"vacuum-for-multixact-wraparound\"/>).\n\n> > + <sect1 id=\"subxacts\">\n> > +\n> > + <title>Subtransactions</title>\n> \n> > + The word subtransaction is often abbreviated as\n> > + <literal>subxact</literal>.\n> \n> I'd use <quote>, not <literal>.\n\nDone.\n\n> > + If a subtransaction is assigned a non-virtual transaction ID,\n> > + its transaction ID is referred to as a <literal>subxid</literal>.\n> \n> Again, I would use <quote>, since we don't <literal> \"subxid\"\n> elsewhere.\n\nDone.\n\n> + Up to\n> + 64 open subxids are cached in shared memory for each backend; after\n> + that point, the overhead increases significantly since we must look\n> + up subxid entries in <filename>pg_subtrans</filename>.\n> \n> Comma before \"since\". 
Perhaps you should mention that this means disk I/O.\n\nI went with:\n\n Up to 64 open subxids are cached in shared memory for each backend; after\n that point, the storage I/O overhead increases significantly, since\n we must look up subxid entries in <filename>pg_subtrans</filename>.\n\nUpdated full patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Fri, 18 Nov 2022 14:11:50 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 09:16:18AM -0500, Robert Treat wrote:\n> > \"Subtransactions of the named savepoint\" is somewhat confusing; how about\n> > \"subtransactions of the subtransaction established by the named savepoint\"?\n> >\n> > If that is too long and explicit, perhaps \"subtransactions of that subtransaction\".\n> >\n> \n> Personally, I think these are more confusing.\n\nThat text is gone.\n\n> > > --- a/doc/src/sgml/ref/rollback.sgml\n> > > +++ b/doc/src/sgml/ref/rollback.sgml\n> > > @@ -56,11 +56,14 @@ ROLLBACK [ WORK | TRANSACTION ] [ AND [ NO ] CHAIN ]\n> > > <term><literal>AND CHAIN</literal></term>\n> > > <listitem>\n> > > <para>\n> > > - If <literal>AND CHAIN</literal> is specified, a new transaction is\n> > > + If <literal>AND CHAIN</literal> is specified, a new unaborted transaction is\n> > > immediately started with the same transaction characteristics (see <xref\n> > > linkend=\"sql-set-transaction\"/>) as the just finished one. Otherwise,\n> > > no new transaction is started.\n> >\n> > I don't think that is an improvement. \"Unaborted\" is an un-word. A new transaction\n> > is always \"unaborted\", isn't it?\n> >\n> \n> I thought about this as well when reviewing it, but I do think\n> something is needed for the case where you have a transaction which\n> has suffered an error and then you issue \"rollback and chain\"; if you\n> just say \"a new transaction is immediately started with the same\n> transaction characteristics\" it might imply to some the new\n> transaction has some kind of carry over of the previous broken\n> transaction... 
the use of the word unaborted makes it clear that the\n> new transaction is 100% functional.\n\nI changed it to:\n\n a new (unaborted) transaction is immediately started\n\n> ISTR that you only use a comma before since in cases where the\n> preceding thought contains a negative.\n> \n> In any case, are you thinking something like this:\n> \n> \" 64 open subxids are cached in shared memory for each backend; after\n> that point the overhead increases significantly due to additional disk I/O\n> from looking up subxid entries in <filename>pg_subtrans</filename>.\"\n\nI went with:\n\n Up to 64 open subxids are cached in shared memory for\n each backend; after that point, the storage I/O overhead increases\n significantly due to additional lookups of subxid entries in\n <filename>pg_subtrans</filename>.\n\nNew patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Fri, 18 Nov 2022 14:28:41 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 08:39:29AM +0100, Laurenz Albe wrote:\n> > > I don't think that is an improvement. \"Unaborted\" is an un-word. A new transaction\n> > > is always \"unaborted\", isn't it?\n> > \n> > I thought about this as well when reviewing it, but I do think\n> > something is needed for the case where you have a transaction which\n> > has suffered an error and then you issue \"rollback and chain\"; if you\n> > just say \"a new transaction is immediately started with the same\n> > transaction characteristics\" it might imply to some the new\n> > transaction has some kind of carry over of the previous broken\n> > transaction... the use of the word unaborted makes it clear that the\n> > new transaction is 100% functional.\n> \n> A new transaction is never aborted in my understanding. Being aborted is not a\n> characteristic of a transaction, but a state.\n\nI used \"(unaborted)\", which seems to be a compromise.\n\n> > > > + The internal transaction ID type <type>xid</type> is 32-bits wide\n> > > \n> > > There should be no hyphen in \"32 bits wide\", just as in \"3 years old\".\n> > \n> > Minor aside, we should clean up glossary.sgml as well.\n> \n> Right, it has this:\n> \n> The numerical, unique, sequentially-assigned identifier that each\n> transaction receives when it first causes a database modification.\n> Frequently abbreviated as <firstterm>xid</firstterm>.\n> When stored on disk, xids are only 32-bits wide, so only\n> approximately four billion write transaction IDs can be generated;\n> to permit the system to run for longer than that,\n> <firstterm>epochs</firstterm> are used, also 32 bits wide.\n> \n> Which reminds me that I should have suggested <firstterm> rather than\n> <quote> where I complained about the use of <literal>.\n\nI changed them to \"firstterm\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 18 Nov 2022 14:31:38 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Sun, Nov 13, 2022 at 12:56:30PM +0100, Laurenz Albe wrote:\n> > Maybe there's a way to reword the entire phrase that leads to a better\n> > formulation of the idea.\n> \n> Any of your suggestions is better than \"unaborted\".\n> \n> How about:\n> \n> If <literal>AND CHAIN</literal> is specified, a new transaction is\n> immediately started with the same transaction characteristics (see <xref\n> linkend=\"sql-set-transaction\"/>) as the just finished one.\n> This new transaction won't be in the <quote>aborted</quote> state, even\n> if the old transaction was aborted.\n\nI think I am going to keep \"(unaborted)\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 18 Nov 2022 14:33:26 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 11:31:25AM +0000, Simon Riggs wrote:\n> > That's not true. Transactions waiting for table-level locks are shown\n> > waiting for a \"relation\" lock in both \"pg_stat_activity\" and \"pg_locks\".\n> \n> Agreed.\n> \n> Lock waits on table-locks are shown waiting for a lock type of\n> <literal>relation</literal>,\n> while lock waits on row-level locks are shown waiting for a lock type\n> of <literal>transactionid</literal>.\n> Table-level locks require only a virtualxid when the lock is less than an\n> AccessExclusiveLock; in other cases an xid must be allocated.\n\nYeah, I went with more generic wording since the point seems to be that\nsometimes xid and sometimes vxids are waited on.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 18 Nov 2022 14:34:34 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 10:16:44AM +0000, Simon Riggs wrote:\n> On Tue, 8 Nov 2022 at 03:41, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Mon, Nov 7, 2022 at 10:58:05AM +0000, Simon Riggs wrote:\n> > > What I've posted is the merged patch, i.e. your latest patch, plus\n> > > changes to RELEASE SAVEPOINT from you on Oct 16, plus changes based on\n> > > the later comments from Robert and I.\n> >\n> > Thanks. I have two changes to your patch. First, I agree \"destroy\" is\n> > the wrong word for this, but I don't think \"subcommit\" is good, for\n> > three reasons:\n> >\n> > 1. Release merges the non-aborted changes into the previous transaction\n> > _and_ frees their resources --- \"subcommit\" doesn't have both meanings,\n> > which I think means if we need a single word, we should use \"release\"\n> > and later define what that means.\n> >\n> > 2. The \"subcommit\" concept doesn't closely match the user-visible\n> > behavior, even though we use subtransactions to accomplish this. Release\n> > is more of a rollup/merge into the previously-active\n> > transaction/savepoint.\n> >\n> > 3. \"subcommit\" is an implementation detail that I don't think we should\n> > expose to users in the manual pages.\n> \n> I don't understand this - you seem to be presuming that \"subcommit\"\n> means something different and then objecting to that difference.\n> \n> For me, \"Subcommit\" exactly matches what is happening because the code\n> comments and details already use Subcommit in exactly this way.\n> \n> The main purpose of this patch is to talk about what is happening\n> using the same language as we do in the code. The gap between the code\n> and the docs isn't helping anyone.\n\nI didn't think that was the purpose, and certainly not in the\nreference/ref/man pages. I thought the purpose was to explain the\nbehavior clearly, and in the \"Internals\" section, the internal API we\nexpose to users. 
I didn't think matching the code was ever a goal --- I\nthought that is what the README files are for.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 18 Nov 2022 14:37:39 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 02:33:26PM -0500, Bruce Momjian wrote:\n> On Sun, Nov 13, 2022 at 12:56:30PM +0100, Laurenz Albe wrote:\n> > > Maybe there's a way to reword the entire phrase that leads to a better\n> > > formulation of the idea.\n> > \n> > Any of your suggestions is better than \"unaborted\".\n> > \n> > How about:\n> > \n> > If <literal>AND CHAIN</literal> is specified, a new transaction is\n> > immediately started with the same transaction characteristics (see <xref\n> > linkend=\"sql-set-transaction\"/>) as the just finished one.\n> > This new transaction won't be in the <quote>aborted</quote> state, even\n> > if the old transaction was aborted.\n> \n> I think I am going to keep \"(unaborted)\".\n\nAttached is the most current version of the patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Fri, 18 Nov 2022 14:38:25 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, 2022-11-18 at 14:28 -0500, Bruce Momjian wrote:\n> New patch attached.\n\nThanks.\n\n> --- a/doc/src/sgml/ref/release_savepoint.sgml\n> +++ b/doc/src/sgml/ref/release_savepoint.sgml\n\n> + <command>RELEASE SAVEPOINT</command> releases the named savepoint and\n> + all active savepoints that were created after the named savepoint,\n> + and frees their resources. All changes made since the creation of the\n> + savepoint, excluding rolled back savepoints changes, are merged into\n> + the transaction or savepoint that was active when the named savepoint\n> + was created. Changes made after <command>RELEASE SAVEPOINT</command>\n> + will also be part of this active transaction or savepoint.\n\nI am not sure if \"rolled back savepoints changes\" is clear enough.\nI understand that you are trying to avoid the term \"subtransaction\".\nHow about:\n\n All changes made since the creation of the savepoint that didn't already\n get rolled back are merged ...\n\n> --- a/doc/src/sgml/ref/rollback.sgml\n> +++ b/doc/src/sgml/ref/rollback.sgml\n>\n> + If <literal>AND CHAIN</literal> is specified, a new (unaborted)\n\n*Sigh* I'll make one last plea for \"not aborted\".\n\n> --- /dev/null\n> +++ b/doc/src/sgml/xact.sgml\n\n> + <para>\n> + Transactions can be created explicitly using <command>BEGIN</command>\n> + and <command>COMMIT</command>, which creates a transaction block.\n> + An SQL statement outside of a transaction block automatically uses\n> + a single-statement transaction.\n> + </para>\n\nSorry, I should have noted that the first time around.\n\nTransactions are not created using COMMIT.\n\nPerhaps we should also avoid the term \"transaction block\". Even without speaking\nof a \"block\", way too many people confuse PL/pgSQL's BEGIN ... END blocks\nwith transactions. 
On the other hand, we use \"transaction block\" everywhere\nelse in the documentation...\n\nHow about:\n\n <para>\n Multi-statement transactions can be created explicitly using\n <command>BEGIN</command> or <command>START TRANSACTION</command> and\n are ended using <command>COMMIT</command> or <command>ROLLBACK</command>.\n An SQL statement outside of a transaction block automatically uses\n a single-statement transaction.\n </para>\n\n> + <sect1 id=\"xact-locking\">\n> +\n> + <title>Transactions and Locking</title>\n> +\n> + <para>\n> + The transaction IDs of currently executing transactions are shown in\n> + <link linkend=\"view-pg-locks\"><structname>pg_locks</structname></link>\n> + in columns <structfield>virtualxid</structfield> and\n> + <structfield>transactionid</structfield>. Read-only transactions\n> + will have <structfield>virtualxid</structfield>s but NULL\n> + <structfield>transactionid</structfield>s, while read-write transactions\n> + will have both as non-NULL.\n> + </para>\n\nPerhaps the following will be prettier than \"have both as non-NULL\":\n\n ..., while both columns will be set in read-write transactions.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 21 Nov 2022 11:15:36 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "Agreed on not using \"unaborted\", per previous discussion.\n\nOn 2022-Nov-21, Laurenz Albe wrote:\n\n> Perhaps we should also avoid the term \"transaction block\". Even without speaking\n> of a \"block\", way too many people confuse PL/pgSQL's BEGIN ... END blocks\n> with transactions. On the other hand, we use \"transaction block\" everywhere\n> else in the documentation...\n\nYeah, I don't understand why we need this \"transaction block\" term at\nall. It adds nothing. We could just use the term \"transaction\", and\nlittle meaning would be lost. When necessary, we could just say\n\"explicit transaction\" or something to that effect. In this particular\ncase, we could modify your proposed wording,\n\n> <para>\n> Multi-statement transactions can be created explicitly using\n> <command>BEGIN</command> or <command>START TRANSACTION</command> and\n> are ended using <command>COMMIT</command> or <command>ROLLBACK</command>.\n> An SQL statement outside of a transaction block automatically uses\n> a single-statement transaction.\n> </para>\n\nby removing the word \"block\":\n\n> Any SQL statement outside of a transaction automatically uses\n> a single-statement transaction.\n\nand perhaps add \"explicit\", but I don't think it's necessary:\n\n> Any SQL statement outside of an explicit transaction automatically\n> uses a single-statement transaction.\n\n\n(I also changed \"An\" to \"Any\" because it seems more natural, but I\nsuppose it's a stylistic choice.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 21 Nov 2022 11:35:09 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 11:15:36AM +0100, Laurenz Albe wrote:\n> > --- a/doc/src/sgml/ref/release_savepoint.sgml\n> > +++ b/doc/src/sgml/ref/release_savepoint.sgml\n> \n> > + <command>RELEASE SAVEPOINT</command> releases the named savepoint and\n> > + all active savepoints that were created after the named savepoint,\n> > + and frees their resources. All changes made since the creation of the\n> > + savepoint, excluding rolled back savepoints changes, are merged into\n> > + the transaction or savepoint that was active when the named savepoint\n> > + was created. Changes made after <command>RELEASE SAVEPOINT</command>\n> > + will also be part of this active transaction or savepoint.\n> \n> I am not sure if \"rolled back savepoints changes\" is clear enough.\n> I understand that you are trying to avoid the term \"subtransaction\".\n> How about:\n> \n> All changes made since the creation of the savepoint that didn't already\n> get rolled back are merged ...\n\nYes, I like that, changed.\n\n> > --- a/doc/src/sgml/ref/rollback.sgml\n> > +++ b/doc/src/sgml/ref/rollback.sgml\n> >\n> > + If <literal>AND CHAIN</literal> is specified, a new (unaborted)\n> \n> *Sigh* I'll make one last plea for \"not aborted\".\n\nUh, I thought you wanted \"unaborted\", but I now changed it to \"not\naborted\".\n\n> > --- /dev/null\n> > +++ b/doc/src/sgml/xact.sgml\n> \n> > + <para>\n> > + Transactions can be created explicitly using <command>BEGIN</command>\n> > + and <command>COMMIT</command>, which creates a transaction block.\n> > + An SQL statement outside of a transaction block automatically uses\n> > + a single-statement transaction.\n> > + </para>\n> \n> Sorry, I should have noted that the first time around.\n> \n> Transactions are not created using COMMIT.\n> \n> Perhaps we should also avoid the term \"transaction block\". Even without speaking\n> of a \"block\", way too many people confuse PL/pgSQL's BEGIN ... END blocks\n> with transactions. 
On the other hand, we use \"transaction block\" everywhere\n> else in the documentation...\n> \n> How about:\n> \n> <para>\n> Multi-statement transactions can be created explicitly using\n> <command>BEGIN</command> or <command>START TRANSACTION</command> and\n> are ended using <command>COMMIT</command> or <command>ROLLBACK</command>.\n> An SQL statement outside of a transaction block automatically uses\n> a single-statement transaction.\n> </para>\n\nI used your wording, but technically you can use BEGIN/COMMIT with a\nsingle statement, so multi-statement it not a requirement, so I used\nyour text but removed \"Multi-statement\":\n\n\tTransactions can be created explicitly using <command>BEGIN</command> or\n\t<command>START TRANSACTION</command> and ended using\n\t<command>COMMIT</command> or <command>ROLLBACK</command>.\n\n> > + <sect1 id=\"xact-locking\">\n> > +\n> > + <title>Transactions and Locking</title>\n> > +\n> > + <para>\n> > + The transaction IDs of currently executing transactions are shown in\n> > + <link linkend=\"view-pg-locks\"><structname>pg_locks</structname></link>\n> > + in columns <structfield>virtualxid</structfield> and\n> > + <structfield>transactionid</structfield>. Read-only transactions\n> > + will have <structfield>virtualxid</structfield>s but NULL\n> > + <structfield>transactionid</structfield>s, while read-write transactions\n> > + will have both as non-NULL.\n> > + </para>\n> \n> Perhaps the following will be prettier than \"have both as non-NULL\":\n> \n> ..., while both columns will be set in read-write transactions.\n\nAgreed, changed. Updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Tue, 22 Nov 2022 13:00:01 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 11:35:09AM +0100, Álvaro Herrera wrote:\n> Agreed on not using \"unaborted\", per previous discussion.\n> \n> On 2022-Nov-21, Laurenz Albe wrote:\n> \n> > Perhaps we should also avoid the term \"transaction block\". Even without speaking\n> > of a \"block\", way too many people confuse PL/pgSQL's BEGIN ... END blocks\n> > with transactions. On the other hand, we use \"transaction block\" everywhere\n> > else in the documentation...\n> \n> Yeah, I don't understand why we need this \"transaction block\" term at\n> all. It adds nothing. We could just use the term \"transaction\", and\n> little meaning would be lost. When necessary, we could just say\n> \"explicit transaction\" or something to that effect. In this particular\n> case, we could modify your proposed wording,\n\nYes, I just posted that same thing:\n\n\tTransactions can be created explicitly using\n\n> > <para>\n> > Multi-statement transactions can be created explicitly using\n> > <command>BEGIN</command> or <command>START TRANSACTION</command> and\n> > are ended using <command>COMMIT</command> or <command>ROLLBACK</command>.\n> > An SQL statement outside of a transaction block automatically uses\n> > a single-statement transaction.\n> > </para>\n> \n> by removing the word \"block\":\n> \n> > Any SQL statement outside of an transaction automatically uses\n> > a single-statement transaction.\n> \n> and perhaps add \"explicit\", but I don't think it's necessary:\n> \n> > Any SQL statement outside of an explicit transaction automatically\n> > uses a single-statement transaction.\n\nFull paragraph is now:\n\n Transactions can be created explicitly using <command>BEGIN</command>\n or <command>START TRANSACTION</command> and ended using\n <command>COMMIT</command> or <command>ROLLBACK</command>. 
An SQL\n statement outside of an explicit transaction automatically uses a\n single-statement transaction.\n\n> (I also changed \"An\" to \"Any\" because it seems more natural, but I\n> suppose it's a stylistic choice.)\n\nI think we have a plurality mismatch so I went with \"SQL statements\" and\ndidn't need \"an\" or \"any\" (even newer paragraph version):\n\n Transactions can be created explicitly using <command>BEGIN</command>\n or <command>START TRANSACTION</command> and ended using\n <command>COMMIT</command> or <command>ROLLBACK</command>. SQL\n statements outside of explicit transactions automatically use\n single-statement transactions.\n\nUpdated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Tue, 22 Nov 2022 13:05:56 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "Op 22-11-2022 om 19:00 schreef Bruce Momjian:\n> On Mon, Nov 21, 2022 at 11:15:36AM +0100, Laurenz Albe wrote:\n>> ..., while both columns will be set in read-write transactions.\n> \n> Agreed, changed. Updated patch attached.\n\nIn func.sgml:\n\n'Only top-level transaction ID are' should be\n'Only top-level transaction IDs are'\n\n'subtransaction ID are' should be\n'subtransaction IDs are'\n\nIn xact.sgml:\n\n'Non-virtual <literal>TransactionId</literal> (or <type>xid</type>)' \nshould be\n'Non-virtual <literal>TransactionId</literal>s (or <type>xid</type>s)'\n\n\n\nErik Rijkers\n\n\n\n\n",
"msg_date": "Tue, 22 Nov 2022 19:47:26 +0100",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Tue, Nov 22, 2022 at 07:47:26PM +0100, Erik Rijkers wrote:\n> Op 22-11-2022 om 19:00 schreef Bruce Momjian:\n> > On Mon, Nov 21, 2022 at 11:15:36AM +0100, Laurenz Albe wrote:\n> > > ..., while both columns will be set in read-write transactions.\n> > \n> > Agreed, changed. Updated patch attached.\n> \n> In func.sgml:\n> \n> 'Only top-level transaction ID are' should be\n> 'Only top-level transaction IDs are'\n> \n> 'subtransaction ID are' should be\n> 'subtransaction IDs are'\n> \n> In xact.sgml:\n> \n> 'Non-virtual <literal>TransactionId</literal> (or <type>xid</type>)' should\n> be\n> 'Non-virtual <literal>TransactionId</literal>s (or <type>xid</type>s)'\n\nAgreed, updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Tue, 22 Nov 2022 13:50:36 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Tue, 2022-11-22 at 13:50 -0500, Bruce Momjian wrote:\n> Agreed, updated patch attached.\n\nI cannot find any more problems, and I shouldn't mention the extra empty\nline at the end of the patch.\n\nI'd change the commitfest status to \"ready for committer\" now if it were\nnot already in that status.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 23 Nov 2022 08:57:33 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Tue, Nov 22, 2022 at 01:50:36PM -0500, Bruce Momjian wrote:\n> +\n> + <para>\n> + A more complex example with multiple nested subtransactions:\n> +<programlisting>\n> +BEGIN;\n> + INSERT INTO table1 VALUES (1);\n> + SAVEPOINT sp1;\n> + INSERT INTO table1 VALUES (2);\n> + SAVEPOINT sp2;\n> + INSERT INTO table1 VALUES (3);\n> + RELEASE SAVEPOINT sp2;\n> + INSERT INTO table1 VALUES (4))); -- generates an error\n> +</programlisting>\n> + In this example, the application requests the release of the savepoint\n> + <literal>sp2</literal>, which inserted 3. This changes the insert's\n> + transaction context to <literal>sp1</literal>. When the statement\n> + attempting to insert value 4 generates an error, the insertion of 2 and\n> + 4 are lost because they are in the same, now-rolled back savepoint,\n> + and value 3 is in the same transaction context. The application can\n> + now only choose one of these two commands, since all other commands\n> + will be ignored with a warning:\n> +<programlisting>\n> + ROLLBACK;\n> + ROLLBACK TO SAVEPOINT sp1;\n> +</programlisting>\n> + Choosing <command>ROLLBACK</command> will abort everything, including\n> + value 1, whereas <command>ROLLBACK TO SAVEPOINT sp1</command> will retain\n> + value 1 and allow the transaction to continue.\n> + </para>\n\nThis mentions a warning, but what happens is actually an error:\n\npostgres=!# select;\nERROR: current transaction is aborted, commands ignored until end of transaction block\n\n\n",
"msg_date": "Wed, 23 Nov 2022 02:17:19 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 08:57:33AM +0100, Laurenz Albe wrote:\n> On Tue, 2022-11-22 at 13:50 -0500, Bruce Momjian wrote:\n> > Agreed, updated patch attached.\n> \n> I cannot find any more problems, and I shouldn't mention the extra empty\n> line at the end of the patch.\n\nFixed. ;-)\n\n> I'd change the commitfest status to \"ready for committer\" now if it were\n> not already in that status.\n\nI knew we would eventually get here. The feedback has been very helpful\nand I am excited about the content.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n",
"msg_date": "Wed, 23 Nov 2022 09:18:32 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 02:17:19AM -0600, Justin Pryzby wrote:\n> On Tue, Nov 22, 2022 at 01:50:36PM -0500, Bruce Momjian wrote:\n> > +\n> > + <para>\n> > + A more complex example with multiple nested subtransactions:\n> > +<programlisting>\n> > +BEGIN;\n> > + INSERT INTO table1 VALUES (1);\n> > + SAVEPOINT sp1;\n> > + INSERT INTO table1 VALUES (2);\n> > + SAVEPOINT sp2;\n> > + INSERT INTO table1 VALUES (3);\n> > + RELEASE SAVEPOINT sp2;\n> > + INSERT INTO table1 VALUES (4))); -- generates an error\n> > +</programlisting>\n> > + In this example, the application requests the release of the savepoint\n> > + <literal>sp2</literal>, which inserted 3. This changes the insert's\n> > + transaction context to <literal>sp1</literal>. When the statement\n> > + attempting to insert value 4 generates an error, the insertion of 2 and\n> > + 4 are lost because they are in the same, now-rolled back savepoint,\n> > + and value 3 is in the same transaction context. The application can\n> > + now only choose one of these two commands, since all other commands\n> > + will be ignored with a warning:\n> > +<programlisting>\n> > + ROLLBACK;\n> > + ROLLBACK TO SAVEPOINT sp1;\n> > +</programlisting>\n> > + Choosing <command>ROLLBACK</command> will abort everything, including\n> > + value 1, whereas <command>ROLLBACK TO SAVEPOINT sp1</command> will retain\n> > + value 1 and allow the transaction to continue.\n> > + </para>\n> \n> This mentions a warning, but what happens is actually an error:\n> \n> postgres=!# select;\n> ERROR: current transaction is aborted, commands ignored until end of transaction block\n\nGood point, new text:\n\n\t The application can now only choose one of these two commands,\n\t since all other commands will be ignored:\n\nUpdated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Wed, 23 Nov 2022 09:53:15 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 02:11:50PM -0500, Bruce Momjian wrote:\n> On Mon, Nov 7, 2022 at 11:04:46PM +0100, Laurenz Albe wrote:\n> > On Sat, 2022-11-05 at 10:08 +0000, Simon Riggs wrote:\n> > > Agreed; new compilation patch attached, including mine and then\n> > > Robert's suggested rewordings.\n> > \n> > Thanks. There is clearly a lot of usefule information in this.\n> \n> Sorry again for the long delay in replying to this.\n\nPatch applied back to PG 11. Thanks to Simon for getting this important\ninformation in our docs, and for the valuable feedback from others that\nmade this even better.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Tue, 29 Nov 2022 20:51:31 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On 30.11.22 02:51, Bruce Momjian wrote:\n> On Fri, Nov 18, 2022 at 02:11:50PM -0500, Bruce Momjian wrote:\n>> On Mon, Nov 7, 2022 at 11:04:46PM +0100, Laurenz Albe wrote:\n>>> On Sat, 2022-11-05 at 10:08 +0000, Simon Riggs wrote:\n>>>> Agreed; new compilation patch attached, including mine and then\n>>>> Robert's suggested rewordings.\n>>>\n>>> Thanks. There is clearly a lot of usefule information in this.\n>>\n>> Sorry again for the long delay in replying to this.\n> \n> Patch applied back to PG 11. Thanks to Simon for getting this important\n> information in our docs, and for the valuable feedback from others that\n> made this even better.\n\nI request that the backpatching of this be reverted. We don't want to \nhave major chapter numbers in the documentation changing between minor \nreleases.\n\nMore generally, major documentation additions shouldn't be backpatched, \nfor the same reasons we don't backpatch features.\n\n\n\n",
"msg_date": "Wed, 30 Nov 2022 07:33:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, 30 Nov 2022 at 01:51, Bruce Momjian <bruce@momjian.us> wrote:\n\n> Thanks to Simon for getting this important\n> information in our docs, and for the valuable feedback from others that\n> made this even better.\n\nAnd thanks to you for pulling that all together Bruce.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 30 Nov 2022 06:59:25 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 07:33:44AM +0100, Peter Eisentraut wrote:\n> On 30.11.22 02:51, Bruce Momjian wrote:\n> > Patch applied back to PG 11. Thanks to Simon for getting this important\n> > information in our docs, and for the valuable feedback from others that\n> > made this even better.\n> \n> I request that the backpatching of this be reverted. We don't want to have\n> major chapter numbers in the documentation changing between minor releases.\n\nUh, how do others feel about this? I wanted to get this information to\nall our supported releases as soon as possible, rather than waiting for\nPG 16 and for people to upgrade. To me it seems the chapter renumbering\nis worth that benefit.\n\nI could put the new chapter inside an existing numbered section in the\nback branches if people prefer that, but then PG 16 would have it in a\ndifferent place, which seems bad.\n\n> More generally, major documentation additions shouldn't be backpatched, for\n> the same reasons we don't backpatch features.\n\nI don't see how documentation additions and feature additions are\ncomparable. Feature additions change the behavior of the system, and\npotentially introduce application breakage and bugs, while documentation\nadditions, if they apply to the supported version, have no such risks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Wed, 30 Nov 2022 08:52:23 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 6:52 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Nov 30, 2022 at 07:33:44AM +0100, Peter Eisentraut wrote:\n> > On 30.11.22 02:51, Bruce Momjian wrote:\n> > > Patch applied back to PG 11. Thanks to Simon for getting this\n> important\n> > > information in our docs, and for the valuable feedback from others that\n> > > made this even better.\n> >\n> > I request that the backpatching of this be reverted. We don't want to\n> have\n> > major chapter numbers in the documentation changing between minor\n> releases.\n>\n> Uh, how do others feel about this? I wanted to get this information to\n> all our supported releases as soon as possible, rather than waiting for\n> PG 16 and for people to upgrade. To me it seems the chapter renumbering\n> is worth that benefit.\n>\n\nI'd maybe accept having it back-patched to v15 on that basis but not any\nfurther.\n\nBut I agree that our general behavior is to only apply this scope of update\nof the documentation to HEAD.\n\nDavid J.\n\nOn Wed, Nov 30, 2022 at 6:52 AM Bruce Momjian <bruce@momjian.us> wrote:On Wed, Nov 30, 2022 at 07:33:44AM +0100, Peter Eisentraut wrote:\n> On 30.11.22 02:51, Bruce Momjian wrote:\n> > Patch applied back to PG 11. Thanks to Simon for getting this important\n> > information in our docs, and for the valuable feedback from others that\n> > made this even better.\n> \n> I request that the backpatching of this be reverted. We don't want to have\n> major chapter numbers in the documentation changing between minor releases.\n\nUh, how do others feel about this? I wanted to get this information to\nall our supported releases as soon as possible, rather than waiting for\nPG 16 and for people to upgrade. To me it seems the chapter renumbering\nis worth that benefit.I'd maybe accept having it back-patched to v15 on that basis but not any further.But I agree that our general behavior is to only apply this scope of update of the documentation to HEAD.David J.",
"msg_date": "Wed, 30 Nov 2022 07:10:35 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 07:10:35AM -0700, David G. Johnston wrote:\n> On Wed, Nov 30, 2022 at 6:52 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I'd maybe accept having it back-patched to v15 on that basis but not any\n> further.\n> \n> But I agree that our general behavior is to only apply this scope of update of\n> the documentation to HEAD.\n\nIf everyone agrees this new chapter is helpful, and as helpful to PG 11\nusers as PG 16 users, why would we not give users this information in\nour docs now? What is the downside? Chapter numbers? Translations?\n\nI assume this new chapter would be mentioned in the minor release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Wed, 30 Nov 2022 10:02:54 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 8:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Nov 30, 2022 at 07:10:35AM -0700, David G. Johnston wrote:\n> > On Wed, Nov 30, 2022 at 6:52 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > I'd maybe accept having it back-patched to v15 on that basis but not any\n> > further.\n> >\n> > But I agree that our general behavior is to only apply this scope of\n> update of\n> > the documentation to HEAD.\n>\n> If everyone agrees this new chapter is helpful, and as helpful to PG 11\n> users as PG 16 users, why would we not give users this information in\n> our docs now? What is the downside? Chapter numbers? Translations?\n>\n\nAdmittedly the policy is more about \"we don't expend any effort to write\nback branch patches for this kind of material\" rather than \"we don't do\nback patching because it causes problems\". But it is an existing policy,\napplied consistently through the years, and treating the documentation like\na book, even though it is published in a non-physical medium, is a\nreasonable guideline to follow.\n\nMy desire to get it out in an official release early goes against this\npolicy, and I'm fine waiting for v16 on that basis. The only reason I'm\ngood with updating v15 is that I basically consider anything in the first 3\npoint releases of a major version to be a \"delta\" release.\n\nOne back-patchable idea to consider would be adding a note at the top of\nthe page(s) highlighting the fact that said material has been superseded by\nmore current documentation, with a link. But the idea of changing\nlong-released (see my delta comment above) material doesn't sit well with\nme or the policy.\n\n\n> I assume this new chapter would be mentioned in the minor release notes.\n>\n>\nWe don't do release notes for documentation changes.\n\nDavid J.\n\nOn Wed, Nov 30, 2022 at 8:02 AM Bruce Momjian <bruce@momjian.us> wrote:On Wed, Nov 30, 2022 at 07:10:35AM -0700, David G. 
Johnston wrote:\n> On Wed, Nov 30, 2022 at 6:52 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I'd maybe accept having it back-patched to v15 on that basis but not any\n> further.\n> \n> But I agree that our general behavior is to only apply this scope of update of\n> the documentation to HEAD.\n\nIf everyone agrees this new chapter is helpful, and as helpful to PG 11\nusers as PG 16 users, why would we not give users this information in\nour docs now? What is the downside? Chapter numbers? Translations?Admittedly the policy is more about \"we don't expend any effort to write back branch patches for this kind of material\" rather than \"we don't do back patching because it causes problems\". But it is an existing policy, applied consistently through the years, and treating the documentation like a book, even though it is published in a non-physical medium, is a reasonable guideline to follow.My desire to get it out in an official release early goes against this policy, and I'm fine waiting for v16 on that basis. The only reason I'm good with updating v15 is that I basically consider anything in the first 3 point releases of a major version to be a \"delta\" release.One back-patchable idea to consider would be adding a note at the top of the page(s) highlighting the fact that said material has been superseded by more current documentation, with a link. But the idea of changing long-released (see my delta comment above) material doesn't sit well with me or the policy.\n\nI assume this new chapter would be mentioned in the minor release notes.We don't do release notes for documentation changes.David J.",
"msg_date": "Wed, 30 Nov 2022 08:25:19 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 08:25:19AM -0700, David G. Johnston wrote:\n> On Wed, Nov 30, 2022 at 8:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Wed, Nov 30, 2022 at 07:10:35AM -0700, David G. Johnston wrote:\n> If everyone agrees this new chapter is helpful, and as helpful to PG 11\n> users as PG 16 users, why would we not give users this information in\n> our docs now? What is the downside? Chapter numbers? Translations?\n> \n> Admittedly the policy is more about \"we don't expend any effort to write back\n> branch patches for this kind of material\" rather than \"we don't do back\n> patching because it causes problems\". But it is an existing policy, applied\n> consistently through the years, and treating the documentation like a book,\n> even though it is published in a non-physical medium, is a reasonable guideline\n> to follow.\n\nWell, we have not backpatched cosmetic or wording improvements into back\nbranches, usually, though unclear wording has been backpatched because\nthe value is more significant than the disruption. I think we look at\ndoc backpatching on an individual basis, because there are limited\nrisks, unlike code changes.\n\n> My desire to get it out in an official release early goes against this policy,\n> and I'm fine waiting for v16 on that basis. The only reason I'm good with\n> updating v15 is that I basically consider anything in the first 3 point\n> releases of a major version to be a \"delta\" release.\n\nWell, yeah, I think the PG 15-16 is kind of an odd approach, though I\ncan see the value of doing that since we could say anyone who cares\nabout these details should be on the most recent major release. 
I think\nyou are reinforcing my basic approach that doc changes can't have a\nsimple blanket policy, unlike code, because of the limited risks and\nsignifican value.\n\n> One back-patchable idea to consider would be adding a note at the top of the\n> page(s) highlighting the fact that said material has been superseded by more\n> current documentation, with a link. But the idea of changing long-released\n> (see my delta comment above) material doesn't sit well with me or the policy.\n\nUh, I don't share that concern, as long as it is mentioned in the minor\nrelease notes.\n\n> I assume this new chapter would be mentioned in the minor release notes.\n> \n> We don't do release notes for documentation changes.\n\nUh, I certainly have done it for significant doc improvements in major\nreleases, so I don't see a problem in doing it for minor releases,\nespecially since this information has been needed in our docs for years.\n\nWhat I am basically saying is that \"we have always done it that way\" is\nan insufficient reason for me --- we should have some logic for _why_ we\nhave a policy, and I am not seeing that here.\n\nThis is obviously a bigger issue than this particular patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Wed, 30 Nov 2022 10:41:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On 2022-Nov-30, Bruce Momjian wrote:\n\n> On Wed, Nov 30, 2022 at 07:33:44AM +0100, Peter Eisentraut wrote:\n> > On 30.11.22 02:51, Bruce Momjian wrote:\n> > > Patch applied back to PG 11. Thanks to Simon for getting this important\n> > > information in our docs, and for the valuable feedback from others that\n> > > made this even better.\n> > \n> > I request that the backpatching of this be reverted. We don't want to have\n> > major chapter numbers in the documentation changing between minor releases.\n> \n> Uh, how do others feel about this? I wanted to get this information to\n> all our supported releases as soon as possible, rather than waiting for\n> PG 16 and for people to upgrade.\n\nI find it a bit shocking to have had it backpatched, even to 15 -- a\nwhole chapter in the documentation? I don't see why it wouldn't be\ntreated like any other \"major feature\" patch, which we only consider for\nthe development branch. Also, this is a first cut -- presumably we'll\nwant to copy-edit it before it becomes released material.\n\nNow, keep in mind that not having it backpatched does not mean that it\nis invisible to users. It is definitely visible, if they use the doc\nURL with /devel/ in it. And this information has been missing for 20+\nyears, how come it is so urgent to have it everywhere now?\n\nI agree that it should be reverted from all branches other than master.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 30 Nov 2022 18:20:22 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I find it a bit shocking to have had it backpatched, even to 15 -- a\n> whole chapter in the documentation? I don't see why it wouldn't be\n> treated like any other \"major feature\" patch, which we only consider for\n> the development branch. Also, this is a first cut -- presumably we'll\n> want to copy-edit it before it becomes released material.\n\nI think that last point is fairly convincing. I've not read the\nnew material, but I didn't get further than the first line of\nthe new chapter file before noting a copy-and-paste error:\n\n--- /dev/null\n+++ b/doc/src/sgml/xact.sgml\n@@ -0,0 +1,205 @@\n+<!-- doc/src/sgml/mvcc.sgml -->\n\nThat doesn't leave me with a warm feeling that it's ready to ship.\nI too vote for reverting it out of the released branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Nov 2022 12:31:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 12:31:55PM -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > I find it a bit shocking to have had it backpatched, even to 15 -- a\n> > whole chapter in the documentation? I don't see why it wouldn't be\n> > treated like any other \"major feature\" patch, which we only consider for\n> > the development branch. Also, this is a first cut -- presumably we'll\n> > want to copy-edit it before it becomes released material.\n> \n> I think that last point is fairly convincing. I've not read the\n> new material, but I didn't get further than the first line of\n> the new chapter file before noting a copy-and-paste error:\n> \n> --- /dev/null\n> +++ b/doc/src/sgml/xact.sgml\n> @@ -0,0 +1,205 @@\n> +<!-- doc/src/sgml/mvcc.sgml -->\n\nFixed in master.\n\n> That doesn't leave me with a warm feeling that it's ready to ship.\n> I too vote for reverting it out of the released branches.\n\nPatch reverted in all back branches. I was hoping to get support for\nmore aggressive backpatches of docs, but obviously failed. I should\nhave been clearer about my intent to backpatch, and will have to\nconsider these issues in future doc backpatches.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Thu, 1 Dec 2022 10:48:04 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
},
{
"msg_contents": "On 2022-Dec-01, Bruce Momjian wrote:\n\n> Patch reverted in all back branches. I was hoping to get support for\n> more aggressive backpatches of docs, but obviously failed. I should\n> have been clearer about my intent to backpatch, and will have to\n> consider these issues in future doc backpatches.\n\nFWIW I am in favor of backpatching doc fixes (even if they're not\ncompletely trivial, such as 02d43ad6262d) and smallish additions of\ncontent (63a370938). But we've added a few terms to the glossary\n(606c38459, 3dddb2a82) and those weren't backpatched, which seems\nappropriate to me.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The problem with the facetime model is not just that it's demoralizing, but\nthat the people pretending to work interrupt the ones actually working.\"\n (Paul Graham)\n\n\n",
"msg_date": "Thu, 1 Dec 2022 19:21:42 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: New docs chapter on Transaction Management and related changes"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm not sure what is causing this, but I have seen this twice. The\nsecond time without activity after changing the set of tables in a\nPUBLICATION.\n\ngdb says that debug_query_string contains:\n\n\"\"\"\nSTART_REPLICATION SLOT \"sub_pgbench\" LOGICAL 0/0 (proto_version '3', publication_names '\"pub_pgbench\"')START_REPLICATION SLOT \"sub_pgbench\" LOGICAL 0/0 (proto_version '3', publication_names '\"pub_pgbench\"')\n\"\"\"\n\nattached the backtrace.\n\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL",
"msg_date": "Tue, 6 Sep 2022 18:40:49 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-06 18:40:49 -0500, Jaime Casanova wrote:\n> I'm not sure what is causing this, but I have seen this twice. The\n> second time without activity after changing the set of tables in a\n> PUBLICATION.\n\nCan you describe the steps to reproduce?\n\nWhich git commit does this happen on?\n\n\n> gdb says that debug_query_string contains:\n> \n> \"\"\"\n> START_REPLICATION SLOT \"sub_pgbench\" LOGICAL 0/0 (proto_version '3', publication_names '\"pub_pgbench\"')START_REPLICATION SLOT \"sub_pgbench\" LOGICAL 0/0 (proto_version '3', publication_names '\"pub_pgbench\"')\n> \"\"\"\n> \n> attached the backtrace.\n> \n\n> #2 0x00005559bfd4f0ed in ExceptionalCondition (\n> conditionName=0x5559bff30e20 \"namestrcmp(&statent->slotname, NameStr(slot->data.name)) == 0\", errorType=0x5559bff30e0d \"FailedAssertion\", fileName=0x5559bff30dbb \"pgstat_replslot.c\", \n> lineNumber=89) at assert.c:69\n\nwhat are statent->slotname and slot->data.name?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Sep 2022 12:39:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 12:39:08PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-09-06 18:40:49 -0500, Jaime Casanova wrote:\n> > I'm not sure what is causing this, but I have seen this twice. The\n> > second time without activity after changing the set of tables in a\n> > PUBLICATION.\n> \n> Can you describe the steps to reproduce?\n> \n\nI'm still trying to determine that\n\n> Which git commit does this happen on?\n> \n\n6e55ea79faa56db85a2b6c5bf94cee8acf8bfdb8 (Stamp 15beta4) \n\n> \n> > gdb says that debug_query_string contains:\n> > \n> > \"\"\"\n> > START_REPLICATION SLOT \"sub_pgbench\" LOGICAL 0/0 (proto_version '3', publication_names '\"pub_pgbench\"')START_REPLICATION SLOT \"sub_pgbench\" LOGICAL 0/0 (proto_version '3', publication_names '\"pub_pgbench\"')\n> > \"\"\"\n> > \n> > attached the backtrace.\n> > \n> \n> > #2 0x00005559bfd4f0ed in ExceptionalCondition (\n> > conditionName=0x5559bff30e20 \"namestrcmp(&statent->slotname, NameStr(slot->data.name)) == 0\", errorType=0x5559bff30e0d \"FailedAssertion\", fileName=0x5559bff30dbb \"pgstat_replslot.c\", \n> > lineNumber=89) at assert.c:69\n> \n> what are statent->slotname and slot->data.name?\n> \n\nslot->data.name seems to be the replication_slot record, and\nstatent->slotname comes from the in shared memory stats for that slot.\n\nAnd the assert happens when &statent->slotname.data comes empty, which \nis not frequent but it happens from time to time\n\nbtw, while I'm looking at this I found that we can drop a publication\nwhile there are active subscriptions pointing to it, is that something\nwe should allow?\nanyway, that is not the cause of this because the replication slot actually\nexists.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Fri, 9 Sep 2022 12:51:45 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 12:39:08PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-09-06 18:40:49 -0500, Jaime Casanova wrote:\n> > I'm not sure what is causing this, but I have seen this twice. The\n> > second time without activity after changing the set of tables in a\n> > PUBLICATION.\n\nThis crash happens after a reset of statistics for a slot replication\n\n> Can you describe the steps to reproduce?\n> \n\nbin/pg_ctl -D data1 initdb\nbin/pg_ctl -D data1 -l logfile1 -o \"-c port=54315 -c wal_level=logical\" start\nbin/psql -p 54315 postgres <<EOF\n\tcreate table t1 (i int primary key);\n\tcreate publication pub1 for table t1;\nEOF\n\nbin/pg_ctl -D data2 initdb\nbin/pg_ctl -D data2 -l logfile2 -o \"-c port=54316\" start\nbin/psql -p 54316 postgres <<EOF\n\tcreate table t1 (i int primary key);\n\tcreate subscription sub1 connection 'host=/tmp port=54315 dbname=postgres' publication pub1;\nEOF\n\nbin/psql -p 54315 postgres <<EOF\n\tselect pg_stat_reset_replication_slot('sub1');\n\tinsert into t1 values(1);\nEOF\n\n\n\n> Which git commit does this happen on?\n> \n\njust tested again on f5047c1293acce3c6c3802b06825aa3a9f9aa55a\n\n> \n> > gdb says that debug_query_string contains:\n> > \n> > \"\"\"\n> > START_REPLICATION SLOT \"sub_pgbench\" LOGICAL 0/0 (proto_version '3', publication_names '\"pub_pgbench\"')START_REPLICATION SLOT \"sub_pgbench\" LOGICAL 0/0 (proto_version '3', publication_names '\"pub_pgbench\"')\n> > \"\"\"\n> > \n> > attached the backtrace.\n> > \n> \n> > #2 0x00005559bfd4f0ed in ExceptionalCondition (\n> > conditionName=0x5559bff30e20 \"namestrcmp(&statent->slotname, NameStr(slot->data.name)) == 0\", errorType=0x5559bff30e0d \"FailedAssertion\", fileName=0x5559bff30dbb \"pgstat_replslot.c\", \n> > lineNumber=89) at assert.c:69\n> \n> what are statent->slotname and slot->data.name?\n> \n\nand the problem seems to be that after zero'ing the stats that includes\nthe name of the replication slot, this simple patch fixes it... 
not sure\nif it's the right fix though...\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL",
"msg_date": "Tue, 13 Sep 2022 00:39:45 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Nice finding.\n\nAt Tue, 13 Sep 2022 00:39:45 -0500, Jaime Casanova <jcasanov@systemguards.com.ec> wrote in \n> and the problem seems to be that after zero'ing the stats that includes\n> the name of the replication slot, this simple patch fixes it... not sure\n> if it's the right fix though...\n\nThat doesn't work. since what that function clears is not the name in\nthe slot struct but that in stats entry.\n\nThe cause is what pg_stat_reset_replslot wants to do does not match\nwhat pgstat feature thinks as reset.\n\nUnfortunately, we don't have a function to leave a portion alone after\na reset. However, fetching the stats entry in pgstat_reset_replslot is\nugly..\n\nI'm not sure this is less uglier but it works if\npgstat_report_replslot sets the name if it is found to be wiped\nout... But that hafly nullify the protction by the assertion just\nafter.\n\n\n--- a/src/backend/utils/activity/pgstat_replslot.c\n+++ b/src/backend/utils/activity/pgstat_replslot.c\n@@ -83,9 +83,11 @@ pgstat_report_replslot(ReplicationSlot *slot, const PgStat_StatReplSlotEntry *re\n \tstatent = &shstatent->stats;\n \n \t/*\n-\t * Any mismatch should have been fixed in pgstat_create_replslot() or\n-\t * pgstat_acquire_replslot().\n+\t * pgstat_create_replslot() and pgstat_acquire_replslot() enters the name,\n+\t * but pgstat_reset_replslot() may clear it.\n \t */\n+\tif (statent->slotname.data[0] == 0)\n+\t\tnamestrcpy(&shstatent->stats.slotname, NameStr(slot->data.name));\n \tAssert(namestrcmp(&statent->slotname, NameStr(slot->data.name)) == 0);\n\n\nAnother measure would be to add the region to wipe-out on reset to\nPgStat_KindInfo, but it seems too much.. (attached)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 13 Sep 2022 18:48:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 06:48:45PM +0900, Kyotaro Horiguchi wrote:\n> Nice finding.\n> \n> At Tue, 13 Sep 2022 00:39:45 -0500, Jaime Casanova <jcasanov@systemguards.com.ec> wrote in \n> > and the problem seems to be that after zero'ing the stats that includes\n> > the name of the replication slot, this simple patch fixes it... not sure\n> > if it's the right fix though...\n> \n> That doesn't work. since what that function clears is not the name in\n> the slot struct but that in stats entry.\n> \n\nyou're right... the curious thing is that I tested it and it worked, but\nnow it doesn't... maybe it was too late for me...\n\n> The cause is what pg_stat_reset_replslot wants to do does not match\n> what pgstat feature thinks as reset.\n> \n[...]\n> \n> Another measure would be to add the region to wipe-out on reset to\n> PgStat_KindInfo, but it seems too much.. (attached)\n> \n\nThis patch solves the problem, i didn't like the other solution because\nas you say it partly nullify the protection of the assertion.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Tue, 13 Sep 2022 22:07:50 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 10:07:50PM -0500, Jaime Casanova wrote:\n> On Tue, Sep 13, 2022 at 06:48:45PM +0900, Kyotaro Horiguchi wrote:\n> > \n> > Another measure would be to add the region to wipe-out on reset to\n> > PgStat_KindInfo, but it seems too much.. (attached)\n> > \n> \n> This patch solves the problem, i didn't like the other solution because\n> as you say it partly nullify the protection of the assertion.\n> \n\nI talked too fast, while it solves the immediate problem the patch as is\ncauses other crashes.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Thu, 15 Sep 2022 01:26:15 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "At Thu, 15 Sep 2022 01:26:15 -0500, Jaime Casanova <jcasanov@systemguards.com.ec> wrote in \n> On Tue, Sep 13, 2022 at 10:07:50PM -0500, Jaime Casanova wrote:\n> > On Tue, Sep 13, 2022 at 06:48:45PM +0900, Kyotaro Horiguchi wrote:\n> > > \n> > > Another measure would be to add the region to wipe-out on reset to\n> > > PgStat_KindInfo, but it seems too much.. (attached)\n> > > \n> > \n> > This patch solves the problem, i didn't like the other solution because\n> > as you say it partly nullify the protection of the assertion.\n> > \n> \n> I talked too fast, while it solves the immediate problem the patch as is\n> causes other crashes.\n\nWhere did the crash happen? Is it a bug introduced by it? Or does it\nroot to other cause?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Sep 2022 17:30:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Thu, Sep 15, 2022 at 05:30:11PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 15 Sep 2022 01:26:15 -0500, Jaime Casanova <jcasanov@systemguards.com.ec> wrote in \n> > On Tue, Sep 13, 2022 at 10:07:50PM -0500, Jaime Casanova wrote:\n> > > On Tue, Sep 13, 2022 at 06:48:45PM +0900, Kyotaro Horiguchi wrote:\n> > > > \n> > > > Another measure would be to add the region to wipe-out on reset to\n> > > > PgStat_KindInfo, but it seems too much.. (attached)\n> > > > \n> > > \n> > > This patch solves the problem, i didn't like the other solution because\n> > > as you say it partly nullify the protection of the assertion.\n> > > \n> > \n> > I talked too fast, while it solves the immediate problem the patch as is\n> > causes other crashes.\n> \n> Where did the crash happen? Is it a bug introduced by it? Or does it\n> root to other cause?\n> \n\nJust compile and run the installcheck tests.\n\nIt fails at ./src/backend/utils/activity/pgstat_shmem.c:530 inside\npgstat_release_entry_ref() because it expects a \"deadbeef\", it seems to\nbe a magic variable but cannot find what its use is.\n\nAttached a backtrace.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL",
"msg_date": "Thu, 15 Sep 2022 11:15:12 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "At Thu, 15 Sep 2022 11:15:12 -0500, Jaime Casanova <jcasanov@systemguards.com.ec> wrote in \n> It fails at ./src/backend/utils/activity/pgstat_shmem.c:530 inside\n\nThanks for the info. I reproduced by make check.. stupid..\n\nIt's the thinko about the base address of reset_off.\n\nSo the attached doesn't crash..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 16 Sep 2022 14:37:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 02:37:17PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 15 Sep 2022 11:15:12 -0500, Jaime Casanova <jcasanov@systemguards.com.ec> wrote in \n> > It fails at ./src/backend/utils/activity/pgstat_shmem.c:530 inside\n> \n> Thanks for the info. I reproduced by make check.. stupid..\n> \n> It's the thinko about the base address of reset_off.\n> \n> So the attached doesn't crash..\n> \n\nHi,\n\nJust confirming there have been no crash since this last patch.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Mon, 19 Sep 2022 11:04:03 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "At Mon, 19 Sep 2022 11:04:03 -0500, Jaime Casanova <jcasanov@systemguards.com.ec> wrote in \n> On Fri, Sep 16, 2022 at 02:37:17PM +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 15 Sep 2022 11:15:12 -0500, Jaime Casanova <jcasanov@systemguards.com.ec> wrote in \n> > > It fails at ./src/backend/utils/activity/pgstat_shmem.c:530 inside\n> > \n> > Thanks for the info. I reproduced by make check.. stupid..\n> > \n> > It's the thinko about the base address of reset_off.\n> > \n> > So the attached doesn't crash..\n> > \n> \n> Hi,\n> \n> Just confirming there have been no crash since this last patch.\n\nThanks for confirmation.\n\nAlthouh I'm not sure whether this is the right direction, this seems\nto be an open item of 15?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Sep 2022 11:44:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Hi,\n\nI wonder if the correct fix here wouldn't be to move the slotname out of\nPgStat_StatReplSlotEntry?\n\n\nOn 2022-09-16 14:37:17 +0900, Kyotaro Horiguchi wrote:\n> diff --git a/src/backend/utils/activity/pgstat.c b/src/backend/utils/activity/pgstat.c\n> index 6224c498c2..ed3f3af4d9 100644\n> --- a/src/backend/utils/activity/pgstat.c\n> +++ b/src/backend/utils/activity/pgstat.c\n> @@ -263,6 +263,8 @@ static const PgStat_KindInfo pgstat_kind_infos[PGSTAT_NUM_KINDS] = {\n> \t\t.shared_size = sizeof(PgStatShared_Database),\n> \t\t.shared_data_off = offsetof(PgStatShared_Database, stats),\n> \t\t.shared_data_len = sizeof(((PgStatShared_Database *) 0)->stats),\n> +\t\t.reset_off = 0,\n> +\t\t.reset_len = sizeof(((PgStatShared_Database *) 0)->stats),\n> \t\t.pending_size = sizeof(PgStat_StatDBEntry),\n> \n> \t\t.flush_pending_cb = pgstat_database_flush_cb,\n> @@ -277,6 +279,8 @@ static const PgStat_KindInfo pgstat_kind_infos[PGSTAT_NUM_KINDS] = {\n> \t\t.shared_size = sizeof(PgStatShared_Relation),\n> \t\t.shared_data_off = offsetof(PgStatShared_Relation, stats),\n> \t\t.shared_data_len = sizeof(((PgStatShared_Relation *) 0)->stats),\n> +\t\t.reset_off = 0,\n> +\t\t.reset_len = sizeof(((PgStatShared_Relation *) 0)->stats),\n> \t\t.pending_size = sizeof(PgStat_TableStatus),\n> \n> \t\t.flush_pending_cb = pgstat_relation_flush_cb,\n> @@ -291,6 +295,8 @@ static const PgStat_KindInfo pgstat_kind_infos[PGSTAT_NUM_KINDS] = {\n> \t\t.shared_size = sizeof(PgStatShared_Function),\n> \t\t.shared_data_off = offsetof(PgStatShared_Function, stats),\n> \t\t.shared_data_len = sizeof(((PgStatShared_Function *) 0)->stats),\n> +\t\t.reset_off = 0,\n> +\t\t.reset_len = sizeof(((PgStatShared_Function *) 0)->stats),\n> \t\t.pending_size = sizeof(PgStat_BackendFunctionEntry),\n> \n> \t\t.flush_pending_cb = pgstat_function_flush_cb,\n> @@ -307,6 +313,10 @@ static const PgStat_KindInfo pgstat_kind_infos[PGSTAT_NUM_KINDS] = {\n> \t\t.shared_size = 
sizeof(PgStatShared_ReplSlot),\n> \t\t.shared_data_off = offsetof(PgStatShared_ReplSlot, stats),\n> \t\t.shared_data_len = sizeof(((PgStatShared_ReplSlot *) 0)->stats),\n> +\t\t/* reset doesn't wipe off slot name */\n> +\t\t.reset_off = offsetof(PgStat_StatReplSlotEntry, spill_txns),\n> +\t\t.reset_len = sizeof(((PgStatShared_ReplSlot *) 0)->stats),\n> +\t\toffsetof(PgStat_StatReplSlotEntry, spill_txns),\n\nI'm confused what this offsetof does here? It's not even assigned to a\nspecific field? Am I missing something?\n\nAlso, wouldn't we need to subtract something of the size?\n\n\n> diff --git a/src/backend/utils/activity/pgstat_shmem.c b/src/backend/utils/activity/pgstat_shmem.c\n> index ac98918688..09a8c3873c 100644\n> --- a/src/backend/utils/activity/pgstat_shmem.c\n> +++ b/src/backend/utils/activity/pgstat_shmem.c\n> @@ -915,8 +915,9 @@ shared_stat_reset_contents(PgStat_Kind kind, PgStatShared_Common *header,\n> {\n> \tconst PgStat_KindInfo *kind_info = pgstat_get_kind_info(kind);\n> \n> -\tmemset(pgstat_get_entry_data(kind, header), 0,\n> -\t\t pgstat_get_entry_len(kind));\n> +\tmemset((char *)pgstat_get_entry_data(kind, header) +\n> +\t\t kind_info->reset_off, 0,\n> +\t\t kind_info->reset_len);\n> \n> \tif (kind_info->reset_timestamp_cb)\n> \t\tkind_info->reset_timestamp_cb(header, ts);\n\nThis likely doesn't quite conform to what pgindent wants...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Sep 2022 19:53:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Thanks!\n\nAt Mon, 26 Sep 2022 19:53:02 -0700, Andres Freund <andres@anarazel.de> wrote in \n> I wonder if the correct fix here wouldn't be to move the slotname out of\n> PgStat_StatReplSlotEntry?\n\nUgh. Right. I thought its outer struct as purely the part for the\ncommon header. But we can freely place anything after the header\npart. I moved it to the outer struct. I didn't clear that part in\npgstat_create_relation() because it is filled in immediately.\n\nThe attached is that.\n\n> On 2022-09-16 14:37:17 +0900, Kyotaro Horiguchi wrote:\n...\n> > @@ -307,6 +313,10 @@ static const PgStat_KindInfo pgstat_kind_infos[PGSTAT_NUM_KINDS] = {\n> > \t\t.shared_size = sizeof(PgStatShared_ReplSlot),\n> > \t\t.shared_data_off = offsetof(PgStatShared_ReplSlot, stats),\n> > \t\t.shared_data_len = sizeof(((PgStatShared_ReplSlot *) 0)->stats),\n> > +\t\t/* reset doesn't wipe off slot name */\n> > +\t\t.reset_off = offsetof(PgStat_StatReplSlotEntry, spill_txns),\n> > +\t\t.reset_len = sizeof(((PgStatShared_ReplSlot *) 0)->stats),\n> > +\t\toffsetof(PgStat_StatReplSlotEntry, spill_txns),\n> \n> I'm confused what this offsetof does here? It's not even assigned to a\n> specific field? Am I missing something?\n> \n> Also, wouldn't we need to subtract something of the size?\n\nYeah, I felt it confusing. The last line above is offset from just\nafter the header part (it is PgStat_, not PgStatShared_). 
I first\nwrote that as you suggested but rewrote to shorter representation.\n\n> \n> > diff --git a/src/backend/utils/activity/pgstat_shmem.c b/src/backend/utils/activity/pgstat_shmem.c\n> > index ac98918688..09a8c3873c 100644\n> > --- a/src/backend/utils/activity/pgstat_shmem.c\n> > +++ b/src/backend/utils/activity/pgstat_shmem.c\n> > @@ -915,8 +915,9 @@ shared_stat_reset_contents(PgStat_Kind kind, PgStatShared_Common *header,\n> > {\n> > \tconst PgStat_KindInfo *kind_info = pgstat_get_kind_info(kind);\n> > \n> > -\tmemset(pgstat_get_entry_data(kind, header), 0,\n> > -\t\t pgstat_get_entry_len(kind));\n> > +\tmemset((char *)pgstat_get_entry_data(kind, header) +\n> > +\t\t kind_info->reset_off, 0,\n> > +\t\t kind_info->reset_len);\n> > \n> > \tif (kind_info->reset_timestamp_cb)\n> > \t\tkind_info->reset_timestamp_cb(header, ts);\n> \n> This likely doesn't quite conform to what pgindent wants...\n\nIn the first place, it's ugly...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 27 Sep 2022 14:52:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On 9/27/22 1:52 AM, Kyotaro Horiguchi wrote:\r\n> Thanks!\r\n> \r\n> At Mon, 26 Sep 2022 19:53:02 -0700, Andres Freund <andres@anarazel.de> wrote in\r\n>> I wonder if the correct fix here wouldn't be to move the slotname out of\r\n>> PgStat_StatReplSlotEntry?\r\n> \r\n> Ugh. Right. I thought its outer struct as purely the part for the\r\n> common header. But we can freely place anything after the header\r\n> part. I moved it to the outer struct. I didn't clear that part in\r\n> pgstat_create_relation() because it is filled in immediately.\r\n> \r\n> The attached is that.\r\n\r\nThis is still listed as an open item[1] for v15. Does this fix proposed \r\naddress the issue?\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items",
"msg_date": "Wed, 5 Oct 2022 13:00:53 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-05 13:00:53 -0400, Jonathan S. Katz wrote:\n> On 9/27/22 1:52 AM, Kyotaro Horiguchi wrote:\n> > Thanks!\n> > \n> > At Mon, 26 Sep 2022 19:53:02 -0700, Andres Freund <andres@anarazel.de> wrote in\n> > > I wonder if the correct fix here wouldn't be to move the slotname out of\n> > > PgStat_StatReplSlotEntry?\n> > \n> > Ugh. Right. I thought its outer struct as purely the part for the\n> > common header. But we can freely place anything after the header\n> > part. I moved it to the outer struct. I didn't clear that part in\n> > pgstat_create_relation() because it is filled in immediately.\n> > \n> > The attached is that.\n> \n> This is still listed as an open item[1] for v15. Does this fix proposed\n> address the issue?\n\nUnfortunately not - it doesn't even pass the existing tests\n(test_decoding/001_repl_stats fails) :(.\n\nThe reason for that is that with the patch nothing restores the slotname when\nreading stats from disk. That turns out not to cause immediate issues, but at\nthe next shutdown the name won't be set, and we'll serialize the stats data\nwith an empty string as the name.\n\nI have two ideas how to fix it. As a design constraint, I'd be interested in\nthe RMTs opinion on the following:\nIs a cleaner fix that changes the stats format (i.e. existing stats will be\nthrown away when upgrading) or one that doesn't change the stats format\npreferrable?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Oct 2022 17:44:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On 10/5/22 8:44 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2022-10-05 13:00:53 -0400, Jonathan S. Katz wrote:\r\n>> On 9/27/22 1:52 AM, Kyotaro Horiguchi wrote:\r\n>>> Thanks!\r\n>>>\r\n>>> At Mon, 26 Sep 2022 19:53:02 -0700, Andres Freund <andres@anarazel.de> wrote in\r\n>>>> I wonder if the correct fix here wouldn't be to move the slotname out of\r\n>>>> PgStat_StatReplSlotEntry?\r\n>>>\r\n>>> Ugh. Right. I thought its outer struct as purely the part for the\r\n>>> common header. But we can freely place anything after the header\r\n>>> part. I moved it to the outer struct. I didn't clear that part in\r\n>>> pgstat_create_relation() because it is filled in immediately.\r\n>>>\r\n>>> The attached is that.\r\n>>\r\n>> This is still listed as an open item[1] for v15. Does this fix proposed\r\n>> address the issue?\r\n> \r\n> Unfortunately not - it doesn't even pass the existing tests\r\n> (test_decoding/001_repl_stats fails) :(.\r\n\r\nThanks for checking.\r\n\r\n> The reason for that is that with the patch nothing restores the slotname when\r\n> reading stats from disk. That turns out not to cause immediate issues, but at\r\n> the next shutdown the name won't be set, and we'll serialize the stats data\r\n> with an empty string as the name.\r\n\r\nAh.\r\n\r\n> I have two ideas how to fix it. As a design constraint, I'd be interested in\r\n> the RMTs opinion on the following:\r\n> Is a cleaner fix that changes the stats format (i.e. existing stats will be\r\n> thrown away when upgrading) or one that doesn't change the stats format\r\n> preferrable?\r\n\r\n[My opinion, will let Michael/John chime in]\r\n\r\nIdeally we would keep the stats on upgrade -- I think that's the better \r\nuser experience.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 5 Oct 2022 23:24:57 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Wed, Oct 05, 2022 at 11:24:57PM -0400, Jonathan S. Katz wrote:\n> On 10/5/22 8:44 PM, Andres Freund wrote:\n>> I have two ideas how to fix it. As a design constraint, I'd be interested in\n>> the RMTs opinion on the following:\n>> Is a cleaner fix that changes the stats format (i.e. existing stats will be\n>> thrown away when upgrading) or one that doesn't change the stats format\n>> preferrable?\n> \n> [My opinion, will let Michael/John chime in]\n> \n> Ideally we would keep the stats on upgrade -- I think that's the better user\n> experience.\n\nThe release has not happened yet, so I would be fine to remain\nflexible and bump again PGSTAT_FILE_FORMAT_ID so as we have the\ncleanest approach in place for the release and the future. At the\nend, we would throw automatically the data of a file that's marked\nwith a version that does not match with what we expect at load time,\nso there's a limited impact on the user experience, except, well,\nlosing these stats, of course.\n--\nMichael",
"msg_date": "Thu, 6 Oct 2022 13:44:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "At Thu, 6 Oct 2022 13:44:43 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Oct 05, 2022 at 11:24:57PM -0400, Jonathan S. Katz wrote:\n> > On 10/5/22 8:44 PM, Andres Freund wrote:\n> >> I have two ideas how to fix it. As a design constraint, I'd be interested in\n> >> the RMTs opinion on the following:\n> >> Is a cleaner fix that changes the stats format (i.e. existing stats will be\n> >> thrown away when upgrading) or one that doesn't change the stats format\n> >> preferrable?\n> > \n> > [My opinion, will let Michael/John chime in]\n> > \n> > Ideally we would keep the stats on upgrade -- I think that's the better user\n> > experience.\n> \n> The release has not happened yet, so I would be fine to remain\n> flexible and bump again PGSTAT_FILE_FORMAT_ID so as we have the\n> cleanest approach in place for the release and the future. At the\n> end, we would throw automatically the data of a file that's marked\n> with a version that does not match with what we expect at load time,\n> so there's a limited impact on the user experience, except, well,\n> losing these stats, of course.\n\n+1. FWIW, the atttached is an example of what it looks like if we\navoid file format change.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 06 Oct 2022 14:10:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On 10/6/22 1:10 AM, Kyotaro Horiguchi wrote:\r\n> At Thu, 6 Oct 2022 13:44:43 +0900, Michael Paquier <michael@paquier.xyz> wrote in\r\n>> On Wed, Oct 05, 2022 at 11:24:57PM -0400, Jonathan S. Katz wrote:\r\n>>> On 10/5/22 8:44 PM, Andres Freund wrote:\r\n>>>> I have two ideas how to fix it. As a design constraint, I'd be interested in\r\n>>>> the RMTs opinion on the following:\r\n>>>> Is a cleaner fix that changes the stats format (i.e. existing stats will be\r\n>>>> thrown away when upgrading) or one that doesn't change the stats format\r\n>>>> preferrable?\r\n>>>\r\n>>> [My opinion, will let Michael/John chime in]\r\n>>>\r\n>>> Ideally we would keep the stats on upgrade -- I think that's the better user\r\n>>> experience.\r\n>>\r\n>> The release has not happened yet, so I would be fine to remain\r\n>> flexible and bump again PGSTAT_FILE_FORMAT_ID so as we have the\r\n>> cleanest approach in place for the release and the future.\r\n\r\nYes, I agree with this.\r\n\r\n> At the\r\n>> end, we would throw automatically the data of a file that's marked\r\n>> with a version that does not match with what we expect at load time,\r\n>> so there's a limited impact on the user experience, except, well,\r\n>> losing these stats, of course.\r\n\r\nI'm fine with this.\r\n\r\n> +1. FWIW, the atttached is an example of what it looks like if we\r\n> avoid file format change.\r\n\r\nThanks for the quick turnaround. I'll let others chime in on the review.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 6 Oct 2022 09:33:56 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-06 14:10:46 +0900, Kyotaro Horiguchi wrote:\n> +1. FWIW, the atttached is an example of what it looks like if we\n> avoid file format change.\n\nWhat about if we go the other direction - simply remove the name from the\nstats entry at all. I don't actually think we need it anymore. Unless I am\nmissing something right now - entirely possible! - the danger that\npgstat_acquire_replslot() mentions doesn't actually exist [anymore]. After a\ncrash we throw away the old stats data and if a slot is dropped while shut\ndown, we'll not load the slot data at startup.\n\nThe attached is a rough prototype, but should be enough for Jaime to test and\nHoriguchi to test / review / think about.\n\nAmit, I CCed you, since you've thought a bunch about the dangers in this area\ntoo.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 6 Oct 2022 15:59:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Fri, Oct 7, 2022 at 8:00 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-06 14:10:46 +0900, Kyotaro Horiguchi wrote:\n> > +1. FWIW, the atttached is an example of what it looks like if we\n> > avoid file format change.\n>\n> What about if we go the other direction - simply remove the name from the\n> stats entry at all. I don't actually think we need it anymore. Unless I am\n> missing something right now - entirely possible! - the danger that\n> pgstat_acquire_replslot() mentions doesn't actually exist [anymore]. After a\n> crash we throw away the old stats data and if a slot is dropped while shut\n> down, we'll not load the slot data at startup.\n\n+1. I think it works. Since the replication slot index doesn't change\nduring server running we can fetch the name from\nReplicationSlotCtl->replication_slots.\n\nIf we don't need the name in stats entry, pgstat_acquire_replslot() is\nno longer necessary?\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 7 Oct 2022 12:14:40 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "At Fri, 7 Oct 2022 12:14:40 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> > What about if we go the other direction - simply remove the name from the\n> > stats entry at all. I don't actually think we need it anymore. Unless I am\n> > missing something right now - entirely possible! - the danger that\n> > pgstat_acquire_replslot() mentions doesn't actually exist [anymore]. After a\n> > crash we throw away the old stats data and if a slot is dropped while shut\n> > down, we'll not load the slot data at startup.\n\nThe key point of this is this:\n\n+\t * XXX: I think there cannot actually be data from an older slot\n+\t * here. After a crash we throw away the old stats data and if a slot is\n+\t * dropped while shut down, we'll not load the slot data at startup.\n\nI think this is true. Assuming that we don't recreate or rename\nobjects that have stats after writing out stats, we won't have stats\nfor a different object with the same name. If we can rely on that\nfact, the existing check in pgstat_acquire_replslot() becomes\nuseless. Thus we don't need to store object name in stats entry. I\nagree to that.\n\n> +1. I think it works. Since the replication slot index doesn't change\n> during server running we can fetch the name from\n> ReplicationSlotCtl->replication_slots.\n\nThat access seems safe in a bit different aspect, too. Both\ncheckpointer (and walsender) properly initialize ReplicationSlotCtl.\n\n\n> If we don't need the name in stats entry, pgstat_acquire_replslot() is\n> no longer necessary?\n\nI think so. The entry will be created at the first report.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 07 Oct 2022 15:30:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-07 15:30:43 +0900, Kyotaro Horiguchi wrote:\n> At Fri, 7 Oct 2022 12:14:40 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> > > What about if we go the other direction - simply remove the name from the\n> > > stats entry at all. I don't actually think we need it anymore. Unless I am\n> > > missing something right now - entirely possible! - the danger that\n> > > pgstat_acquire_replslot() mentions doesn't actually exist [anymore]. After a\n> > > crash we throw away the old stats data and if a slot is dropped while shut\n> > > down, we'll not load the slot data at startup.\n> \n> The key point of this is this:\n> \n> +\t * XXX: I think there cannot actually be data from an older slot\n> +\t * here. After a crash we throw away the old stats data and if a slot is\n> +\t * dropped while shut down, we'll not load the slot data at startup.\n> \n> I think this is true. Assuming that we don't recreate or rename\n> objects that have stats after writing out stats, we won't have stats\n> for a different object with the same name.\n\nThanks for thinking through this!\n\n\n> If we can rely on that fact, the existing check in pgstat_acquire_replslot()\n> becomes useless. Thus we don't need to store object name in stats entry. I\n> agree to that.\n\nI don't agree with this aspect. I think it's better to ensure that the stats\nobject exists when acquiring a slot, rather than later, when reporting. It's a\nlot simpler to think through the lock nesting etc that way.\n\nI'd also eventually like to remove the stats that are currently kept\nseparately in ReorderBuffer, and that will be easier/cheaper if we can rely on\nthe stats objects to have already been created.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Oct 2022 12:00:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-07 12:00:56 -0700, Andres Freund wrote:\n> On 2022-10-07 15:30:43 +0900, Kyotaro Horiguchi wrote:\n> > The key point of this is this:\n> > \n> > +\t * XXX: I think there cannot actually be data from an older slot\n> > +\t * here. After a crash we throw away the old stats data and if a slot is\n> > +\t * dropped while shut down, we'll not load the slot data at startup.\n> > \n> > I think this is true. Assuming that we don't recreate or rename\n> > objects that have stats after writing out stats, we won't have stats\n> > for a different object with the same name.\n> \n> Thanks for thinking through this!\n\n> > If we can rely on that fact, the existing check in pgstat_acquire_replslot()\n> > becomes useless. Thus we don't need to store object name in stats entry. I\n> > agree to that.\n> \n> I don't agree with this aspect. I think it's better to ensure that the stats\n> object exists when acquiring a slot, rather than later, when reporting. It's a\n> lot simpler to think through the lock nesting etc that way.\n> \n> I'd also eventually like to remove the stats that are currently kept\n> separately in ReorderBuffer, and that will be easier/cheaper if we can rely on\n> the stats objects to have already been created.\n\nHere's a cleaned up version of my earlier prototype.\n\n- I wrapped the access to ReplicationSlotCtl->replication_slots[i].data.name\n in a new function bool ReplicationSlotName(index, name). I'm not entirely\n happy with that name, as it sounds like a more general accessor than it is -\n I toyed with ReplicationSlotNameForIndex(), but that seemed somewhat\n unnecessary, I don't forsee a need for another name accessor.\n\n Anyone wants to weigh in?\n\n- Substantial comment and commit message polishing\n\n- I'm planning to drop PgStat_StatReplSlotEntry.slotname and a\n PGSTAT_FILE_FORMAT_ID bump in master and to rename slotname to\n slotname_unused in 15.\n\n- Self-review found a bug, I copy-pasted create=false in the call to\n pgstat_get_entry_ref() in pgstat_acquire_replslot(). This did *NOT* cause\n any test failures - clearly our test coverage in this area is woefully\n inadequate.\n\n- This patch does not contain a test for the fix. I think this can only be\n tested by a tap test starting pg_recvlogical in the background and checking\n pg_recvlogical's output. That type of test is notoriously hard to be\n reliable, so committing it shortly before the release is wrapped seems like\n a bad idea.\n\nI manually verified that:\n- a continually active slot reporting stats after pgstat_reset_replslot()\n works correctly (this is what crashed before)\n- replslot stats reporting works correctly after stats were removed\n- upgrading from pre-fix to post-fix preserves stats (when keeping slotname /\n not increasing the stats version, of course)\n\n\nI'm planning to push this either later tonight (if I feel up to it after\ncooking dinner) or tomorrow morning PST, due to the release wrap deadline.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 7 Oct 2022 19:56:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-07 19:56:33 -0700, Andres Freund wrote:\n> I'm planning to push this either later tonight (if I feel up to it after\n> cooking dinner) or tomorrow morning PST, due to the release wrap deadline.\n\nI looked this over again, tested a bit more, and pushed the adjusted 15 and\nmaster versions to github to get a CI run. Once that passes, as I expect, I'll\npush them for real.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 8 Oct 2022 09:53:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On 2022-10-08 09:53:50 -0700, Andres Freund wrote:\n> On 2022-10-07 19:56:33 -0700, Andres Freund wrote:\n> > I'm planning to push this either later tonight (if I feel up to it after\n> > cooking dinner) or tomorrow morning PST, due to the release wrap deadline.\n>\n> I looked this over again, tested a bit more, and pushed the adjusted 15 and\n> master versions to github to get a CI run. Once that passes, as I expect, I'll\n> push them for real.\n\nThose passed and thus pushed.\n\nThanks for the report, debugging and review everyone!\n\n\nI think we need at least the following tests for replslots:\n- a reset while decoding is ongoing works correctly\n- replslot stats continue to be accumulated after stats have been removed\n\n\nI wonder how much it'd take to teach isolationtester to handle the replication\nprotocol...\n\n\n",
"msg_date": "Sat, 8 Oct 2022 10:40:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On 10/8/22 1:40 PM, Andres Freund wrote:\r\n> On 2022-10-08 09:53:50 -0700, Andres Freund wrote:\r\n>> On 2022-10-07 19:56:33 -0700, Andres Freund wrote:\r\n>>> I'm planning to push this either later tonight (if I feel up to it after\r\n>>> cooking dinner) or tomorrow morning PST, due to the release wrap deadline.\r\n>>\r\n>> I looked this over again, tested a bit more, and pushed the adjusted 15 and\r\n>> master versions to github to get a CI run. Once that passes, as I expect, I'll\r\n>> push them for real.\r\n> \r\n> Those passed and thus pushed.\r\n> \r\n> Thanks for the report, debugging and review everyone!\r\n\r\nThanks for the quick turnaround! I've closed the open item.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sat, 8 Oct 2022 22:57:40 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Sun, Oct 9, 2022 at 2:42 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-10-08 09:53:50 -0700, Andres Freund wrote:\n> > On 2022-10-07 19:56:33 -0700, Andres Freund wrote:\n> > > I'm planning to push this either later tonight (if I feel up to it after\n> > > cooking dinner) or tomorrow morning PST, due to the release wrap deadline.\n> >\n> > I looked this over again, tested a bit more, and pushed the adjusted 15 and\n> > master versions to github to get a CI run. Once that passes, as I expect, I'll\n> > push them for real.\n>\n> Those passed and thus pushed.\n>\n> Thanks for the report, debugging and review everyone!\n\nThanks!\n\n>\n>\n> I think we need at least the following tests for replslots:\n> - a reset while decoding is ongoing works correctly\n> - replslot stats continue to be accumulated after stats have been removed\n>\n>\n> I wonder how much it'd take to teach isolationtester to handle the replication\n> protocol...\n\nI think we can do these tests by using pg_recvlogical in TAP tests.\nI've attached a patch to do that.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 11 Oct 2022 17:10:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-11 17:10:52 +0900, Masahiko Sawada wrote:\n> +# Reset the replication slot statistics.\n> +$node->safe_psql('postgres',\n> +\t\"SELECT pg_stat_reset_replication_slot('regression_slot');\");\n> +my $result = $node->safe_psql('postgres',\n> +\t\"SELECT * FROM pg_stat_replication_slots WHERE slot_name = 'regrssion_slot'\"\n> +);\n\nTypo in the slot name \"regrssion_slot\" instead of \"regression_slot\". We can't\nuse * here, because that'll include the reset timestamp.\n\n\n> +# Teardown the node so the statistics is removed.\n> +$pg_recvlogical->kill_kill;\n> +$node->teardown_node;\n> +$node->start;\n\nISTM that removing the file instead of shutting down the cluster with force\nwould make it a more targeted test.\n\n\n> +# Check if the replication slot statistics have been removed.\n> +$result = $node->safe_psql('postgres',\n> +\t\"SELECT * FROM pg_stat_replication_slots WHERE slot_name = 'regrssion_slot'\"\n> +);\n> +is($result, \"\", \"replication slot statistics are removed\");\n\nSame typo as above. We can't assert a specific result here either, because\nrecvlogical will have processed a bunch of changes. Perhaps we could check at\nleast that the reset time is NULL? \n\n\n> +# Test if the replication slot staistics continue to be accumulated even after\n\ns/staistics/statistics/\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Oct 2022 09:21:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Thu, Oct 13, 2022 at 1:21 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-11 17:10:52 +0900, Masahiko Sawada wrote:\n> > +# Reset the replication slot statistics.\n> > +$node->safe_psql('postgres',\n> > + \"SELECT pg_stat_reset_replication_slot('regression_slot');\");\n> > +my $result = $node->safe_psql('postgres',\n> > + \"SELECT * FROM pg_stat_replication_slots WHERE slot_name = 'regrssion_slot'\"\n> > +);\n>\n> Typo in the slot name \"regrssion_slot\" instead of \"regression_slot\". We can't\n> use * here, because that'll include the reset timestamp.\n\nFixed.\n\n>\n>\n> > +# Teardown the node so the statistics is removed.\n> > +$pg_recvlogical->kill_kill;\n> > +$node->teardown_node;\n> > +$node->start;\n>\n> ISTM that removing the file instead of shutting down the cluster with force\n> would make it a more targeted test.\n\nAgreed.\n\n>\n>\n> > +# Check if the replication slot statistics have been removed.\n> > +$result = $node->safe_psql('postgres',\n> > + \"SELECT * FROM pg_stat_replication_slots WHERE slot_name = 'regrssion_slot'\"\n> > +);\n> > +is($result, \"\", \"replication slot statistics are removed\");\n>\n> Same typo as above. We can't assert a specific result here either, because\n> recvlogical will have processed a bunch of changes. Perhaps we could check at\n> least that the reset time is NULL?\n\nAgreed.\n\n>\n>\n> > +# Test if the replication slot staistics continue to be accumulated even after\n>\n> s/staistics/statistics/\n\nFixed.\n\nI've attached an updated patch. I've added the common function to\nstart pg_recvlogical and wait for it to become active. Please review\nit.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 13 Oct 2022 15:57:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-13 15:57:28 +0900, Masahiko Sawada wrote:\n> I've attached an updated patch. I've added the common function to\n> start pg_recvlogical and wait for it to become active. Please review\n> it.\n\n> +# Start pg_recvlogical process and wait for it to become active.\n> +sub start_pg_recvlogical\n> +{\n> +\tmy ($node, $slot_name, $create_slot) = @_;\n> +\n> +\tmy @cmd = (\n> +\t\t'pg_recvlogical', '-S', \"$slot_name\", '-d',\n> +\t\t$node->connstr('postgres'),\n> +\t\t'--start', '--no-loop', '-f', '-');\n> +\tpush @cmd, '--create-slot' if $create_slot;\n> +\n> +\t# start pg_recvlogical process.\n> +\tmy $pg_recvlogical = IPC::Run::start(@cmd);\n> +\n> +\t# Wait for the replication slot to become active.\n> +\t$node->poll_query_until('postgres',\n> +\t\t\"SELECT EXISTS (SELECT 1 FROM pg_replication_slots WHERE slot_name = '$slot_name' AND active_pid IS NOT NULL)\"\n> +\t) or die \"slot never became active\";\n> +\n> +\treturn $pg_recvlogical;\n> +}\n\nBecause you never process the output from pg_recvlogical I think this test\nwill fail if the pipe buffer is small (or more changes are made). I think\neither it needs to output to a file, or we need to process the output.\n\nA file seems the easier solution in this case, we don't really care about what\nchanges are streamed to the client, right?\n\n\n> +$node = PostgreSQL::Test::Cluster->new('test2');\n> +$node->init(allows_streaming => 'logical');\n> +$node->start;\n> +$node->safe_psql('postgres', \"CREATE TABLE test(i int)\");\n\nWhy are we creating a new cluster? Initdbs takes a fair bit of time on some\nplatforms, so this seems unnecessary?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Oct 2022 14:54:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 6:54 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-13 15:57:28 +0900, Masahiko Sawada wrote:\n> > I've attached an updated patch. I've added the common function to\n> > start pg_recvlogical and wait for it to become active. Please review\n> > it.\n>\n> > +# Start pg_recvlogical process and wait for it to become active.\n> > +sub start_pg_recvlogical\n> > +{\n> > + my ($node, $slot_name, $create_slot) = @_;\n> > +\n> > + my @cmd = (\n> > + 'pg_recvlogical', '-S', \"$slot_name\", '-d',\n> > + $node->connstr('postgres'),\n> > + '--start', '--no-loop', '-f', '-');\n> > + push @cmd, '--create-slot' if $create_slot;\n> > +\n> > + # start pg_recvlogical process.\n> > + my $pg_recvlogical = IPC::Run::start(@cmd);\n> > +\n> > + # Wait for the replication slot to become active.\n> > + $node->poll_query_until('postgres',\n> > + \"SELECT EXISTS (SELECT 1 FROM pg_replication_slots WHERE slot_name = '$slot_name' AND active_pid IS NOT NULL)\"\n> > + ) or die \"slot never became active\";\n> > +\n> > + return $pg_recvlogical;\n> > +}\n>\n> Because you never process the output from pg_recvlogical I think this test\n> will fail if the pipe buffer is small (or more changes are made). I think\n> either it needs to output to a file, or we need to process the output.\n\nOkay, but how can we test this situation? As far as I tested, if we\ndon't specify the redirection of pg_recvlogical's output as above,\npg_recvlogical's stdout and stderr are output to the log file. So I\ncould not reproduce the issue you're concerned about. Which pipe do\nyou refer to?\n\n>\n> A file seems the easier solution in this case, we don't really care about what\n> changes are streamed to the client, right?\n>\n>\n> > +$node = PostgreSQL::Test::Cluster->new('test2');\n> > +$node->init(allows_streaming => 'logical');\n> > +$node->start;\n> > +$node->safe_psql('postgres', \"CREATE TABLE test(i int)\");\n>\n> Why are we creating a new cluster? Initdbs takes a fair bit of time on some\n> platforms, so this seems unnecessary?\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Oct 2022 21:04:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: START_REPLICATION SLOT causing a crash in an assert build"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nI've attached a draft of the PostgreSQL 15 Beta 4 release announcement. \r\nPlease review for correctness and if there are any omissions.\r\n\r\nPlease provide feedback on the draft no later than Sep 8, 2022 0:00 AoE.\r\n\r\nThanks!\r\n\r\nJonathan",
"msg_date": "Tue, 6 Sep 2022 21:40:23 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 15 Beta 4 release announcement draft"
},
{
"msg_contents": "On Wed, 7 Sept 2022 at 13:40, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Please provide feedback on the draft no later than Sep 8, 2022 0:00 AoE.\n\n\"* Adjust costing to force parallelism with partition aggregates.\"\n\nIf that's about 01474f5698, then it does not need to be mentioned.\nAll that commit does is update the regression tests so that they're\nproperly exercising what they were originally meant to test.\n\nDavid\n\n\n",
"msg_date": "Wed, 7 Sep 2022 13:53:13 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 Beta 4 release announcement draft"
},
{
"msg_contents": "Op 07-09-2022 om 03:40 schreef Jonathan S. Katz:\n> Hi,\n> \n> I've attached a draft of the PostgreSQL 15 Beta 4 release announcement. \n> Please review for correctness and if there are any omissions.\n> \n> Please provide feedback on the draft no later than Sep 8, 2022 0:00 AoE.\n\n'Fixes and changes in PostgreSQL 15 Beta 3 include:' should be\n'Fixes and changes in PostgreSQL 15 Beta 4 include:'\n\n\nErik\n\n\n",
"msg_date": "Wed, 7 Sep 2022 07:18:40 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 Beta 4 release announcement draft"
},
{
"msg_contents": "Hi Jonathan,\n\nOn 2022-Sep-06, Jonathan S. Katz wrote:\n\n> * [`MERGE`](https://www.postgresql.org/docs/15/sql-merge.html) statements are\n> explicitly rejected inside of a\n> [common-table expression](https://www.postgresql.org/docs/15/queries-with.html)\n> (aka `WITH` query) and\n> [`COPY`](https://www.postgresql.org/docs/15/sql-copy.html) statements.\n\nI would say \"Avoid crash in MERGE when called inside COPY or a CTE by\nthrowing an error early\", so that it doesn't look like we're removing a\nfeature.\n\nThank you for putting this together!\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Those who use electric razors are infidels destined to burn in hell while\nwe drink from rivers of beer, download free vids and mingle with naked\nwell shaved babes.\" (http://slashdot.org/comments.pl?sid=44793&cid=4647152)\n\n\n",
"msg_date": "Wed, 7 Sep 2022 10:02:35 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 Beta 4 release announcement draft"
},
{
"msg_contents": "On 9/7/22 4:02 AM, Alvaro Herrera wrote:\r\n> Hi Jonathan,\r\n> \r\n> On 2022-Sep-06, Jonathan S. Katz wrote:\r\n> \r\n>> * [`MERGE`](https://www.postgresql.org/docs/15/sql-merge.html) statements are\r\n>> explicitly rejected inside of a\r\n>> [common-table expression](https://www.postgresql.org/docs/15/queries-with.html)\r\n>> (aka `WITH` query) and\r\n>> [`COPY`](https://www.postgresql.org/docs/15/sql-copy.html) statements.\r\n> \r\n> I would say \"Avoid crash in MERGE when called inside COPY or a CTE by\r\n> throwing an error early\", so that it doesn't look like we're removing a\r\n> feature.\r\n\r\nYeah, we don't want to create the wrong impression. I updated it per \r\nyour suggestion (with minor tweaks) and removed the line that David \r\nmentioned around the test fix.\r\n\r\nThanks!\r\n\r\nJonathan",
"msg_date": "Wed, 7 Sep 2022 11:39:52 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 Beta 4 release announcement draft"
},
{
"msg_contents": "On 9/7/22 1:18 AM, Erik Rijkers wrote:\r\n> Op 07-09-2022 om 03:40 schreef Jonathan S. Katz:\r\n>> Hi,\r\n>>\r\n>> I've attached a draft of the PostgreSQL 15 Beta 4 release \r\n>> announcement. Please review for correctness and if there are any \r\n>> omissions.\r\n>>\r\n>> Please provide feedback on the draft no later than Sep 8, 2022 0:00 AoE.\r\n> \r\n> 'Fixes and changes in PostgreSQL 15 Beta 3 include:' should be\r\n> 'Fixes and changes in PostgreSQL 15 Beta 4 include:'\r\n\r\nFixed -- thanks!\r\n\r\nJonathan",
"msg_date": "Wed, 7 Sep 2022 11:40:55 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 Beta 4 release announcement draft"
}
] |
[
{
"msg_contents": "Doc: Explain about Column List feature.\n\nAdd a new logical replication section for \"Column Lists\" (analogous to the\nRow Filters page). This explains how the feature can be used and the\ncaveats in it.\n\nAuthor: Peter Smith\nReviewed-by: Shi yu, Vignesh C, Erik Rijkers, Amit Kapila\nBackpatch-through: 15, where it was introduced\nDiscussion: https://postgr.es/m/CAHut+PvOuc9=_4TbASc5=VUqh16UWtFO3GzcKQK_5m1hrW3vqg@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/f98d07424523ab5fefa6c6483dbf60d0d2fe1df3\n\nModified Files\n--------------\ndoc/src/sgml/logical-replication.sgml | 217 +++++++++++++++++++++++++++++++\ndoc/src/sgml/ref/alter_publication.sgml | 12 +-\ndoc/src/sgml/ref/create_publication.sgml | 6 +-\n3 files changed, 224 insertions(+), 11 deletions(-)",
"msg_date": "Wed, 07 Sep 2022 03:41:17 +0000",
"msg_from": "Amit Kapila <akapila@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On 2022-Sep-07, Amit Kapila wrote:\n\n> Doc: Explain about Column List feature.\n> \n> Add a new logical replication section for \"Column Lists\" (analogous to the\n> Row Filters page). This explains how the feature can be used and the\n> caveats in it.\n> \n> Author: Peter Smith\n> Reviewed-by: Shi yu, Vignesh C, Erik Rijkers, Amit Kapila\n> Backpatch-through: 15, where it was introduced\n> Discussion: https://postgr.es/m/CAHut+PvOuc9=_4TbASc5=VUqh16UWtFO3GzcKQK_5m1hrW3vqg@mail.gmail.com\n\nHi\n\nI just read these docs and noticed that it mentions that column lists\ncan be used for security. As far as I remember, this is wrong: it is\nthe subscriber that builds the query to read column data during initial\nsync, and the publisher doesn't forbid to read columns not in it, so it\nis entirely possible for a malicious subscriber to read columns other\nthan those published. I'm pretty sure we discussed this at some point\nduring development of the feature.\n\nSo I suggest to mention this point explicitly in its own paragraph, to\navoid giving a false sense of security.\n\nWhile going over this text I found some additional things that could\n--in my opinion-- stand some improvement:\n\n* It feels better to start the section saying that a list can be\n specified; subscriber must have all those columns; omitting list\n means to publish everything. That leads to shorter text (no need to\n say \"you need to have them all, oh wait you might only have a few\").\n\n* there's no reason to explain the syntax in vague terms and refer the\n reader to the reference page.\n\n* The first few <sect2> seem to give no useful structure, and instead\n cause the text to become disorganized. I propose to remove them, and\n instead mix the paragraphs in them so that we explain the rules and\n the behavior, and lastly the effect on specific commands.\n\nThe attached patch effects those changes.\n\n\nOne more thing. There's a sect2 about combining column list. Part of it\nseems pretty judgmental and I see no reason to have it in there; I\npropose to remove it (it's not in this patch). I think we should just\nsay it doesn't work at present, here's how to work around it, and\nperhaps even say that we may lift the restriction in the future. The\nparagraph that starts with \"Background:\" is IMO out of place, and it\nrepeats the mistake that column lists are for security.\n\n\nLastly: In the create-publication reference page I think it would be\nbetter to reword the Parameters section a bit. It mentions\nFOR TABLE as a parameter, but the parameter is actually\n<replaceable>table_name</replaceable>; and the row-filter and\ncolumn-list explanations are also put in there when they should be in\ntheir own <replaceable>expression</> and <replaceable>column_name</>\nvarlistentries. I think splitting things that way would result in a\nclearer explanation.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 13 Sep 2022 14:11:38 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 10:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-07, Amit Kapila wrote:\n>\n> > Doc: Explain about Column List feature.\n> >\n> > Add a new logical replication section for \"Column Lists\" (analogous to the\n> > Row Filters page). This explains how the feature can be used and the\n> > caveats in it.\n> >\n> > Author: Peter Smith\n> > Reviewed-by: Shi yu, Vignesh C, Erik Rijkers, Amit Kapila\n> > Backpatch-through: 15, where it was introduced\n> > Discussion: https://postgr.es/m/CAHut+PvOuc9=_4TbASc5=VUqh16UWtFO3GzcKQK_5m1hrW3vqg@mail.gmail.com\n>\n> Hi\n>\n> I just read these docs and noticed that it mentions that column lists\n> can be used for security. As far as I remember, this is wrong: it is\n> the subscriber that builds the query to read column data during initial\n> sync, and the publisher doesn't forbid to read columns not in it, so it\n> is entirely possible for a malicious subscriber to read columns other\n> than those published. I'm pretty sure we discussed this at some point\n> during development of the feature.\n>\n> So I suggest to mention this point explicitly in its own paragraph, to\n> avoid giving a false sense of security.\n>\n\nThanks for the feedback.\n\nThe mention of 'security' in the page (as previously written) just\nmeans to say that publications can prevent sensitive columns from\nbeing replicated/subscribed by default. It was not intended to imply\nthose columns are immune from a malicious attack. Indeed, just having\nanother publication without any column lists could expose the same\nsensitive columns.\n\nI am fine with your rewording of the security part.\n\n> While going over this text I found some additional things that could\n> --in my opinion-- stand some improvement:\n\nIn general (because they have a lot of similarities), the wording and\nthe section structure of the \"Column Lists\" page tried to be\nconsistent with the \"Row Filters\" page. Perhaps this made everything\nunnecessarily complex. Anyway, your suggested re-wording and removal\nof those sections look OK to me - the content is the same AFAICT.\n\n>\n> * It feels better to start the section saying that a list can be\n> specified; subscriber must have all those columns; omitting list\n> means to publish everything. That leads to shorter text (no need to\n> say \"you need to have them all, oh wait you might only have a few\").\n>\n> * there's no reason to explain the syntax in vague terms and refer the\n> reader to the reference page.\n>\n> * The first few <sect2> seem to give no useful structure, and instead\n> cause the text to become disorganized. I propose to remove them, and\n> instead mix the paragraphs in them so that we explain the rules and\n> the behavior, and lastly the effect on specific commands.\n>\n> The attached patch effects those changes.\n>\n\nFor some reason I was unable to apply your supplied patch to master:\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ git apply column-list-wording.patch\ncolumn-list-wording.patch:16: trailing whitespace.\n Each publication can optionally specify which columns of each table are\ncolumn-list-wording.patch:17: trailing whitespace.\n replicated to subscribers. The table on the subscriber side must have at\ncolumn-list-wording.patch:18: trailing whitespace.\n least all the columns that are published. If no column list is specified,\ncolumn-list-wording.patch:19: trailing whitespace.\n then all columns in the publisher are replicated.\ncolumn-list-wording.patch:20: trailing whitespace.\n See <xref linkend=\"sql-createpublication\"/> for details on the syntax.\nerror: patch failed: doc/src/sgml/logical-replication.sgml:1093\nerror: doc/src/sgml/logical-replication.sgml: patch does not apply\nerror: patch failed: doc/src/sgml/ref/create_publication.sgml:94\nerror: doc/src/sgml/ref/create_publication.sgml: patch does not apply\n\n>\n> One more thing. There's a sect2 about combining column list. Part of it\n> seems pretty judgmental and I see no reason to have it in there; I\n> propose to remove it (it's not in this patch). I think we should just\n> say it doesn't work at present, here's how to work around it, and\n> perhaps even say that we may lift the restriction in the future. The\n> paragraph that starts with \"Background:\" is IMO out of place, and it\n> repeats the mistake that column lists are for security.\n>\n\nIt wasn't clear which part you felt was judgemental. I have removed\nthe \"Background\" paragraph but I have otherwise left that section and\nWarning as-is because it still seemed useful for the user to know. You\ncan change/remove it if you disagree.\n\n>\n> Lastly: In the create-publication reference page I think it would be\n> better to reword the Parameters section a bit. It mentions\n> FOR TABLE as a parameter, but the parameter is actually\n> <replaceable>table_name</replaceable>; and the row-filter and\n> column-list explanations are also put in there when they should be in\n> their own <replaceable>expression</> and <replaceable>column_name</>\n> varlistentries. I think splitting things that way would result in a\n> clearer explanation.\n>\n\nIMO this should be proposed as a separate patch. Some of those things\n(e.g. FOR TABLE as a parameter) seem to have been written that way\nsince PG10.\n\n~~~\n\nPSA a new patch for the \"Column Lists\" page. AFAIK this is the same as\neverything that you suggested (except for the Warning section which\nwas kept as mentioned above).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 14 Sep 2022 15:39:26 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On 2022-Sep-14, Peter Smith wrote:\n\n> PSA a new patch for the \"Column Lists\" page. AFAIK this is the same as\n> everything that you suggested\n\nI don't get it. You send me my patch back, and claim it is a new patch?\n\nI kindly request that when you review a patch, you do not hijack the\nsubmitter's patch and claim it as your own. If a submitter goes mising\nor states that they're unavailable to complete some work, then that's\nokay, but otherwise it seems a bit offensive to me. I have seen that\nrepeatedly of late, and I find it quite rude. \n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Las cosas son buenas o malas segun las hace nuestra opinión\" (Lisias)\n\n\n",
"msg_date": "Wed, 14 Sep 2022 11:40:09 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 7:40 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-14, Peter Smith wrote:\n>\n> > PSA a new patch for the \"Column Lists\" page. AFAIK this is the same as\n> > everything that you suggested\n>\n> I don't get it. You send me my patch back, and claim it is a new patch?\n>\n> I kindly request that when you review a patch, you do not hijack the\n> submitter's patch and claim it as your own. If a submitter goes mising\n> or states that they're unavailable to complete some work, then that's\n> okay, but otherwise it seems a bit offensive to me. I have seen that\n> repeatedly of late, and I find it quite rude.\n\nHi Alvaro,\n\nI'm sorry for any misunderstandings.\n\nI attached the replacement patch primarily because the original did\nnot apply for me, so I had to re-make it at my end anyway so I could\nsee the result. I thought posting it might save others from having to\ndo the same.\n\nCertainly I am not trying to hijack or claim ownership.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 15 Sep 2022 11:57:28 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On 2022-Sep-14, Peter Smith wrote:\n\n> On Tue, Sep 13, 2022 at 10:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > On 2022-Sep-07, Amit Kapila wrote:\n\n> > One more thing. There's a sect2 about combining column list. Part of it\n> > seems pretty judgmental and I see no reason to have it in there; I\n> > propose to remove it (it's not in this patch). I think we should just\n> > say it doesn't work at present, here's how to work around it, and\n> > perhaps even say that we may lift the restriction in the future. The\n> > paragraph that starts with \"Background:\" is IMO out of place, and it\n> > repeats the mistake that column lists are for security.\n> \n> It wasn't clear which part you felt was judgemental. I have removed\n> the \"Background\" paragraph but I have otherwise left that section and\n> Warning as-is because it still seemed useful for the user to know. You\n> can change/remove it if you disagree.\n\nI meant the Background part that you remove, yeah.\n\nLooking at the rendered docs again, I notice that section \"31.4.5.\nCombining Multiple Column Lists\" is *only* the red-tinted Warning block.\nThat seems quite odd. I am tempted to remove the sect2 heading for that\none too. I am now wondering how to split things between the normative bits\n \"It is not supported to have a subscription comprising several\n publications where the same table has been published with different\n column lists.\"\n\nand the informative bits\n \"This means changing the column lists of the tables being subscribed\n could cause inconsistency of column lists among publications, in which\n case the ALTER PUBLICATION will be successful but later the walsender\n on the publisher, or the subscriber may throw an error. In this\n scenario, the user needs to recreate the subscription after adjusting\n the column list or drop the problematic publication using ALTER\n SUBSCRIPTION ... 
DROP PUBLICATION and then add it back after adjusting\n the column list.\"\n\n> > Lastly: In the create-publication reference page I think it would be\n> > better to reword the Parameters section a bit. It mentions\n> > FOR TABLE as a parameter, but the parameter is actually\n> > <replaceable>table_name</replaceable>; and the row-filter and\n> > column-list explanations are also put in there when they should be in\n> > their own <replaceable>expression</> and <replaceable>column_name</>\n> > varlistentries. I think splitting things that way would result in a\n> > clearer explanation.\n> \n> IMO this should be proposed as a separate patch. Some of those things\n> (e.g. FOR TABLE as a parameter) seem to have been written that way\n> since PG10.\n\nAgreed.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Aprender sin pensar es inútil; pensar sin aprender, peligroso\" (Confucio)\n\n\n",
"msg_date": "Thu, 15 Sep 2022 15:08:21 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On 2022-Sep-15, Alvaro Herrera wrote:\n\n> Looking at the rendered docs again, I notice that section \"31.4.5.\n> Combining Multiple Column Lists\" is *only* the red-tinted Warning block.\n> That seems quite odd. I am tempted to remove the sect2 heading for that\n> one too.\n\nPushed. I didn't modify this part; I spent too much time looking at it\ntrying to figure out how to do it. I think this bit really belongs in\nthe CREATE/ALTER docs rather than this chapter. But in order to support\nhaving a separate <para> for the restriction on combination, we need a\nseparate <varlistentry> for the column_name parameter. So I'm going to\nedit that one and I'll throw this change in.\n\nThanks, Peter, for the discussion.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n",
"msg_date": "Thu, 15 Sep 2022 18:11:42 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On 2022-Sep-15, Alvaro Herrera wrote:\n\n> On 2022-Sep-15, Alvaro Herrera wrote:\n> \n> > Looking at the rendered docs again, I notice that section \"31.4.5.\n> > Combining Multiple Column Lists\" is *only* the red-tinted Warning block.\n> > That seems quite odd. I am tempted to remove the sect2 heading for that\n> > one too.\n> \n> Pushed. I didn't modify this part; I spent too much time looking at it\n> trying to figure out how to do it. I think this bit really belongs in\n> the CREATE/ALTER docs rather than this chapter. But in order to support\n> having a separate <para> for the restriction on combination, we need a\n> separate <varlistentry> for the column_name parameter. So I'm going to\n> edit that one and I'll throw this change in.\n\nI figured out how to fix this one -- just remove the <sect2> tags, and\nadd a <title> tag to the <warning> box. The attached yields the\nexplanatory text in a separate box that doesn't have the silly\notherwise-empty section title. We add the 'id' that was in the sect2 to\nthe warning; with this, at the referencing end the full title is\nrendered, which looks quite reasonable. I have attached screenshots of\nboth sides of this.\n\nCompare the existing\nhttps://www.postgresql.org/docs/15/logical-replication-col-lists.html#LOGICAL-REPLICATION-COL-LIST-COMBINING\n\nUnless there are objections, I'll get this pushed to 15 and master.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 19 Dec 2022 17:47:13 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 10:17 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-15, Alvaro Herrera wrote:\n>\n> > On 2022-Sep-15, Alvaro Herrera wrote:\n> >\n> > > Looking at the rendered docs again, I notice that section \"31.4.5.\n> > > Combining Multiple Column Lists\" is *only* the red-tinted Warning block.\n> > > That seems quite odd. I am tempted to remove the sect2 heading for that\n> > > one too.\n> >\n> > Pushed. I didn't modify this part; I spent too much time looking at it\n> > trying to figure out how to do it. I think this bit really belongs in\n> > the CREATE/ALTER docs rather than this chapter. But in order to support\n> > having a separate <para> for the restriction on combination, we need a\n> > separate <varlistentry> for the column_name parameter. So I'm going to\n> > edit that one and I'll throw this change in.\n>\n> I figured out how to fix this one -- just remove the <sect2> tags, and\n> add a <title> tag to the <warning> box. The attached yields the\n> explanatory text in a separate box that doesn't have the silly\n> otherwise-empty section title. We add the 'id' that was in the sect2 to\n> the warning; with this, at the referencing end the full title is\n> rendered, which looks quite reasonable. I have attached screenshots of\n> both sides of this.\n>\n> Compare the existing\n> https://www.postgresql.org/docs/15/logical-replication-col-lists.html#LOGICAL-REPLICATION-COL-LIST-COMBINING\n>\n\n- <sect2 id=\"logical-replication-col-list-combining\">\n- <title>Combining Multiple Column Lists</title>\n-\n- <warning>\n+ <warning id=\"logical-replication-col-list-combining\">\n+ <title>Combining Column Lists from Multiple Subscriptions</title>\n\nShouldn't the title be \"Combining Column Lists from Multiple\nPublications\"? We can define column lists while defining publications\nso the proposed title doesn't seem to be conveying the right thing.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Dec 2022 08:57:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 3:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-15, Alvaro Herrera wrote:\n>\n> > On 2022-Sep-15, Alvaro Herrera wrote:\n> >\n> > > Looking at the rendered docs again, I notice that section \"31.4.5.\n> > > Combining Multiple Column Lists\" is *only* the red-tinted Warning block.\n> > > That seems quite odd. I am tempted to remove the sect2 heading for that\n> > > one too.\n> >\n> > Pushed. I didn't modify this part; I spent too much time looking at it\n> > trying to figure out how to do it. I think this bit really belongs in\n> > the CREATE/ALTER docs rather than this chapter. But in order to support\n> > having a separate <para> for the restriction on combination, we need a\n> > separate <varlistentry> for the column_name parameter. So I'm going to\n> > edit that one and I'll throw this change in.\n>\n> I figured out how to fix this one -- just remove the <sect2> tags, and\n> add a <title> tag to the <warning> box. The attached yields the\n> explanatory text in a separate box that doesn't have the silly\n> otherwise-empty section title. We add the 'id' that was in the sect2 to\n> the warning; with this, at the referencing end the full title is\n> rendered, which looks quite reasonable. I have attached screenshots of\n> both sides of this.\n>\n> Compare the existing\n> https://www.postgresql.org/docs/15/logical-replication-col-lists.html#LOGICAL-REPLICATION-COL-LIST-COMBINING\n>\n> Unless there are objections, I'll get this pushed to 15 and master.\n>\n\nNot quite an objection, but...\n\nIf you change this warning title then it becomes the odd one out -\nevery other warning in all the pg docs just says \"Warning\". IMO\nmaintaining consistency throughout is best. e.g. 
I can imagine maybe\nsomeone searching for \"Warning\" in the docs, and now they are not\ngoing to find this one.\n\nMaybe a safer way to fix this \"silly otherwise-empty section title\"\nwould be to just add some explanatory text so it is no longer empty.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 20 Dec 2022 15:27:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On 2022-Dec-20, Peter Smith wrote:\n\n> If you change this warning title then it becomes the odd one out -\n> every other warning in all the pg docs just says \"Warning\". IMO\n> maintaining consistency throughout is best. e.g. I can imagine maybe\n> someone searching for \"Warning\" in the docs, and now they are not\n> going to find this one.\n\nHmm, how do you propose that people search for warnings, and fail to\nnotice one that is not titled \"Warning\"? In my mind, it is much more\nlikely that they scan a page visually until they hit a red box (\"eh\nlook, a red box that I can ignore because its title is not Warning\" does\nnot sound very credible). On the other hand, if you're going over the\nsource .sgml files, you're going to grep for the warning tag, and that's\ngoing to be there.\n\n(Maybe you'd say somebody would grep for \"<warning>\" and not find this\none because the > is not there anymore. I grant you that that could\nhappen. But then they're doing it wrong already. I don't think we need\nto cater to that.)\n\n\nNow, I did notice that we don't have any other titled warning boxes,\nbecause I had a quick look at all the other warnings we have. I was\nsurprised to find out that we have very few, which I think is good,\nbecause warnings are annoying. I was also surprised that most of them\nare right not to have a title, because they are very quick one-para\nboxes. But I did find two others that should probably have a title:\n\nhttps://www.postgresql.org/docs/15/app-pgrewind.html\nMaybe \"Failures while rewinding\"\n\nhttps://www.postgresql.org/docs/15/applevel-consistency.html\nMaybe \"Serializable Transactions and Data Replication\"\n(and patch it to mention logical replication)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 20 Dec 2022 09:21:38 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On 2022-Dec-20, Amit Kapila wrote:\n\n> + <warning id=\"logical-replication-col-list-combining\">\n> + <title>Combining Column Lists from Multiple Subscriptions</title>\n> \n> Shouldn't the title be \"Combining Column Lists from Multiple\n> Publications\"? We can define column lists while defining publications\n> so the proposed title doesn't seem to be conveying the right thing.\n\nDoh, of course.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n",
"msg_date": "Tue, 20 Dec 2022 09:22:27 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 7:21 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Dec-20, Peter Smith wrote:\n>\n> > If you change this warning title then it becomes the odd one out -\n> > every other warning in all the pg docs just says \"Warning\". IMO\n> > maintaining consistency throughout is best. e.g. I can imagine maybe\n> > someone searching for \"Warning\" in the docs, and now they are not\n> > going to find this one.\n>\n> Hmm, how do you propose that people search for warnings, and fail to\n> notice one that is not titled \"Warning\"? In my mind, it is much more\n> likely that they scan a page visually until they hit a red box (\"eh\n> look, a red box that I can ignore because its title is not Warning\" does\n> not sound very credible). On the other hand, if you're going over the\n> source .sgml files, you're going to grep for the warning tag, and that's\n> going to be there.\n>\n> (Maybe you'd say somebody would grep for \"<warning>\" and not find this\n> one because the > is not there anymore. I grant you that that could\n> happen. But then they're doing it wrong already. I don't think we need\n> to cater to that.)\n>\n\nBy \"searching\" I also meant just scanning visually, although I was\nthinking more about scanning the PDF.\n\nRight now, the intention of any text box is obvious at a glance\nbecause of those titles like \"Caution\", \"Tip\", \"Note\", \"Warning\".\nSure, the HTML rendering also uses colours to convey the purpose, but\nin the PDF version [1] everything is black-and-white so apart from the\ntitle all boxes look the same. That's why I felt using non-standard\nbox titles might be throwing away some of the meaning - e.g. the\nreader of the PDF won't know anymore at a glance are they looking at a\nwarning or a tip.\n\n>\n> Now, I did notice that we don't have any other titled warning boxes,\n> because I had a quick look at all the other warnings we have. 
I was\n> surprised to find out that we have very few, which I think is good,\n> because warnings are annoying. I was also surprised that most of them\n> are right not to have a title, because they are very quick one-para\n> boxes. But I did find two others that should probably have a title:\n>\n> https://www.postgresql.org/docs/15/app-pgrewind.html\n> Maybe \"Failures while rewinding\"\n>\n> https://www.postgresql.org/docs/15/applevel-consistency.html\n> Maybe \"Serializable Transactions and Data Replication\"\n> (and patch it to mention logical replication)\n>\n\n------\n[1] PDF docs - https://www.postgresql.org/files/documentation/pdf/15/postgresql-15-A4.pdf\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 21 Dec 2022 09:29:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On 2022-Dec-21, Peter Smith wrote:\n\n> By \"searching\" I also meant just scanning visually, although I was\n> thinking more about scanning the PDF.\n> \n> Right now, the intention of any text box is obvious at a glance\n> because of those titles like \"Caution\", \"Tip\", \"Note\", \"Warning\".\n> Sure, the HTML rendering also uses colours to convey the purpose, but\n> in the PDF version [1] everything is black-and-white so apart from the\n> title all boxes look the same. That's why I felt using non-standard\n> box titles might be throwing away some of the meaning - e.g. the\n> reader of the PDF won't know anymore at a glance are they looking at a\n> warning or a tip.\n\nOh, I see. It's been so long that I haven't looked at the PDFs, that I\nfailed to realize that they don't use color. I agree that would be a\nproblem. Maybe we can change the title to have the word:\n\n <title>Warning: Combining Column Lists from Multiple Publications</title>\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 21 Dec 2022 08:59:08 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 6:59 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Dec-21, Peter Smith wrote:\n>\n> > By \"searching\" I also meant just scanning visually, although I was\n> > thinking more about scanning the PDF.\n> >\n> > Right now, the intention of any text box is obvious at a glance\n> > because of those titles like \"Caution\", \"Tip\", \"Note\", \"Warning\".\n> > Sure, the HTML rendering also uses colours to convey the purpose, but\n> > in the PDF version [1] everything is black-and-white so apart from the\n> > title all boxes look the same. That's why I felt using non-standard\n> > box titles might be throwing away some of the meaning - e.g. the\n> > reader of the PDF won't know anymore at a glance are they looking at a\n> > warning or a tip.\n>\n> Oh, I see. It's been so long that I haven't looked at the PDFs, that I\n> failed to realize that they don't use color. I agree that would be a\n> problem. Maybe we can change the title to have the word:\n>\n> <title>Warning: Combining Column Lists from Multiple Publications</title>\n>\n\nThat last idea LGTM. But no patch at all LGTM also.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Wed, 21 Dec 2022 20:04:39 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
},
{
"msg_contents": "On 2022-Dec-21, Peter Smith wrote:\n\n> On Wed, Dec 21, 2022 at 6:59 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > Oh, I see. It's been so long that I haven't looked at the PDFs, that I\n> > failed to realize that they don't use color. I agree that would be a\n> > problem. Maybe we can change the title to have the word:\n> >\n> > <title>Warning: Combining Column Lists from Multiple Publications</title>\n> \n> That last idea LGTM. But no patch at all LGTM also.\n\n... I hear you, but honestly that warning box with a section title looks\ncompletely wrong to me, so I've pushed it. Many thanks for looking.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"But static content is just dynamic content that isn't moving!\"\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n",
"msg_date": "Fri, 23 Dec 2022 17:52:21 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Doc: Explain about Column List feature."
}
] |
[
{
"msg_contents": "Hello hackers!\n\nReading docs for the MERGE statement I've found a little error: a \nsemicolon in middle of a statement and absence of a semicolon in the end \nof it.\n\nKey words in subqueries are written in uppercase everywhere in the docs \nbut not in an example for MERGE. I think it should be adjusted too.\n\n\nAlso aliases, table and column names are written in lowercase \n(snake_case) almost all over the docs. I did not dare to fix examples in \nthe same patch (may be that style was intentional), but guess that style \nof the first two examples should not differ from the third one and from \nother examples in docs.\n\n\nDiscussions about MERGE was:\nhttps://postgr.es/m/20220801145257.GA15006@telsasoft.com\nhttps://postgr.es/m/20220714162618.GH18011@telsasoft.com\n\nbut I did not find there (via quick search) anything about case styling.\n\n\nThank all a lot in advance!\n\n\n-- \nBest regards,\nVitaly Burovoy",
"msg_date": "Wed, 7 Sep 2022 20:51:24 +0000",
"msg_from": "Vitaly Burovoy <vitaly.burovoy@gmail.com>",
"msg_from_op": true,
"msg_subject": "Doc fix and adjustment for MERGE command"
},
{
"msg_contents": "On 9/7/22 22:51, Vitaly Burovoy wrote:\n> Hello hackers!\n> \n> Reading docs for the MERGE statement I've found a little error: a \n> semicolon in middle of a statement and absence of a semicolon in the end \n> of it.\n> \n> Key words in subqueries are written in uppercase everywhere in the docs \n> but not in an example for MERGE. I think it should be adjusted too.\n> \n> \n> Also aliases, table and column names are written in lowercase \n> (snake_case) almost all over the docs. I did not dare to fix examples in \n> the same patch (may be that style was intentional), but guess that style \n> of the first two examples should not differ from the third one and from \n> other examples in docs.\n\n\nI agree with both of these patches (especially the semicolon part which \nis not subjective).\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 8 Sep 2022 10:14:56 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc fix and adjustment for MERGE command"
},
{
"msg_contents": "On 2022-Sep-08, Vik Fearing wrote:\n\n> On 9/7/22 22:51, Vitaly Burovoy wrote:\n> > Hello hackers!\n> > \n> > Reading docs for the MERGE statement I've found a little error: a\n> > semicolon in middle of a statement and absence of a semicolon in the end\n> > of it.\n> > \n> > Key words in subqueries are written in uppercase everywhere in the docs\n> > but not in an example for MERGE. I think it should be adjusted too.\n\n> I agree with both of these patches (especially the semicolon part which is\n> not subjective).\n\nOK, pushed both together.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El sabio habla porque tiene algo que decir;\nel tonto, porque tiene que decir algo\" (Platon).\n\n\n",
"msg_date": "Fri, 9 Sep 2022 13:54:25 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc fix and adjustment for MERGE command"
},
{
"msg_contents": "On 2022-09-09 11:54Z, Alvaro Herrera wrote:\n> On 2022-Sep-08, Vik Fearing wrote:\n> \n>> On 9/7/22 22:51, Vitaly Burovoy wrote:\n>>> Hello hackers!\n>>>\n>>> Reading docs for the MERGE statement I've found a little error: a\n>>> semicolon in middle of a statement and absence of a semicolon in the end\n>>> of it.\n>>>\n>>> Key words in subqueries are written in uppercase everywhere in the docs\n>>> but not in an example for MERGE. I think it should be adjusted too.\n> \n>> I agree with both of these patches (especially the semicolon part which is\n>> not subjective).\n> \n> OK, pushed both together.\n> \n\nThank you!\n=)\n\n-- \nBest regards,\nVitaly Burovoy\n\n\n",
"msg_date": "Fri, 9 Sep 2022 11:58:48 +0000",
"msg_from": "Vitaly Burovoy <vitaly.burovoy@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc fix and adjustment for MERGE command"
}
] |
[
{
"msg_contents": "Fix perltidy breaking perlcritic\n\nperltidying a \"##no critic\" line moves the marker to where it becomes\nuseless. Put the line back to how it was, and protect it from further\nmalfeasance.\n\nPer buildfarm member crake.\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/12d40d4a8d0495cf2c7b564daa8aaa7f107a6c56\n\nModified Files\n--------------\nsrc/backend/catalog/Catalog.pm | 6 ++++--\n1 file changed, 4 insertions(+), 2 deletions(-)",
"msg_date": "Thu, 08 Sep 2022 09:23:09 +0000",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix perltidy breaking perlcritic"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 5:23 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> Fix perltidy breaking perlcritic\n>\n> perltidying a \"##no critic\" line moves the marker to where it becomes\n> useless. Put the line back to how it was, and protect it from further\n> malfeasance.\n>\n>\n>\nA better way do do this IMNSHO is to put the eval in a block on its own\nalong with the no critic marker on its own line, like this:\n\n{\n ## no critic (ProhibitStringyEval)\n eval ...\n}\n\nperlcritic respects block boundaries for its directives.\n\ncheers\n\nandrew\n\nOn Thu, Sep 8, 2022 at 5:23 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:Fix perltidy breaking perlcritic\n\nperltidying a \"##no critic\" line moves the marker to where it becomes\nuseless. Put the line back to how it was, and protect it from further\nmalfeasance.\nA better way do do this IMNSHO is to put the eval in a block on its own along with the no critic marker on its own line, like this:{ ## no critic (ProhibitStringyEval) eval ...}perlcritic respects block boundaries for its directives.cheersandrew",
"msg_date": "Thu, 8 Sep 2022 16:32:14 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix perltidy breaking perlcritic"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 3:32 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> A better way do do this IMNSHO is to put the eval in a block on its own along with the no critic marker on its own line, like this:\n>\n> {\n> ## no critic (ProhibitStringyEval)\n> eval ...\n> }\n>\n> perlcritic respects block boundaries for its directives.\n\nI tried that in the attached -- it looks a bit nicer but requires more\nexplanation. I don't have strong feelings either way.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 10 Sep 2022 09:44:44 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix perltidy breaking perlcritic"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 10:44 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> On Fri, Sep 9, 2022 at 3:32 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> > A better way do do this IMNSHO is to put the eval in a block on its own\n> along with the no critic marker on its own line, like this:\n> >\n> > {\n> > ## no critic (ProhibitStringyEval)\n> > eval ...\n> > }\n> >\n> > perlcritic respects block boundaries for its directives.\n>\n> I tried that in the attached -- it looks a bit nicer but requires more\n> explanation. I don't have strong feelings either way.\n>\n>\nMaybe even better would be just this, which I bet perltidy would not monkey\nwith, and would require no explanation:\n\neval \"\\$hash_ref = $_\"; ## no critic (ProhibitStringyEval)\n\ncheers\n\nandrew\n\nOn Fri, Sep 9, 2022 at 10:44 PM John Naylor <john.naylor@enterprisedb.com> wrote:On Fri, Sep 9, 2022 at 3:32 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> A better way do do this IMNSHO is to put the eval in a block on its own along with the no critic marker on its own line, like this:\n>\n> {\n> ## no critic (ProhibitStringyEval)\n> eval ...\n> }\n>\n> perlcritic respects block boundaries for its directives.\n\nI tried that in the attached -- it looks a bit nicer but requires more\nexplanation. I don't have strong feelings either way.\nMaybe even better would be just this, which I bet perltidy would not monkey with, and would require no explanation:eval \"\\$hash_ref = $_\"; ## no critic (ProhibitStringyEval) cheersandrew",
"msg_date": "Sat, 10 Sep 2022 02:50:09 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix perltidy breaking perlcritic"
},
{
"msg_contents": "[resending to -hackers instead of -committers]\n\nAndrew Dunstan <andrew@dunslane.net> writes:\n\n> On Fri, Sep 9, 2022 at 10:44 PM John Naylor <john.naylor@enterprisedb.com>\n> wrote:\n>\n>> On Fri, Sep 9, 2022 at 3:32 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> > A better way do do this IMNSHO is to put the eval in a block on its own\n>> along with the no critic marker on its own line, like this:\n>> >\n>> > {\n>> > ## no critic (ProhibitStringyEval)\n>> > eval ...\n>> > }\n>> >\n>> > perlcritic respects block boundaries for its directives.\n>>\n>> I tried that in the attached -- it looks a bit nicer but requires more\n>> explanation. I don't have strong feelings either way.\n>>\n>>\n> Maybe even better would be just this, which I bet perltidy would not monkey\n> with, and would require no explanation:\n>\n> eval \"\\$hash_ref = $_\"; ## no critic (ProhibitStringyEval)\n\nI didn't see this until it got committed, since I'm not subscribed to\n-committers, but I think it would be even better to rely on the fact\nthat eval returns the value of the last expression in the string, which\nalso gets rid of the ugly quoting and escaping, per the attached.\n\n- ilmari",
"msg_date": "Mon, 12 Sep 2022 10:54:47 +0100",
"msg_from": "Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix perltidy breaking perlcritic"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 4:54 PM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n\n> > eval \"\\$hash_ref = $_\"; ## no critic (ProhibitStringyEval)\n>\n> I didn't see this until it got committed, since I'm not subscribed to\n> -committers, but I think it would be even better to rely on the fact\n> that eval returns the value of the last expression in the string, which\n> also gets rid of the ugly quoting and escaping, per the attached.\n\nHmm, interesting.\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Sep 2022 16:25:18 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix perltidy breaking perlcritic"
},
{
"msg_contents": "\nOn 2022-09-13 Tu 05:25, John Naylor wrote:\n> On Mon, Sep 12, 2022 at 4:54 PM Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n>\n>>> eval \"\\$hash_ref = $_\"; ## no critic (ProhibitStringyEval)\n>> I didn't see this until it got committed, since I'm not subscribed to\n>> -committers, but I think it would be even better to rely on the fact\n>> that eval returns the value of the last expression in the string, which\n>> also gets rid of the ugly quoting and escaping, per the attached.\n> Hmm, interesting.\n\n\n\nI agree it's a slight stylistic improvement. I was trying to keep as\nclose as possible to the original.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 13 Sep 2022 22:05:20 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix perltidy breaking perlcritic"
}
] |
[
{
"msg_contents": "Hi All,\n\nThe logically decoded data are sent to the logical subscriber at the time\nof transaction commit, assuming that the data is small. However, before the\ntransaction commit is performed, the LSN representing the data that is yet\nto be received by the logical subscriber appears in the confirmed_flush_lsn\ncolumn of pg_replication_slots catalog. Isn't the information seen in the\nconfirmed_flush_lsn column while the transaction is in progress incorrect ?\nesp considering the description given in the pg doc for this column.\n\nActually, while the transaction is running, the publisher keeps on sending\nkeepalive messages containing LSN of the last decoded data saved in reorder\nbuffer and the subscriber responds with the same LSN as the last received\nLSN which is then updated as confirmed_flush_lsn by the publisher. I think\nthe LSN that we are sending with the keepalive message should be the one\nrepresenting the transaction begin message, not the LSN of the last decoded\ndata which is yet to be sent. Please let me know if I am missing something\nhere.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nHi All,The logically decoded data are sent to the logical subscriber at the time of transaction commit, assuming that the data is small. However, before the transaction commit is performed, the LSN representing the data that is yet to be received by the logical subscriber appears in the confirmed_flush_lsn column of pg_replication_slots catalog. Isn't the information seen in the confirmed_flush_lsn column while the transaction is in progress incorrect ? esp considering the description given in the pg doc for this column.Actually, while the transaction is running, the publisher keeps on sending keepalive messages containing LSN of the last decoded data saved in reorder buffer and the subscriber responds with the same LSN as the last received LSN which is then updated as confirmed_flush_lsn by the publisher. 
I think the LSN that we are sending with the keepalive message should be the one representing the transaction begin message, not the LSN of the last decoded data which is yet to be sent. Please let me know if I am missing something here.--With Regards,Ashutosh Sharma.",
"msg_date": "Thu, 8 Sep 2022 16:14:00 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 4:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi All,\n>\n> The logically decoded data are sent to the logical subscriber at the time of transaction commit, assuming that the data is small. However, before the transaction commit is performed, the LSN representing the data that is yet to be received by the logical subscriber appears in the confirmed_flush_lsn column of pg_replication_slots catalog. Isn't the information seen in the confirmed_flush_lsn column while the transaction is in progress incorrect ? esp considering the description given in the pg doc for this column.\n>\n> Actually, while the transaction is running, the publisher keeps on sending keepalive messages containing LSN of the last decoded data saved in reorder buffer and the subscriber responds with the same LSN as the last received LSN which is then updated as confirmed_flush_lsn by the publisher. I think the LSN that we are sending with the keepalive message should be the one representing the transaction begin message, not the LSN of the last decoded data which is yet to be sent. Please let me know if I am missing something here.\n\nThe transactions with commit lsn < confirmed_flush_lsn are confirmed\nto be received (and applied by the subscriber. Setting LSN\ncorresponding to a WAL record within a transaction in progress as\nconfirmed_flush should be ok. Since the transactions are interleaved\nin WAL stream, it's quite possible that LSNs of some WAL records of an\ninflight transaction are lesser than commit LSN of some another\ntransaction. 
So setting commit LSN of another effectively same as\nsetting it to any of the LSNs of any previous WAL record irrespective\nof the transaction that it belongs to.\n\nIn case WAL sender restarts with confirmed_flush_lsn set to LSN of a\nWAL record of an inflight transaction, the whole inflight transaction\nwill be sent again since its commit LSN is higher than\nconfirmed_flush_lsn.\n\nI think logical replication has inherited this from physical\nreplication. A useful effect of this is to reduce WAL retention by\nmoving restart_lsn based on the latest confirmed_flush_lsn.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 8 Sep 2022 18:23:45 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 6:23 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Sep 8, 2022 at 4:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Hi All,\n> >\n> > The logically decoded data are sent to the logical subscriber at the time of transaction commit, assuming that the data is small. However, before the transaction commit is performed, the LSN representing the data that is yet to be received by the logical subscriber appears in the confirmed_flush_lsn column of pg_replication_slots catalog. Isn't the information seen in the confirmed_flush_lsn column while the transaction is in progress incorrect ? esp considering the description given in the pg doc for this column.\n> >\n> > Actually, while the transaction is running, the publisher keeps on sending keepalive messages containing LSN of the last decoded data saved in reorder buffer and the subscriber responds with the same LSN as the last received LSN which is then updated as confirmed_flush_lsn by the publisher. I think the LSN that we are sending with the keepalive message should be the one representing the transaction begin message, not the LSN of the last decoded data which is yet to be sent. Please let me know if I am missing something here.\n>\n> The transactions with commit lsn < confirmed_flush_lsn are confirmed\n> to be received (and applied by the subscriber. Setting LSN\n> corresponding to a WAL record within a transaction in progress as\n> confirmed_flush should be ok. Since the transactions are interleaved\n> in WAL stream, it's quite possible that LSNs of some WAL records of an\n> inflight transaction are lesser than commit LSN of some another\n> transaction. So setting commit LSN of another effectively same as\n> setting it to any of the LSNs of any previous WAL record irrespective\n> of the transaction that it belongs to.\n\nThank you Ashutosh for the explanation. 
I still feel that the\ndocumentation on confirmed_flush_lsn needs some improvement. It\nactually claims that all the data before the confirmed_flush_lsn has\nbeen received by the logical subscriber, but that's not the case. It\nactually means that all the data belonging to the transactions with\ncommit lsn < confirmed_flush_lsn has been received and applied by the\nsubscriber. So setting confirmed_flush_lsn to the lsn of wal records\ngenerated by running transaction might make people think that the wal\nrecords belonging to previous data of the same running transaction has\nalready been received and applied by the subscriber node, but that's\nnot true.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Thu, 8 Sep 2022 20:32:05 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 8:32 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Thu, Sep 8, 2022 at 6:23 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Thu, Sep 8, 2022 at 4:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > Hi All,\n> > >\n> > > The logically decoded data are sent to the logical subscriber at the time of transaction commit, assuming that the data is small. However, before the transaction commit is performed, the LSN representing the data that is yet to be received by the logical subscriber appears in the confirmed_flush_lsn column of pg_replication_slots catalog. Isn't the information seen in the confirmed_flush_lsn column while the transaction is in progress incorrect ? esp considering the description given in the pg doc for this column.\n> > >\n> > > Actually, while the transaction is running, the publisher keeps on sending keepalive messages containing LSN of the last decoded data saved in reorder buffer and the subscriber responds with the same LSN as the last received LSN which is then updated as confirmed_flush_lsn by the publisher. I think the LSN that we are sending with the keepalive message should be the one representing the transaction begin message, not the LSN of the last decoded data which is yet to be sent. Please let me know if I am missing something here.\n> >\n> > The transactions with commit lsn < confirmed_flush_lsn are confirmed\n> > to be received (and applied by the subscriber. Setting LSN\n> > corresponding to a WAL record within a transaction in progress as\n> > confirmed_flush should be ok. Since the transactions are interleaved\n> > in WAL stream, it's quite possible that LSNs of some WAL records of an\n> > inflight transaction are lesser than commit LSN of some another\n> > transaction. 
So setting commit LSN of another effectively same as\n> > setting it to any of the LSNs of any previous WAL record irrespective\n> > of the transaction that it belongs to.\n>\n> Thank you Ashutosh for the explanation. I still feel that the\n> documentation on confirmed_flush_lsn needs some improvement. It\n> actually claims that all the data before the confirmed_flush_lsn has\n> been received by the logical subscriber, but that's not the case. It\n> actually means that all the data belonging to the transactions with\n> commit lsn < confirmed_flush_lsn has been received and applied by the\n> subscriber. So setting confirmed_flush_lsn to the lsn of wal records\n> generated by running transaction might make people think that the wal\n> records belonging to previous data of the same running transaction has\n> already been received and applied by the subscriber node, but that's\n> not true.\n>\n\nCan you please point to the documentation.\n\nIt's true that it needs to be clarified. But what you are saying may\nnot be entirely true in case of streamed transaction. In that case we\nmight send logically decoded changes of an ongoing transaction as\nwell. They may even get applied but not necessarily committed. It's a\nbit complicated. :)\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 9 Sep 2022 17:36:47 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 5:36 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Sep 8, 2022 at 8:32 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > On Thu, Sep 8, 2022 at 6:23 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 8, 2022 at 4:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> Can you please point to the documentation.\n>\n\nAFAIU there is just one documentation. Here is the link for it:\n\nhttps://www.postgresql.org/docs/current/view-pg-replication-slots.html\n\n> It's true that it needs to be clarified. But what you are saying may\n> not be entirely true in case of streamed transaction. In that case we\n> might send logically decoded changes of an ongoing transaction as\n> well. They may even get applied but not necessarily committed. It's a\n> bit complicated. :)\n>\n\nThis can happen in case of big transactions. That's the reason I\nmentioned that the transaction has a small set of data which is not\nyet committed but the confirmed_flush_lsn says it has already reached\nthe logical subscriber.\n\nAnd.. lastly sorry for the delayed response. I was not well and\ncouldn't access email for quite some days. The poor dengue had almost\nkilled me :(\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Mon, 19 Sep 2022 13:43:46 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 1:43 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Fri, Sep 9, 2022 at 5:36 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Thu, Sep 8, 2022 at 8:32 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 8, 2022 at 6:23 PM Ashutosh Bapat\n> > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > >\n> > > > On Thu, Sep 8, 2022 at 4:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > Can you please point to the documentation.\n> >\n>\n> AFAIU there is just one documentation. Here is the link for it:\n>\n> https://www.postgresql.org/docs/current/view-pg-replication-slots.html\n\nThanks. Description of confirmed_flush_lsn is \"The address (LSN) up to\nwhich the logical slot's consumer has confirmed receiving data. Data\nolder than this is not available anymore. NULL for physical slots.\"\nThe second sentence is misleading. AFAIU, it really should be \"Data\ncorresponding to the transactions committed before this LSN is not\navailable anymore\". WAL before restart_lsn is likely to be removed but\nWAL with LSN higher than restart_lsn is preserved. This correction\nmakes more sense because of the third sentence.\n\n>\n> And.. lastly sorry for the delayed response. I was not well and\n> couldn't access email for quite some days. The poor dengue had almost\n> killed me :(\n\nDengue had almost killed me also. Take care.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 19 Sep 2022 17:24:11 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 5:24 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Mon, Sep 19, 2022 at 1:43 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > On Fri, Sep 9, 2022 at 5:36 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 8, 2022 at 8:32 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > >\n> > > > On Thu, Sep 8, 2022 at 6:23 PM Ashutosh Bapat\n> > > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Sep 8, 2022 at 4:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > Can you please point to the documentation.\n> > >\n> >\n> > AFAIU there is just one documentation. Here is the link for it:\n> >\n> > https://www.postgresql.org/docs/current/view-pg-replication-slots.html\n>\n> Thanks. Description of confirmed_flush_lsn is \"The address (LSN) up to\n> which the logical slot's consumer has confirmed receiving data. Data\n> older than this is not available anymore. NULL for physical slots.\"\n> The second sentence is misleading. AFAIU, it really should be \"Data\n> corresponding to the transactions committed before this LSN is not\n> available anymore\". WAL before restart_lsn is likely to be removed but\n> WAL with LSN higher than restart_lsn is preserved. This correction\n> makes more sense because of the third sentence.\n>\n\nThanks for the clarification. Attached is the patch with the changes.\nPlease have a look.\n\n--\nWith Regards,\nAshutosh Sharma.",
"msg_date": "Mon, 19 Sep 2022 20:09:11 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 8:09 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Mon, Sep 19, 2022 at 5:24 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Mon, Sep 19, 2022 at 1:43 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 9, 2022 at 5:36 PM Ashutosh Bapat\n> > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > >\n> > > > On Thu, Sep 8, 2022 at 8:32 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Sep 8, 2022 at 6:23 PM Ashutosh Bapat\n> > > > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > > > >\n> > > > > > On Thu, Sep 8, 2022 at 4:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > > Can you please point to the documentation.\n> > > >\n> > >\n> > > AFAIU there is just one documentation. Here is the link for it:\n> > >\n> > > https://www.postgresql.org/docs/current/view-pg-replication-slots.html\n> >\n> > Thanks. Description of confirmed_flush_lsn is \"The address (LSN) up to\n> > which the logical slot's consumer has confirmed receiving data. Data\n> > older than this is not available anymore. NULL for physical slots.\"\n> > The second sentence is misleading. AFAIU, it really should be \"Data\n> > corresponding to the transactions committed before this LSN is not\n> > available anymore\". WAL before restart_lsn is likely to be removed but\n> > WAL with LSN higher than restart_lsn is preserved. This correction\n> > makes more sense because of the third sentence.\n> >\n>\n> Thanks for the clarification. Attached is the patch with the changes.\n> Please have a look.\n>\nLooks good to me. Do you want to track this through commitfest app?\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 21 Sep 2022 19:21:02 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 7:21 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Mon, Sep 19, 2022 at 8:09 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > On Mon, Sep 19, 2022 at 5:24 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > On Mon, Sep 19, 2022 at 1:43 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > >\n> > > > On Fri, Sep 9, 2022 at 5:36 PM Ashutosh Bapat\n> > > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Sep 8, 2022 at 8:32 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > > > >\n> > > > > > On Thu, Sep 8, 2022 at 6:23 PM Ashutosh Bapat\n> > > > > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Thu, Sep 8, 2022 at 4:14 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > > > Can you please point to the documentation.\n> > > > >\n> > > >\n> > > > AFAIU there is just one documentation. Here is the link for it:\n> > > >\n> > > > https://www.postgresql.org/docs/current/view-pg-replication-slots.html\n> > >\n> > > Thanks. Description of confirmed_flush_lsn is \"The address (LSN) up to\n> > > which the logical slot's consumer has confirmed receiving data. Data\n> > > older than this is not available anymore. NULL for physical slots.\"\n> > > The second sentence is misleading. AFAIU, it really should be \"Data\n> > > corresponding to the transactions committed before this LSN is not\n> > > available anymore\". WAL before restart_lsn is likely to be removed but\n> > > WAL with LSN higher than restart_lsn is preserved. This correction\n> > > makes more sense because of the third sentence.\n> > >\n> >\n> > Thanks for the clarification. Attached is the patch with the changes.\n> > Please have a look.\n> >\n> Looks good to me. Do you want to track this through commitfest app?\n>\n\nYeah, I've added an entry for it in the commitfest app and marked it\nas ready for committer. 
Thanks for the suggestion.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Wed, 21 Sep 2022 20:57:01 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 8:09 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Thanks for the clarification. Attached is the patch with the changes.\n> Please have a look.\n>\n\nLGTM. I'll push this tomorrow unless there are any other\ncomments/suggestions for this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 3 Nov 2022 16:49:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Thu, Nov 3, 2022 at 4:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 19, 2022 at 8:09 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Thanks for the clarification. Attached is the patch with the changes.\n> > Please have a look.\n> >\n>\n> LGTM. I'll push this tomorrow unless there are any other\n> comments/suggestions for this.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Nov 2022 15:29:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
},
{
"msg_contents": "On Fri, Nov 4, 2022 at 3:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Nov 3, 2022 at 4:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Sep 19, 2022 at 8:09 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > Thanks for the clarification. Attached is the patch with the changes.\n> > > Please have a look.\n> > >\n> >\n> > LGTM. I'll push this tomorrow unless there are any other\n> > comments/suggestions for this.\n> >\n>\n> Pushed!\n>\n\nthanks Amit.!\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Fri, 4 Nov 2022 17:11:33 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: confirmed_flush_lsn shows LSN of the data that has not yet been\n received by the logical subscriber."
}
] |
[
{
"msg_contents": "Hi,\n\nIn Neon, we've had to modify the GIN fast insertion path as attached,\ndue to an unexpected XLOG insertion and buffer locking ordering issue.\n\nThe xlog readme [0] mentions that the common order of operations is 1)\npin and lock any buffers you need for the log record, then 2) start a\ncritical section, then 3) call XLogBeginInsert.\nIn Neon, we rely on this documented order of operations to expect to\nbe able to WAL-log hint pages (freespace map, visibility map) when\nthey are written to disk (e.g. cache eviction, checkpointer). In\ngeneral, this works fine, except that in ginHeapTupleFastInsert we\ncall XLogBeginInsert() before the last of the buffers for the eventual\nrecord was read, thus creating a path where eviction is possible in a\n`begininsert_called = true` context. That breaks our current code by\nbeing unable to evict (WAL-log) the dirtied hint pages.\n\nPFA a patch that rectifies this issue, by moving the XLogBeginInsert()\ndown to where 1.) we have all relevant buffers pinned and locked, and\n2.) we're in a critical section, making that part of the code\nconsistent with the general scheme for XLog insertion.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] access/transam/README, section \"Write-Ahead Log Coding\", line 436-470",
"msg_date": "Thu, 8 Sep 2022 13:07:54 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer ordering"
},
{
"msg_contents": "HI,\n\nOn Sep 8, 2022, 19:08 +0800, Matthias van de Meent <boekewurm+postgres@gmail.com>, wrote:\n> In general, this works fine, except that in ginHeapTupleFastInsert we\n> call XLogBeginInsert() before the last of the buffers for the eventual\n> record was read, thus creating a path where eviction is possible in a\n> `begininsert_called = true` context. That breaks our current code by\n> being unable to evict (WAL-log) the dirtied hint pages.\nDoes it break Postgres or Neon?\nI look around the codes and as far as I can see, dirty pages could be flushed whether begininsert_called is true or false.\n>\n> PFA a patch that rectifies this issue, by moving the XLogBeginInsert()\n> down to where 1.) we have all relevant buffers pinned and locked, and\n> 2.) we're in a critical section, making that part of the code\n> consistent with the general scheme for XLog insertion.\n+1, Make sense.\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHI,\n\nOn Sep 8, 2022, 19:08 +0800, Matthias van de Meent <boekewurm+postgres@gmail.com>, wrote:\n In general, this works fine, except that in ginHeapTupleFastInsert we\ncall XLogBeginInsert() before the last of the buffers for the eventual\nrecord was read, thus creating a path where eviction is possible in a\n`begininsert_called = true` context. That breaks our current code by\nbeing unable to evict (WAL-log) the dirtied hint pages.\nDoes it break Postgres or Neon? \nI look around the codes and as far as I can see, dirty pages could be flushed whether begininsert_called is true or false.\n\nPFA a patch that rectifies this issue, by moving the XLogBeginInsert()\ndown to where 1.) we have all relevant buffers pinned and locked, and\n2.) we're in a critical section, making that part of the code\nconsistent with the general scheme for XLog insertion.\n+1, Make sense.\n\n\nRegards,\nZhang Mingli",
"msg_date": "Fri, 30 Sep 2022 17:57:32 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "Hi,\n\nOn Sep 8, 2022, 19:08 +0800, Matthias van de Meent <boekewurm+postgres@gmail.com>, wrote:\n>\n> PFA a patch that rectifies this issue, by moving the XLogBeginInsert()\n> down to where 1.) we have all relevant buffers pinned and locked, and\n> 2.) we're in a critical section, making that part of the code\n> consistent with the general scheme for XLog insertion.\n>\nIn the same function, there is disorder of XLogBeginInsert and START_CRIT_SECTION.\n\n```\ncollectordata = ptr = (char *) palloc(collector->sumsize);\n\n data.ntuples = collector->ntuples;\n\n if (needWal)\n XLogBeginInsert();\n\n START_CRIT_SECTION();\n```\n\nShall we adjust that too?\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHi,\n\nOn Sep 8, 2022, 19:08 +0800, Matthias van de Meent <boekewurm+postgres@gmail.com>, wrote:\n\nPFA a patch that rectifies this issue, by moving the XLogBeginInsert()\ndown to where 1.) we have all relevant buffers pinned and locked, and\n2.) we're in a critical section, making that part of the code\nconsistent with the general scheme for XLog insertion.\n\nIn the same function, there is disorder of XLogBeginInsert and START_CRIT_SECTION.\n\n```\ncollectordata = ptr = (char *) palloc(collector->sumsize);\n\n data.ntuples = collector->ntuples;\n\n if (needWal)\n XLogBeginInsert();\n\n START_CRIT_SECTION();\n```\n\nShall we adjust that too?\n\n\nRegards,\nZhang Mingli",
"msg_date": "Fri, 30 Sep 2022 18:22:02 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 06:22:02PM +0800, Zhang Mingli wrote:\n> In the same function, there is disorder of XLogBeginInsert and START_CRIT_SECTION.\n> \n> ```\n> collectordata = ptr = (char *) palloc(collector->sumsize);\n> \n> data.ntuples = collector->ntuples;\n> \n> if (needWal)\n> XLogBeginInsert();\n> \n> START_CRIT_SECTION();\n> ```\n> \n> Shall we adjust that too?\n\nNice catches, both of you. Let's adjust everything spotted in one\nshot. Matthias' patch makes the ordering right, but the\ninitialization path a bit harder to follow when using a separate list.\nThe code is short so it does not strike me as a big deal, and it comes\nfrom the fact that we need to lock an existing buffer when merging two\nlists. For the branch where we insert into a tail page, the pages are\nalready locked but it looks to be enough to move XLogBeginInsert()\nbefore the first XLogRegisterBuffer() call.\n\nWould any of you like to write a patch?\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 11:09:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "HI,\n\nOn Oct 12, 2022, 10:09 +0800, Michael Paquier <michael@paquier.xyz>, wrote:\n>\n> Nice catches, both of you. Let's adjust everything spotted in one\n> shot. Matthias' patch makes the ordering right, but the\n> initialization path a bit harder to follow when using a separate list.\n> The code is short so it does not strike me as a big deal, and it comes\n> from the fact that we need to lock an existing buffer when merging two\n> lists. For the branch where we insert into a tail page, the pages are\n> already locked but it looks to be enough to move XLogBeginInsert()\n> before the first XLogRegisterBuffer() call.\n>\n> Would any of you like to write a patch?\n> --\n> Michael\nPatch added.\n\nRegards,\nZhang Mingli",
"msg_date": "Wed, 12 Oct 2022 21:39:11 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 09:39:11PM +0800, Zhang Mingli wrote:\n> Patch added.\n\nThanks. This looks fine on a second look, so applied.\n--\nMichael",
"msg_date": "Thu, 13 Oct 2022 09:33:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Thanks. This looks fine on a second look, so applied.\n\nDon't we need to back-patch these fixes?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Oct 2022 20:54:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 08:54:34PM -0400, Tom Lane wrote:\n> Don't we need to back-patch these fixes?\n\nI guess I should, though I have finished by not doing due to the\nunlikeliness of the problem, where we would need the combination of a\npage eviction with an error in the critical section to force a PANIC,\nor a crash before the WAL gets inserted. Other opinions?\n--\nMichael",
"msg_date": "Thu, 13 Oct 2022 10:24:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "On 2022-Oct-13, Michael Paquier wrote:\n\n> On Wed, Oct 12, 2022 at 08:54:34PM -0400, Tom Lane wrote:\n> > Don't we need to back-patch these fixes?\n> \n> I guess I should, though I have finished by not doing due to the\n> unlikeliness of the problem, where we would need the combination of a\n> page eviction with an error in the critical section to force a PANIC,\n> or a crash before the WAL gets inserted. Other opinions?\n\nI suppose it's a matter of whether any bugs are possible outside of\nNeon. If yes, then definitely this should be backpatched. Offhand, I\ndon't see any. On the other hand, even if no bugs are known, then it's\nstill valuable to have all code paths do WAL insertion in the same way,\nrather than having a single place that is alone in doing it in a\ndifferent way. But if we don't know of any bugs, then backpatching\nmight be more risk than not doing so.\n\nI confess I don't understand why is it important that XLogBeginInsert is\ncalled inside the critical section. It seems to me that that part is\nonly a side-effect of having to acquire the buffer locks in the critical\nsection. Right?\n\nI noticed that line 427 logs the GIN metapage with flag REGBUF_STANDARD;\nis the GIN metapage really honoring pd_upper? I see only pg_lower being\nset.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La grandeza es una experiencia transitoria. Nunca es consistente.\nDepende en gran parte de la imaginación humana creadora de mitos\"\n(Irulan)\n\n\n",
"msg_date": "Mon, 24 Oct 2022 14:22:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 02:22:16PM +0200, Alvaro Herrera wrote:\n> I suppose it's a matter of whether any bugs are possible outside of\n> Neon. If yes, then definitely this should be backpatched. Offhand, I\n> don't see any. On the other hand, even if no bugs are known, then it's\n> still valuable to have all code paths do WAL insertion in the same way,\n> rather than having a single place that is alone in doing it in a\n> different way. But if we don't know of any bugs, then backpatching\n> might be more risk than not doing so.\n\nI have been putting my mind on that once again, and I don't see how\nthis would cause an issue in vanilla PG code. XLogBeginInsert() does\nits checks, meaning that we'd get a PANIC instead of an ERROR now that\nthese calls are within a critical section but that should not matter\nbecause we know that recovery has ended or we would not be able to do\nGIN insertions like that. Then, it only switches to\nbegininsert_called to true, that we use for sanity checks in the\nvarious WAL insert paths. As Matthias has outlined, Neon relies on\nbegininsert_called more than we do currently. FWIW, I think that\nwe're still fine not backpatching that, even considering the\nconsistency of the code with stable branches. Now, if there is a\nstrong trend in favor of a backpatch, I'd be fine with this conclusion\nas well.\n\n> I confess I don't understand why is it important that XLogBeginInsert is\n> called inside the critical section. It seems to me that that part is\n> only a side-effect of having to acquire the buffer locks in the critical\n> section. Right?\n\nYeah, you are right that it would not matter for XLogBeginInsert(),\nthough I'd like to think that this is a good practice on consistency\ngrounds with anywhere else, and we respect what's documented in the\nREADME.\n\n> I noticed that line 427 logs the GIN metapage with flag REGBUF_STANDARD;\n> is the GIN metapage really honoring pd_upper? 
I see only pg_lower being\n> set.\n\nHmm. Not sure.\n--\nMichael",
"msg_date": "Tue, 25 Oct 2022 13:30:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Oct 24, 2022 at 02:22:16PM +0200, Alvaro Herrera wrote:\n>> I confess I don't understand why is it important that XLogBeginInsert is\n>> called inside the critical section. It seems to me that that part is\n>> only a side-effect of having to acquire the buffer locks in the critical\n>> section. Right?\n\n> Yeah, you are right that it would not matter for XLogBeginInsert(),\n> though I'd like to think that this is a good practice on consistency\n> grounds with anywhere else, and we respect what's documented in the\n> README.\n\nYeah --- it's documented that way, and there doesn't seem to be\na good reason not to honor that here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 Oct 2022 00:58:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "On 2022-Oct-25, Tom Lane wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, Oct 24, 2022 at 02:22:16PM +0200, Alvaro Herrera wrote:\n> >> I confess I don't understand why is it important that XLogBeginInsert is\n> >> called inside the critical section. It seems to me that that part is\n> >> only a side-effect of having to acquire the buffer locks in the critical\n> >> section. Right?\n> \n> > Yeah, you are right that it would not matter for XLogBeginInsert(),\n> > though I'd like to think that this is a good practice on consistency\n> > grounds with anywhere else, and we respect what's documented in the\n> > README.\n> \n> Yeah --- it's documented that way, and there doesn't seem to be\n> a good reason not to honor that here.\n\nOkay, so if we follow this argument, then the logical conclusion is that\nthis *should* be backpatched, after all.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)\n\n\n",
"msg_date": "Tue, 25 Oct 2022 09:37:08 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
},
{
"msg_contents": "On Tue, Oct 25, 2022 at 09:37:08AM +0200, Alvaro Herrera wrote:\n> Okay, so if we follow this argument, then the logical conclusion is that\n> this *should* be backpatched, after all.\n\nAfter sleeping on it and looking at all the stable branches involved,\nbackpatched down to v10.\n--\nMichael",
"msg_date": "Wed, 26 Oct 2022 09:43:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Issue in GIN fast-insert: XLogBeginInsert + Read/LockBuffer\n ordering"
}
]
[
{
"msg_contents": "Apparently, you can't add a table to a publication if its schema is \nalready part of the publication (and vice versa), e.g.,\n\n=# alter publication p1 add table s1.t1;\nERROR: 22023: cannot add relation \"s1.t1\" to publication\nDETAIL: Table's schema \"s1\" is already part of the publication or part \nof the specified schema list.\n\nIs there a reason for this? It looks a bit like a misfeature to me. It \nconstrains how you can move your tables around between schemas, based on \nhow somewhere else a publication has been constructed.\n\nIt seems to me that it shouldn't be difficult to handle the case that a \ntable is part of the publication via two different routes. (We must \nalready handle that since a subscription can use more than one publication.)\n\n\n",
"msg_date": "Thu, 8 Sep 2022 13:36:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 5:06 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Apparently, you can't add a table to a publication if its schema is\n> already part of the publication (and vice versa), e.g.,\n>\n> =# alter publication p1 add table s1.t1;\n> ERROR: 22023: cannot add relation \"s1.t1\" to publication\n> DETAIL: Table's schema \"s1\" is already part of the publication or part\n> of the specified schema list.\n>\n> Is there a reason for this?\n>\n\nYes, because otherwise, there was confusion while dropping the objects\nfrom publication. Consider in the above case, if we would have allowed\nit and then the user performs ALTER PUBLICATION p1 DROP ALL TABLES IN\nSCHEMA s1, then (a) shall we remove both schema s1 and a table that is\nseparately added (s1.t1) from that schema, or (b) just remove schema\ns1? There is a point that the user can expect that as she has added\nthem separately, so we should allow her to drop them separately as\nwell. OTOH, if we go that way, then it is again questionable that when\nthe user has asked to Drop all the tables in the schema (via DROP ALL\nTABLES IN SCHEMA command) then why keep some tables?\n\nThe other confusion from allowing publications to have schemas and\ntables from the same schema is that say the user has created a\npublication with the command CREATE PUBLICATION pub1 FOR ALL TABLES IN\nSCHEMA s1, and then she can ask to allow dropping one of the tables in\nthe schema by ALTER PUBLICATION pub1 DROP TABLE s1.t1. Now, it will be\ntricky to support and I think it doesn't make sense as well.\n\nSimilarly, what if the user has first created a publication with\n\"CREATE PUBLICATION pub1 ADD TABLE s1.t1, s1.t2, ... s1.t99;\" and then\ntries to drop all tables in one shot by ALTER PUBLICATION DROP ALL\nTABLES IN SCHEMA sch1;?\n\nTo avoid these confusions, we have disallowed adding a table if its\nschema is already part of the publication and vice-versa. 
Also, there\nwon't be any additional benefit to doing so.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 9 Sep 2022 11:28:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> To avoid these confusions, we have disallowed adding a table if its\n> schema is already part of the publication and vice-versa.\n\nReally?\n\nIs there logic in ALTER TABLE SET SCHEMA that rejects the command\ndependent on the contents of the publication tables? If so, are\nthere locks taken in both ALTER TABLE SET SCHEMA and the\npublication-modifying commands that are sufficient to prevent\nrace conditions in such changes?\n\nThis position sounds quite untenable from here, even if I found\nyour arguments-in-support convincing, which I don't really.\nISTM the rule should be along the lines of \"table S.T should\nbe published either if schema S is published or S.T itself is\".\nThere's no obvious need to interconnect the two conditions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 02:14:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 11:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > To avoid these confusions, we have disallowed adding a table if its\n> > schema is already part of the publication and vice-versa.\n>\n> Really?\n>\n> Is there logic in ALTER TABLE SET SCHEMA that rejects the command\n> dependent on the contents of the publication tables?\n>\n\nYes, it has. For example,\npostgres=# create schema s1;\nCREATE SCHEMA\npostgres=# create table s1.t1(c1 int);\nCREATE TABLE\npostgres=# create schema s2;\nCREATE SCHEMA\n\npostgres=# create publication pub1 for all tables in schema s2, table s1.t1;\nCREATE PUBLICATION\n\npostgres=# Alter table s1.t1 set schema s2;\nERROR: cannot move table \"t1\" to schema \"s2\"\nDETAIL: The schema \"s2\" and same schema's table \"t1\" cannot be part\nof the same publication \"pub1\".\n\n> If so, are\n> there locks taken in both ALTER TABLE SET SCHEMA and the\n> publication-modifying commands that are sufficient to prevent\n> race conditions in such changes?\n>\n\nGood point. I have checked it and found that ALTER TABLE SET SCHEMA\ntakes AccessExclusiveLock on relation and AccessShareLock on the\nschema which it is going to set. The alter publication command takes\nShareUpdateExclusiveLock on relation for dropping/adding a table to\npublication which will prevent any race condition with ALTER TABLE SET\nSCHEMA. However, the alter publication command takes AccessShareLock\nfor dropping/adding schema which won't block with ALTER TABLE SET\nSCHEMA command. 
So, I think we need to change the lock mode for it in\nalter publication command.\n\n\n> This position sounds quite untenable from here, even if I found\n> your arguments-in-support convincing, which I don't really.\n> ISTM the rule should be along the lines of \"table S.T should\n> be published either if schema S is published or S.T itself is\".\n> There's no obvious need to interconnect the two conditions.\n>\n\nThis rule is currently followed when a subscription has more than one\npublication. It is just that we didn't allow it in the same\npublication because of a fear that it may cause confusion for some of\nthe users. The other thing to look at here is that the existing case\nof a \"FOR ALL TABLES\" publication also follows a similar rule such\nthat it doesn't allow adding individual tables if the publication is\nfor all tables. For example,\n\npostgres=# create publication pub1 for all tables;\nCREATE PUBLICATION\npostgres=# alter publication pub1 add table t1;\nERROR: publication \"pub1\" is defined as FOR ALL TABLES\nDETAIL: Tables cannot be added to or dropped from FOR ALL TABLES publications.\n\nSo, why shouldn't a \"FOR ALL TABLES IN SCHEMA\" publication follow a\nsimilar behavior?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 9 Sep 2022 14:50:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 5:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> So, why shouldn't a \"FOR ALL TABLES IN SCHEMA\" publication follow a\n> similar behavior?\n\nSurely that is not the same case at all. If you're publishing\neverything, there's no point in also having a specific list of things\nthat you want published, but when you're publishing only some things,\nthere is. If my wife tells me to wash everything in the laundry basket\nand also my nice pants, and I discover that my nice pants already\nhappen to be in the laundry basket, I do not tell her:\n\nERROR: my nice pants are already in the laundry basket\n\nIt feels like a mistake to me that there's any catalog representation\nat all for a table that is published because it is part of a schema.\nThe way a feature like this should work is that the schema should be\nlabelled as published, and we should discover which tables are part of\nit at any given time as we go. We shouldn't need separate catalog\nentries for each table in the schema just because the schema is\npublished. But if we do have such catalog entries, surely there should\nbe a difference between the catalog entry that gets created when the\ntable is individually published and the one that gets created when the\ncontaining schema is published. We have such tracking in other cases\n(coninhcount, conislocal; attinhcount, attislocal).\n\nIn my opinion, this shouldn't have been committed working the way it does.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 09:56:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Friday, September 9, 2022 9:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\r\n> \r\n> On Fri, Sep 9, 2022 at 5:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > So, why shouldn't a \"FOR ALL TABLES IN SCHEMA\" publication follow a\r\n> > similar behavior?\r\n\r\nHi\r\n> \r\n> It feels like a mistake to me that there's any catalog representation at all for a\r\n> table that is published because it is part of a schema.\r\n> The way a feature like this should work is that the schema should be labelled as\r\n> published, and we should discover which tables are part of it at any given time as\r\n> we go. We shouldn't need separate catalog entries for each table in the schema\r\n> just because the schema is published. \r\n\r\nIIRC, the feature currently works almost the same as you described. It doesn't\r\ncreate entry for tables that are published via its schema level, it only record\r\nthe published schema and check which tables are part of it.\r\n\r\nSorry, If I misunderstand your points or missed something.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n",
"msg_date": "Fri, 9 Sep 2022 14:29:37 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 10:29 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> IIRC, the feature currently works almost the same as you described. It doesn't\n> create entry for tables that are published via its schema level, it only record\n> the published schema and check which tables are part of it.\n\nOh, well if that's the case, that is great news. But then I don't\nunderstand Amit's comment from before:\n\n> Yes, because otherwise, there was confusion while dropping the objects\n> from publication. Consider in the above case, if we would have allowed\n> it and then the user performs ALTER PUBLICATION p1 DROP ALL TABLES IN\n> SCHEMA s1, then (a) shall we remove both schema s1 and a table that is\n> separately added (s1.t1) from that schema, or (b) just remove schema\n> s1?\n\nI believe that (b) is the correct behavior, so I assumed that this\nissue must be some difficulty in implementing it, like a funny catalog\nrepresentation.\n\nThings might be clearer if we'd made the syntax \"ALTER PUBLICATION p1\n{ ADD | DROP } { TABLE | SCHEMA } name\". I don't understand why we\nused this ALL TABLES IN SCHEMA language.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 11:18:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "\n\n> On Sep 9, 2022, at 8:18 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Things might be clearer if we'd made the syntax \"ALTER PUBLICATION p1\n> { ADD | DROP } { TABLE | SCHEMA } name\". I don't understand why we\n> used this ALL TABLES IN SCHEMA language.\n\nThe conversation, as I recall, was that \"ADD SCHEMA foo\" would only mean all tables in foo, until publication of other object types became supported, at which point \"ADD SCHEMA foo\" would suddenly mean more than it did before. People might find that surprising, so the \"ALL TABLES IN\" was intended to future-proof against surprising behavioral changes.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 9 Sep 2022 11:17:12 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 8:48 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Sep 9, 2022 at 10:29 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > IIRC, the feature currently works almost the same as you described. It doesn't\n> > create entry for tables that are published via its schema level, it only record\n> > the published schema and check which tables are part of it.\n>\n> Oh, well if that's the case, that is great news.\n>\n\nYes, the feature works as you and Hou-San have mentioned.\n\n> But then I don't\n> understand Amit's comment from before:\n>\n> > Yes, because otherwise, there was confusion while dropping the objects\n> > from publication. Consider in the above case, if we would have allowed\n> > it and then the user performs ALTER PUBLICATION p1 DROP ALL TABLES IN\n> > SCHEMA s1, then (a) shall we remove both schema s1 and a table that is\n> > separately added (s1.t1) from that schema, or (b) just remove schema\n> > s1?\n>\n> I believe that (b) is the correct behavior, so I assumed that this\n> issue must be some difficulty in implementing it, like a funny catalog\n> representation.\n>\n\nNo, it was because of syntax. IIRC, during development, Greg Nancarrow\nraised a point [1] that a user can expect the individually added\ntables for a schema which is also part of the publication to also get\ndropped when she specifies DROP ALL TABLES IN SCHEMA. IIRC,\noriginally, the patch had a behavior (b) but then changed due to\ndiscussion around this point. But now that it seems you and others\ndon't feel that was right, we can change back to (b) as I think that\nshouldn't be difficult to achieve.\n\n> Things might be clearer if we'd made the syntax \"ALTER PUBLICATION p1\n> { ADD | DROP } { TABLE | SCHEMA } name\". 
I don't understand why we\n> used this ALL TABLES IN SCHEMA language.\n>\n\nIt was exactly due to the reason Mark had mentioned in the email [2].\n\n[1] - https://www.postgresql.org/message-id/CAJcOf-fTRZ3HiA5xU0-O-PT390A7wuUUkjP8uX3aQJLBsJNVmw%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/596EA671-66DF-4285-8560-0270DC062353%40enterprisedb.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 10 Sep 2022 07:31:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 2:17 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > On Sep 9, 2022, at 8:18 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > Things might be clearer if we'd made the syntax \"ALTER PUBLICATION p1\n> > { ADD | DROP } { TABLE | SCHEMA } name\". I don't understand why we\n> > used this ALL TABLES IN SCHEMA language.\n>\n> The conversation, as I recall, was that \"ADD SCHEMA foo\" would only mean all tables in foo, until publication of other object types became supported, at which point \"ADD SCHEMA foo\" would suddenly mean more than it did before. People might find that surprising, so the \"ALL TABLES IN\" was intended to future-proof against surprising behavioral changes.\n\nIf I encountered this syntax in a vacuum, that's not what I would\nthink. I would think that ADD ALL TABLES IN SCHEMA meant add all the\ntables in the schema to the publication one by one as individual\nobjects, i.e. add the tables that are currently as of this moment in\nthat schema to the publication; and I would think that ADD SCHEMA\nmeant remember that this schema is part of the publication and so\nwhenever tables are created and dropped in that schema (or moved in\nand out) what is being published is automatically updated.\n\nThe analogy here seems to be to GRANT, which actually does support\nboth syntaxes. And if I understand correctly, GRANT ON SCHEMA gives\nprivileges on the schema; whereas GRANT ON ALL TABLES IN SCHEMA\nmodifies each table that is currently in that schema (never mind what\nhappens later).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 10 Sep 2022 19:17:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Sat, 10 Sept 2022 at 19:18, Robert Haas <robertmhaas@gmail.com> wrote:\n\nIf I encountered this syntax in a vacuum, that's not what I would\n> think. I would think that ADD ALL TABLES IN SCHEMA meant add all the\n> tables in the schema to the publication one by one as individual\n> objects, i.e. add the tables that are currently as of this moment in\n> that schema to the publication; and I would think that ADD SCHEMA\n> meant remember that this schema is part of the publication and so\n> whenever tables are created and dropped in that schema (or moved in\n> and out) what is being published is automatically updated.\n>\n> The analogy here seems to be to GRANT, which actually does support\n> both syntaxes. And if I understand correctly, GRANT ON SCHEMA gives\n> privileges on the schema; whereas GRANT ON ALL TABLES IN SCHEMA\n> modifies each table that is currently in that schema (never mind what\n> happens later).\n>\n\nYes, except GRANT ON SCHEMA only grants access to the schema - CREATE or\nUSAGE. You cannot write GRANT SELECT ON SCHEMA to grant access to all\ntables in the schema.\n\nOn Sat, 10 Sept 2022 at 19:18, Robert Haas <robertmhaas@gmail.com> wrote:\nIf I encountered this syntax in a vacuum, that's not what I would\nthink. I would think that ADD ALL TABLES IN SCHEMA meant add all the\ntables in the schema to the publication one by one as individual\nobjects, i.e. add the tables that are currently as of this moment in\nthat schema to the publication; and I would think that ADD SCHEMA\nmeant remember that this schema is part of the publication and so\nwhenever tables are created and dropped in that schema (or moved in\nand out) what is being published is automatically updated.\n\nThe analogy here seems to be to GRANT, which actually does support\nboth syntaxes. 
And if I understand correctly, GRANT ON SCHEMA gives\nprivileges on the schema; whereas GRANT ON ALL TABLES IN SCHEMA\nmodifies each table that is currently in that schema (never mind what\nhappens later).\nYes, except GRANT ON SCHEMA only grants access to the schema - CREATE or USAGE. You cannot write GRANT SELECT ON SCHEMA to grant access to all tables in the schema.",
"msg_date": "Sat, 10 Sep 2022 21:41:54 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "\n\n> On Sep 10, 2022, at 4:17 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n>>> I don't understand why we\n>>> used this ALL TABLES IN SCHEMA language.\n>> \n>> The conversation, as I recall, was that \"ADD SCHEMA foo\" would only mean all tables in foo, until publication of other object types became supported, at which point \"ADD SCHEMA foo\" would suddenly mean more than it did before. People might find that surprising, so the \"ALL TABLES IN\" was intended to future-proof against surprising behavioral changes.\n> \n> If I encountered this syntax in a vacuum, that's not what I would\n> think. I would think that ADD ALL TABLES IN SCHEMA meant add all the\n> tables in the schema to the publication one by one as individual\n> objects\n\nYes, it appears the syntax was chosen to avoid one kind of confusion, but created another kind. Per the docs on this feature:\n\n FOR ALL TABLES IN SCHEMA\n Marks the publication as one that replicates changes for all tables in the specified list of schemas, including tables created in the future.\n\nLike you, I wouldn't expect that definition, given the behavior of GRANT with respect to the same grammatical construction.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sun, 11 Sep 2022 10:07:49 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Monday, September 12, 2022 1:08 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\r\n> > > On Sep 10, 2022, at 4:17 PM, Robert Haas <robertmhaas@gmail.com> wrote:\r\n> >\r\n> >>> I don't understand why we\r\n> >>> used this ALL TABLES IN SCHEMA language.\r\n> >>\r\n> >> The conversation, as I recall, was that \"ADD SCHEMA foo\" would only mean\r\n> all tables in foo, until publication of other object types became supported, at\r\n> which point \"ADD SCHEMA foo\" would suddenly mean more than it did before.\r\n> People might find that surprising, so the \"ALL TABLES IN\" was intended to\r\n> future-proof against surprising behavioral changes.\r\n> >\r\n> > If I encountered this syntax in a vacuum, that's not what I would\r\n> > think. I would think that ADD ALL TABLES IN SCHEMA meant add all the\r\n> > tables in the schema to the publication one by one as individual\r\n> > objects\r\n> \r\n> Yes, it appears the syntax was chosen to avoid one kind of confusion, but created\r\n> another kind. Per the docs on this feature:\r\n> \r\n> FOR ALL TABLES IN SCHEMA\r\n> Marks the publication as one that replicates changes for all tables in the\r\n> specified list of schemas, including tables created in the future.\r\n> \r\n> Like you, I wouldn't expect that definition, given the behavior of GRANT with\r\n> respect to the same grammatical construction.\r\n\r\nI'm a bit unsure if it should be compared to GRANT. Because even if we chose\r\n\"ALTER PUBLICATION p1 { ADD | DROP } SCHEMA name\", it's also not\r\nconsistent with the meaning of GRANT ON SCHEMA, as GRANT ON SCHEMA doesn't\r\ngrant rights on the tables within schema if I understand correctly.\r\n\r\nI feel we'd better compare the syntax with the existing publication command:\r\nFOR ALL TABLES. If you create a publication FOR ALL TABLES, it means publishing\r\nall the tables in the database *including* tables created in the future. 
I\r\nthink both the syntax and meaning of ALL TABLES IN SCHEMA are consistent with\r\nthe existing FOR ALL TABLES.\r\n\r\nAnd the behavior is clearly documented, so personally I think it's fine.\r\nhttps://www.postgresql.org/docs/devel/sql-createpublication.html\r\n--\r\nFOR ALL TABLES\r\n\tMarks the publication as one that replicates changes for all tables in the database, including tables created in the future.\r\nFOR ALL TABLES IN SCHEMA\r\n\tMarks the publication as one that replicates changes for all tables in the specified list of schemas, including tables created in the future.\r\n--\r\n\r\nBesides, as mentioned(and suggested by Tom[1]), we might support publishing\r\nSEQUENCE(or others) in the future. It would give more flexibility to user if we\r\nhave another FOR ALL SEQUENCES(or other objects) IN SCHEMA.\r\n\r\n[1] https://www.postgresql.org/message-id/155565.1628954580%40sss.pgh.pa.us\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Mon, 12 Sep 2022 04:26:48 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Sat, 10 Sept 2022 at 07:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 9, 2022 at 8:48 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, Sep 9, 2022 at 10:29 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > IIRC, the feature currently works almost the same as you described. It doesn't\n> > > create entry for tables that are published via its schema level, it only record\n> > > the published schema and check which tables are part of it.\n> >\n> > Oh, well if that's the case, that is great news.\n> >\n>\n> Yes, the feature works as you and Hou-San have mentioned.\n>\n> > But then I don't\n> > understand Amit's comment from before:\n> >\n> > > Yes, because otherwise, there was confusion while dropping the objects\n> > > from publication. Consider in the above case, if we would have allowed\n> > > it and then the user performs ALTER PUBLICATION p1 DROP ALL TABLES IN\n> > > SCHEMA s1, then (a) shall we remove both schema s1 and a table that is\n> > > separately added (s1.t1) from that schema, or (b) just remove schema\n> > > s1?\n> >\n> > I believe that (b) is the correct behavior, so I assumed that this\n> > issue must be some difficulty in implementing it, like a funny catalog\n> > representation.\n> >\n>\n> No, it was because of syntax. IIRC, during development, Greg Nancarrow\n> raised a point [1] that a user can expect the individually added\n> tables for a schema which is also part of the publication to also get\n> dropped when she specifies DROP ALL TABLES IN SCHEMA. IIRC,\n> originally, the patch had a behavior (b) but then changed due to\n> discussion around this point. But now that it seems you and others\n> don't feel that was right, we can change back to (b) as I think that\n> shouldn't be difficult to achieve.\n\nI have made the changes to allow creation of publication with a schema\nand table of the same schema. 
The attached patch has the changes for\nthe same.\nI'm planning to review and test the patch further.\n\nRegards,\nVignesh",
"msg_date": "Mon, 12 Sep 2022 19:44:18 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "At Mon, 12 Sep 2022 04:26:48 +0000, \"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com> wrote in \n> On Monday, September 12, 2022 1:08 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > > > On Sep 10, 2022, at 4:17 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > >>> I don't understand why we\n> > >>> used this ALL TABLES IN SCHEMA language.\n> > >>\n> > >> The conversation, as I recall, was that \"ADD SCHEMA foo\" would only mean\n> > all tables in foo, until publication of other object types became supported, at\n> > which point \"ADD SCHEMA foo\" would suddenly mean more than it did before.\n> > People might find that surprising, so the \"ALL TABLES IN\" was intended to\n> > future-proof against surprising behavioral changes.\n> > >\n> > > If I encountered this syntax in a vacuum, that's not what I would\n> > > think. I would think that ADD ALL TABLES IN SCHEMA meant add all the\n> > > tables in the schema to the publication one by one as individual\n> > > objects\n> > \n> > Yes, it appears the syntax was chosen to avoid one kind of confusion, but created\n> > another kind. Per the docs on this feature:\n> > \n> > FOR ALL TABLES IN SCHEMA\n> > Marks the publication as one that replicates changes for all tables in the\n> > specified list of schemas, including tables created in the future.\n> > \n> > Like you, I wouldn't expect that definition, given the behavior of GRANT with\n> > respect to the same grammatical construction.\n> \n> I'm a bit unsure if it should be compared to GRANT. Because even if we chose\n> \"ALTER PUBLICATION p1 { ADD | DROP } SCHEMA name\", it's also not\n> consistent with the meaning of GRANT ON SCHEMA, as GRANT ON SCHEMA doesn't\n> grant rights on the tables within schema if I understand correctly.\n>\n> I feel we'd better compare the syntax with the existing publication command:\n> FOR ALL TABLES. 
If you create a publication FOR ALL TABLES, it means publishing\n> all the tables in the database *including* tables created in the future. I\n> think both the syntax and meaning of ALL TABLES IN SCHEMA are consistent with\n> the existing FOR ALL TABLES.\n\nIMHO, I feel closer to Robert. \"ALL TABLES IN SCHEMA\" sounds like the\nconcrete tables at the time of invocation. While I agree that it is\nnot directly comparable to GRANT, if I see \"ALTER PUBLICATION p1\nADD SCHEMA s1\", I automatically translate that into \"all tables in the\nschema s1 at the time of using this publication\". At least, it would\ncause less confusion if it were \"ALTER PUBLICATION p1 DROP SCHEMA s1\" against\n\"DROP ALL TABLES IN SCHEMA s1\".\n\nHowever...\n\n> And the behavior is clearly documented, so personally I think it's fine.\n> https://www.postgresql.org/docs/devel/sql-createpublication.html\n> --\n> FOR ALL TABLES\n> \tMarks the publication as one that replicates changes for all tables in the database, including tables created in the future.\n> FOR ALL TABLES IN SCHEMA\n> \tMarks the publication as one that replicates changes for all tables in the specified list of schemas, including tables created in the future.\n> --\n> \n> Besides, as mentioned(and suggested by Tom[1]), we might support publishing\n> SEQUENCE(or others) in the future. It would give more flexibility to user if we\n> have another FOR ALL SEQUENCES(or other objects) IN SCHEMA.\n> \n> [1] https://www.postgresql.org/message-id/155565.1628954580%40sss.pgh.pa.us\n\nFair point. This may be stupid, but how about the following?\n\nCREATE PUBLICATION p1 FOR TABLES * IN SCHEMA s1;\nDROP PUBLICATION p1 FOR TABLES * IN SCHEMA s1;\nALTER PUBLICATION p1 ADD TABLES * IN SCHEMA s1;\nALTER PUBLICATION p1 DROP TABLES * IN SCHEMA s1;\n\nThis is an analog of synchronous_standby_names. But I'm not sure a\nbare asterisk can appear there... 
We could use ANY instead?\n\nCREATE PUBLICATION p1 FOR TABLES ANY IN SCHEMA s1;\n...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 13 Sep 2022 13:40:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Tuesday, September 13, 2022 12:40 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> At Mon, 12 Sep 2022 04:26:48 +0000, \"houzj.fnst@fujitsu.com\"\n> <houzj.fnst@fujitsu.com> wrote in\n> > On Monday, September 12, 2022 1:08 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > > > > On Sep 10, 2022, at 4:17 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > > >\n> > > >>> I don't understand why we\n> > > >>> used this ALL TABLES IN SCHEMA language.\n> > > >>\n> > > >> The conversation, as I recall, was that \"ADD SCHEMA foo\" would\n> > > >> only mean\n> > > all tables in foo, until publication of other object types became\n> > > supported, at which point \"ADD SCHEMA foo\" would suddenly mean more than it did before.\n> > > People might find that surprising, so the \"ALL TABLES IN\" was\n> > > intended to future-proof against surprising behavioral changes.\n> > > >\n> > > > If I encountered this syntax in a vacuum, that's not what I would\n> > > > think. I would think that ADD ALL TABLES IN SCHEMA meant add all\n> > > > the tables in the schema to the publication one by one as\n> > > > individual objects\n> > >\n> > > Yes, it appears the syntax was chosen to avoid one kind of\n> > > confusion, but created another kind. Per the docs on this feature:\n> > >\n> > > FOR ALL TABLES IN SCHEMA\n> > > Marks the publication as one that replicates changes for all\n> > > tables in the specified list of schemas, including tables created in the future.\n> > >\n> > > Like you, I wouldn't expect that definition, given the behavior of\n> > > GRANT with respect to the same grammatical construction.\n> >\n> > I'm a bit unsure if it should be compared to GRANT. 
Because even if we\n> > chose \"ALTER PUBLICATION p1 { ADD | DROP } SCHEMA name\", it's also not\n> > consistent with the meaning of GRANT ON SCHEMA, as GRANT ON SCHEMA\n> > doesn't grant rights on the tables within schema if I understand correctly.\n> >\n> > I feel we'd better compare the syntax with the existing publication command:\n> > FOR ALL TABLES. If you create a publication FOR ALL TABLES, it means\n> > publishing all the tables in the database *including* tables created\n> > in the future. I think both the syntax and meaning of ALL TABLES IN\n> > SCHEMA are consistent with the existing FOR ALL TABLES.\n> \n> IMHO, I feel closer to Robert. \"ALL TABLES IN SCHEMA\" sounds like the\n> concrete tables at the time of invocation. While I agree that it is not directly\n> comparable to GRANT, but if I see \"ALTER PUBLICATION p1 ADD SCHEMA s1\", I\n> automatically translate that into \"all tables in the schema s1 at the time of using\n> this publication\". At least, it would cause less confusion when it were \"ALT PUB\n> p1 DROP SCEMA s1\" aginst \"DROP ALL TABLES IN SCHEMA s1\".\n> \n> However..\n> \n> > And the behavior is clearly documented, so personally I think it's fine.\n> > https://www.postgresql.org/docs/devel/sql-createpublication.html\n> > --\n> > FOR ALL TABLES\n> > \tMarks the publication as one that replicates changes for all tables in the\n> database, including tables created in the future.\n> > FOR ALL TABLES IN SCHEMA\n> > \tMarks the publication as one that replicates changes for all tables in the\n> specified list of schemas, including tables created in the future.\n> > --\n> >\n> > Besides, as mentioned(and suggested by Tom[1]), we might support\n> > publishing SEQUENCE(or others) in the future. It would give more\n> > flexibility to user if we have another FOR ALL SEQUENCES(or other objects) IN\n> SCHEMA.\n> >\n> > [1]\n> >\n> https://www.postgresql.org/message-id/155565.1628954580%40sss.pgh.pa.u\n> > s\n> \n> Fair point. 
Should be stupid, but how about the following?\n> \n> CREATE PUBLICATION p1 FOR TABLES * IN SCHEMA s1;\n> DROP PUBLICATION p1 FOR TABLES * IN SCHEMA s1;\n> ATLER PUBLICATION p1 ADD TABLES * IN SCHEMA s1; ALTER PUBLICATION\n> p1 DROP TABLES * IN SCHEMA s1;\n> \n> This is an analog of synchronous_standby_names. But I'm not sure a bare\n> asterisc can appear there.. We could use ANY instead?\n> \n> CREATE PUBLICATION p1 FOR TABLES ANY IN SCHEMA s1; ...\n\nThanks for the suggestions. But personally, I am not sure if this is better than the\ncurrent syntax as it seems syntactically inconsistent with the existing \"FOR\nALL TABLES\". Also, the behavior to include future tables is consistent with FOR\nALL TABLES.\n\nBest regards,\nHou zj\n\n\n",
"msg_date": "Tue, 13 Sep 2022 05:54:51 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Monday, September 12, 2022 10:14 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Sat, 10 Sept 2022 at 07:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Sep 9, 2022 at 8:48 PM Robert Haas <robertmhaas@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Fri, Sep 9, 2022 at 10:29 AM houzj.fnst@fujitsu.com\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > > IIRC, the feature currently works almost the same as you\r\n> > > > described. It doesn't create entry for tables that are published\r\n> > > > via its schema level, it only record the published schema and check which\r\n> tables are part of it.\r\n> > >\r\n> > > Oh, well if that's the case, that is great news.\r\n> > >\r\n> >\r\n> > Yes, the feature works as you and Hou-San have mentioned.\r\n> >\r\n> > > But then I don't\r\n> > > understand Amit's comment from before:\r\n> > >\r\n> > > > Yes, because otherwise, there was confusion while dropping the\r\n> > > > objects from publication. Consider in the above case, if we would\r\n> > > > have allowed it and then the user performs ALTER PUBLICATION p1\r\n> > > > DROP ALL TABLES IN SCHEMA s1, then (a) shall we remove both schema\r\n> > > > s1 and a table that is separately added (s1.t1) from that schema,\r\n> > > > or (b) just remove schema s1?\r\n> > >\r\n> > > I believe that (b) is the correct behavior, so I assumed that this\r\n> > > issue must be some difficulty in implementing it, like a funny\r\n> > > catalog representation.\r\n> > >\r\n> >\r\n> > No, it was because of syntax. IIRC, during development, Greg Nancarrow\r\n> > raised a point [1] that a user can expect the individually added\r\n> > tables for a schema which is also part of the publication to also get\r\n> > dropped when she specifies DROP ALL TABLES IN SCHEMA. IIRC,\r\n> > originally, the patch had a behavior (b) but then changed due to\r\n> > discussion around this point. 
But now that it seems you and others\r\n> > don't feel that was right, we can change back to (b) as I think that\r\n> > shouldn't be difficult to achieve.\r\n> \r\n> I have made the changes to allow creation of publication with a schema and\r\n> table of the same schema. The attached patch has the changes for the same.\r\n> I'm planning to review and test the patch further.\r\n\r\nThanks for the patch. While reviewing it, I found that the column list behavior\r\nmight need to be changed or confirmed after allowing the above case.\r\n\r\nAfter applying the patch, we support adding a table with column list along with\r\nthe table's schema[1], and it will directly apply the column list in the\r\nlogical replication after applying the patch.\r\n\r\n[1]--\r\nCREATE PUBLICATION pub FOR TABLE public.test(a), FOR ALL TABLES IN SCHEMA public;\r\n-----\r\n\r\nFrom the point of view of consistency, for the column list, we could report an\r\nERROR because we currently don't allow using different column lists for a\r\ntable. Maybe an ERROR like:\r\n\r\n\"ERROR: cannot use column for table x when the table's schema is also in the publication\"\r\n\r\nBut if we want to report an ERROR for the column list in the above case, we might need\r\nto restrict the ALTER TABLE SET SCHEMA as well because user could move a table\r\nwhich is published with column list to a schema that is also published in the\r\npublication, so we might need to add some similar check (which is removed in\r\nVignesh's patch) to tablecmds.c to disallow this case.\r\n\r\nAnother option could be to just ignore the column list if the table's schema is also\r\npart of the publication. But it seems slightly inconsistent with the rule that we\r\ndisallow using different column lists for a table.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 14 Sep 2022 05:10:32 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On 14.09.22 07:10, houzj.fnst@fujitsu.com wrote:\n> After applying the patch, we support adding a table with column list along with\n> the table's schema[1], and it will directly apply the column list in the\n> logical replication after applying the patch.\n> \n> [1]--\n> CREATE PUBLICATION pub FOR TABLE public.test(a), FOR ALL TABLES IN SCHEMA public;\n> -----\n> \n> If from the point of view of consistency, for column list, we could report an\n> ERROR because we currently don't allow using different column lists for a\n> table. Maybe an ERROR like:\n> \n> \"ERROR: cannot use column for table x when the table's schema is also in the publication\"\n> \n> But if we want to report an ERROR for column list in above case. We might need\n> to restrict the ALTER TABLE SET SCHEMA as well because user could move a table\n> which is published with column list to a schema that is also published in the\n> publication, so we might need to add some similar check(which is removed in\n> Vignesh's patch) to tablecmd.c to disallow this case.\n> \n> Another option could be just ingore the column list if table's schema is also\n> part of publication. But it seems slightly inconsistent with the rule that we\n> disallow using different column list for a table.\n\nIgnoring things doesn't seem like a good idea.\n\nA solution might be to disallow adding any schemas to a publication if \ncolumn lists on a table are specified.\n\n\n",
"msg_date": "Wed, 14 Sep 2022 21:36:37 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Thursday, September 15, 2022 3:37 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\r\n\r\nHi,\r\n\r\n> \r\n> On 14.09.22 07:10, houzj.fnst@fujitsu.com wrote:\r\n> > After applying the patch, we support adding a table with column list\r\n> > along with the table's schema[1], and it will directly apply the\r\n> > column list in the logical replication after applying the patch.\r\n> >\r\n> > [1]--\r\n> > CREATE PUBLICATION pub FOR TABLE public.test(a), FOR ALL TABLES IN\r\n> > SCHEMA public;\r\n> > -----\r\n> >\r\n> > If from the point of view of consistency, for column list, we could\r\n> > report an ERROR because we currently don't allow using different\r\n> > column lists for a table. Maybe an ERROR like:\r\n> >\r\n> > \"ERROR: cannot use column for table x when the table's schema is also in the\r\n> publication\"\r\n> >\r\n> > But if we want to report an ERROR for column list in above case. We\r\n> > might need to restrict the ALTER TABLE SET SCHEMA as well because user\r\n> > could move a table which is published with column list to a schema\r\n> > that is also published in the publication, so we might need to add\r\n> > some similar check(which is removed in Vignesh's patch) to tablecmd.c to\r\n> disallow this case.\r\n> >\r\n> > Another option could be just ingore the column list if table's schema\r\n> > is also part of publication. But it seems slightly inconsistent with\r\n> > the rule that we disallow using different column list for a table.\r\n> \r\n> Ignoring things doesn't seem like a good idea.\r\n> \r\n> A solution might be to disallow adding any schemas to a publication if column\r\n> lists on a table are specified.\r\n\r\nThanks for the suggestion. 
If I understand correctly, you mean we can disallow\r\npublishing a table with column list and any schema (a schema that the table\r\nmight not be part of) in the same publication[1].\r\n\r\nsomething like--\r\n[1]CREATE PUBLICATION pub FOR TABLE public.test(a), ALL TABLES IN SCHEMA s2;\r\nERROR: \"cannot add schema to publication when column list is used in the published table\"\r\n--\r\n\r\nPersonally, it looks acceptable to me as user can anyway achieve the same\r\npurpose by creating several publications and combining them, and we can save the\r\nrestriction at ALTER TABLE SET SCHEMA. Although it restricts some cases.\r\nI will post a top-up patch about this soon.\r\n\r\n\r\nAbout the row filter handling, maybe we don't need to restrict the row filter like\r\nabove? Because the rule is to simply merge the row filter with 'OR' among\r\npublications, so it seems we could ignore the row filter in the publication when\r\nthe table's schema is also published in the same publication (which means no filter).\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Thu, 15 Sep 2022 02:48:20 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Thursday, September 15, 2022 10:48 AM houzj.fnst@fujitsu.com wrote:\r\n> \r\n> On Thursday, September 15, 2022 3:37 AM Peter Eisentraut\r\n> <peter.eisentraut@enterprisedb.com> wrote:\r\n> \r\n> Hi,\r\n> \r\n> >\r\n> > On 14.09.22 07:10, houzj.fnst@fujitsu.com wrote:\r\n> > > After applying the patch, we support adding a table with column list\r\n> > > along with the table's schema[1], and it will directly apply the\r\n> > > column list in the logical replication after applying the patch.\r\n> > >\r\n> > > [1]--\r\n> > > CREATE PUBLICATION pub FOR TABLE public.test(a), FOR ALL TABLES IN\r\n> > > SCHEMA public;\r\n> > > -----\r\n> > >\r\n> > > If from the point of view of consistency, for column list, we could\r\n> > > report an ERROR because we currently don't allow using different\r\n> > > column lists for a table. Maybe an ERROR like:\r\n> > >\r\n> > > \"ERROR: cannot use column for table x when the table's schema is\r\n> > > also in the\r\n> > publication\"\r\n> > >\r\n> > > But if we want to report an ERROR for column list in above case. We\r\n> > > might need to restrict the ALTER TABLE SET SCHEMA as well because\r\n> > > user could move a table which is published with column list to a\r\n> > > schema that is also published in the publication, so we might need\r\n> > > to add some similar check(which is removed in Vignesh's patch) to\r\n> > > tablecmd.c to\r\n> > disallow this case.\r\n> > >\r\n> > > Another option could be just ingore the column list if table's\r\n> > > schema is also part of publication. But it seems slightly\r\n> > > inconsistent with the rule that we disallow using different column list for a\r\n> table.\r\n> >\r\n> > Ignoring things doesn't seem like a good idea.\r\n> >\r\n> > A solution might be to disallow adding any schemas to a publication if\r\n> > column lists on a table are specified.\r\n> \r\n> Thanks for the suggestion. 
If I understand correctly, you mean we can disallow\r\n> publishing a table with column list and any schema(a schema that the table\r\n> might not be part of) in the same publication[1].\r\n> \r\n> something like--\r\n> [1]CREATE PUBLICATION pub FOR TABLE public.test(a), ALL TABLES IN SCHEMA\r\n> s2;\r\n> ERROR: \"cannot add schema to publication when column list is used in the\r\n> published table\"\r\n> --\r\n> \r\n> Personally, it looks acceptable to me as user can anyway achieve the same\r\n> purpose by creating serval publications and combine it and we can save the\r\n> restriction at ALTER TABLE SET SCHEMA. Although it restricts some cases.\r\n> I will post a top-up patch about this soon.\r\n> \r\n> \r\n> About the row filter handling, maybe we don't need to restrict row filter like\r\n> above ? Because the rule is to simply merge the row filter with 'OR' among\r\n> publications, so it seems we could ignore the row filter in the publication when\r\n> the table's schema is also published in the same publication(which means no\r\n> filter).\r\n\r\nAttached is the new version patch, which adds the suggested restriction for the column list\r\nand merges Vignesh's patch.\r\n\r\nSome other documents might need to be updated. I will update them soon.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Thu, 15 Sep 2022 12:57:07 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Thu, Sep 15, 2022 at 10:57 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n...\n> Attach the new version patch which added suggested restriction for column list\n> and merged Vignesh's patch.\n>\n> Some other document might need to be updated. I will update them soon.\n>\n> Best regards,\n> Hou zj\n\nHi Hou-san.\n\nFYI, I found your v3 patch apply was broken due to some very recent changes pushed:\n\ne.g.\nerror: patch failed: doc/src/sgml/logical-replication.sgml:1142\n\n~~\n\nPSA a rebase of your same patch (I left the version number the same)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 16 Sep 2022 11:28:58 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Thu, Sep 15, 2022 at 8:18 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, September 15, 2022 3:37 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> > > Another option could be just ingore the column list if table's schema\n> > > is also part of publication. But it seems slightly inconsistent with\n> > > the rule that we disallow using different column list for a table.\n> >\n> > Ignoring things doesn't seem like a good idea.\n> >\n> > A solution might be to disallow adding any schemas to a publication if column\n> > lists on a table are specified.\n>\n> Thanks for the suggestion. If I understand correctly, you mean we can disallow\n> publishing a table with column list and any schema(a schema that the table\n> might not be part of) in the same publication[1].\n>\n> something like--\n> [1]CREATE PUBLICATION pub FOR TABLE public.test(a), ALL TABLES IN SCHEMA s2;\n> ERROR: \"cannot add schema to publication when column list is used in the published table\"\n> --\n>\n> Personally, it looks acceptable to me as user can anyway achieve the same\n> purpose by creating serval publications and combine it and we can save the\n> restriction at ALTER TABLE SET SCHEMA. Although it restricts some cases.\n>\n\nYeah, I agree that it restricts more cases for how different\ncombinations can be specified for a publication but OTOH it helps to\nlift the restriction in ALTER TABLE ... SET SCHEMA which seems like a\ngood trade-off.\n\n> I will post a top-up patch about this soon.\n>\n>\n> About the row filter handling, maybe we don't need to restrict row filter like\n> above ? 
Because the rule is to simply merge the row filter with 'OR' among\n> publications, so it seems we could ignore the row filter in the publication when\n> the table's schema is also published in the same publication(which means no filter).\n>\n\nYeah, this is what we are doing internally when combining multiple\npublications but let me explain with an example the case of a single\npublication so that if anybody has any objections to it, we can\ndiscuss the same.\n\nCase-1: When row filter is specified *without* ALL TABLES IN SCHEMA clause\npostgres=# create table t1(c1 int, c2 int, c3 int);\nCREATE TABLE\npostgres=# create publication pub1 for table t1 where (c1 > 10);\nCREATE PUBLICATION\npostgres=# select pubname, schemaname, tablename, rowfilter from\npg_publication_tables;\n pubname | schemaname | tablename | rowfilter\n---------+------------+-----------+-----------\n pub1 | public | t1 | (c1 > 10)\n(1 row)\n\nCase-2: When row filter is specified *with* ALL TABLES IN SCHEMA clause\npostgres=# create schema s1;\nCREATE SCHEMA\npostgres=# create table s1.t2(c1 int, c2 int, c3 int);\nCREATE TABLE\npostgres=# create publication pub2 for table s1.t2 where (c1 > 10),\nall tables in schema s1;\nCREATE PUBLICATION\npostgres=# select pubname, schemaname, tablename, rowfilter from\npg_publication_tables;\n pubname | schemaname | tablename | rowfilter\n---------+------------+-----------+-----------\n pub1 | public | t1 | (c1 > 10)\n pub2 | s1 | t2 |\n(2 rows)\n\nSo, for case-2, the rowfilter is not considered. Note, case-2 was not\npossible before the patch which is discussed here and after the patch,\nthe behavior will be the same as we have it before when we combine\npublications.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 16 Sep 2022 09:07:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Thu, Sep 15, 2022 at 6:27 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach the new version patch which added suggested restriction for column list\n> and merged Vignesh's patch.\n>\n\nFew comments:\n============\n1.\n static void\n-CheckPubRelationColumnList(List *tables, const char *queryString,\n+CheckPubRelationColumnList(List *tables, bool publish_schema,\n+ const char *queryString,\n bool pubviaroot)\n\nIt is better to keep bool parameters together at the end.\n\n2.\n /*\n+ * Disallow using column list if any schema is in the publication.\n+ */\n+ if (publish_schema)\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"cannot use publication column list for relation \\\"%s.%s\\\"\",\n+ get_namespace_name(RelationGetNamespace(pri->relation)),\n+ RelationGetRelationName(pri->relation)),\n+ errdetail(\"Column list cannot be specified if any schema is part of\nthe publication or specified in the list.\"));\n\nI think it would be better to explain why we disallow this case.\n\n3.\n+ if (!heap_attisnull(coltuple, Anum_pg_publication_rel_prattrs, NULL))\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"cannot add schema to the publication\"),\n+ errdetail(\"Schema cannot be added if any table that specifies column\nlist is already part of the publication\"));\n\nA full stop is missing at the end in the errdetail message.\n\n4. I have modified a few comments in the attached. Please check and if\nyou like the changes then please include those in the next version.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 16 Sep 2022 11:12:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Friday, September 16, 2022 1:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Sep 15, 2022 at 6:27 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the new version patch which added suggested restriction for\r\n> > column list and merged Vignesh's patch.\r\n> >\r\n> \r\n> Few comments:\r\n> ============\r\n> 1.\r\n> static void\r\n> -CheckPubRelationColumnList(List *tables, const char *queryString,\r\n> +CheckPubRelationColumnList(List *tables, bool publish_schema,\r\n> + const char *queryString,\r\n> bool pubviaroot)\r\n> \r\n> It is better to keep bool parameters together at the end.\r\n> \r\n> 2.\r\n> /*\r\n> + * Disallow using column list if any schema is in the publication.\r\n> + */\r\n> + if (publish_schema)\r\n> + ereport(ERROR,\r\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"cannot use publication column list for relation \\\"%s.%s\\\"\",\r\n> + get_namespace_name(RelationGetNamespace(pri->relation)),\r\n> + RelationGetRelationName(pri->relation)),\r\n> + errdetail(\"Column list cannot be specified if any schema is part of\r\n> the publication or specified in the list.\"));\r\n> \r\n> I think it would be better to explain why we disallow this case.\r\n> \r\n> 3.\r\n> + if (!heap_attisnull(coltuple, Anum_pg_publication_rel_prattrs, NULL))\r\n> + ereport(ERROR, errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"cannot add schema to the publication\"), errdetail(\"Schema\r\n> + cannot be added if any table that specifies column\r\n> list is already part of the publication\"));\r\n> \r\n> A full stop is missing at the end in the errdetail message.\r\n> \r\n> 4. I have modified a few comments in the attached. 
Please check and if you like\r\n> the changes then please include those in the next version.\r\n\r\nThanks for the comments.\r\nAttach the new version patch which addressed above comments and ran pgindent.\r\nI also improved some codes and documents based on some comments\r\ngiven by Vignesh and Peter offlist.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Fri, 16 Sep 2022 07:39:42 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: why can't a table be part of the same publication as its schema"
},
{
    "msg_contents": "On Fri, Sep 16, 2022 at 1:09 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach the new version patch which addressed above comments and ran pgindent.\n> I also improved some codes and documents based on some comments\n> given by Vignesh and Peter offlist.\n>\n\nThanks, the patch looks mostly good to me. I have made a few cosmetic\nchanges and edited a few comments. I would like to push this to HEAD\nand backpatch it to 15 by Tuesday unless there are any comments. I\nthink we should back patch this because otherwise, users will see a\nchange in behavior in 16 but if others don't think the same way then\nwe can consider pushing this to HEAD only.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 17 Sep 2022 08:51:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Saturday, September 17, 2022 11:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Sep 16, 2022 at 1:09 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > Attach the new version patch which addressed above comments and ran\r\n> pgident.\r\n> > I also improved some codes and documents based on some comments given\r\n> > by Vignesh and Peter offlist.\r\n> >\r\n> \r\n> Thanks, the patch looks mostly good to me. I have made a few cosmetic changes\r\n> and edited a few comments. I would like to push this to HEAD and backpatch it\r\n> to 15 by Tuesday unless there are any comments. I think we should back patch\r\n> this because otherwise, users will see a change in behavior in 16 but if others\r\n> don't think the same way then we can consider pushing this to HEAD only.\r\n\r\nThanks for the patch.\r\nI rebased it based on PG15 and here is the patch.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Sat, 17 Sep 2022 04:47:57 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "\n> diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml\n> index 1ae3287..0ab768d 100644\n> --- a/doc/src/sgml/logical-replication.sgml\n> +++ b/doc/src/sgml/logical-replication.sgml\n> @@ -1120,6 +1120,11 @@ test_sub=# SELECT * FROM child ORDER BY a;\n> </para>\n> \n> <para>\n> + Specifying a column list when the publication also publishes\n> + <literal>FOR ALL TABLES IN SCHEMA</literal> is not supported.\n> + </para>\n> +\n> + <para>\n> For partitioned tables, the publication parameter\n> <literal>publish_via_partition_root</literal> determines which column list\n> is used. If <literal>publish_via_partition_root</literal> is\n> \n> diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml\n> index 0a68c4b..0ced7da 100644\n> --- a/doc/src/sgml/ref/create_publication.sgml\n> +++ b/doc/src/sgml/ref/create_publication.sgml\n> @@ -103,17 +103,17 @@ CREATE PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> </para>\n> \n> <para>\n> + Specifying a column list when the publication also publishes\n> + <literal>FOR ALL TABLES IN SCHEMA</literal> is not supported.\n> + </para>\n> \n> @@ -733,6 +694,24 @@ CheckPubRelationColumnList(List *tables, const char *queryString,\n> \t\t\tcontinue;\n> \n> \t\t/*\n> +\t\t * Disallow specifying column list if any schema is in the\n> +\t\t * publication.\n> +\t\t *\n> +\t\t * XXX We could instead just forbid the case when the publication\n> +\t\t * tries to publish the table with a column list and a schema for that\n> +\t\t * table. However, if we do that then we need a restriction during\n> +\t\t * ALTER TABLE ... 
SET SCHEMA to prevent such a case which doesn't\n> +\t\t * seem to be a good idea.\n> +\t\t */\n> +\t\tif (publish_schema)\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\terrcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\terrmsg(\"cannot use publication column list for relation \\\"%s.%s\\\"\",\n> +\t\t\t\t\t\t get_namespace_name(RelationGetNamespace(pri->relation)),\n> +\t\t\t\t\t\t RelationGetRelationName(pri->relation)),\n> +\t\t\t\t\terrdetail(\"Column list cannot be specified if any schema is part of the publication or specified in the list.\"));\n> +\n\nThis seems a pretty arbitrary restriction. It feels like you're adding\nthis restriction precisely so that you don't have to write the code to\nreject the ALTER .. SET SCHEMA if an incompatible configuration is\ndetected. But we already have such checks in several cases, so I don't\nsee why this one does not seem a good idea.\n\nThe whole FOR ALL TABLES IN SCHEMA thing seems pretty weird in several\naspects. Others have already commented about the syntax, which is\nunlike what GRANT uses; I'm also surprised that we've gotten away with\nit being superuser-only. Why are we building more superuser-only\nfeatures in this day and age? I think not even FOR ALL TABLES should\nrequire superuser.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nThou shalt study thy libraries and strive not to reinvent them without\ncause, that thy code may be short and readable and thy days pleasant\nand productive. (7th Commandment for C Programmers)\n\n\n",
"msg_date": "Mon, 19 Sep 2022 17:16:27 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On 9/19/22 11:16 AM, Alvaro Herrera wrote:\r\n> \r\n>> diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml\r\n>> index 1ae3287..0ab768d 100644\r\n>> --- a/doc/src/sgml/logical-replication.sgml\r\n>> +++ b/doc/src/sgml/logical-replication.sgml\r\n>> @@ -1120,6 +1120,11 @@ test_sub=# SELECT * FROM child ORDER BY a;\r\n>> </para>\r\n>> \r\n>> <para>\r\n>> + Specifying a column list when the publication also publishes\r\n>> + <literal>FOR ALL TABLES IN SCHEMA</literal> is not supported.\r\n>> + </para>\r\n>> +\r\n>> + <para>\r\n>> For partitioned tables, the publication parameter\r\n>> <literal>publish_via_partition_root</literal> determines which column list\r\n>> is used. If <literal>publish_via_partition_root</literal> is\r\n>>\r\n>> diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml\r\n>> index 0a68c4b..0ced7da 100644\r\n>> --- a/doc/src/sgml/ref/create_publication.sgml\r\n>> +++ b/doc/src/sgml/ref/create_publication.sgml\r\n>> @@ -103,17 +103,17 @@ CREATE PUBLICATION <replaceable class=\"parameter\">name</replaceable>\r\n>> </para>\r\n>> \r\n>> <para>\r\n>> + Specifying a column list when the publication also publishes\r\n>> + <literal>FOR ALL TABLES IN SCHEMA</literal> is not supported.\r\n>> + </para>\r\n>>\r\n>> @@ -733,6 +694,24 @@ CheckPubRelationColumnList(List *tables, const char *queryString,\r\n>> \t\t\tcontinue;\r\n>> \r\n>> \t\t/*\r\n>> +\t\t * Disallow specifying column list if any schema is in the\r\n>> +\t\t * publication.\r\n>> +\t\t *\r\n>> +\t\t * XXX We could instead just forbid the case when the publication\r\n>> +\t\t * tries to publish the table with a column list and a schema for that\r\n>> +\t\t * table. However, if we do that then we need a restriction during\r\n>> +\t\t * ALTER TABLE ... 
SET SCHEMA to prevent such a case which doesn't\r\n>> +\t\t * seem to be a good idea.\r\n>> +\t\t */\r\n>> +\t\tif (publish_schema)\r\n>> +\t\t\tereport(ERROR,\r\n>> +\t\t\t\t\terrcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n>> +\t\t\t\t\terrmsg(\"cannot use publication column list for relation \\\"%s.%s\\\"\",\r\n>> +\t\t\t\t\t\t get_namespace_name(RelationGetNamespace(pri->relation)),\r\n>> +\t\t\t\t\t\t RelationGetRelationName(pri->relation)),\r\n>> +\t\t\t\t\terrdetail(\"Column list cannot be specified if any schema is part of the publication or specified in the list.\"));\r\n>> +\r\n> \r\n> This seems a pretty arbitrary restriction. It feels like you're adding\r\n> this restriction precisely so that you don't have to write the code to\r\n> reject the ALTER .. SET SCHEMA if an incompatible configuration is\r\n> detected. But we already have such checks in several cases, so I don't\r\n> see why this one does not seem a good idea.\r\n> \r\n> The whole FOR ALL TABLES IN SCHEMA thing seems pretty weird in several\r\n> aspects. Others have already commented about the syntax, which is\r\n> unlike what GRANT uses; I'm also surprised that we've gotten away with\r\n> it being superuser-only. Why are we building more superuser-only\r\n> features in this day and age? I think not even FOR ALL TABLES should\r\n> require superuser.\r\n\r\nFYI, I've added this to the PG15 open items as there are some open \r\nquestions to resolve in this thread.\r\n\r\nJonathan",
"msg_date": "Mon, 19 Sep 2022 16:52:06 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 8:46 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml\n> > index 1ae3287..0ab768d 100644\n> > --- a/doc/src/sgml/logical-replication.sgml\n> > +++ b/doc/src/sgml/logical-replication.sgml\n> > @@ -1120,6 +1120,11 @@ test_sub=# SELECT * FROM child ORDER BY a;\n> > </para>\n> >\n> > <para>\n> > + Specifying a column list when the publication also publishes\n> > + <literal>FOR ALL TABLES IN SCHEMA</literal> is not supported.\n> > + </para>\n> > +\n> > + <para>\n> > For partitioned tables, the publication parameter\n> > <literal>publish_via_partition_root</literal> determines which column list\n> > is used. If <literal>publish_via_partition_root</literal> is\n> >\n> > diff --git a/doc/src/sgml/ref/create_publication.sgml b/doc/src/sgml/ref/create_publication.sgml\n> > index 0a68c4b..0ced7da 100644\n> > --- a/doc/src/sgml/ref/create_publication.sgml\n> > +++ b/doc/src/sgml/ref/create_publication.sgml\n> > @@ -103,17 +103,17 @@ CREATE PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> > </para>\n> >\n> > <para>\n> > + Specifying a column list when the publication also publishes\n> > + <literal>FOR ALL TABLES IN SCHEMA</literal> is not supported.\n> > + </para>\n> >\n> > @@ -733,6 +694,24 @@ CheckPubRelationColumnList(List *tables, const char *queryString,\n> > continue;\n> >\n> > /*\n> > + * Disallow specifying column list if any schema is in the\n> > + * publication.\n> > + *\n> > + * XXX We could instead just forbid the case when the publication\n> > + * tries to publish the table with a column list and a schema for that\n> > + * table. However, if we do that then we need a restriction during\n> > + * ALTER TABLE ... 
SET SCHEMA to prevent such a case which doesn't\n> > + * seem to be a good idea.\n> > + */\n> > + if (publish_schema)\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"cannot use publication column list for relation \\\"%s.%s\\\"\",\n> > + get_namespace_name(RelationGetNamespace(pri->relation)),\n> > + RelationGetRelationName(pri->relation)),\n> > + errdetail(\"Column list cannot be specified if any schema is part of the publication or specified in the list.\"));\n> > +\n>\n> This seems a pretty arbitrary restriction. It feels like you're adding\n> this restriction precisely so that you don't have to write the code to\n> reject the ALTER .. SET SCHEMA if an incompatible configuration is\n> detected. But we already have such checks in several cases, so I don't\n> see why this one does not seem a good idea.\n>\n\nI agree that we have such checks at other places as well and one\nsomewhat similar is in ATPrepChangePersistence().\n\nATPrepChangePersistence()\n{\n...\n...\n/*\n * Check that the table is not part of any publication when changing to\n * UNLOGGED, as UNLOGGED tables can't be published.\n */\n\nHowever, another angle to look at it is that we try to avoid adding\nrestrictions in other DDL commands for defined publications. I am not\nsure but it appears to me Peter E. is not in favor of restrictions in\nother DDLs. I think we don't have a strict rule in this regard, so we\nare trying to see what makes the most sense based on feedback and do\nit accordingly.\n\n> The whole FOR ALL TABLES IN SCHEMA thing seems pretty weird in several\n> aspects. Others have already commented about the syntax, which is\n> unlike what GRANT uses; I'm also surprised that we've gotten away with\n> it being superuser-only. Why are we building more superuser-only\n> features in this day and age? 
I think not even FOR ALL TABLES should\n> require superuser.\n>\n\nThe intention was to be in sync with FOR ALL TABLES.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Sep 2022 07:39:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On 9/19/22 4:52 PM, Jonathan S. Katz wrote:\r\n> On 9/19/22 11:16 AM, Alvaro Herrera wrote:\r\n\r\n>> This seems a pretty arbitrary restriction. It feels like you're adding\r\n>> this restriction precisely so that you don't have to write the code to\r\n>> reject the ALTER .. SET SCHEMA if an incompatible configuration is\r\n>> detected. But we already have such checks in several cases, so I don't\r\n>> see why this one does not seem a good idea.\r\n>>\r\n>> The whole FOR ALL TABLES IN SCHEMA thing seems pretty weird in several\r\n>> aspects. Others have already commented about the syntax, which is\r\n>> unlike what GRANT uses; I'm also surprised that we've gotten away with\r\n>> it being superuser-only. Why are we building more superuser-only\r\n>> features in this day and age? I think not even FOR ALL TABLES should\r\n>> require superuser.\r\n> \r\n> FYI, I've added this to the PG15 open items as there are some open \r\n> questions to resolve in this thread.\r\n\r\n(Replying personally, not RMT).\r\n\r\nI wanted to enumerate the concerns raised in this thread in the context \r\nof the open item to understand what needs to be addressed, and also give \r\nan opinion. I did read up on the original thread to better understand \r\ncontext around decisions.\r\n\r\nI believe the concerns are these 3 things:\r\n\r\n1. Allowing calls that have \"ALL TABLES IN SCHEMA\" that include calls to \r\nspecific tables in schema\r\n2. The syntax of the \"ALL TABLES IN SCHEMA\" and comparing it to similar \r\nbehaviors in PostgreSQL\r\n3. Adding on an additional \"superuser-only\" feature\r\n\r\nFor #1 (allowing calls that have schema/table overlap...), there appears \r\nto be both a patch that allows this (reversing[8]), and a suggestion for \r\ndealing with a corner-case that is reasonable, i.e. disallowing adding \r\nschemas to a publication when specifying column-lists. 
Do we think we \r\ncan have consensus on this prior to the RC1 freeze?\r\n\r\nFor #2 (\"ALL TABLES IN SCHEMA\" syntax), this was heavily discussed on \r\nthe original thread[1][3][4][5][7]. I thought Tom's proposal on the \r\nsyntax[3] was reasonable as it \"future proofs\" for when we allow other \r\nschema-scoped objects to be published and give control over which ones \r\ncan be published.\r\n\r\nThe bigger issue seems to be around the behavior in regards to the \r\nsyntax. The current behavior is that when one specifies \"ALL TABLES IN \r\nSCHEMA\", any future tables created in that schema are added to the \r\npublication. While folks tried to find parallels to GRANT[6], I think \r\nthis actually resembles how we handle partitions that are \r\npublished[9][10], i.e.:\r\n\r\n\"When a partitioned table is added to a publication, all of its existing \r\nand future partitions are implicitly considered to be part of the \r\npublication.\"[10]\r\n\r\nAdditionally, this is the behavior that is already present in \"FOR ALL \r\nTABLES\":\r\n\r\n\"Marks the publication as one that replicates changes for all tables in \r\nthe database, including tables created in the future.\"[10]\r\n\r\nI don't think we should change this behavior that's already in logical \r\nreplication. While I understand the reasons why \"GRANT ... ALL TABLES IN \r\nSCHEMA\" has a different behavior (i.e. it's not applied to future \r\nobjects) and do not advocate to change it, I have personally been \r\naffected where I thought a permission would be applied to all future \r\nobjects, only to discover otherwise. I believe it's more intuitive to \r\nthink that \"ALL\" applies to \"everything, always.\"\r\n\r\nFor #3 (more superuser-only), in general I do agree that we shouldn't be \r\nadding more of these. However, we have in this release, and not just to \r\nthis feature. ALTER SUBSCRIPTION ... SKIP[11] requires superuser. I \r\nthink it's easier for us to \"relax\" privileges (e.g. 
w/predefined roles) \r\nthan to make something \"superuser-only\" in the future, so I don't see \r\nthis being a blocker for v15. The feature will continue to work for \r\nusers even if we remove \"superuser-only\" in the future.\r\n\r\nTo summarize:\r\n\r\n1. I do think we should fix the issue that Peter originally brought up \r\nin this thread before v15. That is an open item.\r\n2. I don't see why we need to change the syntax/behavior, I think that \r\nwill make this feature much harder to use.\r\n3. I don't think we need to change the superuser requirement right now, \r\nbut we should do that for a future release.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/CAFiTN-u_m0cq7Rm5Bcu9EW4gSHG94WaLuxLfibwE-o7%2BLea2GQ%40mail.gmail.com\r\n[2] \r\nhttps://www.postgresql.org/message-id/C4D04B90-AC4D-42A7-B93C-4799CEDDDD96%40enterprisedb.com\r\n[3] https://www.postgresql.org/message-id/155565.1628954580%40sss.pgh.pa.us\r\n[4] \r\nhttps://www.postgresql.org/message-id/CAHut%2BPvNwzp-EdtsDNazwrNrV4ziqCovNdLywzOJKSy52LvRjw%40mail.gmail.com\r\n[5] \r\nhttps://www.postgresql.org/message-id/CAHut%2BPt6Czj0KsE0ip6nMsPf4FatHgNDni-wSu2KkYNYF9mDAw%40mail.gmail.com\r\n[6] \r\nhttps://www.postgresql.org/message-id/CAA4eK1Lwtea0St1MV5nfSg9FrFeU04YKpHvhQ0i4W-tOBw%3D9Qw%40mail.gmail.com\r\n[7] \r\nhttps://www.postgresql.org/message-id/202109241325.eag5g6mpvoup@alvherre.pgsql\r\n[8] \r\nhttps://www.postgresql.org/message-id/CALDaNm1BEXtvg%3Dfq8FzM-FoYvETTEuvA_Gf8rCAjFr1VrB5aBA%40mail.gmail.com\r\n[9] \r\nhttps://www.postgresql.org/message-id/CAJcOf-fyM3075t9%2B%3DB-BSFz2FG%3D5BnDSPX4YtL8k1nnK%3DwjgWA%40mail.gmail.com\r\n[10] https://www.postgresql.org/docs/current/sql-createpublication.html\r\n[11] https://www.postgresql.org/docs/15/sql-altersubscription.html",
"msg_date": "Mon, 19 Sep 2022 23:03:38 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On 2022-Sep-20, Amit Kapila wrote:\n\n> On Mon, Sep 19, 2022 at 8:46 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > This seems a pretty arbitrary restriction. It feels like you're adding\n> > this restriction precisely so that you don't have to write the code to\n> > reject the ALTER .. SET SCHEMA if an incompatible configuration is\n> > detected. But we already have such checks in several cases, so I don't\n> > see why this one does not seem a good idea.\n> >\n> I agree that we have such checks at other places as well and one\n> somewhat similar is in ATPrepChangePersistence().\n> \n> ATPrepChangePersistence()\n> {\n> ...\n> ...\n> /*\n> * Check that the table is not part of any publication when changing to\n> * UNLOGGED, as UNLOGGED tables can't be published.\n> */\n\nRight, I think this is a sensible approach.\n\n> However, another angle to look at it is that we try to avoid adding\n> restrictions in other DDL commands for defined publications.\n\nWell, it makes sense to avoid restrictions wherever possible. But here,\nthe consequence is that you end up with a restriction in the publication\ndefinition that is not very sensible. Imagine if you said \"you can't\nadd schema S because it contains an unlogged table\". It's absurd.\n\nMaybe this can be relaxed in a future release, but it's quite odd.\n\n> The intention was to be in sync with FOR ALL TABLES.\n\nI guess we can change both (FOR ALL TABLES and IN SCHEMA) later.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 20 Sep 2022 11:27:03 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 2:57 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-20, Amit Kapila wrote:\n>\n> > On Mon, Sep 19, 2022 at 8:46 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > > This seems a pretty arbitrary restriction. It feels like you're adding\n> > > this restriction precisely so that you don't have to write the code to\n> > > reject the ALTER .. SET SCHEMA if an incompatible configuration is\n> > > detected. But we already have such checks in several cases, so I don't\n> > > see why this one does not seem a good idea.\n> > >\n> > I agree that we have such checks at other places as well and one\n> > somewhat similar is in ATPrepChangePersistence().\n> >\n> > ATPrepChangePersistence()\n> > {\n> > ...\n> > ...\n> > /*\n> > * Check that the table is not part of any publication when changing to\n> > * UNLOGGED, as UNLOGGED tables can't be published.\n> > */\n>\n> Right, I think this is a sensible approach.\n>\n> > However, another angle to look at it is that we try to avoid adding\n> > restrictions in other DDL commands for defined publications.\n>\n> Well, it makes sense to avoid restrictions wherever possible. But here,\n> the consequence is that you end up with a restriction in the publication\n> definition that is not very sensible. Imagine if you said \"you can't\n> add schema S because it contains an unlogged table\". It's absurd.\n>\n> Maybe this can be relaxed in a future release, but it's quite odd.\n>\n\nYeah, we can relax it in a future release based on some field\nexperience, or maybe we can keep the current restriction of not\nallowing to add a table when the schema of the table is part of the\nsame publication and try to relax that in a future release based on\nfield experience.\n\n> > The intention was to be in sync with FOR ALL TABLES.\n>\n> I guess we can change both (FOR ALL TABLES and IN SCHEMA) later.\n>\n\nThat sounds reasonable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Sep 2022 17:30:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 11:03 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> For #1 (allowing calls that have schema/table overlap...), there appears\n> to be both a patch that allows this (reversing[8]), and a suggestion for\n> dealing with a corner-case that is reasonable, i.e. disallowing adding\n> schemas to a publication when specifying column-lists. Do we think we\n> can have consensus on this prior to the RC1 freeze?\n\nI am not sure whether we can or should rush a fix in that fast, but I\nagree with this direction.\n\n> For #2 (\"ALL TABLES IN SCHEMA\" syntax), this was heavily discussed on\n> the original thread[1][3][4][5][7]. I thought Tom's proposal on the\n> syntax[3] was reasonable as it \"future proofs\" for when we allow other\n> schema-scoped objects to be published and give control over which ones\n> can be published.\n\nAll right, well, I still don't like it and think it's confusing, but\nperhaps I'm in the minority.\n\n> I don't think we should change this behavior that's already in logical\n> replication. While I understand the reasons why \"GRANT ... ALL TABLES IN\n> SCHEMA\" has a different behavior (i.e. it's not applied to future\n> objects) and do not advocate to change it, I have personally been\n> affected where I thought a permission would be applied to all future\n> objects, only to discover otherwise. I believe it's more intuitive to\n> think that \"ALL\" applies to \"everything, always.\"\n\nNah, there's room for multiple behaviors here. It's reasonable to want\nto add all the tables currently in the schema to a publication (or\ngrant permissions on them) and it's reasonable to want to include all\ncurrent and future tables in the schema in a publication (or grant\npermissions on them) too. The reason I don't like the ALL TABLES IN\nSCHEMA syntax is that it sounds like the former, but actually is the\nlatter. 
Based on your link to the email from Tom, I understand now the\nreason why it's like that, but it's still counterintuitive to me.\n\n> For #3 (more superuser-only), in general I do agree that we shouldn't be\n> adding more of these. However, we have in this release, and not just to\n> this feature. ALTER SUBSCRIPTION ... SKIP[11] requires superuser. I\n> think it's easier for us to \"relax\" privileges (e.g. w/predefined roles)\n> than to make something \"superuser-only\" in the future, so I don't see\n> this being a blocker for v15. The feature will continue to work for\n> users even if we remove \"superuser-only\" in the future.\n\nYeah, this is clearly not a release blocker, I think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 09:42:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On 2022-Sep-13, Kyotaro Horiguchi wrote:\n\n> At Mon, 12 Sep 2022 04:26:48 +0000, \"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com> wrote in \n\n> > I feel we'd better compare the syntax with the existing publication command:\n> > FOR ALL TABLES. If you create a publication FOR ALL TABLES, it means publishing\n> > all the tables in the database *including* tables created in the future. I\n> > think both the syntax and meaning of ALL TABLES IN SCHEMA are consistent with\n> > the existing FOR ALL TABLES.\n> \n> IMHO, I feel closer to Robert. \"ALL TABLES IN SCHEMA\" sounds like the\n> concrete tables at the time of invocation. While I agree that it is\n> not directly comparable to GRANT, \n\nWhat if we remove the ALL keyword from there? That would leave us with\n\"FOR TABLES IN SCHEMA\", which seems to better convey that it doesn't\nrestrict to current tables in there.\n\n\n> but if I see \"ALTER PUBLICATION p1 ADD SCHEMA s1\", I automatically\n> translate that into \"all tables in the schema s1 at the time of using\n> this publication\".\n\n... but that translation is wrong if replication supports other kinds of\nobjects, as it inevitably will in the near future. Clearly the fact\nthat we spell out TABLES there is important. When we add support for\nsequences, we could have combinations\n\nADD [ALL] TABLES IN SCHEMA s\nADD [ALL] SEQUENCES IN SCHEMA s\nADD [ALL] TABLES AND SEQUENCES IN SCHEMA s\n\nand at that point, the unadorned ADD SCHEMA one will become ambiguous.\n\n> At least, it would cause less confusion when it were \"ALT PUB p1 DROP\n> SCEMA s1\" aginst \"DROP ALL TABLES IN SCHEMA s1\".\n\nI'm not sure what you mean here.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"En las profundidades de nuestro inconsciente hay una obsesiva necesidad\nde un universo lógico y coherente. Pero el universo real se halla siempre\nun paso más allá de la lógica\" (Irulan)\n\n\n",
"msg_date": "Tue, 20 Sep 2022 16:01:19 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "\n\n> On Sep 19, 2022, at 8:03 PM, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> \n> \"When a partitioned table is added to a publication, all of its existing and future partitions are implicitly considered to be part of the publication.\"[10]\n> \n> Additionally, this is the behavior that is already present in \"FOR ALL TABLES\":\n> \n> \"Marks the publication as one that replicates changes for all tables in the database, including tables created in the future.\"[10]\n> \n> I don't think we should change this behavior that's already in logical replication.\n\nThe existing behavior in logical replication doesn't have any \"IN SCHEMA\" qualifiers.\n\n> While I understand the reasons why \"GRANT ... ALL TABLES IN SCHEMA\" has a different behavior (i.e. it's not applied to future objects) and do not advocate to change it, I have personally been affected where I thought a permission would be applied to all future objects, only to discover otherwise. I believe it's more intuitive to think that \"ALL\" applies to \"everything, always.\"\n\nThe conversation is focusing on what \"ALL TABLES\" means, but the ambiguous part is what \"IN SCHEMA\" means. In GRANT it means \"currently in schema, computed now.\" We are about to create confusion by adding the \"IN SCHEMA\" phrase to publication commands meaning \"later in schema, computed then.\" A user who diligently consults the documentation for one command to discover what \"IN SCHEMA\" means may fairly, but wrongly, assume it means the same thing in another command.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 20 Sep 2022 07:55:23 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On 9/20/22 10:55 AM, Mark Dilger wrote:\r\n> \r\n> \r\n>> On Sep 19, 2022, at 8:03 PM, Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>>\r\n>> \"When a partitioned table is added to a publication, all of its existing and future partitions are implicitly considered to be part of the publication.\"[10]\r\n>>\r\n>> Additionally, this is the behavior that is already present in \"FOR ALL TABLES\":\r\n>>\r\n>> \"Marks the publication as one that replicates changes for all tables in the database, including tables created in the future.\"[10]\r\n>>\r\n>> I don't think we should change this behavior that's already in logical replication.\r\n> \r\n> The existing behavior in logical replication doesn't have any \"IN SCHEMA\" qualifiers.\r\n\r\nThis behavior exists \"FOR ALL TABLES\" without the \"IN SCHEMA\" qualifier. \r\nThis was discussed multiple times on the original thread[1].\r\n\r\n> \r\n>> While I understand the reasons why \"GRANT ... ALL TABLES IN SCHEMA\" has a different behavior (i.e. it's not applied to future objects) and do not advocate to change it, I have personally been affected where I thought a permission would be applied to all future objects, only to discover otherwise. I believe it's more intuitive to think that \"ALL\" applies to \"everything, always.\"\r\n> \r\n> The conversation is focusing on what \"ALL TABLES\" means, but the ambiguous part is what \"IN SCHEMA\" means. In GRANT it means \"currently in schema, computed now.\" We are about to create confusion by adding the \"IN SCHEMA\" phrase to publication commands meaning \"later in schema, computed then.\" A user who diligently consults the documentation for one command to discover what \"IN SCHEMA\" means may fairly, but wrongly, assume it means the same thing in another command.\r\n\r\nI tried to diligently read the sections where we talk about granting + \r\nprivileges[2][3] to see what it says about \"ALL * IN SCHEMA\". 
Unless I \r\nmissed it, and I read through it twice, it does not explicitly state \r\nwhether or not \"GRANT\" applies to all objects at only that given moment, \r\nor to future objects of that type which are created in that schema. \r\nMaybe the behavior is implied or is part of the standard, but it's not \r\ncurrently documented. We do link to \"ALTER DEFAULT PRIVILEGES\" at the \r\nbottom of the GRANT[2] docs, but we don't give any indication as to why.\r\n\r\n(This is also to say we should document in GRANT that ALL * IN SCHEMA \r\ndoes not apply to future objects; if you need that behavior use ALTER \r\nDEFAULT PRIVILEGES. Separate thread :)\r\n\r\nI understand there is a risk of confusion of the similar grammar across \r\ncommands, but the current command in logical replication is \r\nbuilding on the existing behavior.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/flat/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ%40mail.gmail.com\r\n[2] https://www.postgresql.org/docs/current/sql-grant.html\r\n[3] https://www.postgresql.org/docs/current/ddl-priv.html",
"msg_date": "Tue, 20 Sep 2022 15:36:28 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "(RMT hat on, unless otherwise noted)\r\n\r\nOn 9/20/22 9:42 AM, Robert Haas wrote:\r\n> On Mon, Sep 19, 2022 at 11:03 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>> For #1 (allowing calls that have schema/table overlap...), there appears\r\n>> to be both a patch that allows this (reversing[8]), and a suggestion for\r\n>> dealing with a corner-case that is reasonable, i.e. disallowing adding\r\n>> schemas to a publication when specifying column-lists. Do we think we\r\n>> can have consensus on this prior to the RC1 freeze?\r\n> \r\n> I am not sure whether we can or should rush a fix in that fast, but I\r\n> agree with this direction.\r\n\r\nThe RMT met today to discuss this.\r\n\r\nWe did agree that the above is an open item that should be resolved \r\nbefore this release. While it is an accepted pattern for us to \"ERROR\" \r\non unsupported behavior and then later introduce said behavior, we do \r\nagree with Peter's original post in this thread and would like it resolved.\r\n\r\nAs for the state of the fix, the patch has been iterated on and Amit \r\nfelt ready to commit it[1]. We do want to hear how others feel about \r\nthis, but the folks behind this feature have been working on this patch \r\nsince this was reported.\r\n\r\n>> For #2 (\"ALL TABLES IN SCHEMA\" syntax), this was heavily discussed on\r\n>> the original thread[1][3][4][5][7]. I thought Tom's proposal on the\r\n>> syntax[3] was reasonable as it \"future proofs\" for when we allow other\r\n>> schema-scoped objects to be published and give control over which ones\r\n>> can be published.\r\n> \r\n> All right, well, I still don't like it and think it's confusing, but\r\n> perhaps I'm in the minority.\r\n\r\nThe RMT discussed this as well. The RMT feels that there should not be \r\nany changes to syntax/behavior for this release. This doesn't preclude \r\nfuture work in this area (e.g. 
having a toggle for \"all future \r\nbehavior\"), but based on all the discussions and existing behavior in \r\nthis feature, we do not see a need to make changes or delay the release \r\non this.\r\n\r\n>> I don't think we should change this behavior that's already in logical\r\n>> replication. While I understand the reasons why \"GRANT ... ALL TABLES IN\r\n>> SCHEMA\" has a different behavior (i.e. it's not applied to future\r\n>> objects) and do not advocate to change it, I have personally been\r\n>> affected where I thought a permission would be applied to all future\r\n>> objects, only to discover otherwise. I believe it's more intuitive to\r\n>> think that \"ALL\" applies to \"everything, always.\"\r\n> \r\n> Nah, there's room for multiple behaviors here. It's reasonable to want\r\n> to add all the tables currently in the schema to a publication (or\r\n> grant permissions on them) and it's reasonable to want to include all\r\n> current and future tables in the schema in a publication (or grant\r\n> permissions on them) too. The reason I don't like the ALL TABLES IN\r\n> SCHEMA syntax is that it sounds like the former, but actually is the\r\n> latter. Based on your link to the email from Tom, I understand now the\r\n> reason why it's like that, but it's still counterintuitive to me.\r\n\r\n<PersonalOpinion>\r\nI understand your view on \"multiple behaviors\" and I do agree with your \r\nreasoning. I still think we should leave this as is, but perhaps this \r\nopens up an option we add later to specify the behavior.\r\n</PersonalOpinion>\r\n\r\n> \r\n>> For #3 (more superuser-only), in general I do agree that we shouldn't be\r\n>> adding more of these. However, we have in this release, and not just to\r\n>> this feature. ALTER SUBSCRIPTION ... SKIP[11] requires superuser. I\r\n>> think it's easier for us to \"relax\" privileges (e.g. 
w/predefined roles)\r\n>> than to make something \"superuser-only\" in the future, so I don't see\r\n>> this being a blocker for v15. The feature will continue to work for\r\n>> users even if we remove \"superuser-only\" in the future.\r\n> \r\n> Yeah, this is clearly not a release blocker, I think.\r\n\r\nThe RMT concurs. We do recommend future work on \"relaxing\" the \r\nsuperuser-only requirement.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/CAA4eK1LDhoBM8K5uVme8PZ%2BkxNOfVpRh%3DoO42JtFdqBgBuj1bA%40mail.gmail.com",
"msg_date": "Tue, 20 Sep 2022 15:45:37 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "\n\n> On Sep 20, 2022, at 12:36 PM, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> \n> This behavior exists \"FOR ALL TABLES\" without the \"IN SCHEMA\" qualifier. This was discussed multiple times on the original thread[1].\n\nYes, nobody is debating that as far as I can see. And I do take your point that this stuff was discussed in other threads quite a while back.\n\n> I tried to diligently read the sections where we talk about granting + privileges[2][3] to see what it says about \"ALL * IN SCHEMA\". Unless I missed it, and I read through it twice, it does not explicitly state whether or not \"GRANT\" applies to all objects at only that given moment, or to future objects of that type which are created in that schema. Maybe the behavior is implied or is part of the standard, but it's not currently documented.\n\nInteresting. Thanks for that bit of research.\n\n> We do link to \"ALTER DEFAULT PRIVILEGES\" at the bottom of the GRANT[2] docs, but we don't give any indication as to why.\n> \n> (This is also to say we should document in GRANT that ALL * IN SCHEMA does not apply to future objects;\n\nYes, I agree this should be documented.\n\n> if you need that behavior use ALTER DEFAULT PRIVILEGES. Separate thread :)\n> \n> I understand there is a risk of confusion of the similar grammar across commands, but the current command in logical replication is building on the existing behavior.\n\nI don't complain that it is building on the existing behavior. I'm *only* concerned about the keywords we're using for this. Consider the following:\n\n -- AS ADMIN\n CREATE USER bob NOSUPERUSER;\n GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA foo TO bob;\n SET ROLE bob;\n CREATE PUBLICATION bobs_pub FOR ALL TABLES IN SCHEMA foo;\n\nWe're going to have that fail in pg15 because the FOR ALL TABLES IN SCHEMA option is reserved to superusers. But we agreed that was a stop-gap solution that we'd potentially loosen in the future.
Certainly we'll need wiggle room in the syntax to perform that loosening:\n\n --- Must be superuser for this in pg15, and in subsequent releases.\n CREATE PUBLICATION bobs_pub FOR ALL FUTURE TABLES IN SCHEMA foo;\n\n --- Not supported in pg15, but reserved for some future pg versions to allow\n --- non-superusers to create publications on tables currently in schema foo,\n --- assuming they have sufficient privileges on those tables\n CREATE PUBLICATION bobs_pub FOR ALL TABLES IN SCHEMA foo;\n\nDoing it this way makes the syntax consistent between the GRANT...TO bob and the CREATE PUBLICATION bobs_pub. Surely this makes more sense?\n\nI'm not a huge fan of the keyword \"FUTURE\" here, but I found a reference to another database that uses that keyword for what I think is a similar purpose. We should choose *something* for this, though, if we want things to be rational going forward.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 20 Sep 2022 13:06:23 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "[personal views, not RMT]\r\n\r\nOn 9/20/22 4:06 PM, Mark Dilger wrote:\r\n\r\n> I don't complain that it is buidling on the existing behavior. I'm *only* concerned about the keywords we're using for this. Consider the following:\r\n> \r\n> -- AS ADMIN\r\n> CREATE USER bob NOSUPERUSER;\r\n> GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA foo TO bob;\r\n> SET ROLE bob;\r\n> CREATE PUBLICATION bobs_pub FOR ALL TABLES IN SCHEMA foo;\r\n> \r\n> We're going to have that fail in pg15 because the FOR ALL TABLES IN SCHEMA option is reserved to superusers. But we agreed that was a stop-gap solution that we'd potentially loosen in the future. Certainly we'll need wiggle room in the syntax to perform that loosening:\r\n> \r\n> --- Must be superuser for this in pg15, and in subsequent releases.\r\n> CREATE PUBLICATION bobs_pub FOR ALL FUTURE TABLES IN SCHEMA foo;\r\n> \r\n> --- Not supported in pg15, but reserved for some future pg versions to allow\r\n> --- non-superusers to create publications on tables currently in schema foo,\r\n> --- assuming they have sufficient privileges on those tables\r\n> CREATE PUBLICATION bobs_pub FOR ALL TABLES IN SCHEMA foo;\r\n> \r\n> Doing it this way makes the syntax consistent between the GRANT...TO bob and the CREATE PUBLICATION bobs_pub. Surely this makes more sense?\r\n\r\nWhen you put it that way, I see your point. However, for the \r\nlesser-privileged user though, will the behavior be that it will \r\ncontinue to add all future tables in a schema to the publication so long \r\nas they have sufficient privileges on those tables? Or would that mirror \r\nthe current behavior with GRANT?\r\n\r\nWhile I understand it makes it consistent, the one concern I raise is \r\nthat it means the less privileged user could have a less convenient user \r\nexperience than the privileged user. 
Perhaps that's OK, but worth noting.\r\n\r\n> I'm not a huge fan of the keyword \"FUTURE\" here, but I found a reference to another database that uses that keyword for what I think is a similar purpose.\r\n\r\nI did try doing research on this prior, but hadn't thought to \r\nincorporate \"future\" into my searches.\r\n\r\nDoing so, I probably found the same database that you did that used the \r\n\"FUTURE\" word for adding permissions to future objects (and this is \r\nfresh, as the docs for it were published last week). That's definitely \r\ninteresting.\r\n\r\nI did see some notes on a legacy database system that offered similar \r\nadvice to what we do for GRANT if you're not using ALTER DEFAULT PRIVILEGES.\r\n\r\n> We should choose *something* for this, though, if we want things to be rational going forward.\r\n\r\nThat all said, while I understand your point and open to the suggestion \r\non \"FUTURE\", I'm not convinced on the syntax change. But I'll sleep on it.\r\n\r\nJonathan",
"msg_date": "Tue, 20 Sep 2022 22:16:07 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Wednesday, September 21, 2022 4:06 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\r\n> > On Sep 20, 2022, at 12:36 PM, Jonathan S. Katz <jkatz@postgresql.org>\r\n> wrote:\r\n> >\r\n> > This behavior exists \"FOR ALL TABLES\" without the \"IN SCHEMA\" qualifier.\r\n> This was discussed multiple times on the original thread[1].\r\n> \r\n> Yes, nobody is debating that as far as I can see. And I do take your point that\r\n> this stuff was discussed in other threads quite a while back.\r\n> \r\n> > I tried to diligently read the sections where we talk about granting +\r\n> privileges[2][3] to see what it says about \"ALL * IN SCHEMA\". Unless I missed it,\r\n> and I read through it twice, it does not explicitly state whether or not \"GRANT\"\r\n> applies to all objects at only that given moment, or to future objects of that\r\n> type which are created in that schema. Maybe the behavior is implied or is part\r\n> of the standard, but it's not currently documented.\r\n> \r\n> Interesting. Thanks for that bit of research.\r\n> \r\n> > We do link to \"ALTER DEFAULT PRIVILEGES\" at the bottom of the GRANT[2]\r\n> docs, but we don't give any indication as to why.\r\n> >\r\n> > (This is also to say we should document in GRANT that ALL * IN SCHEMA does\r\n> not apply to future objects;\r\n> \r\n> Yes, I agree this should be documented.\r\n> \r\n> > if you need that behavior use ALTER DEFAULT PRIVILEGES. Separate thread :)\r\n> >\r\n> > I understand there is a risk of confusion of the similar grammar across\r\n> commands, but the current command in logical replication has this is building\r\n> on the existing behavior.\r\n> \r\n> I don't complain that it is buidling on the existing behavior. I'm *only*\r\n> concerned about the keywords we're using for this. 
Consider the following:\r\n> \r\n> -- AS ADMIN\r\n> CREATE USER bob NOSUPERUSER;\r\n> GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA foo TO bob;\r\n> SET ROLE bob;\r\n> CREATE PUBLICATION bobs_pub FOR ALL TABLES IN SCHEMA foo;\r\n> \r\n> We're going to have that fail in pg15 because the FOR ALL TABLES IN SCHEMA\r\n> option is reserved to superusers. But we agreed that was a stop-gap solution\r\n> that we'd potentially loosen in the future. Certainly we'll need wiggle room in\r\n> the syntax to perform that loosening:\r\n> \r\n> --- Must be superuser for this in pg15, and in subsequent releases.\r\n> CREATE PUBLICATION bobs_pub FOR ALL FUTURE TABLES IN SCHEMA foo;\r\n> \r\n> --- Not supported in pg15, but reserved for some future pg versions to\r\n> allow\r\n> --- non-superusers to create publications on tables currently in schema foo,\r\n> --- assuming they have sufficient privileges on those tables\r\n> CREATE PUBLICATION bobs_pub FOR ALL TABLES IN SCHEMA foo;\r\n> \r\n> Doing it this way makes the syntax consistent between the GRANT...TO bob and\r\n> the CREATE PUBLICATION bobs_pub. Surely this makes more sense?\r\n\r\nThanks for the suggestion.\r\n\r\nMy concern is that I am not sure do we really want to add a feature that only\r\npublish all the current tables(not future tables).\r\n\r\nI think, if possible, it would be better to find an approach that can release the\r\nsuperuser restriction for both FOR ALL TABLES and FOR ALL TABLES IN SCHEMA in\r\nthe future release. I think another solution might be introduce a new\r\npublication option (like: include_future).\r\n\r\nWhen user execute:\r\nCREATE PUBLICATION ... FOR ALL TABLES IN SCHEMA ... WITH (include_future)\r\n\r\nit means we publish all current and future tables and require superuser. 
We can\r\nset the default value of this option to 'true' and user can set it to false if\r\nthey only want to publish the current tables and don't want to use superuser.\r\nAnd in this approach, we don't need to change the syntax.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 21 Sep 2022 02:52:41 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On 2022-Sep-20, Robert Haas wrote:\n\n> > I don't think we should change this behavior that's already in logical\n> > replication. While I understand the reasons why \"GRANT ... ALL TABLES IN\n> > SCHEMA\" has a different behavior (i.e. it's not applied to future\n> > objects) and do not advocate to change it, I have personally been\n> > affected where I thought a permission would be applied to all future\n> > objects, only to discover otherwise. I believe it's more intuitive to\n> > think that \"ALL\" applies to \"everything, always.\"\n> \n> Nah, there's room for multiple behaviors here. It's reasonable to want\n> to add all the tables currently in the schema to a publication (or\n> grant permissions on them) and it's reasonable to want to include all\n> current and future tables in the schema in a publication (or grant\n> permissions on them) too. The reason I don't like the ALL TABLES IN\n> SCHEMA syntax is that it sounds like the former, but actually is the\n> latter. Based on your link to the email from Tom, I understand now the\n> reason why it's like that, but it's still counterintuitive to me.\n\nI already proposed elsewhere that we remove the ALL keyword from there,\nwhich I think serves to reduce confusion (in particular it's no longer\nparallel to the GRANT one). As in the attached.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No renuncies a nada. No te aferres a nada.\"",
"msg_date": "Wed, 21 Sep 2022 16:24:22 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On 9/21/22 10:24 AM, Alvaro Herrera wrote:\r\n> On 2022-Sep-20, Robert Haas wrote:\r\n> \r\n>>> I don't think we should change this behavior that's already in logical\r\n>>> replication. While I understand the reasons why \"GRANT ... ALL TABLES IN\r\n>>> SCHEMA\" has a different behavior (i.e. it's not applied to future\r\n>>> objects) and do not advocate to change it, I have personally been\r\n>>> affected where I thought a permission would be applied to all future\r\n>>> objects, only to discover otherwise. I believe it's more intuitive to\r\n>>> think that \"ALL\" applies to \"everything, always.\"\r\n>>\r\n>> Nah, there's room for multiple behaviors here. It's reasonable to want\r\n>> to add all the tables currently in the schema to a publication (or\r\n>> grant permissions on them) and it's reasonable to want to include all\r\n>> current and future tables in the schema in a publication (or grant\r\n>> permissions on them) too. The reason I don't like the ALL TABLES IN\r\n>> SCHEMA syntax is that it sounds like the former, but actually is the\r\n>> latter. Based on your link to the email from Tom, I understand now the\r\n>> reason why it's like that, but it's still counterintuitive to me.\r\n> \r\n> I already proposed elsewhere that we remove the ALL keyword from there,\r\n> which I think serves to reduce confusion (in particular it's no longer\r\n> parallel to the GRANT one). As in the attached.\r\n\r\n[personal, not RMT hat]\r\n\r\nI'd be OK with this. It would still allow for \"FOR SEQUENCES IN SCHEMA\" etc.\r\n\r\nJonathan",
"msg_date": "Wed, 21 Sep 2022 10:54:39 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 8:24 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> On 9/21/22 10:24 AM, Alvaro Herrera wrote:\n> > On 2022-Sep-20, Robert Haas wrote:\n> >\n> >>> I don't think we should change this behavior that's already in logical\n> >>> replication. While I understand the reasons why \"GRANT ... ALL TABLES IN\n> >>> SCHEMA\" has a different behavior (i.e. it's not applied to future\n> >>> objects) and do not advocate to change it, I have personally been\n> >>> affected where I thought a permission would be applied to all future\n> >>> objects, only to discover otherwise. I believe it's more intuitive to\n> >>> think that \"ALL\" applies to \"everything, always.\"\n> >>\n> >> Nah, there's room for multiple behaviors here. It's reasonable to want\n> >> to add all the tables currently in the schema to a publication (or\n> >> grant permissions on them) and it's reasonable to want to include all\n> >> current and future tables in the schema in a publication (or grant\n> >> permissions on them) too. The reason I don't like the ALL TABLES IN\n> >> SCHEMA syntax is that it sounds like the former, but actually is the\n> >> latter. Based on your link to the email from Tom, I understand now the\n> >> reason why it's like that, but it's still counterintuitive to me.\n> >\n> > I already proposed elsewhere that we remove the ALL keyword from there,\n> > which I think serves to reduce confusion (in particular it's no longer\n> > parallel to the GRANT one). As in the attached.\n>\n\nThanks for working on this.\n\n> [personal, not RMT hat]\n>\n> I'd be OK with this. It would still allow for \"FOR SEQUENCES IN SCHEMA\" etc.\n>\n\nI also think this is reasonable. It can later be extended to have an\noption to exclude/include future tables with a publication option.\nAlso, if we want to keep it compatible with FOR ALL TABLES syntax, we\ncan later add ALL as an optional keyword in the syntax.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 21 Sep 2022 22:46:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 1:15 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> (RMT hat on, unless otherwise noted)\n>\n> On 9/20/22 9:42 AM, Robert Haas wrote:\n> > On Mon, Sep 19, 2022 at 11:03 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> >> For #1 (allowing calls that have schema/table overlap...), there appears\n> >> to be both a patch that allows this (reversing[8]), and a suggestion for\n> >> dealing with a corner-case that is reasonable, i.e. disallowing adding\n> >> schemas to a publication when specifying column-lists. Do we think we\n> >> can have consensus on this prior to the RC1 freeze?\n> >\n> > I am not sure whether we can or should rush a fix in that fast, but I\n> > agree with this direction.\n>\n> The RMT met today to discuss this.\n>\n> We did agree that the above is an open item that should be resolved\n> before this release. While it is an accepted pattern for us to \"ERROR\"\n> on unsupported behavior and then later introduce said behavior, we do\n> agree with Peter's original post in this thread and would like it resolved.\n>\n\nAs there seems to be an agreement with this direction, I think it is\nbetter to commit the patch in this release (before RC1) to avoid users\nseeing any behavior change in a later release. If the proposed\nbehavior for one of the cases (disallowing adding schemas to a\npublication when specifying column-lists) turns out to be too\nrestrictive for users, we can make it less restrictive in a future\nrelease. I am planning to commit the current patch [1] tomorrow unless\nRMT or anyone else thinks otherwise.\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB57162E862758402F978725CD944B9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Sep 2022 07:30:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "FWIW I put this to CI:\nhttps://cirrus-ci.com/build/5823276948652032 (master)\n\nand everything appears to be OK. If anybody has reservations about this\ngrammar change, please speak up soon, as there's not much time before RC1.\n\nThe one for 15 just started running:\nhttps://cirrus-ci.com/build/4735322423558144\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El miedo atento y previsor es la madre de la seguridad\" (E. Burke)\n\n\n",
"msg_date": "Thu, 22 Sep 2022 14:02:00 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
},
{
"msg_contents": "\n\n> On Sep 22, 2022, at 8:02 AM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> FWIW I put this to CI:\n> https://cirrus-ci.com/build/5823276948652032 (master)\n> \n> and everything appears to be OK. If anybody has reservations about this\n> grammar change, please speak up soon, as there's not much time before RC1.\n> \n> The one for 15 just started running:\n> https://cirrus-ci.com/build/4735322423558144\n\n[personal hat, not RMT]\n\nLooks like it passed. No objections.\n\nJonathan \n\n\n\n\n",
"msg_date": "Thu, 22 Sep 2022 10:04:02 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: why can't a table be part of the same publication as its schema"
}
] |
[
{
"msg_contents": "The pg_walinspect function pg_get_wal_stats() has output arguments \ndeclared as float4 (count_percentage, record_size_percentage, etc.), but \nthe internal computations are all done in type double. Is there a \nreason why this is then converted to float4 for output? It probably \ndoesn't matter in practice, but it seems unnecessarily confusing. Or at \nleast add a comment so it doesn't look like an accident. Also compare \nwith pgstattuple, which uses float8 in its SQL interface for similar data.\n\n\n",
"msg_date": "Thu, 8 Sep 2022 13:53:10 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_walinspect float4/float8 confusion"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 5:23 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> The pg_walinspect function pg_get_wal_stats() has output arguments\n> declared as float4 (count_percentage, record_size_percentage, etc.), but\n> the internal computations are all done in type double. Is there a\n> reason why this is then converted to float4 for output? It probably\n> doesn't matter in practice, but it seems unnecessarily confusing. Or at\n> least add a comment so it doesn't look like an accident. Also compare\n> with pgstattuple, which uses float8 in its SQL interface for similar data.\n\nThanks for finding this. There's no specific reason as such. However,\nit's good to be in sync with what code does internally and what it\nexposes to the users. pg_walinspect uses double data type (double\nprecision floating point number) for internal calculations and cuts it\ndown to single precision floating point number float4 to the users.\nAttaching a patch herewith. I'm not sure if this needs to be\nbackported, if at all, we were to, IMO it should be backported to\nreduce the code diff.\n\nWhile on, I found that pgstattuple uses uint64 for internal percentile\ncalculations as opposed to double data type for others. Attaching a\nsmall patch to fix it.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 9 Sep 2022 09:21:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect float4/float8 confusion"
},
{
"msg_contents": "On 09.09.22 05:51, Bharath Rupireddy wrote:\n> On Thu, Sep 8, 2022 at 5:23 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> The pg_walinspect function pg_get_wal_stats() has output arguments\n>> declared as float4 (count_percentage, record_size_percentage, etc.), but\n>> the internal computations are all done in type double. Is there a\n>> reason why this is then converted to float4 for output? It probably\n>> doesn't matter in practice, but it seems unnecessarily confusing. Or at\n>> least add a comment so it doesn't look like an accident. Also compare\n>> with pgstattuple, which uses float8 in its SQL interface for similar data.\n> \n> Thanks for finding this. There's no specific reason as such. However,\n> it's good to be in sync with what code does internally and what it\n> exposes to the users. pg_walinspect uses double data type (double\n> precision floating point number) for internal calculations and cuts it\n> down to single precision floating point number float4 to the users.\n> Attaching a patch herewith. I'm not sure if this needs to be\n> backported, if at all, we were to, IMO it should be backported to\n> reduce the code diff.\n\ndone\n\n> While on, I found that pgstattuple uses uint64 for internal percentile\n> calculations as opposed to double data type for others. Attaching a\n> small patch to fix it.\n\nGood find. I also changed the computation to use 100.0 instead of 100, \nso that you actually get non-integer values out of it.\n\nI didn't backpatch this, since it would probably result in a small \nbehavior change and the results with the previous code are not wrong, \njust unnecessarily truncated.\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 10:28:38 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_walinspect float4/float8 confusion"
}
] |
[
{
"msg_contents": "While testing Alvaro's \"cataloguing NOT NULL constraints\" patch [1], I\nnoticed a behavior of inherited column merging during ALTER TABLE that\nI thought might be a bug (though see at the bottom).\n\nNote how CREATE TABLE merges the inherited properties of parent foo's\na's NOT NULL into a child bar's own a:\n\ncreate table foo (a int not null);\n\ncreate table bar (a int, b int) inherits (foo);\nNOTICE: merging column \"a\" with inherited definition\nCREATE TABLE\n\n\\d bar\n Table \"public.bar\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | not null |\n b | integer | | |\nInherits: foo\n\nHowever, ALTER TABLE apparently doesn't pass down the NOT NULL flag\nwhen merging the parent's new column b into a child table's existing\ncolumn b:\n\nalter table foo add b int not null;\nNOTICE: merging definition of column \"b\" for child \"bar\"\n\\d bar\n Table \"public.bar\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | not null |\n b | integer | | |\nInherits: foo\n\nATExecAddColumn()'s following block of code that handles the merging\nseems inadequate compared to the similar block of MergeAttributes()\ncalled during CREATE TABLE:\n\n /*\n * Are we adding the column to a recursion child? If so, check whether to\n * merge with an existing definition for the column. If we do merge, we\n * must not recurse. 
Children will already have the column, and recursing\n * into them would mess up attinhcount.\n */\n if (colDef->inhcount > 0)\n {\n ...\n /* Bump the existing child att's inhcount */\n childatt->attinhcount++;\n CatalogTupleUpdate(attrdesc, &tuple->t_self, tuple);\n\n heap_freetuple(tuple);\n\n /* Inform the user about the merge */\n ereport(NOTICE,\n (errmsg(\"merging definition of column \\\"%s\\\" for\nchild \\\"%s\\\"\",\n colDef->colname, RelationGetRelationName(rel))));\n\n table_close(attrdesc, RowExclusiveLock);\n return InvalidObjectAddress;\n }\n\nThis only increments attinhcount of the child's existing column,\nunlike MergeAttributes()'s code, which will not only merge the NOT\nNULL flag but also check for generated conflicts, so one gets the\nfollowing behavior:\n\ncreate table foo (a int generated always as (1) stored);\n\ncreate table bar (a int generated always as identity) inherits (foo);\nNOTICE: merging column \"a\" with inherited definition\nERROR: column \"a\" inherits from generated column but specifies identity\n\ncreate table bar (a int generated always as (2) stored) inherits (foo);\nNOTICE: merging column \"a\" with inherited definition\nERROR: child column \"a\" specifies generation expression\nHINT: Omit the generation expression in the definition of the child\ntable column to inherit the generation expression from the parent\ntable.\n\ncreate table bar (a int, b int generated always as identity) inherits (foo);\nNOTICE: merging column \"a\" with inherited definition\n\\d bar\n Table \"public.bar\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+--------------------------------\n a | integer | | | generated always as (1) stored\n b | integer | | not null | generated always as identity\nInherits: foo\n\nalter table foo add b int generated always as (1) stored;\nNOTICE: merging definition of column \"b\" for child \"bar\"\n\\d bar\n Table \"public.bar\"\n Column | Type | Collation | Nullable | 
Default\n--------+---------+-----------+----------+--------------------------------\n a | integer | | | generated always as (1) stored\n b | integer | | not null | generated always as identity\nInherits: foo\n\nSo, adding a column to the parent after-the-fact will allow its\ngenerated definition to differ from that of the child's existing\ncolumn, which is not allowed when creating the child table with its\nown different generated definition for its column.\n\nI feel like we may have discussed this before and decided that the\n$subject is left that way intentionally, but wanted to bring this up\nagain in the light of Alvaro's patch which touches nearby code.\nShould we try to fix this?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/39/3869/\n\n\n",
"msg_date": "Thu, 8 Sep 2022 21:03:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "ATExecAddColumn() doesn't merge inherited properties"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 9:03 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> While testing Alvaro's \"cataloguing NOT NULL constraints\" patch [1], I\n> noticed a behavior of inherited column merging during ALTER TABLE that\n> I thought might be a bug (though see at the bottom).\n>\n> I feel like we may have discussed this before and decided that the\n> $subject is left that way intentionally, but wanted to bring this up\n> again in the light of Alvaro's patch which touches nearby code.\n> Should we try to fix this?\n\nI found a not-so-old email where we indeed called this (allowing a\nchild table's column to be nullable when it is marked NOT NULL in a\nparent) a bug that should be fixed:\n\nhttps://www.postgresql.org/message-id/CA%2BHiwqEPy72GnNa88jMkHMJaiAYiE7-zgcdPBMwNP-zWi%2Beifw%40mail.gmail.com\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 14:59:09 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ATExecAddColumn() doesn't merge inherited properties"
}
] |
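The divergence discussed in the thread above — MergeAttributes() (the CREATE TABLE ... INHERITS path) propagating the parent's NOT NULL into the merged child column, while ATExecAddColumn() only bumps the child's inheritance count — can be sketched outside the server. The following is a simplified Python model, not PostgreSQL code; the `Column` fields and merge functions are illustrative stand-ins for pg_attribute state:

```python
from dataclasses import dataclass

@dataclass
class Column:
    name: str
    not_null: bool = False
    inhcount: int = 0  # number of direct parents this column is inherited from

def merge_attributes(parent: Column, child: Column) -> Column:
    """Model of the CREATE TABLE ... INHERITS merge: the child's own
    definition absorbs the inherited one, OR-ing the NOT NULL flag."""
    child.not_null = child.not_null or parent.not_null
    child.inhcount += 1
    return child

def at_exec_add_column(parent: Column, child: Column) -> Column:
    """Model of the ALTER TABLE ADD COLUMN recursion-child path: only
    the inheritance count is bumped; NOT NULL is not propagated."""
    child.inhcount += 1
    return child

# CREATE TABLE bar (b int) INHERITS (foo), where foo.b is NOT NULL:
created = merge_attributes(Column("b", not_null=True), Column("b"))
# ALTER TABLE foo ADD b int NOT NULL, with bar already having a plain b:
altered = at_exec_add_column(Column("b", not_null=True), Column("b"))

print(created.not_null, altered.not_null)  # True False — the reported inconsistency
```

Both paths leave inhcount at 1, but only the CREATE TABLE path ends with a NOT NULL child column — the behavior the thread calls out as a bug.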
[
{
"msg_contents": "Attached is the plain-text list of acknowledgments for the PG15 release \nnotes, current through REL_15_BETA4. Please check for problems such as \nwrong sorting, duplicate names in different variants, or names in the \nwrong order etc. (Note that the current standard is given name followed \nby surname, independent of cultural origin.)",
"msg_date": "Thu, 8 Sep 2022 14:13:18 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "list of acknowledgments for PG15"
},
{
"msg_contents": "\nOn Thu, 08 Sep 2022 at 20:13, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> Attached is the plain-text list of acknowledgments for the PG15 release \n> notes, current through REL_15_BETA4. Please check for problems such as \n> wrong sorting, duplicate names in different variants, or names in the \n> wrong order etc. (Note that the current standard is given name followed \n> by surname, independent of cultural origin.)\n\nHi, Peter\n\nLi Japin is an alias of Japin Li, it is unnecessary to list both of them.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 08 Sep 2022 23:39:43 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "On Thu, Sep 08, 2022 at 11:39:43PM +0800, Japin Li wrote:\n> \n> On Thu, 08 Sep 2022 at 20:13, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> > Attached is the plain-text list of acknowledgments for the PG15 release \n> > notes, current through REL_15_BETA4. Please check for problems such as \n> > wrong sorting, duplicate names in different variants, or names in the \n> > wrong order etc. (Note that the current standard is given name followed \n> > by surname, independent of cultural origin.)\n> \n> Hi, Peter\n> \n> Li Japin is an alias of Japin Li, it is unnecessary to list both of them.\n\nThanks. This script finds another name which seems to be duplicated:\n\nawk '{print $1,$2; print $2,$1}' |sort |uniq -c |sort -nr |awk '$1>1'\n 2 Tang Haiying\n 2 Li Japin\n 2 Japin Li\n 2 Haiying Tang\n\nAlternately: awk 'a[$2$1]{print} {a[$1$2]=1}'\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 8 Sep 2022 10:47:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 9:13 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Attached is the plain-text list of acknowledgments for the PG15 release\n> notes, current through REL_15_BETA4. Please check for problems such as\n> wrong sorting, duplicate names in different variants, or names in the\n> wrong order etc. (Note that the current standard is given name followed\n> by surname, independent of cultural origin.)\n\nThanks as usual!\n\nI think these are Japanese names that are in the\nsurname-followed-by-given-name order:\n\nKamigishi Rei\nKawamoto Masaya\nOkano Naoki\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 12 Sep 2022 13:03:24 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "On 12.09.22 06:03, Etsuro Fujita wrote:\n> On Thu, Sep 8, 2022 at 9:13 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> Attached is the plain-text list of acknowledgments for the PG15 release\n>> notes, current through REL_15_BETA4. Please check for problems such as\n>> wrong sorting, duplicate names in different variants, or names in the\n>> wrong order etc. (Note that the current standard is given name followed\n>> by surname, independent of cultural origin.)\n> \n> Thanks as usual!\n> \n> I think these are Japanese names that are in the\n> surname-followed-by-given-name order:\n> \n> Kamigishi Rei\n> Kawamoto Masaya\n> Okano Naoki\n\ncommitted with the provided corrections\n\n\n",
"msg_date": "Mon, 12 Sep 2022 16:51:57 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "On 2022/09/08 21:13, Peter Eisentraut wrote:\n> \n> Attached is the plain-text list of acknowledgments for the PG15 release notes, current through REL_15_BETA4. Please check for problems such as wrong sorting, duplicate names in different variants, or names in the wrong order etc. (Note that the current standard is given name followed by surname, independent of cultural origin.)\n\nI'd propose to add \"Tatsuhiro Nakamori\" whose patch was back-patched to v15 last week, into the list. Thought?\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=249b0409b181311bb1c375311e43eb767b5c3bdd\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 7 Oct 2022 01:26:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "On 2022-Oct-07, Fujii Masao wrote:\n\n> On 2022/09/08 21:13, Peter Eisentraut wrote:\n> > \n> > Attached is the plain-text list of acknowledgments for the PG15 release notes, current through REL_15_BETA4. Please check for problems such as wrong sorting, duplicate names in different variants, or names in the wrong order etc. (Note that the current standard is given name followed by surname, independent of cultural origin.)\n> \n> I'd propose to add \"Tatsuhiro Nakamori\" whose patch was back-patched to v15 last week, into the list. Thought?\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=249b0409b181311bb1c375311e43eb767b5c3bdd\n\nI agree, he has made some other contributions in the list, even if his\nemail does not yet show up in the git log.\n\n(I think it would be good to have people's full name when writing the\ncommit messages, too ...)\n\n(Also: I think it would be nice to have people's names that are\noriginally in scripts other than Latin to appear in both scripts.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 6 Oct 2022 19:43:30 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> (Also: I think it would be nice to have people's names that are\n> originally in scripts other than Latin to appear in both scripts.)\n\nThat'd move the goalposts for the docs toolchain rather a long way,\nI fear.\n\nAs for the point originally made, I'm not sure whether Peter has a\nconsistent rule for which release cycle people get acknowledged in.\nIt may be that we're already into the time frame in which Nakamori-san\nshould be listed in PG v16 acknowledgments instead. I have no objection\nto adding him if we're still in the v15 frame, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Oct 2022 14:26:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "On 06.10.22 18:26, Fujii Masao wrote:\n> On 2022/09/08 21:13, Peter Eisentraut wrote:\n>> Attached is the plain-text list of acknowledgments for the PG15 \n>> release notes, current through REL_15_BETA4. Please check for \n>> problems such as wrong sorting, duplicate names in different variants, \n>> or names in the wrong order etc. (Note that the current standard is \n>> given name followed by surname, independent of cultural origin.)\n> \n> I'd propose to add \"Tatsuhiro Nakamori\" whose patch was back-patched to \n> v15 last week, into the list. Thought?\n\nThey were added with the last update.\n\n\n\n",
"msg_date": "Mon, 10 Oct 2022 08:24:43 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 06.10.22 18:26, Fujii Masao wrote:\n>> I'd propose to add \"Tatsuhiro Nakamori\" whose patch was back-patched to \n>> v15 last week, into the list. Thought?\n\n> They were added with the last update.\n\nI don't wish to object to adding Nakamori-san here, but I feel like we\nneed a policy that doesn't require last-minute updates to release notes.\n\nAs far as I've understood, the idea is to credit people based on the\ntime frame in which their patches were committed, not on the branch(es)\nthat the patches were committed to. Otherwise we'd have to retroactively\nadd people to back-branch acknowledgements, and we have not been doing\nthat. So a patch that goes in during the v16 development cycle means\nthat the author should get acknowledged in the v16 release notes,\neven if it got back-patched to older branches. What remains is to\ndefine when is the cutoff point between \"acknowledge in v15\" versus\n\"acknowledge in v16\". I don't have a strong opinion about that,\nbut I'd like it to be more than 24 hours before the 15.0 wrap.\nCould we make the cutoff be, say, beta1?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Oct 2022 02:41:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "On Mon, Oct 10, 2022 at 02:41:06AM -0400, Tom Lane wrote:\n> I don't wish to object to adding Nakamori-san here, but I feel like we\n> need a policy that doesn't require last-minute updates to release notes.\n> \n> As far as I've understood, the idea is to credit people based on the\n> time frame in which their patches were committed, not on the branch(es)\n> that the patches were committed to. Otherwise we'd have to retroactively\n> add people to back-branch acknowledgements, and we have not been doing\n> that. So a patch that goes in during the v16 development cycle means\n> that the author should get acknowledged in the v16 release notes,\n> even if it got back-patched to older branches. What remains is to\n> define when is the cutoff point between \"acknowledge in v15\" versus\n> \"acknowledge in v16\". I don't have a strong opinion about that,\n> but I'd like it to be more than 24 hours before the 15.0 wrap.\n> Could we make the cutoff be, say, beta1?\n\nIs the issue that we are really only crediting people whose commits/work\nappears in major releases, and not in minor ones?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 10 Oct 2022 14:30:35 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Mon, Oct 10, 2022 at 02:41:06AM -0400, Tom Lane wrote:\n>> As far as I've understood, the idea is to credit people based on the\n>> time frame in which their patches were committed, not on the branch(es)\n>> that the patches were committed to.\n\n> Is the issue that we are really only crediting people whose commits/work\n> appears in major releases, and not in minor ones?\n\nWhat Peter has said about this is that he lists everyone whose name\nhas appeared in commit messages over thus-and-such a time frame.\nSo it doesn't matter which branch is involved, just when the contribution\nwas made. That process is fine with me; I'm just seeking a bit more\nclarity as to what the time frames are.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Oct 2022 14:44:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "On Mon, Oct 10, 2022 at 02:44:22PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Mon, Oct 10, 2022 at 02:41:06AM -0400, Tom Lane wrote:\n> >> As far as I've understood, the idea is to credit people based on the\n> >> time frame in which their patches were committed, not on the branch(es)\n> >> that the patches were committed to.\n> \n> > Is the issue that we are really only crediting people whose commits/work\n> > appears in major releases, and not in minor ones?\n> \n> What Peter has said about this is that he lists everyone whose name\n> has appeared in commit messages over thus-and-such a time frame.\n> So it doesn't matter which branch is involved, just when the contribution\n> was made. That process is fine with me; I'm just seeking a bit more\n> clarity as to what the time frames are.\n\nOh, that's an interesting approach but it might mean that, for example,\nPG 16 patch authors appear in the PG 15 major release notes. It seems\nthat the stable major release branch date should be the cut-off, so no\none is missed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 10 Oct 2022 14:48:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG15"
},
{
"msg_contents": "On 10.10.22 08:41, Tom Lane wrote:\n> What remains is to\n> define when is the cutoff point between \"acknowledge in v15\" versus\n> \"acknowledge in v16\". I don't have a strong opinion about that,\n> but I'd like it to be more than 24 hours before the 15.0 wrap.\n> Could we make the cutoff be, say, beta1?\n\nbeta1 is too early, because a significant portion of the names comes in \nafter beta1. rc1 would be ok, I think.\n\n\n\n",
"msg_date": "Tue, 11 Oct 2022 11:02:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: list of acknowledgments for PG15"
}
] |
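The awk pipeline in the thread above, which caught "Japin Li"/"Li Japin" and "Haiying Tang"/"Tang Haiying", can also be mirrored in Python. This is a hedged sketch (the sample list is illustrative, not the real acknowledgments file) that flags two-word names also appearing with the words swapped:

```python
def find_swapped_duplicates(names):
    """Return name pairs that appear in both word orders, mirroring
    the thread's `awk 'a[$2$1]{print} {a[$1$2]=1}'` one-liner."""
    seen = set()
    dups = []
    for name in names:
        parts = name.split()
        if len(parts) != 2:
            continue  # only two-word names can be simple order swaps
        first, last = parts
        if (last, first) in seen:
            dups.append((f"{last} {first}", name))
        seen.add((first, last))
    return dups

# Illustrative input containing the duplicates reported in the thread:
sample = ["Japin Li", "Haiying Tang", "Etsuro Fujita", "Li Japin", "Tang Haiying"]
print(find_swapped_duplicates(sample))
# [('Japin Li', 'Li Japin'), ('Haiying Tang', 'Tang Haiying')]
```

Like the awk version, this only catches exact word-order swaps; spelling variants of the same person still need a manual pass.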
[
{
"msg_contents": "Hi hackers,\n\nI tried to understand some implementation details of ProcArray and\ndiscovered that this is a bit challenging to do due to a missing\ncomment for PGPROC.pgprocno. E.g. it's hard to understand why\nProcArrayAdd() preserves procArray->pgprocnos[] sorted by (PGPROC *)\nif actually the sorting is done by pgprocno. Took me some time to\nfigure this out.\n\nHere is the patch that fixes this. Hopefully this will save some time\nfor the newcomers.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 8 Sep 2022 20:16:07 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Add a missing comment for PGPROC.pgprocno"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 08:16:07PM +0300, Aleksander Alekseev wrote:\n> Hi hackers,\n> \n> I tried to understand some implementation details of ProcArray and\n> discovered that this is a bit challenging to do due to a missing\n> comment for PGPROC.pgprocno. E.g. it's hard to understand why\n> ProcArrayAdd() preserves procArray->pgprocnos[] sorted by (PGPROC *)\n> if actually the sorting is done by pgprocno. Took me some time to\n> figure this out.\n> \n> Here is the patch that fixes this. Hopefully this will save some time\n> for the newcomers.\n\nThanks, patch applied to master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 11 Oct 2022 13:08:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add a missing comment for PGPROC.pgprocno"
}
] |
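The property the thread above documents — that keeping procArray->pgprocnos[] sorted by PGPROC pointer is the same as keeping it sorted by pgprocno — follows from allProcs being one contiguous shared-memory array, so the address of entry i grows monotonically with i. A toy Python model (the base address and struct size here are made up for illustration; they are not real PostgreSQL values):

```python
PGPROC_SIZE = 832           # hypothetical sizeof(PGPROC); the exact value doesn't matter
ALLPROCS_BASE = 0x7F3A0000  # hypothetical base address of the shared allProcs array

def proc_address(pgprocno: int) -> int:
    """Address of allProcs[pgprocno] in a contiguous array."""
    return ALLPROCS_BASE + pgprocno * PGPROC_SIZE

# Some backends' pgprocnos, in arbitrary arrival order:
active = [7, 2, 11, 5]

by_pointer = sorted(active, key=proc_address)
by_pgprocno = sorted(active)
print(by_pointer == by_pgprocno)  # True: the two orderings coincide
```

This is why sorting by (PGPROC *) in ProcArrayAdd() and sorting by pgprocno are interchangeable descriptions of the same invariant.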
[
{
"msg_contents": "Hi,\n\nThe current WAL records generated by the Heap tableAM do not contain\nthe command ID of the query that inserted/updated/deleted the records.\nThe CID is not included in XLog because it is useful only to\nvisibility checks in an active read/write transaction, which currently\nonly appear in a primary node.\n\nIn Neon [0], we're using XLog to reconstruct page state, as opposed to\nwriting out dirty pages. This has the benefit of saving write IO for\ndirty page writes, but this does mean that we need the CID in heap\ninsert/update/delete records to correctly mark the tuples, such that\nmodified pages that are flushed from the buffer pool get reconstructed\ncorrectly. A more detailed write-up why we do this is here [1].\n\nNeon does not need to be the only user of this API, as adding CID to\nxlog records also allows the primary to offload (partial) queries to a\nremote physical replica that would utilise the same transaction and\nsnapshot of the primary.\nRight now, it's not possible to offload the read-component of RW\nqueries to a secondary [2]. The attached patch would make multi-node\ntransactions possible on systems with a single primary node and\nmultiple read replicas, without the need for prepared commits and\nspecial extra code to achieve snapshot consistency, as a consistent\nsnapshot could be copied and used by physical replicas (think parallel\nworkers, but on a different server).\n\nPlease find attached a patch that adds the CommandId of the inserting\ntransaction to heap (batch)insert, update and delete records. 
It is\nbased on the changes we made in the fork we maintain for Neon.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://github.com/neondatabase/neon/#neon\n[1] https://github.com/neondatabase/neon/blob/main/docs/core_changes.md#add-t_cid-to-heap-wal-records\n[2] At least not without blocking XLog replay of the primary\ntransaction on the secondary, due to the same issues that Neon\nencountered: you need the CommandID to distinguish between this\ntransactions' updates in the current command and previous commands.",
"msg_date": "Thu, 8 Sep 2022 21:56:09 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Adding CommandID to heap xlog records"
},
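The visibility role of CommandId referred to in the opening mail can be illustrated with a toy model: within a single transaction, a command must not see rows inserted by the same or a later command of that transaction (otherwise an INSERT ... SELECT could, for example, see its own freshly inserted rows). A hedged Python sketch, not PostgreSQL snapshot code:

```python
from dataclasses import dataclass

@dataclass
class HeapTuple:
    xmin: int  # inserting transaction id
    cmin: int  # command id within that transaction

def visible_to(tup: HeapTuple, current_xid: int, current_cid: int) -> bool:
    """Toy visibility rule for tuples inserted by our own transaction:
    only rows written by *earlier* commands are visible to the current
    command."""
    if tup.xmin != current_xid:
        return False  # other transactions are handled by the snapshot proper
    return tup.cmin < current_cid

# Transaction 100: command 0 inserts a row, command 1 then scans it.
row = HeapTuple(xmin=100, cmin=0)
print(visible_to(row, current_xid=100, current_cid=1))  # True
print(visible_to(row, current_xid=100, current_cid=0))  # False: same command
```

Because this cmin test needs the inserting command's id, a node that reconstructs pages purely from WAL (or serves reads inside another node's in-progress transaction) cannot apply it unless the heap records carry the CommandId — the motivation for the patch.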
{
"msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> Please find attached a patch that adds the CommandId of the inserting\n> transaction to heap (batch)insert, update and delete records. It is\n> based on the changes we made in the fork we maintain for Neon.\n\nThis seems like a very significant cost increment with returns\nto only a minuscule number of users. We certainly cannot consider\nit unless you provide some evidence that that impression is wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Sep 2022 17:24:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CommandID to heap xlog records"
},
{
"msg_contents": "On Thu, 8 Sept 2022 at 23:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > Please find attached a patch that adds the CommandId of the inserting\n> > transaction to heap (batch)insert, update and delete records. It is\n> > based on the changes we made in the fork we maintain for Neon.\n>\n> This seems like a very significant cost increment with returns\n> to only a minuscule number of users. We certainly cannot consider\n> it unless you provide some evidence that that impression is wrong.\n\nAttached a proposed set of patches to reduce overhead of the inital patch.\n\n0001: moves the RMGR-specific xl_info flag bits into their own field\nin the xlog header\n\nThis new field is allocated in the 2-byte alignment hole in the xlog\nrecord header (of which now 1 byte is left). This change was discussed\nby Andres (cc-ed) earlier in [0], and this is a partial implementation\nof the suggestion. With the patch we could merge the xl_heap and\nxl_heap2 redo managers, but that is not implemented and not a goal of\nthis patchset - we're only enabling the change, not providing it.\n\nThe main difference between this patch and the proposed change of [0]\nis that this patch only provides a single 8-bit field for rmgr use,\nfor both flag bits and record types, as opposed to separate fields for\nrecord type and flag bits.\n\nThe reason for including this patch is to get free bits to use in the\nxlog header - all other addressable bits in the xlhdr are in use\nalready; and it is much more difficult to find usable bits in the heap\nxlog structs. They exist, but processing would be much more of a pain\nthan what it is now.\n\n0002: add new wal_level = remote\nThis implements the same concept as v1; but now makes CommandId\npresence optional. This presence is now indicated by the all-new\nXLOG_HEAP_WITH_CID bit. CommandId is included when wal_level is at\nleast set to the new 'remote' value. 
wal_level=logical by extension\nalso includes this commandId.\n\nPerformance numbers for this patch seem to indicate no significant\nregression: Runs of pgbench (options: -s 50 -c 4 -j 4 -T 900 -v) have\nshown no immediately significant regression with wal_level = replica\nwhen compared to master @ cbe6dd17 (master: 978tps, patched: 985tps).\nResults for wal_level = remote are slightly worse than with wal_level\n= replica, but acceptable nonetheless (wal_level=remote: 964tps,\n=replica: 985tps). Apart from wal_level being changed between runs, it\nwas an otherwise default postgres configuration with shared_buffers\nset to 10GB.\n\nKind regards,\n\nMatthias van de Meent\n\n\n[0] https://www.postgresql.org/message-id/20220715173731.6t3km5cww3f5ztfq%40awork3.anarazel.de",
"msg_date": "Thu, 22 Sep 2022 23:12:32 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding CommandID to heap xlog records"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 11:12:32PM +0200, Matthias van de Meent wrote:\n> On Thu, 8 Sept 2022 at 23:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > > Please find attached a patch that adds the CommandId of the inserting\n> > > transaction to heap (batch)insert, update and delete records. It is\n> > > based on the changes we made in the fork we maintain for Neon.\n> >\n> > This seems like a very significant cost increment with returns\n> > to only a minuscule number of users. We certainly cannot consider\n> > it unless you provide some evidence that that impression is wrong.\n> \n> Attached a proposed set of patches to reduce overhead of the inital patch.\n\nThis might be obvious to some, but the patch got a lot larger. :-(\n\n---------------------------------------------------------------------------\n\n> contrib/pg_walinspect/pg_walinspect.c | 4 +-\n> src/backend/access/brin/brin_pageops.c | 16 +++---\n> src/backend/access/brin/brin_xlog.c | 8 +--\n> src/backend/access/gin/ginxlog.c | 6 +--\n> src/backend/access/gist/gistxlog.c | 6 +--\n> src/backend/access/hash/hash_xlog.c | 6 +--\n> src/backend/access/heap/heapam.c | 40 +++++++--------\n> src/backend/access/nbtree/nbtinsert.c | 18 +++----\n> src/backend/access/nbtree/nbtpage.c | 8 +--\n> src/backend/access/nbtree/nbtxlog.c | 10 ++--\n> src/backend/access/rmgrdesc/brindesc.c | 20 ++++----\n> src/backend/access/rmgrdesc/clogdesc.c | 10 ++--\n> src/backend/access/rmgrdesc/committsdesc.c | 10 ++--\n> src/backend/access/rmgrdesc/dbasedesc.c | 12 ++---\n> src/backend/access/rmgrdesc/genericdesc.c | 2 +-\n> src/backend/access/rmgrdesc/gindesc.c | 8 +--\n> src/backend/access/rmgrdesc/gistdesc.c | 8 +--\n> src/backend/access/rmgrdesc/hashdesc.c | 8 +--\n> src/backend/access/rmgrdesc/heapdesc.c | 46 ++++++++---------\n> src/backend/access/rmgrdesc/logicalmsgdesc.c | 8 +--\n> src/backend/access/rmgrdesc/mxactdesc.c | 14 ++---\n> 
src/backend/access/rmgrdesc/nbtdesc.c | 8 +--\n> src/backend/access/rmgrdesc/relmapdesc.c | 8 +--\n> src/backend/access/rmgrdesc/replorigindesc.c | 8 +--\n> src/backend/access/rmgrdesc/seqdesc.c | 8 +--\n> src/backend/access/rmgrdesc/smgrdesc.c | 10 ++--\n> src/backend/access/rmgrdesc/spgdesc.c | 8 +--\n> src/backend/access/rmgrdesc/standbydesc.c | 12 ++---\n> src/backend/access/rmgrdesc/tblspcdesc.c | 10 ++--\n> src/backend/access/rmgrdesc/xactdesc.c | 34 ++++++------\n> src/backend/access/rmgrdesc/xlogdesc.c | 28 +++++-----\n> src/backend/access/spgist/spgxlog.c | 6 +--\n> src/backend/access/transam/clog.c | 8 +--\n> src/backend/access/transam/commit_ts.c | 8 +--\n> src/backend/access/transam/multixact.c | 48 ++++++++---------\n> src/backend/access/transam/twophase.c | 2 +-\n> src/backend/access/transam/xact.c | 36 +++++++------\n> src/backend/access/transam/xlog.c | 34 ++++++------\n> src/backend/access/transam/xloginsert.c | 31 ++++++++---\n> src/backend/access/transam/xlogprefetcher.c | 2 +-\n> src/backend/access/transam/xlogreader.c | 2 +-\n> src/backend/access/transam/xlogrecovery.c | 54 ++++++++++----------\n> src/backend/access/transam/xlogstats.c | 2 +-\n> src/backend/catalog/storage.c | 15 +++---\n> src/backend/commands/dbcommands.c | 30 ++++++-----\n> src/backend/commands/sequence.c | 6 +--\n> src/backend/commands/tablespace.c | 8 +--\n> src/backend/postmaster/autovacuum.c | 4 +-\n> src/backend/replication/logical/decode.c | 38 +++++++-------\n> src/backend/replication/logical/message.c | 6 +--\n> src/backend/replication/logical/origin.c | 6 +--\n> src/backend/storage/ipc/standby.c | 10 ++--\n> src/backend/utils/cache/relmapper.c | 6 +--\n> src/bin/pg_resetwal/pg_resetwal.c | 2 +-\n> src/bin/pg_rewind/parsexlog.c | 10 ++--\n> src/bin/pg_waldump/pg_waldump.c | 6 +--\n> src/include/access/brin_xlog.h | 2 +-\n> src/include/access/clog.h | 2 +-\n> src/include/access/ginxlog.h | 2 +-\n> src/include/access/gistxlog.h | 2 +-\n> src/include/access/hash_xlog.h | 
2 +-\n> src/include/access/heapam_xlog.h | 4 +-\n> src/include/access/multixact.h | 3 +-\n> src/include/access/nbtxlog.h | 2 +-\n> src/include/access/spgxlog.h | 2 +-\n> src/include/access/xact.h | 6 +--\n> src/include/access/xlog.h | 2 +-\n> src/include/access/xloginsert.h | 3 +-\n> src/include/access/xlogreader.h | 1 +\n> src/include/access/xlogrecord.h | 11 +---\n> src/include/access/xlogstats.h | 2 +-\n> src/include/catalog/storage_xlog.h | 2 +-\n> src/include/commands/dbcommands_xlog.h | 2 +-\n> src/include/commands/sequence.h | 2 +-\n> src/include/commands/tablespace.h | 2 +-\n> src/include/replication/message.h | 2 +-\n> src/include/storage/standbydefs.h | 2 +-\n> src/include/utils/relmapper.h | 2 +-\n> 78 files changed, 430 insertions(+), 412 deletions(-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 28 Sep 2022 13:40:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CommandID to heap xlog records"
},
{
"msg_contents": "On Wed, 28 Sept 2022 at 19:40, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Sep 22, 2022 at 11:12:32PM +0200, Matthias van de Meent wrote:\n> > On Thu, 8 Sept 2022 at 23:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > > > Please find attached a patch that adds the CommandId of the inserting\n> > > > transaction to heap (batch)insert, update and delete records. It is\n> > > > based on the changes we made in the fork we maintain for Neon.\n> > >\n> > > This seems like a very significant cost increment with returns\n> > > to only a minuscule number of users. We certainly cannot consider\n> > > it unless you provide some evidence that that impression is wrong.\n> >\n> > Attached a proposed set of patches to reduce overhead of the inital patch.\n>\n> This might be obvious to some, but the patch got a lot larger. :-(\n\nSorry for that, but updating the field from which the redo manager\nshould pull its information does indeed touch a lot of files because\nmost users of xl_info are only interested in the 4 bits reserved for\nthe redo-manager. Most of 0001 is therefore updates to point code to\nthe new field in XLogRecord, and renaming the variables and arguments\nfrom info to rminfo.\n\n[tangent] With that refactoring, I also clean up a lot of code that\nwas using a wrong macro/constant for rmgr flags; `info &\n~XLR_INFO_MASK` may have the same value as `info &\nXLR_RMGR_INFO_MASK`, but that's only guaranteed by the documentation;\nand would require the same significant rework if new bits were\nassigned to non-XLR_INFO_MASK and non-XLR_RMGR_INFO_MASK. [/tangent]\n\n0002 grew a bit as well, but not to a degree that I think is worrying\nor otherwise impossible to review.\n\nKind regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Thu, 29 Sep 2022 18:03:40 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding CommandID to heap xlog records"
},
{
"msg_contents": "2022年9月30日(金) 1:04 Matthias van de Meent <boekewurm+postgres@gmail.com>:\n>\n> On Wed, 28 Sept 2022 at 19:40, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Thu, Sep 22, 2022 at 11:12:32PM +0200, Matthias van de Meent wrote:\n> > > On Thu, 8 Sept 2022 at 23:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >\n> > > > Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > > > > Please find attached a patch that adds the CommandId of the inserting\n> > > > > transaction to heap (batch)insert, update and delete records. It is\n> > > > > based on the changes we made in the fork we maintain for Neon.\n> > > >\n> > > > This seems like a very significant cost increment with returns\n> > > > to only a minuscule number of users. We certainly cannot consider\n> > > > it unless you provide some evidence that that impression is wrong.\n> > >\n> > > Attached a proposed set of patches to reduce overhead of the inital patch.\n> >\n> > This might be obvious to some, but the patch got a lot larger. :-(\n>\n> Sorry for that, but updating the field from which the redo manager\n> should pull its information does indeed touch a lot of files because\n> most users of xl_info are only interested in the 4 bits reserved for\n> the redo-manager. Most of 0001 is therefore updates to point code to\n> the new field in XLogRecord, and renaming the variables and arguments\n> from info to rminfo.\n>\n> [tangent] With that refactoring, I also clean up a lot of code that\n> was using a wrong macro/constant for rmgr flags; `info &\n> ~XLR_INFO_MASK` may have the same value as `info &\n> XLR_RMGR_INFO_MASK`, but that's only guaranteed by the documentation;\n> and would require the same significant rework if new bits were\n> assigned to non-XLR_INFO_MASK and non-XLR_RMGR_INFO_MASK. [/tangent]\n>\n> 0002 grew a bit as well, but not to a degree that I think is worrying\n> or otherwise impossible to review.\n\nHi\n\nThis entry was marked as \"Needs review\" in the CommitFest app but cfbot\nreports the patch no longer applies.\n\nWe've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time update the patch.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can move the patch entry forward by visiting\n\n https://commitfest.postgresql.org/40/3882/\n\nand changing the status to \"Needs review\".\n\n\nThanks\n\nIan Barwick\n\n\n",
"msg_date": "Thu, 3 Nov 2022 18:36:32 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CommandID to heap xlog records"
},
{
"msg_contents": "On Thu, 3 Nov 2022 at 15:06, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> 2022年9月30日(金) 1:04 Matthias van de Meent <boekewurm+postgres@gmail.com>:\n> >\n> > On Wed, 28 Sept 2022 at 19:40, Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > On Thu, Sep 22, 2022 at 11:12:32PM +0200, Matthias van de Meent wrote:\n> > > > On Thu, 8 Sept 2022 at 23:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > >\n> > > > > Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > > > > > Please find attached a patch that adds the CommandId of the inserting\n> > > > > > transaction to heap (batch)insert, update and delete records. It is\n> > > > > > based on the changes we made in the fork we maintain for Neon.\n> > > > >\n> > > > > This seems like a very significant cost increment with returns\n> > > > > to only a minuscule number of users. We certainly cannot consider\n> > > > > it unless you provide some evidence that that impression is wrong.\n> > > >\n> > > > Attached a proposed set of patches to reduce overhead of the inital patch.\n> > >\n> > > This might be obvious to some, but the patch got a lot larger. :-(\n> >\n> > Sorry for that, but updating the field from which the redo manager\n> > should pull its information does indeed touch a lot of files because\n> > most users of xl_info are only interested in the 4 bits reserved for\n> > the redo-manager. Most of 0001 is therefore updates to point code to\n> > the new field in XLogRecord, and renaming the variables and arguments\n> > from info to rminfo.\n> >\n> > [tangent] With that refactoring, I also clean up a lot of code that\n> > was using a wrong macro/constant for rmgr flags; `info &\n> > ~XLR_INFO_MASK` may have the same value as `info &\n> > XLR_RMGR_INFO_MASK`, but that's only guaranteed by the documentation;\n> > and would require the same significant rework if new bits were\n> > assigned to non-XLR_INFO_MASK and non-XLR_RMGR_INFO_MASK. [/tangent]\n> >\n> > 0002 grew a bit as well, but not to a degree that I think is worrying\n> > or otherwise impossible to review.\n>\n> Hi\n>\n> This entry was marked as \"Needs review\" in the CommitFest app but cfbot\n> reports the patch no longer applies.\n>\n> We've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\n> currently underway, this would be an excellent time update the patch.\n>\n> Once you think the patchset is ready for review again, you (or any\n> interested party) can move the patch entry forward by visiting\n>\n> https://commitfest.postgresql.org/40/3882/\n>\n> and changing the status to \"Needs review\".\n\nI was not sure if you will be planning to post an updated version of\npatch as the patch has been awaiting your attention from last\ncommitfest, please post an updated version for it soon or update the\ncommitfest entry accordingly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 16 Jan 2023 19:56:27 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CommandID to heap xlog records"
},
{
"msg_contents": "On Mon, 16 Jan 2023 at 19:56, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 3 Nov 2022 at 15:06, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> >\n> > 2022年9月30日(金) 1:04 Matthias van de Meent <boekewurm+postgres@gmail.com>:\n> > >\n> > > On Wed, 28 Sept 2022 at 19:40, Bruce Momjian <bruce@momjian.us> wrote:\n> > > >\n> > > > On Thu, Sep 22, 2022 at 11:12:32PM +0200, Matthias van de Meent wrote:\n> > > > > On Thu, 8 Sept 2022 at 23:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > > >\n> > > > > > Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > > > > > > Please find attached a patch that adds the CommandId of the inserting\n> > > > > > > transaction to heap (batch)insert, update and delete records. It is\n> > > > > > > based on the changes we made in the fork we maintain for Neon.\n> > > > > >\n> > > > > > This seems like a very significant cost increment with returns\n> > > > > > to only a minuscule number of users. We certainly cannot consider\n> > > > > > it unless you provide some evidence that that impression is wrong.\n> > > > >\n> > > > > Attached a proposed set of patches to reduce overhead of the inital patch.\n> > > >\n> > > > This might be obvious to some, but the patch got a lot larger. :-(\n> > >\n> > > Sorry for that, but updating the field from which the redo manager\n> > > should pull its information does indeed touch a lot of files because\n> > > most users of xl_info are only interested in the 4 bits reserved for\n> > > the redo-manager. Most of 0001 is therefore updates to point code to\n> > > the new field in XLogRecord, and renaming the variables and arguments\n> > > from info to rminfo.\n> > >\n> > > [tangent] With that refactoring, I also clean up a lot of code that\n> > > was using a wrong macro/constant for rmgr flags; `info &\n> > > ~XLR_INFO_MASK` may have the same value as `info &\n> > > XLR_RMGR_INFO_MASK`, but that's only guaranteed by the documentation;\n> > > and would require the same significant rework if new bits were\n> > > assigned to non-XLR_INFO_MASK and non-XLR_RMGR_INFO_MASK. [/tangent]\n> > >\n> > > 0002 grew a bit as well, but not to a degree that I think is worrying\n> > > or otherwise impossible to review.\n> >\n> > Hi\n> >\n> > This entry was marked as \"Needs review\" in the CommitFest app but cfbot\n> > reports the patch no longer applies.\n> >\n> > We've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\n> > currently underway, this would be an excellent time update the patch.\n> >\n> > Once you think the patchset is ready for review again, you (or any\n> > interested party) can move the patch entry forward by visiting\n> >\n> > https://commitfest.postgresql.org/40/3882/\n> >\n> > and changing the status to \"Needs review\".\n>\n> I was not sure if you will be planning to post an updated version of\n> patch as the patch has been awaiting your attention from last\n> commitfest, please post an updated version for it soon or update the\n> commitfest entry accordingly.\n\nThere has been no updates on this thread for some time, so this has\nbeen switched as Returned with Feedback. Feel free to open it in the\nnext commitfest if you plan to continue on this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 31 Jan 2023 23:18:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CommandID to heap xlog records"
},
{
"msg_contents": "I took another stab at this from a different angle, and tried to use \nthis to simplify logical decoding. The theory was that if we included \nthe command ID in the WAL records, we wouldn't need the separate \nHEAP2_NEW_CID record anymore, and could remove much of the code in \nreorderbuffer.c that's concerned with tracking ctid->(cmin,cmax) \nmapping. Unfortunately, it didn't work out.\n\nHere's one problem:\n\nInsert with cmin 1\nCommit\nDelete the same tuple with cmax 2.\nAbort\n\nEven if we store the cmin in the INSERT record, and set it on the tuple \non replay, the DELETE overwrites it. That's OK for the original \ntransactions, because they only look at the cmin/cmax of their own \ntransaction, but it's a problem for logical decoding. If we see the \ninserted tuple during logical decoding, we need the cmin of the tuple.\n\nWe could still just replace the HEAP2_NEW_CID records with the CIDs in \nthe heap INSERT/UPDATE/DELETE records, and use that information to \nmaintain the ctid->(cmin,cmax) mapping in reorderbuffer.c like we do \ntoday. But that doesn't really simplify reorderbuffer.c much. Attached \nis a patch for that, for the archives sake.\n\nAnother problem with that is that logical decoding needs slightly \ndifferent information than what we store on the tuples on disk. My \noriginal motivation for this was for Neon, which needs the WAL replay to \nrestore the same CID as what's stored on disk, whether it's cmin, cmax \nor combocid. But for logical decoding, we need the cmin or cmax, *not* \nthe combocid. To cater for both uses, we'd need to include both the \noriginal cmin/cmax and the possible combocid, which again makes it more \ncomplicated.\n\nSo unfortunately I don't see much opportunity to simplify logical \ndecoding with this. However, please take a look at the first two patches \nattached. They're tiny cleanups that make sense on their own.\n\n- Heikki",
"msg_date": "Tue, 28 Feb 2023 15:52:26 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Adding CommandID to heap xlog records"
},
{
"msg_contents": "On 28/02/2023 15:52, Heikki Linnakangas wrote:\n> So unfortunately I don't see much opportunity to simplify logical\n> decoding with this. However, please take a look at the first two patches\n> attached. They're tiny cleanups that make sense on their own.\n\nRebased these small patches. I'll add this to the commitfest.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Mon, 26 Jun 2023 09:57:56 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Improve comment on cid mapping (was Re: Adding CommandID to heap xlog\n records)"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-26 09:57:56 +0300, Heikki Linnakangas wrote:\n> diff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c\n> index 0786bb0ab7..e403feeccd 100644\n> --- a/src/backend/replication/logical/snapbuild.c\n> +++ b/src/backend/replication/logical/snapbuild.c\n> @@ -41,10 +41,15 @@\n> * transactions we need Snapshots that see intermediate versions of the\n> * catalog in a transaction. During normal operation this is achieved by using\n> * CommandIds/cmin/cmax. The problem with that however is that for space\n> - * efficiency reasons only one value of that is stored\n> - * (cf. combocid.c). Since combo CIDs are only available in memory we log\n> - * additional information which allows us to get the original (cmin, cmax)\n> - * pair during visibility checks. Check the reorderbuffer.c's comment above\n> + * efficiency reasons, the cmin and cmax are not included in WAL records. We\n> + * cannot read the cmin/cmax from the tuple itself, either, because it is\n> + * reset on crash recovery. Even if we could, we could not decode combocids\n> + * which are only tracked in the original backend's memory. To work around\n> + * that, heapam writes an extra WAL record (XLOG_HEAP2_NEW_CID) every time a\n> + * catalog row is modified, which includes the cmin and cmax of the\n> + * tuple. During decoding, we insert the ctid->(cmin,cmax) mappings into the\n> + * reorder buffer, and use them at visibility checks instead of the cmin/cmax\n> + * on the tuple itself. Check the reorderbuffer.c's comment above\n> * ResolveCminCmaxDuringDecoding() for details.\n> *\n> * To facilitate all this we need our own visibility routine, as the normal\n> -- \n> 2.30.2\n\nLGTM\n\n\n> From 9140a0d98fd21b595eac6d111175521a6b1a9f1b Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Mon, 26 Jun 2023 09:56:02 +0300\n> Subject: [PATCH v2 2/2] Remove redundant check for fast_forward.\n> \n> We already checked for it earlier in the function.\n> \n> Discussion: https://www.postgresql.org/message-id/1ba2899e-77f8-7866-79e5-f3b7d1251a3e@iki.fi\n> ---\n> src/backend/replication/logical/decode.c | 3 +--\n> 1 file changed, 1 insertion(+), 2 deletions(-)\n> \n> diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c\n> index d91055a440..7039d425e2 100644\n> --- a/src/backend/replication/logical/decode.c\n> +++ b/src/backend/replication/logical/decode.c\n> @@ -422,8 +422,7 @@ heap2_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n> \tswitch (info)\n> \t{\n> \t\tcase XLOG_HEAP2_MULTI_INSERT:\n> -\t\t\tif (!ctx->fast_forward &&\n> -\t\t\t\tSnapBuildProcessChange(builder, xid, buf->origptr))\n> +\t\t\tif (SnapBuildProcessChange(builder, xid, buf->origptr))\n> \t\t\t\tDecodeMultiInsert(ctx, buf);\n> \t\t\tbreak;\n> \t\tcase XLOG_HEAP2_NEW_CID:\n> -- \n> 2.30.2\n\nLGTM^2\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jun 2023 14:15:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve comment on cid mapping (was Re: Adding CommandID to heap\n xlog records)"
}
] |
[
{
"msg_contents": "Hi!\n\nI created a patch for improving MERGE tab completion.\nCurrently there is a problem with \"MERGE INTO dst as d Using src as s ON \nd.key = s.key WHEN <tab>\" is typed, \"MATCHED\" and \"NOT MATCHED\" is not \ncompleted.\nThere is also a problem that typing \"MERGE INTO a AS <tab>\" completes \n\"USING\".\nThis patch solves the above problems.\n\nRegards,\nKotaro Kawamoto",
"msg_date": "Fri, 09 Sep 2022 11:18:13 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "[PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "On 2022-09-09 11:18, bt22kawamotok wrote:\n\n> I created a patch for improving MARGE tab completion.\n> Currently there is a problem with \"MERGE INTO dst as d Using src as s\n> ON d.key = s.key WHEN <tab>\" is typed, \"MATCHED\" and \"NOT MATCHED\" is\n> not completed.\n> There is also a problem that typing \"MERGE INTO a AS <tab>\" completes \n> \"USING\".\n> This patch solves the above problems.\n\nThanks for the patch!\n\n\telse if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny, \"WHEN\"))\n\t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n\telse if (TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\", MatchAny, \n\"WHEN\"))\n\t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n\telse if (TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny, \n\"WHEN\"))\n\t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n\nI thought it would be better to describe this section as follows, \nsummarizing the conditions\n\n\telse if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny, \"WHEN\") ||\n\t\t\t TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\", MatchAny, \n\"WHEN\") ||\n\t\t\t TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny, \"WHEN\"))\n\t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n\nThere are similar redundancies in the tab completion of MERGE statement, \nso why not fix that as well?\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 09 Sep 2022 20:55:43 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "> \telse if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny, \"WHEN\"))\n> \t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n> \telse if (TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\",\n> MatchAny, \"WHEN\"))\n> \t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n> \telse if (TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny, \n> \"WHEN\"))\n> \t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n> \n> I thought it would be better to describe this section as follows,\n> summarizing the conditions\n> \n> \telse if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny, \"WHEN\") ||\n> \t\t\t TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\", MatchAny, \n> \"WHEN\") ||\n> \t\t\t TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny, \"WHEN\"))\n> \t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n> \n> There are similar redundancies in the tab completion of MERGE\n> statement, so why not fix that as well?\n\nThanks for your review.\n\nA new patch has been created to reflect the changes you indicated.\nI also found a problem that \"DO NOTHING\" is not completed when type \n\"WHEN MATCHED <tab>\", so I fixed it as well.\n\nRegards,\n\nKotaro Kawamoto",
"msg_date": "Mon, 12 Sep 2022 15:53:46 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "On 2022-09-12 15:53, bt22kawamotok wrote:\n>> \telse if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny, \"WHEN\"))\n>> \t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n>> \telse if (TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\",\n>> MatchAny, \"WHEN\"))\n>> \t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n>> \telse if (TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny, \n>> \"WHEN\"))\n>> \t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n>> \n>> I thought it would be better to describe this section as follows,\n>> summarizing the conditions\n>> \n>> \telse if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny, \"WHEN\") ||\n>> \t\t\t TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\", MatchAny, \n>> \"WHEN\") ||\n>> \t\t\t TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny, \"WHEN\"))\n>> \t\tCOMPLETE_WITH(\"MATCHED\", \"NOT MATCHED\");\n>> \n>> There are similar redundancies in the tab completion of MERGE\n>> statement, so why not fix that as well?\n> \n> Thanks for your review.\n> \n> A new patch has been created to reflect the changes you indicated.\n\nThanks for updating!\n\nCompile errors have occurred, so can you fix them?\nAnd I think we can eliminate similar redundancies in MERGE tab \ncompletion and I would like you to fix them.\n\nFor example,\n\n\telse if (TailMatches(\"WHEN\", \"MATCHED\"))\n\t\tCOMPLETE_WITH(\"THEN\", \"AND\");\n\telse if (TailMatches(\"WHEN\", \"NOT\", \"MATCHED\"))\n\t\tCOMPLETE_WITH(\"THEN\", \"AND\");\n\nabove statement can be converted to the statement below.\n\n\telse if (TailMatches(\"WHEN\", \"MATCHED\") ||\n\t\t\t TailMatches(\"WHEN\", \"NOT\", \"MATCHED\"))\n\t\tCOMPLETE_WITH(\"THEN\", \"AND\");\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 12 Sep 2022 17:03:25 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "> Thanks for updating!\n> \n> Compile errors have occurred, so can you fix them?\n> And I think we can eliminate similar redundancies in MERGE tab\n> completion and I would like you to fix them.\n> \n> For example,\n> \n> \telse if (TailMatches(\"WHEN\", \"MATCHED\"))\n> \t\tCOMPLETE_WITH(\"THEN\", \"AND\");\n> \telse if (TailMatches(\"WHEN\", \"NOT\", \"MATCHED\"))\n> \t\tCOMPLETE_WITH(\"THEN\", \"AND\");\n> \n> above statement can be converted to the statement below.\n> \n> \telse if (TailMatches(\"WHEN\", \"MATCHED\") ||\n> \t\t\t TailMatches(\"WHEN\", \"NOT\", \"MATCHED\"))\n> \t\tCOMPLETE_WITH(\"THEN\", \"AND\");\n\nSorry for making such a silly mistake.\nI create new path.\nPlease reviewing.\n\nKotaro Kawamoto",
"msg_date": "Mon, 12 Sep 2022 17:25:44 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "Other than this correction, the parts that can be put together in OR \nwere corrected in fix_tab_completion_merge_v4.patch.\n\nKotaro Kawamoto",
"msg_date": "Mon, 12 Sep 2022 18:20:30 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "On 2022-09-12 18:20, bt22kawamotok wrote:\n> Other than this correction, the parts that can be put together in OR\n> were corrected in fix_tab_completion_merge_v4.patch.\n\nWhen I tried to apply this patch, I got the following warning, please \nfix it.\nOther than that, I think everything is fine.\n\n$ git apply fix_tab_completion_merge_v4.patch\nfix_tab_completion_merge_v4.patch:38: trailing whitespace.\n else if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny) ||\nfix_tab_completion_merge_v4.patch:39: indent with spaces.\n TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\", \nMatchAny) ||\nfix_tab_completion_merge_v4.patch:40: indent with spaces.\n TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", \nMatchAny))\nfix_tab_completion_merge_v4.patch:53: trailing whitespace.\n else if (TailMatches(\"WHEN\", \"MATCHED\") ||\nwarning: 4 lines add whitespace errors.\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 13 Sep 2022 13:56:44 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "> When I tried to apply this patch, I got the following warning, please \n> fix it.\n> Other than that, I think everything is fine.\n> \n> $ git apply fix_tab_completion_merge_v4.patch\n> fix_tab_completion_merge_v4.patch:38: trailing whitespace.\n> else if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny) ||\n> fix_tab_completion_merge_v4.patch:39: indent with spaces.\n> TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\",\n> MatchAny) ||\n> fix_tab_completion_merge_v4.patch:40: indent with spaces.\n> TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", \n> MatchAny))\n> fix_tab_completion_merge_v4.patch:53: trailing whitespace.\n> else if (TailMatches(\"WHEN\", \"MATCHED\") ||\n> warning: 4 lines add whitespace errors.\n\nThanks for reviewing.\n\nI fixed the problem and make patch v5.\nPlease check it.\n\nRegards,\n\nKotaro Kawamoto",
"msg_date": "Wed, 14 Sep 2022 14:08:07 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "\n\nOn 2022/09/14 14:08, bt22kawamotok wrote:\n>> When I tried to apply this patch, I got the following warning, please fix it.\n>> Other than that, I think everything is fine.\n>>\n>> $ git apply fix_tab_completion_merge_v4.patch\n>> fix_tab_completion_merge_v4.patch:38: trailing whitespace.\n>> else if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny) ||\n>> fix_tab_completion_merge_v4.patch:39: indent with spaces.\n>> TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\",\n>> MatchAny) ||\n>> fix_tab_completion_merge_v4.patch:40: indent with spaces.\n>> TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny))\n>> fix_tab_completion_merge_v4.patch:53: trailing whitespace.\n>> else if (TailMatches(\"WHEN\", \"MATCHED\") ||\n>> warning: 4 lines add whitespace errors.\n> \n> Thanks for reviewing.\n> \n> I fixed the problem and make patch v5.\n> Please check it.\n\nThanks for updating the patch!\n\n+\telse if (TailMatches(\"MERGE\", \"INTO\", MatchAny, \"USING\") ||\n+\t\t\t TailMatches(\"MERGE\", \"INTO\", MatchAny, MatchAny, \"USING\") ||\n+\t\t\t TailMatches(\"MERGE\", \"INTO\", MatchAny, \"AS\", MatchAny, \"USING\"))\n \t\tCOMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);\n\n+\telse if (TailMatches(\"MERGE\", \"INTO\", MatchAny, \"AS\", MatchAny, \"USING\") ||\n+\t\t\t TailMatches(\"MERGE\", \"INTO\", MatchAny, MatchAny, \"USING\"))\n \t\tCOMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);\n\nThe latter seems redundant and can be removed. The former seems to\ncover all the cases where the latter covers.\n\n\nNot only table but also view, foreign table, etc can be specified after\nUSING in MERGE command. So ISTM that Query_for_list_of_selectables\nshould be used at the above tab-completion, instead of Query_for_list_of_tables.\nThought?\n\n\n+\telse if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny) ||\n+\t\t\t TailMatches(\"USING\", MatchAny, \"ON\", MatchAny, MatchAnyExcept(\"When\"), MatchAnyExcept(\"When\")) ||\n+\t\t\t TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\", MatchAny) ||\n+\t\t\t TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\", MatchAny, MatchAnyExcept(\"When\"), MatchAnyExcept(\"When\")) ||\n+\t\t\t TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny) ||\n+\t\t\t TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny, MatchAnyExcept(\"When\"), MatchAnyExcept(\"When\")))\n\n\"When\" should be \"WHEN\"?\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 14 Sep 2022 17:15:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "> +\telse if (TailMatches(\"MERGE\", \"INTO\", MatchAny, \"USING\") ||\n> +\t\t\t TailMatches(\"MERGE\", \"INTO\", MatchAny, MatchAny, \"USING\") ||\n> +\t\t\t TailMatches(\"MERGE\", \"INTO\", MatchAny, \"AS\", MatchAny, \"USING\"))\n> \t\tCOMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);\n> \n> +\telse if (TailMatches(\"MERGE\", \"INTO\", MatchAny, \"AS\", MatchAny, \n> \"USING\") ||\n> +\t\t\t TailMatches(\"MERGE\", \"INTO\", MatchAny, MatchAny, \"USING\"))\n> \t\tCOMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);\n> \n> The latter seems redundant and can be removed. The former seems to\n> cover all the cases where the latter covers.\n\n> +\telse if (TailMatches(\"USING\", MatchAny, \"ON\", MatchAny) ||\n> +\t\t\t TailMatches(\"USING\", MatchAny, \"ON\", MatchAny,\n> MatchAnyExcept(\"When\"), MatchAnyExcept(\"When\")) ||\n> +\t\t\t TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\", MatchAny) ||\n> +\t\t\t TailMatches(\"USING\", MatchAny, \"AS\", MatchAny, \"ON\", MatchAny,\n> MatchAnyExcept(\"When\"), MatchAnyExcept(\"When\")) ||\n> +\t\t\t TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny) ||\n> +\t\t\t TailMatches(\"USING\", MatchAny, MatchAny, \"ON\", MatchAny,\n> MatchAnyExcept(\"When\"), MatchAnyExcept(\"When\")))\n> \n> \"When\" should be \"WHEN\"?\n> \n> \n> Regards,\n\nThanks for reviewing.\n\nSorry for making such a simple mistake.\nI fixed it in v6.\n\n> Not only table but also view, foreign table, etc can be specified after\n> USING in MERGE command. So ISTM that Query_for_list_of_selectables\n> should be used at the above tab-completion, instead of \n> Query_for_list_of_tables.\n> Thought?\n\nThat's nice idea!\nI took that in v6.\n\nRegards,\n\nKotaro Kawamoto",
"msg_date": "Wed, 14 Sep 2022 18:12:52 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "On 2022-09-14 18:12, bt22kawamotok wrote:\n\n> I fixed it in v6.\n\nThanks for updating.\n\n+\t\tCOMPLETE_WITH(\"UPDATE\", \"DELETE\", \"DO NOTHING\");\n\n\"UPDATE\" is always followed by \"SET\", so why not complement it with \n\"UPDATE SET\"?\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 15 Sep 2022 09:57:55 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "> Thanks for updating.\n> \n> +\t\tCOMPLETE_WITH(\"UPDATE\", \"DELETE\", \"DO NOTHING\");\n> \n> \"UPDATE\" is always followed by \"SET\", so why not complement it with\n> \"UPDATE SET\"?\n\nThanks for reviewing.\nThat's a good idea!\nI create new patch v7.\n\nRegards,\n\nKotaro Kawamoto",
"msg_date": "Fri, 16 Sep 2022 11:46:14 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "On 2022/09/16 11:46, bt22kawamotok wrote:\n>> Thanks for updating.\n>>\n>> + COMPLETE_WITH(\"UPDATE\", \"DELETE\", \"DO NOTHING\");\n>>\n>> \"UPDATE\" is always followed by \"SET\", so why not complement it with\n>> \"UPDATE SET\"?\n> \n> Thanks for reviewing.\n> That's a good idea!\n> I create new patch v7.\n\nThanks for updating the patch!\n\nI applied the following changes to the patch. Attached is the updated version of the patch.\n\nThe tab-completion code for MERGE was added in the middle of that for LOCK TABLE.\nThis would be an oversight of the commit that originally supported tab-completion\nfor MERGE. I fixed this issue.\n\n+\telse if (TailMatches(\"MERGE\", \"INTO\", MatchAny, \"AS\", MatchAny) ||\n+\t\t\t TailMatches(\"MERGE\", \"INTO\", MatchAny, MatchAnyExcept(\"AS\")))\n \t\tCOMPLETE_WITH(\"USING\");\n\nThis can cause to complete \"MERGE INTO <table> USING\" with \"USING\" unexpectedly.\nI fixed this issue by replacing MatchAnyExcept(\"AS\") with MatchAnyExcept(\"USING|AS\").\n\nI added some comments.\n\n\"MERGE\" was tab-completed with just after \"EXPLAIN\" or \"EXPLAIN ANALYZE\", etc.\nSince \"INTO\" always follows \"MERGE\", it's better to complete with \"MERGE INTO\"\nthere. I replaced \"MERGE\" with \"MERGE INTO\" in those tab-completions.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Sun, 18 Sep 2022 14:29:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "On 2022-Sep-18, Fujii Masao wrote:\n\n> The tab-completion code for MERGE was added in the middle of that for LOCK TABLE.\n> This would be an oversight of the commit that originally supported tab-completion\n> for MERGE. I fixed this issue.\n\nArgh, thanks.\n\n> \"MERGE\" was tab-completed with just after \"EXPLAIN\" or \"EXPLAIN ANALYZE\", etc.\n> Since \"INTO\" always follows \"MERGE\", it's better to complete with \"MERGE INTO\"\n> there. I replaced \"MERGE\" with \"MERGE INTO\" in those tab-completions.\n\nOK, that would be similar to REFRESH MATERIALIZED VIEW.\n\nThe rules starting at line 4111 make me a bit nervous, since nowhere\nwe're restricting them to operating only on MERGE lines. I don't think\nit's a real problem since USING is not terribly common anyway. Likewise\nfor the ones with WHEN [NOT] MATCHED. I kinda wish we had a way to\nsearch for stuff like \"keyword MERGE appears earlier in the command\",\nbut we don't have that.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I'm always right, but sometimes I'm more right than other times.\"\n (Linus Torvalds)\n\n\n",
"msg_date": "Tue, 20 Sep 2022 17:51:05 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "On 2022/09/21 0:51, Alvaro Herrera wrote:\n> The rules starting at line 4111 make me a bit nervous, since nowhere\n> we're restricting them to operating only on MERGE lines. I don't think\n> it's a real problem since USING is not terribly common anyway. Likewise\n> for the ones with WHEN [NOT] MATCHED. I kinda wish we had a way to\n> search for stuff like \"keyword MERGE appears earlier in the command\",\n> but we don't have that.\n\nYeah, I was thinking the same when updating the patch.\n\nHow about adding something like PartialMatches() that checks whether\nthe keywords are included in the input string or not? If so, we can restrict\nsome tab-completion rules to operating only on MERGE, as follows. I attached\nthe WIP patch (0002 patch) that introduces PartialMatches().\nIs this approach over-complicated? Thought?\n\n+\telse if (PartialMatches(\"MERGE\", \"INTO\", MatchAny, \"USING\") ||\n+\t\t\t PartialMatches(\"MERGE\", \"INTO\", MatchAny, \"AS\", MatchAny, \"USING\") ||\n+\t\t\t PartialMatches(\"MERGE\", \"INTO\", MatchAny, MatchAny, \"USING\"))\n+\t{\n+\t\t/* Complete MERGE INTO ... ON with target table attributes */\n+\t\tif (TailMatches(\"INTO\", MatchAny, \"USING\", MatchAny, \"ON\"))\n+\t\t\tCOMPLETE_WITH_ATTR(prev4_wd);\n+\t\telse if (TailMatches(\"INTO\", MatchAny, \"AS\", MatchAny, \"USING\", MatchAny, \"AS\", MatchAny, \"ON\"))\n+\t\t\tCOMPLETE_WITH_ATTR(prev8_wd);\n+\t\telse if (TailMatches(\"INTO\", MatchAny, MatchAny, \"USING\", MatchAny, MatchAny, \"ON\"))\n+\t\t\tCOMPLETE_WITH_ATTR(prev6_wd);\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 21 Sep 2022 14:25:08 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "On 2022-Sep-21, Fujii Masao wrote:\n\n> How about adding something like PartialMatches() that checks whether\n> the keywords are included in the input string or not? If so, we can restrict\n> some tab-completion rules to operating only on MERGE, as follows. I attached\n> the WIP patch (0002 patch) that introduces PartialMatches().\n> Is this approach over-complicated? Thought?\n\nI think it's fine to backpatch your 0001 to 15 and put 0002 in master.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu. Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)\n\n\n",
"msg_date": "Wed, 21 Sep 2022 12:40:00 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "On Wed, 21 Sept 2022 at 10:55, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2022/09/21 0:51, Alvaro Herrera wrote:\n> > The rules starting at line 4111 make me a bit nervous, since nowhere\n> > we're restricting them to operating only on MERGE lines. I don't think\n> > it's a real problem since USING is not terribly common anyway. Likewise\n> > for the ones with WHEN [NOT] MATCHED. I kinda wish we had a way to\n> > search for stuff like \"keyword MERGE appears earlier in the command\",\n> > but we don't have that.\n>\n> Yeah, I was thinking the same when updating the patch.\n>\n> How about adding something like PartialMatches() that checks whether\n> the keywords are included in the input string or not? If so, we can restrict\n> some tab-completion rules to operating only on MERGE, as follows. I attached\n> the WIP patch (0002 patch) that introduces PartialMatches().\n> Is this approach over-complicated? Thought?\n>\n> + else if (PartialMatches(\"MERGE\", \"INTO\", MatchAny, \"USING\") ||\n> + PartialMatches(\"MERGE\", \"INTO\", MatchAny, \"AS\", MatchAny, \"USING\") ||\n> + PartialMatches(\"MERGE\", \"INTO\", MatchAny, MatchAny, \"USING\"))\n> + {\n> + /* Complete MERGE INTO ... 
ON with target table attributes */\n> + if (TailMatches(\"INTO\", MatchAny, \"USING\", MatchAny, \"ON\"))\n> + COMPLETE_WITH_ATTR(prev4_wd);\n> + else if (TailMatches(\"INTO\", MatchAny, \"AS\", MatchAny, \"USING\", MatchAny, \"AS\", MatchAny, \"ON\"))\n> + COMPLETE_WITH_ATTR(prev8_wd);\n> + else if (TailMatches(\"INTO\", MatchAny, MatchAny, \"USING\", MatchAny, MatchAny, \"ON\"))\n> + COMPLETE_WITH_ATTR(prev6_wd);\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\ne351f85418313e97c203c73181757a007dfda6d0 ===\n=== applying patch ./v9-0001-psql-Improve-tab-completion-for-MERGE.patch\npatching file src/bin/psql/tab-complete.c\nHunk #1 FAILED at 1669.\nHunk #2 FAILED at 3641.\nHunk #3 FAILED at 3660.\nHunk #4 FAILED at 4065.\n4 out of 4 hunks FAILED -- saving rejects to file\nsrc/bin/psql/tab-complete.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3890.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 18:00:35 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 12:30, vignesh C <vignesh21@gmail.com> wrote:\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n>\n\nThis is because 0001 has been committed.\nRe-uploading 0002, to keep the CF-bot happy.\n\nReviewing 0002...\n\nI'm not entirely convinced that the PartialMatches() changes are\nreally necessary. As far as I can see USING followed by ON doesn't\nappear anywhere else in the grammar, and the later completions\ninvolving WHEN [NOT] MATCHED are definitely unique to MERGE.\nNonetheless, I guess it's not a bad thing to check that it really is a\nMERGE. Also the new matching function might prove useful for other\ncases.\n\nSome more detailed code comments:\n\nI find the name PartialMatches() a little off, since \"partial\" doesn't\nreally accurately describe what it's doing. HeadMatches() and\nTailMatches() are also partial matches (matching the head and tail\nparts). So perhaps MidMatches() would be a better name.\n\nI also found the comment description of PartialMatchesImpl() misleading:\n\n/*\n * Implementation of PartialMatches and PartialMatchesCS macros: do parts of\n * the words in previous_words match the variadic arguments?\n */\n\nI think a more accurate description would be:\n\n/*\n * Implementation of MidMatches and MidMatchesCS macros: do any N consecutive\n * words in previous_words match the variadic arguments?\n */\n\nSimilarly, instead of:\n\n /* Match N words on the line partially, case-insensitively. */\n\nhow about:\n\n /* Match N consecutive words anywhere on the line, case-insensitively. 
*/\n\nIn PartialMatchesImpl()'s main loop:\n\n if (previous_words_count - startpos < narg)\n {\n va_end(args);\n return false;\n }\n\ncouldn't that just be built into the loop's termination clause (i.e.,\nloop while startpos <= previous_words_count - narg)?\n\nFor the first block of changes using the new function:\n\n else if (PartialMatches(\"MERGE\", \"INTO\", MatchAny, \"USING\") ||\n PartialMatches(\"MERGE\", \"INTO\", MatchAny, \"AS\", MatchAny,\n\"USING\") ||\n PartialMatches(\"MERGE\", \"INTO\", MatchAny, MatchAny, \"USING\"))\n {\n /* Complete MERGE INTO ... ON with target table attributes */\n if (TailMatches(\"INTO\", MatchAny, \"USING\", MatchAny, \"ON\"))\n COMPLETE_WITH_ATTR(prev4_wd);\n else if (TailMatches(\"INTO\", MatchAny, \"AS\", MatchAny,\n\"USING\", MatchAny, \"AS\", MatchAny, \"ON\"))\n COMPLETE_WITH_ATTR(prev8_wd);\n else if (TailMatches(\"INTO\", MatchAny, MatchAny, \"USING\",\nMatchAny, MatchAny, \"ON\"))\n COMPLETE_WITH_ATTR(prev6_wd);\n\nwouldn't it be simpler to just include \"MERGE\" in the TailMatches()\narguments, and leave these 3 cases outside the new code block. I.e.:\n\n /* Complete MERGE INTO ... ON with target table attributes */\n else if (TailMatches(\"MERGE\", \"INTO\", MatchAny, \"USING\", MatchAny, \"ON\"))\n COMPLETE_WITH_ATTR(prev4_wd);\n else if (TailMatches(\"MERGE\", \"INTO\", MatchAny, \"AS\", MatchAny,\n\"USING\", MatchAny, \"AS\", MatchAny, \"ON\"))\n COMPLETE_WITH_ATTR(prev8_wd);\n else if (TailMatches(\"MERGE\", \"INTO\", MatchAny, MatchAny, \"USING\",\nMatchAny, MatchAny, \"ON\"))\n COMPLETE_WITH_ATTR(prev6_wd);\n\nRegards,\nDean",
"msg_date": "Tue, 10 Jan 2023 12:24:18 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "It looks like this remaining work isn't going to happen this CF and\ntherefore this release. There hasn't been an update since January when\nDean Rasheed posted a review.\n\nUnless there's any updates soon I'll move this on to the next\ncommitfest or mark it returned with feedback.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Tue, 28 Mar 2023 14:55:54 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "Ah, another thread with a bouncing email address...\nPlease respond to to thread from this point to avoid bounces.\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Tue, 28 Mar 2023 14:58:25 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
},
{
"msg_contents": "> On 28 Mar 2023, at 20:55, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n> \n> It looks like this remaining work isn't going to happen this CF and\n> therefore this release. There hasn't been an update since January when\n> Dean Rasheed posted a review.\n> \n> Unless there's any updates soon I'll move this on to the next\n> commitfest or mark it returned with feedback.\n\nThere are still no updates to this patch or thread, so I'm closing this as\nReturned with Feedback. Please feel free to resubmit to a future CF when there\nis renewed interest in working on this.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 14:34:28 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Feature improvement for MERGE tab completion"
}
] |
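Dean Rasheed's review in the thread above pins down the intended semantics of the proposed matching macro: "do any N consecutive words in previous_words match the variadic arguments?". A minimal sketch of that behavior, for illustration only — the real tab-complete.c code uses C varargs macros and stores words most-recent-first, so the left-to-right word order and the `"MatchAny"` sentinel string here are simplifying assumptions:

```python
def mid_matches(previous_words, *pattern):
    """Return True if any len(pattern) consecutive words in
    previous_words match pattern, case-insensitively.
    The sentinel string "MatchAny" matches any single word."""
    n = len(pattern)
    # Bounds check folded into the loop condition, as the review suggests
    # (loop while startpos <= previous_words_count - narg).
    for start in range(len(previous_words) - n + 1):
        window = previous_words[start:start + n]
        if all(p == "MatchAny" or w.upper() == p.upper()
               for w, p in zip(window, pattern)):
            return True
    return False
```

Under this model a rule like `mid_matches(words, "MERGE", "INTO", "MatchAny", "USING")` fires whether or not the MERGE is preceded by EXPLAIN, which is exactly the "keyword MERGE appears earlier in the command" restriction the thread wants for the USING and WHEN [NOT] MATCHED completions.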
[
{
"msg_contents": "Hi,\n\nWhen implementing the feature to perform streaming logical transactions by\nbackground workers[1], we plan to extend the LOGICAL_REP_MSG_STREAM_ABORT message\nto send the additional \"abort_lsn\" and \"abort_time\" so that we can advance the\norigin lsn in subscriber and can restart streaming from correct position in\ncase of crash.\n\nSince the LOGICAL_REP_MSG_STREAM_ABORT message is changed, we planned to bump\nthe logical replication protocol version. But when reviewing the code, we feel\nit can also work without using a new protocol version if we check the\nnew streaming option value called('parallel'). On publisher, we can check if\nthe streaming option is set the new value('parallel') and only send extra abort\ninformation in this case.\n\nI think it's reasonable to bump the protocol version number if we change any\nprotocol message even if we only add some new fields to the existing message,\nand that's what we've always done.\n\nThe only personal concern is that I didn't find any documentation that clearly\nstated the standard about when to bump logical replication protocol version,\nwhich makes me a little unsure if this is the right thing to do.\n\nSo, I'd like to confirm is it OK to modify or add some fields without bumping\nthe protocol version ? Or it's a standard to bump it if we change any\nprotocol message.\n\nAny hints will be appreciated.\n\n[1] https://www.postgresql.org/message-id/CAA4eK1%2BwyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw%40mail.gmail.com\n\nBest regards,\nHou zj\n\n\n",
"msg_date": "Fri, 9 Sep 2022 05:11:03 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "When should we bump the logical replication protocol version?"
}
] |
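The question in the thread above comes down to whether the extra STREAM_ABORT fields should be gated on the negotiated protocol version, on the streaming option, or both. A rough publisher-side sketch of the combined gate — the constant name and value below are assumptions for illustration; the real protocol version constants live in src/include/replication/logicalproto.h:

```python
# Assumed for illustration; the real constants are in logicalproto.h.
LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM = 4

def send_extra_abort_fields(client_proto_version, streaming_option):
    """Publisher-side gate: include the additional abort_lsn/abort_time
    fields in LOGICAL_REP_MSG_STREAM_ABORT only when the subscriber both
    negotiated a new-enough protocol version and requested
    streaming = 'parallel'."""
    return (client_proto_version >= LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM
            and streaming_option == "parallel")
```

Checking the version as well as the option keeps an older subscriber — which cannot parse the longer message — safe even if it were to request streaming = 'parallel', which is the usual argument for bumping the protocol version whenever a message format changes.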
[
{
"msg_contents": "Hi!\n\nis_superuser function checks whether a user is a superuser or not, and \nis commonly used. However, is_superuser is not documented and is set to \nUNGROUPED in guc.c. I think is_superuser should be added to the \ndocumentation and set to PRESET OPTIONS.What are you thought on this?\n\nRegards,\nKotaro Kawmaoto\n\n\n",
"msg_date": "Fri, 09 Sep 2022 14:28:39 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "is_superuser is not documented"
},
{
"msg_contents": "On Fri, Sep 9, 2022, at 2:28 AM, bt22kawamotok wrote:\n> is_superuser function checks whether a user is a superuser or not, and \n> is commonly used. However, is_superuser is not documented and is set to \n> UNGROUPED in guc.c. I think is_superuser should be added to the \n> documentation and set to PRESET OPTIONS.What are you thought on this?\nThere is no such function. Are you referring to the GUC? I agree that it should\nbe added to the documentation. The main reason is that it is reported by\nParameterStatus along with some other GUCs described in the Preset Options\nsection.\n\npostgres=# \\dfS is_superuser\n List of functions\nSchema | Name | Result data type | Argument data types | Type \n--------+------+------------------+---------------------+------\n(0 rows)\n\npostgres=# SHOW is_superuser;\nis_superuser \n--------------\non\n(1 row)\n\nDo you mind writing a patch?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Fri, Sep 9, 2022, at 2:28 AM, bt22kawamotok wrote:is_superuser function checks whether a user is a superuser or not, and is commonly used. However, is_superuser is not documented and is set to UNGROUPED in guc.c. I think is_superuser should be added to the documentation and set to PRESET OPTIONS.What are you thought on this?There is no such function. Are you referring to the GUC? I agree that it shouldbe added to the documentation. The main reason is that it is reported byParameterStatus along with some other GUCs described in the Preset Optionssection.postgres=# \\dfS is_superuser List of functionsSchema | Name | Result data type | Argument data types | Type --------+------+------------------+---------------------+------(0 rows)postgres=# SHOW is_superuser;is_superuser --------------on(1 row)Do you mind writing a patch?--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 09 Sep 2022 12:47:37 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "\"Euler Taveira\" <euler@eulerto.com> writes:\n> On Fri, Sep 9, 2022, at 2:28 AM, bt22kawamotok wrote:\n>> is_superuser function checks whether a user is a superuser or not, and \n>> is commonly used. However, is_superuser is not documented and is set to \n>> UNGROUPED in guc.c. I think is_superuser should be added to the \n>> documentation and set to PRESET OPTIONS.What are you thought on this?\n\n> There is no such function. Are you referring to the GUC? I agree that it should\n> be added to the documentation.\n\nIf you look at guc.c, it kind of seems intentional that it's undocumented:\n\n /* Not for general use --- used by SET SESSION AUTHORIZATION */\n {\"is_superuser\", PGC_INTERNAL, UNGROUPED,\n gettext_noop(\"Shows whether the current user is a superuser.\"),\n NULL,\n GUC_REPORT | GUC_NO_SHOW_ALL | GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n },\n &session_auth_is_superuser,\n false,\n NULL, NULL, NULL\n\nOn the other hand, it seems pretty silly that it's GUC_REPORT if\nwe want to consider it private. I've not checked the git history,\nbut I bet that flag was added later with no thought about context.\n\nIf we are going to document this then we should at least remove\nthe GUC_NO_SHOW_ALL flag and rewrite the comment. I wonder whether\nthe GUC_NO_RESET_ALL flag is needed either --- seems like the\nPGC_INTERNAL context protects it sufficiently.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 12:56:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "I wrote:\n> On the other hand, it seems pretty silly that it's GUC_REPORT if\n> we want to consider it private. I've not checked the git history,\n> but I bet that flag was added later with no thought about context.\n>\n> If we are going to document this then we should at least remove\n> the GUC_NO_SHOW_ALL flag and rewrite the comment. I wonder whether\n> the GUC_NO_RESET_ALL flag is needed either --- seems like the\n> PGC_INTERNAL context protects it sufficiently.\n\nBTW, \"session_authorization\" has a subset of these same issues:\n\n /* Not for general use --- used by SET SESSION AUTHORIZATION */\n {\"session_authorization\", PGC_USERSET, UNGROUPED,\n gettext_noop(\"Sets the session user name.\"),\n NULL,\n GUC_IS_NAME | GUC_REPORT | GUC_NO_SHOW_ALL | GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE | GUC_NOT_WHILE_SEC_REST\n },\n &session_authorization_string,\n NULL,\n check_session_authorization, assign_session_authorization, NULL\n\nI wonder why this one is marked USERSET where the other is not.\nYou'd think both of them need similar special-casing about how\nto handle SET.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 13:16:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "> On the other hand, it seems pretty silly that it's GUC_REPORT if\n> we want to consider it private. I've not checked the git history,\n> but I bet that flag was added later with no thought about context.\n> \n> If we are going to document this then we should at least remove\n> the GUC_NO_SHOW_ALL flag and rewrite the comment. I wonder whether\n> the GUC_NO_RESET_ALL flag is needed either --- seems like the\n> PGC_INTERNAL context protects it sufficiently.\n\n> I wonder why this one is marked USERSET where the other is not.\n> You'd think both of them need similar special-casing about how\n> to handle SET.\n\nThanks for your review.\n\nI have created a patch in response to your suggestion.\nI wasn't sure about USERSET, so I only created documentation for \nis_superuser.\n\nRegards,\nKotaro Kawamoto.",
"msg_date": "Mon, 12 Sep 2022 17:13:34 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "\n\nOn 2022/09/12 17:13, bt22kawamotok wrote:\n>> On the other hand, it seems pretty silly that it's GUC_REPORT if\n>> we want to consider it private. I've not checked the git history,\n>> but I bet that flag was added later with no thought about context.\n>>\n>> If we are going to document this then we should at least remove\n>> the GUC_NO_SHOW_ALL flag and rewrite the comment. I wonder whether\n>> the GUC_NO_RESET_ALL flag is needed either --- seems like the\n>> PGC_INTERNAL context protects it sufficiently.\n> \n>> I wonder why this one is marked USERSET where the other is not.\n>> You'd think both of them need similar special-casing about how\n>> to handle SET.\n\nSET SESSION AUTHORIZATION command changes the setting of session_authorization\nby calling set_config_option() with PGC_SUSET/USERSET and PGC_S_SESSION in\nExecSetVariableStmt(). So SET SESSION AUTHORIZATION command may fail\nunless session_authorization uses PGC_USERSET.\n\nOTOH, SET SESSION AUTHORIZATION causes to call the assign hook for\nsession_authorization and it changes is_superuser by using PGC_INTERNAL and\nPGC_S_DYNAMIC_DEFAULT. So is_superuser doesn't need to use PGC_USERSET.\n\nI think that session_authorization also can use PGC_INTERNAL if we add\nthe special-handling in SET SESSION AUTHORIZATION command. 
But it seems a bit\noverkill to me.\n\n> Thanks for your review.\n> \n> I have created a patch in response to your suggestion.\n> I wasn't sure about USERSET, so I only created documentation for is_superuser.\n\nThanks for the patch!\n\n\n+ <varlistentry id=\"guc-is-superuser\" xreflabel=\"is_superuser\">\n+ <term><varname>is_superuser</varname> (<type>boolean</type>)\n\nYou need to add this entry just after that of \"in_hot_standby\" because\nthe descriptions of preset parameters should be placed in alphabetical\norder in the docs.\n\n\n+ <para>\n+ Reports whether the user is superuser or not.\n\nIsn't it better to add \"current\" before \"user\", e.g.,\n\"Reports whether the current user is a superuser\"?\n\n\n \t\t/* Not for general use --- used by SET SESSION AUTHORIZATION */\n-\t\t{\"is_superuser\", PGC_INTERNAL, UNGROUPED,\n+\t\t{\"is_superuser\", PGC_INTERNAL, PRESET_OPTIONS,\n\nThis comment should be rewritten or removed because \"Not for general\nuse\" part is not true.\n\n\n-\t\t\tGUC_REPORT | GUC_NO_SHOW_ALL | GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n+\t\t\tGUC_REPORT | GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n\nAs Tom commented upthread, GUC_NO_RESET_ALL flag should be removed\nbecause it's not necessary when PGC_INTERNAL context (i.e., in this context,\nRESET ALL is prohibit by defaulted) is used.\n\n\nWith the patch, make check failed. You need to update src/test/regress/expected/guc.out.\n\n\n <varlistentry>\n <term><literal>IS_SUPERUSER</literal></term>\n <listitem>\n <para>\n True if the current role has superuser privileges.\n </para>\n\nI found that the docs of SET command has the above description about is_superuser.\nThis description seems useless if we document the is_superuser GUC itself. So isn't\nit better to remove this description?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 13 Sep 2022 15:49:50 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "> Thanks for the patch!\n> \n> \n> + <varlistentry id=\"guc-is-superuser\" xreflabel=\"is_superuser\">\n> + <term><varname>is_superuser</varname> (<type>boolean</type>)\n> \n> You need to add this entry just after that of \"in_hot_standby\" because\n> the descriptions of preset parameters should be placed in alphabetical\n> order in the docs.\n> \n> \n> + <para>\n> + Reports whether the user is superuser or not.\n> \n> Isn't it better to add \"current\" before \"user\", e.g.,\n> \"Reports whether the current user is a superuser\"?\n> \n> \n> \t\t/* Not for general use --- used by SET SESSION AUTHORIZATION */\n> -\t\t{\"is_superuser\", PGC_INTERNAL, UNGROUPED,\n> +\t\t{\"is_superuser\", PGC_INTERNAL, PRESET_OPTIONS,\n> \n> This comment should be rewritten or removed because \"Not for general\n> use\" part is not true.\n> \n> \n> -\t\t\tGUC_REPORT | GUC_NO_SHOW_ALL | GUC_NO_RESET_ALL |\n> GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n> +\t\t\tGUC_REPORT | GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE | \n> GUC_DISALLOW_IN_FILE\n> \n> As Tom commented upthread, GUC_NO_RESET_ALL flag should be removed\n> because it's not necessary when PGC_INTERNAL context (i.e., in this \n> context,\n> RESET ALL is prohibit by defaulted) is used.\n> \n> \n> With the patch, make check failed. You need to update\n> src/test/regress/expected/guc.out.\n> \n> \n> <varlistentry>\n> <term><literal>IS_SUPERUSER</literal></term>\n> <listitem>\n> <para>\n> True if the current role has superuser privileges.\n> </para>\n> \n> I found that the docs of SET command has the above description about\n> is_superuser.\n> This description seems useless if we document the is_superuser GUC\n> itself. So isn't\n> it better to remove this description?\n\nThank you for your review.\nI create new patch add_document_is_superuser_v2.\nplease check it.\n\nRegards,\nKotaro Kawamoto",
"msg_date": "Tue, 13 Sep 2022 17:25:26 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "\n\nOn 2022/09/13 17:25, bt22kawamotok wrote:\n> \n>> Thanks for the patch!\n>>\n>>\n>> + <varlistentry id=\"guc-is-superuser\" xreflabel=\"is_superuser\">\n>> + <term><varname>is_superuser</varname> (<type>boolean</type>)\n>>\n>> You need to add this entry just after that of \"in_hot_standby\" because\n>> the descriptions of preset parameters should be placed in alphabetical\n>> order in the docs.\n>>\n>>\n>> + <para>\n>> + Reports whether the user is superuser or not.\n>>\n>> Isn't it better to add \"current\" before \"user\", e.g.,\n>> \"Reports whether the current user is a superuser\"?\n>>\n>>\n>> /* Not for general use --- used by SET SESSION AUTHORIZATION */\n>> - {\"is_superuser\", PGC_INTERNAL, UNGROUPED,\n>> + {\"is_superuser\", PGC_INTERNAL, PRESET_OPTIONS,\n>>\n>> This comment should be rewritten or removed because \"Not for general\n>> use\" part is not true.\n>>\n>>\n>> - GUC_REPORT | GUC_NO_SHOW_ALL | GUC_NO_RESET_ALL |\n>> GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n>> + GUC_REPORT | GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n>>\n>> As Tom commented upthread, GUC_NO_RESET_ALL flag should be removed\n>> because it's not necessary when PGC_INTERNAL context (i.e., in this context,\n>> RESET ALL is prohibit by defaulted) is used.\n>>\n>>\n>> With the patch, make check failed. You need to update\n>> src/test/regress/expected/guc.out.\n>>\n>>\n>> <varlistentry>\n>> <term><literal>IS_SUPERUSER</literal></term>\n>> <listitem>\n>> <para>\n>> True if the current role has superuser privileges.\n>> </para>\n>>\n>> I found that the docs of SET command has the above description about\n>> is_superuser.\n>> This description seems useless if we document the is_superuser GUC\n>> itself. 
So isn't\n>> it better to remove this description?\n> \n> Thank you for your review.\n> I create new patch add_document_is_superuser_v2.\n> please check it.\n\nThanks for updating the patch!\n\nThe patch looks good to me.\n\n-\t\t/* Not for general use --- used by SET SESSION AUTHORIZATION */\n \t\t{\"session_authorization\", PGC_USERSET, UNGROUPED,\n\nIf we don't document session_authorization and do expect that\nit's usually used by SET/SHOW SESSION AUTHORIZATION,\nthis comment doesn't need to be removed, I think.\n\nCould you register this patch into the next Commit Fest\nso that we don't forget it?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 13 Sep 2022 23:00:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "> Thanks for updating the patch!\n> \n> The patch looks good to me.\n> \n> -\t\t/* Not for general use --- used by SET SESSION AUTHORIZATION */\n> \t\t{\"session_authorization\", PGC_USERSET, UNGROUPED,\n> \n> If we don't document session_authorization and do expect that\n> it's usually used by SET/SHOW SESSION AUTHORIZATION,\n> this comment doesn't need to be removed, I think.\n\nThanks for reviewing.\n\nI update patch to reflect master update.\n\n> Could you register this patch into the next Commit Fest\n> so that we don't forget it?\n\nOK. I will register next Commit Fest. Thank you.\n\nRegards,\n\nKotaro Kawamoto",
"msg_date": "Wed, 14 Sep 2022 14:27:25 +0900",
"msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "On 2022/09/14 14:27, bt22kawamotok wrote:\n> I update patch to reflect master update.\n\nThanks for updating the patch!\n\n+ <para>\n+ Shows whether the current user is a superuser or not.\n+ </para>\n\nHow about adding the note about when this parameter can change,\nlike we do for in_hot_standby docs? I applied this change to the patch.\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Sun, 18 Sep 2022 16:42:42 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 11:53 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n>\n> On 2022/09/14 14:27, bt22kawamotok wrote:\n> > I update patch to reflect master update.\n>\n> Thanks for updating the patch!\n>\n> + <para>\n> + Shows whether the current user is a superuser or not.\n> + </para>\n>\n> How about adding the note about when this parameter can change,\n> like we do for in_hot_standby docs? I applied this change to the\npatch.\n> Attached is the updated version of the patch.\n>\n\nI just came across this thread and noticed that the patch was never\nmerged. There is some brief docs for is_superuser in the SHOW docs:\nhttps://www.postgresql.org/docs/current/sql-show.html, but the GUC\nfields were never updated.\n\nIs there a reason that it never got merged or was it just forgotten\nabout?\n\n- Joe Koshakow\n\nOn Thu, Mar 2, 2023 at 11:53 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:>> On 2022/09/14 14:27, bt22kawamotok wrote:> > I update patch to reflect master update.>> Thanks for updating the patch!>> + <para>> + Shows whether the current user is a superuser or not.> + </para>>> How about adding the note about when this parameter can change,> like we do for in_hot_standby docs? I applied this change to the patch.> Attached is the updated version of the patch.>I just came across this thread and noticed that the patch was nevermerged. There is some brief docs for is_superuser in the SHOW docs:https://www.postgresql.org/docs/current/sql-show.html, but the GUCfields were never updated.Is there a reason that it never got merged or was it just forgottenabout?- Joe Koshakow",
"msg_date": "Thu, 2 Mar 2023 12:00:43 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 12:00:43PM -0500, Joseph Koshakow wrote:\n> \n> \n> On Thu, Mar 2, 2023 at 11:53 AM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> >\n> > On 2022/09/14 14:27, bt22kawamotok wrote:\n> > > I update patch to reflect master update.\n> >\n> > Thanks for updating the patch!\n> >\n> > + <para>\n> > + Shows whether the current user is a superuser or not.\n> > + </para>\n> >\n> > How about adding the note about when this parameter can change,\n> > like we do for in_hot_standby docs? I applied this change to the patch.\n> > Attached is the updated version of the patch.\n> >\n> \n> I just came across this thread and noticed that the patch was never\n> merged. There is some brief docs for is_superuser in the SHOW docs:\n> https://www.postgresql.org/docs/current/sql-show.html, but the GUC\n> fields were never updated.\n> \n> Is there a reason that it never got merged or was it just forgotten\n> about?\n\nUh, where are you looking? I see it in the SGML, and in the PG 15 docs:\n\n\thttps://www.postgresql.org/docs/current/sql-show.html\n\n\tIS_SUPERUSER\n\t\n\t True if the current role has superuser privileges.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Wed, 29 Mar 2023 17:21:00 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 5:21 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Mar 2, 2023 at 12:00:43PM -0500, Joseph Koshakow wrote:\n> >\n> >\n> > On Thu, Mar 2, 2023 at 11:53 AM Fujii Masao <\nmasao.fujii@oss.nttdata.com>\n> > wrote:\n> > >\n> > > On 2022/09/14 14:27, bt22kawamotok wrote:\n> > > > I update patch to reflect master update.\n> > >\n> > > Thanks for updating the patch!\n> > >\n> > > + <para>\n> > > + Shows whether the current user is a superuser or not.\n> > > + </para>\n> > >\n> > > How about adding the note about when this parameter can change,\n> > > like we do for in_hot_standby docs? I applied this change to\nthe patch.\n> > > Attached is the updated version of the patch.\n> > >\n> >\n> > I just came across this thread and noticed that the patch was never\n> > merged. There is some brief docs for is_superuser in the SHOW docs:\n> > https://www.postgresql.org/docs/current/sql-show.html, but the GUC\n> > fields were never updated.\n> >\n> > Is there a reason that it never got merged or was it just forgotten\n> > about?\n>\n> Uh, where are you looking? 
I see it in the SGML, and in the PG 15\ndocs:\n>\n> https://www.postgresql.org/docs/current/sql-show.html\n>\n> IS_SUPERUSER\n>\n> True if the current role has superuser privileges.\n\nThe patch updated the guc table for is_superuser in\nsrc/backend/utils/misc/guc_tables.c\n\n - /* Not for general use --- used by SET SESSION AUTHORIZATION */\n - {\"is_superuser\", PGC_INTERNAL, UNGROUPED,\n + {\"is_superuser\", PGC_INTERNAL, PRESET_OPTIONS,\n gettext_noop(\"Shows whether the current user is a superuser.\"),\n NULL,\n - GUC_REPORT | GUC_NO_SHOW_ALL | GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE |\nGUC_DISALLOW_IN_FILE\n + GUC_REPORT | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n\nHowever, when I look at the code on master I don't see this update\n\n /* Not for general use --- used by SET SESSION AUTHORIZATION */\n {\"is_superuser\", PGC_INTERNAL, UNGROUPED,\n gettext_noop(\"Shows whether the current user is a superuser.\"),\n NULL,\n GUC_REPORT | GUC_NO_SHOW_ALL | GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE |\nGUC_DISALLOW_IN_FILE\n\nSimilarly, when running `SHOW ALL` against master I don't see the\nis_superuser variable\n\n $ /usr/local/pgsql/bin/psql -c \"SHOW ALL\" test | grep is_superuser\n $",
"msg_date": "Sat, 1 Apr 2023 09:34:06 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "\n\nOn 2023/04/01 22:34, Joseph Koshakow wrote:\n> The patch updated the guc table for is_superuser in\n> src/backend/utils/misc/guc_tables.c\n\nYes, this patch moves the descriptions of is_superuser to config.sgml\nand changes its group to PRESET_OPTIONS.\n\n> However, when I look at the code on master I don't see this update\n\nYes, the patch has not been committed yet because of lack of review comments.\nDo you have any review comments on this patch?\nOr you think it's ready for committer?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 3 Apr 2023 23:47:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 10:47 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n> Yes, the patch has not been committed yet because of lack of review\ncomments.\n> Do you have any review comments on this patch?\n> Or you think it's ready for committer?\n\nI'm not very familiar with this code, so I'm not sure how much my\nreview is worth, but maybe it will spark some discussion.\n\n> Yes, this patch moves the descriptions of is_superuser to config.sgml\n> and changes its group to PRESET_OPTIONS.\n\nis_superuser feels a little out of place in this file. All of\nthe options here apply to the entire PostgreSQL server, while\nis_superuser only applies to the current session. The description of\nthis file says :\n\n> These options report various aspects of PostgreSQL behavior that\n> might be of interest to certain applications, particularly\n> administrative front-ends. Most of them are determined when\n> PostgreSQL is compiled or when it is installed.\n\nWhich doesn't seem to apply to is_superuser. It doesn't affect\nthe behavior of PostgreSQL, only what the current session is allowed to\ndo. It's also not determined when PostgreSQL is compiled/installed. Is\nthere some update that we can make to the description that would make\nis_superuser fit in better?\n\nI'm not familiar with the origins of is_superuser and it may be too\nlate for this, but it seems like is_superuser would fit in much better\nas a system information function [0] rather than a GUC. Particularly\nin the Session Information Functions.\n\n> - GUC_REPORT | GUC_NO_SHOW_ALL | GUC_NO_RESET_ALL | GUC_NOT_IN_SAMPLE |\nGUC_DISALLOW_IN_FILE\n> + GUC_REPORT | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n\nThis looks good to me. The lack of is_superuser from SHOW ALL has been\na source of confusion to me in the past.\n\nAs a side note server_version, server_encoding, lc_collate, and\nlc_ctype all appear in both the preset options section of config.sgml\nand in show.sgml. 
I'm not sure what the logic is for just including\nthese three parameters in show.sgml, but I think we should either\ninclude all of the preset options or none of them.\n\nThanks,\nJoe Koshakow\n\n[0] https://www.postgresql.org/docs/current/functions-info.html",
"msg_date": "Sat, 8 Apr 2023 10:53:45 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 10:54 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n> is_superuser feels a little out of place in this file. All of\n> the options here apply to the entire PostgreSQL server, while\n> is_superuser only applies to the current session. The description of\n> this file says :\n>\n> > These options report various aspects of PostgreSQL behavior that\n> > might be of interest to certain applications, particularly\n> > administrative front-ends. Most of them are determined when\n> > PostgreSQL is compiled or when it is installed.\n>\n> Which doesn't seem to apply to is_superuser. It doesn't affect\n> the behavior of PostgreSQL, only what the current session is allowed to\n> do.\n\nI'm not sure I agree with that. I mean, maybe the phrasing could be\nimproved somehow, but \"PostgreSQL behavior\" as a category seems to\ninclude whether or not it lets you do certain things. And, for\nexample, psql will show a > or # in the prompt based on whether you're\na superuser. I find \"administrative front-end\" to be a somewhat odd\nturn of phrase, but I guess it means general purpose frontends like\npgsql or pgAdmin or whatever that you might use to administer the\nsystem, as opposed to applications.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Apr 2023 12:47:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "\n\nOn 2023/04/08 23:53, Joseph Koshakow wrote:\n> \n> \n> On Mon, Apr 3, 2023 at 10:47 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> > Yes, the patch has not been committed yet because of lack of review comments.\n> > Do you have any review comments on this patch?\n> > Or you think it's ready for committer?\n> \n> I'm not very familiar with this code, so I'm not sure how much my\n> review is worth, but maybe it will spark some discussion.\n\nThanks for the comments!\n\n\n> > Yes, this patch moves the descriptions of is_superuser to config.sgml\n> > and changes its group to PRESET_OPTIONS.\n> \n> is_superuser feels a little out of place in this file. All of\n> the options here apply to the entire PostgreSQL server, while\n> is_superuser only applies to the current session.\n\nAren't other preset options like lc_collate, lc_ctype and server_encoding\nsimilar to is_superuser? They seem to behave in a similar way as their\nsettings can be different for each connection depending on the connected database.\n\n\n> I'm not familiar with the origins of is_superuser and it may be too\n> late for this, but it seems like is_superuser would fit in much better\n> as a system information function [0] rather than a GUC. Particularly\n> in the Session Information Functions.\n\nI understand your point, but I think it would be more confusing to document\nis_superuser there because it's defined and behaves differently from\nsession information functions like current_user. For instance,\nthe value of is_superuser can be displayed using SHOW command,\nwhile current_user cannot. Therefore, it's better to keep is_superuser\nseparate from the session information functions.\n\n\n> As a side note server_version, server_encoding, lc_collate, and\n> lc_ctype all appear in both the preset options section of config.sgml\n> and in show.sgml. 
I'm not sure what the logic is for just including\n> these three parameters in show.sgml, but I think we should either\n> include all of the preset options or none of them.\n\nAgreed. I think that it's better to just treat them as GUCs and\nremove their descriptions from show.sgml.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 11 Apr 2023 22:37:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 9:37 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n> > > Yes, this patch moves the descriptions of is_superuser to\nconfig.sgml\n> > > and changes its group to PRESET_OPTIONS.\n> >\n> > is_superuser feels a little out of place in this file. All of\n> > the options here apply to the entire PostgreSQL server, while\n> > is_superuser only applies to the current session.\n>\n> Aren't other preset options like lc_collate, lc_ctype and\nserver_encoding\n> similar to is_superuser? They seem to behave in a similar way as their\n> settings can be different for each connection depending on the\nconnected database.\n\nI think the difference is that all of those options are constant for\nall connections to the same database and once the database is created\nthey are immutable. is_superuser is set on a per session basis and can\nbe changed at any time.\n\nLooking through the options it actually looks like all the options are\nset either when the server is built, the server is started, or the\ndatabase is created, and once they're set they become immutable. The\none exception I see is in_hot_standby mode which can be updated from on\nto off (I can't remember off the top of my head if it can be updated\nthe other way). I'm moving the goal post a bit but I think preset may\nimply that the value isn't going to change once it's been set.\n\nHaving said all that I actually think this is the best place for\nis_superuser since it doesn't seem to fit in anywhere else.\n\n> > I'm not familiar with the origins of is_superuser and it may be too\n> > late for this, but it seems like is_superuser would fit in much\nbetter\n> > as a system information function [0] rather than a GUC. Particularly\n> > in the Session Information Functions.\n>\n> I understand your point, but I think it would be more confusing to\ndocument\n> is_superuser there because it's defined and behaves differently from\n> session information functions like current_user. 
For instance,\n> the value of is_superuser can be displayed using SHOW command,\n> while current_user cannot. Therefore, it's better to keep is_superuser\n> separate from the session information functions.\n\nI was implying that I thought it would have made more sense for\nis_superuser to be implemented as a function, behave as a function,\nand not be visible via SHOW. However, there may have been a good reason\nnot to do this and it may already be too late for that.\n\nIn my opinion, this is ready to be committed. However, like I said\nearlier I'm not very familiar with the GUC code so you may want to\nwait for another opinion.\n\nThanks,\nJoe Koshakow",
"msg_date": "Tue, 11 Apr 2023 16:41:47 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "\n\nOn 2023/04/12 5:41, Joseph Koshakow wrote:\n> Having said all that I actually think this is the best place for\n> is_superuser since it doesn't seem to fit in anywhere else.\n\nYeah, I also could not find more appropriate place for is_superuser than there.\n\n\n> I was implying that I thought it would have made more sense for\n> is_superuser to be implemented as a function, behave as a function,\n> and not be visible via SHOW. However, there may have been a good reason\n> not to do this and it may already be too late for that.\n\nThe is_superuser parameter is currently marked as GUC_REPORT and\nits value is automatically reported to a client. If we change\nit to a function, we will need to add functionality to automatically\nreport the return value of the function to a client, which could\nbe overkill.\n\n\n> In my opinion, this is ready to be committed.\n\nThanks! Given that we have already exceeded the feature freeze date,\nI'm thinking to commit this change at the next CommitFest.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 13 Apr 2023 00:03:45 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "I think I may have discovered a reason why is_superuser is\nintentionally undocumented. is_superuser is not updated if a role's\nsuperuser attribute is changed by another session. Therefore,\nis_superuser may show you an incorrect stale value.\n\nPerhaps this can be fixed with a show_hook? Otherwise it's probably\nbest not to document a GUC that can show an incorrect value.\n\n- Joe Koshakow",
"msg_date": "Wed, 7 Jun 2023 10:15:46 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "\n\nOn 2023/06/07 23:15, Joseph Koshakow wrote:\n> I think I may have discovered a reason why is_superuser is\n> intentionally undocumented. is_superuser is not updated if a role's\n> superuser attribute is changed by another session. Therefore,\n> is_superuser may show you an incorrect stale value.\n> \n> Perhaps this can be fixed with a show_hook? Otherwise it's probably\n> best not to document a GUC that can show an incorrect value.\n\nOr we can correct the description of is_superuser, for example,\n\"True if the current role had superuser privileges when it connected to\nthe database. Note that this parameter doesn't always indicate\nthe current superuser status of the role.\"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 8 Jun 2023 00:36:39 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 11:36 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n>\n>\n>\n> On 2023/06/07 23:15, Joseph Koshakow wrote:\n> > I think I may have discovered a reason why is_superuser is\n> > intentionally undocumented. is_superuser is not updated if a role's\n> > superuser attribute is changed by another session. Therefore,\n> > is_superuser may show you an incorrect stale value.\n> >\n> > Perhaps this can be fixed with a show_hook? Otherwise it's probably\n> > best not to document a GUC that can show an incorrect value.\n>\n> Or we can correct the description of is_superuser, for example,\n> \"True if the current role had superuser privileges when it connected to\n> the database. Note that this parameter doesn't always indicate\n> the current superuser status of the role.\"?\n\nThat description isn't exactly accurate either, since is_superuser is\nre-evaluated whenever the role GUC is changed (i.e. through SET ROLE\nor RESET ROLE), and potentially at other times I'm not aware of. I'm\ncurious to hear what others think though, since it seems like a bit of\na footgun. It will be up to the user to understand when `is_superuser`\nis accurate or inaccurate. In most cases it will be impossible for\nthem to know unless they get the same information elsewhere, like\nthrough pg_roles.\n\n\nAs an aside I think there's a similar issue with the\nAuthenticatedUserIsSuperuser static variable. That variable is\ninitialized in miscinit.c in the InitializeSessionUserId function\nbased on whether the session role is a superuser when connecting. Then\nas far as I can tell the variable is never updated.\n\nWhen executing a SET SESSION AUTHORIZATION command, we check if\nAuthenticatedUserIsSuperuser is true to determine if the session is\nallowed to execute the command. 
That check happens in miscinit.c in the\nSetSessionAuthorization function.\n\nThis means that if some role, r, connects as a superuser and then later\nsome other role removes r's superuser attribute, r can always set their\nsession authorization to a different role with the superuser attribute\nto regain superuser privileges. So as long as r maintains an active\nconnection, they can never truly lose their superuser privileges.\n\n- Joe Koshakow",
"msg_date": "Wed, 7 Jun 2023 13:33:03 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is_superuser is not documented"
}
] |
[
{
"msg_contents": "Hello.\n\nWhile I played with some patch, I met an assertion failure.\n\n#2 0x0000000000b350e0 in ExceptionalCondition (\n conditionName=0xbd8970 \"!IsInstallXLogFileSegmentActive()\", \n errorType=0xbd6e11 \"FailedAssertion\", fileName=0xbd6f28 \"xlogrecovery.c\", \n lineNumber=4190) at assert.c:69\n#3 0x0000000000586f9c in XLogFileRead (segno=61, emode=13, tli=1, \n source=XLOG_FROM_ARCHIVE, notfoundOk=true) at xlogrecovery.c:4190\n#4 0x00000000005871d2 in XLogFileReadAnyTLI (segno=61, emode=13, \n source=XLOG_FROM_ANY) at xlogrecovery.c:4296\n#5 0x000000000058656f in WaitForWALToBecomeAvailable (RecPtr=1023410360, \n randAccess=false, fetching_ckpt=false, tliRecPtr=1023410336, replayTLI=1, \n replayLSN=1023410336, nonblocking=false) at xlogrecovery.c:3727\n\nThis is reproducible by the following steps.\n\n1. insert a sleep(1) in WaitForWALToBecomeAvailable().\n> \t\t\t\t\t * WAL that we restore from archive.\n> \t\t\t\t\t */\n> +\t\t\t\t\tsleep(1);\n> \t\t\t\t\tif (WalRcvStreaming())\n> \t\t\t\t\t\tXLogShutdownWalRcv();\n\n2. create a primary with archiving enabled.\n\n3. create a standby recovering from the primary's archive, with an\n unconnectable primary_conninfo.\n\n4. start the primary.\n\n5. switch wal on the primary.\n\n6. Kaboom.\n\nThis is because WaitForWALToBecomeAvailable doesn't call\nXLogShutdownWalRcv() when walreceiver has been stopped before we reach\nthe WalRcvStreaming() call cited above. But we need to set\nInstallXLogFileSegmentActive to false even in that case, since no one\nother than the startup process does that.\n\nUnconditionally calling XLogShutdownWalRcv() fixes it. I feel we might\nneed to correct the dependencies between the flag and walreceiver\nstate, but it is not mandatory because XLogShutdownWalRcv() is designed\nso that it can be called even after walreceiver is stopped. I don't\nhave a clear memory of why we did it that way at the time, but\nrecovery check runs successfully with this.\n\nThis code was introduced in PG12.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 09 Sep 2022 17:29:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Possible crash on standby"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 05:29:49PM +0900, Kyotaro Horiguchi wrote:\n> This is because WaitForWALToBecomeAvailable doesn't call\n> XLogSHutdownWalRcv() when walreceiver has been stopped before we reach\n> the WalRcvStreaming() call cited above. But we need to set\n> InstasllXLogFileSegmentActive to false even in that case, since no one\n> other than startup process does that.\n\nNice find.\n\n> Unconditionally calling XLogShutdownWalRcv() fixes it. I feel we might\n> need to correct the dependencies between the flag and walreceiver\n> state, but it not mandatory because XLogShutdownWalRcv() is designed\n> so that it can be called even after walreceiver is stopped. I don't\n> have a clear memory about why we do that at the time, though, but\n> recovery check runs successfully with this.\n\nI suppose the alternative would be to set InstallXLogFileSegmentActive to\nfalse in an 'else' block, but that doesn't seem necessary if\nXLogShutdownWalRcv() is safe to call unconditionally. So, unless there is\na bigger problem that I'm not seeing, +1 for your patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 10:18:35 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible crash on standby"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 2:00 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> While I played with some patch, I met an assertion failure.\n>\n> #2 0x0000000000b350e0 in ExceptionalCondition (\n> conditionName=0xbd8970 \"!IsInstallXLogFileSegmentActive()\",\n> errorType=0xbd6e11 \"FailedAssertion\", fileName=0xbd6f28 \"xlogrecovery.c\",\n> lineNumber=4190) at assert.c:69\n> #3 0x0000000000586f9c in XLogFileRead (segno=61, emode=13, tli=1,\n> source=XLOG_FROM_ARCHIVE, notfoundOk=true) at xlogrecovery.c:4190\n> #4 0x00000000005871d2 in XLogFileReadAnyTLI (segno=61, emode=13,\n> source=XLOG_FROM_ANY) at xlogrecovery.c:4296\n> #5 0x000000000058656f in WaitForWALToBecomeAvailable (RecPtr=1023410360,\n> randAccess=false, fetching_ckpt=false, tliRecPtr=1023410336, replayTLI=1,\n> replayLSN=1023410336, nonblocking=false) at xlogrecovery.c:3727\n>\n> This is replayable by the following steps.\n>\n> 1. insert a sleep(1) in WaitForWALToBecomeAvailable().\n> > * WAL that we restore from archive.\n> > */\n> > + sleep(1);\n> > if (WalRcvStreaming())\n> > XLogShutdownWalRcv();\n>\n> 2. create a primary with archiving enabled.\n>\n> 3. create a standby with recovering from the primary's archive and\n> unconnectable primary_conninfo.\n>\n> 4. start the primary.\n>\n> 5. switch wal on the primary.\n>\n> 6. Kaboom.\n>\n> This is because WaitForWALToBecomeAvailable doesn't call\n> XLogSHutdownWalRcv() when walreceiver has been stopped before we reach\n> the WalRcvStreaming() call cited above. But we need to set\n> InstasllXLogFileSegmentActive to false even in that case, since no one\n> other than startup process does that.\n>\n> Unconditionally calling XLogShutdownWalRcv() fixes it. I feel we might\n> need to correct the dependencies between the flag and walreceiver\n> state, but it not mandatory because XLogShutdownWalRcv() is designed\n> so that it can be called even after walreceiver is stopped. 
I don't\n> have a clear memory about why we do that at the time, though, but\n> recovery check runs successfully with this.\n>\n> This code was introduced at PG12.\n\nI think it is a duplicate of [1]. I have tested the above use-case\nwith the patch at [1] and it fixes the issue.\n\n[1] https://www.postgresql.org/message-id/CALj2ACXPn_xePphnh88qmoQqqW%2BE2KEOdxGL%2BD-o9o7_XNGkkw%40mail.gmail.com\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 22:51:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible crash on standby"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 10:51:10PM +0530, Bharath Rupireddy wrote:\n> I think it is a duplicate of [1]. I have tested the above use-case\n> with the patch at [1] and it fixes the issue.\n\nI added this thread to the existing commitfest entry. Thanks for pointing\nthis out.\n\n\thttps://commitfest.postgresql.org/39/3814\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 10:26:32 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible crash on standby"
}
] |
[
{
"msg_contents": "Based on work in [1].\nAccording to https://cplusplus.com/reference/cstdio/fprintf/\nThe use of fprintf is related to the need to generate a string based on a\nformat, which should be different from \"%s\".\nSince fprintf has overhead when parsing the \"format\" parameter, plus all\nthe trouble of checking the va_arg parameters.\nI think this is one of the low-hanging fruit available and easy to reap.\nBy replacing fprintf with its equivalents, fputs and fputc,\nwe avoid overhead and increase security [2] and [3].\n\nThe downside is a huge amount of churn, which unfortunately will occur.\nBut, IMHO, I think the advantages are worth it.\nNote that behavior remains the same, since fputs and fputc do not change\nthe expected behavior of fprintf.\n\nA small performance gain is expected, mainly for the client, since there\nare several occurrences in some critical places, such as\n(usr/src/fe_utils/print.c).\n\nPatch attached.\nThis passes check-world.\n\nregards,\nRanier Vilela\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvp2THseLvCc%2BTcYFBC7FKHpHTs1JyYmd2JghtOVhb5WGA%40mail.gmail.com\n[2]\nhttps://stackoverflow.com/questions/20837989/fprintf-stack-buffer-overflow\n[3]\nhttps://bufferoverflows.net/format-string-vulnerability-what-when-and-how/",
"msg_date": "Fri, 9 Sep 2022 10:45:37 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid overhead with fprintf related functions"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 10:45:37AM -0300, Ranier Vilela wrote:\n> Based on work in [1].\n> According to https://cplusplus.com/reference/cstdio/fprintf/\n> The use of fprintf is related to the need to generate a string based on a\n> format, which should be different from \"%s\".\n> Since fprintf has overhead when parsing the \"format\" parameter, plus all\n> the trouble of checking the va_arg parameters.\n> I think this is one of the low-hanging fruits available and easy to reap.\n> By replacing fprintf with its equivalents, fputs and fputc,\n> we avoid overhead and increase security [2] and [3].\n> \n> The downside is a huge churn, which unfortunately will occur.\n> But, IMHO, I think the advantages are worth it.\n> Note that behavior remains the same, since fputs and fputc do not change\n> the expected behavior of fprintf.\n> \n> A small performance gain is expected, mainly for the client, since there\n> are several occurrences in some critical places, such as\n> (src/fe_utils/print.c).\n\nI agree with David [0]. But if you can demonstrate a performance gain,\nperhaps it's worth considering a subset of these changes in hot paths.\n\n[0] https://postgr.es/m/CAApHDvp2THseLvCc%2BTcYFBC7FKHpHTs1JyYmd2JghtOVhb5WGA%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 09:19:56 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overhead with fprintf related functions"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 1:20 PM Nathan Bossart <\nnathandbossart@gmail.com> wrote:\n\n> On Fri, Sep 09, 2022 at 10:45:37AM -0300, Ranier Vilela wrote:\n> > Based on work in [1].\n> > According to https://cplusplus.com/reference/cstdio/fprintf/\n> > The use of fprintf is related to the need to generate a string based on a\n> > format, which should be different from \"%s\".\n> > Since fprintf has overhead when parsing the \"format\" parameter, plus all\n> > the trouble of checking the va_arg parameters.\n> > I think this is one of the low-hanging fruits available and easy to reap.\n> > By replacing fprintf with its equivalents, fputs and fputc,\n> > we avoid overhead and increase security [2] and [3].\n> >\n> > The downside is a huge churn, which unfortunately will occur.\n> > But, IMHO, I think the advantages are worth it.\n> > Note that behavior remains the same, since fputs and fputc do not change\n> > the expected behavior of fprintf.\n> >\n> > A small performance gain is expected, mainly for the client, since there\n> > are several occurrences in some critical places, such as\n> > (src/fe_utils/print.c).\n>\n> I agree with David [0]. But if you can demonstrate a performance gain,\n> perhaps it's worth considering a subset of these changes in hot paths.\n>\nSimple benchmark with David sort example.\n\n1. make data\ncreate table t (a bigint not null, b bigint not null, c bigint not\nnull, d bigint not null, e bigint not null, f bigint not null);\n\ninsert into t select x,x,x,x,x,x from generate_Series(1,140247142) x; --\n10GB!\nvacuum freeze t;\n\n2. client run\n\\timing on\n\\pset pager off\nselect * from t limit 1000000;\n\nhead:\nTime: 418,210 ms\nTime: 419,588 ms\nTime: 424,713 ms\n\nfprintf patch:\nTime: 416,919 ms\nTime: 416,246 ms\nTime: 416,237 ms\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 9 Sep 2022 18:45:31 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overhead with fprintf related functions"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 10:45 AM Ranier Vilela <ranier.vf@gmail.com>\nwrote:\n\n> Based on work in [1].\n> According to https://cplusplus.com/reference/cstdio/fprintf/\n> The use of fprintf is related to the need to generate a string based on a\n> format, which should be different from \"%s\".\n> Since fprintf has overhead when parsing the \"format\" parameter, plus all\n> the trouble of checking the va_arg parameters.\n> I think this is one of the low-hanging fruits available and easy to reap.\n> By replacing fprintf with its equivalents, fputs and fputc,\n> we avoid overhead and increase security [2] and [3].\n>\n> The downside is a huge churn, which unfortunately will occur.\n> But, IMHO, I think the advantages are worth it.\n> Note that behavior remains the same, since fputs and fputc do not change\n> the expected behavior of fprintf.\n>\n> A small performance gain is expected, mainly for the client, since there\n> are several occurrences in some critical places, such as\n> (src/fe_utils/print.c).\n>\n> Patch attached.\n> This passes check-world.\n>\nRechecked for the hundredth time.\nOne typo.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 9 Sep 2022 18:49:32 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overhead with fprintf related functions"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> On Fri, Sep 9, 2022 at 1:20 PM Nathan Bossart <\n> nathandbossart@gmail.com> wrote:\n>> I agree with David [0]. But if you can demonstrate a performance gain,\n>> perhaps it's worth considering a subset of these changes in hot paths.\n\n> head:\n> Time: 418,210 ms\n> Time: 419,588 ms\n> Time: 424,713 ms\n\n> fprintf patch:\n> Time: 416,919 ms\n> Time: 416,246 ms\n> Time: 416,237 ms\n\nThat is most certainly not enough gain to justify a large amount\nof code churn. In fact, given that this is probably pretty\nplatform-dependent and you've checked only one platform, I don't\nthink I'd call this a sufficient case for even a one-line change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 17:53:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overhead with fprintf related functions"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 6:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > On Fri, Sep 9, 2022 at 1:20 PM Nathan Bossart <\n> > nathandbossart@gmail.com> wrote:\n> >> I agree with David [0]. But if you can demonstrate a performance gain,\n> >> perhaps it's worth considering a subset of these changes in hot paths.\n>\n> > head:\n> > Time: 418,210 ms\n> > Time: 419,588 ms\n> > Time: 424,713 ms\n>\n> > fprintf patch:\n> > Time: 416,919 ms\n> > Time: 416,246 ms\n> > Time: 416,237 ms\n>\n> That is most certainly not enough gain to justify a large amount\n> of code churn. In fact, given that this is probably pretty\n> platform-dependent and you've checked only one platform, I don't\n> think I'd call this a sufficient case for even a one-line change.\n>\nOf course, these changes are based not on performance gain, but on correct style\nand increased security.\nBut outvoted is outvoted, case closed.\n\nRegards,\nRanier Vilela",
"msg_date": "Fri, 9 Sep 2022 18:58:08 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overhead with fprintf related functions"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 05:53:54PM -0400, Tom Lane wrote:\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n>> On Fri, Sep 9, 2022 at 1:20 PM Nathan Bossart <\n>> nathandbossart@gmail.com> wrote:\n>>> I agree with David [0]. But if you can demonstrate a performance gain,\n>>> perhaps it's worth considering a subset of these changes in hot paths.\n> \n>> head:\n>> Time: 418,210 ms\n>> Time: 419,588 ms\n>> Time: 424,713 ms\n> \n>> fprintf patch:\n>> Time: 416,919 ms\n>> Time: 416,246 ms\n>> Time: 416,237 ms\n> \n> That is most certainly not enough gain to justify a large amount\n> of code churn. In fact, given that this is probably pretty\n> platform-dependent and you've checked only one platform, I don't\n> think I'd call this a sufficient case for even a one-line change.\n\nAgreed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 15:11:42 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overhead with fprintf related functions"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nI found there is some redundant code in pl_exec.c:\nplpgsql_param_eval_generic_ro is the same as plpgsql_param_eval_generic\nexcept that it invokes MakeExpandedObjectReadOnly.\n\nIMO, we can invoke plpgsql_param_eval_generic in plpgsql_param_eval_generic_ro\nto avoid the redundancy.\n\nIs there something I missed? Any thoughts?\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Fri, 09 Sep 2022 23:18:04 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Remove redundant code in pl_exec.c"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> I found there is some redundant code in pl_exec.c:\n> plpgsql_param_eval_generic_ro is the same as plpgsql_param_eval_generic\n> except that it invokes MakeExpandedObjectReadOnly.\n\nWhich is exactly why it's NOT redundant.\n\n> IMO, we can invoke plpgsql_param_eval_generic in plpgsql_param_eval_generic_ro\n> to avoid the redundancy.\n\nI don't like this particularly --- it puts way too much premium on\nthe happenstance that the MakeExpandedObjectReadOnly call is the\nvery last step in the callback function. If that needed to change,\nwe'd have a mess.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 11:34:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant code in pl_exec.c"
},
{
"msg_contents": "\nOn Fri, 09 Sep 2022 at 23:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> IMO, we can invoke plpgsql_param_eval_generic in plpgsql_param_eval_generic_ro\n>> to avoid the redundancy.\n>\n> I don't like this particularly --- it puts way too much premium on\n> the happenstance that the MakeExpandedObjectReadOnly call is the\n> very last step in the callback function. If that needed to change,\n> we'd have a mess.\n>\n\nSorry, I don't quite follow. Could you explain it a bit more? Thanks in advance!\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Fri, 09 Sep 2022 23:49:48 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove redundant code in pl_exec.c"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Fri, 09 Sep 2022 at 23:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't like this particularly --- it puts way too much premium on\n>> the happenstance that the MakeExpandedObjectReadOnly call is the\n>> very last step in the callback function. If that needed to change,\n>> we'd have a mess.\n\n> Sorry, I don't get your mind. Could you explain it more? Thanks in advance!\n\nThis refactoring cannot support the situation where there is more\ncode to execute after MakeExpandedObjectReadOnly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 12:57:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant code in pl_exec.c"
}
] |
[
{
"msg_contents": "Just a reminder that the first week of the \"September 2022 commitfest\" is over.\nAs of now, there are 295 patches in total. Out of these 295 patches, 29\npatches require committer attention, and 188 patches need reviews. I think\nwe need more reviewers to bring the number down.\n\nI will keep sending reminder emails and changing the status\nof the patch entries accordingly.\n\nThanks to everyone who is participating in the commitfest.\n\n-- \n\nIbrar Ahmed.\nSenior Software Engineer, PostgreSQL Consultant.",
"msg_date": "Sat, 10 Sep 2022 02:19:52 +0400",
"msg_from": "Ibrar Ahmed <ibrar.ahmed@percona.com>",
"msg_from_op": true,
"msg_subject": "[Commitfest 2022-09] First week is over"
}
] |
[
{
"msg_contents": "Hello,\n\nplease find my first patch for PostgreSQL attached.\nKind regards,\nAndrey Arapov",
"msg_date": "Sat, 10 Sep 2022 01:24:36 +0000",
"msg_from": "andrey.arapov@nixaid.com",
"msg_from_op": true,
"msg_subject": "[PATCH] initdb: do not exit after warn_on_mount_point"
},
{
"msg_contents": "andrey.arapov@nixaid.com writes:\n> please find my first patch for PostgreSQL is attached.\n\nYou haven't explained why you think this would be a good\nchange, or even a safe one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Sep 2022 11:10:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] initdb: do not exit after warn_on_mount_point"
},
{
"msg_contents": "Hi Tom,\n\nI've updated the patch by adding the explanation behind it and more comments. (please see the attachment)\n\nI have slightly improved the logic so that it does not report the error\n\"directory \\\"%s\\\" exists but is not empty\"\nwhen it is only supposed to warn the user about the mountpoint, without exiting.\n\nTo me, my patch looks like a typo fix where exit(1) should not be called in warn_on_mount_point(),\nwhich should only warn and continue, as more people are mounting the device at `/var/lib/postgresql/data` (PGDATA) in the containerized world (K8s deployments, especially now in the Akash Network I am working for) to make sure their data persists.\n\nAs a workaround, we either have to `rmdir /var/lib/postgresql/data/lost+found` before running `docker-entrypoint.sh postgres`, which in turn calls `initdb`, or, alternatively, we have to pass `PGDATA=/var/lib/postgresql/data/<something>` while mounting persistent storage over the `/var/lib/postgresql/data` path so that it won't exit on the very first run.\nTo me, this in itself is odd behavior, which led me to find this typo (from my point of view), for which I've made this patch.\n\n\nPlease let me know if it makes sense or requires more information / explanation.\n\n\nKind regards,\nAndrey Arapov\n\n\nSeptember 10, 2022 5:10 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n\n> andrey.arapov@nixaid.com writes:\n> \n>> please find my first patch for PostgreSQL attached.\n> \n> You haven't explained why you think this would be a good\n> change, or even a safe one.\n> \n> regards, tom lane",
"msg_date": "Sat, 10 Sep 2022 19:14:59 +0000",
"msg_from": "andrey.arapov@nixaid.com",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] initdb: do not exit after warn_on_mount_point"
},
{
"msg_contents": "Hi,\n\nOn Sat, Sep 10, 2022 at 07:14:59PM +0000, andrey.arapov@nixaid.com wrote:\n>\n> Have slightly improved the logic so that it does not report an error\n> \"directory \\\"%s\\\" exists but is not empty\"\n> when it is only supposed to warn the user about the mountpoint, without\n> exiting.\n>\n> To me, my patch looks like a typo fix where exit(1) should not be called on\n> the warn_on_mount_point(), but only warn and continue as more people are\n> mounting the device at `/var/lib/postgresql/data` (PGDATA) in the\n> containerized world (K8s deployments, especially now in the Akash Network I\n> am working for) for making sure their data persist.\n\nThis definitely isn't a typo, as it's really strongly discouraged to use a\nmount point for the data directory. You can refer to this thread [1] for more\nexplanations.\n\n[1] https://www.postgresql.org/message-id/flat/CAKoxK%2B6H40imynM5P31bf0DnpN-5f5zeROjcaj6BKVAjxdEA9w%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 11 Sep 2022 18:17:47 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] initdb: do not exit after warn_on_mount_point"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Sep 10, 2022 at 07:14:59PM +0000, andrey.arapov@nixaid.com wrote:\n>> Have slightly improved the logic so that it does not report an error\n>> \"directory \\\"%s\\\" exists but is not empty\"\n>> when it is only supposed to warn the user about the mountpoint, without\n>> exiting.\n\n> This definitely isn't a typo, as it's really strongly discouraged to use a\n> mount point for the data directory.\n\nAbsolutely. I think maybe the problem here is that warn_on_mount_point()\nis a pretty bad name for that helper function, as this is not \"just a\nwarning\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Sep 2022 10:53:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] initdb: do not exit after warn_on_mount_point"
},
{
"msg_contents": "September 11, 2022 12:18 PM, \"Julien Rouhaud\" <rjuju123@gmail.com> wrote:\n\n> Hi,\n> \n> On Sat, Sep 10, 2022 at 07:14:59PM +0000, andrey.arapov@nixaid.com wrote:\n> \n>> Have slightly improved the logic so that it does not report an error\n>> \"directory \\\"%s\\\" exists but is not empty\"\n>> when it is only supposed to warn the user about the mountpoint, without\n>> exiting.\n>> \n>> To me, my patch looks like a typo fix where exit(1) should not be called on\n>> the warn_on_mount_point(), but only warn and continue as more people are\n>> mounting the device at `/var/lib/postgresql/data` (PGDATA) in the\n>> containerized world (K8s deployments, especially now in the Akash Network I\n>> am working for) for making sure their data persist.\n> \n> This definitely isn't a typo, as it's really strongly discouraged to use a\n> mount point for the data directory. You can refer to this thread [1] for more\n> explanations.\n> \n> [1]\n> https://www.postgresql.org/message-id/flat/CAKoxK+6H40imynM5P31bf0DnpN-5f5zeROjcaj6BKVAjxdEA9w@mail.\n> mail.com\n\n\nI've read the \"why not using a mountpoint as PGDATA?\" thread [1] as well as Bugzilla \"postgresql-setup fails when /var/lib/pgsql/data is mount point\" thead [2] but could not find any good reason why not to mount the PGDATA directly,\nexcept probably for the NFS mount point, but who does that anyway?\nAnd using NFS for PostgreSQL PGDATA is a way for finding problems starting with poor performance ending up with corrupted DB. lol\n\nThe only point that hooked my attention was the pg_upgrade, but then I've tried pg_upgrade'ing postgresql:13 to postgresql:14, everything went without issues. 
I wrote a step-by-step doc for that purpose [3].\n\nThat leaves me unconvinced as to why `initdb` would quit when detecting PGDATA being a mountpoint.\n\nEveryone using the containerized postgresql image cannot use /var/lib/postgresql as the mountpoint but has to use /var/lib/postgresql/data instead due to this issue [4] due to [5].\nHence, everyone using the containerized version of postgresql with the device (say Ceph's RBD) mounted over the /var/lib/postgresql/data directory has to either specify:\n\n- PGDATA=/var/lib/postgresql/data/<some-dir>\n\nOR\n\nmake sure to remove the $PGDATA/lost+found directory.\n\nBoth of these hacks exist only to make initdb fail to detect the mountpoint, which, on its own, is supposed to be only a warning, as is seen from the function name warn_on_mount_point(), and its messages [6] look to be of only advisory nature to me.\n\nI would understand initdb exiting if it detects the mountpoint is NFS; otherwise, there is no reason for not letting one use the mountpoint over PGDATA, as it does not break the pg_upgrade functionality, as shown in this message.\n\n\n[1] https://www.postgresql.org/message-id/flat/CAKoxK+6H40imynM5P31bf0DnpN-5f5zeROjcaj6BKVAjxdEA9w@mail.gmail.com\n[2] https://bugzilla.redhat.com/show_bug.cgi?id=1247477\n[3] https://gist.github.com/andy108369/aa3bf1707054542f2fa944f6d39aef64\n[4] https://github.com/docker-library/postgres/issues/404#issuecomment-773755801\n[5] https://github.com/docker-library/postgres/blob/3c20b7bdb915ecb1648fb468ab53080c58bb1716/Dockerfile-debian.template#L184\n[6] https://github.com/postgres/postgres/blob/7fed801135bae14d63b11ee4a10f6083767046d8/src/bin/initdb/initdb.c#L2616-L2626\n\n\nKind regards,\nAndrey Arapov\n\n\n",
"msg_date": "Sun, 11 Sep 2022 18:22:21 +0000",
"msg_from": "andrey.arapov@nixaid.com",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] initdb: do not exit after warn_on_mount_point"
},
{
"msg_contents": "On Sun, Sep 11, 2022 at 06:22:21PM +0000, andrey.arapov@nixaid.com wrote:\n> September 11, 2022 12:18 PM, \"Julien Rouhaud\" <rjuju123@gmail.com> wrote:\n>\n> > Hi,\n> >\n> > On Sat, Sep 10, 2022 at 07:14:59PM +0000, andrey.arapov@nixaid.com wrote:\n> >\n> >> Have slightly improved the logic so that it does not report an error\n> >> \"directory \\\"%s\\\" exists but is not empty\"\n> >> when it is only supposed to warn the user about the mountpoint, without\n> >> exiting.\n> >>\n> >> To me, my patch looks like a typo fix where exit(1) should not be called on\n> >> the warn_on_mount_point(), but only warn and continue as more people are\n> >> mounting the device at `/var/lib/postgresql/data` (PGDATA) in the\n> >> containerized world (K8s deployments, especially now in the Akash Network I\n> >> am working for) for making sure their data persist.\n> >\n> > This definitely isn't a typo, as it's really strongly discouraged to use a\n> > mount point for the data directory. You can refer to this thread [1] for more\n> > explanations.\n> >\n> > [1]\n> > https://www.postgresql.org/message-id/flat/CAKoxK+6H40imynM5P31bf0DnpN-5f5zeROjcaj6BKVAjxdEA9w@mail.\n> > mail.com\n>\n>\n> I've read the \"why not using a mountpoint as PGDATA?\" thread [1] as well as\n> Bugzilla \"postgresql-setup fails when /var/lib/pgsql/data is mount point\"\n> thead [2] but could not find any good reason why not to mount the PGDATA\n> directly, except probably for the NFS mount point, but who does that anyway?\n\nWhat about this part in Tom's original answer:\n\n3. If, some day, the filesystem is accidentally unmounted while the database is\nup, it will continue to write into files that are now getting placed in the\nmount-point directory on the parent volume. 
This usually results in an\nunrecoverably messed-up database by the time you realize what's going wrong.\n(There are horror stories about such cases in the PG community mailing list\narchives, dating from before we installed the don't-use-a-mount-point defenses\nin initdb.)\n\n> Everyone using containerized postgresql image cannot use /var/lib/postgresql\n> as the mountpoint but has to use /var/lib/postgresql/data instead due to this\n> issue [4] due to [5]. Hence, everyone using containerized version of\n> postgresql with the device (say Ceph's RBD) mounted over\n> /var/lib/postgresql/data directory has to either specify:\n>\n> - PGDATA=/var/lib/postgresql/data/<some-dir>\n>\n> OR\n>\n> make sure to remove $PGDATA/lost+found directory.\n\ninitdb had this behavior for almost 10 years, and for good reasons: it prevents\nactual problems that were seen in the field.\n\nIt's unfortunate that the docker postgres images didn't take that behavior into\naccount, which was introduced more than a year before any work started on those\nimages, but if you're not happy with those workarounds it's something that\nshould be changed in the docker images.\n\n> Both of these hacks are only for the initdb to fail detect the mountpoint\n> which, on its own, is supposed to be only a warning which is seen from the\n> function name warn_on_mount_point() and its messages [6] look to be of only\n> advisory nature to me.\n\nAs Tom confirmed at [1], you shouldn't assume anything about the criticality\nbased on the function name. If anything, the \"warn\" part is only saying that\nthe function itself won't exit(1). And yes this function is only adding\ninformation on the reason why the given directory can't be used, but it doesn't\nchange the fact that the given directory can't be used.\n\n[1] https://www.postgresql.org/message-id/813162.1662908002@sss.pgh.pa.us\n\n\n",
"msg_date": "Mon, 12 Sep 2022 13:03:47 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] initdb: do not exit after warn_on_mount_point"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sun, Sep 11, 2022 at 06:22:21PM +0000, andrey.arapov@nixaid.com wrote:\n>> Everyone using containerized postgresql image cannot use /var/lib/postgresql\n>> as the mountpoint but has to use /var/lib/postgresql/data instead due to this\n>> issue [4] due to [5].\n\n> initdb had this behavior for almost 10 years, and for good reasons: it prevents\n> actual problems that were seen in the field.\n\nThe long and the short of this is that one person losing their data\noutweighs thousands of people being very mildly inconvenienced by\nhaving to create an extra level of directory. I understand that you\ndon't think so, but you'd change your mind PDQ if it was *your* data\nthat got irretrievably corrupted.\n\nWe are not going to remove this check.\n\nIf anything, the fault I'd find with the existing code is that it's not\nsufficiently thorough about detecting what is a mount point. AFAICS,\nneither the lost+found check nor the extra-dot-files check will trigger\non modern filesystems such as XFS. I wonder if we could do something\nlike comparing the stat(2) st_dev numbers for the proposed data directory\nvs. its parent directory.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Sep 2022 01:51:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] initdb: do not exit after warn_on_mount_point"
},
{
"msg_contents": "September 12, 2022 7:51 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> \n>> On Sun, Sep 11, 2022 at 06:22:21PM +0000, andrey.arapov@nixaid.com wrote:\n>>> Everyone using containerized postgresql image cannot use /var/lib/postgresql\n>>> as the mountpoint but has to use /var/lib/postgresql/data instead due to this\n>>> issue [4] due to [5].\n>> \n>> initdb had this behavior for almost 10 years, and for good reasons: it prevents\n>> actual problems that were seen in the field.\n> \n> The long and the short of this is that one person losing their data\n> outweighs thousands of people being very mildly inconvenienced by\n> having to create an extra level of directory. I understand that you\n> don't think so, but you'd change your mind PDQ if it was *your* data\n> that got irretrievably corrupted.\n> \n> We are not going to remove this check.\n> \n> If anything, the fault I'd find with the existing code is that it's not\n> sufficiently thorough about detecting what is a mount point. AFAICS,\n> neither the lost+found check nor the extra-dot-files check will trigger\n> on modern filesystems such as XFS. I wonder if we could do something\n> like comparing the stat(2) st_dev numbers for the proposed data directory\n> vs. 
its parent directory.\n> \n> regards, tom lane\n\n\nThank you for the explanation, Tom & Julien!\n\nThis is not an issue for the containerized PostgreSQL where the mountpoint is a required dependency without which the container won't even spawn nor start the scripts (such as initdb).\nThis also makes it difficult to comprehend, at first, the reason initdb exits when it sees PGDATA is a mountpoint.\n\nThat and the \"Warn about initdb using mount-points\" commit [4] made me think that this was a typo.\n\nBut I do understand that having it exit on detecting PGDATA being a mountpoint has saved people from having their DB corrupted, which is a good enough reason for keeping things as they are now.\n\n\n== Summarizing the issue ==\n\nThe problem is that PostgreSQL will be irrecoverably corrupted when the `PGDATA` mountpoint gets accidentally unmounted while the DB is up. [1]\n\nThe `PGDATA` can't just accidentally get unmounted due to the files being locked by a process there, unless there is a time window between the DB locking/unlocking the data in it, of course.\nOr unless someone forcibly unmounts it.\n\n(I'd expect the DB to detect this & immediately terminate with a fatal error)\n\nMore likely is when the `PGDATA` directory gets mounted back again _while_ PostgreSQL is running with a newly initialized cluster in the empty PGDATA directory (say, due to start-scripts running `initdb`, or there was an old instance of a previously initialized cluster in the underlying PGDATA directory [with the mountpoint being unmounted]), and then gets closed, which in turn causes it to write the wrong data into the correct pg_control file.\n\n\n== The solution (partial) ==\n\nThe solution to this was built into `initdb` [4], which makes sure it fails when it finds that `PGDATA` is a mountpoint.\nThis is currently achieved by `initdb` finding the `lost+found` directory in PGDATA, which is not a robust solution, since there are filesystems that do not place that directory onto the mountpoint, 
such as XFS, BTRFS, eCryptfs, ..., or one can simply erase the `lost+found` directory.\nHence, this is only a partial solution to the problem and looks to be more of a workaround, IMHO.\n\n\n== The recommendation ==\n\nThe recommendation is to mount an upper directory instead of PGDATA directly (one above the PGDATA, e.g. `$PGDATA/../.`) to make sure the DB or a certain* PostgreSQL start script will fail due to a missing `PGDATA` directory instead of running `initdb` to re-init the cluster.\n\n== The reason ==\n\nThis is because, as already mentioned before, running `initdb` can lead to a situation where one can irrecoverably corrupt the DB by mounting the original PGDATA backing device back over again and stopping/re-starting the DB, which will irrecoverably overwrite the good pg_control file.\n(* - I assume that on some systems the postgresql start-script runs `initdb` when it detects the PGDATA directory is empty before starting the DB; like in the postgresql docker container [2] (by checking the presence of `$PGDATA/PG_VERSION`). And the default postgresql start script [3] runs the DB directly without issuing `initdb`.)\n\n\n== Next steps? 
==\n\nI'm wondering whether that's the same issue for mariadb / mysql ...\n\nAnd whether there could be a better way to handle this issue and protect PostgreSQL from corrupting the DB in the event the PGDATA gets accidentally unmounted / re-mounted, leading to the issue described above.\nSolving this would allow people to actually mount the PGDATA directly, without the need to fix `initdb`'s mountpoint detection for filesystems without a `lost+found` directory.\n\n\n[1] https://bugzilla.redhat.com/show_bug.cgi?id=1247477#c1\n[2] https://github.com/docker-library/postgres/blob/74e51d102aede/docker-entrypoint.sh#L317\n[3] https://github.com/postgres/postgres/blob/REL_14_5/contrib/start-scripts/linux#L94\n[4] https://github.com/postgres/postgres/commit/17f15239325a88581bb4f9cf91d38005f1f52d69\n\n\nKind regards,\nAndrey Arapov\n\n\n",
"msg_date": "Mon, 12 Sep 2022 10:47:57 +0000",
"msg_from": "andrey.arapov@nixaid.com",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] initdb: do not exit after warn_on_mount_point"
}
] |
[
{
"msg_contents": "The sequence of events leading up to this:\n\n0) Yesterday I upgraded an internal VM to pg15b4 using PGDG RPMs;\n   It's the same VM that hit the prefetch_recovery bug which was fixed by\n   adb466150.  I don't think that should've left it in a weird state\n   (since recovery was successful when prefetch was disabled, and the\n   cluster worked fine until now).\n1) This evening, I started running sqlsmith connected to the postgres DB\n   that has some logging tables in it;\n2) There was a lot of swapping, and the backend was finally killed due\n   to OOM when sqlsmith tried to run a terrible query involving\n   database_to_xml();\n3) lots of backends started crashing with SIGABRT;\n4) I was simultaneously compiling pg14b4 to run with\n   -DRELCACHE_FORCE_RELEASE and installing it into /usr/local.  I don't *think*\n   running libraries would've been overwritten, and that shouldn't have\n   affected the running instance anyway...\n\nI got a lot (100+) of SIGABRT.  I suspect all the backtraces are the same.  My\nhypothesis is that the OOM crash caused bad xmax to be written (or rather,\nrecovery didn't cause it to be written correctly?).  
We may have hit a race\ncondition due to heavy swapping.\n\n(gdb) bt\n#0 0x00007fb8a22f31f7 in raise () from /lib64/libc.so.6\n#1 0x00007fb8a22f48e8 in abort () from /lib64/libc.so.6\n#2 0x000000000098f9be in ExceptionalCondition (conditionName=conditionName@entry=0x9fada4 \"TransactionIdIsValid(xmax)\", errorType=errorType@entry=0x9ed217 \"FailedAssertion\", \n fileName=fileName@entry=0x9fad90 \"heapam_visibility.c\", lineNumber=lineNumber@entry=1353) at assert.c:69\n#3 0x00000000004fd4d7 in HeapTupleSatisfiesVacuumHorizon (htup=htup@entry=0x7ffc225a87e0, buffer=buffer@entry=5100, dead_after=dead_after@entry=0x7ffc225a87d0) at heapam_visibility.c:1353\n#4 0x0000000000501702 in heap_prune_satisfies_vacuum (buffer=5100, tup=0x7ffc225a87e0, prstate=0x7ffc225a8a50) at pruneheap.c:504\n#5 heap_page_prune (relation=relation@entry=0x7fb8a50c3438, buffer=buffer@entry=5100, vistest=vistest@entry=0xec7890 <GlobalVisDataRels>, old_snap_xmin=<optimized out>, old_snap_ts=<optimized out>, \n nnewlpdead=nnewlpdead@entry=0x7ffc225a964c, off_loc=off_loc@entry=0x0) at pruneheap.c:351\n#6 0x0000000000502326 in heap_page_prune_opt (relation=0x7fb8a50c3438, buffer=buffer@entry=5100) at pruneheap.c:209\n#7 0x00000000004f3ae3 in heapgetpage (sscan=sscan@entry=0x199b1d0, page=page@entry=2892) at heapam.c:415\n#8 0x00000000004f44c2 in heapgettup_pagemode (scan=scan@entry=0x199b1d0, dir=<optimized out>, nkeys=0, key=0x0) at heapam.c:1120\n#9 0x00000000004f5abe in heap_getnextslot (sscan=0x199b1d0, direction=<optimized out>, slot=0x1967be8) at heapam.c:1352\n#10 0x00000000006de16b in table_scan_getnextslot (slot=0x1967be8, direction=ForwardScanDirection, sscan=<optimized out>) at ../../../src/include/access/tableam.h:1046\n#11 SeqNext (node=node@entry=0x1967a38) at nodeSeqscan.c:80\n#12 0x00000000006b109a in ExecScanFetch (recheckMtd=0x6de0f0 <SeqRecheck>, accessMtd=0x6de100 <SeqNext>, node=0x1967a38) at execScan.c:133\n#13 ExecScan (node=0x1967a38, accessMtd=0x6de100 <SeqNext>, 
recheckMtd=0x6de0f0 <SeqRecheck>) at execScan.c:199\n#14 0x00000000006add88 in ExecProcNodeInstr (node=0x1967a38) at execProcnode.c:479\n#15 0x00000000006a6182 in ExecProcNode (node=0x1967a38) at ../../../src/include/executor/executor.h:259\n#16 ExecutePlan (execute_once=<optimized out>, dest=0x1988448, direction=<optimized out>, numberTuples=0, sendTuples=true, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1967a38, estate=0x19677a0)\n at execMain.c:1636\n#17 standard_ExecutorRun (queryDesc=0x1996e80, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:363\n#18 0x00007fb8960913bd in pgss_ExecutorRun (queryDesc=0x1996e80, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at pg_stat_statements.c:1010\n#19 0x00007fb895c6f781 in explain_ExecutorRun (queryDesc=0x1996e80, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at auto_explain.c:320\n#20 0x000000000084976e in PortalRunSelect (portal=portal@entry=0x18fed30, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0x1988448) at pquery.c:924\n#21 0x000000000084af4f in PortalRun (portal=0x18fed30, count=9223372036854775807, isTopLevel=<optimized out>, run_once=<optimized out>, dest=0x1988448, altdest=0x1988448, qc=0x7ffc225a9ce0) at pquery.c:768\n#22 0x000000000084679b in exec_simple_query (query_string=0x186d8a0 \"SELECT alarm_id, alarm_disregard FROM alarms WHERE alarm_ack_time IS NULL AND alarm_clear_time IS NULL AND alarm_office = 'ETHERNET'\") at postgres.c:1250\n#23 0x000000000084848a in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4581\n#24 0x0000000000495afe in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4504\n#25 BackendStartup (port=0x1894250) at postmaster.c:4232\n#26 ServerLoop () at postmaster.c:1806\n#27 0x00000000007b0c60 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1868280) at postmaster.c:1478\n#28 
0x00000000004976a6 in main (argc=3, argv=0x1868280) at main.c:202\n\n\n< 2022-09-09 19:44:03.329 CDT >LOG: server process (PID 8949) was terminated by signal 6: Aborted\n< 2022-09-09 19:44:03.329 CDT >DETAIL: Failed process was running: SELECT alarm_id, alarm_disregard FROM alarms\n WHERE alarm_shakeout_time<=now()\n AND alarm_shakeout_time>now()-$1::interval\n AND alarm_time!=alarm_shakeout_time\n AND alarm_clear_time IS NULL\n\nFor context, that's a partition the main table on this DB, and any\nproblem there would've been immediately apparent. We have multiple\ninstances of that query which run continuously.\n\nI saved a copy of the data dir, but I'm going to have to bring the DB back\nonline soon. I don't know if I can get a page image easily since GDB is a bit\nbusted (won't show source code) and I can't remember how I fixed that last\ntime...\n\n(gdb) fr 6\n#6 0x0000000000502326 in heap_page_prune_opt (relation=0x7fb8a50c3438, buffer=buffer@entry=5100) at pruneheap.c:209\n209 pruneheap.c: No such file or directory.\n\n(gdb) p *vistest\n$2 = {definitely_needed = {value = 8074911083}, maybe_needed = {value = 8074911083}}\n\n\n#2 0x000000000098f9be in ExceptionalCondition (conditionName=conditionName@entry=0x9fada4 \"TransactionIdIsValid(xmax)\", errorType=errorType@entry=0x9ed217 \"FailedAssertion\",\n fileName=fileName@entry=0x9fad90 \"heapam_visibility.c\", lineNumber=lineNumber@entry=1353) at assert.c:69\nNo locals.\n#3 0x00000000004fd4d7 in HeapTupleSatisfiesVacuumHorizon (htup=htup@entry=0x7ffc225a87e0, buffer=buffer@entry=5100, dead_after=dead_after@entry=0x7ffc225a87d0) at heapam_visibility.c:1353\n xmax = 0\n tuple = 0x2aaab5a5fdf8\n#4 0x0000000000501702 in heap_prune_satisfies_vacuum (buffer=5100, tup=0x7ffc225a87e0, prstate=0x7ffc225a8a50) at pruneheap.c:504\n res = <optimized out>\n dead_after = 0\n#5 heap_page_prune (relation=relation@entry=0x7fb8a50c3438, buffer=buffer@entry=5100, vistest=vistest@entry=0xec7890 <GlobalVisDataRels>, 
old_snap_xmin=<optimized out>, old_snap_ts=<optimized out>,\n nnewlpdead=nnewlpdead@entry=0x7ffc225a964c, off_loc=off_loc@entry=0x0) at pruneheap.c:351\n itemid = 0x2aaab5a5fd38\n htup = <optimized out>\n ndeleted = 0\n page = 0x2aaab5a5fd00 <Address 0x2aaab5a5fd00 out of bounds>\n blockno = 2892\n offnum = 9\n maxoff = 10\n prstate = {rel = 0x7fb8a50c3438, vistest = 0xec7890 <GlobalVisDataRels>, old_snap_ts = 0, old_snap_xmin = 0, old_snap_used = false, new_prune_xid = 0, latestRemovedXid = 0, nredirected = 0, ndead = 0, nunused = 0,\n redirected = {32696, 0, 22724, 151, 0, 0, 1114, 0, 0, 0, 434, 0, 0, 0, 434, 0, 0, 0, 1114, 0, 0, 0, 1114, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 3, 0, 0, 950, 0, 6960, 144, 0, 0, 184, 0, 2, 1, 2, 0, 0, 0, 0, 0, 0, 0, 52144,\n 397, 0, 0, 0, 0, 0, 0, 434, 0, 0, 0, 0, 0, 2, 3, 0, 0, 950, 0, 6960, 144, 0, 0, 184, 0, 2, 1, 2, 0, 0, 0, 0, 0, 0, 0, 52144, 397, 0, 0, 0, 0, 0, 0, 1114, 0, 0, 0, 0, 0, 3, 3, 0, 0, 950, 0, 6960, 144, 0, 0, 184, 0,\n 2, 1, 2, 0, 0, 0, 0, 0, 0, 0, 52144, 397, 0, 0, 0, 0, 0, 0, 1114, 0, 0, 0, 0, 0, 4, 3, 0, 0, 950, 0, 51440, 139, 0, 0, 63, 0, 2, 1, 2, 0, 0, 0, 0, 0, 0, 0, 52144, 397, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,\n 5120, 60453, 16693, 62684, 0, 0, 0, 0, 60168, 397, 0, 0, 1304, 408, 0, 0, 29816, 7947, 0, 0, 3472, 408...}, nowdead = {63057, 4137, 0, 0, 64763, 42226, 32696, 0, 26544, 121, 0, 0, 36704, 8794, 32764, 0, 27788,\n 41847, 32696, 0, 32664, 41847, 32696, 0, 36976, 8794, 32764, 0, 36960, 8794, 32764, 0, 17, 0, 32764, 0, 0, 0, 0, 0, 0, 0, 0, 0, 19304, 42256, 32696, 0, 54472, 42260, 32696, 0, 14860, 69, 0, 0, 36432, 41847, 32696,\n 0, 28184, 65, 0, 0, 0, 0, 1, 0, 177, 0, 1, 0, 11240, 408, 0, 0, 37160, 8794, 32764, 0, 37120, 8794, 32764, 0, 1, 0, 0, 0, 19304, 42256, 32696, 0, 38056, 42260, 32696, 0, 37200, 42260, 32696, 0, 65503, 42226,\n 32696, 0, 0, 0, 0, 0, 19304, 42256, 32696, 0, 1, 0, 0, 16368, 0, 0, 0, 0, 1, 0, 32764, 0, 37200, 42260, 32696, 0, 0, 0, 0, 0, 42289, 152, 0, 0, 53392, 406, 0, 0, 11240, 408, 
0, 0, 0, 0, 0, 0, 38056, 42260, 32696,\n 0, 36976, 8794, 32764, 0, 36960, 8794, 32764, 0, 63057, 4137, 0, 0, 14860, 69, 0, 0, 65535, 65535, 0, 0, 3, 0, 0, 0, 32664, 41847, 32696, 0, 54472, 42260, 32696, 0, 0, 0, 0, 16368, 0, 0, 0, 16400, 0, 0, 0, 0,\n 5243, 18350, 31457, 16228...}, nowunused = {0, 32000, 46500, 10922, 0, 11, 0, 0, 0, 17, 2560, 0, 0, 61105, 6, 0, 0, 37544, 8794, 32764, 0, 28552, 408, 0, 0, 0, 0, 0, 0, 52144, 397, 0, 0, 8096, 0, 65535, 2,\n 0 <repeats 98 times>, 11, 0, 10, 0, 4, 25089, 78, 11265, 0, 0, 0, 0, 0, 0, 1007, 0, 42, 0, 43, 0, 2406, 0, 2407, 0, 0, 0, 0, 0, 0, 0, 32639, 32639, 32639, 32639, 32639, 32639, 32639, 32639, 36353, 33032, 24272,\n 16340, 0, 0, 0, 0, 0, 0, 0, 16368, 0, 0, 0, 0, 0, 0, 18136, 16663, 0, 0, 0, 0, 0, 49152, 18127, 16359, 0, 0, 0, 0, 59295, 42150, 63299, 48114, 0, 0, 0, 0, 28399, 5317, 64332, 48184, 0, 0, 0, 0, 8832, 36553, 22255,\n 16426, 0, 0, 0, 0, 0, 0, 0, 0, 64, 0, 0, 0, 0, 0, 0, 0, 55168, 390, 0, 0, 64, 0, 0, 0, 231, 0, 0, 0, 38652, 8794, 32764, 0, 38640, 8794, 32764, 0, 0, 0, 0, 0, 3092, 156, 0, 0, 0, 0, 0, 0, 64, 0, 0, 0, 6,\n 0 <repeats 13 times>}, marked = {false <repeats 292 times>},\n htsv = \"\\340Xm&\\343:\\367/$\\254\\001\\357\\335@*q\\212^D\\032\\070\\327\\004\\347!6\\231\\275\\347Ac\\207\\207\\236\\262\\211%2(\\373\\303\\240U[\\207\\274\\033\\246\\374{T\\355\\331\\177\\311\\310m\\222\\200\", '\\000' <repeats 17 times>, 
\"\\220\\314\\277MH<\\262\\000\\000\\000\\000\\000\\221\\022\\000\\000\\065A\\334\\364\\370>\\262\\000\\000\\000\\000\\000\\000\\024%\\354\\065A\\334\\364\\330\\227\\035\\263\\252*\\000\\000\\300\\227\\035\\263\\252*\\000\\000\\000\\375\\245\\265\\252*\\000\\000\\200\\301\\300\\252\\252*\\000\\000P\\347\\226\\001\\000\\000\\000\\000\\330\\227\\035\\263\\252*\\000\\000\\375\\017\\233\\260\\000\\000\\000\\000\\277\\303\\200\\000\\000\\000\\000\\000`\\213\\223\\001\\000\\000\\000\\000\\333\\227\\035\\263\\252*\\000\\000\\000\\000\\000\\000L\\v\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\"...}\n tup = {t_len = 596, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 2892}, ip_posid = 9}, t_tableOid = 17852325, t_data = 0x2aaab5a5fdf8}\n#6 0x0000000000502326 in heap_page_prune_opt (relation=0x7fb8a50c3438, buffer=buffer@entry=5100) at pruneheap.c:209\n ndeleted = <optimized out>\n nnewlpdead = -186891979\n page = 0x2aaab5a5fd00 <Address 0x2aaab5a5fd00 out of bounds>\n prune_xid = <optimized out>\n vistest = 0xec7890 <GlobalVisDataRels>\n limited_xmin = 0\n limited_ts = 0\n minfree = 819\n#7 0x00000000004f3ae3 in heapgetpage (sscan=sscan@entry=0x199b1d0, page=page@entry=2892) at heapam.c:415\n scan = 0x199b1d0\n buffer = 5100\n snapshot = 0x188ef18\n lines = <optimized out>\n ntup = <optimized out>\n lineoff = <optimized out>\n lpp = <optimized out>\n all_visible = <optimized out>\n#8 0x00000000004f44c2 in heapgettup_pagemode (scan=scan@entry=0x199b1d0, dir=<optimized out>, nkeys=0, key=0x0) at heapam.c:1120\n tuple = 0x199b228\n backward = false\n page = 2892\n finished = false\n dp = 0x2aaab5a5dd00 <Address 0x2aaab5a5dd00 out of bounds>\n lines = <optimized out>\n lineindex = 8\n lineoff = <optimized out>\n linesleft = 0\n lpp = <optimized out>\n#9 0x00000000004f5abe in heap_getnextslot (sscan=0x199b1d0, direction=<optimized out>, slot=0x1967be8) at heapam.c:1352\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 9 Sep 2022 21:06:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 09:06:37PM -0500, Justin Pryzby wrote:\n> #0 0x00007fb8a22f31f7 in raise () from /lib64/libc.so.6\n> #1 0x00007fb8a22f48e8 in abort () from /lib64/libc.so.6\n> #2 0x000000000098f9be in ExceptionalCondition (conditionName=conditionName@entry=0x9fada4 \"TransactionIdIsValid(xmax)\", errorType=errorType@entry=0x9ed217 \"FailedAssertion\", \n> fileName=fileName@entry=0x9fad90 \"heapam_visibility.c\", lineNumber=lineNumber@entry=1353) at assert.c:69\n> #3 0x00000000004fd4d7 in HeapTupleSatisfiesVacuumHorizon (htup=htup@entry=0x7ffc225a87e0, buffer=buffer@entry=5100, dead_after=dead_after@entry=0x7ffc225a87d0) at heapam_visibility.c:1353\n> #4 0x0000000000501702 in heap_prune_satisfies_vacuum (buffer=5100, tup=0x7ffc225a87e0, prstate=0x7ffc225a8a50) at pruneheap.c:504\n> #5 heap_page_prune (relation=relation@entry=0x7fb8a50c3438, buffer=buffer@entry=5100, vistest=vistest@entry=0xec7890 <GlobalVisDataRels>, old_snap_xmin=<optimized out>, old_snap_ts=<optimized out>, \n> nnewlpdead=nnewlpdead@entry=0x7ffc225a964c, off_loc=off_loc@entry=0x0) at pruneheap.c:351\n> #6 0x0000000000502326 in heap_page_prune_opt (relation=0x7fb8a50c3438, buffer=buffer@entry=5100) at pruneheap.c:209\n> #7 0x00000000004f3ae3 in heapgetpage (sscan=sscan@entry=0x199b1d0, page=page@entry=2892) at heapam.c:415\n> #8 0x00000000004f44c2 in heapgettup_pagemode (scan=scan@entry=0x199b1d0, dir=<optimized out>, nkeys=0, key=0x0) at heapam.c:1120\n> #9 0x00000000004f5abe in heap_getnextslot (sscan=0x199b1d0, direction=<optimized out>, slot=0x1967be8) at heapam.c:1352\n> #10 0x00000000006de16b in table_scan_getnextslot (slot=0x1967be8, direction=ForwardScanDirection, sscan=<optimized out>) at ../../../src/include/access/tableam.h:1046\n> #11 SeqNext (node=node@entry=0x1967a38) at nodeSeqscan.c:80\n> #12 0x00000000006b109a in ExecScanFetch (recheckMtd=0x6de0f0 <SeqRecheck>, accessMtd=0x6de100 <SeqNext>, node=0x1967a38) at execScan.c:133\n> #13 ExecScan (node=0x1967a38, 
accessMtd=0x6de100 <SeqNext>, recheckMtd=0x6de0f0 <SeqRecheck>) at execScan.c:199\n> #14 0x00000000006add88 in ExecProcNodeInstr (node=0x1967a38) at execProcnode.c:479\n> #15 0x00000000006a6182 in ExecProcNode (node=0x1967a38) at ../../../src/include/executor/executor.h:259\n> #16 ExecutePlan (execute_once=<optimized out>, dest=0x1988448, direction=<optimized out>, numberTuples=0, sendTuples=true, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1967a38, estate=0x19677a0)\n> at execMain.c:1636\n> #17 standard_ExecutorRun (queryDesc=0x1996e80, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:363\n> #18 0x00007fb8960913bd in pgss_ExecutorRun (queryDesc=0x1996e80, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at pg_stat_statements.c:1010\n> #19 0x00007fb895c6f781 in explain_ExecutorRun (queryDesc=0x1996e80, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at auto_explain.c:320\n> #20 0x000000000084976e in PortalRunSelect (portal=portal@entry=0x18fed30, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0x1988448) at pquery.c:924\n> #21 0x000000000084af4f in PortalRun (portal=0x18fed30, count=9223372036854775807, isTopLevel=<optimized out>, run_once=<optimized out>, dest=0x1988448, altdest=0x1988448, qc=0x7ffc225a9ce0) at pquery.c:768\n> #22 0x000000000084679b in exec_simple_query (query_string=0x186d8a0 \"SELECT alarm_id, alarm_disregard FROM alarms WHERE alarm_ack_time IS NULL AND alarm_clear_time IS NULL AND alarm_office = 'ETHERNET'\") at postgres.c:1250\n> #23 0x000000000084848a in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4581\n> #24 0x0000000000495afe in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4504\n> #25 BackendStartup (port=0x1894250) at postmaster.c:4232\n> #26 ServerLoop () at postmaster.c:1806\n> #27 0x00000000007b0c60 in PostmasterMain (argc=argc@entry=3, 
argv=argv@entry=0x1868280) at postmaster.c:1478\n> #28 0x00000000004976a6 in main (argc=3, argv=0x1868280) at main.c:202\n> \n> I saved a copy of the data dir, but I'm going to have to bring the DB back\n> online soon. I don't know if I can get a page image easily since GDB is a bit\n> busted (won't show source code) and I can't remember how I fixed that last\n> time...\n\nActually gdb seems to be fine, except it doesn't work well on the\ncorefile.\n\nBreakpoint 4, HeapTupleSatisfiesVacuumHorizon (htup=htup@entry=0x7ffcbdc89970, buffer=buffer@entry=690, dead_after=dead_after@entry=0x7ffcbdc898fc) at heapam_visibility.c:1196\n1196 {\n(gdb) n\n1197 HeapTupleHeader tuple = htup->t_data;\n(gdb) p *tuple\n$21 = {t_choice = {t_heap = {t_xmin = 3779887563, t_xmax = 133553150, t_field3 = {t_cid = 103, t_xvac = 103}}, t_datum = {datum_len_ = -515079733, datum_typmod = 133553150, datum_typeid = 103}}, t_ctid = {ip_blkid = {\n bi_hi = 0, bi_lo = 30348}, ip_posid = 11}, t_infomask2 = 8225, t_infomask = 4423, t_hoff = 32 ' ', t_bits = 0x2aaab37ebe0f \"\\303\\377\\367\\377\\001\"}\n\n(gdb) n\n1350 Assert(!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask));\n(gdb) p xmax\n$20 = 0\n(gdb) n\n1353 Assert(TransactionIdIsValid(xmax));\n\n(gdb) down\n#3 0x00000000004f3817 in heap_page_prune_opt (relation=0x7f5c989043e0, buffer=buffer@entry=690) at pruneheap.c:209\n209 ndeleted = heap_page_prune(relation, buffer, vistest, limited_xmin,\n(gdb) l\n204 if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree)\n205 {\n\n(gdb) p ((PageHeader) page)->pd_flags \n$26 = 3\n\n=>\nsrc/include/storage/bufpage.h-#define PD_HAS_FREE_LINES 0x0001 /* are there any unused line pointers? */\nsrc/include/storage/bufpage.h:#define PD_PAGE_FULL 0x0002 /* not enough free space for new tuple? 
*/\n\n(gdb) p PageGetHeapFreeSpace(page)\n$28 = 180\n(gdb) p minfree\n$29 = 819\n\nts=# SELECT * FROM verify_heapam('child.alarms_null');\n blkno | offnum | attnum | msg \n-------+--------+--------+-----------------------------------------------------------------\n 2892 | 9 | | update xid is invalid\n 10336 | 9 | | update xid is invalid\n 14584 | 5 | | update xid is invalid\n 30449 | 3 | | update xid is invalid\n 30900 | 2 | | xmin 3779946727 precedes relation freeze threshold 1:3668608554\n 43078 | 2 | | update xid is invalid\n 43090 | 1 | | update xid is invalid\n(7 rows)\n\n child | alarms_null | table | telsasoft | permanent | heap | 357 MB | \n\n\n\n",
"msg_date": "Fri, 9 Sep 2022 21:57:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "Hi,\n\n\nThat’s interesting, dig into it for a while but not too much progress.\n\nMaybe we could add some logs to print MultiXactMembers’ xid and status if xid is 0.\n\nInside MultiXactIdGetUpdateXid()\n\n```\n\tnmembers = GetMultiXactIdMembers(xmax, &members, false, false);\n\n\tif (nmembers > 0)\n\t{\n int i;\n\n for (i = 0; i < nmembers; i++)\n {\n /* Ignore lockers */\n if (!ISUPDATE_from_mxstatus(members[i].status))\n continue;\n\n /* there can be at most one updater */\n Assert(update_xact == InvalidTransactionId);\n update_xact = members[i].xid;\n\n// log here if xid is invalid\n#ifndef USE_ASSERT_CHECKING\n\n /*\n * in an assert-enabled build, walk the whole array to ensure\n * there's no other updater.\n */\n break;\n#endif\n }\n\n pfree(members);\n\t}\n// and here if didn’t update update_xact at all (it shouldn’t happen as designed)\n\treturn update_xact;\n```\nThat will help a little if we can reproduce it.\n\nAnd could we see multixact reply in logs if db does recover?\n\nRegards,\nZhang Mingli",
"msg_date": "Sat, 10 Sep 2022 12:07:30 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Sat, Sep 10, 2022 at 12:07:30PM +0800, Zhang Mingli wrote:\n> That’s interesting, dig into it for a while but not too much progress.\n> \n> Maybe we could add some logs to print MultiXactMembers’ xid and status if xid is 0.\n> \n> Inside MultiXactIdGetUpdateXid()\n> \n> ```\n> \tnmembers = GetMultiXactIdMembers(xmax, &members, false, false);\n> \n> \tif (nmembers > 0)\n> \t{\n> int i;\n> \n> for (i = 0; i < nmembers; i++)\n> {\n> /* Ignore lockers */\n> if (!ISUPDATE_from_mxstatus(members[i].status))\n> continue;\n> \n> /* there can be at most one updater */\n> Assert(update_xact == InvalidTransactionId);\n> update_xact = members[i].xid;\n> \n> // log here if xid is invalid\n\n> #ifndef USE_ASSERT_CHECKING\n> \n> /*\n> * in an assert-enabled build, walk the whole array to ensure\n> * there's no other updater.\n> */\n> break;\n> #endif\n> }\n> \n> pfree(members);\n> \t}\n> // and here if didn’t update update_xact at all (it shouldn’t happen as designed)\n\nYeah. I added assertions for the above case inside the loop, and for\nthis one, and this fails right before \"return\".\n\nTRAP: FailedAssertion(\"update_xact != InvalidTransactionId\", File: \"src/backend/access/heap/heapam.c\", Line: 6939, PID: 4743)\n\nIt looks like nmembers==2, both of which are lockers and being ignored.\n\n> And could we see multixact reply in logs if db does recover?\n\nDo you mean waldump or ??\n\nBTW, after a number of sigabrt's, I started seeing these during\nrecovery:\n\n< 2022-09-09 19:44:04.180 CDT >LOG: unexpected pageaddr 1214/AF0FE000 in log segment 0000000100001214000000B4, offset 1040384\n< 2022-09-09 23:20:50.830 CDT >LOG: unexpected pageaddr 1214/CF65C000 in log segment 0000000100001214000000D8, offset 6668288\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 10 Sep 2022 00:01:43 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "The OOM was at:\n< 2022-09-09 19:34:24.043 CDT >LOG: server process (PID 14841) was terminated by signal 9: Killed\n\nThe first SIGABRT was at:\n< 2022-09-09 19:37:31.650 CDT >LOG: server process (PID 7363) was terminated by signal 6: Aborted\n\nAnd I've just found a bunch of \"interesting\" logs between the two:\n\n< 2022-09-09 19:36:48.505 CDT telsasoft >ERROR: MultiXactId 133553150 has not been created yet -- apparent wraparound\n< 2022-09-09 19:36:48.505 CDT telsasoft >STATEMENT: SELECT alarm_id, alarm_disregard FROM alarms WHERE alarm_ack_time IS NULL AND alarm_clear_time IS NULL AND alarm_office = 'ETHERNET'\n< 2022-09-09 19:36:48.788 CDT telsasoft >ERROR: could not access status of transaction 3779944583\n< 2022-09-09 19:36:48.788 CDT telsasoft >DETAIL: Could not read from file \"pg_subtrans/E14D\" at offset 98304: read too few bytes.\n...\n< 2022-09-09 19:37:08.550 CDT telsasoft >ERROR: MultiXactId 133553156 has not been created yet -- apparent wraparound\n...\n< 2022-09-09 19:37:13.792 CDT telsasoft >ERROR: could not access status of transaction 3779946306\n< 2022-09-09 19:37:13.792 CDT telsasoft >DETAIL: Could not read from file \"pg_subtrans/E14D\" at offset 98304: read too few bytes.\n...\n< 2022-09-09 19:37:19.682 CDT telsasoft >ERROR: could not access status of transaction 3779946306\n< 2022-09-09 19:37:19.682 CDT telsasoft >DETAIL: Could not read from file \"pg_subtrans/E14D\" at offset 98304: read too few bytes.\n< 2022-09-09 19:37:19.682 CDT telsasoft >CONTEXT: while locking tuple (11755,5) in relation \"alarms_null\"\n...\n< 2022-09-09 19:37:25.835 CDT telsasoft >ERROR: MultiXactId 133553154 has not been created yet -- apparent wraparound\n\nBTW, if I load the datadir backup to crash it, I see:\n\nt_infomask = 4423, which is:\n\n; 0x1000+0x0100+0x0040+0x0004+0x0002+0x0001\n 4423\n\nsrc/include/access/htup_details.h-#define HEAP_HASNULL 0x0001 /* has null attribute(s) */\nsrc/include/access/htup_details.h-#define HEAP_HASVARWIDTH 
0x0002 /* has variable-width attribute(s) */\nsrc/include/access/htup_details.h-#define HEAP_HASEXTERNAL 0x0004 /* has external stored attribute(s) */\nsrc/include/access/htup_details.h-#define HEAP_XMAX_EXCL_LOCK 0x0040 /* xmax is exclusive locker */\nsrc/include/access/htup_details.h-#define HEAP_XMIN_COMMITTED 0x0100 /* t_xmin committed */\nsrc/include/access/htup_details.h:#define HEAP_XMAX_IS_MULTI 0x1000 /* t_xmax is a MultiXactId */\n\nI wish I could say what autovacuum had been doing during that period, but\nunfortunately I have \"log_autovacuum_min_duration = 9s\"...\n\n\n",
"msg_date": "Sat, 10 Sep 2022 00:44:40 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Sat, Sep 10, 2022 at 5:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> BTW, after a number of sigabrt's, I started seeing these during\n> recovery:\n>\n> < 2022-09-09 19:44:04.180 CDT >LOG: unexpected pageaddr 1214/AF0FE000 in log segment 0000000100001214000000B4, offset 1040384\n> < 2022-09-09 23:20:50.830 CDT >LOG: unexpected pageaddr 1214/CF65C000 in log segment 0000000100001214000000D8, offset 6668288\n\nThat's just what it looks like when we discover the end of the WAL by\nhitting a page that hasn't been overwritten yet in a recycled WAL\nsegment, so the pageaddr is off by a multiple of 16MB.  Depending on\ntiming and chance you might be more used to seeing the error where we\nhit zeroes in a partially filled page, the famous 'wanted 24, got 0',\nand you can also hit a fully zero-initialised page 'invalid magic\nnumber 0000'.  All of these are expected, and more exotic errors are\npossible with power-loss torn writes or on crash of a streaming\nstandby where we currently fail to zero the rest of overwritten\npages.\n\n\n",
"msg_date": "Mon, 12 Sep 2022 10:31:49 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Sat, Sep 10, 2022 at 5:44 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> < 2022-09-09 19:37:25.835 CDT telsasoft >ERROR: MultiXactId 133553154 has not been created yet -- apparent wraparound\n\nI guess what happened here is that after one of your (apparently\nseveral?) OOM crashes, crash recovery didn't run all the way to the\ntrue end of the WAL due to the maintenance_io_concurrency=0 bug. In\nthe case you reported, it couldn't complete an end-of-recovery\ncheckpoint until you disabled recovery_prefetch, but that's only\nbecause of the somewhat unusual way that vismap pages work. In\nanother case it might have been able to (bogusly) complete a\ncheckpoint, leaving things in an inconsistent state.\n\n\n",
"msg_date": "Mon, 12 Sep 2022 10:44:38 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 10:44:38AM +1200, Thomas Munro wrote:\n> On Sat, Sep 10, 2022 at 5:44 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > < 2022-09-09 19:37:25.835 CDT telsasoft >ERROR: MultiXactId 133553154 has not been created yet -- apparent wraparound\n> \n> I guess what happened here is that after one of your (apparently\n> several?) OOM crashes, crash recovery didn't run all the way to the\n> true end of the WAL due to the maintenance_io_concurrency=0 bug. In\n> the case you reported, it couldn't complete an end-of-recovery\n> checkpoint until you disabled recovery_prefetch, but that's only\n> because of the somewhat unusual way that vismap pages work. In\n> another case it might have been able to (bogusly) complete a\n> checkpoint, leaving things in an inconsistent state.\n\nI think what you're saying is that this can be explained by the\nio_concurrency bug in recovery_prefetch, if run under 15b3.\n\nBut yesterday I started from initdb and restored this cluster from backup, and\nstarted up sqlsmith, and sent some kill -9, and now got more corruption.\nLooks like it took ~10 induced crashes before this happened.\n\nAt the moment, I have no reason to believe this issue is related to\nprefetch_recovery; I am wondering about changes to vacuum.\n\n< 2022-09-11 20:19:03.071 CDT telsasoft >ERROR: MultiXactId 732646 has not been created yet -- apparent wraparound\n< 2022-09-11 20:24:00.530 CDT telsasoft >ERROR: MultiXactId 732646 has not been created yet -- apparent wraparound\n\nProgram terminated with signal 6, Aborted.\n#0 0x00007f413716b1f7 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7_4.2.x86_64 libgcc-4.8.5-44.el7.x86_64 libxml2-2.9.1-6.el7_9.6.x86_64 lz4-1.8.3-1.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-18.el7.x86_64\n(gdb) bt\n#0 0x00007f413716b1f7 in raise () from /lib64/libc.so.6\n#1 0x00007f413716c8e8 in abort () from /lib64/libc.so.6\n#2 0x0000000000962c5c in 
ExceptionalCondition (conditionName=conditionName@entry=0x9ce238 \"P_ISLEAF(opaque) && !P_ISDELETED(opaque)\", errorType=errorType@entry=0x9bad97 \"FailedAssertion\", \n fileName=fileName@entry=0x9cdcd1 \"nbtpage.c\", lineNumber=lineNumber@entry=1778) at assert.c:69\n#3 0x0000000000507e34 in _bt_rightsib_halfdeadflag (rel=rel@entry=0x7f4138a238a8, leafrightsib=leafrightsib@entry=53) at nbtpage.c:1778\n#4 0x0000000000507fba in _bt_mark_page_halfdead (rel=rel@entry=0x7f4138a238a8, leafbuf=leafbuf@entry=13637, stack=stack@entry=0x144ca20) at nbtpage.c:2121\n#5 0x000000000050af1d in _bt_pagedel (rel=rel@entry=0x7f4138a238a8, leafbuf=leafbuf@entry=13637, vstate=vstate@entry=0x7ffef18de8b0) at nbtpage.c:2004\n#6 0x000000000050c996 in btvacuumpage (vstate=vstate@entry=0x7ffef18de8b0, scanblkno=scanblkno@entry=36) at nbtree.c:1342\n#7 0x000000000050caf8 in btvacuumscan (info=info@entry=0x7ffef18deac0, stats=stats@entry=0x142fb70, callback=callback@entry=0x67e89b <vac_tid_reaped>, callback_state=callback_state@entry=0x1461220, cycleid=<optimized out>)\n at nbtree.c:997\n#8 0x000000000050cc2f in btbulkdelete (info=0x7ffef18deac0, stats=0x142fb70, callback=0x67e89b <vac_tid_reaped>, callback_state=0x1461220) at nbtree.c:801\n#9 0x00000000004fc64b in index_bulk_delete (info=info@entry=0x7ffef18deac0, istat=istat@entry=0x0, callback=callback@entry=0x67e89b <vac_tid_reaped>, callback_state=callback_state@entry=0x1461220) at indexam.c:701\n#10 0x000000000068108c in vac_bulkdel_one_index (ivinfo=ivinfo@entry=0x7ffef18deac0, istat=istat@entry=0x0, dead_items=0x1461220) at vacuum.c:2324\n#11 0x00000000004f72ae in lazy_vacuum_one_index (indrel=<optimized out>, istat=0x0, reltuples=<optimized out>, vacrel=vacrel@entry=0x142f100) at vacuumlazy.c:2726\n#12 0x00000000004f738b in lazy_vacuum_all_indexes (vacrel=vacrel@entry=0x142f100) at vacuumlazy.c:2328\n#13 0x00000000004f75df in lazy_vacuum (vacrel=vacrel@entry=0x142f100) at vacuumlazy.c:2261\n#14 0x00000000004f7f14 in lazy_scan_heap 
(vacrel=vacrel@entry=0x142f100) at vacuumlazy.c:1264\n#15 0x00000000004f895f in heap_vacuum_rel (rel=0x7f4138a67c00, params=0x143cbec, bstrategy=0x143ea20) at vacuumlazy.c:534\n#16 0x000000000067f62b in table_relation_vacuum (bstrategy=<optimized out>, params=0x143cbec, rel=0x7f4138a67c00) at ../../../src/include/access/tableam.h:1680\n#17 vacuum_rel (relid=1249, relation=<optimized out>, params=params@entry=0x143cbec) at vacuum.c:2086\n#18 0x000000000068065c in vacuum (relations=0x144a118, params=params@entry=0x143cbec, bstrategy=<optimized out>, bstrategy@entry=0x143ea20, isTopLevel=isTopLevel@entry=true) at vacuum.c:475\n#19 0x0000000000796a0e in autovacuum_do_vac_analyze (tab=tab@entry=0x143cbe8, bstrategy=bstrategy@entry=0x143ea20) at autovacuum.c:3149\n#20 0x00000000007987bf in do_autovacuum () at autovacuum.c:2472\n#21 0x0000000000798e72 in AutoVacWorkerMain (argc=argc@entry=0, argv=argv@entry=0x0) at autovacuum.c:1715\n#22 0x0000000000798eed in StartAutoVacWorker () at autovacuum.c:1493\n#23 0x000000000079fe49 in StartAutovacuumWorker () at postmaster.c:5534\n#24 0x00000000007a0c44 in sigusr1_handler (postgres_signal_arg=<optimized out>) at postmaster.c:5239\n#25 <signal handler called>\n#26 0x00007f4137225783 in __select_nocancel () from /lib64/libc.so.6\n#27 0x00000000007a0fc5 in ServerLoop () at postmaster.c:1770\n#28 0x00000000007a2361 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x136e9d0) at postmaster.c:1478\n#29 0x00000000006ed9c4 in main (argc=3, argv=0x136e9d0) at main.c:202\n\n\n#2 0x0000000000962c5c in ExceptionalCondition (conditionName=conditionName@entry=0xaa6a32 \"false\", errorType=errorType@entry=0x9bad97 \"FailedAssertion\", fileName=fileName@entry=0x9c6207 \"heapam.c\", \n lineNumber=lineNumber@entry=7803) at assert.c:69\n#3 0x00000000004d9db9 in index_delete_sort_cmp (deltid2=0x7ffef18dcc90, deltid1=<optimized out>) at heapam.c:7803\n#4 index_delete_sort (delstate=delstate@entry=0x7ffef18ddf10) at heapam.c:7844\n#5 
0x00000000004e9323 in heap_index_delete_tuples (rel=0x7f4138a672b8, delstate=0x7ffef18ddf10) at heapam.c:7502\n#6 0x000000000050a512 in table_index_delete_tuples (delstate=0x7ffef18ddf10, rel=0x0) at ../../../../src/include/access/tableam.h:1329\n#7 _bt_delitems_delete_check (rel=rel@entry=0x7f41389dbea0, buf=buf@entry=9183, heapRel=heapRel@entry=0x7f4138a672b8, delstate=delstate@entry=0x7ffef18ddf10) at nbtpage.c:1540\n#8 0x00000000004fff30 in _bt_simpledel_pass (rel=rel@entry=0x7f41389dbea0, buffer=buffer@entry=9183, heapRel=heapRel@entry=0x7f4138a672b8, deletable=deletable@entry=0x7ffef18ddfb0, ndeletable=4, newitem=<optimized out>, \n minoff=2, maxoff=215) at nbtinsert.c:2899\n#9 0x0000000000500171 in _bt_delete_or_dedup_one_page (rel=rel@entry=0x7f41389dbea0, heapRel=heapRel@entry=0x7f4138a672b8, insertstate=insertstate@entry=0x7ffef18de3c0, simpleonly=simpleonly@entry=false, \n checkingunique=checkingunique@entry=true, uniquedup=uniquedup@entry=false, indexUnchanged=indexUnchanged@entry=false) at nbtinsert.c:2710\n#10 0x00000000005051ad in _bt_findinsertloc (rel=rel@entry=0x7f41389dbea0, insertstate=insertstate@entry=0x7ffef18de3c0, checkingunique=checkingunique@entry=true, indexUnchanged=indexUnchanged@entry=false, \n stack=stack@entry=0x157dbc8, heapRel=heapRel@entry=0x7f4138a672b8) at nbtinsert.c:902\n#11 0x00000000005055ad in _bt_doinsert (rel=rel@entry=0x7f41389dbea0, itup=itup@entry=0x157dcc8, checkUnique=checkUnique@entry=UNIQUE_CHECK_YES, indexUnchanged=indexUnchanged@entry=false, heapRel=heapRel@entry=0x7f4138a672b8)\n at nbtinsert.c:256\n#12 0x000000000050b16c in btinsert (rel=0x7f41389dbea0, values=<optimized out>, isnull=<optimized out>, ht_ctid=0x157d1bc, heapRel=0x7f4138a672b8, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=false, indexInfo=0x1803df0)\n at nbtree.c:200\n#13 0x00000000004fb95d in index_insert (indexRelation=indexRelation@entry=0x7f41389dbea0, values=values@entry=0x7ffef18de520, isnull=isnull@entry=0x7ffef18de500, 
heap_t_ctid=heap_t_ctid@entry=0x157d1bc, \n heapRelation=heapRelation@entry=0x7f4138a672b8, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=indexUnchanged@entry=false, indexInfo=indexInfo@entry=0x1803df0) at indexam.c:193\n#14 0x0000000000581ae6 in CatalogIndexInsert (indstate=indstate@entry=0x157c2f8, heapTuple=heapTuple@entry=0x157d1b8) at indexing.c:158\n#15 0x0000000000581b9f in CatalogTupleInsert (heapRel=heapRel@entry=0x7f4138a672b8, tup=tup@entry=0x157d1b8) at indexing.c:231\n#16 0x000000000057996f in InsertPgClassTuple (pg_class_desc=0x7f4138a672b8, new_rel_desc=new_rel_desc@entry=0x7f41389d9e30, new_rel_oid=<optimized out>, relacl=relacl@entry=0, reloptions=reloptions@entry=0) at heap.c:939\n#17 0x0000000000579a07 in AddNewRelationTuple (pg_class_desc=pg_class_desc@entry=0x7f4138a672b8, new_rel_desc=new_rel_desc@entry=0x7f41389d9e30, new_rel_oid=new_rel_oid@entry=500038, new_type_oid=new_type_oid@entry=0, \n reloftype=reloftype@entry=0, relowner=relowner@entry=16556, relkind=relkind@entry=116 't', relfrozenxid=17414307, relminmxid=730642, relacl=relacl@entry=0, reloptions=reloptions@entry=0) at heap.c:998\n#18 0x000000000057a204 in heap_create_with_catalog (relname=relname@entry=0x7ffef18dea90 \"pg_toast_500035\", relnamespace=relnamespace@entry=20138, reltablespace=<optimized out>, relid=500038, relid@entry=0, \n reltypeid=reltypeid@entry=0, reloftypeid=reloftypeid@entry=0, ownerid=16556, accessmtd=2, tupdesc=tupdesc@entry=0x1800420, cooked_constraints=cooked_constraints@entry=0x0, relkind=relkind@entry=116 't', \n relpersistence=relpersistence@entry=116 't', shared_relation=shared_relation@entry=false, mapped_relation=mapped_relation@entry=false, oncommit=oncommit@entry=ONCOMMIT_NOOP, reloptions=reloptions@entry=0, \n use_user_acl=use_user_acl@entry=false, allow_system_table_mods=allow_system_table_mods@entry=true, is_internal=is_internal@entry=true, relrewrite=relrewrite@entry=0, typaddress=typaddress@entry=0x0) at heap.c:1386\n#19 0x00000000005a41e2 in 
create_toast_table (rel=rel@entry=0x7f41389ddfb8, toastOid=toastOid@entry=0, toastIndexOid=toastIndexOid@entry=0, reloptions=reloptions@entry=0, lockmode=lockmode@entry=8, check=check@entry=false, \n OIDOldToast=OIDOldToast@entry=0) at toasting.c:249\n#20 0x00000000005a4571 in CheckAndCreateToastTable (relOid=relOid@entry=500035, reloptions=reloptions@entry=0, lockmode=lockmode@entry=8, check=check@entry=false, OIDOldToast=OIDOldToast@entry=0) at toasting.c:88\n#21 0x00000000005a45d3 in NewRelationCreateToastTable (relOid=relOid@entry=500035, reloptions=reloptions@entry=0) at toasting.c:75\n#22 0x0000000000609e47 in create_ctas_internal (attrList=attrList@entry=0x17ff798, into=into@entry=0x1374c80) at createas.c:135\n#23 0x000000000060a0cf in intorel_startup (self=0x1547678, operation=<optimized out>, typeinfo=0x17fc530) at createas.c:528\n#24 0x0000000000694b1e in standard_ExecutorRun (queryDesc=queryDesc@entry=0x1569188, direction=direction@entry=ForwardScanDirection, count=count@entry=0, execute_once=execute_once@entry=true) at execMain.c:352\n#25 0x00007f41307d2a2e in pgss_ExecutorRun (queryDesc=0x1569188, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at pg_stat_statements.c:1010\n#26 0x00007f41303af648 in explain_ExecutorRun (queryDesc=0x1569188, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at auto_explain.c:320\n#27 0x0000000000694c13 in ExecutorRun (queryDesc=queryDesc@entry=0x1569188, direction=direction@entry=ForwardScanDirection, count=count@entry=0, execute_once=execute_once@entry=true) at execMain.c:305\n#28 0x000000000060a894 in ExecCreateTableAs (pstate=pstate@entry=0x14bd950, stmt=stmt@entry=0x1545140, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, qc=qc@entry=0x7ffef18df720) at createas.c:336\n#29 0x00000000008378dc in ProcessUtilitySlow (pstate=pstate@entry=0x14bd950, pstmt=pstmt@entry=0x15ce250, \n queryString=queryString@entry=0x1373df0 \"\\n-- do paging substitutions\\nCREATE TEMPORARY 
TABLE SU AS\\n\\tSELECT ...\n\n\n",
"msg_date": "Sun, 11 Sep 2022 20:42:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Sun, Sep 11, 2022 at 6:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I think you're saying is that this can be explained by the\n> io_concurrency bug in recovery_prefetch, if run under 15b3.\n>\n> But yesterday I started from initdb and restored this cluster from backup, and\n> started up sqlsmith, and sent some kill -9, and now got more corruption.\n> Looks like it took ~10 induced crashes before this happened.\n\nHave you tested fsync on the system?\n\nThe symptoms here are all over the place. This assertion failure seems\nlike a pretty good sign that the problems happen during recovery, or\nbecause basic guarantees needed by for crash safety aren't met:\n\n> #2 0x0000000000962c5c in ExceptionalCondition (conditionName=conditionName@entry=0x9ce238 \"P_ISLEAF(opaque) && !P_ISDELETED(opaque)\", errorType=errorType@entry=0x9bad97 \"FailedAssertion\",\n> fileName=fileName@entry=0x9cdcd1 \"nbtpage.c\", lineNumber=lineNumber@entry=1778) at assert.c:69\n> #3 0x0000000000507e34 in _bt_rightsib_halfdeadflag (rel=rel@entry=0x7f4138a238a8, leafrightsib=leafrightsib@entry=53) at nbtpage.c:1778\n> #4 0x0000000000507fba in _bt_mark_page_halfdead (rel=rel@entry=0x7f4138a238a8, leafbuf=leafbuf@entry=13637, stack=stack@entry=0x144ca20) at nbtpage.c:2121\n\nThis shows that the basic rules for page deletion have somehow\nseemingly been violated. It's as if a page deletion went ahead, but\ndidn't work as an atomic operation -- there were some lost writes for\nsome but not all pages. Actually, it looks like a mix of states from\nbefore and after both the first and the second phases of page deletion\n-- so not just one atomic operation.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 11 Sep 2022 19:10:47 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 1:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Mon, Sep 12, 2022 at 10:44:38AM +1200, Thomas Munro wrote:\n> > On Sat, Sep 10, 2022 at 5:44 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > < 2022-09-09 19:37:25.835 CDT telsasoft >ERROR: MultiXactId 133553154 has not been created yet -- apparent wraparound\n> >\n> > I guess what happened here is that after one of your (apparently\n> > several?) OOM crashes, crash recovery didn't run all the way to the\n> > true end of the WAL due to the maintenance_io_concurrency=0 bug. In\n> > the case you reported, it couldn't complete an end-of-recovery\n> > checkpoint until you disabled recovery_prefetch, but that's only\n> > because of the somewhat unusual way that vismap pages work. In\n> > another case it might have been able to (bogusly) complete a\n> > checkpoint, leaving things in an inconsistent state.\n>\n> I think you're saying is that this can be explained by the\n> io_concurrency bug in recovery_prefetch, if run under 15b3.\n\nWell I don't know, but it's one way I could think of that you could\nhave a data page referring to a multixact that isn't on disk after\nrecovery (because the data page happens to have been flushed, but we\ndidn't replay the WAL that would create the multixact).\n\n> But yesterday I started from initdb and restored this cluster from backup, and\n> started up sqlsmith, and sent some kill -9, and now got more corruption.\n> Looks like it took ~10 induced crashes before this happened.\n\n$SUBJECT says 15b4, which doesn't have the fix. Are you still using\nmaintainance_io_concurrent=0?\n\n\n",
"msg_date": "Mon, 12 Sep 2022 14:25:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 02:25:48PM +1200, Thomas Munro wrote:\n> On Mon, Sep 12, 2022 at 1:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Mon, Sep 12, 2022 at 10:44:38AM +1200, Thomas Munro wrote:\n> > > On Sat, Sep 10, 2022 at 5:44 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > < 2022-09-09 19:37:25.835 CDT telsasoft >ERROR: MultiXactId 133553154 has not been created yet -- apparent wraparound\n> > >\n> > > I guess what happened here is that after one of your (apparently\n> > > several?) OOM crashes, crash recovery didn't run all the way to the\n> > > true end of the WAL due to the maintenance_io_concurrency=0 bug. In\n> > > the case you reported, it couldn't complete an end-of-recovery\n> > > checkpoint until you disabled recovery_prefetch, but that's only\n> > > because of the somewhat unusual way that vismap pages work. In\n> > > another case it might have been able to (bogusly) complete a\n> > > checkpoint, leaving things in an inconsistent state.\n> >\n> > I think you're saying is that this can be explained by the\n> > io_concurrency bug in recovery_prefetch, if run under 15b3.\n> \n> Well I don't know, but it's one way I could think of that you could\n> have a data page referring to a multixact that isn't on disk after\n> recovery (because the data page happens to have been flushed, but we\n> didn't replay the WAL that would create the multixact).\n> \n> > But yesterday I started from initdb and restored this cluster from backup, and\n> > started up sqlsmith, and sent some kill -9, and now got more corruption.\n> > Looks like it took ~10 induced crashes before this happened.\n> \n> $SUBJECT says 15b4, which doesn't have the fix. Are you still using\n> maintainance_io_concurrent=0?\n\nYeah ... I just realized that I've already forgotten the relevant\nchronology.\n\nThe io_concurrency bugfix wasn't included in 15b4, so (if I understood\nyou correctly), that might explain these symptoms - right ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 11 Sep 2022 21:27:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 2:27 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Mon, Sep 12, 2022 at 02:25:48PM +1200, Thomas Munro wrote:\n> > On Mon, Sep 12, 2022 at 1:42 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > But yesterday I started from initdb and restored this cluster from backup, and\n> > > started up sqlsmith, and sent some kill -9, and now got more corruption.\n> > > Looks like it took ~10 induced crashes before this happened.\n> >\n> > $SUBJECT says 15b4, which doesn't have the fix. Are you still using\n> > maintainance_io_concurrent=0?\n>\n> Yeah ... I just realized that I've already forgotten the relevant\n> chronology.\n>\n> The io_concurrency bugfix wasn't included in 15b4, so (if I understood\n> you correctly), that might explain these symptoms - right ?\n\nYeah.\n\n\n",
"msg_date": "Mon, 12 Sep 2022 14:34:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 02:34:48PM +1200, Thomas Munro wrote:\n> On Mon, Sep 12, 2022 at 2:27 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> Yeah ... I just realized that I've already forgotten the relevant\n>> chronology.\n>>\n>> The io_concurrency bugfix wasn't included in 15b4, so (if I understood\n>> you correctly), that might explain these symptoms - right ?\n> \n> Yeah.\n\nCould you double-check if the issues you are seeing persist after\nsyncing up with the latest point of REL_15_STABLE? For now, I have\nadded an open item just to be on the safe side.\n--\nMichael",
"msg_date": "Mon, 12 Sep 2022 11:53:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On 2022-Sep-09, Justin Pryzby wrote:\n\n> 4) I was simultaneously compiling pg14b4 to run with with\n> -DRELCACHE_FORCE_RELEASE and installing it into /usr/local. I don't *think*\n> running libraries would've been overwritten, and that shouldn't have\n> affected the running instance anyway...\n\nIf you were installing new files with the system running and under\nduress, then yeah bad things could happen -- if any ABIs are changed and\nnew connections are opened in between, then these new connections could\nload new copies of the libraries with changed ABI. This might or might\nnot be happening here, but I wouldn't waste too much time chasing broken\ndatabases created this way.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n",
"msg_date": "Mon, 12 Sep 2022 11:09:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 11:53:14AM +0900, Michael Paquier wrote:\n> On Mon, Sep 12, 2022 at 02:34:48PM +1200, Thomas Munro wrote:\n> > On Mon, Sep 12, 2022 at 2:27 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> Yeah ... I just realized that I've already forgotten the relevant\n> >> chronology.\n> >>\n> >> The io_concurrency bugfix wasn't included in 15b4, so (if I understood\n> >> you correctly), that might explain these symptoms - right ?\n> > \n> > Yeah.\n> \n> Could you double-check if the issues you are seeing persist after\n> syncing up with the latest point of REL_15_STABLE? For now, I have\n> added an open item just to be on the safe side.\n\nAfter another round of restore-from-backup, and sqlsmith-with-kill-9, it\nlooks to be okay. The issue was evidently another possible symptom of\nthe recovery prefetch bug, which is already fixed in REL_15_STABLE (but\nnot in pg15b4).\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 12 Sep 2022 16:29:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 04:29:22PM -0500, Justin Pryzby wrote:\n> After another round of restore-from-backup, and sqlsmith-with-kill-9, it\n> looks to be okay. The issue was evidently another possible symptom of\n> the recovery prefetch bug, which is already fixed in REL_15_STABLE (but\n> not in pg15b4).\n\nNice! Thanks for double-checking.\n--\nMichael",
"msg_date": "Tue, 13 Sep 2022 10:48:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg15b4: FailedAssertion(\"TransactionIdIsValid(xmax)"
}
] |
[
{
"msg_contents": "Hello,\n\nAre there any plans or thoughts about adding support for other languages\nthan C into Postgres, namely Rust? I would love to hack on some features\nbut I worry somewhat that the C compiler won't give me enough hints that\nI'm doing something wrong, and the Rust compiler has been excellent at\npreventing bugs.\n\nBest,\nLev\n\nHello,Are there any plans or thoughts about adding support for other languages than C into Postgres, namely Rust? I would love to hack on some features but I worry somewhat that the C compiler won't give me enough hints that I'm doing something wrong, and the Rust compiler has been excellent at preventing bugs.Best,Lev",
"msg_date": "Fri, 9 Sep 2022 19:38:14 -0700",
"msg_from": "Lev Kokotov <lev@hyperparam.ai>",
"msg_from_op": true,
"msg_subject": "Support for Rust"
},
{
"msg_contents": "Hi!\n\n> On 10 Sep 2022, at 07:38, Lev Kokotov <lev@hyperparam.ai> wrote:\n> \n> Are there any plans or thoughts about adding support for other languages than C into Postgres, namely Rust? I would love to hack on some features but I worry somewhat that the C compiler won't give me enough hints that I'm doing something wrong, and the Rust compiler has been excellent at preventing bugs.\n\nYou can write Postgres extensions in Rust. And Postgres extensions are really powerful. What kind of features are you interested in?\n\nUndoubtedly, attracting Rust folks to contribute Postgres could be a good things.\nYet some very simple questions arise.\n1. Is Rust compatible with Memory Contexts and shared memory constructs of Postgres? With elog error reporting, PG_TRY() and his friends?\n2. Does Rust support same set of platforms as Postgres? Quick glance at Build Farm can give an impression of what is supported by Postgres[0].\n3. Do we gain anything besides compiler hints? Postgres development is hard due to interference of complex subsystems. It will be even harder if those systems will be implemented in different languages.\n\nProbably, answers to all these questions are obvious to Rust pros. I just think this can be of interest to someone new to Rust (like me).\n\n\nBest regards, Andrey Borodin.\n\n[0] https://buildfarm.postgresql.org/cgi-bin/show_members.pl\n\n",
"msg_date": "Sat, 10 Sep 2022 21:19:17 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Support for Rust"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 07:38:14PM -0700, Lev Kokotov wrote:\n> Are there any plans or thoughts about adding support for other languages\n> than C into Postgres, namely Rust? I would love to hack on some features\n> but I worry somewhat that the C compiler won't give me enough hints that\n> I'm doing something wrong, and the Rust compiler has been excellent at\n> preventing bugs.\n\nThere was some discussion about Rust back in 2017 [0] that you might be\ninterested in.\n\n[0] https://www.postgresql.org/message-id/flat/CAASwCXdQUiuUnhycdRvrUmHuzk5PsaGxr54U4t34teQjcjb%3DAQ%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 10 Sep 2022 10:14:55 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for Rust"
},
{
"msg_contents": "> You can write Postgres extensions in Rust. And Postgres extensions are\nreally powerful. What kind of features are you interested in?\n\nAgreed, I've been writing one in Rust using tcdi/pgx [0]. Some features\ncan't be done there though, e.g. adding ON CONFLICT support to COPY.\n\n> 1. Is Rust compatible with Memory Contexts and shared memory constructs\nof Postgres? With elog error reporting, PG_TRY() and his friends?\n\nNot to my knowledge. Just by reading the implementation, jumping to\narbitrary positions in the code is against safe programming that Rust\nguarantees.\n\n> 2. Does Rust support the same set of platforms as Postgres? Quick glance\nat Build Farm can give an impression of what is supported by Postgres.\n\nRust compiles to LLVM, so the platforms supported by Rust are roughly the\nsame as LLVM. The list is large, but smaller than GCC. There is a Rust GCC\nfrontend in the works and the number of Rust targets is also increasing\nincluding embedded [1].\n\n> 3. Do we gain anything besides compiler hints? Postgres development is\nhard due to interference of complex subsystems. It will be even harder if\nthose systems will be implemented in different languages.\n\nRust gives many things we wanted for decades:\n\n1. No undefined behavior\n2. No memory leaks, guaranteed at compile time\n3. RAII - no surprise NULLs, no segfaults\n4. \"Zero-cost abstractions\" - simple looking and otherwise expensive code\n(e.g. generics, collections) is optimized at compile time and has no\nruntime cost\n5. Find bugs at compile time instead of runtime\n\nRust also has a large and ever growing community, and a great\ncross-platform package manager and build system (cargo).\n\n> There was some discussion about Rust back in 2017 [0] that you might be\ninterested in.\n\nThat was a good discussion. 
My takeaways are:\n\n- Suggesting that someone else should do the work is a bad idea (I agree)\n- This should be done by both C and Rust experts\n- It's hard to find a good starting point and build momentum\n- C++ is the favorite at the moment\n\nWell... at the risk of embarrassing myself on the internet, I tried it this\nweekend: https://github.com/postgresml/postgres/pull/1\n\nI took a small part of Postgres to get started, so just as a PoC; it\ncompiles and runs though. Larger parts will take more work (deleting code,\nnot just swapping object files), and more fancy things like PG_TRY() and\nfriends will have to be rewritten, so not a short and easy migration.\n\nBest,\nLev\n\n[0] https://github.com/tcdi/pgx\n[1] https://docs.rust-embedded.org/book/intro/index.html\n\n\nOn Sat, Sep 10, 2022 at 10:15 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Fri, Sep 09, 2022 at 07:38:14PM -0700, Lev Kokotov wrote:\n> > Are there any plans or thoughts about adding support for other languages\n> > than C into Postgres, namely Rust? I would love to hack on some features\n> > but I worry somewhat that the C compiler won't give me enough hints that\n> > I'm doing something wrong, and the Rust compiler has been excellent at\n> > preventing bugs.\n>\n> There was some discussion about Rust back in 2017 [0] that you might be\n> interested in.\n>\n> [0]\n> https://www.postgresql.org/message-id/flat/CAASwCXdQUiuUnhycdRvrUmHuzk5PsaGxr54U4t34teQjcjb%3DAQ%40mail.gmail.com\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>",
"msg_date": "Mon, 12 Sep 2022 10:57:31 -0400",
"msg_from": "Lev Kokotov <lev@hyperparam.ai>",
"msg_from_op": true,
"msg_subject": "Re: Support for Rust"
},
{
"msg_contents": "Lev Kokotov <lev@hyperparam.ai> writes:\n>> 3. Do we gain anything besides compiler hints? Postgres development is\n>> hard due to interference of complex subsystems. It will be even harder if\n>> those systems will be implemented in different languages.\n\n> Rust gives many things we wanted for decades:\n\n> 1. No undefined behavior\n> 2. No memory leaks, guaranteed at compile time\n\nReally? It seems impossible to me that a language that even thinks\nit can guarantee that could interoperate with the backend's memory\nmanagement. And that's not something we are interested in replacing.\n\n> I took a small part of Postgres to get started, so just as a PoC; it\n> compiles and runs though. Larger parts will take more work (deleting code,\n> not just swapping object files), and more fancy things like PG_TRY() and\n> friends will have to be rewritten, so not a short and easy migration.\n\nYeah, that's what I thought. \"Allow some parts to be written in\nlanguage X\" soon turns into \"Rewrite the entire system in language X,\nincluding fundamental rethinking of memory management, error handling,\nand some other things\". That's pretty much a non-starter.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Sep 2022 11:29:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Support for Rust"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 11:29:12AM -0400, Tom Lane wrote:\n> Lev Kokotov <lev@hyperparam.ai> writes:\n> >> 3. Do we gain anything besides compiler hints? Postgres development is\n> >> hard due to interference of complex subsystems. It will be even harder if\n> >> those systems will be implemented in different languages.\n> \n> > Rust gives many things we wanted for decades:\n> \n> > 1. No undefined behavior\n> > 2. No memory leaks, guaranteed at compile time\n> \n> Really? It seems impossible to me that a language that even thinks\n> it can guarantee that could interoperate with the backend's memory\n> management. And that's not something we are interested in replacing.\n\nIndeed, it can only guarantee that if you rely on regular safe rust where\nlifetimes can be checked. To use the memory context infrastructure you would\nneed to use unsafe rust, and when you do that you're back with the same\nproblems.\n\n> > I took a small part of Postgres to get started, so just as a PoC; it\n> > compiles and runs though. Larger parts will take more work (deleting code,\n> > not just swapping object files), and more fancy things like PG_TRY() and\n> > friends will have to be rewritten, so not a short and easy migration.\n> \n> Yeah, that's what I thought. \"Allow some parts to be written in\n> language X\" soon turns into \"Rewrite the entire system in language X,\n> including fundamental rethinking of memory management, error handling,\n> and some other things\". That's pretty much a non-starter.\n\nAlso, unless I'm missing something the modified code will only work for\nfrontend programs, where palloc / pfree are really malloc / free calls.\n\nThe rewritten BuildRestoreCommand won't return a palloc'd string on the\nbackend, so the recovery TAP tests should crash when using it.\n\n\n",
"msg_date": "Tue, 13 Sep 2022 00:01:22 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for Rust"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 10:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Lev Kokotov <lev@hyperparam.ai> writes:\n> > I took a small part of Postgres to get started, so just as a PoC; it\n> > compiles and runs though. Larger parts will take more work (deleting\ncode,\n> > not just swapping object files), and more fancy things like PG_TRY() and\n> > friends will have to be rewritten, so not a short and easy migration.\n>\n> Yeah, that's what I thought. \"Allow some parts to be written in\n> language X\" soon turns into \"Rewrite the entire system in language X,\n> including fundamental rethinking of memory management, error handling,\n> and some other things\". That's pretty much a non-starter.\n\nAdded \"Rewrite the code in a different language\" to \"Features We Do Not\nWant\" section of Wiki, referencing the two threads that came up:\n\nhttps://wiki.postgresql.org/wiki/Todo#Features_We_Do_Not_Want\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 19 Sep 2022 13:21:20 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for Rust"
},
{
"msg_contents": "On Mon, 2022-09-12 at 11:29 -0400, Tom Lane wrote:\n> > Rust gives many things we wanted for decades:\n> \n> > 1. No undefined behavior\n> > 2. No memory leaks, guaranteed at compile time\n> \n> Really? It seems impossible to me that a language that even thinks\n> it can guarantee that could interoperate with the backend's memory\n> management. And that's not something we are interested in replacing.\n\nIt's a distraction to talk about rust's safety \"guarantees\" in the\ncontext of this thread. #1 is partially true, and #2 is outright\nfalse[1].\n\nC interoperability is the most compelling rust feature, in my opinion.\nC memory representations are explicitly supported, and high-level\nlanguage features don't impose on your struct layouts. For instance,\nrust does dynamic dispatch using trait objects[2], which hold the\nvtable along with the reference, rather than in the struct itself. And\na \"Foo *\" from C has the same memory representation as an Option<&Foo>\nin rust, so that you get the type safety.\n\nOf course, rewriting Postgres would be terrible idea regardless of the\nmerits of rust for all kinds of reasons. But writing *extensions* in\nrust is very promising because of this C interoperability.\n\n> \n> Yeah, that's what I thought. \"Allow some parts to be written in\n> language X\" soon turns into \"Rewrite the entire system in language X,\n> including fundamental rethinking of memory management, error\n> handling,\n> and some other things\". That's pretty much a non-starter.\n\nYou may be surprised how much you can do with rust extensions without\nchanging any of those things[3].\n\nRegards,\n\tJeff Davis\n\n[1] https://doc.rust-lang.org/std/mem/fn.forget.html\n[2] https://doc.rust-lang.org/book/ch17-02-trait-objects.html\n[3] https://www.pgcon.org/2019/schedule/attachments/532_RustTalk.pdf\n\n\n\n",
"msg_date": "Wed, 12 Oct 2022 10:43:59 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for Rust"
}
] |
[
{
"msg_contents": "hi,\r\n\r\n\r\nthis morning i met an issue, that after vacuum full tablename, the associated toast table shows not exist.\r\nhere is the operation steps:\r\n\r\n\r\ndrop table if exists reymont;\r\ncreate table reymont ( id bigint primary key, data bytea not null);\r\nalter table reymont alter column data set compression pglz;\r\ninsert into reymont values(1, pg_read_binary_file('filename'));\r\nvacuum full reymont;\r\nselect relname, relfilenode, reltoastrelid from pg_class where relname='reymont';\r\n\\d+ pg_toast.pg_toast_relfilenode\r\nDid not find any relation named \"pg_toast.pg_toast_relfilenode\".\r\n\r\n\r\nhowever, if display toast table before vacuum full operation, no problem.\r\ndrop table if exists reymont;\r\ncreate table reymont ( id bigint primary key, data bytea not null);\r\nalter table reymont alter column data set compression pglz;\r\ninsert into reymont values(1, pg_read_binary_file('filename'));\r\n\r\n\\d+ pg_toast.pg_toast_relfilenode --- it's ok, the toast table exists\r\nvacuum full reymont;\r\n\\d+ pg_toast.pg_toast_relfilenode --- it's ok, the toast table exists\r\n\r\n\r\nit looks a little strange, any ideas? appreciate your help.\r\n\r\n\r\nenv:\r\npg14.4\r\nlinux 3.10.0-693.17.1.e17\r\n\r\n\r\nthanks\r\nwalker",
"msg_date": "Sat, 10 Sep 2022 12:53:30 +0800",
"msg_from": "\"=?ISO-8859-1?B?d2Fsa2Vy?=\" <failaway@qq.com>",
"msg_from_op": true,
"msg_subject": "pg_toast.pg_toast_relfilenode not exist due to vacuum full tablename"
},
{
"msg_contents": "\"=?ISO-8859-1?B?d2Fsa2Vy?=\" <failaway@qq.com> writes:\n> this morning i met an issue, that after vacuum full tablename, the associated toast table shows not exist.\n\nYour example doesn't show what you actually did, but I think what is\nfooling you is that VACUUM FULL changes the relfilenode of the table\nbut not the name of its toast table. So the situation afterwards\nmight look like\n\nregression=# select relname, relfilenode, reltoastrelid from pg_class where relname='reymont';\n relname | relfilenode | reltoastrelid \n---------+-------------+---------------\n reymont | 40616 | 40611\n(1 row)\nregression=# select relname from pg_class where oid = 40611;\n relname \n----------------\n pg_toast_40608\n(1 row)\n\nregression=# \\d+ pg_toast.pg_toast_40608\nTOAST table \"pg_toast.pg_toast_40608\"\n Column | Type | Storage \n------------+---------+---------\n chunk_id | oid | plain\n chunk_seq | integer | plain\n chunk_data | bytea | plain\nOwning table: \"public.reymont\"\nIndexes:\n \"pg_toast_40608_index\" PRIMARY KEY, btree (chunk_id, chunk_seq)\nAccess method: heap\n\n(where 40608 is reymont's original relfilenode).\n\nI'm not sure if this should be considered a bug or not. Everything still\nworks well enough, but conceivably we could have a TOAST name collision\ndown the road when we recycle the 40608 number --- I don't recall if\nthe TOAST logic is able to cope with that or not.\n\nIn any case, you should not be making assumptions about the name of\na TOAST table without verifying it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Sep 2022 10:32:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_toast.pg_toast_relfilenode not exist due to vacuum full\n tablename"
}
] |
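A minimal SQL sketch of the lookup Tom Lane describes in the thread above — the reliable way to find a table's TOAST table is via `pg_class.reltoastrelid`, not by guessing a `pg_toast_<oid>` name, which can go stale after VACUUM FULL. The table name `reymont` is taken from the thread; the OIDs shown in the thread are examples only and will differ per installation.

```sql
-- Resolve the TOAST table by OID join.  The pg_toast_<oid> name is
-- derived from the owning table's *original* OID and is not renamed
-- when VACUUM FULL assigns a new relfilenode, so a name built from
-- the current relfilenode may not exist.
SELECT c.relname     AS table_name,
       c.relfilenode,
       t.relname     AS toast_table
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'reymont';
```

The returned `toast_table` value is the name to pass to `\d+ pg_toast.<name>`, rather than a name constructed from the current relfilenode.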
[
{
"msg_contents": "Here's a WIP stab at the project Andres mentioned [1] of splitting up\nguc.c into smaller files. As things stand here, we have:\n\n1. guc.c: the core GUC machinery.\n2. guc_tables.c: the data arrays, and some previously-exposed constant\ntables. guc_tables.h can now be considered the associated header.\n3. guc_hooks.c: (most of) the per-variable check/assign/show hooks\nthat had been in guc.c. guc_hooks.h declares these.\n\nFile sizes are like so:\n\n$ wc guc*c\n 2629 9372 69467 guc-file.c\n 7422 25136 202284 guc.c\n 939 2693 22915 guc_hooks.c\n 4877 13163 126769 guc_tables.c\n 15867 50364 421435 total\n$ size guc*o\n text data bss dec hex filename\n 13653 4 112 13769 35c9 guc-file.o\n 54953 0 564 55517 d8dd guc.o\n 6951 0 112 7063 1b97 guc_hooks.o\n 43570 62998 216 106784 1a120 guc_tables.o\n\nI'm fairly happy with the way things turned out in guc.c and\nguc_tables.c, but I don't much like guc_hooks.c. I think instead of\ncreating such a file, what we should do is to shove most of those\nfunctions into whatever module the GUC variable is associated with.\n(Perhaps commands/variable.c could absorb any stragglers that lack\na better home.) I made a start at that for wal_consistency_checking\nand the syslog parameters, but haven't gone further yet.\n\nBefore proceeding further, I wanted to ask for comments on a design\nchoice that might be controversial. Even though I don't want to\ninvent guc_hooks.c, I think we *should* invent guc_hooks.h, and\nconsolidate all the GUC hook function declarations there. The\npoint would be to not have to #include guc.h in headers of unrelated\nmodules. This is similar to what we've done with utils/fmgrprotos.h,\nthough the motivation is different. I already moved a few declarations\nfrom guc.h to there (and in consequence had to adjust #includes in\nthe modules defining those hooks), but there's a lot more to be done\nif we apply that policy across the board. Does anybody think that's\na bad approach, or have a better one?\n\nBTW, this is more or less orthogonal to my other GUC patch at [2],\nalthough both lead to the conclusion that we need to export\nguc_malloc and friends.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20220905233233.jhcu5jqsrtosmgh5%40awork3.anarazel.de\n[2] https://www.postgresql.org/message-id/flat/2982579.1662416866%40sss.pgh.pa.us",
"msg_date": "Sat, 10 Sep 2022 15:04:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Splitting up guc.c"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-10 15:04:59 -0400, Tom Lane wrote:\n> Here's a WIP stab at the project Andres mentioned [1] of splitting up\n> guc.c into smaller files.\n\nCool!\n\n\n> As things stand here, we have:\n> \n> 1. guc.c: the core GUC machinery.\n> 2. guc_tables.c: the data arrays, and some previously-exposed constant\n> tables. guc_tables.h can now be considered the associated header.\n> 3. guc_hooks.c: (most of) the per-variable check/assign/show hooks\n> that had been in guc.c. guc_hooks.h declares these.\n> \n> File sizes are like so:\n> \n> $ wc guc*c\n> 2629 9372 69467 guc-file.c\n> 7422 25136 202284 guc.c\n> 939 2693 22915 guc_hooks.c\n> 4877 13163 126769 guc_tables.c\n> 15867 50364 421435 total\n> $ size guc*o\n> text data bss dec hex filename\n> 13653 4 112 13769 35c9 guc-file.o\n> 54953 0 564 55517 d8dd guc.o\n> 6951 0 112 7063 1b97 guc_hooks.o\n> 43570 62998 216 106784 1a120 guc_tables.o\n\nA tad surprised by the text size of guc_tables.o - not that it is a problem,\njust seems a bit odd.\n\n\n> I'm fairly happy with the way things turned out in guc.c and\n> guc_tables.c, but I don't much like guc_hooks.c. I think instead of\n> creating such a file, what we should do is to shove most of those\n> functions into whatever module the GUC variable is associated with.\n\n+1. I think our associated habit of declaring externs in multiple .c files\nisn't great either.\n\n\n\n> Before proceeding further, I wanted to ask for comments on a design\n> choice that might be controversial. Even though I don't want to\n> invent guc_hooks.c, I think we *should* invent guc_hooks.h, and\n> consolidate all the GUC hook function declarations there. The\n> point would be to not have to #include guc.h in headers of unrelated\n> modules. This is similar to what we've done with utils/fmgrprotos.h,\n> though the motivation is different. I already moved a few declarations\n> from guc.h to there (and in consequence had to adjust #includes in\n> the modules defining those hooks), but there's a lot more to be done\n> if we apply that policy across the board. Does anybody think that's\n> a bad approach, or have a better one?\n\nHm, I'm not opposed, the reasoning makes sense to me. How would this interact\nwith the declaration of the variables underlying GUCs?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 10 Sep 2022 12:15:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-10 15:04:59 -0400, Tom Lane wrote:\n>> $ size guc*o\n>> text data bss dec hex filename\n>> 13653 4 112 13769 35c9 guc-file.o\n>> 54953 0 564 55517 d8dd guc.o\n>> 6951 0 112 7063 1b97 guc_hooks.o\n>> 43570 62998 216 106784 1a120 guc_tables.o\n\n> A tad surprised by the text size of guc_tables.o - not that it is a problem,\n> just seems a bit odd.\n\nThere's a pretty fair number of constant tables that got moved to there.\nNot to mention all the constant strings.\n\n>> Before proceeding further, I wanted to ask for comments on a design\n>> choice that might be controversial. Even though I don't want to\n>> invent guc_hooks.c, I think we *should* invent guc_hooks.h, and\n>> consolidate all the GUC hook function declarations there. The\n>> point would be to not have to #include guc.h in headers of unrelated\n>> modules. This is similar to what we've done with utils/fmgrprotos.h,\n>> though the motivation is different. I already moved a few declarations\n>> from guc.h to there (and in consequence had to adjust #includes in\n>> the modules defining those hooks), but there's a lot more to be done\n>> if we apply that policy across the board. Does anybody think that's\n>> a bad approach, or have a better one?\n\n> Hm, I'm not opposed, the reasoning makes sense to me. How would this interact\n> with the declaration of the variables underlying GUCs?\n\nI'd still declare the variables as we do now, ie just straightforwardly\nexport them from the associated modules. Since they're all of native\nC types, they don't cause any inclusion-footprint issues. We could move\ntheir declarations to a common file I guess, but I don't see any benefit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Sep 2022 15:23:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-10 12:15:33 -0700, Andres Freund wrote:\n> On 2022-09-10 15:04:59 -0400, Tom Lane wrote:\n> > As things stand here, we have:\n> >\n> > 1. guc.c: the core GUC machinery.\n> > 2. guc_tables.c: the data arrays, and some previously-exposed constant\n> > tables. guc_tables.h can now be considered the associated header.\n> > 3. guc_hooks.c: (most of) the per-variable check/assign/show hooks\n> > that had been in guc.c. guc_hooks.h declares these.\n> >\n> > File sizes are like so:\n> >\n> > $ wc guc*c\n> > 2629 9372 69467 guc-file.c\n> > 7422 25136 202284 guc.c\n> > 939 2693 22915 guc_hooks.c\n> > 4877 13163 126769 guc_tables.c\n> > 15867 50364 421435 total\n> > $ size guc*o\n> > text data bss dec hex filename\n> > 13653 4 112 13769 35c9 guc-file.o\n> > 54953 0 564 55517 d8dd guc.o\n> > 6951 0 112 7063 1b97 guc_hooks.o\n> > 43570 62998 216 106784 1a120 guc_tables.o\n>\n> A tad surprised by the text size of guc_tables.o - not that it is a problem,\n> just seems a bit odd.\n\nLooks like that's just size misgrouping some section. Built a guc_tables.o\nwithout debug information (that makes the output too complicated):\n\n$ size guc_tables_nodebug.o\n text data bss dec hex filename\n 40044 66868 344 107256 1a2f8 guc_tables_nodebug.o\n\n$ size --format=sysv guc_tables_nodebug.o\nguc_tables_nodebug.o :\nsection size addr\n.text 0 0\n.data 52 0\n.bss 344 0\n.rodata 40044 0\n.data.rel.ro.local 3720 0\n.data.rel.local 8 0\n.data.rel 63088 0\n.comment 31 0\n.note.GNU-stack 0 0\nTotal 107287\n\nFor some reason size adds .roata to the size for text in the default berkeley\nstyle. Which is even documented:\n\n The Berkeley style output counts read only data in the \"text\" column, not in the \"data\" column, the \"dec\" and \"hex\" columns both display the sum\n of the \"text\", \"data\", and \"bss\" columns in decimal and hexadecimal respectively.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 10 Sep 2022 12:24:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-09-10 15:04:59 -0400, Tom Lane wrote:\n>>> $ size guc*o\n>>> text data bss dec hex filename\n>>> 13653 4 112 13769 35c9 guc-file.o\n>>> 54953 0 564 55517 d8dd guc.o\n>>> 6951 0 112 7063 1b97 guc_hooks.o\n>>> 43570 62998 216 106784 1a120 guc_tables.o\n\n>> A tad surprised by the text size of guc_tables.o - not that it is a problem,\n>> just seems a bit odd.\n\n> There's a pretty fair number of constant tables that got moved to there.\n> Not to mention all the constant strings.\n\nI forgot to include comparison numbers for HEAD:\n\n$ wc guc*c\n 2629 9372 69467 guc-file.c\n 13335 41584 356896 guc.c\n 15964 50956 426363 total\n$ size guc*o\n text data bss dec hex filename\n 13653 4 112 13769 35c9 guc-file.o\n 105848 63156 908 169912 297b8 guc.o\n\nThis isn't completely apples-to-apples because of the few\nhook functions I'd moved to other places in v1, but you can\nsee that the total text and data sizes didn't change much.\nIt'd likely indicate a mistake if they had. (However, v1\ndoes include const-ifying a few options tables that had\nsomehow escaped being labeled that way, so the total data\nsize did shrink a small amount.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Sep 2022 15:35:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "On Sat, Sep 10, 2022 at 03:04:59PM -0400, Tom Lane wrote:\n> Before proceeding further, I wanted to ask for comments on a design\n> choice that might be controversial. Even though I don't want to\n> invent guc_hooks.c, I think we *should* invent guc_hooks.h, and\n> consolidate all the GUC hook function declarations there. The\n> point would be to not have to #include guc.h in headers of unrelated\n> modules. This is similar to what we've done with utils/fmgrprotos.h,\n> though the motivation is different. I already moved a few declarations\n> from guc.h to there (and in consequence had to adjust #includes in\n> the modules defining those hooks), but there's a lot more to be done\n> if we apply that policy across the board. Does anybody think that's\n> a bad approach, or have a better one?\n\nOne part that I have found a bit strange lately about guc.c is that we\nhave mix the core machinery with the SQL-callable parts. What do you\nthink about the addition of a gucfuncs.c in src/backend/utils/adt/ to\nsplit things a bit more?\n--\nMichael",
"msg_date": "Sun, 11 Sep 2022 09:43:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> One part that I have found a bit strange lately about guc.c is that we\n> have mix the core machinery with the SQL-callable parts. What do you\n> think about the addition of a gucfuncs.c in src/backend/utils/adt/ to\n> split things a bit more?\n\nI might be wrong, but I think the SQL-callable stuff makes use\nof some APIs that are currently private in guc.c. So we'd have\nto expose more API to make that possible. Maybe that wouldn't\nbe a bad thing, but it seems to be getting beyond the original\nidea here. (Note I already had to expose find_option() in\norder to get the wal_consistency_checking stuff moved out.)\nIt's not clear to me that \"move the SQL-callable stuff\" will\nend with a nice API boundary.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Sep 2022 22:08:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "I wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> One part that I have found a bit strange lately about guc.c is that we\n>> have mix the core machinery with the SQL-callable parts. What do you\n>> think about the addition of a gucfuncs.c in src/backend/utils/adt/ to\n>> split things a bit more?\n\n> I might be wrong, but I think the SQL-callable stuff makes use\n> of some APIs that are currently private in guc.c. So we'd have\n> to expose more API to make that possible. Maybe that wouldn't\n> be a bad thing, but it seems to be getting beyond the original\n> idea here.\n\nI tried this just to see, and it worked out better than I thought.\nThe key extra idea is to also pull out the functions implementing\nthe SET and SHOW commands, because (unsurprisingly) those are just\nabout in the same place dependency-wise as the SQL functions, and\nthey have some common subroutines.\n\nI had to export get_config_unit_name(), config_enum_get_options(),\nand _ShowOption() (here renamed to ShowGUCOption()) to make this\nwork. That doesn't seem too awful.\n\nv2 attached does this, without any further relocation of hook\nfunctions as yet. I now see these file sizes:\n\n$ wc guc*c\n 2629 9372 69467 guc-file.c\n 6425 22282 176816 guc.c\n 1048 3005 26962 guc_funcs.c\n 939 2693 22915 guc_hooks.c\n 4877 13163 126769 guc_tables.c\n 15918 50515 422929 total\n$ size guc*o\n text data bss dec hex filename\n 13653 4 112 13769 35c9 guc-file.o\n 46589 0 564 47153 b831 guc.o\n 8509 0 0 8509 213d guc_funcs.o\n 6951 0 112 7063 1b97 guc_hooks.o\n 43570 62998 216 106784 1a120 guc_tables.o\n\nSo this removes just about a thousand more lines from guc.c,\nwhich seems worth doing.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 11 Sep 2022 13:52:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "Here's a v3 that gets rid of guc_hooks.c in favor of moving the\nhook functions to related modules (though some did end up in\nvariables.c for lack of a better idea). I also pushed all the\nhook function declarations to guc_hooks.h. Unsurprisingly,\nremoval of guc.h #includes from header files led to discovery\nof some surprising indirect dependencies, notably a lot of places\nwere evidently depending on indirect inclusions of array.h.\n\nI think this is code-complete at this point. I'd like to not\nsit on it too long, because it'll inevitably get side-swiped\nby additions of new GUCs. On the other hand, pushing it in\nthe middle of a CF would presumably break other people's patches.\nMaybe push it at the end of this CF, to give people a month to\nrebase anything that's affected?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 11 Sep 2022 18:31:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-11 18:31:41 -0400, Tom Lane wrote:\n> Here's a v3 that gets rid of guc_hooks.c in favor of moving the\n> hook functions to related modules (though some did end up in\n> variables.c for lack of a better idea).\n\n- a bit worried that in_hot_standby will be confusing due vs InHotStandby. I\n wonder if we could perhaps get rid of an underlying variable in cases where\n we really just need the GUC entry to trigger the show hook?\n\n- perhaps too annoying, but it'd be easier to review this if the function\n renaming were done in a preparatory patch\n\n- Are all those includes in guc_tables.c still necessary? I'd have assumed\n that more should be obsoleted by the introduction of guc_hooks.h? Although I\n guess many are just there for the variable's declaration?\n\n- It's a bit depressing that the GUC arrays aren't const, . But I guess that's\n better fixed separately.\n\n\n\n> I think this is code-complete at this point. I'd like to not\n> sit on it too long, because it'll inevitably get side-swiped\n> by additions of new GUCs. On the other hand, pushing it in\n> the middle of a CF would presumably break other people's patches.\n> Maybe push it at the end of this CF, to give people a month to\n> rebase anything that's affected?\n\nI think this is localized enough that asking people to manually resolve a\nconflict around adding a GUC entry wouldn't be asking for that much. And I\nthink plenty changes might be automatically resolvable, despite the rename.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Sep 2022 12:39:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> - a bit worried that in_hot_standby will be confusing due vs InHotStandby. I\n> wonder if we could perhaps get rid of an underlying variable in cases where\n> we really just need the GUC entry to trigger the show hook?\n\nYeah, that worried me too. We do need the variable because guc.c checks\nit directly, but let's use a less confusing name. in_hot_standby_guc,\nmaybe?\n\n> - perhaps too annoying, but it'd be easier to review this if the function\n> renaming were done in a preparatory patch\n\nThere were only a couple that I renamed, and I don't think any of them\nshould be directly referenced by anything else.\n\n> - Are all those includes in guc_tables.c still necessary?\n\nThe ones that are still there are necessary. I believe they're mostly\npulling in variables that are GUC targets.\n\n> - It's a bit depressing that the GUC arrays aren't const, . But I guess that's\n> better fixed separately.\n\nDunno that it'd be helpful, unless we separate the variable and constant\nparts of the structs.\n\n> I think this is localized enough that asking people to manually resolve a\n> conflict around adding a GUC entry wouldn't be asking for that much. And I\n> think plenty changes might be automatically resolvable, despite the rename.\n\nI wonder whether git will be able to figure out that this is mostly a\ncode move. I would expect so for a straight file rename, but will that\nwork when we're splitting the file 3 ways?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Sep 2022 15:46:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n>> I think this is localized enough that asking people to manually resolve a\n>> conflict around adding a GUC entry wouldn't be asking for that much. And I\n>> think plenty changes might be automatically resolvable, despite the rename.\n>\n> I wonder whether git will be able to figure out that this is mostly a\n> code move. I would expect so for a straight file rename, but will that\n> work when we're splitting the file 3 ways?\n\nGit can detect more complicated code movement (see the `--color-moved`\noption to `git diff`), but I'm not sure it's clever enough to realise\nthat a change modifying a block of code that was moved in the meanwhile\nshould be applied at the new destination.\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n",
"msg_date": "Mon, 12 Sep 2022 21:12:03 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Git can detect more complicated code movement (see the `--color-moved`\n> option to `git diff`), but I'm not sure it's clever enough to realise\n> that a change modifying a block of code that was moved in the meanwhile\n> should be applied at the new destination.\n\nYeah, I suspect people will have to manually reapply any changes in\nthe GUC tables to guc_tables.c. That'll be the same amount of work\nfor them whenever we commit this patch (unless theirs lands first,\nin which case I have to deal with it). The issue I think is\nwhether it's politer to make that happen during a CF or between\nCFs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Sep 2022 16:20:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "\n\n",
"msg_date": "Mon, 12 Sep 2022 13:49:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-12 21:12:03 +0100, Dagfinn Ilmari Mannsåker wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> >> I think this is localized enough that asking people to manually resolve a\n> >> conflict around adding a GUC entry wouldn't be asking for that much. And I\n> >> think plenty changes might be automatically resolvable, despite the rename.\n> >\n> > I wonder whether git will be able to figure out that this is mostly a\n> > code move. I would expect so for a straight file rename, but will that\n> > work when we're splitting the file 3 ways?\n> \n> Git can detect more complicated code movement (see the `--color-moved`\n> option to `git diff`), but I'm not sure it's clever enough to realise\n> that a change modifying a block of code that was moved in the meanwhile\n> should be applied at the new destination.\n\nIt sometimes can for large code movements, but not in this case. I think\nbecause guc.c is more self-similar than guc_tables.c.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Sep 2022 13:50:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "On 2022-Sep-12, Tom Lane wrote:\n\n> Yeah, I suspect people will have to manually reapply any changes in\n> the GUC tables to guc_tables.c. That'll be the same amount of work\n> for them whenever we commit this patch (unless theirs lands first,\n> in which case I have to deal with it). The issue I think is\n> whether it's politer to make that happen during a CF or between\n> CFs.\n\nPersonally I would prefer that this kind of thing is done quickly rather\nthan delay it to some uncertain future. That way I can deal with it\nstraight ahead rather than living with the anxiety that it will land\nlater and I will have to deal with it then. I see no benefit in\nwaiting.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 13 Sep 2022 11:10:00 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Splitting up guc.c"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Sep-12, Tom Lane wrote:\n>> Yeah, I suspect people will have to manually reapply any changes in\n>> the GUC tables to guc_tables.c. That'll be the same amount of work\n>> for them whenever we commit this patch (unless theirs lands first,\n>> in which case I have to deal with it). The issue I think is\n>> whether it's politer to make that happen during a CF or between\n>> CFs.\n\n> Personally I would prefer that this kind of thing is done quickly rather\n> than delay it to some uncertain future. That way I can deal with it\n> straight ahead rather than living with the anxiety that it will land\n> later and I will have to deal with it then. I see no benefit in\n> waiting.\n\nFair enough. I'm also not looking forward to having to rebase my\npatch over anybody else's GUC changes -- even just a new GUC would\ninvalidate a thousand-line diff hunk, and I doubt that \"git apply\"\nwould deal with that very helpfully. I'll go ahead and get this\npushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Sep 2022 10:05:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Splitting up guc.c"
}
] |
[
{
"msg_contents": "On a two-column btree index, we can constrain the first column with\nequality and read the rows in order by the 2nd column. But we can't\nconstrain the first column by IS NULL and still read the rows in order by\nthe 2nd column. But why not? Surely the structure of the btree index\nwould allow for this to work.\n\nExample:\n\ncreate table if not exists j as select case when random()<0.9 then\nfloor(random()*10)::int end b, random() c from generate_series(1,1000000);\ncreate index if not exists j_b_c_idx on j (b,c);\nset enable_sort TO off;\nexplain analyze select * from j where b is null order by c limit 10;\nexplain analyze select * from j where b =8 order by c limit 10;\n\nThe first uses a sort despite it being disabled.\n\nCheers,\n\nJeff",
"msg_date": "Sat, 10 Sep 2022 17:28:10 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Index ordering after IS NULL"
},
{
"msg_contents": "On Sat, Sep 10, 2022 at 2:28 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n> explain analyze select * from j where b is null order by c limit 10;\n> explain analyze select * from j where b =8 order by c limit 10;\n>\n> The first uses a sort despite it being disabled.\n\nThe first/is null query seems to give the result and plan you're\nlooking for if the query is rewritten to order by \"b, c\", and not just\n\"c\".\n\nThat in itself doesn't make your complaint any less valid, of course.\nYou don't have to do this with the second query, so why should you\nhave to do it with the first?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 10 Sep 2022 15:00:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Index ordering after IS NULL"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> On a two-column btree index, we can constrain the first column with\n> equality and read the rows in order by the 2nd column. But we can't\n> constrain the first column by IS NULL and still read the rows in order by\n> the 2nd column. But why not?\n\n\"x IS NULL\" doesn't give rise to an EquivalenceClass, which is what\nis needed to drive the deduction that the first index column isn't\naffecting the result ordering.\n\nMaybe we could extend the notion of ECs to allow that, but I'm not\ntoo sure about how it'd work. There are already some expectations\nthat EC equality operators be strict, and this'd blow a large hole\nin a lot of related assumptions. For example, given \"x IS NULL AND\nx = y\", the correct deduction is not \"y IS NULL\", it's that the\nWHERE condition is constant-FALSE.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Sep 2022 22:18:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index ordering after IS NULL"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile wandering around the codes of reducing outer joins, I noticed that\nwhen determining which base rels/Vars are forced nonnullable by given\nclause, we don't take SubPlan into consideration. Does anyone happen to\nknow what is the concern behind that?\n\nIMO, for SubPlans of type ALL/ANY/ROWCOMPARE, we should be able to find\nadditional nonnullable rels/Vars by descending through their testexpr.\nAs we know, ALL_SUBLINK/ANY_SUBLINK combine results across tuples\nproduced by the subplan using AND/OR semantics. ROWCOMPARE_SUBLINK\ndoesn't allow multiple tuples from the subplan. So we can tell whether\nthe subplan is strict or not by checking its testexpr, leveraging the\nexisting codes in find_nonnullable_rels/vars_walker. Below is an\nexample:\n\n# explain (costs off)\nselect * from a left join b on a.i = b.i where b.i = ANY (select i from c\nwhere c.j = b.j);\n QUERY PLAN\n-----------------------------------\n Hash Join\n Hash Cond: (b.i = a.i)\n -> Seq Scan on b\n Filter: (SubPlan 1)\n SubPlan 1\n -> Seq Scan on c\n Filter: (j = b.j)\n -> Hash\n -> Seq Scan on a\n(9 rows)\n\nBTW, this change would also have impact on SpecialJoinInfo, especially\nfor the case of identity 3, because find_nonnullable_rels() is also used\nto determine strict_relids from the clause. As an example, consider\n\n select * from a left join b on a.i = b.i\n left join c on b.j = ANY (select j from c);\n\nNow we can know the SubPlan is strict for 'b'. Thus the b/c join would\nbe considered to be legal.\n\nThanks\nRichard",
"msg_date": "Sun, 11 Sep 2022 18:42:03 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Check SubPlan clause for nonnullable rels/Vars"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> While wandering around the codes of reducing outer joins, I noticed that\n> when determining which base rels/Vars are forced nonnullable by given\n> clause, we don't take SubPlan into consideration. Does anyone happen to\n> know what is the concern behind that?\n\nProbably just didn't bother with the case at the time.\n\n> IMO, for SubPlans of type ALL/ANY/ROWCOMPARE, we should be able to find\n> additional nonnullable rels/Vars by descending through their testexpr.\n\nI think you can make something of this, but you need to be a lot more\nparanoid than this patch is.\n\n* I don't believe you can prove anything from an ALL_SUBLINK SubPlan,\nbecause it will return true if the sub-query returns zero rows, no\nmatter what the testexpr is. (Maybe if you could prove the sub-query\ndoes return a row, but I doubt it's worth going there.)\n\n* You need to explicitly check the subLinkType; as written this'll\nconsider EXPR_SUBLINK and so on. I'm not really on board with\nassuming that nothing bad will happen with sublink types other than\nthe ones the code is expecting.\n\n* It's not apparent to me that it's okay to pass down \"top_level\"\nrather than \"false\". Maybe it's all right, but it could do with\na comment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Nov 2022 16:26:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Check SubPlan clause for nonnullable rels/Vars"
},
{
"msg_contents": "On Thu, Nov 3, 2022 at 4:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> * I don't believe you can prove anything from an ALL_SUBLINK SubPlan,\n> because it will return true if the sub-query returns zero rows, no\n> matter what the testexpr is. (Maybe if you could prove the sub-query\n> does return a row, but I doubt it's worth going there.)\n\n\nThanks for pointing this out. You're right. I didn't consider the case\nthat the subplan produces zero rows. In this case ALL_SUBLINK will\nalways return true, and ANY_SUBLINK will always return false. That\nmakes ALL_SUBLINK not strict, and ANY_SUBLINK can be strict only at top\nlevel.\n\n* You need to explicitly check the subLinkType; as written this'll\n> consider EXPR_SUBLINK and so on. I'm not really on board with\n> assuming that nothing bad will happen with sublink types other than\n> the ones the code is expecting.\n>\n\nYes, I need to check for ANY_SUBLINK and ROWCOMPARE_SUBLINK here. The\ntestexpr is only meaningful for ALL/ANY/ROWCOMPARE, and ALL_SUBLINK has\nbeen proven not strict.\n\n* It's not apparent to me that it's okay to pass down \"top_level\"\n> rather than \"false\". Maybe it's all right, but it could do with\n> a comment.\n\n\nThe 'top_level' param is one point that I'm not very confident about.\nI've added comments in the v2 patch.\n\nThanks for reviewing this patch!\n\nThanks\nRichard",
"msg_date": "Thu, 3 Nov 2022 17:17:58 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check SubPlan clause for nonnullable rels/Vars"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> [ v2-0001-Check-SubPlan-clause-for-nonnullable-rels-Vars.patch ]\n\nPushed with cosmetic changes:\n\n* I don't believe in \"add at the end\" as a principle for placement\nof new code. There's usually some other logic that will give more\nconsistent results. In cases like this, ordering the treatment of\nNode types in the same way as they appear in the include/nodes/\nheaders is the standard answer. (Not that everybody's been totally\nconsistent about that :-( ... but that's not an argument for\nintroducing even more entropy.)\n\n* I rewrote the comments a bit.\n\n* I didn't like the test case too much: spinning up a whole new set\nof tables seems like a lot of useless cycles. Plus it makes it\nharder to experiment with the test query manually. I usually like\nto write such queries using the regression database's standard tables,\nso I rewrote this example that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 05 Nov 2022 15:33:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Check SubPlan clause for nonnullable rels/Vars"
},
{
"msg_contents": "On Sun, Nov 6, 2022 at 3:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > [ v2-0001-Check-SubPlan-clause-for-nonnullable-rels-Vars.patch ]\n>\n> Pushed with cosmetic changes:\n>\n> * I don't believe in \"add at the end\" as a principle for placement\n> of new code. There's usually some other logic that will give more\n> consistent results. In cases like this, ordering the treatment of\n> Node types in the same way as they appear in the include/nodes/\n> headers is the standard answer. (Not that everybody's been totally\n> consistent about that :-( ... but that's not an argument for\n> introducing even more entropy.)\n>\n> * I rewrote the comments a bit.\n>\n> * I didn't like the test case too much: spinning up a whole new set\n> of tables seems like a lot of useless cycles. Plus it makes it\n> harder to experiment with the test query manually. I usually like\n> to write such queries using the regression database's standard tables,\n> so I rewrote this example that way.\n\n\nThanks for the changes. They make the patch look better. And thanks for\npushing it.\n\nThanks\nRichard",
"msg_date": "Mon, 7 Nov 2022 10:53:06 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check SubPlan clause for nonnullable rels/Vars"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nRecently I observed a very peculiar incident.\n\n=== Incident description ===\n\nAn ETL database had been operating fine for many months, regularly updated etc. The workload was not changing much, but since it was an ETL database, most queries were different all the time.\nOn the night of September 7th the database got stuck: no monitoring query could be executed. DBAs started to deal with the incident, but there is not much you can do with a database service when you cannot execute a single query. According to VM metrics, the VM was writing a lot to disk.\nAn on-call engineer was summoned. He observed a lot of backends stuck with a similar backtrace\n#5 LWLockAcquire()\n#6 pgss_store()\n#7 pgss_post_parse_analyze()\n#8 parse_analyze()\n#9 pg_analyze_and_rewrite()\n\nAfter a restart, the problem reproduced within 50 minutes. But monitoring queries were operating, which showed that all backends were stuck on the pg_stat_statements LWLock. It was impossible to disable pgss with SQL, so the engineer altered auto.conf and restarted the database. This resolved the incident.\n\nLater I was working on analyzing the incident.
Enabling pgss back showed traces of the problem:\n Fri 09 Sep 2022 08:52:31 AM MSK (every 2s)\n\n usename | state | wait_event | cnt \n-----------------+---------------------+--------------------+-----\n content | active | DataFileRead | 1\n content | active | pg_stat_statements | 42\n content | idle in transaction | ClientRead | 2\n pgwatch_monitor | active | BufFileRead | 1\n pgwatch_monitor | active | [null] | 5\n pgwatch_monitor | active | pg_stat_statements | 85\n postgres | active | [null] | 1\n repl | active | WalSenderMain | 2\n [null] | active | [null] | 2\n [null] | active | VacuumDelay | 7\n(10 rows)\n\npgwatch was quering 60 databases, every minute and each call to pg_stat_statements() took approximately 3-4 seconds.\nBackend that was in charge of grand lock was looking like this in pg_stat_statements:\n\ndatid | 127782\npid | 4077900\nusename | pgwatch_monitor\napplication_name | pgwatch2 - 10.96.17.68\nwait_event_type | IO\nwait_event | BufFileWrite\nstate | active\nbackend_xid | [null]\nbackend_xmin | 67048029\n\nThe contents of pg_stat_statements view overrun work_mem and were materialized in tuplestore on disk. This is what cause a lot of disk write on database that was not accepting any user query.\n\n</Incident description>\n\nTLDR: LWLock \"pg_stat_statements\" disabled all SQL queries.\n\n\n=== How the problem develops ===\nPrerequisite 1. pgwatch is quering pgss often.\nPrerequisite 2. pgss becomes big so that tuplestore is written on disk, while holding shared lock.\nPrerequisite 3. Someone is calling reset() or pgss_store() needing exclusive lock.\n\nConsequence. Exclusive lock queues after long held shared lock and prevents all shared locks to be taken.\nResult. Any query calling pgss hooks hangs.\n\n\n=== Reproduction for development purposes ===\n0. Setup a database with pg_stat_statements.track = all.\n1. Modify pg_stat_statements_internal() to wait for a long time under LWLockAcquire(pgss->lock, LW_SHARED).\n2. 
select * from pg_stat_statements()\n3. select pg_stat_statements_reset()\n\nNow the database is bonked. Any query will hang until pg_stat_statements() finishes.\n\n\n=== How to fix ===\npgss uses LWLock to protect hashtable.\nWhen the hashtable is reset or new user query is inserted in pgss_store() - exclusive lock is used.\nWhen stats are updated for known query or pg_stat_statements are read - shared lock is used.\n\nI propose to swap shared lock for stats update to conditional shared lock.\nIt cannot be taken during reset() - and that's totally fine. It cannot be taken during entering new query - and I think it's OK to spill some stats in this case. PFA patch attached.\n\nThis patch prevents from a disaster incurred by described here locking.\n\n\n=== Other possible mitigation ===\nThe incident would not occur if tuplestore did not convert into on-disk representation. But I don't see how to reliably protect from this.\n\nWhat do you think?\n\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 12 Sep 2022 11:52:28 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "pg_stat_statements locking"
},
{
"msg_contents": "Hi,\n\nOn Mon, Sep 12, 2022 at 11:52:28AM +0500, Andrey Borodin wrote:\n>\n> === How to fix ===\n> pgss uses LWLock to protect hashtable.\n> When the hashtable is reset or new user query is inserted in pgss_store() -\n> exclusive lock is used.\n> When stats are updated for known query or pg_stat_statements are read - shared lock is used.\n>\n> I propose to swap shared lock for stats update to conditional shared lock.\n> It cannot be taken during reset() - and that's totally fine. It cannot be\n> taken during entering new query - and I think it's OK to spill some stats in\n> this case. PFA patch attached.\n>\n> This patch prevents from a disaster incurred by described here locking.\n\nI'm not a fan of that patch as it now silently ignores entries if the lwlock\ncan't be acquired *immediately*, without any way to avoid that if your\nconfiguration and/or workload doesn't lead to this problem, or any way to know\nthat entries were ignored.\n\n> === Other possible mitigation ===\n> The incident would not occur if tuplestore did not convert into on-disk\n> representation. But I don't see how to reliably protect from this.\n\nI'm not sure that's true. If you need an exclusive lwlock it means that new\nentries are added. If that keeps happening it means that you will eventually\nneed to defragment the query text file, and this operation will certainly hold\nan exclusive lwlock for a very long time.\n\nI think that the better long term approach is to move pg_stat_statements and\nthe query texts to dynamic shared memory. This should also help in this\nscenario as dshash is partitioned, so you don't have a single lwlock for the\nwhole hash table. And as discussed recently, see [1], we should make the stat\ncollector extensible to reuse it in extensions like pg_stat_statements to\nbenefit from all the other optimizations.\n\n[1] https://www.postgresql.org/message-id/20220818195124.c7ipzf6c5v7vxymc@awork3.anarazel.de\n\n\n",
"msg_date": "Mon, 12 Sep 2022 16:40:47 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements locking"
},
{
"msg_contents": "\n\n> On 12 Sep 2022, at 13:40, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> I'm not a fan of that patch as it now silently ignores entries if the lwlock\n> can't be acquired *immediately*, without any way to avoid that if your\n> configuration and/or workload doesn't lead to this problem, or any way to know\n> that entries were ignored.\n\nPractically, workload of this configuration is uncommon. At least I could not find any reports of such locking.\nBut theoretically, all prerequisites of a disaster is very common (variety of queries + some QPS of pg_stat_statements view + small work_mem + occasional reset() or GC).\n\nMaybe it's only a problem of programs that use pgss. pgwatch is calling pgss on every DB in the cluster, that's how check once in a minute became some queries per second.\n\nPersonally, I'd prefer if I could configure a timeout to aquire lock. That timeout would denote maximum delay that pgss can incur on the query. But we would need to extend LWLock API for this.\n\n\n\n> On 12 Sep 2022, at 13:40, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> I think that the better long term approach is to move pg_stat_statements and\n> the query texts to dynamic shared memory. \n\nBTW we don't even need a dynamic memory. We need just a shared memory, probably pre-allocated.\nI agree that pgss must reside in main memory only, never on disk.\n\nBut we still will have a possibility of long lock conflicts preventing queries from completing. And the ability to configure pgss hooks timeout would be useful anyway.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 12 Sep 2022 17:32:55 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements locking"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 05:32:55PM +0500, Andrey Borodin wrote:\n>\n> > On 12 Sep 2022, at 13:40, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > I'm not a fan of that patch as it now silently ignores entries if the lwlock\n> > can't be acquired *immediately*, without any way to avoid that if your\n> > configuration and/or workload doesn't lead to this problem, or any way to know\n> > that entries were ignored.\n>\n> Practically, workload of this configuration is uncommon. At least I could not\n> find any reports of such locking. But theoretically, all prerequisites of a\n> disaster is very common (variety of queries + some QPS of pg_stat_statements\n> view + small work_mem + occasional reset() or GC).\n\nSimply needing to evict entries regularly is already famous for being really\nexpensive. See for instance [1].\n\n> Maybe it's only a problem of programs that use pgss. pgwatch is calling pgss\n> on every DB in the cluster, that's how check once in a minute became some\n> queries per second.\n\nAh, I wasn't sure if that's what you meant in your original message. Calling\npg_stat_statements *for every database* doesn't sound like a good idea.\n\nAlso ideally you shouldn't need to retrieve the query text every time. There's\nnow pg_stat_statements_info.dealloc, so between that and the number of row\nreturned you can easily know if there are new query texts that you never saw\nyet and cache those on the application side rather than retrieving them again\nand again.\n\n> Personally, I'd prefer if I could configure a timeout to aquire lock. That\n> timeout would denote maximum delay that pgss can incur on the query. But we\n> would need to extend LWLock API for this.\n\nYeah, that's what I meant by \"immediately\" in my previous message. 
That being\nsaid I don't know if adding a timeout would be too expensive for the lwlock\ninfrastructure.\n\n> > On 12 Sep 2022, at 13:40, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > I think that the better long term approach is to move pg_stat_statements and\n> > the query texts to dynamic shared memory.\n>\n> BTW we don't even need a dynamic memory. We need just a shared memory,\n> probably pre-allocated. I agree that pgss must reside in main memory only,\n> never on disk.\n\nWe still need dynamic shared memory to get rid of the query text file, which is\na big problem on its own. For the main hash table, relying on dshash could\nallow increasing the maximum number of entries without a restart, which could\nbe cool if you're in a situation where you have a finite number of entries\nthat's higher than pg_stat_statements.max (like after creating a new role or\nsomething).\n>\n> But we still will have a possibility of long lock conflicts preventing\n> queries from completing. And the ability to configure pgss hooks timeout\n> would be useful anyway.\n\nI didn't look thoroughly at the new pgstats infrastructure, but from what I saw\nit should be able to leverage most of the current problems.\n\n[1] https://twitter.com/AndresFreundTec/status/1105585237772263424\n\n\n",
"msg_date": "Mon, 12 Sep 2022 21:18:00 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements locking"
},
{
"msg_contents": "\n\n> On 12 Sep 2022, at 18:18, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> That being\n> said I don't know if adding a timeout would be too expensive for the lwlock\n> infrastructure.\n\nThe implementation itself is straightforward, but we need to add three implementations of waiting on a semaphore with a timeout.\nPOSIX has sem_timedwait(). Windows' WaitForMultipleObjectsEx() has a timeout arg. SysV has semtimedop().\nThat's what we'd need in order to add something like LWLockAcquireWithTimeout().\n\nDoes adding all this stuff sound like a good tradeoff for lock-safe pg_stat_statements? If so - I'll start to implement this.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 12 Sep 2022 22:47:36 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements locking"
},
{
"msg_contents": "Andrey Borodin <x4mmm@yandex-team.ru> writes:\n>> On 12 Sep 2022, at 18:18, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> That being\n>> said I don't know if adding a timeout would be too expensive for the lwlock\n>> infrastructure.\n\nI want to object fiercely to loading down LWLock with anything like\ntimeouts. It's supposed to be \"lightweight\". If we get away from\nthat we're just going to find ourselves needing another lighter-weight\nlock mechanism.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Sep 2022 14:01:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements locking"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 02:01:23PM -0400, Tom Lane wrote:\n> Andrey Borodin <x4mmm@yandex-team.ru> writes:\n> >> On 12 Sep 2022, at 18:18, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >> That being\n> >> said I don't know if adding a timeout would be too expensive for the lwlock\n> >> infrastructure.\n> \n> I want to object fiercely to loading down LWLock with anything like\n> timeouts. It's supposed to be \"lightweight\". If we get away from\n> that we're just going to find ourselves needing another lighter-weight\n> lock mechanism.\n\nThat's what I was thinking, so it looks like a show-stopper for the proposed\npatch.\n\n\n",
"msg_date": "Tue, 13 Sep 2022 02:08:35 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements locking"
},
{
"msg_contents": "\n\n> On 12 Sep 2022, at 23:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andrey Borodin <x4mmm@yandex-team.ru> writes:\n>>> On 12 Sep 2022, at 18:18, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>> That being\n>>> said I don't know if adding a timeout would be too expensive for the lwlock\n>>> infrastructure.\n> \n> I want to object fiercely to loading down LWLock with anything like\n> timeouts. It's supposed to be \"lightweight\". If we get away from\n> that we're just going to find ourselves needing another lighter-weight\n> lock mechanism.\n\nThanks for clarifying this, Tom. I agree that spreading timeout-based algorithms is not a good thing. And when you have a hammer - everything seems like a nail, so it would be tempting to use timeouts here and there.\n\n\n> On 12 Sep 2022, at 23:08, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> That's what I was thinking, so it looks like a show-stopper for the proposed\n> patch.\n\nSo, the only option to make things configurable is a switch for waiting/waitless locks.\n\nAnd the other way is refactoring towards partitioned hashtable, namely dshash. But I don't see how partitioned locking can save us from a locking disaster. Problem is caused by reading all the pgss view colliding with reset() or GC. Both this operations deal with each partition - they will conflict anyway, with the same result. Time-consuming read of each partition will prevent exclusive lock by reset(), and queued exclusive lock will prevent any reads from hashtable.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 13 Sep 2022 10:38:13 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements locking"
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 10:38:13AM +0500, Andrey Borodin wrote:\n> \n> And the other way is refactoring towards partitioned hashtable, namely\n> dshash. But I don't see how partitioned locking can save us from a locking\n> disaster. Problem is caused by reading all the pgss view colliding with\n> reset() or GC.\n\nIf you store the query texts in DSM, you won't have a query text file to\nmaintain and the GC problem will disappear.\n\n> Both this operations deal with each partition - they will\n> conflict anyway, with the same result. Time-consuming read of each partition\n> will prevent exclusive lock by reset(), and queued exclusive lock will\n> prevent any reads from hashtable.\n\nConflicts would still be possible, just less likely and less long as the whole\ndshash is never locked globally, just one partition at a time (except when the\ndshash is resized, but the locks aren't held for a long time and it's not\nsomething frequent).\n\nBut the biggest improvements should be gained by reusing the pgstats\ninfrastructure. I only had a glance at it so I don't know much about it, but\nit has a per-backend hashtable to cache some information and avoid too many\naccesses on the shared hash table, and a mechanism to accumulate entries and do\nbatch updates.\n\n\n",
"msg_date": "Tue, 13 Sep 2022 14:12:55 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements locking"
}
] |
[
{
"msg_contents": "Hi, it looks like the commit [1] renamed pg_stop_backup() to\npg_backup_stop() but forgot to rename the associated\nPG_STOP_BACKUP_V2_COLS macro. While this is harmless, here's a patch\nto rename the macro to be in sync with the function name.\n\nThoughts?\n\n[1]\ncommit 39969e2a1e4d7f5a37f3ef37d53bbfe171e7d77a\nAuthor: Stephen Frost <sfrost@snowman.net>\nDate: Wed Apr 6 14:41:03 2022 -0400\n\n Remove exclusive backup mode\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 12 Sep 2022 17:06:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Adjust macro name in pg_backup_stop()"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 05:06:16PM +0530, Bharath Rupireddy wrote:\n> Hi, it looks like the commit [1] renamed pg_stop_backup() to\n> pg_backup_stop() but forgot to rename the associated\n> PG_STOP_BACKUP_V2_COLS macro. While this is harmless, here's a patch\n> to rename the macro to be in sync with the function name.\n\nThis is the last reference to pg_start/stop_backup() in the code, so\ndone.\n--\nMichael",
"msg_date": "Tue, 13 Sep 2022 10:58:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Adjust macro name in pg_backup_stop()"
}
] |
[
{
"msg_contents": "See attached simple patch to fix $SUBJECT; the old link generates a Not Found.\n\nThanks,\nJames Coleman",
"msg_date": "Mon, 12 Sep 2022 12:13:27 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix broken link to FreeBSD DocProj in docs"
},
{
"msg_contents": "> On 12 Sep 2022, at 18:13, James Coleman <jtc331@gmail.com> wrote:\n\n> See attached simple patch to fix $SUBJECT; the old link generates a Not Found.\n\nAccording to archive.org the freebsd.org site changed sometime in early 2021\nwith a 301 redirect to docproj/docproj which then ends up with a 404. I'll\napply this back through v10 to get a working link and will report it to the\nFreeBSD web team. Thanks for the fix!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 20:46:47 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Fix broken link to FreeBSD DocProj in docs"
},
{
"msg_contents": "> On 12 Sep 2022, at 20:46, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 12 Sep 2022, at 18:13, James Coleman <jtc331@gmail.com> wrote:\n> \n>> See attached simple patch to fix $SUBJECT; the old link generates a Not Found.\n> \n> According to archive.org the freebsd.org site changed sometime in early 2021\n> with a 301 redirect to docproj/docproj which then ends up with a 404. I'll\n> apply this back through v10 to get a working link and will report it to the\n> FreeBSD web team. Thanks for the fix!\n\nCommitted and redirect bug reported to FreeBSD [0], thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=266393\n\n\n\n",
"msg_date": "Tue, 13 Sep 2022 10:43:22 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Fix broken link to FreeBSD DocProj in docs"
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 4:43 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 12 Sep 2022, at 20:46, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> >> On 12 Sep 2022, at 18:13, James Coleman <jtc331@gmail.com> wrote:\n> >\n> >> See attached simple patch to fix $SUBJECT; the old link generates a Not Found.\n> >\n> > According to archive.org the freebsd.org site changed sometime in early 2021\n> > with a 301 redirect to docproj/docproj which then ends up with a 404. I'll\n> > apply this back through v10 to get a working link and will report it to the\n> > FreeBSD web team. Thanks for the fix!\n>\n> Committed and redirect bug reported to FreeBSD [0], thanks!\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n> [0] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=266393\n\nThanks for getting this so quickly!\n\nJames Coleman\n\n\n",
"msg_date": "Tue, 13 Sep 2022 07:36:12 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix broken link to FreeBSD DocProj in docs"
}
] |
[
{
"msg_contents": "Hi!\n\nI am trying to solve the problem of estimating the table bloat (and index\nbloat, though I am mostly focusing on tables at the moment).\n\nAfter searching far and wide, it seems that the choice is to be made\nbetween two methods:\n1. Slow, but very precise pgstattuple\n2. Fast, but somewhat imprecise \"bloat query\" which is attributed to\ncheck_postgres <https://bucardo.org/check_postgres/> project, though there\nare numerous\n<https://www.citusdata.com/blog/2017/10/20/monitoring-your-bloat-in-postgres/>\nvariations <https://github.com/pgexperts/pgx_scripts/tree/master/bloat> in\nexistence.\n\npgstattuple is beautiful and accurate but rather slow. If tables are large,\npgstattuple_approx could easily take 5-10 minutes, and if that were the\ncase, you can see pgstattuple take 30-60 minutes on the same table\neasily.\n\n\"Bloat query\", on the other hand, is wonderfully fast, but rather\nimprecise. It tries to estimate the table data size as pg_class.reltuples *\nrow_width, where row_width is taken, roughly, to be (24 bytes for the\nheader + size of NULL map + (sum( (1 - null_frac)*avg_width ) for all\ncolumns in the table, as reported by pg_statistics)).\n\nThis, of course, completely ignores the question of padding, and so, on\ntables with a large number of columns, the query tends to underestimate the\nsize of live data by some 10-20% (unless schema was explicitly created to\nminimize padding).\n\nI'd like to ask you:\n1. Are these indeed two approaches the only options on the table, or am I\nmissing something?\n\n2. I am considering my own approach where, after looking at pg_attributes\nand pg_stats, I am constructing \"an example row from this table with no\nnulls\" (so, max amount of data + max amount of padding) and \"an example row\nfrom the table with all the NULLs\" (so, as little padding as possible), do\npg_column_size() on both these rows (so that pg_column_size could compute\nsize+padding for me) and then take an average between them, perhaps\nweighted somehow by examining null_frac of table columns. Quick experiments\nshow that this yields a more accurate estimate of row size for tables with\nlarge numbers of columns than what the \"bloat query\" does. Question: can I\ndo anything better/easier here without sacrificing speed?\n\n-- \nD. Astapov",
"msg_date": "Mon, 12 Sep 2022 21:14:06 +0100",
"msg_from": "Dmitry Astapov <dastapov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Estimating bloat for very large tables: what is the state of art?"
}
] |
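The alignment-padding gap described in the thread above can be modeled outside the database. Below is a rough Python sketch (hypothetical illustration, not the check_postgres query; real heap tuples also carry a null bitmap, varlena headers, and TOAST, which this ignores): each attribute must start at an offset that is a multiple of its type's alignment, which is exactly what the reltuples * avg_width arithmetic misses.

```python
def padded_row_width(columns, header=24):
    """Model a heap row: 23-byte tuple header padded to 24, then each
    attribute aligned to its type's requirement before being stored.
    `columns` is a list of (avg_width, typalign_bytes) pairs."""
    off = header
    for width, align in columns:
        off += -off % align   # padding so the column starts aligned
        off += width
    return off + (-off % 8)   # whole tuple is MAXALIGN'ed (assume 8)

def naive_row_width(columns, header=24):
    # What the "bloat query" effectively computes: no padding at all.
    return header + sum(width for width, _ in columns)

# Worst-ish case: int4 and int8 columns interleaved, so every int8
# needs 4 bytes of padding in front of it.
cols = [(4, 4), (8, 8)] * 5
print(naive_row_width(cols), padded_row_width(cols))  # 84 104
```

On this toy schema the naive estimate comes out ~19% low, in line with the 10-20% underestimate reported above; reordering columns to minimize padding makes the two numbers converge.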
[
{
"msg_contents": "Hi,\n\nWhen I try to restore an old backup on Windows, the restore always fails with a message that permission denied for \"base/19513/21359\".\n\nNote this file name provided by postgres is random. When I check the properties of this file, this file doesn't have any owner at all. Windows says that no one is owning this file.\nThis is only happening to certain files in the base directory.\n\nDue to these reasons, the restore operation always fails.\n\nQueries:\n\n 1. How does postgres, in conjunction with Windows, assign ownership of the database files post a restore operation?\n 2. Are there any guesses on why the issue might have occurred?\n\nRegards,\nJoel",
"msg_date": "Mon, 12 Sep 2022 20:52:46 +0000",
"msg_from": "\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com>",
"msg_from_op": true,
"msg_subject": "Permissions denied for the database file system on Windows during\n restore"
}
] |
[
{
"msg_contents": "My ongoing project to make VACUUM more predictable over time by\nproactive freezing [1] will increase the overall number of tuples\nfrozen by VACUUM significantly (at least in larger tables). It's\nimportant that we avoid any new user-visible impact from extra\nfreezing, though. I recently spent a lot of time on adding high-level\ntechniques that aim to avoid extra freezing (e.g. by being lazy about\nfreezing) when that makes sense. Low level techniques aimed at making\nthe mechanical process of freezing cheaper might also help. (In any\ncase it's well worth optimizing.)\n\nI'd like to talk about one such technique on this thread. The attached\nWIP patch reduces the size of xl_heap_freeze_page records by applying\na simple deduplication process. This can be treated as independent\nwork (I think it can, at least). The patch doesn't change anything\nabout the conceptual model used by VACUUM's lazy_scan_prune function\nto build xl_heap_freeze_page records for a page, and yet still manages\nto make the WAL records for freeze records over 5x smaller in many\nimportant cases. They'll be ~4x-5x smaller with *most* workloads,\neven.\n\nEach individual tuple entry (each xl_heap_freeze_tuple) adds a full 12\nbytes to the WAL record right now -- no matter what. So the existing\napproach is rather space inefficient by any standard (perhaps because\nit was developed under time pressure while fixing bugs in Postgres\n9.3). More importantly, there is a lot of natural redundancy among\neach xl_heap_freeze_tuple entry -- each tuple's xl_heap_freeze_tuple\ndetails tend to match. We can usually get away with storing each\nunique combination of values from xl_heap_freeze_tuple once per\nxl_heap_freeze_page record, while storing associated page offset\nnumbers in a separate area, grouped by their canonical freeze plan\n(which is a normalized version of the information currently stored in\nxl_heap_freeze_tuple).\n\nIn practice most individual tuples that undergo any kind of freezing\nonly need to have their xmin field frozen. And when xmax is affected\nat all, it'll usually just get set to InvalidTransactionId. And so the\nactual low-level processing steps for xmax have a high chance of being\nshared by other tuples on the page, even in ostensibly tricky cases.\nWhile there are quite a few paths that lead to VACUUM setting a\ntuple's xmax to InvalidTransactionId, they all end up with the same\ninstructional state in the final xl_heap_freeze_tuple entry.\n\nNote that there is a small chance that the patch will be less space\nefficient by up to 2 bytes per tuple frozen per page in cases where\nwe're allocating new Multis during VACUUM. I think that this should be\nacceptable on its own -- even in rare bad cases we'll usually still\ncome out ahead -- what are the chances that we won't make up the\ndifference on the same page? Or at least within the same VACUUM? And\nthat's before we talk about a future world in which freezing will\nbatch tuples together at the page level (you don't have to bring the\nother VACUUM work into this discussion, I think, but it's not\n*completely* unrelated either).\n\n[1] https://postgr.es/m/CAH2-WzkFok_6EAHuK39GaW4FjEFQsY=3J0AAd6FXk93u-Xq3Fg@mail.gmail.com\n-- \nPeter Geoghegan",
"msg_date": "Mon, 12 Sep 2022 14:01:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
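The deduplication described in the message above reduces to a simple grouping idea: treat each tuple's normalized freeze instructions as a key, store each distinct "freeze plan" once, and attach the page offset numbers it covers. This Python toy model only illustrates the grouping (the plan tuples here are made-up placeholders; the real record layout lives in the patch):

```python
from collections import defaultdict

def deduplicate_freeze_plans(frozen):
    """frozen maps page offset number -> a hashable 'freeze plan'
    (the normalized instructions for that tuple). Returns a list of
    (plan, sorted offsets): each distinct plan is stored exactly once."""
    groups = defaultdict(list)
    for off, plan in frozen.items():
        groups[plan].append(off)
    return [(plan, sorted(offs)) for plan, offs in groups.items()]

# 100 tuples on a page: almost all only need xmin frozen, one also
# needs xmax cleared -- so 2 plans instead of 100 full 12-byte entries.
frozen = {off: ("freeze_xmin",) for off in range(1, 100)}
frozen[100] = ("freeze_xmin", "set_xmax_invalid")
plans = deduplicate_freeze_plans(frozen)
print(len(plans))  # 2
```

Back-of-envelope, under these assumptions: at 12 bytes per xl_heap_freeze_tuple, 100 tuples cost 1200 bytes today; with one plan header per distinct plan plus a 2-byte offset per tuple, the same page needs on the order of a couple hundred bytes, which is roughly where the 4x-5x figure quoted above comes from.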
{
"msg_contents": "On Tue, Sep 13, 2022 at 6:02 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> My ongoing project to make VACUUM more predictable over time by\n> proactive freezing [1] will increase the overall number of tuples\n> frozen by VACUUM significantly (at least in larger tables). It's\n> important that we avoid any new user-visible impact from extra\n> freezing, though. I recently spent a lot of time on adding high-level\n> techniques that aim to avoid extra freezing (e.g. by being lazy about\n> freezing) when that makes sense. Low level techniques aimed at making\n> the mechanical process of freezing cheaper might also help. (In any\n> case it's well worth optimizing.)\n>\n> I'd like to talk about one such technique on this thread. The attached\n> WIP patch reduces the size of xl_heap_freeze_page records by applying\n> a simple deduplication process. This can be treated as independent\n> work (I think it can, at least).\n\n+1\n\n> The patch doesn't change anything\n> about the conceptual model used by VACUUM's lazy_scan_prune function\n> to build xl_heap_freeze_page records for a page, and yet still manages\n> to make the WAL records for freeze records over 5x smaller in many\n> important cases. They'll be ~4x-5x smaller with *most* workloads,\n> even.\n\nAfter a quick benchmark, I've confirmed that the amount of WAL records\nfor freezing 1 million tuples reduced to about one-fifth (1.2GB vs\n250MB). Great.\n\n>\n> Each individual tuple entry (each xl_heap_freeze_tuple) adds a full 12\n> bytes to the WAL record right now -- no matter what. So the existing\n> approach is rather space inefficient by any standard (perhaps because\n> it was developed under time pressure while fixing bugs in Postgres\n> 9.3). More importantly, there is a lot of natural redundancy among\n> each xl_heap_freeze_tuple entry -- each tuple's xl_heap_freeze_tuple\n> details tend to match. We can usually get away with storing each\n> unique combination of values from xl_heap_freeze_tuple once per\n> xl_heap_freeze_page record, while storing associated page offset\n> numbers in a separate area, grouped by their canonical freeze plan\n> (which is a normalized version of the information currently stored in\n> xl_heap_freeze_tuple).\n\nTrue. I've not looked at the patch in depth yet but I think we need\nregression tests for this.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 16 Sep 2022 16:29:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 12:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> After a quick benchmark, I've confirmed that the amount of WAL records\n> for freezing 1 million tuples reduced to about one-fifth (1.2GB vs\n> 250MB). Great.\n\nI think that the really interesting thing about the patch is how this\nchanges the way we should think about freezing costs. It makes\npage-level batching seem very natural.\n\nThe minimum possible size of a Heap2/FREEZE_PAGE record is 64 bytes,\nonce alignment and so on is taken into account (without the patch).\nOnce we already know that we have to freeze *some* tuples on a given\nheap page, it becomes very reasonable to freeze as many as possible,\nin batch, just because we know that it'll be much cheaper if we do it\nnow versus doing it later on instead. Even if this extra freezing ends\nup \"going to waste\" due to updates against the same tuples that happen\na little later on, the *added* cost of freezing \"extra\" tuples will\nhave been so small that it's unlikely to matter. On the other hand, if\nit's not wasted we'll be *much* better off.\n\nIt's very hard to predict the future, which is kinda what the current\nFreezeLimit-based approach to freezing does. It's actually quite easy\nto understand the cost of freezing now versus freezing later, though.\nAt a high level, it makes sense for VACUUM to focus on freezing costs\n(including the risk that comes with *not* freezing with larger\ntables), and not worry so much about making accurate predictions.\nMaking accurate predictions about freezing/workload characteristics is\noverrated.\n\n> True. I've not looked at the patch in depth yet but I think we need\n> regression tests for this.\n\nWhat did you have in mind?\n\nI think that the best way to test something like this is with\nwal_consistency_checking. That mostly works fine. However, note that\nheap_mask() won't always be able to preserve the state of a tuple's\nxmax when modified by freezing. We sometimes need \"hint bits\" to\nactually reliably be set in REDO, when replaying the records for\nfreezing. At other times they really are just hints. We have to\nconservatively assume that it's just a hint when masking. Not sure if\nI can do much about that.\n\nNote that this optimization is one level below lazy_scan_prune(), and\none level above heap_execute_freeze_tuple(). Neither function really\nchanges at all. This seems useful because there are rare\npg_upgrade-only paths where xvac fields need to be frozen. That's not\ntested either.\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 16 Sep 2022 12:24:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 2:01 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'd like to talk about one such technique on this thread. The attached\n> WIP patch reduces the size of xl_heap_freeze_page records by applying\n> a simple deduplication process.\n\nAttached is v2, which I'm just posting to keep CFTester happy. No real\nchanges here.\n\n--\nPeter Geoghegan",
"msg_date": "Tue, 20 Sep 2022 15:12:00 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 03:12:00PM -0700, Peter Geoghegan wrote:\n> On Mon, Sep 12, 2022 at 2:01 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> I'd like to talk about one such technique on this thread. The attached\n>> WIP patch reduces the size of xl_heap_freeze_page records by applying\n>> a simple deduplication process.\n> \n> Attached is v2, which I'm just posting to keep CFTester happy. No real\n> changes here.\n\nThis idea seems promising. I see that you called this patch a\nwork-in-progress, so I'm curious what else you are planning to do with it.\n\nAs I'm reading this thread and the patch, I'm finding myself wondering if\nit's worth exploring using wal_compression for these records instead. I\nthink you've essentially created an efficient compression mechanism for\nthis one type of record, but I'm assuming that lz4/zstd would also yield\nsome rather substantial improvements for this kind of data. Presumably a\ngeneric WAL record compression mechanism could be reused for other large\nrecords, too. That could be much easier than devising a deduplication\nstrategy for every record type.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Sep 2022 13:13:58 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 1:14 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> This idea seems promising. I see that you called this patch a\n> work-in-progress, so I'm curious what else you are planning to do with it.\n\nI really just meant that the patch wasn't completely finished at that\npoint. I hadn't yet convinced myself that I mostly had it right. I'm\nmore confident now.\n\n> As I'm reading this thread and the patch, I'm finding myself wondering if\n> it's worth exploring using wal_compression for these records instead.\n\nThe term deduplication works better than compression here because\nwe're not actually decompressing anything in the REDO routine. Rather,\nthe REDO routine processes each freeze plan by processing all affected\ntuples in order. To me this seems like the natural way to structure\nthings -- the WAL records are much smaller, but in a way that's kind\nof incidental. The approach taken by the patch just seems like the\nnatural approach, given the specifics of how freezing works at a high\nlevel.\n\n> I think you've essentially created an efficient compression mechanism for\n> this one type of record, but I'm assuming that lz4/zstd would also yield\n> some rather substantial improvements for this kind of data.\n\nI don't think of it that way. I've used the term \"deduplication\" to\nadvertise the patch, but that's mostly just a description of what\nwe're doing in the patch relative to what we do on HEAD today. There\nis nothing truly clever in the patch. We see a huge amount of\nredundancy among tuples from the same page in practically all cases,\nfor reasons that have everything to do with what freezing is, and how\nit works at a high level. The thought process that led to my writing\nthis patch was more high level than appearances suggest. (I often\nwrite patches that combine high level and low level insights in some\nway or other, actually.)\n\nTheoretically there might not be very much redundancy within each\nxl_heap_freeze_page record, with the right workload, but in practice a\ndecrease of 4x or more is all but guaranteed once you have more than a\nfew tuples to freeze on each page. If there are other WAL records that\nare as space inefficient as xl_heap_freeze_page is, then I'd be\nsurprised -- it is *unusually* space inefficient (like I said, I\nsuspect that this may have something to do with the fact that it was\noriginally designed under time pressure). So I don't expect that this\npatch tells us much about what we should do for any other WAL record.\nI certainly *hope* that it doesn't, at least.\n\n> Presumably a\n> generic WAL record compression mechanism could be reused for other large\n> records, too. That could be much easier than devising a deduplication\n> strategy for every record type.\n\nIt's quite possible that that's a good idea, but that should probably\nwork as an additive thing. That's something that I think of as a\n\"clever technique\", whereas I'm focussed on just not being naive in\nhow we represent this one specific WAL record type.\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 21 Sep 2022 14:11:36 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 2:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Presumably a\n> > generic WAL record compression mechanism could be reused for other large\n> > records, too. That could be much easier than devising a deduplication\n> > strategy for every record type.\n>\n> It's quite possible that that's a good idea, but that should probably\n> work as an additive thing. That's something that I think of as a\n> \"clever technique\", whereas I'm focussed on just not being naive in\n> how we represent this one specific WAL record type.\n\nBTW, if you wanted to pursue something like this, that would work with\nmany different types of WAL record, ISTM that a \"medium level\" (not\nlow level) approach might be the best place to start. In particular,\nthe way that page offset numbers are represented in many WAL records\nis quite space inefficient. A domain-specific approach built with\nsome understanding of how page offset numbers tend to look in practice\nseems promising.\n\nThe representation of page offset numbers in PRUNE and VACUUM heapam\nWAL records (and in index WAL records) always just stores an array of\n2 byte OffsetNumber elements. It probably wouldn't be all that\ndifficult to come up with a simple scheme for compressing an array of\nOffsetNumbers in WAL records. It certainly doesn't seem like it would\nbe all that difficult to get it down to 1 byte per offset number in\nmost cases (even greater improvements seem doable).\n\nThat could also be used for the xl_heap_freeze_page record type --\nthough only after this patch is committed. The patch makes the WAL\nrecord use a simple array of page offset numbers, just like in\nPRUNE/VACUUM records. That's another reason why the approach\nimplemented by the patch seems like \"the natural approach\" to me. It's\nmuch closer to how heapam PRUNE records work (we have a variable\nnumber of arrays of page offset numbers in both cases).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 21 Sep 2022 14:41:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 02:41:28PM -0700, Peter Geoghegan wrote:\n> On Wed, Sep 21, 2022 at 2:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> > Presumably a\n>> > generic WAL record compression mechanism could be reused for other large\n>> > records, too. That could be much easier than devising a deduplication\n>> > strategy for every record type.\n>>\n>> It's quite possible that that's a good idea, but that should probably\n>> work as an additive thing. That's something that I think of as a\n>> \"clever technique\", whereas I'm focussed on just not being naive in\n>> how we represent this one specific WAL record type.\n> \n> BTW, if you wanted to pursue something like this, that would work with\n> many different types of WAL record, ISTM that a \"medium level\" (not\n> low level) approach might be the best place to start. In particular,\n> the way that page offset numbers are represented in many WAL records\n> is quite space inefficient. A domain-specific approach built with\n> some understanding of how page offset numbers tend to look in practice\n> seems promising.\n\nI wouldn't mind giving this a try.\n\n> The representation of page offset numbers in PRUNE and VACUUM heapam\n> WAL records (and in index WAL records) always just stores an array of\n> 2 byte OffsetNumber elements. It probably wouldn't be all that\n> difficult to come up with a simple scheme for compressing an array of\n> OffsetNumbers in WAL records. It certainly doesn't seem like it would\n> be all that difficult to get it down to 1 byte per offset number in\n> most cases (even greater improvements seem doable).\n> \n> That could also be used for the xl_heap_freeze_page record type --\n> though only after this patch is committed. The patch makes the WAL\n> record use a simple array of page offset numbers, just like in\n> PRUNE/VACUUM records. That's another reason why the approach\n> implemented by the patch seems like \"the natural approach\" to me. It's\n> much closer to how heapam PRUNE records work (we have a variable\n> number of arrays of page offset numbers in both cases).\n\nYeah, it seems likely that we could pack offsets in single bytes in many\ncases. A more sophisticated approach could even choose how many bits to\nuse per offset based on the maximum in the array. Furthermore, we might be\nable to make use of SIMD instructions to mitigate any performance penalty.\n\nI'm tempted to start by just using single-byte offsets when possible since\nthat should be relatively simple while still yielding a decent improvement\nfor many workloads. What do you think?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 21 Sep 2022 21:21:04 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
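The two ideas in the exchange above -- offset arrays that are sorted and unique, and one byte per offset in most cases -- can be combined into a simple differential encoding. A hypothetical Python sketch, not a proposed on-disk format (a real version would be C and would worry about alignment): since offsets strictly increase, store the gap from the previous offset, which almost always fits in one byte, with a zero byte escaping to a two-byte gap.

```python
def encode_offsets(offsets):
    """Delta-encode a sorted, unique array of 1-based offset numbers.
    Gaps of 1..255 take one byte; a 0 byte escapes a 2-byte big gap."""
    out, prev = bytearray(), 0
    for off in offsets:
        gap = off - prev
        if 1 <= gap <= 255:
            out.append(gap)
        else:                      # gap of 0 is impossible, so it's free
            out.append(0)
            out += gap.to_bytes(2, "big")
        prev = off
    return bytes(out)

def decode_offsets(data):
    offsets, prev, i = [], 0, 0
    while i < len(data):
        gap = data[i]; i += 1
        if gap == 0:               # escaped two-byte gap
            gap = int.from_bytes(data[i:i + 2], "big"); i += 2
        prev += gap
        offsets.append(prev)
    return offsets

offs = [1, 2, 3, 10, 50, 400]
enc = encode_offsets(offs)
assert decode_offsets(enc) == offs
print(len(enc), "vs", 2 * len(offs))  # 8 vs 12 bytes
```

Dense arrays (the common case after pruning most of a page) encode at exactly one byte per offset; and, as noted later in the thread, the explicit decode step could even be skipped if the REDO routine consumed the deltas directly.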
{
"msg_contents": "On Wed, Sep 21, 2022 at 02:11:36PM -0700, Peter Geoghegan wrote:\n> On Wed, Sep 21, 2022 at 1:14 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Presumably a\n>> generic WAL record compression mechanism could be reused for other large\n>> records, too. That could be much easier than devising a deduplication\n>> strategy for every record type.\n> \n> It's quite possible that that's a good idea, but that should probably\n> work as an additive thing. That's something that I think of as a\n> \"clever technique\", whereas I'm focussed on just not being naive in\n> how we represent this one specific WAL record type.\n\nGot it. I think that's a fair point.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Sep 2022 21:22:48 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 9:21 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> I wouldn't mind giving this a try.\n\nDefinitely seems worth experimenting with. Many WAL records generated\nduring VACUUM (and during opportunistic pruning/index tuple deletion)\nhave offset number arrays that can be assumed to be both sorted and\nunique. My guess is that these WAL record types are the most amenable\nto compression at the WAL record level.\n\n> Yeah, it seems likely that we could pack offsets in single bytes in many\n> cases. A more sophisticated approach could even choose how many bits to\n> use per offset based on the maximum in the array. Furthermore, we might be\n> able to make use of SIMD instructions to mitigate any performance penalty.\n\nI guess I'd start with some kind of differential compression that\nrelies on the arrays being both sorted and unique. While it might be\nimportant to be able to compose together multiple different techniques\n(something that is more than the sum of its parts can be very useful),\nit seems most important to quickly validate the basic idea first.\n\nOne obvious thing that still seems worth pointing out: you may not\nneed to decompress anything. All that you really need to do is to get\nthe logic from some routine like PageIndexMultiDelete() to be executed\nby a REDO routine. Perhaps it'll make sense to come up with a\nrepresentation that can just be passed to the routine directly. (I\nreally don't know how likely this is, but it's something to consider.)\n\n> I'm tempted to start by just using single-byte offsets when possible since\n> that should be relatively simple while still yielding a decent improvement\n> for many workloads. What do you think?\n\nThe really big wins for WAL size will come from applying high level\ninsights about what is really needed, in all likelihood. The main\noverhead is very often generic WAL record header overhead. When it is\nthen it'll be hard to really move the needle just by compressing the\npayload. To do that you'd need to find a way to use fewer WAL records\nto do the same amount of work.\n\nThe thing that I think will really make the biggest difference is\nmerging PRUNE records with FREEZE records (and maybe even make the\nmerged record do the work of a VISIBLE record when that's possible).\nJust because now you have 1 WAL record instead of 2 (or even 3) WAL\nrecords. Obviously that's a complicated project, but it's another case\nwhere it feels like the more efficient approach might also be simpler.\nWe often write a PRUNE record with only one or two items in the array,\nin which case it's practically free to do some freezing, at least in\nterms of WAL space overhead (as long as you can do it with the same\nWAL record). Plus freezing is already inescapably tied to pruning --\nwe always prune a page that we're going to try to freeze in VACUUM (we\ncan't safely freeze dead tuples, so there is more or less a dependency\nalready).\n\nNot that you shouldn't pursue compressing the payload from WAL records\nas a project -- maybe that'll work very well. I'm just pointing out\nthat there is a bigger picture, that may or may not end up mattering\nhere. For the patch on this thread there certainly is a bigger picture\nabout costs over time. Something like that could be true for this\nother patch too.\n\nIt's definitely worth considering the size of the WAL records when\nthere are only one or two items, how common that may be in each\nindividual case, etc.\nFor example, FREEZE records have a minimum size of 64 bytes in\npractice, due to WAL record alignment overhead (the payload itself\ndoesn't usually have to be aligned, but the WAL header still does). It\nmay be possible to avoid going over the critical threshold that makes\nthe WAL size one MAXALIGN() quantum larger in the event of having only\na few tuples to freeze, a scenario where negative compression is\nlikely.\n\nNegative compression is always a potential problem, but maybe you can\ndeal with it very effectively by thinking about the problem\nholistically. If you're \"wasting\" space that was just going to be\nalignment padding anyway, does it really matter at all? (Maybe there\nis some reason to care, but I offhand I can't think of one.)\n\n-- \nPeter Geoghegan",
"msg_date": "Thu, 22 Sep 2022 09:51:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 3:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v2, which I'm just posting to keep CFTester happy. No real\n> changes here.\n\nAttached is v3. I'd like to move forward with commit soon. I'll do so\nin the next few days, barring objections.\n\nv3 has vacuumlazy.c pass NewRelfrozenXid instead of FreezeLimit for\nthe purposes of generating recovery conflicts during subsequent REDO\nof the resulting xl_heap_freeze_page WAL record. This more general\napproach is preparation for my patch to add page-level freezing [1].\nIt might theoretically lead to more recovery conflicts, but in\npractice the impact should be negligible. For one thing VACUUM must\nfreeze *something* before any recovery conflict can happen during\nsubsequent REDO on a replica in hot standby. It's far more likely that\nany disruptive recovery conflicts come from pruning.\n\nIt also makes the cutoff_xid field from the xl_heap_freeze_page WAL\nrecord into a \"standard latestRemovedXid format\" field. In other words\nit backs up an XID passed by vacuumlazy.c caller during original\nexecution (not in the REDO routine, as on HEAD). To make things\nclearer, the patch also renames the nearby xl_heap_visible.cutoff_xid\nfield to xl_heap_visible.latestRemovedXid. Now there are no WAL\nrecords with a field called \"cutoff_xid\" (they're all called\n\"latestRemovedXid\" now). This matches PRUNE records, and B-Tree DELETE\nrecords.\n\nThe overall picture is that all REDO routines (for both heapam and\nindex AMs) now advertise that they have a field that they use to\ngenerate recovery conflicts that follows a standard format. All\nlatestRemovedXid XIDs are applied in a standard way during REDO: by\npassing them to ResolveRecoveryConflictWithSnapshot(). Users can grep\nthe output of tools like pg_waldump to find latestRemovedXid fields,\nwithout necessarily needing to give any thought to which kind of WAL\nrecords are involved, or even the rmgr. 
Presenting this information\nprecisely and uniformly seems useful to me. (Perhaps we should have a\ntruly generic name, which latestRemovedXid isn't, but that can be\nhandled separately.)\n\n[1] https://commitfest.postgresql.org/39/3843/\n--\nPeter Geoghegan",
"msg_date": "Thu, 10 Nov 2022 16:48:17 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 04:48:17PM -0800, Peter Geoghegan wrote:\n> On Tue, Sep 20, 2022 at 3:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Attached is v2, which I'm just posting to keep CFTester happy. No real\n> > changes here.\n> \n> Attached is v3. I'd like to move forward with commit soon. I'll do so\n> in the next few days, barring objections.\n\nNote that this comment is dangling in your patch:\n\n+{\n+ Page page = BufferGetPage(buffer);\n+\n+ /* nor when there are no tuples to freeze */\n...\n- /* Caller should not call me on a non-WAL-logged relation */\n- Assert(RelationNeedsWAL(reln));\n- /* nor when there are no tuples to freeze */\n- Assert(ntuples > 0);\n\n\n",
"msg_date": "Thu, 10 Nov 2022 20:59:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 7:00 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Note that this comment is dangling in your patch:\n\nAttached is v4, which removes the old comments you pointed out were\nnow out of place (they weren't adding much anyway). Also fixed bitrot\nagainst HEAD from today's visibility map commit from Jeff Davis.\n\nThere is a more substantive change here, too. Like v3, v4 refactors\nthe *mechanical* details of how the XID based cutoff is handed down.\nHowever, unlike v3, v4 goes back to using vacuumlazy.c's FreezeLimit\nas the starting point for generating a latestRemovedXid. It seemed\nbetter to deal with the recovery conflict issues created by my big\npage-level freezing/freezing strategy patch set in the patch set\nitself.\n\nWill commit this early next week barring any objections.\n\nThanks\n--\nPeter Geoghegan",
"msg_date": "Fri, 11 Nov 2022 10:38:57 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Fri, Nov 11, 2022 at 10:38 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v4, which removes the old comments you pointed out were\n> now out of place (they weren't adding much anyway). Also fixed bitrot\n> against HEAD from today's visibility map commit from Jeff Davis.\n\nPushed something like this earlier today, though without any changes\nto VISIBLE records.\n\nI just started a new thread to discuss standardizing the symbol name\nfor recovery conflict XID cutoffs reported by tools like pg_waldump.\nSeemed better to deal with VISIBLE records in the scope of that new\nrefactoring patch.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 15 Nov 2022 10:26:05 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-15 10:26:05 -0800, Peter Geoghegan wrote:\n> Pushed something like this earlier today, though without any changes\n> to VISIBLE records.\n\nWhile updating a patch to log various offsets in pg_waldump, I noticed a few\nminor issues in this patch:\n\nISTM that some of the page level freezing functions are misnamed. In heapam.c\nthe heap_xlog* routines are for replay, afaict. However\nheap_xlog_freeze_plan() is used to WAL log the freeze\nplan. heap_xlog_freeze_page() is used to replay that WAL record. Probably your\nbrain is too used to nbtree/ :).\n\nI think s/heap_xlog_freeze/heap_log_freeze/ would mostly do the trick, except\nthat heap_xlog_new_freeze_plan() doesn't quite fit in the scheme.\n\nThe routines then also should be moved a bit up, because right now they're\ninbetween other routines doing WAL replay, adding to the confusion.\n\n\nThe memcpy in heap_xlog_freeze_page() seems a tad odd. I assume that the\nalignment guarantees for xl_heap_freeze_plan are too weak? But imo it's\nfailure prone (and I'm not sure strictly speaking legal from an undefined\nbehaviour POV) to form a pointer to a misaligned array. Even if we then later\njust memcpy() from those pointers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Jan 2023 13:43:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 1:43 PM Andres Freund <andres@anarazel.de> wrote:\n> ISTM that some of the page level freezing functions are misnamed. In heapam.c\n> the heap_xlog* routines are for replay, afaict. However\n> heap_xlog_freeze_plan() is used to WAL log the freeze\n> plan. heap_xlog_freeze_page() is used to replay that WAL record. Probably your\n> brain is too used to nbtree/ :).\n\nSometimes I wonder why other people stubbornly insist on not starting\nevery function name with an underscore. :-)\n\n> I think s/heap_xlog_freeze/heap_log_freeze/ would mostly do the trick, except\n> that heap_xlog_new_freeze_plan() doesn't quite fit in the scheme.\n\n> The routines then also should be moved a bit up, because right now they're\n> inbetween other routines doing WAL replay, adding to the confusion.\n\nI believe that I used this scheme because of the fact that the new\nfunctions were conceptually related to REDO routines, even though they\nrun during original execution. I'm quite happy to revise the code\nbased on your suggestions, though.\n\n> The memcpy in heap_xlog_freeze_page() seems a tad odd. I assume that the\n> alignment guarantees for xl_heap_freeze_plan are too weak?\n\nThey're not too weak. I'm not sure why the memcpy() was used. I see\nyour point; it makes you wonder if it must be necessary, which then\nseems to call into question why it's okay to access the main array as\nan array. I can change this detail, too.\n\nI'll try to get back to it this week.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 9 Jan 2023 14:18:21 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 2:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'll try to get back to it this week.\n\nAttached patch fixes up these issues. It's almost totally mechanical.\n\n(Ended up using \"git diff --color-moved=dimmed-zebra\n--color-moved-ws=ignore-all-space\" with this, per your recent tip,\nwhich did help.)\n\n--\nPeter Geoghegan",
"msg_date": "Wed, 11 Jan 2023 16:06:31 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-11 16:06:31 -0800, Peter Geoghegan wrote:\n> On Mon, Jan 9, 2023 at 2:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I'll try to get back to it this week.\n> \n> Attached patch fixes up these issues. It's almost totally mechanical.\n\nLooks better, thanks!\n\n\n> (Ended up using \"git diff --color-moved=dimmed-zebra\n> --color-moved-ws=ignore-all-space\" with this, per your recent tip,\n> which did help.)\n\nIt's a really useful feature. I configured git to always use\n--color-moved=dimmed-zebra, but haven't quite dared to enable\n--color-moved-ws=ignore-all-space by default.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Jan 2023 16:44:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 4:44 PM Andres Freund <andres@anarazel.de> wrote:\n> > Attached patch fixes up these issues. It's almost totally mechanical.\n>\n> Looks better, thanks!\n\nPushed that just now.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 11 Jan 2023 17:32:18 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating\n per-tuple freeze plans"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile looking at Robert's work to improve our handling of roles I found it\nhelpful to be able to see not only the directly recorded membership\ninformation, which now includes grantor, but also to see what was reachable\nvia SET ROLE. The attached patch puts that information at our users'\nfingertips by creating new system views and psql meta-commands.\n\nThe patch presented is mostly content complete though not ready to be\ncommitted by my own standards. But before I start moving it closer to that\nstate I wanted to get feedback and at least moral support for its\ncompletion.\n\nI've decided not to touch \\du and \\dg at this time. The role graph concept\nI'm implementing complements their \"show the catalog in user-friendly\nformat\" design.\n\nThe graph concept is doable even without the v16 changes but I'm wondering\nif that is a hard requirement here. In any case, the patch does not\nproperly protect itself in that situation even though the meta-commands\ndepend on a system view being present.\n\nNot having pl/pgsql available while building out the system views (initdb)\nis an annoyance - one I've overcome by including the code I used to\ngenerate a mis-named normal view as part of the commit and doing the\nintegration manually via pg_dump. The pg_role_relationship view probably\ncould be pulled out of the dynamic code generator, or rolled back into it\nas a CTE, depending on how valuable it seems to provide the recursive CTE\nquery to the user. I'm leaning toward CTE but figure my opinion is likely\nto change upon seeing feedback.\n\nFor the rest of my design choices and thinking please see the system view\npg_role_graph documentation. There are also code comments in the\npg_role_graph.plpgsql file.\n\nI haven't looked at how to implement automated testing on this yet, I've\njust used the roles below and manually verified I got the expected results\nand that they didn't change during refactoring. 
A few of these are noted\nin the view documentation to explain the format I've implemented. If the\nview is designed well, reviewing the expected memberships should be\nreasonably easy, so checking the data also tests the user interface.\n\nThanks!\n\nDavid J.\n\ncreate group grp1;\ncreate user usr1;\ngrant grp1 to usr1;\ncreate group grp2;\ncreate user usr2;\ngrant grp2 to usr1;\ngrant grp2 to usr2 with admin option;\ngrant grp2 to usr1 granted by usr2;\ncreate user usr1a;\ngrant usr1 to usr1a;\ncreate group \"group 3\";\ngrant \"group 3\" to usr1a;\ncreate group grp4;\ncreate group grp4adm;\ncreate user usr4;\ncreate user usr4a;\ngrant grp4 to grp4adm with admin option;\ngrant grp4adm to usr4;\ngrant grp4 to usr4a granted by grp4adm;\ncreate role sup1 with superuser login;\ncreate role usr5 with login;\ncreate group grp5a;\ncreate group grp5b;\ncreate group grp5c;\ncreate group grp5d;\ngrant grp5a to usr5;\ngrant grp5b to grp5a;\ngrant grp5c to grp5b with admin option;\ngrant grp5d to grp5c;\ncreate group grp6a;\ncreate group grp6b;\ncreate group grp6c;\ncreate group grp6d;\ngrant grp6b to grp6a;\ngrant grp6c to grp6b;\ngrant grp6d to grp6c;\n-- grant grp6a to grp6d; // not possible, no cycles allowed",
"msg_date": "Mon, 12 Sep 2022 16:46:21 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Role Graph Viewing in Core (psql: \\drr \\dru \\drg, system view:\n pg_role_graph, pg_role_relationship)"
}
] |
[
{
"msg_contents": "Move any remaining files generated by pg_upgrade into an internal subdir\n\nThis change concerns a couple of .txt files (for internal state checks)\nthat were still written in the path where the binary is executed, and\nnot in the subdirectory located in the target cluster. Like the other\n.txt files doing already so (like loadable_libraries.txt), these are\nsaved in the base output directory. Note that on failure, the logs\nreport the full path to the .txt file generated, so these are easy to\nfind.\n\nOversight in 38bfae3.\n\nAuthor: Daniel Gustafsson\nReviewed-by: Michael Paquier, Justin Prysby\nDiscussion: https://postgr.es/m/181A6DA8-3B7F-4B71-82D5-363FF0146820@yesql.se\nBackpatch-through: 15\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/ee5353abb6124de5ffd24ef1cedbc2a7196d4fd5\n\nModified Files\n--------------\nsrc/bin/pg_upgrade/check.c | 12 +++++++++---\nsrc/bin/pg_upgrade/version.c | 12 +++++++++---\n2 files changed, 18 insertions(+), 6 deletions(-)",
"msg_date": "Tue, 13 Sep 2022 01:39:59 +0000",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pgsql: Move any remaining files generated by pg_upgrade into an\n interna"
},
{
"msg_contents": "Hi,\n\nAfter my last rebase of the meson tree I encountered the following test\nfailure:\n\nhttps://cirrus-ci.com/task/5532444261613568\n\n[20:23:04.171] ------------------------------------- 8< -------------------------------------\n[20:23:04.171] stderr:\n[20:23:04.171] # Failed test 'pg_upgrade_output.d/ not removed after pg_upgrade --check success'\n[20:23:04.171] # at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 249.\n[20:23:04.171] # Failed test 'pg_upgrade_output.d/ removed after pg_upgrade success'\n[20:23:04.171] # at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 261.\n[20:23:04.171] # Looks like you failed 2 tests of 13.\n\nregress_log:\nhttps://api.cirrus-ci.com/v1/artifact/task/5532444261613568/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n\nThe pg_upgrade output contains these potentially relevant warnings:\n\n...\n*Clusters are compatible*\npg_upgrade: warning: could not remove file or directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20220919T201958.511/log\": Directory not empty\npg_upgrade: warning: could not remove file or directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20220919T201958.511\": Directory not empty\n...\n\n\nI don't know if actually related to the commit below, but there've been a\nlot of runs of the pg_upgrade tests in the meson branch, and this is the first\nfailure of this kind. Unfortunately the error seems to be transient -\nrerunning the tests succeeded.\n\nOn 2022-09-13 01:39:59 +0000, Michael Paquier wrote:\n> Move any remaining files generated by pg_upgrade into an internal subdir\n>\n> This change concerns a couple of .txt files (for internal state checks)\n> that were still written in the path where the binary is executed, and\n> not in the subdirectory located in the target cluster. 
Like the other\n> .txt files doing already so (like loadable_libraries.txt), these are\n> saved in the base output directory. Note that on failure, the logs\n> report the full path to the .txt file generated, so these are easy to\n> find.\n>\n> Oversight in 38bfae3.\n>\n> Author: Daniel Gustafsson\n> Reviewed-by: Michael Paquier, Justin Prysby\n> Discussion: https://postgr.es/m/181A6DA8-3B7F-4B71-82D5-363FF0146820@yesql.se\n> Backpatch-through: 15\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 19 Sep 2022 14:32:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "pg_upgrade test failure"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 02:32:17PM -0700, Andres Freund wrote:\n> I don't know if actually related to the commit below, but there've been a\n> lot of runs of the pg_upgrade tests in the meson branch, and this is the first\n> failure of this kind. Unfortunately the error seems to be transient -\n> rerunning the tests succeeded.\n\nThis smells to me like a race condition in pg_upgrade (or even pg_ctl\nfor SERVER_LOG_FILE) where the code still has handles on some of the\nfiles in the log/ subdirectory, causing its removal to not be able to\nfinish. If this proves to be rather easy to reproduce, a\nlist of the files still present in this path would give an easy hint\nto follow. Does this reproduce with a good frequency?\n--\nMichael",
"msg_date": "Tue, 20 Sep 2022 10:08:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-20 10:08:41 +0900, Michael Paquier wrote:\n> On Mon, Sep 19, 2022 at 02:32:17PM -0700, Andres Freund wrote:\n> > I don't know if actually related to the commit below, but there've been a\n> > lot of runs of the pg_upgrade tests in the meson branch, and this is the first\n> > failure of this kind. Unfortunately the error seems to be transient -\n> > rerunning the tests succeeded.\n> \n> This smells to me like a race condition in pg_upgrade (or even pg_ctl\n> for SERVER_LOG_FILE) where the code still has handles on some of the\n> files in the log/ subdirectory, causing its removal to not be able to\n> finish happen.\n\nI don't really see what'd race with what here? pg_upgrade has precise control\nover what's happening here, no?\n\n\n> If this proves to be rather easy to reproduce, giving\n> a list of the files still present in this path would give a hint easy\n> to follow. Does this reproduce with a good frequency?\n\nI've only seen it once so far, but there haven't been many CI runs of the\nmeson branch since rebasing ontop of the last changes to pg_upgrade.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 19 Sep 2022 18:13:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 06:13:17PM -0700, Andres Freund wrote:\n> I don't really see what'd race with what here? pg_upgrade has precise control\n> over what's happening here, no?\n\nA code path could have forgotten an fclose() for example, but this code\nis rather old and close-proof as far as I know. Most of the log files\nare used with redirections for external calls, though I don't see\nhow these could still be held after pg_upgrade finishes :/\nCould the use of meson somewhat influence when running tests on Windows?\n\n> I've only seen it once so far, but there haven't been many CI runs of the\n> meson branch since rebasing ontop of the last changes to pg_upgrade.\n\nHmm, okay. Is that a specific branch in one of your public repos?\n--\nMichael",
"msg_date": "Tue, 20 Sep 2022 10:25:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 02:32:17PM -0700, Andres Freund wrote:\n> Hi,\n> \n> After my last rebase of the meson tree I encountered the following test\n> failure:\n> \n> https://cirrus-ci.com/task/5532444261613568\n> \n> [20:23:04.171] ------------------------------------- 8< -------------------------------------\n> [20:23:04.171] stderr:\n> [20:23:04.171] # Failed test 'pg_upgrade_output.d/ not removed after pg_upgrade --check success'\n> [20:23:04.171] # at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 249.\n> [20:23:04.171] # Failed test 'pg_upgrade_output.d/ removed after pg_upgrade success'\n> [20:23:04.171] # at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 261.\n> [20:23:04.171] # Looks like you failed 2 tests of 13.\n> \n> regress_log:\n> https://api.cirrus-ci.com/v1/artifact/task/5532444261613568/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n> \n> The pg_upgrade output contains these potentially relevant warnings:\n> \n> ...\n> *Clusters are compatible*\n> pg_upgrade: warning: could not remove file or directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20220919T201958.511/log\": Directory not empty\n> pg_upgrade: warning: could not remove file or directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20220919T201958.511\": Directory not empty\n> ...\n\nIt looks like it failed to remove a *.log file under windows, which\ncaused rmtree to fail.\n\nsrc/bin/pg_upgrade/pg_upgrade.h-#define DB_DUMP_LOG_FILE_MASK \"pg_upgrade_dump_%u.log\"\nsrc/bin/pg_upgrade/pg_upgrade.h-#define SERVER_LOG_FILE \"pg_upgrade_server.log\"\nsrc/bin/pg_upgrade/pg_upgrade.h-#define UTILITY_LOG_FILE \"pg_upgrade_utility.log\"\nsrc/bin/pg_upgrade/pg_upgrade.h:#define INTERNAL_LOG_FILE \"pg_upgrade_internal.log\"\n\nee5353abb only changed .txt files used for errors so can't be the 
cause,\nbut the original commit 38bfae3 might be related.\n\nI suspect that rmtree() was looping in pgunlink(), and got ENOENT, so\ndidn't warn about the file itself, but then failed one moment later in\nrmdir.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 19 Sep 2022 20:31:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 7:01 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Sep 19, 2022 at 02:32:17PM -0700, Andres Freund wrote:\n> > Hi,\n> >\n> > After my last rebase of the meson tree I encountered the following test\n> > failure:\n> >\n> > https://cirrus-ci.com/task/5532444261613568\n> >\n> > [20:23:04.171] ------------------------------------- 8< -------------------------------------\n> > [20:23:04.171] stderr:\n> > [20:23:04.171] # Failed test 'pg_upgrade_output.d/ not removed after pg_upgrade --check success'\n> > [20:23:04.171] # at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 249.\n> > [20:23:04.171] # Failed test 'pg_upgrade_output.d/ removed after pg_upgrade success'\n> > [20:23:04.171] # at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 261.\n> > [20:23:04.171] # Looks like you failed 2 tests of 13.\n> >\n> > regress_log:\n> > https://api.cirrus-ci.com/v1/artifact/task/5532444261613568/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n> >\n> > The pg_upgrade output contains these potentially relevant warnings:\n> >\n> > ...\n> > *Clusters are compatible*\n> > pg_upgrade: warning: could not remove file or directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20220919T201958.511/log\": Directory not empty\n> > pg_upgrade: warning: could not remove file or directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20220919T201958.511\": Directory not empty\n> > ...\n\nJust for the records - the same issue was also seen here [1], [2].\n\n[1] https://cirrus-ci.com/task/5709014662119424?logs=check_world#L82\n[2] https://api.cirrus-ci.com/v1/artifact/task/5709014662119424/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web 
Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Sep 2022 11:47:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-27 11:47:37 +0530, Bharath Rupireddy wrote:\n> Just for the records - the same issue was also seen here [1], [2].\n> \n> [1] https://cirrus-ci.com/task/5709014662119424?logs=check_world#L82\n> [2] https://api.cirrus-ci.com/v1/artifact/task/5709014662119424/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n\nYea, this is at the moment one of the top sources of spurious test failures\nfor cfbot. Just manually looking at http://cfbot.cputube.org/ for tasks that\nrecently changed state on windows:\n\nhttps://cirrus-ci.com/task/6422687231770624?logs=check_world#L60\nhttps://cirrus-ci.com/task/6408332243107840?logs=check_world#L60\nhttps://cirrus-ci.com/task/6202259712245760?logs=check_world#L60\nhttps://cirrus-ci.com/task/6150885981028352?logs=check_world#L60\nhttps://cirrus-ci.com/task/5361597290905600?logs=check_world#L60\nhttps://cirrus-ci.com/task/5177327624650752?logs=check_world#L60\nhttps://cirrus-ci.com/task/4862503887831040?logs=check_world#L60\nhttps://cirrus-ci.com/task/4576362479484928?logs=check_world#L60\n\nSomething needs to happen here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Oct 2022 08:46:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 1:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I suspect that rmtree() was looping in pgunlink(), and got ENOENT, so\n> didn't warn about the file itself, but then failed one moment later in\n> rmdir.\n\nYeah, I think this is my fault. In commit f357233c the new lstat()\ncall might return ENOENT for STATUS_DELETE_PENDING, and then we don't\nenter pgunlink()'s 10 second sleep-retry loop. Let me think about how\nbest to fix that, and how to write a regression test program that\nwould exercise stuff like this. Might take a couple of days as I am\naway from computers until mid-week.\n\n\n",
"msg_date": "Mon, 3 Oct 2022 09:07:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Mon, Oct 3, 2022 at 9:07 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Sep 20, 2022 at 1:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I suspect that rmtree() was looping in pgunlink(), and got ENOENT, so\n> > didn't warn about the file itself, but then failed one moment later in\n> > rmdir.\n>\n> Yeah, I think this is my fault. In commit f357233c the new lstat()\n> call might return ENOENT for STATUS_DELETE_PENDING, and then we don't\n> enter pgunlink()'s 10 second sleep-retry loop. Let me think about how\n> best to fix that, and how to write a regression test program that\n> would exercise stuff like this. Might take a couple of days as I am\n> away from computers until mid-week.\n\nI think something like the attached should do the right thing for\nSTATUS_DELETE_PENDING (sort of \"ENOENT-in-progress\"). unlink() goes\nback to being blocking (sleep+retry until eventually we reach ENOENT\nor we time out and give up with EACCES), but we still distinguish it\nfrom true ENOENT so we have a fast exit in that case. This is passing\nCI, but not tested yet.\n\nOne ugly thing in this patch is that it has to deal with our\nhistorical mistake (?) of including Windows headers in this file in\nCygwin builds for no reason and thus getting WIN32 defined on a\nnon-WIN32 build, as I've complained about before[1] but not yet tidied\nup.\n\nRemembering why we do any of this weird looking stuff that we don't\nneed on Unix, the general idea is that things that scan directories to\nunlink everything before unlinking the parent directory need to block\nfor a while on STATUS_DELETE_PENDING to increase their probability of\nsuccess, while things that do anything else probably just want to skip\nsuch zombie files completely. 
To recap, we have:\n\n * readdir() sees files that are ENOENT-in-progress (so recursive\nunlinks can see them)\n * unlink() waits for ENOENT-in-progress to reach ENOENT (what broke here)\n * stat() and lstat() report ENOENT-in-progress as ENOENT (done to fix\neg pg_basebackup, which used to fail at random on Windows)\n * open() reports ENOENT-in-progress as either ENOENT or EEXIST\ndepending on O_CREAT (because used by stat())\n\nClearly this set of kludges isn't perfect and other kludge-sets would\nbe possible too. One thought is that we could hide ENOENT-in-progress\nfrom readdir(), and add a new rmdir() wrapper instead. If it gets a\ndirectory-not-empty error from the kernel, it could at that point wait\nfor zombie files to go away (perhaps registering for file system\nevents with some local equivalent of KQ_FILTER_VNODE if there is one,\nto be less sloppy than the current sleep() nonsense, but sleep would\nwork too).\n\nWhen I'm back at my real keyboard I'll try to come up with tests for\nthis stuff, but I'm not sure how solid we can really make a test for\nthis particular case -- I think you'd need to have another thread open\nthe file and then close it after different periods of time, to\ndemonstrate that the retry loop works but also gives up, and that's\nexactly the sort of timing-dependent stuff we try to avoid. But I\nthink I'll try that anyway, because it's essential infrastructure to\nallow Unix-only hackers to work on this stuff. Once we have that,\nwe might be able to make some more progress with the various\nFILE_DISPOSITION_POSIX_SEMANTICS proposals, if it helps, because we'll\nhave reproducible evidence for what it really does.\n\n[1] https://commitfest.postgresql.org/39/3781/",
"msg_date": "Mon, 3 Oct 2022 12:10:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Mon, Oct 03, 2022 at 12:10:06PM +1300, Thomas Munro wrote:\n> I think something like the attached should do the right thing for\n> STATUS_DELETE_PENDING (sort of \"ENOENT-in-progress\"). unlink() goes\n> back to being blocking (sleep+retry until eventually we reach ENOENT\n> or we time out and give up with EACCES), but we still distinguish it\n> from true ENOENT so we have a fast exit in that case. This is passing\n> CI, but not tested yet.\n\n if (lstat(path, &st) < 0)\n- return -1;\n+ {\n+ if (lstat_error_was_status_delete_pending())\n+ is_lnk = false;\n+ else\n+ return -1;\n+ }\n+ else\n+ is_lnk = S_ISLNK(st.st_mode);\nSorry, I don't remember all the details in this area, but a directory\ncan never be marked as STATUS_DELETE_PENDING with some of its contents\nstill inside, right? If it has some contents, forcing unlink() all\nthe time would be fine?\n\n> One ugly thing in this patch is that it has to deal with our\n> historical mistake (?) of including Windows headers in this file in\n> Cygwin builds for no reason and thus getting WIN32 defined on a\n> non-WIN32 build, as I've complained about before[1] but not yet tidied\n> up.\n\nYour proposal remains local to dirmod.c, so that does not sound like a\nbig deal to me for the time being.\n--\nMichael",
"msg_date": "Mon, 3 Oct 2022 09:40:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Mon, Oct 3, 2022 at 1:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Oct 03, 2022 at 12:10:06PM +1300, Thomas Munro wrote:\n> > I think something like the attached should do the right thing for\n> > STATUS_DELETE_PENDING (sort of \"ENOENT-in-progress\"). unlink() goes\n> > back to being blocking (sleep+retry until eventually we reach ENOENT\n> > or we time out and give up with EACCES), but we still distinguish it\n> > from true ENOENT so we have a fast exit in that case. This is passing\n> > CI, but not tested yet.\n>\n> if (lstat(path, &st) < 0)\n> - return -1;\n> + {\n> + if (lstat_error_was_status_delete_pending())\n> + is_lnk = false;\n> + else\n> + return -1;\n> + }\n> + else\n> + is_lnk = S_ISLNK(st.st_mode);\n\n> Sorry, I don't remember all the details in this area, but a directory\n> can never be marked as STATUS_DELETE_PENDING with some of its contents\n> still inside, right?\n\nThat's my understanding, yes: just like Unix, you can't remove a\ndirectory with something in it. Unlike Unix, that includes files that\nhave been unlinked but are still open somewhere. (Note that in this\ncase it's not exactly a real directory, it's a junction point, which\nis a directory but it doesn't have contents, it is a reparse point\npointing somewhere else, so I suspect that it can't really suffer from\nENOTEMPTY, but it probably can suffer from 'someone has it open for a\nshort time because they are concurrently stat-ing it'.)\n\n> If it has some contents, forcing unlink() all\n> the time would be fine?\n\nHere's why I think it's probably OK to use unlink() unconditionally\nafter detecting STATUS_DELETE_PENDING. 
AFAICT there is no way to even\nfind out if it's a file or a junction in this state, but it doesn't\nmatter: we are not waiting for rmdir() or unlink() to succeed, we are\nwaiting for it to fail with an error other than EACCES, most likely\nENOENT (or to time out, perhaps because someone held the file open for\n11 seconds, or because EACCES was actually telling us about a\npermissions problem). EACCES is the errno for many things including\nSTATUS_DELETE_PENDING and also \"you called unlink() but it's a\ndirectory\" (should be EPERM according to POSIX, or EISDIR according\nto Linux). Both of those reasons imply that the zombie directory\nentry still exists, and we don't care which of those reasons triggered\nit. So I think that setting is_lnk = false is good enough here. Do\nyou see a hole in it?\n\n\n",
"msg_date": "Mon, 3 Oct 2022 16:03:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Mon, Oct 03, 2022 at 04:03:12PM +1300, Thomas Munro wrote:\n> So I think that setting is_lnk = false is good enough here. Do\n> you see a hole in it?\n\nI cannot think of one, off the top of my head. Thanks for the\nexplanation.\n--\nMichael",
"msg_date": "Mon, 3 Oct 2022 15:28:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Mon, Oct 3, 2022 at 7:29 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Oct 03, 2022 at 04:03:12PM +1300, Thomas Munro wrote:\n> > So I think that setting is_lnk = false is good enough here. Do\n> > you see a hole in it?\n>\n> I cannot think on one, on top of my head. Thanks for the\n> explanation.\n\nSome things I learned while trying to understand how I managed to\nintroduce that bug despite reading and testing:\n\n* the code in pgunlink() has comments saying that its retry loop is to\nhandle sharing violations\n* in fact that retry loop also comes into play for STATUS_DELETE_PENDING\n* that case is fairly well hidden, because to reach it you need to\nunlink(pathname) twice! the second call will wait up to 10 seconds\nfor handles to close and then report ENOENT, allowing rmdir(parent) to\nsucceed\n* I guess this code is relying on that double-unlink to block until\nthe directory is empty?\n* you wouldn't notice any of this if you were testing on Windows 10 on\na desktop/laptop/VM, because it now uses POSIX semantics for unlink on\nNTFS, so the first unlink truly (synchronously) unlinks (no more\nSTATUS_DELETE_PENDING)\n* Server 2019, as used on CI, still uses the traditional NT semantics\n(unlink is asynchronous, when all handles closes)\n* the fix I proposed has the right effect (I will follow up with tests\nto demonstrate)\n\nI'll post my tests for this and a bunch more things I figured out\nshortly in a new Windows-filesystem-semantics-omnibus thread.\n\n\n",
"msg_date": "Tue, 18 Oct 2022 09:47:37 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 09:47:37AM +1300, Thomas Munro wrote:\n> * Server 2019, as used on CI, still uses the traditional NT semantics\n> (unlink is asynchronous, when all handles closes)\n> * the fix I proposed has the right effect (I will follow up with tests\n> to demonstrate)\n\nWow, nice investigation. And cirrus does not offer a newer option\neither.. Do you think that Windows server 2022 (successor of 2019) is\nable to use POSIX semantics for unlink()? It looks that we are a few\nyears away from being able to do that assuming that cirrus offers a\nnewer version than server 2019, but I guess that the code could\nmention this possibility in a comment, at least..\n--\nMichael",
"msg_date": "Tue, 18 Oct 2022 13:06:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 01:06:15PM +0900, Michael Paquier wrote:\n> On Tue, Oct 18, 2022 at 09:47:37AM +1300, Thomas Munro wrote:\n> > * Server 2019, as used on CI, still uses the traditional NT semantics\n> > (unlink is asynchronous, when all handles closes)\n> > * the fix I proposed has the right effect (I will follow up with tests\n> > to demonstrate)\n> \n> Wow, nice investigation. And cirrus does not offer a newer option\n> either..\n\nCurrently Andres builds images based on cirrus's 2019 image, but I think\nwe could use any windows docker image.\n\n> Do you think that Windows server 2022 (successor of 2019) is\n> able to use POSIX semantics for unlink()?\n\nI think it's possible to use it now, like what's done here.\nhttps://commitfest.postgresql.org/40/3347/\n\nThe only caveat is that it's done conditionally.\n\n\n",
"msg_date": "Mon, 17 Oct 2022 23:31:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-17 23:31:44 -0500, Justin Pryzby wrote:\n> On Tue, Oct 18, 2022 at 01:06:15PM +0900, Michael Paquier wrote:\n> > On Tue, Oct 18, 2022 at 09:47:37AM +1300, Thomas Munro wrote:\n> > > * Server 2019, as used on CI, still uses the traditional NT semantics\n> > > (unlink is asynchronous, when all handles closes)\n> > > * the fix I proposed has the right effect (I will follow up with tests\n> > > to demonstrate)\n> > \n> > Wow, nice investigation. And cirrus does not offer a newer option\n> > either..\n> \n> Currently Andres builds images based on cirrus's 2019 image, but I think\n> we could use any windows docker image.\n\nYou unfortunately can't run newer containers than the host OS :(, just use\nolder ones. And if you use mismatching containers the startup gets slower\nbecause it switches to use full virtualization rather than containers.\n\nI think we need to switch to use full VMs rather than containers. The\nperformance of the windows containers is just atrocious (build times on a\nlocal VM with the same number of cores is 1/2 of what we see in CI, test times\n1/3), they're slow to start due to pulling all files and decompressing them,\nand they're fragile. I've asked Bilal (CCed) to work on generating both\ncontainers and images.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Oct 2022 09:29:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "So [1] on its own didn't fix this. My next guess is that the attached\nmight help.\n\nHmm. Following Michael's clue that this might involve log files and\npg_ctl, I noticed one thing: pg_ctl implements\nwait_for_postmaster_stop() by waiting for kill(pid, 0) to fail, and\nour kill emulation does CallNamedPipe(). If the server is in the\nprocess of exiting and the kernel is cleaning up all the handles we\ndidn't close, is there any reason to expect the signal pipe to be\nclosed after the log file?\n\n[1] https://www.postgresql.org/message-id/flat/20221025213055.GA8537%40telsasoft.com#9030de6c4c5e544d2b057b793a5b42af",
"msg_date": "Tue, 8 Nov 2022 01:16:09 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Tue, Nov 08, 2022 at 01:16:09AM +1300, Thomas Munro wrote:\n> So [1] on its own didn't fix this. My next guess is that the attached\n> might help.\n\nI took the liberty of adding a CF entry for this\nhttps://commitfest.postgresql.org/41/4011/\n\nAnd afterwards figured I could be a little bit wasteful and run the\ntests using meson test --repeat, rather than let cfbot do it over the\ncourse of a month.\nhttps://cirrus-ci.com/task/5115893722644480\n\nSo I didn't find evidence that it doesn't resolve the issue (but this\nalso doesn't prove that it will work).\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 16 Nov 2022 21:15:43 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-08 01:16:09 +1300, Thomas Munro wrote:\n> So [1] on its own didn't fix this. My next guess is that the attached\n> might help.\n> \n> Hmm. Following Michael's clue that this might involve log files and\n> pg_ctl, I noticed one thing: pg_ctl implements\n> wait_for_postmaster_stop() by waiting for kill(pid, 0) to fail, and\n> our kill emulation does CallNamedPipe(). If the server is in the\n> process of exiting and the kernel is cleaning up all the handles we\n> didn't close, is there any reason to expect the signal pipe to be\n> closed after the log file?\n\nWhat is our plan here? This afaict is the most common \"false positive\" for\ncfbot in the last weeks.\n\nE.g.:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5462686092230656/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n...\n[00:02:58.761](93.859s) ok 10 - run of pg_upgrade for new instance\n[00:02:58.808](0.047s) not ok 11 - pg_upgrade_output.d/ removed after pg_upgrade success\n[00:02:58.815](0.007s) # Failed test 'pg_upgrade_output.d/ removed after pg_upgrade success'\n# at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 288.\n\n\nMichael:\n\nWhy does 002_pg_upgrade.pl try to filter the list of files in\npg_upgrade_output.d for files ending in .log? And why does it print those\nonly after starting the new node?\n\nHow about moving the iteration through the pg_upgrade_output.d to before the\n->start and printing all the files, but only slurp_file() if the filename ends\nwith *.log?\n\nMinor nit: It seems off to have quite so many copies of\n $newnode->data_dir . \"/pg_upgrade_output.d\"\nparticularly where the test defines $log_path, but then still builds\nit from scratch after (line 304).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 10:15:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 7:15 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-08 01:16:09 +1300, Thomas Munro wrote:\n> > So [1] on its own didn't fix this. My next guess is that the attached\n> > might help.\n\n> What is our plan here? This afaict is the most common \"false positive\" for\n> cfbot in the last weeks.\n\nThat branch hasn't failed on cfbot[1], except once due to one of the\nother known flapping races we have to fix. Which doesn't prove\nanything, of course, but it is encouraging. I wish we knew why the\ntest does this, though....\n\nHere's a better version that works harder to avoid opening more than\none fd at a time (like the pgfnames()-based code it replaces), and\nalso uses fd.c facilities in the backend version (unlike pgfnames(),\nwhich looks like it could leak a descriptor if palloc() threw, and\nalso doesn't know how to handle file descriptor pressure).\n\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/41/4011",
"msg_date": "Thu, 5 Jan 2023 16:11:00 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 4:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Dec 7, 2022 at 7:15 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-11-08 01:16:09 +1300, Thomas Munro wrote:\n> > > So [1] on its own didn't fix this. My next guess is that the attached\n> > > might help.\n>\n> > What is our plan here? This afaict is the most common \"false positive\" for\n> > cfbot in the last weeks.\n\nI pushed the rmtree() change. Let's see if that helps, or tells us\nsomething new.\n\nMichael: There were some questions from Andres above. FWIW I think\nif you wanted to investigate this properly on a local Windows system\nto chase down who's got the file open (shutdown sequence problem or\nwhatever), you'd probably have to install Server 2019, or maybe use an\nold 8.1 VM if you still have such a thing, based on the suspicion that\ntypical 10 and 11 systems won't exhibit the problem. But then I could\nbe wrong about what's going on...\n\n\n",
"msg_date": "Tue, 31 Jan 2023 14:00:05 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 02:00:05PM +1300, Thomas Munro wrote:\n> On Thu, Jan 5, 2023 at 4:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Wed, Dec 7, 2022 at 7:15 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2022-11-08 01:16:09 +1300, Thomas Munro wrote:\n> > > > So [1] on its own didn't fix this. My next guess is that the attached\n> > > > might help.\n> >\n> > > What is our plan here? This afaict is the most common \"false positive\" for\n> > > cfbot in the last weeks.\n> \n> I pushed the rmtree() change. Let's see if that helps, or tells us\n> something new.\n\nI found a few failures since then:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/6696942420361216/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n\npg_upgrade: warning: could not remove directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20230131T134931.720/log\": Directory not empty\npg_upgrade: warning: could not remove directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20230131T134931.720\": Directory not empty\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5119776607961088/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\nsame\n\nI verified that those both include your 54e72b66e, which is pretty\nstrange, since the patch passed tests 10s of times on CI until it was\nmerged, when it started/kept failing.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 31 Jan 2023 11:28:06 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 6:28 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I pushed the rmtree() change. Let's see if that helps, or tells us\n> > something new.\n>\n> I found a few failures since then:\n>\n> https://api.cirrus-ci.com/v1/artifact/task/6696942420361216/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n>\n> pg_upgrade: warning: could not remove directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20230131T134931.720/log\": Directory not empty\n> pg_upgrade: warning: could not remove directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20230131T134931.720\": Directory not empty\n\nSo no change: we didn't see \"could not unlink file ...\". So I think\nthat means that it was rmtree() that unlinked the file for the *first*\ntime, but someone else has it open.\n\nEven though Windows is at this point eroding my love of computers and\nmaking me consider a new career in, I dunno, carrot farming or\nsomething, I have one more idea. Check out this kluge in\nsrc/bin/pg_upgrade/exec.c:\n\n /*\n * \"pg_ctl -w stop\" might have reported that the server has stopped\n * because the postmaster.pid file has been removed, but \"pg_ctl -w\n * start\" might still be in the process of closing and might still be\n * holding its stdout and -l log file descriptors open. Therefore,\n * try to open the log file a few more times.\n */\n\nI'm not sure about anything, but if that's what's happening here, then\nmaybe the attached would help. In short, it would make the previous\ntheory true (the idea of a second unlink() saving the day).",
"msg_date": "Wed, 1 Feb 2023 09:54:42 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
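The exec.c kluge quoted above — keep re-trying the log-file open because "pg_ctl -w start" may still hold the descriptor for a moment — boils down to a small retry loop. The following is a hypothetical Python sketch of that shape only (function name, attempt count, and the injectable `opener`/`sleep` are all assumptions, not pg_upgrade's real code):

```python
def open_log_with_retries(path, opener=open, attempts=5, sleep=lambda s: None):
    # The previous process may still hold the log file open briefly after
    # it has "stopped", so retry the open a few times before giving up.
    last_error = None
    for attempt in range(attempts):
        try:
            return opener(path, "a")
        except OSError as e:
            last_error = e
            sleep(1)
    raise last_error
```

Injecting the opener makes the transient-failure path testable without reproducing the Windows sharing-violation timing.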
{
"msg_contents": "Hi, \n\nOn January 31, 2023 12:54:42 PM PST, Thomas Munro <thomas.munro@gmail.com> wrote:\n>On Wed, Feb 1, 2023 at 6:28 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> > I pushed the rmtree() change. Let's see if that helps, or tells us\n>> > something new.\n>>\n>> I found a few failures since then:\n>>\n>> https://api.cirrus-ci.com/v1/artifact/task/6696942420361216/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n>>\n>> pg_upgrade: warning: could not remove directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20230131T134931.720/log\": Directory not empty\n>> pg_upgrade: warning: could not remove directory \"C:/cirrus/build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20230131T134931.720\": Directory not empty\n>\n>So no change: we didn't see \"could not unlink file ...\". So I think\n>that means that it was rmtree() that unlinked the file for the *first*\n>time, but someone else has it open.\n>\n>Even though Windows is at this point eroding my love of computers and\n>making me consider a new career in, I dunno, carrot farming or\n>something, I have one more idea. Check out this kluge in\n>src/bin/pg_upgrade/exec.c:\n>\n> /*\n> * \"pg_ctl -w stop\" might have reported that the server has stopped\n> * because the postmaster.pid file has been removed, but \"pg_ctl -w\n> * start\" might still be in the process of closing and might still be\n> * holding its stdout and -l log file descriptors open. Therefore,\n> * try to open the log file a few more times.\n> */\n>\n>I'm not sure about anything, but if that's what's happening here, then\n>maybe the attached would help. In short, it would make the previous\n>theory true (the idea of a second unlink() saving the day).\n\n\nMaybe we should just handle it by sleeping and retrying, if on windows? Sad to even propose... 
\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 31 Jan 2023 13:04:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 10:04 AM Andres Freund <andres@anarazel.de> wrote:\n> On January 31, 2023 12:54:42 PM PST, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >I'm not sure about anything, but if that's what's happening here, then\n> >maybe the attached would help. In short, it would make the previous\n> >theory true (the idea of a second unlink() saving the day).\n>\n> Maybe we should just handle it by sleeping and retrying, if on windows? Sad to even propose...\n\nYeah, that's what that code I posted would do automatically, though\nit's a bit hidden. The second attempt to unlink() would see delete\nalready pending, and activate its secret internal sleep/retry loop.\n\n\n",
"msg_date": "Wed, 1 Feb 2023 10:08:17 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
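The mechanism Thomas describes — a first unlink() merely marks the entry delete-pending, and a second unlink() triggers the hidden sleep/retry loop until the last handle closes — can be modelled with a toy in-memory filesystem. This is purely illustrative (real NT semantics live in the kernel; the class, its fields, and the `wait_steps` device are inventions of this sketch):

```python
class ToyNTFS:
    # Toy model of traditional NT unlink semantics: unlinking a file
    # that still has open handles only marks it delete-pending; the
    # name really vanishes when the last handle closes.
    def __init__(self):
        self.files = {}        # name -> count of open handles
        self.pending = set()   # names marked delete-pending

    def create(self, name, handles=1):
        self.files[name] = handles

    def close_one(self, name):
        # Another process closes a handle; a delete-pending file whose
        # last handle closes disappears for real at this point.
        self.files[name] -= 1
        if self.files[name] == 0 and name in self.pending:
            del self.files[name]
            self.pending.discard(name)

    def unlink(self, name, wait_steps=()):
        if name not in self.files:
            raise FileNotFoundError(name)      # true ENOENT: fast exit
        if name not in self.pending:
            self.pending.add(name)             # first unlink "succeeds",
            return                             # but the entry lingers
        for step in wait_steps:                # second unlink: sleep/retry,
            if name not in self.files:         # each step standing in for
                raise FileNotFoundError(name)  # one sleep interval
            step()
        if name not in self.files:
            raise FileNotFoundError(name)      # handles closed: ENOENT now
        raise PermissionError(name)            # EACCES: gave up waiting
```

The double-unlink pattern from the thread falls out directly: the first call returns, and the second blocks (consumes wait steps) until it can finally report ENOENT.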
{
"msg_contents": "On Wed, Feb 1, 2023 at 9:54 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... I have one more idea ...\n\nI also had a second idea, barely good enough to mention and probably\njust paranoia. In a nearby thread I learned that process exit does\nnot release Windows advisory file locks synchronously, which surprised\nthis Unix hacker; it made me wonder what else might be released lazily\nafter process exit. Handles?! However, as previously mentioned, it's\npossible that even with fully Unix-like resource cleanup on process\nexit, we could be confused if we are using \"the process that was on\nthe end of this pipe has closed it\" as a proxy for \"the process is\ngone, *all* its handles are closed\". In any case, the previous kluge\nshould help wallpaper over any of that too, for this test anyway.\n\n\n",
"msg_date": "Wed, 1 Feb 2023 10:20:23 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 10:08 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Feb 1, 2023 at 10:04 AM Andres Freund <andres@anarazel.de> wrote:\n> > Maybe we should just handle it by sleeping and retrying, if on windows? Sad to even propose...\n>\n> Yeah, that's what that code I posted would do automatically, though\n> it's a bit hidden. The second attempt to unlink() would see delete\n> already pending, and activate its secret internal sleep/retry loop.\n\nOK, I pushed that. Third time lucky?\n\nI tracked down the discussion of that existing comment about pg_ctl,\nwhich comes from the 9.2 days:\n\nhttps://www.postgresql.org/message-id/flat/5044DE59.5020500%40dunslane.net\n\nI guess maybe back then fopen() was Windows' own fopen() that wouldn't\nallow two handles to a file at the same time? These days we redirect\nit to a wrapper with the magic \"shared\" flags, so the kluge installed\nby commit f8c81c5dde2 may not even be needed anymore. It does\ndemonstrate that there are long standing timing races around log\nfiles, process exit and wait-for-shutdown logic, though.\n\nSomeone who develops for Windows could probably chase this right down,\nand make sure that we do certain things in the right order, and/or\nfind better kernel facilities; at a wild guess, something like\nOpenProcess() before you initiate shutdown, so you can then wait on\nits handle, for example. The docs for ExitProcess() make it clear\nthat handles are synchronously closed, so I think it's probably just\nthat our tests for when processes have exited are too fuzzy.\n\n\n",
"msg_date": "Wed, 1 Feb 2023 14:44:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 2:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> OK, I pushed that. Third time lucky?\n\nI pulled down logs for a week of Windows CI, just over ~1k builds.\nThe failure rate was a few per day before, but there are no failures\nlike that after that went in. There are logs that contain the\n\"Directory not empty\" warning, but clearly the\ntry-again-and-this-time-wait-for-the-other-process logic must be\nworking (as horrible as it is) because then the test checks that the\ndirectory is gone, and succeeds. Hooray.\n\nSo that's one of our biggest CI flappers fixed. Unfortunately without\ntreating the root cause, really.\n\nNext up: the new \"running\" tests, spuriously failing around 8.8% of CI\nbuilds on FreeBSD. I'll go and ping that thread...\n\n\n",
"msg_date": "Tue, 7 Feb 2023 10:51:20 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "Hi, \n\nOn February 6, 2023 1:51:20 PM PST, Thomas Munro <thomas.munro@gmail.com> wrote:\n>Next up: the new \"running\" tests, spuriously failing around 8.8% of CI\n>builds on FreeBSD. I'll go and ping that thread...\n\nIs that rate unchanged? I thought I fixed the main issue last week?\n\nGreetings,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Mon, 06 Feb 2023 13:57:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 10:57 AM Andres Freund <andres@anarazel.de> wrote:\n> On February 6, 2023 1:51:20 PM PST, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >Next up: the new \"running\" tests, spuriously failing around 8.8% of CI\n> >builds on FreeBSD. I'll go and ping that thread...\n>\n> Is that rate unchanged? I thought I fixed the main issue last week?\n\nUnfortunately my cfbot database only holds a week's history. What I\nsee is that there were 1254 FreeBSD tasks run in that window, of which\n163 failed, and (more interestingly) 111 of those failures succeeded\non every other platform. And clicking on a few on cfbot's page\nreveals that it's the new running stuff, and I'm still trying to find\nthe interesting logs...\n\n\n",
"msg_date": "Tue, 7 Feb 2023 11:03:18 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 11:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Feb 7, 2023 at 10:57 AM Andres Freund <andres@anarazel.de> wrote:\n> > On February 6, 2023 1:51:20 PM PST, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > >Next up: the new \"running\" tests, spuriously failing around 8.8% of CI\n> > >builds on FreeBSD. I'll go and ping that thread...\n> >\n> > Is that rate unchanged? I thought I fixed the main issue last week?\n>\n> Unfortunately my cfbot database only holds a week's history. What I\n> see is that there were 1254 FreeBSD tasks run in that window, of which\n> 163 failed, and (more interestingly) 111 of those failures succeeded\n> on every other platform. And clicking on a few on cfbot's page\n> reveals that it's the new running stuff, and I'm still trying to find\n> the interesting logs...\n\nAh, that number might include some other problems, including in\nsubscription (#2900). That's the problem with flapping tests, you get\ndesensitised and stop looking closely and miss things...\n\n\n",
"msg_date": "Tue, 7 Feb 2023 11:08:50 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-07 11:03:18 +1300, Thomas Munro wrote:\n> On Tue, Feb 7, 2023 at 10:57 AM Andres Freund <andres@anarazel.de> wrote:\n> > On February 6, 2023 1:51:20 PM PST, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > >Next up: the new \"running\" tests, spuriously failing around 8.8% of CI\n> > >builds on FreeBSD. I'll go and ping that thread...\n> >\n> > Is that rate unchanged? I thought I fixed the main issue last week?\n> \n> Unfortunately my cfbot database only holds a week's history.\n\nWould be interesting to increase that to a considerably longer time. I can't\nimagine that that'd take all that much resources?\n\n\n> What I see is that there were 1254 FreeBSD tasks run in that window, of\n> which 163 failed, and (more interestingly) 111 of those failures succeeded\n> on every other platform. And clicking on a few on cfbot's page reveals that\n> it's the new running stuff, and I'm still trying to find the interesting\n> logs...\n\nI think I figured out why the logs frequently fail to upload - the server is\nstill running, so the size changes during the upload, causing the upload to\nfail with errors like:\n\n[12:46:43.552] Failed to upload artifacts: Put 
\"https://storage.googleapis.com/cirrus-ci-5309429912436736-3271c9/artifacts/postgresql-cfbot/postgresql/6729936359129088/testrun/build/testrun/runningcheck.log?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=cirrus-ci%40cirrus-ci-community.iam.gserviceaccount.com%2F20230206%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20230206T124536Z&X-Goog-Expires=600&X-Goog-SignedHeaders=host%3Bx-goog-content-length-range%3Bx-goog-meta-created_by_task&X-Goog-Signature=8e84192cbc754180b8baa6c00c41b463f580fe7183f0e7113c253aac13cc2458b835caef7940b91e102e96d54cff2b5714c77390e74244e2fb88c00c9957a801e33cbee2ac960e0db8a01fe08ee945bedf4616881e6beafa3a162c22948ac0b9a9359d93e1f461fc9f49385b784b75d633f1b01805b987d9d53bc7fb55263917ec85180a2140659d50990f066160f03e8bb8984e8d2aadb64c875c253167cf24da152a18d69fcd3d941edce145931e4feb23dc8cf43de7b7bbfc565786c1c692406f2a0a127f30385a8c4b66f96709b51d26d3c71617991c731b0e7206ee3906338dedf6359412edd024f8c76bd33400f4c9320c2bde9512fa8bcd6289e54d52\": http2: request body larger than specified content length\n\nI'm testing adding a pg_ctl stop to the on_failure right now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Feb 2023 14:14:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "On 2023-02-06 14:14:22 -0800, Andres Freund wrote:\n> On 2023-02-07 11:03:18 +1300, Thomas Munro wrote:\n> > What I see is that there were 1254 FreeBSD tasks run in that window, of\n> > which 163 failed, and (more interestingly) 111 of those failures succeeded\n> > on every other platform. And clicking on a few on cfbot's page reveals that\n> > it's the new running stuff, and I'm still trying to find the interesting\n> > logs...\n> \n> I think I figured out why the logs frequently fail to upload - the server is\n> still running, so the size changes during the upload, causing the upload to\n> fail with errors like:\n> \n> [12:46:43.552] Failed to upload artifacts: Put \"https://storage.googleapis.com/cirrus-ci-5309429912436736-3271c9/artifacts/postgresql-cfbot/postgresql/6729936359129088/testrun/build/testrun/runningcheck.log?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=cirrus-ci%40cirrus-ci-community.iam.gserviceaccount.com%2F20230206%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20230206T124536Z&X-Goog-Expires=600&X-Goog-SignedHeaders=host%3Bx-goog-content-length-range%3Bx-goog-meta-created_by_task&X-Goog-Signature=8e84192cbc754180b8baa6c00c41b463f580fe7183f0e7113c253aac13cc2458b835caef7940b91e102e96d54cff2b5714c77390e74244e2fb88c00c9957a801e33cbee2ac960e0db8a01fe08ee945bedf4616881e6beafa3a162c22948ac0b9a9359d93e1f461fc9f49385b784b75d633f1b01805b987d9d53bc7fb55263917ec85180a2140659d50990f066160f03e8bb8984e8d2aadb64c875c253167cf24da152a18d69fcd3d941edce145931e4feb23dc8cf43de7b7bbfc565786c1c692406f2a0a127f30385a8c4b66f96709b51d26d3c71617991c731b0e7206ee3906338dedf6359412edd024f8c76bd33400f4c9320c2bde9512fa8bcd6289e54d52\": http2: request body larger than specified content length\n> \n> I'm testing adding a pg_ctl stop to the on_failure right now.\n\nPushed the fix doing so.\n\n\n",
"msg_date": "Mon, 6 Feb 2023 15:43:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
},
{
"msg_contents": "Dear Andres,\n\nWhile tracking BF failures related to pg_upgrade, I found the same failure has still happened [1] - [4].\nAccording to the log, the output directory remained even after the successful upgrade [5].\nI analyzed and attached the fix patch, and below is my analysis... what do you think?\n\n=====\n\nlstat() seemed to fail while doing the second try of rmtree(). This error message is\noutput from get_dirent_type().\n\nApart from pgunlink(), get_dirent_type() does not have a retry mechanism when\nlstat()->_pglstat64() detects STATUS_DELETE_PENDING. Therefore, I think rmtree()\nmay not wait for the file to be really removed, if the status is detected\nin get_dirent_type().\n\nOne solution is to retry stat() or lstat() even in get_dirent_type(), like attached.\n\n\n[1]: 2023-07-21 02:21:53 https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-07-21%2002%3A21%3A53\n[2]: 2023-10-21 13:39:15 https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-10-21%2013%3A39%3A15\n[3]: 2023-10-23 09:03:07 https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-10-23%2009%3A03%3A07\n[4]: 2023-10-27 23:06:17 https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-10-27%2023%3A06%3A17\n[5]\n```\n...\n*Clusters are compatible*\npg_upgrade: warning: could not remove directory \"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20231027T234552.867/log\": Directory not empty\npg_upgrade: warning: could not remove directory \"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20231027T234552.867\": Directory not empty\npg_upgrade: warning: could not stat file 
\"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20231027T234552.867/log/pg_upgrade_internal.log\": No such file or directory\npg_upgrade: warning: could not remove directory \"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20231027T234552.867/log\": Directory not empty\npg_upgrade: warning: could not remove directory \"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20231027T234552.867\": Directory not empty\n[23:46:07.585](17.106s) ok 12 - run of pg_upgrade --check for new instance\n[23:46:07.587](0.002s) not ok 13 - pg_upgrade_output.d/ removed after pg_upgrade --check success\n...\n```\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Sun, 29 Oct 2023 05:43:46 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade test failure"
},
{
"msg_contents": "On Sun, 29 Oct 2023 at 11:14, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Andres,\n>\n> While tracking BF failures related with pg_ugprade, I found the same failure has still happened [1] - [4].\n> According to the log, the output directory was remained even after the successful upgrade [5].\n> I analyzed and attached the fix patch, and below is my analysis... how do you think?\n>\n\nThe same failure occurs randomly at [1] for a newly added test too:\npg_upgrade: warning: could not remove directory\n\"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240104T215133.796/log\":\nDirectory not empty\npg_upgrade: warning: could not remove directory\n\"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240104T215133.796\":\nDirectory not empty\npg_upgrade: warning: could not stat file\n\"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240104T215133.796/log/pg_upgrade_internal.log\":\nNo such file or directory\npg_upgrade: warning: could not remove directory\n\"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240104T215133.796/log\":\nDirectory not empty\npg_upgrade: warning: could not remove directory\n\"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_new_sub_data/pgdata/pg_upgrade_output.d/20240104T215133.796\":\nDirectory not empty\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-01-04%2019%3A56%3A20\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 5 Jan 2024 10:19:48 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade test failure"
}
]