[
{
"msg_contents": "I just noticed that we list auxiliary processes in pg_stat_ssl:\n\n55432 13devel 28627=# select * from pg_stat_ssl ;\n pid │ ssl │ version │ cipher │ bits │ compression │ client_dn │ client_serial │ issuer_dn \n───────┼─────┼─────────┼────────────────────────┼──────┼─────────────┼───────────┼───────────────┼───────────\n 28618 │ f │ │ │ │ │ │ │ \n 28620 │ f │ │ │ │ │ │ │ \n 28627 │ t │ TLSv1.3 │ TLS_AES_256_GCM_SHA384 │ 256 │ f │ │ │ \n 28616 │ f │ │ │ │ │ │ │ \n 28615 │ f │ │ │ │ │ │ │ \n 28617 │ f │ │ │ │ │ │ │ \n(6 filas)\n\n55432 13devel 28627=# select pid, backend_type from pg_stat_activity ;\n pid │ backend_type \n───────┼──────────────────────────────\n 28618 │ autovacuum launcher\n 28620 │ logical replication launcher\n 28627 │ client backend\n 28616 │ background writer\n 28615 │ checkpointer\n 28617 │ walwriter\n(6 filas)\n\nBut this seems pointless. Should we not hide those? Seems this only\nhappened as an unintended side-effect of fc70a4b0df38. It appears to me\nthat we should redefine that view to restrict backend_type that's\n'client backend' (maybe include 'wal receiver'/'wal sender' also, not\nsure.)\n\n-- \nÁlvaro Herrera http://www.twitter.com/alvherre\n\n\n",
"msg_date": "Wed, 4 Sep 2019 11:15:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "auxiliary processes in pg_stat_ssl"
},
{
"msg_contents": "On 2019-Sep-04, Alvaro Herrera wrote:\n\n> I just noticed that we list auxiliary processes in pg_stat_ssl:\n[...]\n> But this seems pointless.  Should we not hide those?  Seems this only\n> happened as an unintended side-effect of fc70a4b0df38.  It appears to me\n> that we should redefine that view to restrict backend_type that's\n> 'client backend' (maybe include 'wal receiver'/'wal sender' also, not\n> sure.)\n\n[crickets]\n\nRobert, Kuntal, any opinion on this?\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 4 Nov 2019 10:25:59 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: auxiliary processes in pg_stat_ssl"
},
{
"msg_contents": "On Mon, Nov 4, 2019 at 8:26 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Sep-04, Alvaro Herrera wrote:\n> > I just noticed that we list auxiliary processes in pg_stat_ssl:\n> [...]\n> > But this seems pointless. Should we not hide those? Seems this only\n> > happened as an unintended side-effect of fc70a4b0df38. It appears to me\n> > that we should redefine that view to restrict backend_type that's\n> > 'client backend' (maybe include 'wal receiver'/'wal sender' also, not\n> > sure.)\n>\n> [crickets]\n>\n> Robert, Kuntal, any opinion on this?\n\nI think if I were doing something about it, I'd probably try to filter\non a field that directly represents whether there is a connection,\nrather than checking the backend type. That way, if the list of\nbackend types that have client connections changes later, there's\nnothing to update. Like \"WHERE client_port IS NOT NULL,\" or something\nof that sort.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 4 Nov 2019 09:28:30 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: auxiliary processes in pg_stat_ssl"
},
{
"msg_contents": "Em qua., 4 de set. de 2019 às 12:15, Alvaro Herrera\r\n<alvherre@2ndquadrant.com> escreveu:\r\n>\r\n> I just noticed that we list auxiliary processes in pg_stat_ssl:\r\n>\r\n> 55432 13devel 28627=# select * from pg_stat_ssl ;\r\n> pid │ ssl │ version │ cipher │ bits │ compression │ client_dn │ client_serial │ issuer_dn\r\n> ───────┼─────┼─────────┼────────────────────────┼──────┼─────────────┼───────────┼───────────────┼───────────\r\n> 28618 │ f │ │ │ │ │ │ │\r\n> 28620 │ f │ │ │ │ │ │ │\r\n> 28627 │ t │ TLSv1.3 │ TLS_AES_256_GCM_SHA384 │ 256 │ f │ │ │\r\n> 28616 │ f │ │ │ │ │ │ │\r\n> 28615 │ f │ │ │ │ │ │ │\r\n> 28617 │ f │ │ │ │ │ │ │\r\n> (6 filas)\r\n>\r\n> 55432 13devel 28627=# select pid, backend_type from pg_stat_activity ;\r\n> pid │ backend_type\r\n> ───────┼──────────────────────────────\r\n> 28618 │ autovacuum launcher\r\n> 28620 │ logical replication launcher\r\n> 28627 │ client backend\r\n> 28616 │ background writer\r\n> 28615 │ checkpointer\r\n> 28617 │ walwriter\r\n> (6 filas)\r\n>\r\n> But this seems pointless. Should we not hide those? Seems this only\r\n> happened as an unintended side-effect of fc70a4b0df38. It appears to me\r\n> that we should redefine that view to restrict backend_type that's\r\n> 'client backend' (maybe include 'wal receiver'/'wal sender' also, not\r\n> sure.)\r\n>\r\nYep, it is pointless. BackendType that open connections to server are:\r\nautovacuum worker, client backend, background worker, wal sender. I\r\nalso notice that pg_stat_gssapi is in the same boat as pg_stat_ssl and\r\nwe should constraint the rows to backend types that open connections.\r\nI'm attaching a patch to list only connections in those system views.\r\n\r\n\r\n\r\n--\r\n Euler Taveira Timbira -\r\nhttp://www.timbira.com.br/\r\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Mon, 4 Nov 2019 12:39:49 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: auxiliary processes in pg_stat_ssl"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Nov 4, 2019 at 8:26 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > On 2019-Sep-04, Alvaro Herrera wrote:\n> > > I just noticed that we list auxiliary processes in pg_stat_ssl:\n> > [...]\n> > > But this seems pointless. Should we not hide those? Seems this only\n> > > happened as an unintended side-effect of fc70a4b0df38. It appears to me\n> > > that we should redefine that view to restrict backend_type that's\n> > > 'client backend' (maybe include 'wal receiver'/'wal sender' also, not\n> > > sure.)\n> >\n> > [crickets]\n> >\n> > Robert, Kuntal, any opinion on this?\n> \n> I think if I were doing something about it, I'd probably try to filter\n> on a field that directly represents whether there is a connection,\n> rather than checking the backend type. That way, if the list of\n> backend types that have client connections changes later, there's\n> nothing to update. Like \"WHERE client_port IS NOT NULL,\" or something\n> of that sort.\n\nYeah, using a \"this has a connection\" would be better and, as also noted\non this thread, pg_stat_gssapi should get similar treatment.\n\nBased on what we claim in our docs, it does look like 'client_port IS\nNOT NULL' should work. I do think we might want to update the docs to\nmake it a bit more explicit, what we say now is:\n\nTCP port number that the client is using for communication with this\nbackend, or -1 if a Unix socket is used\n\nWe don't explain there that NULL means the backend doesn't have an\nexternal connection even though plenty of those entries show up in every\ninstance of PG. 
Perhaps we should add this:\n\nIf this field is null, it indicates that this is an internal process\nsuch as autovacuum.\n\nWhich is what we say for 'client_addr'.\n\nI have to admit that while it's handy that we just shove '-1' into\nclient_port when it's a unix socket, it's kind of ugly from a data\nperspective- in a green field it'd probably be better to have a\n\"connection type\" field that then indicates which other fields are valid\ninstead of having a special constant, but that ship sailed long ago and\nit's not like we have a lot of people complaining about it, so I suppose\njust using it here as suggested is fine.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 4 Nov 2019 11:06:05 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: auxiliary processes in pg_stat_ssl"
},
{
"msg_contents": "On Mon, Nov 4, 2019 at 9:09 PM Euler Taveira <euler@timbira.com.br> wrote:\n> >\n> > But this seems pointless.  Should we not hide those?  Seems this only\n> > happened as an unintended side-effect of fc70a4b0df38.  It appears to me\n> > that we should redefine that view to restrict backend_type that's\n> > 'client backend' (maybe include 'wal receiver'/'wal sender' also, not\n> > sure.)\n> >\n> Yep, it is pointless. BackendType that open connections to server are:\n> autovacuum worker, client backend, background worker, wal sender. I\n> also notice that pg_stat_gssapi is in the same boat as pg_stat_ssl and\n> we should constraint the rows to backend types that open connections.\n> I'm attaching a patch to list only connections in those system views.\n>\nYeah, we should hide those. As Robert mentioned, I think checking\nwhether 'client_port IS NOT NULL' is a better approach than checking\nthe backend_type. The patch looks good to me.\n\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Nov 2019 10:59:03 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: auxiliary processes in pg_stat_ssl"
},
{
"msg_contents": "On 2019-Nov-04, Euler Taveira wrote:\n\n> Yep, it is pointless. BackendType that open connections to server are:\n> autovacuum worker, client backend, background worker, wal sender. I\n> also notice that pg_stat_gssapi is in the same boat as pg_stat_ssl and\n> we should constraint the rows to backend types that open connections.\n> I'm attaching a patch to list only connections in those system views.\n\nThanks!  I pushed this.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 12 Nov 2019 18:49:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: auxiliary processes in pg_stat_ssl"
},
{
"msg_contents": "On 2019-Nov-04, Stephen Frost wrote:\n\n> Based on what we claim in our docs, it does look like 'client_port IS\n> NOT NULL' should work.  I do think we might want to update the docs to\n> make it a bit more explicit, what we say now is:\n> \n> TCP port number that the client is using for communication with this\n> backend, or -1 if a Unix socket is used\n> \n> We don't explain there that NULL means the backend doesn't have an\n> external connection even though plenty of those entries show up in every\n> instance of PG.  Perhaps we should add this:\n> \n> If this field is null, it indicates that this is an internal process\n> such as autovacuum.\n> \n> Which is what we say for 'client_addr'.\n\nSeems sensible.  Done.  Thanks\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 12 Nov 2019 18:50:08 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: auxiliary processes in pg_stat_ssl"
}
]
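The approach Robert suggests and Álvaro pushed — filter on a field that directly reflects a client connection — can be illustrated with a sketch of the redefined view. This is a hedged illustration, not the committed patch text: the column list and the `pg_stat_get_activity(NULL)` source are abbreviated assumptions about the catalog definition.

```sql
-- Sketch only: hide rows for processes that have no client connection by
-- filtering on client_port, so the view needs no update if the set of
-- backend types with connections ever changes.
CREATE OR REPLACE VIEW pg_stat_ssl AS
    SELECT s.pid,
           s.ssl,
           s.sslversion AS version,
           s.sslcipher  AS cipher,
           s.sslbits    AS bits
      FROM pg_stat_get_activity(NULL) AS s
     WHERE s.client_port IS NOT NULL;  -- auxiliary processes have NULL here
```

The same `WHERE` clause would apply to pg_stat_gssapi, which the thread notes is in the same boat.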
[
{
"msg_contents": "Hello,\n\nIt is currently only possible to authenticate clients using certificates\nwith the CN.\n\nI would like to propose that the field used to identify the client is\nconfigurable, e.g. being able to specify DN as the appropriate field. The\nreason being is that in some organisations, where you might want to use the\ncorporate PKI, but where the CN of such certificates is not controlled.\n\nIn my case, the DN of our corporate issued client certificates is\ncontrolled and derived from AD groups we are members of. Only users in\nthose groups can request client certificates with a DN that is equal to the\nAD group ID. This would make DN a perfectly suitable drop-in replacement\nfor Postgres client certificate authentication, but as it stands it is not\npossible to change the field used.\n\nBest regards,\nGeorge",
"msg_date": "Wed, 4 Sep 2019 17:24:15 +0100",
"msg_from": "George Hafiz <george@hafiz.uk>",
"msg_from_op": true,
"msg_subject": "Client Certificate Authentication Using Custom Fields (i.e. other\n than CN)"
},
{
"msg_contents": "On Wed, Sep 04, 2019 at 05:24:15PM +0100, George Hafiz wrote:\n> Hello,\n> \n> It is currently only possible to authenticate clients using certificates\n> with the CN.\n> \n> I would like to propose that the field used to identify the client is\n> configurable, e.g. being able to specify DN as the appropriate field. The\n> reason being is that in some organisations, where you might want to use the\n> corporate PKI, but where the CN of such certificates is not controlled.\n> \n> In my case, the DN of our corporate issued client certificates is\n> controlled and derived from AD groups we are members of. Only users in\n> those groups can request client certificates with a DN that is equal to the\n> AD group ID. This would make DN a perfectly suitable drop-in replacement\n> for Postgres client certificate authentication, but as it stands it is not\n> possible to change the field used.\n\nThis all sounds interesting. Do you have a concrete proposal as to\nhow such a new interface would look in operation? Better yet, a PoC\npatch implementing same?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 4 Sep 2019 22:40:49 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Client Certificate Authentication Using Custom Fields (i.e.\n other than CN)"
},
{
"msg_contents": "Hi David,\n\nGlad you are open to the idea!\n\nMy proposal would be an additional authentication setting for certauth\n(alongside the current map option) which lets you specify which subject\nfield to match on.\n\nI'll take a look at what the patch would look like, but this is incredibly\ntangential to what I'm supposed to be doing, so I can't promise anything!\nWould be good if anyone else would like to look at it as well. Hopefully\nit's a relatively straightforward change.\n\nBest regards,\nGeorge\n\nOn Wed, 4 Sep 2019, 21:40 David Fetter, <david@fetter.org> wrote:\n\n> On Wed, Sep 04, 2019 at 05:24:15PM +0100, George Hafiz wrote:\n> > Hello,\n> >\n> > It is currently only possible to authenticate clients using certificates\n> > with the CN.\n> >\n> > I would like to propose that the field used to identify the client is\n> > configurable, e.g. being able to specify DN as the appropriate field. The\n> > reason being is that in some organisations, where you might want to use\n> the\n> > corporate PKI, but where the CN of such certificates is not controlled.\n> >\n> > In my case, the DN of our corporate issued client certificates is\n> > controlled and derived from AD groups we are members of. Only users in\n> > those groups can request client certificates with a DN that is equal to\n> the\n> > AD group ID. This would make DN a perfectly suitable drop-in replacement\n> > for Postgres client certificate authentication, but as it stands it is\n> not\n> > possible to change the field used.\n>\n> This all sounds interesting.  Do you have a concrete proposal as to\n> how such a new interface would look in operation?  Better yet, a PoC\n> patch implementing same?\n>\n> Best,\n> David.\n> --\n> David Fetter <david(at)fetter(dot)org> http://fetter.org/\n> Phone: +1 415 235 3778\n>\n> Remember to vote!\n> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>",
"msg_date": "Wed, 4 Sep 2019 22:57:06 +0100",
"msg_from": "George Hafiz <george@hafiz.uk>",
"msg_from_op": true,
"msg_subject": "Re: Client Certificate Authentication Using Custom Fields (i.e. other\n than CN)"
}
]
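George's proposal amounts to a per-entry knob in pg_hba.conf selecting which certificate subject field is matched against the role name. A hypothetical sketch of what such syntax might look like — the `clientname` option name here is an illustration of the idea, not a parameter that existed at the time of this thread:

```
# Today: cert authentication matches the certificate CN against the
# (possibly mapped) database user name.
hostssl all all 0.0.0.0/0 cert map=corpmap

# Hypothetical: an extra option choosing the subject field to match,
# e.g. the full DN instead of the CN.
hostssl all all 0.0.0.0/0 cert clientname=DN map=corpmap
```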
[
{
"msg_contents": "Hi,\r\n\r\nCurrently, if you hold a multixact open long enough to generate an\r\n\"oldest multixact is far in the past\" message during VACUUM, you may\r\nsee the following ERROR:\r\n\r\n WARNING: oldest multixact is far in the past\r\n HINT: Close open transactions with multixacts soon to avoid wraparound problems.\r\n ERROR: multixact X from before cutoff Y found to be still running\r\n\r\nUpon further inspection, I found that this is because the multixact\r\nlimit used in this case is the threshold for which we emit the \"oldest\r\nmultixact\" message. Instead, I think the multixact limit should be\r\nset to the result of GetOldestMultiXactId(), effectively forcing a\r\nminimum freeze age of zero. The ERROR itself is emitted by\r\nFreezeMultiXactId() and appears to be a safeguard against problems\r\nlike this.\r\n\r\nI've attached a patch to set the limit to the oldest multixact instead\r\nof the \"safeMxactLimit\" in this case. I'd like to credit Jeremy\r\nSchneider as the original reporter.\r\n\r\nNathan",
"msg_date": "Thu, 5 Sep 2019 00:37:40 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On 9/4/19 17:37, Nathan Bossart wrote:\n> Hi,\n>\n> Currently, if you hold a multixact open long enough to generate an\n> \"oldest multixact is far in the past\" message during VACUUM, you may\n> see the following ERROR:\n>\n> WARNING: oldest multixact is far in the past\n> HINT: Close open transactions with multixacts soon to avoid wraparound problems.\n> ERROR: multixact X from before cutoff Y found to be still running\n>\n> Upon further inspection, I found that this is because the multixact\n> limit used in this case is the threshold for which we emit the \"oldest\n> multixact\" message. Instead, I think the multixact limit should be\n> set to the result of GetOldestMultiXactId(), effectively forcing a\n> minimum freeze age of zero. The ERROR itself is emitted by\n> FreezeMultiXactId() and appears to be a safeguard against problems\n> like this.\n>\n> I've attached a patch to set the limit to the oldest multixact instead\n> of the \"safeMxactLimit\" in this case. I'd like to credit Jeremy\n> Schneider as the original reporter.\n\n\nThis was fun (sortof) - and a good part of the afternoon for Nathan,\nNasby and myself today. A rather large PostgreSQL database with default\nautovacuum settings had a large table that started getting behind on\nSunday. The server has a fairly large number of CPUs and a respectable\nworkload. We realized today that with their XID generation they would\ngo read-only to prevent wraparound tomorrow. (And perfectly healthy XID\nage on Sunday - that's wraparound in four days! Did I mention that I'm\nexcited for the default limit GUC change in pg12?) To make matters more\ninteresting, whenever we attempted to run a VACUUM command we\nencountered the ERROR message that Nate quoted on every single attempt! \nThere was a momentary mild panic based on the \"ERRCODE_DATA_CORRUPTED\"\nmessage parameter in heapam.c FreezeMultiXactId() ... 
but as we looked\ncloser we're now thinking there might just be an obscure bug in the code\nthat sets vacuum limits.\n\nNathan and Nasby and myself have been chatting about this for quite\nawhile but the vacuum code isn't exactly the simplest thing in the world\nto reason about.  :)  Anyway, it looks to me like\nMultiXactMemberFreezeThreshold() is intended to progressively reduce the\nvacuum multixact limits across multiple vacuum runs on the same table,\nas pressure on the members space increases.  I'm thinking there was just\na small oversight in writing the formula where under the most aggressive\ncircumstances, vacuum could actually be instructed to delete multixacts\nthat are still in use by active transactions and trigger the failure we\nobserved.\n\nNate put together an initial patch (attached to the previous email,\nwhich was sent only to the bugs list).  We couldn't quite come to a\nconsensus on the best approach, but we decided that he'd kick off the\nthread and I'd throw out an alternative version of the patch that might\nbe worth discussion.  [Attached to this email.]  Curious what others think!\n\n-Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services",
"msg_date": "Wed, 4 Sep 2019 18:01:05 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 1:01 PM Jeremy Schneider <schnjere@amazon.com> wrote:\n> On 9/4/19 17:37, Nathan Bossart wrote:\n> Currently, if you hold a multixact open long enough to generate an\n> \"oldest multixact is far in the past\" message during VACUUM, you may\n> see the following ERROR:\n>\n> WARNING: oldest multixact is far in the past\n> HINT: Close open transactions with multixacts soon to avoid wraparound problems.\n> ERROR: multixact X from before cutoff Y found to be still running\n>\n> Upon further inspection, I found that this is because the multixact\n> limit used in this case is the threshold for which we emit the \"oldest\n> multixact\" message. Instead, I think the multixact limit should be\n> set to the result of GetOldestMultiXactId(), effectively forcing a\n> minimum freeze age of zero. The ERROR itself is emitted by\n> FreezeMultiXactId() and appears to be a safeguard against problems\n> like this.\n>\n> I've attached a patch to set the limit to the oldest multixact instead\n> of the \"safeMxactLimit\" in this case. I'd like to credit Jeremy\n> Schneider as the original reporter.\n>\n> This was fun (sortof) - and a good part of the afternoon for Nathan, Nasby and myself today. A rather large PostgreSQL database with default autovacuum settings had a large table that started getting behind on Sunday. The server has a fairly large number of CPUs and a respectable workload. We realized today that with their XID generation they would go read-only to prevent wraparound tomorrow. (And perfectly healthy XID age on Sunday - that's wraparound in four days! Did I mention that I'm excited for the default limit GUC change in pg12?) To make matters more interesting, whenever we attempted to run a VACUUM command we encountered the ERROR message that Nate quoted on every single attempt! There was a momentary mild panic based on the \"ERRCODE_DATA_CORRUPTED\" message parameter in heapam.c FreezeMultiXactId() ... 
but as we looked closer we're now thinking there might just be an obscure bug in the code that sets vacuum limits.\n>\n> Nathan and Nasby and myself have been chatting about this for quite awhile but the vacuum code isn't exactly the simplest thing in the world to reason about. :) Anyway, it looks to me like MultiXactMemberFreezeThreshold() is intended to progressively reduce the vacuum multixact limits across multiple vacuum runs on the same table, as pressure on the members space increases. I'm thinking there was just a small oversight in writing the formula where under the most aggressive circumstances, vacuum could actually be instructed to delete multixacts that are still in use by active transactions and trigger the failure we observed.\n>\n> Nate put together an initial patch (attached to the previous email, which was sent only to the bugs list). We couldn't quite come to a consensus and on the best approach, but we decided that he'd kick of the thread and I'd throw out an alternative version of the patch that might be worth discussion. [Attached to this email.] Curious what others think!\n\nHi Jeremy, Nathan, Jim,\n\nOk, so to recap... since commit 801c2dc7 in 2014, if the limit was\nbefore the 'safe' limit, then it would log the warning and start using\nthe safe limit, even if that was newer than a multixact that is *still\nrunning*. It's not immediately clear to me if the limits on the\nrelevant GUCs or anything else ever prevented that.\n\nThen commit 53bb309d2d5 came along in 2015 (to fix a bug: member's\nhead could overwrite its tail) and created a way for the safe limit to\nbe more aggressive. 
When member space is low, we start lowering the\neffective max freeze age, and as we do so the likelihood of crossing\ninto still-running-multixact territory increases.\n\nI suppose this requires you to run out of member space (for example\nmany backends key sharing the same FK) or maybe just set\nautovacuum_multixact_freeze_max_age quite low, and then prolong the\nlife of a multixact for longer. Does the problem fix itself once you\nclose the transaction that's in the oldest multixact, ie holding back\nGetOldestMultiXact() from advancing? Since VACUUM errors out, we\ndon't corrupt data, right? Everyone else is still going to see the\nmultixact as running and do the right thing because vacuum never\nmanages to (bogusly) freeze the tuple.\n\nBoth patches prevent mxactLimit from being newer than the oldest\nrunning multixact. The v1 patch uses the most aggressive setting\npossible: the oldest running multi; the v2 uses the least aggressive\nof the 'safe' and oldest running multi. At first glance it seems like\nthe second one is better: it only does something different if we're in\nthe dangerous scenario you identified, but otherwise it sticks to the\nsafe limit, which generates less IO.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 5 Sep 2019 16:01:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On 9/4/19 21:01, Thomas Munro wrote:\n> I suppose this requires you to run out of member space (for example\n> many backends key sharing the same FK) or maybe just set\n> autovacuum_multixact_freeze_max_age quite low, and then prolong the\n> life of a multixact for longer.\nOn this particular production system,\nautovacuum_multixact_freeze_max_age is the default value of 400 million\nand it is not overridden for any tables.  Looks to me like this was just\nworkload driven.  There are a number of FKs and those seem to be a\nlikely candidate to me.\n\n> Does the problem fix itself once you\n> close the transaction that's in the oldest multixact, ie holding back\n> GetOldestMultiXact() from advancing? \nThe really interesting thing about this case is that the only\nlong-running connection was the autovacuum that had been running since\nSunday.  While we were investigating yesterday, the autovacuum process\ndied without advancing relfrozenxid (users configured this system with\npoor logging, so it's not known whether autovac terminated from error or\nfrom a user who logged on to the system).  As soon as the autovacuum\nprocess died, we stopped getting the \"multixact X from before cutoff Y\"\nerrors.\n\nIt really appears that it was the autovacuum process itself that was\nproviding the oldest running multixact which caused errors on\nyesterday's attempts to vacuum other tables - even though I thought\nvacuum processes were ignored by that code.  I'll have to take another\nlook at some point.\n\nVacuum cost parameters had been adjusted after Sunday, so the original\nautovacuum would have used default settings.  Naturally, a new\nautovacuum process started up right away.  This new process - definitely\nusing adjusted cost parameters - completed the vacuum of the large table\nwith 5 passes (index_vacuum_count) in a couple hours. 
Maintenance work\nmemory was already at the max; there were many hundreds of millions of\ndead tuples that still remained to be cleaned up.\n\nThe size of the large table (heap only) was about 75% of the memory on\nthe server, and the table had three indexes each about half the size of\nthe table. The storage was provisioned at just over 10k IOPS; at this\nrate you could read all three indexes from the storage one block at a\ntime in about an hour. (And Linux should be reading more than a block\nat a time.)\n\nIt is not known whether the original autovacuum failed to completely\nvacuum the large table in 3 days because of cost settings alone or\nbecause there's another latent bug somewhere in the autovacuum code that\nput it into some kind of loop (but if autovac hit the error above then\nthe PID would have terminated). We didn't manage to get a pstack.\n\n> Since VACUUM errors out, we\n> don't corrupt data, right? Everyone else is still going to see the\n> multixact as running and do the right thing because vacuum never\n> manages to (bogusly) freeze the tuple.\nThat's my take as well. I don't think there's any data corruption risk\nhere.\n\nIf anyone else ever hits this in the future, I think it's safe to just\nkill the oldest open session. The error should go away and there\nshouldn't be any risk of damage to the database.\n\n> Both patches prevent mxactLimit from being newer than the oldest\n> running multixact. The v1 patch uses the most aggressive setting\n> possible: the oldest running multi; the v2 uses the least aggressive\n> of the 'safe' and oldest running multi. 
At first glance it seems like\n> the second one is better: it only does something different if we're in\n> the dangerous scenario you identified, but otherwise it sticks to the\n> safe limit, which generates less IO.\nThanks for taking a look!\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services",
"msg_date": "Thu, 5 Sep 2019 11:32:15 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On 9/4/19, 9:03 PM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\r\n> Both patches prevent mxactLimit from being newer than the oldest\r\n> running multixact. The v1 patch uses the most aggressive setting\r\n> possible: the oldest running multi; the v2 uses the least aggressive\r\n> of the 'safe' and oldest running multi. At first glance it seems like\r\n> the second one is better: it only does something different if we're in\r\n> the dangerous scenario you identified, but otherwise it sticks to the\r\n> safe limit, which generates less IO.\r\n\r\nThanks for taking a look!\r\n\r\nRight, the v2 patch will effectively ramp-down the freezemin as your\r\nfreeze_max_age gets smaller, while the v1 patch will set the effective\r\nfreezemin to zero as soon as your multixact age passes the threshold.\r\nI think what is unclear to me is whether this ramp-down behavior is\r\nthe intended functionality or we should be doing something similar to\r\nwhat we do for regular transaction IDs (i.e. force freezemin to zero\r\nright after it hits the \"oldest xmin is far in the past\" threshold).\r\nThe comment above MultiXactMemberFreezeThreshold() explains things\r\npretty well, but AFAICT it is more geared towards influencing\r\nautovacuum scheduling. I agree that v2 is safer from the standpoint\r\nthat it changes as little as possible, though.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 5 Sep 2019 20:08:11 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On Fri, Sep 6, 2019 at 6:32 AM Jeremy Schneider <schnjere@amazon.com> wrote:\n> It really appears that it was the autovacuum process itself that was providing the oldest running multixact which caused errors on yesterday's attempts to vacuum other tables - even though I though vacuum processes were ignored by that code. I'll have to take another look at some point.\n\nAh, that seems plausible. If the backend ever called\nGetMultiXactIdMembers() and thence MultiXactIdSetOldestVisible() at a\ntime when there were live multixacts, it would set its own\nOldestVisibleMXactID[] slot, and then GetOldestMultiXactId() would\nreturn that value for the rest of the transaction (unless there was an\neven older one to return, but in the case you're describing there\nwasn't). GetOldestMultiXactId() doesn't have a way to ignore vacuum\nbackends, like GetOldestXmin() does. That doesn't seem to be a\nproblem in itself.\n\n(I am not sure why GetOldestMultiXactId() needs to consider\nOldestVisibleMXactId[] at all for this purpose, and not just\nOldestMemberXactId[], but I suppose it has to do with simultaneously\nkey-share-locked and updated tuples or something, it's too early and I\nhaven't had enough coffee.)\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 Sep 2019 09:31:52 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 4:08 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Right, the v2 patch will effectively ramp-down the freezemin as your\n> freeze_max_age gets smaller, while the v1 patch will set the effective\n> freezemin to zero as soon as your multixact age passes the threshold.\n> I think what is unclear to me is whether this ramp-down behavior is\n> the intended functionality or we should be doing something similar to\n> what we do for regular transaction IDs (i.e. force freezemin to zero\n> right after it hits the \"oldest xmin is far in the past\" threshold).\n> The comment above MultiXactMemberFreezeThreshold() explains things\n> pretty well, but AFAICT it is more geared towards influencing\n> autovacuum scheduling. I agree that v2 is safer from the standpoint\n> that it changes as little as possible, though.\n\nI don't presently have a view on fixing the actual bug here, but I can\ncertainly confirm that I intended MultiXactMemberFreezeThreshold() to\nratchet up the pressure gradually rather than all at once, and my\nsuspicion is that this behavior may be good to retain, but I'm not\nsure.\n\nOne difference between regular XIDs and MultiXacts is that there's\nonly one reason why we can need to vacuum XIDs, but there are two\nreasons why we can need to vacuum MultiXacts. We can either be\nrunning short of members space or we can be running short of offset\nspace, and running out of either one is bad. Regular XIDs have no\nanalogue of this problem: there's only one thing that you can exhaust.\nAt the time I wrote MultiXactMemberFreezeThreshold(), only the\n'offsets' array had any sort of wraparound protection, and it was\nspace in 'offsets' that was measured by relminmxid, datminmxid, etc.\nYou could imagine having separate catalog state to track space in the\n'members' SLRU, e.g. relminmxidmembers, datminmxidmembers, etc., but\nthat wasn't really an option for fixing the bug at hand, because it\nwouldn't have been back-patchable.\n\nSo the challenge was to find some way of using the existing catalog\nstate to try to provide wraparound protection for a new kind of thing\nfor which wraparound protection had not been previously contemplated.\nAnd so MultiXactMemberFreezeThreshold() was born.\n\n(I apologize if any of the above sounds like I'm taking credit for\nwork actually done by Thomas, who I see is listed as the primary\nauthor of the commit in question. I feel like I invented\nMultiXactMemberFreezeThreshold and the big comment at the top of it\nlooks to me like something I wrote, but this was a long time ago\nand I don't really remember who did what. My intent here is to provide\nsome context that may be useful based on what I remember about that\npatch, not to steal anybody's thunder.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 6 Sep 2019 13:25:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On Sat, Sep 7, 2019 at 5:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> (I apologize if any of the above sounds like I'm taking credit for\n> work actually done by Thomas, who I see is listed as the primary\n> author of the commit in question. I feel like I invented\n> MultiXactMemberFreezeThreshold and the big comment at the top of it\n> looks to me like something I wrote, but this was a long time ago\n> and I don't really remember who did what. My intent here is to provide\n> some context that may be useful based on what I remember about that\n> patch, not to steal anybody's thunder.)\n\nI don't recall but it could well have been your idea to do it that\nway, my code and testing, and your comments and commit. Either way\nI'm happy for you to steal my bugs.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 7 Sep 2019 08:40:53 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On 9/6/19, 10:26 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n> On Thu, Sep 5, 2019 at 4:08 PM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> Right, the v2 patch will effectively ramp-down the freezemin as your\r\n>> freeze_max_age gets smaller, while the v1 patch will set the effective\r\n>> freezemin to zero as soon as your multixact age passes the threshold.\r\n>> I think what is unclear to me is whether this ramp-down behavior is\r\n>> the intended functionality or we should be doing something similar to\r\n>> what we do for regular transaction IDs (i.e. force freezemin to zero\r\n>> right after it hits the \"oldest xmin is far in the past\" threshold).\r\n>> The comment above MultiXactMemberFreezeThreshold() explains things\r\n>> pretty well, but AFAICT it is more geared towards influencing\r\n>> autovacuum scheduling. I agree that v2 is safer from the standpoint\r\n>> that it changes as little as possible, though.\r\n>\r\n> I don't presently have a view on fixing the actual bug here, but I can\r\n> certainly confirm that I intended MultiXactMemberFreezeThreshold() to\r\n> ratchet up the pressure gradually rather than all at once, and my\r\n> suspicion is that this behavior may be good to retain, but I'm not\r\n> sure.\r\n\r\nThanks for the detailed background information. FWIW I am now in\r\nfavor of the v2 patch.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 17 Sep 2019 19:34:45 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 8:11 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> Thanks for the detailed background information. FWIW I am now in\n> favor of the v2 patch.\n\nHere's a version with a proposed commit message and a comment. Please\nlet me know if I credited things to the right people!",
"msg_date": "Wed, 16 Oct 2019 19:10:48 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On 10/15/19, 11:11 PM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\r\n> Here's a version with a proposed commit message and a comment. Please\r\n> let me know if I credited things to the right people!\r\n\r\nLooks good to me. Thanks!\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 16 Oct 2019 17:09:06 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On 10/16/19 10:09, Bossart, Nathan wrote:\n> On 10/15/19, 11:11 PM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\n>> Here's a version with a proposed commit message and a comment. Please\n>> let me know if I credited things to the right people!\n> \n> Looks good to me. Thanks!\n\n+1\n\n\n",
"msg_date": "Wed, 16 Oct 2019 10:11:37 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
},
{
"msg_contents": "On Thu, Oct 17, 2019 at 6:11 AM Jeremy Schneider <schnjere@amazon.com> wrote:\n> On 10/16/19 10:09, Bossart, Nathan wrote:\n> > On 10/15/19, 11:11 PM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\n> >> Here's a version with a proposed commit message and a comment. Please\n> >> let me know if I credited things to the right people!\n> >\n> > Looks good to me. Thanks!\n>\n> +1\n\nPushed.\n\n\n",
"msg_date": "Thu, 17 Oct 2019 12:25:08 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: multixact X from before cutoff Y found to be still running"
}
]
[
{
"msg_contents": "Hi,\n\nI found small issue in pg_promote(). If postmaster dies\nwhile pg_promote() is waiting for the standby promotion to finish,\npg_promote() can cause busy loop. This happens because\npg_promote() does nothing when WaitLatch() detects\nthe postmaster death event. I think that pg_promote()\nshould bail out of the loop immediately in that case.\n\nAttached is the patch for the fix.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Thu, 5 Sep 2019 09:46:26 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_promote() can cause busy loop"
},
{
"msg_contents": "On Thu, Sep 05, 2019 at 09:46:26AM +0900, Fujii Masao wrote:\n> I found small issue in pg_promote(). If postmaster dies\n> while pg_promote() is waiting for the standby promotion to finish,\n> pg_promote() can cause busy loop. This happens because\n> pg_promote() does nothing when WaitLatch() detects\n> the postmaster death event. I think that pg_promote()\n> should bail out of the loop immediately in that case.\n> \n> Attached is the patch for the fix.\n\nIndeed, this is not correct.\n\n- ereport(WARNING,\n- (errmsg(\"server did not promote within %d seconds\",\n- wait_seconds)));\n+ if (i >= WAITS_PER_SECOND * wait_seconds)\n+ ereport(WARNING,\n+ (errmsg(\"server did not promote within %d seconds\", wait_seconds)));\n\nWould it make more sense to issue a warning mentioning the postmaster\ndeath and then return PG_RETURN_BOOL(false) instead of breaking out of\nthe loop? It could be confusing to warn about a timeout if the\npostmaster died in parallel, and we know the actual reason why the\npromotion did not happen in this case.\n--\nMichael",
"msg_date": "Thu, 5 Sep 2019 10:25:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_promote() can cause busy loop"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 10:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 05, 2019 at 09:46:26AM +0900, Fujii Masao wrote:\n> > I found small issue in pg_promote(). If postmaster dies\n> > while pg_promote() is waiting for the standby promotion to finish,\n> > pg_promote() can cause busy loop. This happens because\n> > pg_promote() does nothing when WaitLatch() detects\n> > the postmaster death event. I think that pg_promote()\n> > should bail out of the loop immediately in that case.\n> >\n> > Attached is the patch for the fix.\n>\n> Indeed, this is not correct.\n>\n> - ereport(WARNING,\n> - (errmsg(\"server did not promote within %d seconds\",\n> - wait_seconds)));\n> + if (i >= WAITS_PER_SECOND * wait_seconds)\n> + ereport(WARNING,\n> + (errmsg(\"server did not promote within %d seconds\", wait_seconds)));\n>\n> Would it make more sense to issue a warning mentioning the postmaster\n> death and then return PG_RETURN_BOOL(false) instead of breaking out of\n> the loop? It could be confusing to warn about a timeout if the\n> postmaster died in parallel, and we know the actual reason why the\n> promotion did not happen in this case.\n\nIt's ok to use PG_RETURN_BOOL(false) instead of breaking out of the loop\nin that case. Which would make the code simpler.\n\nBut I don't think it's worth warning about postmaster death here\nbecause a backend will emit FATAL message like \"terminating connection\ndue to unexpected postmaster exit\" in secure_read() after\npg_promote() returns false.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 5 Sep 2019 10:53:19 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_promote() can cause busy loop"
},
{
"msg_contents": "On Thu, Sep 05, 2019 at 10:53:19AM +0900, Fujii Masao wrote:\n> It's ok to use PG_RETURN_BOOL(false) instead of breaking out of the loop\n> in that case. Which would make the code simpler.\n\nOkay. I would have done so FWIW.\n\n> But I don't think it's worth warning about postmaster death here\n> because a backend will emit FATAL message like \"terminating connection\n> due to unexpected postmaster exit\" in secure_read() after\n> pg_promote() returns false.\n\nGood point, that could be equally confusing.\n--\nMichael",
"msg_date": "Thu, 5 Sep 2019 11:09:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_promote() can cause busy loop"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 11:10 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 05, 2019 at 10:53:19AM +0900, Fujii Masao wrote:\n> > It's ok to use PG_RETURN_BOOL(false) instead of breaking out of the loop\n> > in that case. Which would make the code simpler.\n>\n> Okay. I would have done so FWIW.\n>\n> > But I don't think it's worth warning about postmaster death here\n> > because a backend will emit FATAL message like \"terminating connection\n> > due to unexpected postmaster exit\" in secure_read() after\n> > pg_promote() returns false.\n>\n> Good point, that could be equally confusing.\n\nSo, barring any objection, I will commit the attached patch.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Thu, 5 Sep 2019 16:03:22 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_promote() can cause busy loop"
},
{
"msg_contents": "On Thu, Sep 05, 2019 at 04:03:22PM +0900, Fujii Masao wrote:\n> So, barring any objection, I will commit the attached patch.\n\nLGTM. Thanks!\n--\nMichael",
"msg_date": "Thu, 5 Sep 2019 16:51:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_promote() can cause busy loop"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 4:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 05, 2019 at 04:03:22PM +0900, Fujii Masao wrote:\n> > So, barring any objection, I will commit the attached patch.\n>\n> LGTM. Thanks!\n\nCommitted. Thanks!\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 6 Sep 2019 14:31:32 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_promote() can cause busy loop"
}
]
[
{
"msg_contents": "Dear hackers,\n\nI found that such a statement would get 0 in PL/pgSQL.\n\nPREPARE smt_del(int) AS DELETE FROM t1;\nEXECUTE 'EXECUTE smt_del(100)';\nGET DIAGNOSTICS j = ROW_COUNT;\n\nIn fact, this is a problem with SPI, it does not support getting result \nof the EXECUTE command. I made a little enhancement. Support for the \nnumber of rows processed when executing INSERT/UPDATE/DELETE statements \ndynamically.\n\nRegards,\nQuan Zongliang",
"msg_date": "Thu, 5 Sep 2019 14:39:00 +0800",
"msg_from": "Quan Zongliang <zongliang.quan@postgresdata.com>",
"msg_from_op": true,
"msg_subject": "enhance SPI to support EXECUTE commands"
},
{
"msg_contents": "čt 5. 9. 2019 v 8:39 odesílatel Quan Zongliang <\nzongliang.quan@postgresdata.com> napsal:\n\n> Dear hackers,\n>\n> I found that such a statement would get 0 in PL/pgSQL.\n>\n> PREPARE smt_del(int) AS DELETE FROM t1;\n> EXECUTE 'EXECUTE smt_del(100)';\n> GET DIAGNOSTICS j = ROW_COUNT;\n>\n> In fact, this is a problem with SPI, it does not support getting result\n> of the EXECUTE command. I made a little enhancement. Support for the\n> number of rows processed when executing INSERT/UPDATE/DELETE statements\n> dynamically.\n>\n\nIs there some use case for support this feature?\n\nRegards\n\nPavel\n\n\n> Regards,\n> Quan Zongliang\n>",
"msg_date": "Thu, 5 Sep 2019 09:09:26 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enhance SPI to support EXECUTE commands"
},
{
"msg_contents": "On 2019/9/5 15:09, Pavel Stehule wrote:\n> \n> \n> čt 5. 9. 2019 v 8:39 odesílatel Quan Zongliang \n> <zongliang.quan@postgresdata.com \n> <mailto:zongliang.quan@postgresdata.com>> napsal:\n> \n> Dear hackers,\n> \n> I found that such a statement would get 0 in PL/pgSQL.\n> \n> PREPARE smt_del(int) AS DELETE FROM t1;\n> EXECUTE 'EXECUTE smt_del(100)';\n> GET DIAGNOSTICS j = ROW_COUNT;\n> \n> In fact, this is a problem with SPI, it does not support getting result\n> of the EXECUTE command. I made a little enhancement. Support for the\n> number of rows processed when executing INSERT/UPDATE/DELETE statements\n> dynamically.\n> \n> \n> Is there some use case for support this feature?\n> \nA user deletes the data in PL/pgSQL using the above method, hoping to do \nmore processing according to the number of rows affected, and found that \neach time will get 0.\n\nSample code:\nPREPARE smt_del(int) AS DELETE FROM t1 WHERE c=$1;\nEXECUTE 'EXECUTE smt_del(100)';\nGET DIAGNOSTICS j = ROW_COUNT;\n\nIF j=1 THEN\n do something\nELSIF j=0 THEN\n do something\n\nHere j is always equal to 0.\n\nRegards\n\n> Regards\n> \n> Pavel\n> \n> \n> Regards,\n> Quan Zongliang\n> \n\n\n\n",
"msg_date": "Thu, 5 Sep 2019 16:25:15 +0800",
"msg_from": "Quan Zongliang <zongliang.quan@postgresdata.com>",
"msg_from_op": true,
"msg_subject": "Re: enhance SPI to support EXECUTE commands"
},
{
"msg_contents": "čt 5. 9. 2019 v 10:25 odesílatel Quan Zongliang <\nzongliang.quan@postgresdata.com> napsal:\n\n> On 2019/9/5 15:09, Pavel Stehule wrote:\n> >\n> >\n> > čt 5. 9. 2019 v 8:39 odesílatel Quan Zongliang\n> > <zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>\n> > <mailto:zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>>> napsal:\n> >\n> > Dear hackers,\n> >\n> > I found that such a statement would get 0 in PL/pgSQL.\n> >\n> > PREPARE smt_del(int) AS DELETE FROM t1;\n> > EXECUTE 'EXECUTE smt_del(100)';\n> > GET DIAGNOSTICS j = ROW_COUNT;\n> >\n> > In fact, this is a problem with SPI, it does not support getting\n> result\n> > of the EXECUTE command. I made a little enhancement. Support for the\n> > number of rows processed when executing INSERT/UPDATE/DELETE\n> statements\n> > dynamically.\n> >\n> >\n> > Is there some use case for support this feature?\n> >\n> A user deletes the data in PL/pgSQL using the above method, hoping\n> to do\n> more processing according to the number of rows affected, and found\n> that\n> each time will get 0.\n>\n> Sample code:\n> PREPARE smt_del(int) AS DELETE FROM t1 WHERE c=$1;\n> EXECUTE 'EXECUTE smt_del(100)';\n> GET DIAGNOSTICS j = ROW_COUNT;\n>\n\nThis has not sense in plpgsql. Why you use PREPARE statement explicitly?\n\n\n> IF j=1 THEN\n> do something\n> ELSIF j=0 THEN\n> do something\n>\n> Here j is always equal to 0.\n>\n\n\n\n>\n> Regards\n>\n> > Regards\n> >\n> > Pavel\n> >\n> >\n> > Regards,\n> > Quan Zongliang\n> >\n>",
"msg_date": "Thu, 5 Sep 2019 10:31:19 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enhance SPI to support EXECUTE commands"
},
{
"msg_contents": "On 2019/9/5 16:31, Pavel Stehule wrote:\n> \n> \n> čt 5. 9. 2019 v 10:25 odesílatel Quan Zongliang \n> <zongliang.quan@postgresdata.com \n> <mailto:zongliang.quan@postgresdata.com>> napsal:\n> \n> On 2019/9/5 15:09, Pavel Stehule wrote:\n> >\n> >\n> > čt 5. 9. 2019 v 8:39 odesílatel Quan Zongliang\n> > <zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>\n> > <mailto:zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>>> napsal:\n> >\n> > Dear hackers,\n> >\n> > I found that such a statement would get 0 in PL/pgSQL.\n> >\n> > PREPARE smt_del(int) AS DELETE FROM t1;\n> > EXECUTE 'EXECUTE smt_del(100)';\n> > GET DIAGNOSTICS j = ROW_COUNT;\n> >\n> > In fact, this is a problem with SPI, it does not support\n> getting result\n> > of the EXECUTE command. I made a little enhancement. Support\n> for the\n> > number of rows processed when executing INSERT/UPDATE/DELETE\n> statements\n> > dynamically.\n> >\n> >\n> > Is there some use case for support this feature?\n> >\n> A user deletes the data in PL/pgSQL using the above method, hoping\n> to do\n> more processing according to the number of rows affected, and found\n> that\n> each time will get 0.\n> \n> Sample code:\n> PREPARE smt_del(int) AS DELETE FROM t1 WHERE c=$1;\n> EXECUTE 'EXECUTE smt_del(100)';\n> GET DIAGNOSTICS j = ROW_COUNT;\n> \n> \n> This has not sense in plpgsql. Why you use PREPARE statement explicitly?\n> \nYes, I told him to do it in other ways, and the problem has been solved.\n\nUnder psql, we can get this result\n\nflying=# EXECUTE smt_del(100);\nDELETE 1\n\nSo I think this may be the negligence of SPI, it should be better to \ndeal with it.\n\n> \n> IF j=1 THEN\n> do something\n> ELSIF j=0 THEN\n> do something\n> \n> Here j is always equal to 0.\n> \n> \n> \n> Regards\n> \n> > Regards\n> >\n> > Pavel\n> >\n> >\n> > Regards,\n> > Quan Zongliang\n> >\n> \n\n\n\n",
"msg_date": "Thu, 5 Sep 2019 16:56:48 +0800",
"msg_from": "Quan Zongliang <zongliang.quan@postgresdata.com>",
"msg_from_op": true,
"msg_subject": "Re: enhance SPI to support EXECUTE commands"
},
{
"msg_contents": "čt 5. 9. 2019 v 10:57 odesílatel Quan Zongliang <\nzongliang.quan@postgresdata.com> napsal:\n\n> On 2019/9/5 16:31, Pavel Stehule wrote:\n> >\n> >\n> > čt 5. 9. 2019 v 10:25 odesílatel Quan Zongliang\n> > <zongliang.quan@postgresdata.com\n> > <mailto:zongliang.quan@postgresdata.com>> napsal:\n> >\n> > On 2019/9/5 15:09, Pavel Stehule wrote:\n> > >\n> > >\n> > > čt 5. 9. 2019 v 8:39 odesílatel Quan Zongliang\n> > > <zongliang.quan@postgresdata.com\n> > <mailto:zongliang.quan@postgresdata.com>\n> > > <mailto:zongliang.quan@postgresdata.com\n> > <mailto:zongliang.quan@postgresdata.com>>> napsal:\n> > >\n> > > Dear hackers,\n> > >\n> > > I found that such a statement would get 0 in PL/pgSQL.\n> > >\n> > > PREPARE smt_del(int) AS DELETE FROM t1;\n> > > EXECUTE 'EXECUTE smt_del(100)';\n> > > GET DIAGNOSTICS j = ROW_COUNT;\n> > >\n> > > In fact, this is a problem with SPI, it does not support\n> > getting result\n> > > of the EXECUTE command. I made a little enhancement. Support\n> > for the\n> > > number of rows processed when executing INSERT/UPDATE/DELETE\n> > statements\n> > > dynamically.\n> > >\n> > >\n> > > Is there some use case for support this feature?\n> > >\n> > A user deletes the data in PL/pgSQL using the above method, hoping\n> > to do\n> > more processing according to the number of rows affected, and found\n> > that\n> > each time will get 0.\n> >\n> > Sample code:\n> > PREPARE smt_del(int) AS DELETE FROM t1 WHERE c=$1;\n> > EXECUTE 'EXECUTE smt_del(100)';\n> > GET DIAGNOSTICS j = ROW_COUNT;\n> >\n> >\n> > This has not sense in plpgsql. Why you use PREPARE statement explicitly?\n> >\n> Yes, I told him to do it in other ways, and the problem has been solved.\n>\n> Under psql, we can get this result\n>\n> flying=# EXECUTE smt_del(100);\n> DELETE 1\n>\n> So I think this may be the negligence of SPI, it should be better to\n> deal with it.\n>\n\nPersonally, I would not to support features that allows bad code.\n\nPavel\n\n>\n> >\n> > IF j=1 THEN\n> > do something\n> > ELSIF j=0 THEN\n> > do something\n> >\n> > Here j is always equal to 0.\n> >\n> >\n> >\n> > Regards\n> >\n> > > Regards\n> > >\n> > > Pavel\n> > >\n> > >\n> > > Regards,\n> > > Quan Zongliang\n> > >\n> >\n>",
"msg_date": "Thu, 5 Sep 2019 11:33:34 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enhance SPI to support EXECUTE commands"
},
{
"msg_contents": "On 2019/9/5 17:33, Pavel Stehule wrote:\n> \n> \n> čt 5. 9. 2019 v 10:57 odesílatel Quan Zongliang \n> <zongliang.quan@postgresdata.com \n> <mailto:zongliang.quan@postgresdata.com>> napsal:\n> \n> On 2019/9/5 16:31, Pavel Stehule wrote:\n> >\n> >\n> > čt 5. 9. 2019 v 10:25 odesílatel Quan Zongliang\n> > <zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>\n> > <mailto:zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>>> napsal:\n> >\n> > On 2019/9/5 15:09, Pavel Stehule wrote:\n> > >\n> > >\n> > > čt 5. 9. 2019 v 8:39 odesílatel Quan Zongliang\n> > > <zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>\n> > <mailto:zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>>\n> > > <mailto:zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>\n> > <mailto:zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>>>> napsal:\n> > >\n> > > Dear hackers,\n> > >\n> > > I found that such a statement would get 0 in PL/pgSQL.\n> > >\n> > > PREPARE smt_del(int) AS DELETE FROM t1;\n> > > EXECUTE 'EXECUTE smt_del(100)';\n> > > GET DIAGNOSTICS j = ROW_COUNT;\n> > >\n> > > In fact, this is a problem with SPI, it does not support\n> > getting result\n> > > of the EXECUTE command. I made a little enhancement.\n> Support\n> > for the\n> > > number of rows processed when executing\n> INSERT/UPDATE/DELETE\n> > statements\n> > > dynamically.\n> > >\n> > >\n> > > Is there some use case for support this feature?\n> > >\n> > A user deletes the data in PL/pgSQL using the above method,\n> hoping\n> > to do\n> > more processing according to the number of rows affected, and\n> found\n> > that\n> > each time will get 0.\n> >\n> > Sample code:\n> > PREPARE smt_del(int) AS DELETE FROM t1 WHERE c=$1;\n> > EXECUTE 'EXECUTE smt_del(100)';\n> > GET DIAGNOSTICS j = ROW_COUNT;\n> >\n> >\n> > This has not sense in plpgsql. 
Why you use PREPARE statement\n> explicitly?\n> >\n> Yes, I told him to do it in other ways, and the problem has been solved.\n> \n> Under psql, we can get this result\n> \n> flying=# EXECUTE smt_del(100);\n> DELETE 1\n> \n> So I think this may be the negligence of SPI, it should be better to\n> deal with it.\n> \n> \n> Personally, I would not to support features that allows bad code.\n> \nMy code is actually a way to continue the CREATE AS SELECT and COPY \nstatements. In spi.c, they look like this:\n\nif (IsA(stmt->utilityStmt, CreateTableAsStmt)) // original code\n...\nelse if (IsA(stmt->utilityStmt, CopyStmt)) // original code\n...\nelse if (IsA(stmt->utilityStmt, ExecuteStmt)) // my code\n\nMy patch was not developed for this PL/pgSQL approach. I just because it \nfound this problem.\n\n\n> Pavel\n> \n> \n> >\n> > IF j=1 THEN\n> > do something\n> > ELSIF j=0 THEN\n> > do something\n> >\n> > Here j is always equal to 0.\n> >\n> >\n> >\n> > Regards\n> >\n> > > Regards\n> > >\n> > > Pavel\n> > >\n> > >\n> > > Regards,\n> > > Quan Zongliang\n> > >\n> >\n> \n\n\n\n\n",
"msg_date": "Fri, 6 Sep 2019 09:35:51 +0800",
"msg_from": "Quan Zongliang <zongliang.quan@postgresdata.com>",
"msg_from_op": true,
"msg_subject": "Re: enhance SPI to support EXECUTE commands"
},
{
"msg_contents": "pá 6. 9. 2019 v 3:36 odesílatel Quan Zongliang <\nzongliang.quan@postgresdata.com> napsal:\n\n> On 2019/9/5 17:33, Pavel Stehule wrote:\n> >\n> >\n> > čt 5. 9. 2019 v 10:57 odesílatel Quan Zongliang\n> > <zongliang.quan@postgresdata.com\n> > <mailto:zongliang.quan@postgresdata.com>> napsal:\n> >\n> > On 2019/9/5 16:31, Pavel Stehule wrote:\n> > >\n> > >\n> > > čt 5. 9. 2019 v 10:25 odesílatel Quan Zongliang\n> > > <zongliang.quan@postgresdata.com\n> > <mailto:zongliang.quan@postgresdata.com>\n> > > <mailto:zongliang.quan@postgresdata.com\n> > <mailto:zongliang.quan@postgresdata.com>>> napsal:\n> > >\n> > > On 2019/9/5 15:09, Pavel Stehule wrote:\n> > > >\n> > > >\n> > > > čt 5. 9. 2019 v 8:39 odesílatel Quan Zongliang\n> > > > <zongliang.quan@postgresdata.com\n> > <mailto:zongliang.quan@postgresdata.com>\n> > > <mailto:zongliang.quan@postgresdata.com\n> > <mailto:zongliang.quan@postgresdata.com>>\n> > > > <mailto:zongliang.quan@postgresdata.com\n> > <mailto:zongliang.quan@postgresdata.com>\n> > > <mailto:zongliang.quan@postgresdata.com\n> > <mailto:zongliang.quan@postgresdata.com>>>> napsal:\n> > > >\n> > > > Dear hackers,\n> > > >\n> > > > I found that such a statement would get 0 in PL/pgSQL.\n> > > >\n> > > > PREPARE smt_del(int) AS DELETE FROM t1;\n> > > > EXECUTE 'EXECUTE smt_del(100)';\n> > > > GET DIAGNOSTICS j = ROW_COUNT;\n> > > >\n> > > > In fact, this is a problem with SPI, it does not\n> support\n> > > getting result\n> > > > of the EXECUTE command. 
I made a little enhancement.\n> > Support\n> > > for the\n> > > > number of rows processed when executing\n> > INSERT/UPDATE/DELETE\n> > > statements\n> > > > dynamically.\n> > > >\n> > > >\n> > > > Is there some use case for support this feature?\n> > > >\n> > > A user deletes the data in PL/pgSQL using the above method,\n> > hoping\n> > > to do\n> > > more processing according to the number of rows affected, and\n> > found\n> > > that\n> > > each time will get 0.\n> > >\n> > > Sample code:\n> > > PREPARE smt_del(int) AS DELETE FROM t1 WHERE c=$1;\n> > > EXECUTE 'EXECUTE smt_del(100)';\n> > > GET DIAGNOSTICS j = ROW_COUNT;\n> > >\n> > >\n> > > This has not sense in plpgsql. Why you use PREPARE statement\n> > explicitly?\n> > >\n> > Yes, I told him to do it in other ways, and the problem has been\n> solved.\n> >\n> > Under psql, we can get this result\n> >\n> > flying=# EXECUTE smt_del(100);\n> > DELETE 1\n> >\n> > So I think this may be the negligence of SPI, it should be better to\n> > deal with it.\n> >\n> >\n> > Personally, I would not to support features that allows bad code.\n> >\n> My code is actually a way to continue the CREATE AS SELECT and COPY\n> statements. In spi.c, they look like this:\n>\n> if (IsA(stmt->utilityStmt, CreateTableAsStmt)) // original code\n> ...\n> else if (IsA(stmt->utilityStmt, CopyStmt)) // original code\n> ...\n> else if (IsA(stmt->utilityStmt, ExecuteStmt)) // my code\n>\n> My patch was not developed for this PL/pgSQL approach. I just because it\n> found this problem.\n>\n\nok, I can understand to this - but your example is usage is not good.\n\nPavel\n\n\n>\n> > Pavel\n> >\n> >\n> > >\n> > > IF j=1 THEN\n> > > do something\n> > > ELSIF j=0 THEN\n> > > do something\n> > >\n> > > Here j is always equal to 0.\n> > >\n> > >\n> > >\n> > > Regards\n> > >\n> > > > Regards\n> > > >\n> > > > Pavel\n> > > >\n> > > >\n> > > > Regards,\n> > > > Quan Zongliang\n> > > >\n> > >\n> >\n>\n>\n>\n\npá 6. 9. 
2019 v 3:36 odesílatel Quan Zongliang <zongliang.quan@postgresdata.com> napsal:On 2019/9/5 17:33, Pavel Stehule wrote:\n> \n> \n> čt 5. 9. 2019 v 10:57 odesílatel Quan Zongliang \n> <zongliang.quan@postgresdata.com \n> <mailto:zongliang.quan@postgresdata.com>> napsal:\n> \n> On 2019/9/5 16:31, Pavel Stehule wrote:\n> >\n> >\n> > čt 5. 9. 2019 v 10:25 odesílatel Quan Zongliang\n> > <zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>\n> > <mailto:zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>>> napsal:\n> >\n> > On 2019/9/5 15:09, Pavel Stehule wrote:\n> > >\n> > >\n> > > čt 5. 9. 2019 v 8:39 odesílatel Quan Zongliang\n> > > <zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>\n> > <mailto:zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>>\n> > > <mailto:zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>\n> > <mailto:zongliang.quan@postgresdata.com\n> <mailto:zongliang.quan@postgresdata.com>>>> napsal:\n> > >\n> > > Dear hackers,\n> > >\n> > > I found that such a statement would get 0 in PL/pgSQL.\n> > >\n> > > PREPARE smt_del(int) AS DELETE FROM t1;\n> > > EXECUTE 'EXECUTE smt_del(100)';\n> > > GET DIAGNOSTICS j = ROW_COUNT;\n> > >\n> > > In fact, this is a problem with SPI, it does not support\n> > getting result\n> > > of the EXECUTE command. 
I made a little enhancement.\n> Support\n> > for the\n> > > number of rows processed when executing\n> INSERT/UPDATE/DELETE\n> > statements\n> > > dynamically.\n> > >\n> > >\n> > > Is there some use case for support this feature?\n> > >\n> > A user deletes the data in PL/pgSQL using the above method,\n> hoping\n> > to do\n> > more processing according to the number of rows affected, and\n> found\n> > that\n> > each time will get 0.\n> >\n> > Sample code:\n> > PREPARE smt_del(int) AS DELETE FROM t1 WHERE c=$1;\n> > EXECUTE 'EXECUTE smt_del(100)';\n> > GET DIAGNOSTICS j = ROW_COUNT;\n> >\n> >\n> > This has not sense in plpgsql. Why you use PREPARE statement\n> explicitly?\n> >\n> Yes, I told him to do it in other ways, and the problem has been solved.\n> \n> Under psql, we can get this result\n> \n> flying=# EXECUTE smt_del(100);\n> DELETE 1\n> \n> So I think this may be the negligence of SPI, it should be better to\n> deal with it.\n> \n> \n> Personally, I would not to support features that allows bad code.\n> \nMy code is actually a way to continue the CREATE AS SELECT and COPY \nstatements. In spi.c, they look like this:\n\nif (IsA(stmt->utilityStmt, CreateTableAsStmt)) // original code\n...\nelse if (IsA(stmt->utilityStmt, CopyStmt)) // original code\n...\nelse if (IsA(stmt->utilityStmt, ExecuteStmt)) // my code\n\nMy patch was not developed for this PL/pgSQL approach. I just because it \nfound this problem.ok, I can understand to this - but your example is usage is not good.Pavel\n\n\n> Pavel\n> \n> \n> >\n> > IF j=1 THEN\n> > do something\n> > ELSIF j=0 THEN\n> > do something\n> >\n> > Here j is always equal to 0.\n> >\n> >\n> >\n> > Regards\n> >\n> > > Regards\n> > >\n> > > Pavel\n> > >\n> > >\n> > > Regards,\n> > > Quan Zongliang\n> > >\n> >\n>",
"msg_date": "Fri, 6 Sep 2019 05:18:58 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enhance SPI to support EXECUTE commands"
},
{
"msg_contents": "I don't see much use for this because the documentation says that \"server's\nexecute command cannot be used directly within pl/pgsql function (and it is\nnot needed). Within pl/pgsql you can execute update/delete commands using\npl/pgsql EXECUTE command and get results like row_count using \"get\ndiagnostic\".\n\nWhy would somebody do what you have shown in your example in pl/pgsql? Or\ndo you have a more general use-case for this enhancement?\n\nOn Thu, Sep 5, 2019 at 11:39 AM Quan Zongliang <\nzongliang.quan@postgresdata.com> wrote:\n\n> Dear hackers,\n>\n> I found that such a statement would get 0 in PL/pgSQL.\n>\n> PREPARE smt_del(int) AS DELETE FROM t1;\n> EXECUTE 'EXECUTE smt_del(100)';\n> GET DIAGNOSTICS j = ROW_COUNT;\n>\n> In fact, this is a problem with SPI, it does not support getting result\n> of the EXECUTE command. I made a little enhancement. Support for the\n> number of rows processed when executing INSERT/UPDATE/DELETE statements\n> dynamically.\n>\n> Regards,\n> Quan Zongliang\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca",
"msg_date": "Wed, 18 Sep 2019 17:29:52 +0500",
"msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enhance SPI to support EXECUTE commands"
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 18, 2019 at 05:29:52PM +0500, Ahsan Hadi wrote:\n>I don't see much use for this because the documentation says that \"server's\n>execute command cannot be used directly within pl/pgsql function (and it is\n>not needed). Within pl/pgsql you can execute update/delete commands using\n>pl/pgsql EXECUTE command and get results like row_count using \"get\n>diagnostic\".\n>\n>Why would somebody do what you have shown in your example in pl/pgsql? Or\n>do you have a more general use-case for this enhancement?\n>\n\nYeah, I think that's a good question - why would we need this? In fact,\nthe plpgsql docs explicitly say:\n\n The PL/pgSQL EXECUTE statement is not related to the EXECUTE SQL\n statement supported by the PostgreSQL server. The server's EXECUTE\n statement cannot be used directly within PL/pgSQL functions (and is\n not needed).\n\nThat is because all queries in plpgsql are prepared and cached\nautomatically, so why would we need this feature?\n\nIn any case, the patch should probably be in \"waiting on author\" state,\nso I'll make it that way.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 5 Jan 2020 00:05:28 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: enhance SPI to support EXECUTE commands"
},
{
"msg_contents": "I've marked this patch as returned with feedback. It's been sitting in\nthe CF without any response from the author since September, and it's\nnot quite clear we actually want/need this feature. If needed, the patch\ncan be resubmitted for 2020-03.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 1 Feb 2020 12:21:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: enhance SPI to support EXECUTE commands"
}
] |
[
{
"msg_contents": "Dear all\n\nWe are developing MobilityDB, an open source PostgreSQL/PostGIS extension\nthat provides temporal and spatio-temporal types. The source code, manuals,\nand related publications are available at the address\nhttps://github.com/ULB-CoDE-WIT/MobilityDB/\n<https://github.com/ULB-CoDE-WIT/MobilityDB/tree/stats>\n\nIn MobilityDB temporal types are types derived from PostgreSQL/PostGIS\ntypes to which a time dimension is added. MobilityDB provides the following\ntemporal types: tbool (temporal boolean), tint (temporal int), tfloat\n(temporal float), text (temporal text), tgeompoint (temporal geometric\npoints) and tgeogpoint (temporal geographic points). For example, we can\ndefine a tfloat and a tgeompoint as follows\n\nSELECT tfloat '[1.5@2000-01-01, 2.5@2000-01-02, 1.5@2000-01-03]';\nSELECT tgeompoint '[Point(0 0)@2000-01-01 08:00, Point(1 0)@2000-01-02\n08:05, Point(1 1)@2000-01-03 08:10]';\n\nWe are developing the analyze/selectivity functions for those types. Our\napproach is to use the standard PostgreSQL/PostGIS functions for the value\nand the time dimensions where the slots starting from 0 will be used for\nthe value dimension, and the slots starting from 2 will be used for the\ntime dimension. 
For example, for tfloat we use range_typanalyze and related\nfunctions for\n* collecting in slots 0 and 1, STATISTIC_KIND_BOUNDS_HISTOGRAM\nand STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM for the float ranges of the value\ndimension\n* collecting in slots 2 and 3, STATISTIC_KIND_BOUNDS_HISTOGRAM\nand STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM for the periods (similar to\ntstzranges) of the time dimension\n\nHowever, we end up copying several PostgreSQL functions to which we only\nadd an additional parameter stating the slot number from which the specific\nstatistic kind should be found (either 0 or 2)\n\nbool\nget_attstatsslot_mobdb(AttStatsSlot *sslot, HeapTuple statstuple,\nint reqkind, Oid reqop, int flags, int startslot)\n{\n [...]\n for (i = startslot; i < STATISTIC_NUM_SLOTS; i++)\n {\n if ((&stats->stakind1)[i] == reqkind &&\n (reqop == InvalidOid || (&stats->staop1)[i] == reqop))\n break;\n }\n [...]\n}\n\ndouble\nvar_eq_const_mobdb(VariableStatData *vardata, Oid operator, Datum constval,\n bool negate, int startslot)\n{\n [...]\n}\nSelectivity\nscalarineqsel_mobdb(PlannerInfo *root, Oid operator, bool isgt, bool iseq,\nVariableStatData *vardata, Datum constval, Oid consttype,\n int startslot)\n{\n [...]\n}\n\nstatic Selectivity\nmcv_selectivity_mobdb(VariableStatData *vardata, FmgrInfo *opproc,\nDatum constval, Oid atttype, bool varonleft,\n double *sumcommonp, int startslot)\n{\n [...]\n}\nstatic double\nineq_histogram_selectivity_mobdb(PlannerInfo *root, VariableStatData *\nvardata,\nFmgrInfo *opproc, bool isgt, bool iseq, Datum constval,\n Oid consttype, int startslot)\n{\n [...]\n}\n\nin addition to copying other functions needed by the above functions since\nthey are not exported (defined as static)\n\nstatic bool\nget_actual_variable_range(PlannerInfo *root, VariableStatData *vardata,\nOid sortop, Datum *min, Datum *max)\n\nstatic bool\nget_actual_variable_endpoint(Relation heapRel,\nRelation indexRel, ScanDirection indexscandir,\nScanKey scankeys, int16 
typLen,\nbool typByVal, MemoryContext outercontext,\nDatum *endpointDatum)\n\n[...]\n\nIs there a better way to do this ?\n\nIs there any chance that the API for accessing the typanalyze and\nselectivity functions will be enhanced in a future release ?\n\nRegards\n\nEsteban\n\n-- \n------------------------------------------------------------\nProf. Esteban Zimanyi\nDepartment of Computer & Decision Engineering (CoDE) CP 165/15\nUniversite Libre de Bruxelles\nAvenue F. D. Roosevelt 50\nB-1050 Brussels, Belgium\nfax: + 32.2.650.47.13\ntel: + 32.2.650.31.85\ne-mail: ezimanyi@ulb.ac.be\nInternet: http://code.ulb.ac.be/\n------------------------------------------------------------",
"msg_date": "Thu, 5 Sep 2019 11:39:44 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Specifying attribute slot for storing/reading statistics"
},
{
"msg_contents": "Esteban Zimanyi <ezimanyi@ulb.ac.be> writes:\n> We are developing the analyze/selectivity functions for those types. Our\n> approach is to use the standard PostgreSQL/PostGIS functions for the value\n> and the time dimensions where the slots starting from 0 will be used for\n> the value dimension, and the slots starting from 2 will be used for the\n> time dimension. For example, for tfloat we use range_typanalyze and related\n> functions for\n> * collecting in slots 0 and 1, STATISTIC_KIND_BOUNDS_HISTOGRAM\n> and STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM for the float ranges of the value\n> dimension\n> * collecting in slots 2 and 3, STATISTIC_KIND_BOUNDS_HISTOGRAM\n> and STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM for the periods (similar to\n> tstzranges) of the time dimension\n\nIMO this is fundamentally wrong, or at least contrary to the design\nof pg_statistic. It is not supposed to matter which \"slot\" a given\nstatistic type is actually stored in; rather, readers are supposed to\nsearch for the desired statistic type using the stakindN, staopN and\n(if relevant) stacollN fields.\n\nIn this case it seems like it'd be reasonable to rely on the staop\nfields to distinguish between the value and time dimensions, since\n(IIUC) they're of different types.\n\nAnother idea is to invent your own slot kind identifiers instead of\nusing built-in ones. I'm not sure that there's any point in using\nthe built-in kind values, since (a) none of the core selectivity code\nis likely to get called on your data and (b) even if it were, it'd\nlikely do the wrong thing. 
See the comments in pg_statistic.h,\nstarting about line 150, about assignment of non-built-in slot kinds.\n\n> Is there any chance that the API for accessing the typanalyze and\n> selectivity functions will be enhanced in a future release ?\n\nWell, maybe you could convince us that the stakind/staop scheme for\nidentifying statistics is inadequate so we need another identification\nfield (corresponding to a component of the column being described,\nperhaps). I'd be strongly against assigning any semantic meaning\nto the slot numbers, though. That's likely to break code that's\nwritten according to existing conventions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Sep 2019 11:11:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Specifying attribute slot for storing/reading statistics"
},
{
"msg_contents": "Dear Tom\n\nMany thanks for your quick reply. Indeed both solutions you proposed can be\ncombined together in order to solve all the problems. However changes in\nthe code are needed. Let me now elaborate on the solution concerning the\ncombination of stakind/staop first and I will elaborate on adding a new\nkind identifier after.\n\nIn order to understand the setting, let me explain a little more about the\ndifferent kinds of temporal types. As explained in my previous email these\nare types whose values are composed of elements v@t where v is a\nPostgreSQL/PostGIS type (float or geometry) and t is a TimestampTz. There\nare four kinds of temporal types, depending on the their duration\n* Instant: Values of the form v@t. These are used for example to represent\ncar accidents as in Point(0 0)@2000-01-01 08:30\n* InstantSet: A set of values {v1@t1, ...., vn@tn} where the values between\nthe points are unknown. These are used for example to represent checkins in\nFourSquare or RFID readings\n* Sequence: A sequence of values [v1@t1, ...., vn@tn] where the values\nbetween two successive instants vi@ti vj@tj are (linearly) interpolated.\nThese are used to represent for example GPS tracks.\n* SequenceSet: A set of sequences {s1, ... , sn} where there is a temporal\ngap between them. These are used to represent for example GPS tracks where\nthe signal was lost during a time period.\n\nTo compute the selectivity of temporal types we assume that time and space\ndimensions are independent and thus we can reuse all existing analyze and\nselectivity infrastructure in PostgreSQL/PostGIS. 
For the various durations\nthis amounts to\n* Instant: Use the functions in analyze.c and selfuncs.c independently for\nthe value and time dimensions\n* InstantSet: Use the functions in array_typanalyze.c, array_selfuncs.c\nindependently for the value and time dimensions\n* Sequence and SequenceSet: To simplify, we do not take into account the\ngaps, and thus use the functions in rangetypes_typanalyze.c,\nrangetypes_selfuncs.c independently for the value and time dimensions\n\nHowever, this requires that the analyze and selectivity functions in all\nthe above files satisfy the following\n* Set the staop when computing statistics. For example in\nrangetypes_typanalyze.c the staop is set for\nSTATISTIC_KIND_RANGE_LENGTH_HISTOGRAM but not for\nSTATISTIC_KIND_BOUNDS_HISTOGRAM\n* Always call get_attstatsslot with the operator Oid not with InvalidOid.\nFor example, from the 17 times this function is called in selfuncs.c only\ntwo are passed with an operator. This also requires to pass the operator as\nan additional parameter to several functions. For example, the operator\nshould be passed to the function ineq_histogram_selectivity in selfuncs.c\n* Export several top-level functions which are currently static. For\nexample, var_eq_const, ineq_histogram_selectivity, eqjoinsel_inner and\nseveral others in the file selfuncs.c should be exported.\n\nThat would solve all the problems excepted for\nSTATISTIC_KIND_RANGE_LENGTH_HISTOGRAM, since in this case the staop will\nalways be Float8LessOperator, independently of whether we are computing\nlengths of value ranges or of tstzranges. This could be solved by using a\ndifferent stakind for the value and time dimensions.\n\nIf you want I can prepare a PR in order to understand the implications of\nthese changes. 
Please let me know.\n\nEsteban\n\n\nOn Thu, Sep 5, 2019 at 5:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Esteban Zimanyi <ezimanyi@ulb.ac.be> writes:\n> > We are developing the analyze/selectivity functions for those types. Our\n> > approach is to use the standard PostgreSQL/PostGIS functions for the\n> value\n> > and the time dimensions where the slots starting from 0 will be used for\n> > the value dimension, and the slots starting from 2 will be used for the\n> > time dimension. For example, for tfloat we use range_typanalyze and\n> related\n> > functions for\n> > * collecting in slots 0 and 1, STATISTIC_KIND_BOUNDS_HISTOGRAM\n> > and STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM for the float ranges of the\n> value\n> > dimension\n> > * collecting in slots 2 and 3, STATISTIC_KIND_BOUNDS_HISTOGRAM\n> > and STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM for the periods (similar to\n> > tstzranges) of the time dimension\n>\n> IMO this is fundamentally wrong, or at least contrary to the design\n> of pg_statistic. It is not supposed to matter which \"slot\" a given\n> statistic type is actually stored in; rather, readers are supposed to\n> search for the desired statistic type using the stakindN, staopN and\n> (if relevant) stacollN fields.\n>\n> In this case it seems like it'd be reasonable to rely on the staop\n> fields to distinguish between the value and time dimensions, since\n> (IIUC) they're of different types.\n>\n> Another idea is to invent your own slot kind identifiers instead of\n> using built-in ones. I'm not sure that there's any point in using\n> the built-in kind values, since (a) none of the core selectivity code\n> is likely to get called on your data and (b) even if it were, it'd\n> likely do the wrong thing. 
See the comments in pg_statistic.h,\n> starting about line 150, about assignment of non-built-in slot kinds.\n>\n> > Is there any chance that the API for accessing the typanalyze and\n> > selectivity functions will be enhanced in a future release ?\n>\n> Well, maybe you could convince us that the stakind/staop scheme for\n> identifying statistics is inadequate so we need another identification\n> field (corresponding to a component of the column being described,\n> perhaps). I'd be strongly against assigning any semantic meaning\n> to the slot numbers, though. That's likely to break code that's\n> written according to existing conventions.\n>\n> regards, tom lane\n>\n\nDear TomMany thanks for your quick reply. Indeed both solutions you proposed can be combined together in order to solve all the problems. However changes in the code are needed. Let me now elaborate on the solution concerning the combination of stakind/staop first and I will elaborate on adding a new kind identifier after.In order to understand the setting, let me explain a little more about the different kinds of temporal types. As explained in my previous email these are types whose values are composed of elements v@t where v is a PostgreSQL/PostGIS type (float or geometry) and t is a TimestampTz. There are four kinds of temporal types, depending on the their duration* Instant: Values of the form v@t. These are used for example to represent car accidents as in Point(0 0)@2000-01-01 08:30* InstantSet: A set of values {v1@t1, ...., vn@tn} where the values between the points are unknown. These are used for example to represent checkins in FourSquare or RFID readings* Sequence: A sequence of values [v1@t1, ...., vn@tn] where the values between two successive instants vi@ti vj@tj are (linearly) interpolated. These are used to represent for example GPS tracks.* SequenceSet: A set of sequences {s1, ... , sn} where there is a temporal gap between them. 
These are used to represent for example GPS tracks where the signal was lost during a time period.To compute the selectivity of temporal types we assume that time and space dimensions are independent and thus we can reuse all existing analyze and selectivity infrastructure in PostgreSQL/PostGIS. For the various durations this amounts to* Instant: Use the functions in analyze.c and selfuncs.c independently for the value and time dimensions* InstantSet: Use the functions in array_typanalyze.c, array_selfuncs.c independently for the value and time dimensions* Sequence and SequenceSet: To simplify, we do not take into account the gaps, and thus use the functions in rangetypes_typanalyze.c, rangetypes_selfuncs.c independently for the value and time dimensionsHowever, this requires that the analyze and selectivity functions in all the above files satisfy the following* Set the staop when computing statistics. For example in rangetypes_typanalyze.c the staop is set for STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM but not for STATISTIC_KIND_BOUNDS_HISTOGRAM* Always call get_attstatsslot with the operator Oid not with InvalidOid. For example, from the 17 times this function is called in selfuncs.c only two are passed with an operator. This also requires to pass the operator as an additional parameter to several functions. For example, the operator should be passed to the function ineq_histogram_selectivity in selfuncs.c* Export several top-level functions which are currently static. For example, var_eq_const, ineq_histogram_selectivity, eqjoinsel_inner and several others in the file selfuncs.c should be exported.That would solve all the problems excepted for STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM, since in this case the staop will always be Float8LessOperator, independently of whether we are computing lengths of value ranges or of tstzranges. 
This could be solved by using a different stakind for the value and time dimensions.If you want I can prepare a PR in order to understand the implications of these changes. Please let me know.EstebanOn Thu, Sep 5, 2019 at 5:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Esteban Zimanyi <ezimanyi@ulb.ac.be> writes:\n> We are developing the analyze/selectivity functions for those types. Our\n> approach is to use the standard PostgreSQL/PostGIS functions for the value\n> and the time dimensions where the slots starting from 0 will be used for\n> the value dimension, and the slots starting from 2 will be used for the\n> time dimension. For example, for tfloat we use range_typanalyze and related\n> functions for\n> * collecting in slots 0 and 1, STATISTIC_KIND_BOUNDS_HISTOGRAM\n> and STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM for the float ranges of the value\n> dimension\n> * collecting in slots 2 and 3, STATISTIC_KIND_BOUNDS_HISTOGRAM\n> and STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM for the periods (similar to\n> tstzranges) of the time dimension\n\nIMO this is fundamentally wrong, or at least contrary to the design\nof pg_statistic. It is not supposed to matter which \"slot\" a given\nstatistic type is actually stored in; rather, readers are supposed to\nsearch for the desired statistic type using the stakindN, staopN and\n(if relevant) stacollN fields.\n\nIn this case it seems like it'd be reasonable to rely on the staop\nfields to distinguish between the value and time dimensions, since\n(IIUC) they're of different types.\n\nAnother idea is to invent your own slot kind identifiers instead of\nusing built-in ones. I'm not sure that there's any point in using\nthe built-in kind values, since (a) none of the core selectivity code\nis likely to get called on your data and (b) even if it were, it'd\nlikely do the wrong thing. 
See the comments in pg_statistic.h,\nstarting about line 150, about assignment of non-built-in slot kinds.\n\n> Is there any chance that the API for accessing the typanalyze and\n> selectivity functions will be enhanced in a future release ?\n\nWell, maybe you could convince us that the stakind/staop scheme for\nidentifying statistics is inadequate so we need another identification\nfield (corresponding to a component of the column being described,\nperhaps). I'd be strongly against assigning any semantic meaning\nto the slot numbers, though. That's likely to break code that's\nwritten according to existing conventions.\n\n regards, tom lane",
"msg_date": "Fri, 6 Sep 2019 12:50:33 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Re: Specifying attribute slot for storing/reading statistics"
},
{
"msg_contents": "Hi,\n\nPlease don't top-post. If you're not responding to parts of the e-mail,\nthen don't quote it.\n\nOn Fri, Sep 06, 2019 at 12:50:33PM +0200, Esteban Zimanyi wrote:\n>Dear Tom\n>\n>Many thanks for your quick reply. Indeed both solutions you proposed can be\n>combined together in order to solve all the problems. However changes in\n>the code are needed. Let me now elaborate on the solution concerning the\n>combination of stakind/staop first and I will elaborate on adding a new\n>kind identifier after.\n>\n>In order to understand the setting, let me explain a little more about the\n>different kinds of temporal types. As explained in my previous email these\n>are types whose values are composed of elements v@t where v is a\n>PostgreSQL/PostGIS type (float or geometry) and t is a TimestampTz. There\n>are four kinds of temporal types, depending on the their duration\n>* Instant: Values of the form v@t. These are used for example to represent\n>car accidents as in Point(0 0)@2000-01-01 08:30\n>* InstantSet: A set of values {v1@t1, ...., vn@tn} where the values between\n>the points are unknown. These are used for example to represent checkins in\n>FourSquare or RFID readings\n>* Sequence: A sequence of values [v1@t1, ...., vn@tn] where the values\n>between two successive instants vi@ti vj@tj are (linearly) interpolated.\n>These are used to represent for example GPS tracks.\n>* SequenceSet: A set of sequences {s1, ... , sn} where there is a temporal\n>gap between them. These are used to represent for example GPS tracks where\n>the signal was lost during a time period.\n>\n\nSo these are 4 different data types (or classes of data types) that you\nintroduce in your extension? Or is that just a conceptual view and it's\nstored in some other way (e.g. 
normalized in some way)?\n\n>To compute the selectivity of temporal types we assume that time and space\n>dimensions are independent and thus we can reuse all existing analyze and\n>selectivity infrastructure in PostgreSQL/PostGIS. For the various durations\n>this amounts to\n>* Instant: Use the functions in analyze.c and selfuncs.c independently for\n>the value and time dimensions\n>* InstantSet: Use the functions in array_typanalyze.c, array_selfuncs.c\n>independently for the value and time dimensions\n>* Sequence and SequenceSet: To simplify, we do not take into account the\n>gaps, and thus use the functions in rangetypes_typanalyze.c,\n>rangetypes_selfuncs.c independently for the value and time dimensions\n>\n\nOK.\n\n>However, this requires that the analyze and selectivity functions in all\n>the above files satisfy the following\n>* Set the staop when computing statistics. For example in\n>rangetypes_typanalyze.c the staop is set for\n>STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM but not for\n>STATISTIC_KIND_BOUNDS_HISTOGRAM\n>* Always call get_attstatsslot with the operator Oid not with InvalidOid.\n>For example, from the 17 times this function is called in selfuncs.c only\n>two are passed with an operator. This also requires to pass the operator as\n>an additional parameter to several functions. For example, the operator\n>should be passed to the function ineq_histogram_selectivity in selfuncs.c\n>* Export several top-level functions which are currently static. For\n>example, var_eq_const, ineq_histogram_selectivity, eqjoinsel_inner and\n>several others in the file selfuncs.c should be exported.\n>\n>That would solve all the problems excepted for\n>STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM, since in this case the staop will\n>always be Float8LessOperator, independently of whether we are computing\n>lengths of value ranges or of tstzranges. 
This could be solved by using a\n>different stakind for the value and time dimensions.\n>\n\nI don't think we're strongly against changing the code to allow this, as \nlong as it does not break existing extensions/code (unnecessarily).\n\n>If you want I can prepare a PR in order to understand the implications of\n>these changes. Please let me know.\n>\n\nI think having an actual patch to look at would be helpful.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 10 Sep 2019 15:30:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Specifying attribute slot for storing/reading statistics"
},
{
"msg_contents": ">\n> So these are 4 different data types (or classes of data types) that you\n> introduce in your extension? Or is that just a conceptual view and it's\n> stored in some other way (e.g. normalized in some way)?\n>\n\nAt the SQL level these 4 durations are not distinguishable. For example for\na tfloat (temporal float) we can have\n\nselect tfloat '1@2000-01-01' -- Instant duration\nselect tfloat '{1@2000-01-01 , 2@2000-01-02 , 1@2000-01-03}' -- Instant set\nduration\nselect tfloat '[1@2000-01-01, 2@2000-01-02 , 1@2000-01-03)' -- Sequence\nduration, left-inclusive and right-exclusive bound,\nselect tfloat {'[1@2000-01-01, 2@2000-01-02 , 1@2000-01-03], '[1@2000-01-04,\n1@2000-01-05]} ' -- Sequence set duration\n\nNevertheless it is possible to restrict a column to a specific duration\nwith a typymod specifier as in\n\ncreate table test ( ..., measure tfloat(Instant) -- only Instant durations\naccepted, ...)\n\nAt the C level these 4 durations are distinguished and implement in\nsomething equivalent to a template abstract class Temporal with four\nsubclasses TemporalInst, TemporalI, TemporalSeq, and TemporalS. Indeed the\nalgorithms for manipulating these 4 durations are completely different.\nThey are called template classes since they keep the Oid of the base type\n(float for tfloat or geometry for tgeompoint) in the same way array or\nranges do.\n\nFor more information please refer to the manual at github\nhttps://github.com/ULB-CoDE-WIT/MobilityDB/\n\n\n> I don't think we're strongly against changing the code to allow this, as\n> long as it does not break existing extensions/code (unnecessarily).\n>\n> >If you want I can prepare a PR in order to understand the implications of\n> >these changes. Please let me know.\n> >\n>\n> I think having an actual patch to look at would be helpful.\n>\n\nI am preparing a first patch for the files selfuncs.h and selfunc.c and\nthus for instant duration selectivity. 
It basically\n1) Moves some prototypes of the static functions from the .c to the .h file\nso that the functions are exported.\n2) Passes the operator from the top level functions to the inner functions\nsuch as mcv_selectivity or ineq_histogram_selectivity.\n\nThis allows me to call the functions twice, once for the value component\nand another for the time component, e.g. as follows.\n\n else if (cachedOp == CONTAINED_OP || cachedOp == OVERLAPS_OP)\n {\n /* Enable the addition of the selectivity of the value and time\n * dimensions since either may be missing */\n int selec_value = 1.0, selec_time = 1.0;\n\n /* Selectivity for the value dimension */\n if (MOBDB_FLAGS_GET_X(box->flags))\n {\n operator = oper_oid(LT_OP, valuetypid, valuetypid);\n selec_value = scalarineqsel(root, operator, false, false\n, vardata,\n Float8GetDatum(box->xmin), valuetypid);\n operator = oper_oid(GT_OP, valuetypid, valuetypid);\n selec_value += scalarineqsel(root, operator, true, false\n, vardata,\n Float8GetDatum(box->xmax), valuetypid);\n selec_value = 1 - selec_value;\n }\n /* Selectivity for the time dimension */\n if (MOBDB_FLAGS_GET_T(box->flags))\n {\n operator = oper_oid(LT_OP, T_TIMESTAMPTZ, T_TIMESTAMPTZ);\n selec_time = scalarineqsel(root, operator, false, false\n, vardata,\n TimestampTzGetDatum(box->tmin), TIMESTAMPTZOID);\n operator = oper_oid(GT_OP, T_TIMESTAMPTZ, T_TIMESTAMPTZ);\n selec_time += scalarineqsel(root, operator, true, false\n, vardata,\n TimestampTzGetDatum(box->tmax), TIMESTAMPTZOID);\n selec_time = 1 - selec_time;\n }\n selec = selec_value * selec_time;\n }\n\nRegards\n\nEsteban\n\nSo these are 4 different data types (or classes of data types) that you\nintroduce in your extension? Or is that just a conceptual view and it's\nstored in some other way (e.g. normalized in some way)?At the SQL level these 4 durations are not distinguishable. 
For example for a tfloat (temporal float) we can haveselect tfloat '1@2000-01-01' -- Instant durationselect tfloat '{1@2000-01-01\n\n, \n\n2@2000-01-02\n\n\n, 1@2000-01-03}' -- Instant set durationselect tfloat '[1@2000-01-01, \n\n2@2000-01-02\n\n\n\n\n, 1@2000-01-03)' -- Sequence duration, left-inclusive and right-exclusive bound, \n\nselect tfloat {'[1@2000-01-01, \n\n2@2000-01-02\n\n\n\n\n, 1@2000-01-03], \n\n'[1@2000-01-04, 1@2000-01-05]} ' -- Sequence set durationNevertheless it is possible to restrict a column to a specific duration with a typymod specifier as increate table test ( ..., measure tfloat(Instant) -- only Instant durations accepted, ...)At the C level these 4 durations are distinguished and implement in something equivalent to a template abstract class Temporal with four subclasses TemporalInst, TemporalI, TemporalSeq, and TemporalS. Indeed the algorithms for manipulating these 4 durations are completely different. \n\nThey are called template classes since they keep the Oid of the base type (float for tfloat or geometry for tgeompoint) in the same way array or ranges do. \n\n For more information please refer to the manual at githubhttps://github.com/ULB-CoDE-WIT/MobilityDB/ I don't think we're strongly against changing the code to allow this, as \nlong as it does not break existing extensions/code (unnecessarily).\n\n>If you want I can prepare a PR in order to understand the implications of\n>these changes. Please let me know.\n>\n\nI think having an actual patch to look at would be helpful.I am preparing a first patch for the files selfuncs.h and selfunc.c and thus for instant duration selectivity. 
It basically 1) Moves some prototypes of the static functions from the .c to the .h file so that the functions are exported.2) Passes the operator from the top level functions to the inner functions such as mcv_selectivity or ineq_histogram_selectivity.This allows me to call the functions twice, once for the value component and another for the time component, e.g. as follows. else if (cachedOp == CONTAINED_OP || cachedOp == OVERLAPS_OP) { /* Enable the addition of the selectivity of the value and time * dimensions since either may be missing */ int selec_value = 1.0, selec_time = 1.0; /* Selectivity for the value dimension */ if (MOBDB_FLAGS_GET_X(box->flags)) { operator = oper_oid(LT_OP, valuetypid, valuetypid); selec_value = scalarineqsel(root, operator, false, false, vardata, Float8GetDatum(box->xmin), valuetypid); operator = oper_oid(GT_OP, valuetypid, valuetypid); selec_value += scalarineqsel(root, operator, true, false, vardata, Float8GetDatum(box->xmax), valuetypid); selec_value = 1 - selec_value; } /* Selectivity for the time dimension */ if (MOBDB_FLAGS_GET_T(box->flags)) { operator = oper_oid(LT_OP, T_TIMESTAMPTZ, T_TIMESTAMPTZ); selec_time = scalarineqsel(root, operator, false, false, vardata, TimestampTzGetDatum(box->tmin), TIMESTAMPTZOID); operator = oper_oid(GT_OP, T_TIMESTAMPTZ, T_TIMESTAMPTZ); selec_time += scalarineqsel(root, operator, true, false, vardata, TimestampTzGetDatum(box->tmax), TIMESTAMPTZOID); selec_time = 1 - selec_time; } selec = selec_value * selec_time; }RegardsEsteban",
"msg_date": "Thu, 12 Sep 2019 11:22:13 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Re: Specifying attribute slot for storing/reading statistics"
}
]
[
{
"msg_contents": "Use data directory inode number, not port, to select SysV resource keys.\n\nThis approach provides a much tighter binding between a data directory\nand the associated SysV shared memory block (and SysV or named-POSIX\nsemaphores, if we're using those). Key collisions are still possible,\nbut only between data directories stored on different filesystems,\nso the situation should be negligible in practice. More importantly,\nrestarting the postmaster with a different port number no longer\nrisks failing to identify a relevant shared memory block, even when\npostmaster.pid has been removed. A standalone backend is likewise\nmuch more certain to detect conflicting leftover backends.\n\n(In the longer term, we might now think about deprecating the port as\na cluster-wide value, so that one postmaster could support sockets\nwith varying port numbers. But that's for another day.)\n\nThe hazards fixed here apply only on Unix systems; our Windows code\npaths already use identifiers derived from the data directory path\nname rather than the port.\n\nsrc/test/recovery/t/017_shm.pl, which intends to test key-collision\ncases, has been substantially rewritten since it can no longer use\ntwo postmasters with identical port numbers to trigger the case.\nInstead, use Perl's IPC::SharedMem module to create a conflicting\nshmem segment directly. The test script will be skipped if that\nmodule is not available. 
(This means that some older buildfarm\nmembers won't run it, but I don't think that that results in any\nmeaningful coverage loss.)\n\nPatch by me; thanks to Noah Misch and Peter Eisentraut for discussion\nand review.\n\nDiscussion: https://postgr.es/m/16908.1557521200@sss.pgh.pa.us\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/7de19fbc0b1a9172d0907017302b32846b2887b9\n\nModified Files\n--------------\nsrc/backend/port/posix_sema.c | 23 ++++--\nsrc/backend/port/sysv_sema.c | 23 ++++--\nsrc/backend/port/sysv_shmem.c | 38 +++++----\nsrc/backend/port/win32_sema.c | 2 +-\nsrc/backend/port/win32_shmem.c | 2 +-\nsrc/backend/postmaster/postmaster.c | 25 +++---\nsrc/backend/storage/ipc/ipci.c | 6 +-\nsrc/backend/utils/init/postinit.c | 8 +-\nsrc/include/storage/ipc.h | 2 +-\nsrc/include/storage/pg_sema.h | 2 +-\nsrc/include/storage/pg_shmem.h | 2 +-\nsrc/test/recovery/t/017_shm.pl | 150 +++++++++++++++++++-----------------\n12 files changed, 159 insertions(+), 124 deletions(-)\n\n",
"msg_date": "Thu, 05 Sep 2019 17:32:04 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Use data directory inode number, not port,\n to select SysV resour"
},
{
"msg_contents": "\nOn 9/5/19 1:32 PM, Tom Lane wrote:\n> Use data directory inode number, not port, to select SysV resource keys.\n>\n> This approach provides a much tighter binding between a data directory\n> and the associated SysV shared memory block (and SysV or named-POSIX\n> semaphores, if we're using those). Key collisions are still possible,\n> but only between data directories stored on different filesystems,\n> so the situation should be negligible in practice. More importantly,\n> restarting the postmaster with a different port number no longer\n> risks failing to identify a relevant shared memory block, even when\n> postmaster.pid has been removed. A standalone backend is likewise\n> much more certain to detect conflicting leftover backends.\n>\n> (In the longer term, we might now think about deprecating the port as\n> a cluster-wide value, so that one postmaster could support sockets\n> with varying port numbers. But that's for another day.)\n>\n> The hazards fixed here apply only on Unix systems; our Windows code\n> paths already use identifiers derived from the data directory path\n> name rather than the port.\n>\n> src/test/recovery/t/017_shm.pl, which intends to test key-collision\n> cases, has been substantially rewritten since it can no longer use\n> two postmasters with identical port numbers to trigger the case.\n> Instead, use Perl's IPC::SharedMem module to create a conflicting\n> shmem segment directly. The test script will be skipped if that\n> module is not available. 
(This means that some older buildfarm\n> members won't run it, but I don't think that that results in any\n> meaningful coverage loss.)\n>\n> Patch by me; thanks to Noah Misch and Peter Eisentraut for discussion\n> and review.\n>\n> Discussion: https://postgr.es/m/16908.1557521200@sss.pgh.pa.us\n>\n\n\nThis has caused the 017_shm.pl tests to be skipped on jacana and\nbowerbird, and to fail completely on my msys2 test system where the Perl\nhas the relevant IPC:: modules, unlike the buildfarm animals.\n\n\nMaybe we need to fall back on the older code on Windows?\n\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 6 Sep 2019 11:09:02 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use data directory inode number, not port, to select SysV\n resour"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 9/5/19 1:32 PM, Tom Lane wrote:\n>> Use data directory inode number, not port, to select SysV resource keys.\n\n> This has caused the 017_shm.pl tests to be skipped on jacana and\n> bowerbird, and to fail completely on my msys2 test system where the Perl\n> has the relevant IPC:: modules, unlike the buildfarm animals.\n\nI intended 017_shm.pl to be skipped on Windows builds; it's not apparent\nto me that that script tests anything useful when we're not using SysV\nshared memory.\n\nI don't quite understand what the msys2 platform might be doing with\nthese IPC modules. Do they actually do anything, or just fail at\nruntime? If the latter, maybe we can add something to the eval{}\nblock to check for present-but-doesnt-work?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2019 11:35:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Use data directory inode number, not port,\n to select SysV resour"
},
{
"msg_contents": "\nOn 9/6/19 11:35 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 9/5/19 1:32 PM, Tom Lane wrote:\n>>> Use data directory inode number, not port, to select SysV resource keys.\n>> This has caused the 017_shm.pl tests to be skipped on jacana and\n>> bowerbird, and to fail completely on my msys2 test system where the Perl\n>> has the relevant IPC:: modules, unlike the buildfarm animals.\n> I intended 017_shm.pl to be skipped on Windows builds; it's not apparent\n> to me that that script tests anything useful when we're not using SysV\n> shared memory.\n>\n> I don't quite understand what the msys2 platform might be doing with\n> these IPC modules. Do they actually do anything, or just fail at\n> runtime? If the latter, maybe we can add something to the eval{}\n> block to check for present-but-doesnt-work?\n\n\nGiven your stated intention, I think the simplest way to get it is just\nthis, without worrying about what the perl modules might do:\n\n\ndiff --git a/src/test/recovery/t/017_shm.pl b/src/test/recovery/t/017_shm.pl\nindex a29ef78855..dc0dcd3ca2 100644\n--- a/src/test/recovery/t/017_shm.pl\n+++ b/src/test/recovery/t/017_shm.pl\n@@ -18,7 +18,7 @@ eval {\n require IPC::SysV;\n IPC::SysV->import(qw(IPC_CREAT IPC_EXCL S_IRUSR S_IWUSR));\n };\n-if ($@)\n+if ($@ || $windows_os)\n {\n plan skip_all => 'SysV shared memory not supported by this platform';\n }\n\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 6 Sep 2019 14:26:13 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use data directory inode number, not port, to select SysV\n resour"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Given your stated intention, I think the simplest way to get it is just\n> this, without worrying about what the perl modules might do:\n\n> -if ($@)\n> +if ($@ || $windows_os)\n\nWFM, do you want to push that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2019 14:42:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Use data directory inode number, not port,\n to select SysV resour"
},
{
"msg_contents": "\nOn 9/6/19 2:42 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> Given your stated intention, I think the simplest way to get it is just\n>> this, without worrying about what the perl modules might do:\n>> -if ($@)\n>> +if ($@ || $windows_os)\n> WFM, do you want to push that?\n>\n> \t\t\t\n\n\ndone.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 6 Sep 2019 15:51:54 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use data directory inode number, not port, to select SysV\n resour"
},
{
"msg_contents": "\nOn 9/6/19 3:51 PM, Andrew Dunstan wrote:\n> On 9/6/19 2:42 PM, Tom Lane wrote:\n>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> Given your stated intention, I think the simplest way to get it is just\n>>> this, without worrying about what the perl modules might do:\n>>> -if ($@)\n>>> +if ($@ || $windows_os)\n>> WFM, do you want to push that?\n>>\n>> \t\t\t\n>\n> done.\n>\n>\n\n[redirected to -hackers]\n\n\nI'm going to disable this test (src/test/recovery/t/017_shm.pl) on\nWindows on the back branches too unless there's a violent objection. The\nreason is that the script runs \"postgres --single\" and that fails on\nWindows when run by an administrative account. We've carefully enabled\npostgres and its tests to run safely under an admin account. I\ndiscovered this as part of my myss2 testing.\n\n\ncheers\n\n\nandrew\n\n\n\n",
"msg_date": "Sun, 8 Sep 2019 17:54:12 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use data directory inode number, not port, to select SysV\n resour"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> I'm going to disable this test (src/test/recovery/t/017_shm.pl) on\n> Windows on the back branches too unless there's a violent objection.\n\nAs I said before, I think that test does nothing useful unless SysV\nshmem is in use, so I see no reason not to disable it on Windows.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Sep 2019 18:00:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Use data directory inode number, not port,\n to select SysV resour"
},
{
"msg_contents": "On Sun, Sep 08, 2019 at 05:54:12PM -0400, Andrew Dunstan wrote:\n> I'm going to disable this test (src/test/recovery/t/017_shm.pl) on\n> Windows on the back branches too unless there's a violent objection. The\n> reason is that the script runs \"postgres --single\" and that fails on\n> Windows when run by an administrative account. We've carefully enabled\n> postgres and its tests to run safely under an admin account. I\n> discovered this as part of my myss2 testing.\n\nI'm reading that the test falsified this assertion that we've enabled postgres\nto run safely under an admin account. Enabling safe use of admin accounts\nentails fixing single-user mode. (Alternately, one could replace the \"vacuum\nthat database in single-user mode\" errhint with a reference to some\nnot-yet-built alternative. That sounds harder.)\n\n\n",
"msg_date": "Fri, 13 Sep 2019 06:20:28 +0000",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use data directory inode number, not port, to select SysV\n resour"
}
]
[
{
"msg_contents": "Way back in 2011, commit 57eb009092684e6e1788dd0dae641ccee1668b10\nmoved AbortTransaction's AtEOXact_Snapshot call to CleanupTransaction\nto fix a problem when a ROLLBACK statement was prepared at the\nprotocol level and executed in a transaction with REPEATABLE READ or\nhigher isolation. RevalidateCachedQuery would attempt to obtain a\nsnapshot and end up failing. At the time, it was judged that\nplancache.c was behaving correctly and this logic was rejiggered to\nmake that coding pattern safe. However, commit\nac63dca607e8e22247defbc8fe03b6baa3628c42 subsequently taught\nRevalidateCachedQuery not to obtain a snapshot for such commands after\nall while fixing an unrelated bug, and there now seems to be no case\nin which we obtain a snapshot in an aborted transaction.\n\nI'd like to propose that we upgrade that practice to a formal rule.\nWe've already taken some steps in this direction; notably, commit\n42c80c696e9c8323841180029cc62741c21bd356 added an assertion to the\neffect that we never perform catcache lookups outside of a valid,\nnon-aborted transaction. However, right now, if you made the mistake\nof trying to access the database through some means other than a\ncatcache lookup in an aborted transaction, it might appear to work.\nActually, it would be terribly unsafe, because (1) you might've failed\nafter inserting a heap tuple and before inserting all the\ncorresponding index tuples and (2) any random subset of the tuples\ninserted by prior commands in your transaction might have been pruned\nby other backends after you removed your XID from the ProcArray, while\nothers would remain visible. In short, such a snapshot is not really\na snapshot at all.\n\nThe best way that I've been able to come up with to enforce this rule\nafter a little bit of thought is to add Assert(IsTransactionState())\nto a bunch of functions in snapmgr.c, most notably\nGetTransactionSnapshot and GetCatalogSnapshot. The attached patch does\nthat. 
It also makes the comments in RevalidateCachedQuery more\nexplicit about this issue, and it moves the AtEOXact_Snapshot call\nback to AbortTransaction, on the theory (or hope?) that it's better to\ndispose of resources sooner, especially resources that might look\nsuperficially valid but really are not.\n\nYou may (or may not) wonder why I'm poking at this apparently-obscure\ntopic. The answer is \"undo.\" Without getting into the gory details,\nit's better for undo if as much of the cleanup work as possible\nhappens at AbortTransaction() time and as little as possible is left\nuntil CleanupTransaction(). That seems like a good idea on general\nprinciple too, though, so I'm proposing this as an independent\ncleanup.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 5 Sep 2019 14:50:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "AtEOXact_Snapshot timing"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-05 14:50:50 -0400, Robert Haas wrote:\n> The best way that I've been able to come up with to enforce this rule\n> after a little bit of thought is to add Assert(IsTransactionState())\n> to a bunch of functions in snapmgr.c, most notably\n> GetTransactionSnapshot and GetCatalogSnapshot.\n\nWonder if there's a risk that callers might still have a snapshot\naround, in independent memory? It could make sense to add such an\nassert to a visibility routine or two, maybe?\n\nI suspect that we should add non-assert condition to a central place\nlike GetSnapshotData(). It's not hard to imagine extensions just using\nthat directly, and that we'd never notice that with assert only\ntesting. It's also hard to imagine a single if\n(unlikely(IsTransactionState())) to be expensive enough to matter\ncompared to GetSnapshotData()'s own cost.\n\nI wonder, not just because of the previous paragraph, whether it could\nbe worthwhile to expose enough xact.c state to allow\nIsTransactionState() to be done without a function call. ISMT a few\nAssert(IsTransactionState()) type checks really are important enough to\nbe done in production builds too. 
Some of the relevant scenarios aren't\neven close to being covered fully, and you'll get bad results if there's\nsuch a problem.\n\n> @@ -2732,6 +2732,18 @@ AbortTransaction(void)\n> \t\tpgstat_report_xact_timestamp(0);\n> \t}\n> \n> +\t/*\n> +\t * Any snapshots taken by this transaction were unsafe to use even at the\n> +\t * time when we entered AbortTransaction(), since we might have, for\n> +\t * example, inserted a heap tuple and failed while inserting index tuples.\n> +\t * They were even more unsafe after ProcArrayEndTransaction(), since after\n> +\t * that point tuples we inserted could be pruned by other backends.\n> +\t * However, we postpone the cleanup until this point in the sequence\n> +\t * because it has to be done after ResourceOwnerRelease() has finishing\n> +\t * unregistering snapshots.\n> +\t */\n> +\tAtEOXact_Snapshot(false, true);\n> +\n> \t/*\n> \t * State remains TRANS_ABORT until CleanupTransaction().\n> \t */\n\nHm. This means that\n\t\tif (is_parallel_worker)\n\t\t\tCallXactCallbacks(XACT_EVENT_PARALLEL_ABORT);\n\t\telse\n\t\t\tCallXactCallbacks(XACT_EVENT_ABORT);\nwhich, together with a few of the other functions, could plausibly try\nto use snapshot related logic, may end up continuing to use an existing\nsnapshot without us detecting the problem? I think? Especially with the\nasserts present ISTM that we really should kill the existing snapshots\ndirectly adjacent to ProcArrayEndTransaction(). As you say, after that\nthe snapshots aren't correct anymore. And with the right assertions we\nshould be safe against accidental reintroduction of catalog access in\nthe following code?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 5 Sep 2019 14:32:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: AtEOXact_Snapshot timing"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 5:32 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-09-05 14:50:50 -0400, Robert Haas wrote:\n> > The best way that I've been able to come up with to enforce this rule\n> > after a little bit of thought is to add Assert(IsTransactionState())\n> > to a bunch of functions in snapmgr.c, most notably\n> > GetTransactionSnapshot and GetCatalogSnapshot.\n>\n> Wonder if there's a risk that callers might still have a snapshot\n> around, in independent memory? It could make sense to add such an\n> assert to a visibility routine or two, maybe?\n>\n> I suspect that we should add non-assert condition to a central place\n> like GetSnapshotData(). It's not hard to imagine extensions just using\n> that directly, and that we'd never notice that with assert only\n> testing. It's also hard to imagine a single if\n> (unlikely(IsTransactionState())) to be expensive enough to matter\n> compared to GetSnapshotData()'s own cost.\n\nI guess we could do that, but I feel like it might be overkill. It\nseems unlikely to me that an extension would call GetSnapshotData()\ndirectly, but if it does, it's probably some kind of advanced wizardry\nand I'm happy to trust that the extension author knows what she's\ndoing (or she can keep both pieces if it breaks). 
For my $0.02,\ncalling GetTransactionSnapshot() in an aborted transaction is the kind\nof thing that's much more likely to be an unintentional goof.\n\n> > @@ -2732,6 +2732,18 @@ AbortTransaction(void)\n> > pgstat_report_xact_timestamp(0);\n> > }\n> >\n> > + /*\n> > + * Any snapshots taken by this transaction were unsafe to use even at the\n> > + * time when we entered AbortTransaction(), since we might have, for\n> > + * example, inserted a heap tuple and failed while inserting index tuples.\n> > + * They were even more unsafe after ProcArrayEndTransaction(), since after\n> > + * that point tuples we inserted could be pruned by other backends.\n> > + * However, we postpone the cleanup until this point in the sequence\n> > + * because it has to be done after ResourceOwnerRelease() has finishing\n> > + * unregistering snapshots.\n> > + */\n> > + AtEOXact_Snapshot(false, true);\n> > +\n> > /*\n> > * State remains TRANS_ABORT until CleanupTransaction().\n> > */\n>\n> Hm. This means that\n> if (is_parallel_worker)\n> CallXactCallbacks(XACT_EVENT_PARALLEL_ABORT);\n> else\n> CallXactCallbacks(XACT_EVENT_ABORT);\n> which, together with a few of the other functions, could plausibly try\n> to use snapshot related logic, may end up continuing to use an existing\n> snapshot without us detecting the problem? I think? Especially with the\n> asserts present ISTM that we really should kill the existing snapshots\n> directly adjacent to ProcArrayEndTransaction(). As you say, after that\n> the snapshots aren't correct anymore. And with the right assertions we\n> should be safe againsts accidental reintroduction of catalog access in\n> the following code?\n\nWell, I don't really see how to make those things precisely adjacent\nwithout a lot more rejiggering of the code, and that doesn't seem\nworth it to me. I think the main risk here is that someone tries to\nuse a snapshot after AbortTransaction() and before\nCleanupTransaction(), and that's what I want to try to block. 
The\nrisk that somebody's going to try to use a snapshot within\nAbortTransaction() seems lower to me, although obviously not zero.\nYou can't do anything that will plausibly fail in this code path, and\nthat's generally a tighter restriction than the one we're trying to\nenforce here.\n\nI agree with you that it would be nicer if we could put the killing of\nthe old snapshots directly adjacent to ProcArrayEndTransaction(), and\nthat was the first thing I tried, but it doesn't work, because\nresource owner cleanup has to run first. It's possible that we could\nsplit AtEOXact_Snapshot() into two pieces that run at different times,\nbut I think we'd be guarding against a vulnerability that is mostly\ntheoretical, and I don't really see the point. The only kind of\naccess that somebody's likely to attempt during AbortTransaction() is\ncatalog access, and that's likely to trip either the assertion I added\nin GetCatalogSnapshot() or the existing one in SearchCatCache().\nNon-catalog access would probably fail the assertion in\nGetTransactionSnapshot() or GetLatestSnapshot(). I guess we should\nprobably add a similar check in GetActiveSnapshot(); the attached\npatch adds that.\n\nI might be missing something here, so feel free to point out to me if\nthere's a plausible coding pattern I'm missing where the things you\nare proposing would actually be a problem. To me, it just feels like\nyou're worried about a scenario that would require writing the code in\na very unnatural way, like saving a snapshot in a global variable or\ncalling GetSnapshotData() directly. The fact that there's no existing\ncode that does those things should be enough to warn you off of doing\nthem. If it's not, I doubt that a mere assertion is going to stand in\nyour way...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 6 Sep 2019 10:25:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: AtEOXact_Snapshot timing"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-06 10:25:22 -0400, Robert Haas wrote:\n> On Thu, Sep 5, 2019 at 5:32 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-09-05 14:50:50 -0400, Robert Haas wrote:\n> > > The best way that I've been able to come up with to enforce this rule\n> > > after a little bit of thought is to add Assert(IsTransactionState())\n> > > to a bunch of functions in snapmgr.c, most notably\n> > > GetTransactionSnapshot and GetCatalogSnapshot.\n> >\n> > Wonder if there's a risk that callers might still have a snapshot\n> > around, in independent memory? It could make sense to add such an\n> > assert to a visibility routine or two, maybe?\n> >\n> > I suspect that we should add non-assert condition to a central place\n> > like GetSnapshotData(). It's not hard to imagine extensions just using\n> > that directly, and that we'd never notice that with assert only\n> > testing. It's also hard to imagine a single if\n> > (unlikely(IsTransactionState())) to be expensive enough to matter\n> > compared to GetSnapshotData()'s own cost.\n>\n> I guess we could do that, but I feel like it might be overkill. It\n> seems unlikely to me that an extension would call GetSnapshotData()\n> directly, but if it does, it's probably some kind of advanced wizardry\n> and I'm happy to trust that the extension author knows what she's\n> doing (or she can keep both pieces if it breaks).\n\nHm. I feel like there's plenty reasons to get a snapshot in extensions -\nthere's plenty APIs one cannot really call without doing so? 
What I'm\nworried about is not primarily that GetSnapshotData() is being called\ndirectly, but that $author got a snapshot previously, and then tries to\nuse it in an xact callback or such.\n\nI'd add asserts to at least PushActiveSnapshot(), and I don't see the\nharm in adding one to GetSnapshotData().\n\n\n> For my $0.02, calling GetTransactionSnapshot() in an aborted\n> transaction is the kind of thing that's much more likely to be an\n> unintentional goof.\n\nBased on what I've seen people do in xact callback handlers... Didn't PG\nitself e.g. have code doing syscache lookups in aborted transactions a\ncouple times?\n\nThe danger of using a previously acquired snapshot in that stage is why\nI was pondering adding an assert to visibility functions\nthemselves. E.g. just adding one to HeapTupleSatisfiesVisibility() might\nalready add a good bit of coverage.\n\n\n> I agree with you that it would be nicer if we could put the killing of\n> the old snapshots directly adjacent to ProcArrayEndTransaction(), and\n> that was the first thing I tried, but it doesn't work, because\n> resource owner cleanup has to run first.\n\nHm. I'd even say that it actually belongs to *before* the\nProcArrayEndTransaction() call.\n\n\nFor a bit I wondered if the resowner snapshot cleanup ought to be at\nleast moved to RESOURCE_RELEASE_BEFORE_LOCKS. Not that it actually\naddresses this issue, but it seems to belong there \"thematically\". But\nthen I honestly don't understand why most of the resowner managed\nresources in the abort sequence are released where they are. The only\nreally explanatory comment is:\n\n\t * The ordering of operations is not entirely random. The idea is:\n\t * release resources visible to other backends (eg, files, buffer pins);\n\t * then release locks; then release backend-local resources. We want to\n\t * release locks at the point where any backend waiting for us will see\n\t * our transaction as being fully cleaned up.\n\nbut that doesn't explain why we e.g. 
process relcache references, jit\ncontexts (arguably it does for dsm segments), at that stage. And\ndefinitely not why we do abort's relcache inval processing between\nRESOURCE_RELEASE_BEFORE_LOCKS and RESOURCE_RELEASE_LOCKS - that can be\nquite expensive when needing to scan the whole relcache.\n\nAnyway, this is grumbling about things far beyond the scope of this\npatch.\n\n\n> It's possible that we could\n> split AtEOXact_Snapshot() into two pieces that run at different times,\n> but I think we'd be guarding against a vulnerability that is mostly\n> theoretical, and I don't really see the point. The only kind of\n> access that somebody's likely to attempt during AbortTransaction() is\n> catalog access, and that's likely to trip either the assertion I added\n> in GetCatalogSnapshot() or the existing one in SearchCatCache().\n\nI'm not sure that's true - I've certainly seen extensions logging the\ntransaction state into a table, for example... Even in aborted xacts.\n\n\n> Non-catalog access would probably fail the assertion in\n> GetTransactionSnapshot() or GetLatestSnapshot().\n\nNot this patch's fault, obviously, but none of this appears to catch\nDML...\n\n\n> I might be missing something here, so feel free to point out to me if\n> there's a plausible coding pattern I'm missing where the things you\n> are proposing would actually be a problem. 
To me, it just feels like\n> you're worried about a scenario that would require writing the code in\n> a very unnatural way, like saving a snapshot in a global variable or\n> calling GetSnapshotData() directly.\n\nWell, if you actually want to look at database state as it was at the\nbeginning of the transaction, it seems not unreasonable to get a snapshot at the\nstart of the transaction, push it onto the stack, and then use it at the\nend of the transaction, too.\n\n\nJust to be clear: While I think an assert or two more seem like a good\nidea, that's musing around the edges, not a fundamental concern.\n\n\n> diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\n> index f594d33e7a..9993251607 100644\n> --- a/src/backend/access/transam/xact.c\n> +++ b/src/backend/access/transam/xact.c\n> @@ -2732,6 +2732,18 @@ AbortTransaction(void)\n> \t\tpgstat_report_xact_timestamp(0);\n> \t}\n>\n> +\t/*\n> +\t * Any snapshots taken by this transaction were unsafe to use even at the\n> +\t * time when we entered AbortTransaction(), since we might have, for\n> +\t * example, inserted a heap tuple and failed while inserting index tuples.\n> +\t * They were even more unsafe after ProcArrayEndTransaction(), since after\n> +\t * that point tuples we inserted could be pruned by other backends.\n> +\t * However, we postpone the cleanup until this point in the sequence\n> +\t * because it has to be done after ResourceOwnerRelease() has finishing\n> +\t * unregistering snapshots.\n> +\t */\n> +\tAtEOXact_Snapshot(false, true);\n\nOne thing that bothers me a bit here is that the other cleanup calls are\nwithin\n\n\t/*\n\t * Post-abort cleanup. See notes in CommitTransaction() concerning\n\t * ordering. We can skip all of it if the transaction failed before\n\t * creating a resource owner.\n\t */\n\tif (TopTransactionResourceOwner != NULL)\n\nand adding AtEOXact_Snapshot() below that block kind of makes that\ncomment wrong. 
I don't think we actually can make the call conditional\nin the same way, however.\n\n\n> \t/*\n> \t * State remains TRANS_ABORT until CleanupTransaction().\n> \t */\n> @@ -2757,7 +2769,6 @@ CleanupTransaction(void)\n> \t * do abort cleanup processing\n> \t */\n> \tAtCleanup_Portals();\t\t/* now safe to release portal memory */\n> -\tAtEOXact_Snapshot(false, true); /* and release the transaction's snapshots */\n\nHm. I don't quite get why we tell AtEOXact_Snapshot() to clean up\n->xmin, when we just called ProcArrayEndTransaction(). Again, not this\npatch's fault...\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 11 Nov 2019 11:12:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: AtEOXact_Snapshot timing"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 2:12 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm. I feel like there's plenty reasons to get a snapshot in extensions -\n> there's plenty APIs one cannot really call without doing so?\n\nSure, I don't disagree with that.\n\n> What I'm\n> worried about is not primarily that GetSnapshotData() is being called\n> directly, but that $author got a snapshot previously, and then tries to\n> use it in an xact callback or such.\n\nYeah, I guess that's possible. Registering a new snapshot would trip\none of the assertions I added, but using an old one wouldn't.\n\n> I'd add asserts to at least PushActiveSnapshot(), and I don't see the\n> harm in adding one to GetSnapshotData().\n\nOK. I think there's some risk that someone will have a purpose for\ncalling GetSnapshotData() which is actually tolerably safe but now\nwon't work, so I think there's a chance we'll get a complaint. But if\nwe do, we can always reconsider whether to take that particular\nassertion back out again.\n\n> Based on what I've seen people do in xact callback handlers... Didn't PG\n> itself e.g. have code doing syscache lookups in aborted transactions a\n> couple times?\n\nSure -- but the assertions I had already added would catch that\nanyway, assuming that it actually attempted catalog access, and we now\nalso have assertions that will catch it even if it would have been\nsatisfied from the cache.\n\n> The danger of using a previously acquired snapshot in that stage is why\n> I was pondering adding an assert to visibility functions\n> themselves. E.g. just adding one to HeapTupleSatisfiesVisibility() might\n> already add a good bit of coverage.\n\nYeah, but I really hate to do that; those functions are super-hot. And\nI don't think we need to go overboard in protecting people from\nthemselves. The assertions I'm proposing to add should already catch\nquite a bit of stuff that is unchecked today, and with little or no\npossible downside. 
There's no rule that we can't add more later, nor\nare more assertions always better than fewer.\n\n> > I agree with you that it would be nicer if we could put the killing of\n> > the old snapshots directly adjacent to ProcArrayEndTransaction(), and\n> > that was the first thing I tried, but it doesn't work, because\n> > resource owner cleanup has to run first.\n>\n> Hm. I'd even say that it actually belongs to *before* the\n> ProcArrayEndTransaction() call.\n>\n> For a bit I wondered if the resowner snapshot cleanup ought to be at\n> least moved to RESOURCE_RELEASE_BEFORE_LOCKS. Not that it actually\n> addresses this issue, but it seems to belong there \"thematically\". But\n> then I honestly don't understand why most of the resowner managed\n> resources in the abort sequence are released where they are. The only\n> really explanatory comment is:\n>\n> * The ordering of operations is not entirely random. The idea is:\n> * release resources visible to other backends (eg, files, buffer pins);\n> * then release locks; then release backend-local resources. We want to\n> * release locks at the point where any backend waiting for us will see\n> * our transaction as being fully cleaned up.\n>\n> but that doesn't explain why we e.g. process relcache references, jit\n> contexts (arguably it does for dsm segments), at that stage. And\n> definitely not why we do abort's relcache inval processing between\n> RESOURCE_RELEASE_BEFORE_LOCKS and RESOURCE_RELEASE_LOCKS - that can be\n> quite expensive whe needing to scan the whole relcache.\n>\n> Anyway, this is grumbling about things far beyond the scope of this\n> patch.\n\nYeah. I do agree with you that a lot of that stuff isn't very well\nexplained. Nor is there much of an explanation of why some things go\nthrough resowner.c and other things have bespoke cleanup code. 
But, as\nyou say, that's out of scope.\n\n> I'm not sure that's true - I've certainly seen extensions logging the\n> transaction state into a table, for example... Even in aborted xacts.\n\nWhoa.\n\n> > Non-catalog access would probably fail the assertion in\n> > GetTransactionSnapshot() or GetLatestSnapshot().\n>\n> Not this patch's fault, obviously, but none of this appears to catch\n> DML...\n\nReally? It's hard to imagine that DML wouldn't attempt catalog access.\n\n> Just to be clear: While I think an assert or two more seem like a good\n> idea, that's musing around the edges, not a fundamental concern.\n\nAll right, here's another version with an assert or two more. :-)\n\n> > diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\n> > index f594d33e7a..9993251607 100644\n> > --- a/src/backend/access/transam/xact.c\n> > +++ b/src/backend/access/transam/xact.c\n> > @@ -2732,6 +2732,18 @@ AbortTransaction(void)\n> > pgstat_report_xact_timestamp(0);\n> > }\n> >\n> > + /*\n> > + * Any snapshots taken by this transaction were unsafe to use even at the\n> > + * time when we entered AbortTransaction(), since we might have, for\n> > + * example, inserted a heap tuple and failed while inserting index tuples.\n> > + * They were even more unsafe after ProcArrayEndTransaction(), since after\n> > + * that point tuples we inserted could be pruned by other backends.\n> > + * However, we postpone the cleanup until this point in the sequence\n> > + * because it has to be done after ResourceOwnerRelease() has finishing\n> > + * unregistering snapshots.\n> > + */\n> > + AtEOXact_Snapshot(false, true);\n>\n> One thing that bothers me a bit here is that the other cleanup calls are\n> within\n>\n> /*\n> * Post-abort cleanup. See notes in CommitTransaction() concerning\n> * ordering. 
We can skip all of it if the transaction failed before\n> * creating a resource owner.\n> */\n> if (TopTransactionResourceOwner != NULL)\n>\n> and adding AtEOXact_Snapshot() below that block kind of makes that\n> comment wrong. I don't think we actually can make the call conditional\n> in the same way, however.\n\nI guess I read that statement as referring to the contents of the\nif-block, not everything below that in the function. But we could\nreword the comment to, e.g. We can skip quite a bit of work if the\ntransaction failed before creating a resource owner. Done in the\nattached version.\n\n>\n>\n> > /*\n> > * State remains TRANS_ABORT until CleanupTransaction().\n> > */\n> > @@ -2757,7 +2769,6 @@ CleanupTransaction(void)\n> > * do abort cleanup processing\n> > */\n> > AtCleanup_Portals(); /* now safe to release portal memory */\n> > - AtEOXact_Snapshot(false, true); /* and release the transaction's snapshots */\n>\n> Hm. I don't quite get why we tell AtEOXact_Snapshot() to clean up\n> ->xmin, when we just called ProcArrayEndTransaction(). Again, not this\n> patch's fault...\n\nSo let's leave that for another time.\n\nv3 attached.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 13 Nov 2019 14:20:57 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: AtEOXact_Snapshot timing"
}
]
[
{
"msg_contents": "Hi,\n\nPostgreSQL 12 Beta 4 will be released on 2019-09-12. Please make sure\nthat fixes for bugs and other open items[1] are committed by the end of\nthe weekend.\n\nThanks for all of your efforts in getting PostgreSQL 12 ready for\ngeneral availability!\n\nJonathan\n\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items",
"msg_date": "Thu, 5 Sep 2019 16:27:32 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 12 Beta 4"
},
{
"msg_contents": "On 2019-09-05 22:27, Jonathan S. Katz wrote:\n> PostgreSQL 12 Beta 4 will be released on 2019-09-12. Please make sure\n> that fixes for bugs and other open items[1] are committed by the end of\n> the weekend.\n\nCould we get the list of major items in the release notes done by then?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 6 Sep 2019 09:55:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 4"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-09-05 22:27, Jonathan S. Katz wrote:\n>> PostgreSQL 12 Beta 4 will be released on 2019-09-12. Please make sure\n>> that fixes for bugs and other open items[1] are committed by the end of\n>> the weekend.\n\n> Could we get the list of major items in the release notes done by then?\n\nI'll try to make a pass over the notes today, and incorporate text for\nthat from Jonathan's draft press release.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2019 09:56:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 4"
},
{
"msg_contents": "On 9/6/19 9:56 AM, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2019-09-05 22:27, Jonathan S. Katz wrote:\n>>> PostgreSQL 12 Beta 4 will be released on 2019-09-12. Please make sure\n>>> that fixes for bugs and other open items[1] are committed by the end of\n>>> the weekend.\n> \n>> Could we get the list of major items in the release notes done by then?\n> \n> I'll try to make a pass over the notes today, and incorporate text for\n> that from Jonathan's draft press release.\n\nIf it helps, I already made said pass (attached) where I tried to make\nthem match the tone of the release notes.\n\nAlso, attached is a separate patch that has an example for PG_COLORS.\n\nThanks,\n\nJonathan",
"msg_date": "Fri, 6 Sep 2019 10:29:11 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12 Beta 4"
},
{
"msg_contents": "Hi\n\nWhat's the date for PostgreSQL 12 GA?\n\nOn Fri, Sep 6, 2019 at 1:57 AM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> Hi,\n>\n> PostgreSQL 12 Beta 4 will be released on 2019-09-12. Please make sure\n> that fixes for bugs and other open items[1] are committed by the end of\n> the weekend.\n>\n> Thanks for all of your efforts in getting PostgreSQL 12 ready for\n> general availability!\n>\n> Jonathan\n>\n> [1] https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n>\n>\n\n-- \nSandeep Thakkar",
"msg_date": "Tue, 10 Sep 2019 09:34:34 +0530",
"msg_from": "Sandeep Thakkar <sandeep.thakkar@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 4"
},
{
"msg_contents": "On Tue, Sep 10, 2019 at 09:34:34AM +0530, Sandeep Thakkar wrote:\n> What's the date for PostgreSQL 12 GA?\n\nThis is not decided yet.\n--\nMichael",
"msg_date": "Tue, 10 Sep 2019 14:13:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 4"
}
]
[
{
"msg_contents": "Hello!\n\nI am using the basic extension building infrastructure with sql and\nexpected files, but what I want to test is changing a config setting and\nthen restarting the cluster with shared_preload_libraries in place. Is\nthere a canonical way to do this or does anyone have any examples of this?\nI appreciate it very much!\n\nThanks,\n\nJeremy",
"msg_date": "Fri, 6 Sep 2019 11:55:43 -0500",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_regress restart cluster?"
},
{
"msg_contents": "On Fri, Sep 06, 2019 at 11:55:43AM -0500, Jeremy Finzel wrote:\n> I am using the basic extension building infrastructure with sql and\n> expected files, but what I want to test is changing a config setting and\n> then restarting the cluster with shared_preload_libraries in place. Is\n> there a canonical way to do this or does anyone have any examples of this?\n> I appreciate it very much!\n\nWith an extension out of core using PGXS for compilation, \"check\" is\nnot supported, only \"installcheck\" is. With \"check\", we use\nREGRESS_OPTS with --temp-config to start up the cluster with a custom\nconfiguration file. You can use that by copying your extension into\nthe core code tree. Also, depending on the extension type, you may be\nable to test it without shared_preload_libraries. Most hooks, like\nthe password one, can be loaded in a session. If you use the shared\nmemory initialization hook that's of course not possible.\n\nFor an external extension, I think that you could just use TAP to test\nany kind of system configurations you would like to test. One thing\nto remember is that you need to set the environment variable\nPG_REGRESS in the context of the test; one trick for example I have\nused is this:\nmy $stdout = run_simple_command(['pg_config', '--libdir'],\n \"fetch library directory using pg_config\");\nprint \"LIBDIR path found as $stdout\\n\";\n$ENV{PG_REGRESS} = \"$stdout/pgxs/src/test/regress/pg_regress\";\n\nThis proves to be enough to control the routines of PostgresNode.pm to\ncontrol a cluster, and even start your own pg_regress command with\nthe SQL-based tests that are part of your extension.\n--\nMichael",
"msg_date": "Sat, 7 Sep 2019 11:52:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress restart cluster?"
},
{
"msg_contents": "Jeremy Finzel <finzelj@gmail.com> writes:\n> I am using the basic extension building infrastructure with sql and\n> expected files, but what I want to test is changing a config setting and\n> then restarting the cluster with shared_preload_libraries in place. Is\n> there a canonical way to do this or does anyone have any examples of this?\n> I appreciate it very much!\n\npg_regress doesn't have any support for that, but you can do it pretty\neasily in the context of a TAP test. There are a bunch of examples\nin the existing TAP tests --- look around for uses of append_conf().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Sep 2019 14:12:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress restart cluster?"
}
]
[
{
"msg_contents": "I noticed $subject while checking to see if commit db4383189's\nnew test script was behaving properly in the buildfarm. dory,\nfor one, should be running it but it just isn't.\n\nIt looks to me like the reason is that src/tools/msvc/vcregress.pl's\nsubroutine subdircheck isn't considering the possibility that\nsubdirectories of src/test/modules contain TAP tests. The\nsame code is used for contrib, so several existing TAP tests\nare being missed there too.\n\nI took a stab at fixing this, but lacking a Windows environment\nto test in, I can't be sure if it works. The attached does kinda\nsorta work if I run it in a Linux environment --- but I found that\nsystem() doesn't automatically expand \"t/*.pl\" on Linux. Is that\nan expected difference between Linux and Windows perl? I hacked\naround that by adding a glob() call in sub tap_check, as seen in\nthe first hunk below, but I'm not very sure if that hunk should\nget committed or not.\n\nFor ease of review, I did not re-indent the main part of sub\nsubdircheck, though that needs to be done before committing.\n\nAnybody with suitable tools care to test/commit this?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 08 Sep 2019 12:07:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "MSVC buildfarm critters are not running modules' TAP tests"
},
{
"msg_contents": "## Tom Lane (tgl@sss.pgh.pa.us):\n\n> I took a stab at fixing this, but lacking a Windows environment\n> to test in, I can't be sure if it works. The attached does kinda\n> sorta work if I run it in a Linux environment --- but I found that\n> system() doesn't automatically expand \"t/*.pl\" on Linux. Is that\n> an expected difference between Linux and Windows perl?\n\nAt least the behaviour on Linux (or any unix) is expected: if you pass\na list to perl's system(), perl does not run the command under a shell\n(a shell is only invoked if there's only a scalar argument to system()\n(or if the list has only one element) and that argument contains shell\nmetacharacters). That's a source of no small amount of \"fun\" for perl\nprograms \"shelling out\", because \"sometimes\" there is no shell.\nPerl's system has some more caveats; \"perldoc -f system\" has a\nstarter on that topic.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n",
"msg_date": "Sun, 8 Sep 2019 19:53:49 +0200",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: MSVC buildfarm critters are not running modules' TAP tests"
},
{
"msg_contents": "\nOn 9/8/19 12:07 PM, Tom Lane wrote:\n> I noticed $subject while checking to see if commit db4383189's\n> new test script was behaving properly in the buildfarm. dory,\n> for one, should be running it but it just isn't.\n>\n> It looks to me like the reason is that src/tools/msvc/vcregress.pl's\n> subroutine subdircheck isn't considering the possibility that\n> subdirectories of src/test/modules contain TAP tests. The\n> same code is used for contrib, so several existing TAP tests\n> are being missed there too.\n>\n> I took a stab at fixing this, but lacking a Windows environment\n> to test in, I can't be sure if it works. The attached does kinda\n> sorta work if I run it in a Linux environment --- but I found that\n> system() doesn't automatically expand \"t/*.pl\" on Linux. Is that\n> an expected difference between Linux and Windows perl? I hacked\n> around that by adding a glob() call in sub tap_check, as seen in\n> the first hunk below, but I'm not very sure if that hunk should\n> get committed or not.\n>\n> For ease of review, I did not re-indent the main part of sub\n> subdircheck, though that needs to be done before committing.\n>\n> Anybody with suitable tools care to test/commit this?\n>\n> \t\t\t\n\n\n\nActually, I think vcregress.pl is OK, this is a gap in the buildfarm\nclient's coverage that will be fixed when I make a new release. Any day\nnow I hope. See\n<https://github.com/PGBuildFarm/client-code/commit/1fc4e81e831fda64d62937de242ecda0ba145901>\n\n\nbowerbird which is already running that code is running the test you\nrefer to:\n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=bowerbird&dt=2019-09-08%2017%3A51%3A19&stg=test_misc-check>\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 8 Sep 2019 17:46:35 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: MSVC buildfarm critters are not running modules' TAP tests"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 9/8/19 12:07 PM, Tom Lane wrote:\n>> It looks to me like the reason is that src/tools/msvc/vcregress.pl's\n>> subroutine subdircheck isn't considering the possibility that\n>> subdirectories of src/test/modules contain TAP tests. The\n>> same code is used for contrib, so several existing TAP tests\n>> are being missed there too.\n\n> Actually, I think vcregress.pl is OK, this is a gap in the buildfarm\n> client's coverage that will be fixed when I make a new release.\n\nHm. Changing the buildfarm script would be an alternative way to\nfix the issue so far as the buildfarm is concerned, but it doesn't\nseem like it provides any useful way for one to manually invoke\nthe tests on Windows. Or am I missing something about how that's\nusually done?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Sep 2019 17:59:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: MSVC buildfarm critters are not running modules' TAP tests"
},
{
"msg_contents": "\nOn 9/8/19 5:59 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 9/8/19 12:07 PM, Tom Lane wrote:\n>>> It looks to me like the reason is that src/tools/msvc/vcregress.pl's\n>>> subroutine subdircheck isn't considering the possibility that\n>>> subdirectories of src/test/modules contain TAP tests. The\n>>> same code is used for contrib, so several existing TAP tests\n>>> are being missed there too.\n>> Actually, I think vcregress.pl is OK, this is a gap in the buildfarm\n>> client's coverage that will be fixed when I make a new release.\n> Hm. Changing the buildfarm script would be an alternative way to\n> fix the issue so far as the buildfarm is concerned, but it doesn't\n> seem like it provides any useful way for one to manually invoke\n> the tests on Windows. Or am I missing something about how that's\n> usually done?\n>\n> \t\t\t\n\n\nThe invocation is:\n\n\n vcregress.pl taptest [ PROVE_FLAGS=xxx ] directory\n\n\ndirectory needs to be relative to $topdir, so something like:\n\n\n vcregress.pl taptest src/test/modules/test_misc\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 8 Sep 2019 18:14:52 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: MSVC buildfarm critters are not running modules' TAP tests"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 9/8/19 5:59 PM, Tom Lane wrote:\n>> Hm. Changing the buildfarm script would be an alternative way to\n>> fix the issue so far as the buildfarm is concerned, but it doesn't\n>> seem like it provides any useful way for one to manually invoke\n>> the tests on Windows. Or am I missing something about how that's\n>> usually done?\n\n> The invocation is:\n> vcregress.pl taptest [ PROVE_FLAGS=xxx ] directory\n\nSure, I saw that you can run one test that way ... but what do you\ndo when you want the equivalent of check-world?\n\n(I'm surprised that vcregress hasn't already got a \"world\" option.\nBut at least there should be a determinate list of what you need\nto run to get all the tests.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Sep 2019 18:18:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: MSVC buildfarm critters are not running modules' TAP tests"
},
{
"msg_contents": "On Sun, Sep 08, 2019 at 06:18:33PM -0400, Tom Lane wrote:\n> Sure, I saw that you can run one test that way ... but what do you\n> do when you want the equivalent of check-world?\n\nI think that it is a good idea to add in subdircheck an extra path to\ncheck after TAP tests and run optionally these on top of the normal\nregression tests. I have a couple of comments.\n\n+ # Look for TAP tests.\n+ if ($config->{tap_tests} && -d \"t\")\n+ {\n+ print\n\"============================================================\\n\";\n+ print \"Running $module TAP tests\\n\";\n+ my $status = tap_check(getcwd());\n+ $mstat ||= $status;\n+ }\nShouldn't we check after TAP_TESTS in the Makefile?\n\nThere is an argument to also check after isolation tests and run\nthem. It seems to me that we should check after ISOLATION, and run\noptionally the tests if there is anything present. So we need\nsomething like fetchTests() and fetchRegressOpts() but for isolation\ntests.\n\nThe glob() part is a good idea in itself I think. Why not\nback-patching it? I could double-check it as well.\n--\nMichael",
"msg_date": "Mon, 9 Sep 2019 08:35:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: MSVC buildfarm critters are not running modules' TAP tests"
},
{
"msg_contents": "\nOn 9/8/19 6:18 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 9/8/19 5:59 PM, Tom Lane wrote:\n>>> Hm. Changing the buildfarm script would be an alternative way to\n>>> fix the issue so far as the buildfarm is concerned, but it doesn't\n>>> seem like it provides any useful way for one to manually invoke\n>>> the tests on Windows. Or am I missing something about how that's\n>>> usually done?\n>> The invocation is:\n>> vcregress.pl taptest [ PROVE_FLAGS=xxx ] directory\n> Sure, I saw that you can run one test that way ... but what do you\n> do when you want the equivalent of check-world?\n>\n> (I'm surprised that vcregress hasn't already got a \"world\" option.\n> But at least there should be a determinate list of what you need\n> to run to get all the tests.)\n>\n> \t\t\t\n\n\n\nPossibly. The script has not had as much love as it needs.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 8 Sep 2019 19:42:37 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: MSVC buildfarm critters are not running modules' TAP tests"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I think that it is a good idea to add in subdircheck an extra path to\n> check after TAP tests and run optionally these on top of the normal\n> regression tests. I have a couple of comments.\n\n> Shouldn't we check after TAP_TESTS in the Makefile?\n\nYeah, perhaps, but I wasn't sure about how to do that easily.\nFeel free to add it ...\n\n> There is an argument to also check after isolation tests and run\n> them.\n\nHm, yeah, if there are any such tests in those directories.\n\n> The glob() part is a good idea in itself I think. Why not\n> back-patching it? I could double-check it as well.\n\nThe whole thing should be back-patched into branches that have\nany affected tests. (But, please, not till after beta4 is\ntagged.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Sep 2019 19:44:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: MSVC buildfarm critters are not running modules' TAP tests"
},
{
"msg_contents": "On Sun, Sep 08, 2019 at 07:44:16PM -0400, Tom Lane wrote:\n> Yeah, perhaps, but I wasn't sure about how to do that easily.\n> Feel free to add it ...\n\nFeeding the makefile contents and then doing a lookup using =~ should\nbe enough. I think that we should just refactor set of routines for\nfetchTests() so as it uses the flag to look after as input.\nfetchRegressOpts() has tweaks for ENCODING and NO_LOCALE which don't\napply to the isolation tests so perhaps a different routine would be\nbetter.\n\n> Hm, yeah, if there are any such tests in those directories.\n\nsnapshot_too_old is a good example to work with.\n\n> The whole thing should be back-patched into branches that have\n> any affected tests. (But, please, not till after beta4 is\n> tagged.)\n\nSure. Don't worry about that. I am focused on another thing lately\nand it does not touch back-branches.\n--\nMichael",
"msg_date": "Mon, 9 Sep 2019 09:43:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: MSVC buildfarm critters are not running modules' TAP tests"
},
{
"msg_contents": "On Mon, Sep 09, 2019 at 09:43:06AM +0900, Michael Paquier wrote:\n> On Sun, Sep 08, 2019 at 07:44:16PM -0400, Tom Lane wrote:\n>> The whole thing should be back-patched into branches that have\n>> any affected tests. (But, please, not till after beta4 is\n>> tagged.)\n> \n> Sure. Don't worry about that. I am focused on another thing lately\n> and it does not touch back-branches.\n\nAs the cease-fire period is over, I have committed the part for glob()\nand backpatched down to 9.4. The other parts need a closer lookup,\nand I would not bother with anything older than v12 as TAP_TESTS has\nbeen added there in pgxs.mk.\n--\nMichael",
"msg_date": "Wed, 11 Sep 2019 11:11:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: MSVC buildfarm critters are not running modules' TAP tests"
}
]
[
{
"msg_contents": "Fix RelationIdGetRelation calls that weren't bothering with error checks.\n\nSome of these are quite old, but that doesn't make them not bugs.\nWe'd rather report a failure via elog than SIGSEGV.\n\nWhile at it, uniformly spell the error check as !RelationIsValid(rel)\nrather than a bare rel == NULL test. The machine code is the same\nbut it seems better to be consistent.\n\nCoverity complained about this today, not sure why, because the\nmistake is in fact old.\n\nBranch\n------\nREL_11_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/69f883fef14a3fc5849126799278abcc43f40f56\n\nModified Files\n--------------\nsrc/backend/access/heap/heapam.c | 3 +++\nsrc/backend/replication/logical/reorderbuffer.c | 8 ++++++--\n2 files changed, 9 insertions(+), 2 deletions(-)\n\n",
"msg_date": "Sun, 08 Sep 2019 21:01:20 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix RelationIdGetRelation calls that weren't bothering with\n erro"
},
{
"msg_contents": "On 9/8/19 14:01, Tom Lane wrote:\n> Fix RelationIdGetRelation calls that weren't bothering with error checks.\n> \n> ...\n> \n> Details\n> -------\n> https://git.postgresql.org/pg/commitdiff/69f883fef14a3fc5849126799278abcc43f40f56\n\nWe had two different databases this week (with the same schema) both\nindependently hit the condition of this recent commit from Tom. It's on\n11.5 so we're actually segfaulting and restarting rather than just\ncausing the walsender process to ERROR, but regardless there's still\nsome underlying bug here.\n\nWe have core files and we're still working to see if we can figure out\nwhat's going on, but I thought I'd report now in case anyone has extra\nideas or suggestions. The segfault is on line 3034 of reorderbuffer.c.\n\nhttps://github.com/postgres/postgres/blob/REL_11_5/src/backend/replication/logical/reorderbuffer.c#L3034\n\n3033 toast_rel = RelationIdGetRelation(relation->rd_rel->reltoastrelid);\n3034 toast_desc = RelationGetDescr(toast_rel);\n\nWe'll keep looking; let me know any feedback! 
Would love to track down\nwhatever bug is in the logical decoding code, if that's what it is.\n\n==========\n\nbacktrace showing the call stack...\n\nCore was generated by `postgres: walsender <NAME-REDACTED>\n<DNS-REDACTED>(31712)'.\nProgram terminated with signal 11, Segmentation fault.\n#0 ReorderBufferToastReplace (rb=0x3086af0, txn=0x3094a78,\nrelation=0x2b79177249c8, relation=0x2b79177249c8, change=0x30ac938)\n at reorderbuffer.c:3034\n3034 reorderbuffer.c: No such file or directory.\n...\n(gdb) #0 ReorderBufferToastReplace (rb=0x3086af0, txn=0x3094a78,\nrelation=0x2b79177249c8, relation=0x2b79177249c8, change=0x30ac938)\n at reorderbuffer.c:3034\n#1 ReorderBufferCommit (rb=0x3086af0, xid=xid@entry=1358809,\ncommit_lsn=9430473346032, end_lsn=<optimized out>,\n commit_time=commit_time@entry=628712466364268,\norigin_id=origin_id@entry=0, origin_lsn=origin_lsn@entry=0) at\nreorderbuffer.c:1584\n#2 0x0000000000716248 in DecodeCommit (xid=1358809,\nparsed=0x7ffc4ce123f0, buf=0x7ffc4ce125b0, ctx=0x3068f70) at decode.c:637\n#3 DecodeXactOp (ctx=0x3068f70, buf=buf@entry=0x7ffc4ce125b0) at\ndecode.c:245\n#4 0x000000000071655a in LogicalDecodingProcessRecord (ctx=0x3068f70,\nrecord=0x3069208) at decode.c:117\n#5 0x0000000000727150 in XLogSendLogical () at walsender.c:2886\n#6 0x0000000000729192 in WalSndLoop (send_data=send_data@entry=0x7270f0\n<XLogSendLogical>) at walsender.c:2249\n#7 0x0000000000729f91 in StartLogicalReplication (cmd=0x30485a0) at\nwalsender.c:1111\n#8 exec_replication_command (\n cmd_string=cmd_string@entry=0x2f968b0 \"START_REPLICATION SLOT\n\\\"<NAME-REDACTED>\\\" LOGICAL 893/38002B98 (proto_version '1',\npublication_names '\\\"<NAME-REDACTED>\\\"')\") at walsender.c:1628\n#9 0x000000000076e939 in PostgresMain (argc=<optimized out>,\nargv=argv@entry=0x2fea168, dbname=0x2fea020 \"<NAME-REDACTED>\",\n username=<optimized out>) at postgres.c:4182\n#10 0x00000000004bdcb5 in BackendRun (port=0x2fdec50) at postmaster.c:4410\n#11 BackendStartup 
(port=0x2fdec50) at postmaster.c:4082\n#12 ServerLoop () at postmaster.c:1759\n#13 0x00000000007062f9 in PostmasterMain (argc=argc@entry=7,\nargv=argv@entry=0x2f92540) at postmaster.c:1432\n#14 0x00000000004be73b in main (argc=7, argv=0x2f92540) at main.c:228\n\n==========\n\nSome additional context...\n\n# select * from pg_publication_rel;\n prpubid | prrelid\n---------+---------\n 71417 | 16453\n 71417 | 54949\n(2 rows)\n\n(gdb) print toast_rel\n$4 = (struct RelationData *) 0x0\n\n(gdb) print *relation->rd_rel\n$11 = {relname = {data = \"<NAME-REDACTED>\", '\\000' <repeats 44 times>},\nrelnamespace = 16402, reltype = 16430, reloftype = 0,\nrelowner = 16393, relam = 0, relfilenode = 16428, reltablespace = 0,\nrelpages = 0, reltuples = 0, relallvisible = 0, reltoastrelid = 0,\nrelhasindex = true, relisshared = false, relpersistence = 112 'p',\nrelkind = 114 'r', relnatts = 4, relchecks = 0, relhasoids = false,\nrelhasrules = false, relhastriggers = false, relhassubclass = false,\nrelrowsecurity = false, relforcerowsecurity = false,\nrelispopulated = true, relreplident = 100 'd', relispartition = false,\nrelrewrite = 0, relfrozenxid = 1808, relminmxid = 1}\n\n(gdb) print *relation\n$12 = {rd_node = {spcNode = 1663, dbNode = 16401, relNode = 16428},\nrd_smgr = 0x0, rd_refcnt = 1, rd_backend = -1, rd_islocaltemp = false,\nrd_isnailed = false, rd_isvalid = true, rd_indexvalid = 0 '\\000',\nrd_statvalid = false, rd_createSubid = 0, rd_newRelfilenodeSubid = 0,\nrd_rel = 0x2b7917724bd8, rd_att = 0x2b7917724ce8, rd_id = 16428,\nrd_lockInfo = {lockRelId = {relId = 16428, dbId = 16401}},\nrd_rules = 0x0, rd_rulescxt = 0x0, trigdesc = 0x0, rd_rsdesc = 0x0,\nrd_fkeylist = 0x0, rd_fkeyvalid = false, rd_partkeycxt = 0x0,\nrd_partkey = 0x0, rd_pdcxt = 0x0, rd_partdesc = 0x0, rd_partcheck = 0x0,\nrd_indexlist = 0x0, rd_oidindex = 0, rd_pkindex = 0,\nrd_replidindex = 0, rd_statlist = 0x0, rd_indexattr = 0x0,\nrd_projindexattr = 0x0, rd_keyattr = 0x0, rd_pkattr = 0x0, rd_idattr = 
0x0,\nrd_projidx = 0x0, rd_pubactions = 0x0, rd_options = 0x0, rd_index = 0x0,\nrd_indextuple = 0x0, rd_amhandler = 0, rd_indexcxt = 0x0,\nrd_amroutine = 0x0, rd_opfamily = 0x0, rd_opcintype = 0x0, rd_support =\n0x0, rd_supportinfo = 0x0, rd_indoption = 0x0, rd_indexprs = 0x0,\nrd_indpred = 0x0, rd_exclops = 0x0, rd_exclprocs = 0x0, rd_exclstrats =\n0x0, rd_amcache = 0x0, rd_indcollation = 0x0,\nrd_fdwroutine = 0x0, rd_toastoid = 0, pgstat_info = 0x0,\nrd_partcheckvalid = false, rd_partcheckcxt = 0x0}\n\n(gdb) print *desc\n$13 = {natts = 4, tdtypeid = 16430, tdtypmod = -1, tdhasoid = false,\ntdrefcount = 1, constr = 0x2b7917724ef8, attrs = 0x2b7917724d08}\n\n(gdb) print *txn\n$2 = {xid = 1358809, has_catalog_changes = true, is_known_as_subxact =\nfalse, toplevel_xid = 0, first_lsn = 9430473113640,\n final_lsn = 9430473346032, end_lsn = 9430473350592,\nrestart_decoding_lsn = 0, origin_id = 0, origin_lsn = 0,\n commit_time = 628712466364268, base_snapshot = 0x308cdc0,\nbase_snapshot_lsn = 9430473113776, base_snapshot_node = {prev = 0x3086b08,\n next = 0x3086b08}, nentries = 357, nentries_mem = 357, serialized =\nfalse, changes = {head = {prev = 0x30aca08, next = 0x309aac8}},\n tuplecids = {head = {prev = 0x30ac878, next = 0x309ab18}}, ntuplecids\n= 151, tuplecid_hash = 0x30b0bf0, toast_hash = 0x30bb460,\n subtxns = {head = {prev = 0x3094b30, next = 0x3094b30}}, nsubtxns = 0,\nninvalidations = 278, invalidations = 0x30acb08, node = {\n prev = 0x3086af8, next = 0x3086af8}}\n\n(gdb) print *change\n$1 = {lsn = 9430473343416, action = REORDER_BUFFER_CHANGE_INSERT,\norigin_id = 0, data = {tp = {relnode = {spcNode = 1663, dbNode = 16401,\n relNode = 16428}, clear_toast_afterwards = true, oldtuple = 0x0,\nnewtuple = 0x2b79313f9c68}, truncate = {nrelids = 70441758623359,\n cascade = 44, restart_seqs = 64, relids = 0x0}, msg = {prefix =\n0x40110000067f <Address 0x40110000067f out of bounds>,\n message_size = 4294983724, message = 0x0}, snapshot =\n0x40110000067f, command_id = 
1663, tuplecid = {node = {spcNode = 1663,\n dbNode = 16401, relNode = 16428}, tid = {ip_blkid = {bi_hi = 1,\nbi_lo = 0}, ip_posid = 0}, cmin = 0, cmax = 826252392,\n combocid = 11129}}, node = {prev = 0x30ac918, next = 0x30ac9b8}}\n\n\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 17:36:16 -0800",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On Wed, Dec 04, 2019 at 05:36:16PM -0800, Jeremy Schneider wrote:\n>On 9/8/19 14:01, Tom Lane wrote:\n>> Fix RelationIdGetRelation calls that weren't bothering with error checks.\n>>\n>> ...\n>>\n>> Details\n>> -------\n>> https://git.postgresql.org/pg/commitdiff/69f883fef14a3fc5849126799278abcc43f40f56\n>\n>We had two different databases this week (with the same schema) both\n>independently hit the condition of this recent commit from Tom. It's on\n>11.5 so we're actually segfaulting and restarting rather than just\n>causing the walsender process to ERROR, but regardless there's still\n>some underlying bug here.\n>\n>We have core files and we're still working to see if we can figure out\n>what's going on, but I thought I'd report now in case anyone has extra\n>ideas or suggestions. The segfault is on line 3034 of reorderbuffer.c.\n>\n>https://github.com/postgres/postgres/blob/REL_11_5/src/backend/replication/logical/reorderbuffer.c#L3034\n>\n>3033 toast_rel = RelationIdGetRelation(relation->rd_rel->reltoastrelid);\n>3034 toast_desc = RelationGetDescr(toast_rel);\n>\n>We'll keep looking; let me know any feedback! 
Would love to track down\n>whatever bug is in the logical decoding code, if that's what it is.\n>\n>==========\n>\n>backtrace showing the call stack...\n>\n>Core was generated by `postgres: walsender <NAME-REDACTED>\n><DNS-REDACTED>(31712)'.\n>Program terminated with signal 11, Segmentation fault.\n>#0 ReorderBufferToastReplace (rb=0x3086af0, txn=0x3094a78,\n>relation=0x2b79177249c8, relation=0x2b79177249c8, change=0x30ac938)\n> at reorderbuffer.c:3034\n>3034 reorderbuffer.c: No such file or directory.\n>...\n>(gdb) #0 ReorderBufferToastReplace (rb=0x3086af0, txn=0x3094a78,\n>relation=0x2b79177249c8, relation=0x2b79177249c8, change=0x30ac938)\n> at reorderbuffer.c:3034\n>#1 ReorderBufferCommit (rb=0x3086af0, xid=xid@entry=1358809,\n>commit_lsn=9430473346032, end_lsn=<optimized out>,\n> commit_time=commit_time@entry=628712466364268,\n>origin_id=origin_id@entry=0, origin_lsn=origin_lsn@entry=0) at\n>reorderbuffer.c:1584\n>#2 0x0000000000716248 in DecodeCommit (xid=1358809,\n>parsed=0x7ffc4ce123f0, buf=0x7ffc4ce125b0, ctx=0x3068f70) at decode.c:637\n>#3 DecodeXactOp (ctx=0x3068f70, buf=buf@entry=0x7ffc4ce125b0) at\n>decode.c:245\n>#4 0x000000000071655a in LogicalDecodingProcessRecord (ctx=0x3068f70,\n>record=0x3069208) at decode.c:117\n>#5 0x0000000000727150 in XLogSendLogical () at walsender.c:2886\n>#6 0x0000000000729192 in WalSndLoop (send_data=send_data@entry=0x7270f0\n><XLogSendLogical>) at walsender.c:2249\n>#7 0x0000000000729f91 in StartLogicalReplication (cmd=0x30485a0) at\n>walsender.c:1111\n>#8 exec_replication_command (\n> cmd_string=cmd_string@entry=0x2f968b0 \"START_REPLICATION SLOT\n>\\\"<NAME-REDACTED>\\\" LOGICAL 893/38002B98 (proto_version '1',\n>publication_names '\\\"<NAME-REDACTED>\\\"')\") at walsender.c:1628\n>#9 0x000000000076e939 in PostgresMain (argc=<optimized out>,\n>argv=argv@entry=0x2fea168, dbname=0x2fea020 \"<NAME-REDACTED>\",\n> username=<optimized out>) at postgres.c:4182\n>#10 0x00000000004bdcb5 in BackendRun (port=0x2fdec50) at 
postmaster.c:4410\n>#11 BackendStartup (port=0x2fdec50) at postmaster.c:4082\n>#12 ServerLoop () at postmaster.c:1759\n>#13 0x00000000007062f9 in PostmasterMain (argc=argc@entry=7,\n>argv=argv@entry=0x2f92540) at postmaster.c:1432\n>#14 0x00000000004be73b in main (argc=7, argv=0x2f92540) at main.c:228\n>\n>==========\n>\n>Some additional context...\n>\n># select * from pg_publication_rel;\n> prpubid | prrelid\n>---------+---------\n> 71417 | 16453\n> 71417 | 54949\n>(2 rows)\n>\n>(gdb) print toast_rel\n>$4 = (struct RelationData *) 0x0\n>\n>(gdb) print *relation->rd_rel\n>$11 = {relname = {data = \"<NAME-REDACTED>\", '\\000' <repeats 44 times>},\n>relnamespace = 16402, reltype = 16430, reloftype = 0,\n>relowner = 16393, relam = 0, relfilenode = 16428, reltablespace = 0,\n>relpages = 0, reltuples = 0, relallvisible = 0, reltoastrelid = 0,\n\nHmmm, so reltoastrelid = 0, i.e. the relation does not have a TOAST\nrelation. Yet we're calling ReorderBufferToastReplace on the decoded\nrecord ... interesting.\n\nCan you share structure of the relation causing the issue?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 5 Dec 2019 14:38:36 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On 12/9/19, 10:10 AM, \"Tomas Vondra\" <tomas.vondra@2ndquadrant.com> wrote:\r\n >On Wed, Dec 04, 2019 at 05:36:16PM -0800, Jeremy Schneider wrote:\r\n >>On 9/8/19 14:01, Tom Lane wrote:\r\n >>> Fix RelationIdGetRelation calls that weren't bothering with error checks.\r\n >>>\r\n >>> ...\r\n >>>\r\n >>> Details\r\n >>> -------\r\n >>> https://git.postgresql.org/pg/commitdiff/69f883fef14a3fc5849126799278abcc43f40f56\r\n >>\r\n >>We had two different databases this week (with the same schema) both\r\n >>independently hit the condition of this recent commit from Tom. It's on\r\n >>11.5 so we're actually segfaulting and restarting rather than just\r\n >>causing the walsender process to ERROR, but regardless there's still\r\n >>some underlying bug here.\r\n >>\r\n >>We have core files and we're still working to see if we can figure out\r\n >>what's going on, but I thought I'd report now in case anyone has extra\r\n >>ideas or suggestions. The segfault is on line 3034 of reorderbuffer.c.\r\n >>\r\n >>https://github.com/postgres/postgres/blob/REL_11_5/src/backend/replication/logical/reorderbuffer.c#L3034\r\n >>\r\n >>3033 toast_rel = RelationIdGetRelation(relation->rd_rel->reltoastrelid);\r\n >>3034 toast_desc = RelationGetDescr(toast_rel);\r\n >>\r\n >>We'll keep looking; let me know any feedback! 
Would love to track down\r\n >>whatever bug is in the logical decoding code, if that's what it is.\r\n >>\r\n >>==========\r\n >>\r\n >>backtrace showing the call stack...\r\n >>\r\n >>Core was generated by `postgres: walsender <NAME-REDACTED>\r\n >><DNS-REDACTED>(31712)'.\r\n >>Program terminated with signal 11, Segmentation fault.\r\n >>#0 ReorderBufferToastReplace (rb=0x3086af0, txn=0x3094a78,\r\n >>relation=0x2b79177249c8, relation=0x2b79177249c8, change=0x30ac938)\r\n >> at reorderbuffer.c:3034\r\n >>3034 reorderbuffer.c: No such file or directory.\r\n >>...\r\n >>(gdb) #0 ReorderBufferToastReplace (rb=0x3086af0, txn=0x3094a78,\r\n >>relation=0x2b79177249c8, relation=0x2b79177249c8, change=0x30ac938)\r\n >> at reorderbuffer.c:3034\r\n >>#1 ReorderBufferCommit (rb=0x3086af0, xid=xid@entry=1358809,\r\n >>commit_lsn=9430473346032, end_lsn=<optimized out>,\r\n >> commit_time=commit_time@entry=628712466364268,\r\n >>origin_id=origin_id@entry=0, origin_lsn=origin_lsn@entry=0) at\r\n >>reorderbuffer.c:1584\r\n >>#2 0x0000000000716248 in DecodeCommit (xid=1358809,\r\n >>parsed=0x7ffc4ce123f0, buf=0x7ffc4ce125b0, ctx=0x3068f70) at decode.c:637\r\n >>#3 DecodeXactOp (ctx=0x3068f70, buf=buf@entry=0x7ffc4ce125b0) at\r\n >>decode.c:245\r\n >>#4 0x000000000071655a in LogicalDecodingProcessRecord (ctx=0x3068f70,\r\n >>record=0x3069208) at decode.c:117\r\n >>#5 0x0000000000727150 in XLogSendLogical () at walsender.c:2886\r\n >>#6 0x0000000000729192 in WalSndLoop (send_data=send_data@entry=0x7270f0\r\n >><XLogSendLogical>) at walsender.c:2249\r\n >>#7 0x0000000000729f91 in StartLogicalReplication (cmd=0x30485a0) at\r\n >>walsender.c:1111\r\n >>#8 exec_replication_command (\r\n >> cmd_string=cmd_string@entry=0x2f968b0 \"START_REPLICATION SLOT\r\n >>\\\"<NAME-REDACTED>\\\" LOGICAL 893/38002B98 (proto_version '1',\r\n >>publication_names '\\\"<NAME-REDACTED>\\\"')\") at walsender.c:1628\r\n >>#9 0x000000000076e939 in PostgresMain (argc=<optimized out>,\r\n 
>>argv=argv@entry=0x2fea168, dbname=0x2fea020 \"<NAME-REDACTED>\",\r\n >> username=<optimized out>) at postgres.c:4182\r\n >>#10 0x00000000004bdcb5 in BackendRun (port=0x2fdec50) at postmaster.c:4410\r\n >>#11 BackendStartup (port=0x2fdec50) at postmaster.c:4082\r\n >>#12 ServerLoop () at postmaster.c:1759\r\n >>#13 0x00000000007062f9 in PostmasterMain (argc=argc@entry=7,\r\n >>argv=argv@entry=0x2f92540) at postmaster.c:1432\r\n >>#14 0x00000000004be73b in main (argc=7, argv=0x2f92540) at main.c:228\r\n >>\r\n >>==========\r\n >>\r\n >>Some additional context...\r\n >>\r\n >># select * from pg_publication_rel;\r\n >> prpubid | prrelid\r\n >>---------+---------\r\n >> 71417 | 16453\r\n >> 71417 | 54949\r\n >>(2 rows)\r\n >>\r\n >>(gdb) print toast_rel\r\n >>$4 = (struct RelationData *) 0x0\r\n >>\r\n >>(gdb) print *relation->rd_rel\r\n >>$11 = {relname = {data = \"<NAME-REDACTED>\", '\\000' <repeats 44 times>},\r\n >>relnamespace = 16402, reltype = 16430, reloftype = 0,\r\n >>relowner = 16393, relam = 0, relfilenode = 16428, reltablespace = 0,\r\n >>relpages = 0, reltuples = 0, relallvisible = 0, reltoastrelid = 0,\r\n \r\n >Hmmm, so reltoastrelid = 0, i.e. the relation does not have a TOAST\r\n >relation. Yet we're calling ReorderBufferToastReplace on the decoded\r\n >record ... 
interesting.\r\n >\r\n >Can you share structure of the relation causing the issue?\r\n \r\n Here it is:\r\n\r\n\\d+ rel_having_issue\r\n Table \"public.rel_having_issue\"\r\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\r\n----------------+--------------------------+-----------+----------+-------------------------------------------------+----------+--------------+-------------\r\n id | integer | | not null | nextval('rel_having_issue_id_seq'::regclass) | plain | |\r\n field1 | character varying(255) | | | | extended | |\r\n field2 | integer | | | | plain | |\r\n field3 | timestamp with time zone | | | | plain | |\r\nIndexes:\r\n \"rel_having_issue_pkey\" PRIMARY KEY, btree (id)\r\n\r\nselect relname,relfilenode,reltoastrelid from pg_class where relname='rel_having_issue';\r\n relname | relfilenode | reltoastrelid\r\n---------------------+-------------+---------------\r\n rel_having_issue | 16428 | 0\r\n\r\nBertrand\r\n\r\n",
"msg_date": "Wed, 11 Dec 2019 08:17:01 +0000",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 3:17 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> Here it is:\n>\n> \\d+ rel_having_issue\n\nYou did a heck of a job choosing the name of that table. I bet nobody\nwas surprised when it had an issue!\n\n:-)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Dec 2019 10:54:13 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-11 08:17:01 +0000, Drouvot, Bertrand wrote:\n> >>Core was generated by `postgres: walsender <NAME-REDACTED>\n> >><DNS-REDACTED>(31712)'.\n> >>Program terminated with signal 11, Segmentation fault.\n> >>#0 ReorderBufferToastReplace (rb=0x3086af0, txn=0x3094a78,\n> >>relation=0x2b79177249c8, relation=0x2b79177249c8, change=0x30ac938)\n> >> at reorderbuffer.c:3034\n> >>3034 reorderbuffer.c: No such file or directory.\n> >>...\n> >>(gdb) #0 ReorderBufferToastReplace (rb=0x3086af0, txn=0x3094a78,\n> >>relation=0x2b79177249c8, relation=0x2b79177249c8, change=0x30ac938)\n> >> at reorderbuffer.c:3034\n> >>#1 ReorderBufferCommit (rb=0x3086af0, xid=xid@entry=1358809,\n> >>commit_lsn=9430473346032, end_lsn=<optimized out>,\n> >> commit_time=commit_time@entry=628712466364268,\n> >>origin_id=origin_id@entry=0, origin_lsn=origin_lsn@entry=0) at\n> >>reorderbuffer.c:1584\n\nThis indicates that a toast record was present for that relation,\ndespite:\n\n> \n> \\d+ rel_having_issue\n> Table \"public.rel_having_issue\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n> ----------------+--------------------------+-----------+----------+-------------------------------------------------+----------+--------------+-------------\n> id | integer | | not null | nextval('rel_having_issue_id_seq'::regclass) | plain | |\n> field1 | character varying(255) | | | | extended | |\n> field2 | integer | | | | plain | |\n> field3 | timestamp with time zone | | | | plain | |\n> Indexes:\n> \"rel_having_issue_pkey\" PRIMARY KEY, btree (id)\n> \n> select relname,relfilenode,reltoastrelid from pg_class where relname='rel_having_issue';\n> relname | relfilenode | reltoastrelid\n> ---------------------+-------------+---------------\n> rel_having_issue | 16428 | 0\n\n\nI think we need to see pg_waldump output for the preceding records. 
That\nmight allow us to see why there's a toast record that's being associated\nwith this table, despite there not being a toast table.\n\nSeems like we clearly should add an elog(ERROR) here, so we error out,\nrather than crash.\n\nHas there been DDL to this table?\n\nCould you print out *change?\n\nIs this version of postgres effectively unmodified in any potentially\nrelevant region (snapshot computations, generation of WAL records, ...)?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Dec 2019 08:35:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> This indicates that a toast record was present for that relation,\n> despite:\n> [ \\d that looks like the table isn't wide enough for that ]\n> I think we need to see pg_waldump output for the preceding records. That\n> might allow us to see why there's a toast record that's being associated\n> with this table, despite there not being a toast table.\n\nI don't think you can make that conclusion. Perhaps the table once\nneeded a toast table because of some wide column that got dropped;\nif so, it'd still have one. It'd be safer to look at\npg_class.reltoastrelid to verify existence (or not) of the toast relation.\n\nIt strikes me that there could easily be cases where a publisher table\nhas a toast relation and a subscriber's doesn't ... maybe this code\nisn't expecting that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Dec 2019 12:11:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 12:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't think you can make that conclusion. Perhaps the table once\n> needed a toast table because of some wide column that got dropped;\n> if so, it'd still have one. It'd be safer to look at\n> pg_class.reltoastrelid to verify existence (or not) of the toast relation.\n\nI believe that output was already shown earlier in the thread.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Dec 2019 12:12:48 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-11 12:11:03 -0500, Tom Lane wrote:\n> I don't think you can make that conclusion. Perhaps the table once\n> needed a toast table because of some wide column that got dropped;\n> if so, it'd still have one. It'd be safer to look at\n> pg_class.reltoastrelid to verify existence (or not) of the toast relation.\n\nThat was checked in the email I was responding to.\n\n\n> It strikes me that there could easily be cases where a publisher table\n> has a toast relation and a subscriber's doesn't ... maybe this code\n> isn't expecting that?\n\nThis code is all running on the publisher side, so I don't think it\ncould matter.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Dec 2019 09:21:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On 12/11/19 08:35, Andres Freund wrote:\n> I think we need to see pg_waldump output for the preceding records. That\n> might allow us to see why there's a toast record that's being associated\n> with this table, despite there not being a toast table.\nUnfortunately the WAL logs are no longer available at this time. :(\n\nI did a little poking around in the core file and searching source code\nbut didn't find anything yet. Is there any memory structure that would\nhave the preceding/following records cached in memory? If so then I\nmight be able to extract this from the core dumps.\n\n> Seems like we clearly should add an elog(ERROR) here, so we error out,\n> rather than crash.\ndone - in the commit that I replied to when I started this thread :)\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=69f883fef14a3fc5849126799278abcc43f40f56\n\n> Has there been DDL to this table?\nI'm not sure that we will be able to find out at this point.\n\n> Could you print out *change?\n\nThis was also in the original email - here it is:\n\n(gdb) print *change\n$1 = {lsn = 9430473343416, action = REORDER_BUFFER_CHANGE_INSERT,\norigin_id = 0, data = {tp = {relnode = {spcNode = 1663, dbNode = 16401,\n relNode = 16428}, clear_toast_afterwards = true, oldtuple = 0x0,\nnewtuple = 0x2b79313f9c68}, truncate = {\n nrelids = 70441758623359, cascade = 44, restart_seqs = 64, relids\n= 0x0}, msg = {\n prefix = 0x40110000067f <Address 0x40110000067f out of bounds>,\nmessage_size = 4294983724, message = 0x0},\n snapshot = 0x40110000067f, command_id = 1663, tuplecid = {node =\n{spcNode = 1663, dbNode = 16401, relNode = 16428}, tid = {\n ip_blkid = {bi_hi = 1, bi_lo = 0}, ip_posid = 0}, cmin = 0, cmax\n= 826252392, combocid = 11129}}, node = {prev = 0x30ac918,\n next = 0x30ac9b8}}\n\n> Is this version of postgres effectively unmodified in any potentially\n> relevant region (snapshot computations, generation of WAL records, ...)?\nIt's not changed from community code in any relevant regions. (Also,\nFYI, this is not Aurora.)\n\n-Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services",
"msg_date": "Fri, 13 Dec 2019 16:13:35 -0800",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-13 16:13:35 -0800, Jeremy Schneider wrote:\n> On 12/11/19 08:35, Andres Freund wrote:\n> > I think we need to see pg_waldump output for the preceding records. That\n> > might allow us to see why there's a toast record that's being associated\n> > with this table, despite there not being a toast table.\n> Unfortunately the WAL logs are no longer available at this time. :(\n> \n> I did a little poking around in the core file and searching source code\n> but didn't find anything yet. Is there any memory structure that would\n> have the preceding/following records cached in memory? If so then I\n> might be able to extract this from the core dumps.\n\nWell, not the records directly, but the changes could be, depending on\nthe size of the changes. That'd already help. It depends a bit on\nwhether there are subtransactions or not (txn->nsubtxns will tell\nyou). Within one transaction, the currently loaded (i.e. not changes\nthat are spilled to disk, and haven't currently been restored - see\ntxn->serialized) changes are in ReorderBufferTXN->changes.\n\n\n> > Seems like we clearly should add an elog(ERROR) here, so we error out,\n> > rather than crash.\n\n> done - in the commit that I replied to when I started this thread :)\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=69f883fef14a3fc5849126799278abcc43f40f56\n\nAh, I was actually thinking this is the thread of a similar sounding\nbug, where ReorderBufferToastReplace would crash because there isn't\nactually a new tuple - there somehow toast changes exist for a delete.\n\n\n> > Is this version of postgres effectively unmodified in any potentially\n> > relevant region (snapshot computations, generation of WAL records, ...)?\n> It's not changed from community code in any relevant regions. (Also,\n> FYI, this is not Aurora.)\n\nWell, I've heard mutterings that plain RDS postgres had some efficiency\nimprovements around snapshots (in the GetSnapshotData() sense) - and\nthat's an area where slightly wrong changes could quite plausibly\ncause a bug like this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Dec 2019 16:25:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On 12/13/19 16:25, Andres Freund wrote:\n> On 2019-12-13 16:13:35 -0800, Jeremy Schneider wrote:\n>> On 12/11/19 08:35, Andres Freund wrote:\n>>> I think we need to see pg_waldump output for the preceding records. That\n>>> might allow us to see why there's a toast record that's being associated\n>>> with this table, despite there not being a toast table.\n>> Unfortunately the WAL logs are no longer available at this time. :(\n>>\n>> I did a little poking around in the core file and searching source code\n>> but didn't find anything yet. Is there any memory structure that would\n>> have the preceding/following records cached in memory? If so then I\n>> might be able to extract this from the core dumps.\n> \n> Well, not the records directly, but the changes could be, depending on\n> the size of the changes. That'd already help. It depends a bit on\n> whether there are subtransactions or not (txn->nsubtxns will tell\n> you). Within one transaction, the currently loaded (i.e. not changes\n> that are spilled to disk, and haven't currently been restored - see\n> txn->serialized) changes are in ReorderBufferTXN->changes.\n\nI did include the txn in the original post to this thread; there are 357\nchanges in the transaction and they are all in memory (none spilled to\ndisk a.k.a. serialized). No subtransactions. However I do see that\n\"txn.has_catalog_changes = true\" which makes me wonder if that's related\nto the bug.\n\nSo... now I know... walking a dlist in gdb and dumping all the changes\nis not exactly a walk in the park! Need some python magic like Tomas\nVondra's script that decodes Nodes. I was not yet successful today in\nfiguring out how to do this... so the changes are there in the core dump\nbut I can't get them yet. :)\n\nI also dug around the ReorderBufferIterTXNState a little bit but there's\nnothing that isn't already in the original post.\n\nIf anyone has a trick for walking a dlist in gdb that would be awesome...\n\nI'm off for holidays and won't be working on this for a couple weeks;\nnot sure whether it'll be possible to get to the bottom of it. But I\nhope there's enough info in this thread to at least get a head start if\nsomeone hits it again in the future.\n\n\n> Well, I've heard mutterings that plain RDS postgres had some efficiency\n> improvements around snapshots (in the GetSnapshotData() sense) - and\n> that's an area where slightly wrong changes could quite plausibly\n> cause a bug like this.\n\nDefinitely no changes around snapshots. I've never even heard anyone\ntalk about making changes like that in RDS PostgreSQL - feels to me like\npeople at AWS want it to be as close as possible to postgresql.org code.\n\nAurora is different; it feels to me like the engineering org has more\nlicense to make changes. For example they re-wrote the subtransaction\nsubsystem. No changes to GetSnapshotData though.\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Fri, 20 Dec 2019 15:21:30 -0800",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On 12/13/19 16:13, Jeremy Schneider wrote:\n> On 12/11/19 08:35, Andres Freund wrote:\n>> Seems like we clearly should add an elog(ERROR) here, so we error out,\n>> rather than crash.\n> done - in the commit that I replied to when I started this thread :)\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=69f883fef14a3fc5849126799278abcc43f40f56\n\n\nAnother PostgreSQL user ran into this issue. This time on version 12.5 -\nso instead of a crash they got the error message from the commit.\n\nERROR: XX000: could not open relation with OID 0\nLOCATION: ReorderBufferToastReplace, reorderbuffer.c:305\n\nUpon seeing this error message, I realized that the base relation OID\nwould be very useful when the toast relation OID is \"0\".\n\nWould this patch work to show that?\n\ndiff --git a/src/backend/replication/logical/reorderbuffer.c\nb/src/backend/replication/logical/reorderbuffer.c\nindex 2d9e1279bb..b90603b051 100644\n--- a/src/backend/replication/logical/reorderbuffer.c\n+++ b/src/backend/replication/logical/reorderbuffer.c\n@@ -4598,8 +4598,8 @@ ReorderBufferToastReplace(ReorderBuffer *rb,\nReorderBufferTXN *txn,\n\n toast_rel = RelationIdGetRelation(relation->rd_rel->reltoastrelid);\n if (!RelationIsValid(toast_rel))\n- elog(ERROR, \"could not open relation with OID %u\",\n- relation->rd_rel->reltoastrelid);\n+ elog(ERROR, \"could not open toast relation with OID %u\n(base relation with OID %u)\",\n+ relation->rd_rel->reltoastrelid,\nrelation->rd_rel->oid);\n\n toast_desc = RelationGetDescr(toast_rel);\n\nThoughts?\n\n-Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services",
"msg_date": "Fri, 4 Jun 2021 16:07:02 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On 2021-Jun-04, Jeremy Schneider wrote:\n\n> ERROR: XX000: could not open relation with OID 0\n> LOCATION: ReorderBufferToastReplace, reorderbuffer.c:305\n\nHah.\n\nIt seems to me that this code should silently return if\nrd_rel->reltoastrelid == 0; just like in the case of\ntxn->toast_hash == NULL. It evidently means that no datum can be\ntoasted, and therefore no toast replacement is needed.\n\n(As far as I recall, a table cannot go from having a toast table to not\nhaving one.)\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"¿Qué importan los años? Lo que realmente importa es comprobar que\na fin de cuentas la mejor edad de la vida es estar vivo\" (Mafalda)\n\n\n",
"msg_date": "Fri, 4 Jun 2021 19:35:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "[Resent -- apologies to those who are getting this email twice. Please\nbe mindful to reply to this one if you do. I think the no-crosspost\npolicy is very obnoxious and should be relaxed.]\n\nOn 2019-Dec-11, Andres Freund wrote:\n\n> On 2019-12-11 08:17:01 +0000, Drouvot, Bertrand wrote:\n> > >>Core was generated by `postgres: walsender <NAME-REDACTED>\n> > >><DNS-REDACTED>(31712)'.\n> > >>Program terminated with signal 11, Segmentation fault.\n> > >>#0 ReorderBufferToastReplace (rb=0x3086af0, txn=0x3094a78,\n> > >>relation=0x2b79177249c8, relation=0x2b79177249c8, change=0x30ac938)\n> > >> at reorderbuffer.c:3034\n> > >>3034 reorderbuffer.c: No such file or directory.\n> > >>...\n> > >>(gdb) #0 ReorderBufferToastReplace (rb=0x3086af0, txn=0x3094a78,\n> > >>relation=0x2b79177249c8, relation=0x2b79177249c8, change=0x30ac938)\n> > >> at reorderbuffer.c:3034\n> > >>#1 ReorderBufferCommit (rb=0x3086af0, xid=xid@entry=1358809,\n> > >>commit_lsn=9430473346032, end_lsn=<optimized out>,\n> > >> commit_time=commit_time@entry=628712466364268,\n> > >>origin_id=origin_id@entry=0, origin_lsn=origin_lsn@entry=0) at\n> > >>reorderbuffer.c:1584\n> \n> This indicates that a toast record was present for that relation,\n> despite:\n\nCan you explain what it is you saw that indicates that a toast record\nwas present for the relation? I may be misreading the code, but there's\nnothing obvious that says that if we reach there, then a toast datum\nexists anywhere for this relation. We only know that txn->toast_hash is\nset, but that could be because the transaction touched a toast record in\nsome other table. Right?\n\n> > \\d+ rel_having_issue\n> > Table \"public.rel_having_issue\"\n> > Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n> > ----------------+--------------------------+-----------+----------+-------------------------------------------------+----------+--------------+-------------\n> > id | integer | | not null | nextval('rel_having_issue_id_seq'::regclass) | plain | |\n> > field1 | character varying(255) | | | | extended | |\n> > field2 | integer | | | | plain | |\n> > field3 | timestamp with time zone | | | | plain | |\n> > Indexes:\n> > \"rel_having_issue_pkey\" PRIMARY KEY, btree (id)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n",
"msg_date": "Fri, 4 Jun 2021 20:07:05 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On Sat, Jun 5, 2021 at 5:41 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > This indicates that a toast record was present for that relation,\n> > despite:\n>\n> Can you explain what it is you saw that indicates that a toast record\n> was present for the relation? I may be misreading the code, but there's\n> nothing obvious that says that if we reach there, then a toast datum\n> exists anywhere for this relation. We only know that txn->toast_hash is\n> set, but that could be because the transaction touched a toast record in\n> some other table. Right?\n\nIs this problem related to the thread [1], where due to a spec abort\nthe toast hash was not deleted, and after that, if the next record is\nfor some other relation which does not have a toast table, you will see\nthis error? There are a few other problems if the toast hash is not\ncleaned due to spec abort. I have submitted patches with two approaches\nin that thread.\n\n\n[1] https://www.postgresql.org/message-id/CAFiTN-szfpMXF2H%2Bmk3m_9AB610v103NTv7Z1E8uDBr9iQg1gg%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 5 Jun 2021 10:14:42 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On Sat, Jun 5, 2021 at 5:05 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jun-04, Jeremy Schneider wrote:\n>\n> > ERROR: XX000: could not open relation with OID 0\n> > LOCATION: ReorderBufferToastReplace, reorderbuffer.c:305\n>\n> Hah.\n>\n> It seems to me that this code should silently return if\n> rd_rel->reltoastrelid == 0; just like in the case of\n> txn->toast_hash == NULL. It evidently means that no datum can be\n> toasted, and therefor no toast replacement is needed.\n>\n\nEven, if this fixes the issue, I guess it is better to find why this\nhappens? I think the reason why the code is giving an error is that\nafter toast insertions we always expect the insert on the main table\nof toast table, but if there happens to be a case where after toast\ninsertion, instead of getting the insertion on the main table we get\nan insert in some other table then you will see this error. I think\nthis can happen for speculative insertions where insertions lead to a\ntoast table insert, then we get a speculative abort record, and then\ninsertion on some other table. The main thing is currently decoding\ncode ignores speculative aborts due to which such a problem can occur.\nNow, there could be other cases where such a problem can happen but if\nmy theory is correct then the patch we are discussing in the thread\n[1] should solve this problem.\n\nJeremy, is this problem reproducible? Can we get a testcase or\npg_waldump output of previous WAL records?\n\n[1] - https://www.postgresql.org/message-id/CAExHW5sPKF-Oovx_qZe4p5oM6Dvof7_P%2BXgsNAViug15Fm99jA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 5 Jun 2021 12:12:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On 6/4/21 23:42, Amit Kapila wrote:\n> On 2021-Jun-04, Jeremy Schneider wrote:\n>>> ERROR: XX000: could not open relation with OID 0\n>>> LOCATION: ReorderBufferToastReplace, reorderbuffer.c:305\n> Even, if this fixes the issue, I guess it is better to find why this\n> happens? I think the reason why the code is giving an error is that\n> after toast insertions we always expect the insert on the main table\n> of toast table, but if there happens to be a case where after toast\n> insertion, instead of getting the insertion on the main table we get\n> an insert in some other table then you will see this error. I think\n> this can happen for speculative insertions where insertions lead to a\n> toast table insert, then we get a speculative abort record, and then\n> insertion on some other table. The main thing is currently decoding\n> code ignores speculative aborts due to which such a problem can occur.\n> Now, there could be other cases where such a problem can happen but if\n> my theory is correct then the patch we are discussing in the thread\n> [1] should solve this problem.\n>\n> Jeremy, is this problem reproducible? Can we get a testcase or\n> pg_waldump output of previous WAL records?\n>\n> [1] - https://www.postgresql.org/message-id/CAExHW5sPKF-Oovx_qZe4p5oM6Dvof7_P%2BXgsNAViug15Fm99jA%40mail.gmail.com\n\nIt's unclear to me whether or not we'll be able to catch the repro on\nthe actual production system. It seems that we are hitting this somewhat\nconsistently, but at irregular and infrequent intervals. If we are able\nto catch it and walk the WAL records then I'll post back here. FYI,\nBertrand was able to replicate the exact error message with pretty much\nthe same repro that's in the other email thread which is linked above.\n\nSeparately, would there be any harm in adding the relation OID to the\nerror message? Personally, I just think the error message is generally\nmore useful if it shows the main relation OID (since we know that the\ntoast OID can be 0). Not a big deal though.\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services",
"msg_date": "Tue, 8 Jun 2021 11:35:57 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 12:06 AM Jeremy Schneider <schnjere@amazon.com> wrote:\n>\n> On 6/4/21 23:42, Amit Kapila wrote:\n>\n> On 2021-Jun-04, Jeremy Schneider wrote:\n>\n> ERROR: XX000: could not open relation with OID 0\n> LOCATION: ReorderBufferToastReplace, reorderbuffer.c:305\n>\n> Even, if this fixes the issue, I guess it is better to find why this\n> happens? I think the reason why the code is giving an error is that\n> after toast insertions we always expect the insert on the main table\n> of toast table, but if there happens to be a case where after toast\n> insertion, instead of getting the insertion on the main table we get\n> an insert in some other table then you will see this error. I think\n> this can happen for speculative insertions where insertions lead to a\n> toast table insert, then we get a speculative abort record, and then\n> insertion on some other table. The main thing is currently decoding\n> code ignores speculative aborts due to which such a problem can occur.\n> Now, there could be other cases where such a problem can happen but if\n> my theory is correct then the patch we are discussing in the thread\n> [1] should solve this problem.\n>\n> Jeremy, is this problem reproducible? Can we get a testcase or\n> pg_waldump output of previous WAL records?\n>\n> [1] - https://www.postgresql.org/message-id/CAExHW5sPKF-Oovx_qZe4p5oM6Dvof7_P%2BXgsNAViug15Fm99jA%40mail.gmail.com\n>\n>\n> It's unclear to me whether or not we'll be able to catch the repro on the actual production system. It seems that we are hitting this somewhat consistently, but at irregular and infrequent intervals. If we are able to catch it and walk the WAL records then I'll post back here.\n>\n\nOkay, one thing you can check is if there is a usage of Insert .. On\nConflict .. statement in the actual production system?\n\n> FYI, Bertrand was able to replicate the exact error message with pretty much the same repro that's in the other email thread which is linked above.\n>\n> Separately, would there be any harm in adding the relation OID to the error message? Personally, I just think the error message is generally more useful if it shows the main relation OID (since we know that the toast OID can be 0). Not a big deal though.\n>\n\nI don't think that is a bad idea. However, I think it might be better\nto propose that as a separate patch in a new thread.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 09:03:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "Hi Amit,\n\nOn 6/9/21 5:33 AM, Amit Kapila wrote:\n> On Wed, Jun 9, 2021 at 12:06 AM Jeremy Schneider <schnjere@amazon.com> wrote:\n>> On 6/4/21 23:42, Amit Kapila wrote:\n>>\n>> On 2021-Jun-04, Jeremy Schneider wrote:\n>>\n>> ERROR: XX000: could not open relation with OID 0\n>> LOCATION: ReorderBufferToastReplace, reorderbuffer.c:305\n>>\n>> Even, if this fixes the issue, I guess it is better to find why this\n>> happens? I think the reason why the code is giving an error is that\n>> after toast insertions we always expect the insert on the main table\n>> of toast table, but if there happens to be a case where after toast\n>> insertion, instead of getting the insertion on the main table we get\n>> an insert in some other table then you will see this error. I think\n>> this can happen for speculative insertions where insertions lead to a\n>> toast table insert, then we get a speculative abort record, and then\n>> insertion on some other table. The main thing is currently decoding\n>> code ignores speculative aborts due to which such a problem can occur.\n>> Now, there could be other cases where such a problem can happen but if\n>> my theory is correct then the patch we are discussing in the thread\n>> [1] should solve this problem.\n>>\n>> Jeremy, is this problem reproducible? Can we get a testcase or\n>> pg_waldump output of previous WAL records?\n>>\n>> [1] - https://www.postgresql.org/message-id/CAExHW5sPKF-Oovx_qZe4p5oM6Dvof7_P%2BXgsNAViug15Fm99jA%40mail.gmail.com\n>>\n>>\n>> It's unclear to me whether or not we'll be able to catch the repro on the actual production system. It seems that we are hitting this somewhat consistently, but at irregular and infrequent intervals. If we are able to catch it and walk the WAL records then I'll post back here.\n>>\n> Okay, one thing you can check is if there is a usage of Insert .. On\n> Conflict .. statement in the actual production system?\n\nYes that's the case, so a speculative abort record followed by an \ninsert on some other table looks like a perfectly valid scenario for this \ncurrent issue.\n\nBertrand\n\n\n",
"msg_date": "Wed, 9 Jun 2021 08:07:24 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 11:37 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> On 6/9/21 5:33 AM, Amit Kapila wrote:\n> > On Wed, Jun 9, 2021 at 12:06 AM Jeremy Schneider <schnjere@amazon.com> wrote:\n> >> On 6/4/21 23:42, Amit Kapila wrote:\n> >>\n> >> On 2021-Jun-04, Jeremy Schneider wrote:\n> >>\n> >> ERROR: XX000: could not open relation with OID 0\n> >> LOCATION: ReorderBufferToastReplace, reorderbuffer.c:305\n> >>\n> >> Even, if this fixes the issue, I guess it is better to find why this\n> >> happens? I think the reason why the code is giving an error is that\n> >> after toast insertions we always expect the insert on the main table\n> >> of toast table, but if there happens to be a case where after toast\n> >> insertion, instead of getting the insertion on the main table we get\n> >> an insert in some other table then you will see this error. I think\n> >> this can happen for speculative insertions where insertions lead to a\n> >> toast table insert, then we get a speculative abort record, and then\n> >> insertion on some other table. The main thing is currently decoding\n> >> code ignores speculative aborts due to which such a problem can occur.\n> >> Now, there could be other cases where such a problem can happen but if\n> >> my theory is correct then the patch we are discussing in the thread\n> >> [1] should solve this problem.\n> >>\n> >> Jeremy, is this problem reproducible? Can we get a testcase or\n> >> pg_waldump output of previous WAL records?\n> >>\n> >> [1] - https://www.postgresql.org/message-id/CAExHW5sPKF-Oovx_qZe4p5oM6Dvof7_P%2BXgsNAViug15Fm99jA%40mail.gmail.com\n> >>\n> >>\n> >> It's unclear to me whether or not we'll be able to catch the repro on the actual production system. It seems that we are hitting this somewhat consistently, but at irregular and infrequent intervals. 
If we are able to catch it and walk the WAL records then I'll post back here.\n> >>\n> > Okay, one thing you can check is if there is a usage of Insert .. On\n> > Conflict .. statement in the actual production system?\n>\n> Yes that's the case, so that a speculative abort record followed by an\n> insert on some other table looks a perfect valid scenario regarding this\n> current issue.\n>\n\nOkay, thanks for the confirmation. So the patch being discussed in\nthat thread will fix your problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 11:40:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
},
{
"msg_contents": "Hi,\n\nOn 6/9/21 8:10 AM, Amit Kapila wrote:\n> On Wed, Jun 9, 2021 at 11:37 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> On 6/9/21 5:33 AM, Amit Kapila wrote:\n>>> On Wed, Jun 9, 2021 at 12:06 AM Jeremy Schneider <schnjere@amazon.com> wrote:\n>>>> On 6/4/21 23:42, Amit Kapila wrote:\n>>>>\n>>>> On 2021-Jun-04, Jeremy Schneider wrote:\n>>>>\n>>>> ERROR: XX000: could not open relation with OID 0\n>>>> LOCATION: ReorderBufferToastReplace, reorderbuffer.c:305\n>>>>\n>>>> Even, if this fixes the issue, I guess it is better to find why this\n>>>> happens? I think the reason why the code is giving an error is that\n>>>> after toast insertions we always expect the insert on the main table\n>>>> of toast table, but if there happens to be a case where after toast\n>>>> insertion, instead of getting the insertion on the main table we get\n>>>> an insert in some other table then you will see this error. I think\n>>>> this can happen for speculative insertions where insertions lead to a\n>>>> toast table insert, then we get a speculative abort record, and then\n>>>> insertion on some other table. The main thing is currently decoding\n>>>> code ignores speculative aborts due to which such a problem can occur.\n>>>> Now, there could be other cases where such a problem can happen but if\n>>>> my theory is correct then the patch we are discussing in the thread\n>>>> [1] should solve this problem.\n>>>>\n>>>> Jeremy, is this problem reproducible? Can we get a testcase or\n>>>> pg_waldump output of previous WAL records?\n>>>>\n>>>> [1] - https://www.postgresql.org/message-id/CAExHW5sPKF-Oovx_qZe4p5oM6Dvof7_P%2BXgsNAViug15Fm99jA%40mail.gmail.com\n>>>>\n>>>>\n>>>> It's unclear to me whether or not we'll be able to catch the repro on the actual production system. 
It seems that we are hitting this somewhat consistently, but at irregular and infrequent intervals. If we are able to catch it and walk the WAL records then I'll post back here.\n>>>>\n>>> Okay, one thing you can check is if there is a usage of Insert .. On\n>>> Conflict .. statement in the actual production system?\n>> Yes that's the case, so that a speculative abort record followed by an\n>> insert on some other table looks a perfect valid scenario regarding this\n>> current issue.\n>>\n> Okay, thanks for the confirmation. So the patch being discussed in\n> that thread will fix your problem.\n\nYes, thanks a lot!\n\nBertrand\n\n\n\n",
"msg_date": "Wed, 9 Jun 2021 08:17:28 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding bug: segfault in ReorderBufferToastReplace()"
}
] |
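A minimal SQL sketch of the scenario Amit describes in the thread above (the schema and values are hypothetical; actually hitting the decoding error also requires a concurrent session winning the conflict while a logical replication slot is decoding the transaction):

```sql
-- t1 has a unique constraint and a column large enough to be toasted;
-- t2 is any other table touched by the same transaction.
CREATE TABLE t1 (id int PRIMARY KEY, payload text);
CREATE TABLE t2 (id int);

BEGIN;
-- The large payload is written to t1's toast table before the
-- speculative insertion on t1 itself.  If a concurrent session inserts
-- id = 1 first, the speculative insertion is abandoned (DO NOTHING) and
-- a speculative-abort record is written instead of an insert on t1...
INSERT INTO t1 VALUES (1, repeat('x', 1000000))
ON CONFLICT (id) DO NOTHING;
-- ...so the next change the decoder sees is an insert on a different
-- table while toast chunks are still queued, which is where an unpatched
-- ReorderBufferToastReplace() raised "could not open relation with OID 0".
INSERT INTO t2 VALUES (1);
COMMIT;
```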
[
{
"msg_contents": "Diagnosing this took quite a lot of time and detective work. For some\nreason I don't quite understand, when calling the Windows command\nprocessor in a modern msys2/WindowsServer2019 installation, you need to\ndouble the slash, thus:\n\n\n cmd //c foo.bat\n\n\nSome Internet postings at least seem to suggest this is by design. (FSVO\n\"design\")\n\n\nI tested this on older versions and the change appears to work, so I\npropose to apply the attached patch.\n\n\nThis is the last obstacle I have to declaring msys2 fully supportable.\n\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 8 Sep 2019 18:06:34 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "msys2 vs pg_upgrade/test.sh"
},
{
"msg_contents": "On 2019-09-09 00:06, Andrew Dunstan wrote:\n> Diagnosing this took quite a lot of time and detective work. For some\n> reason I don't quite understand, when calling the Windows command\n> processor in a modern msys2/WindowsServer2019 installation, you need to\n> double the slash, thus:\n> \n> cmd //c foo.bat\n> \n> Some Internet postings at least seem to suggest this is by design. (FSVO\n> \"design\")\n> \n> I tested this on older versions and the change appears to work, so I\n> propose to apply the attached patch.\n\nIf we're worried about messing things up for non-msys2 environments, we\ncould also set MSYS2_ARG_CONV_EXCL instead; see\n<https://github.com/msys2/msys2/wiki/Porting#filesystem-namespaces>.\nAccording to that page, that would seem to be the more proper way to do it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 9 Sep 2019 10:48:55 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: msys2 vs pg_upgrade/test.sh"
},
{
"msg_contents": "\nOn 9/9/19 4:48 AM, Peter Eisentraut wrote:\n> On 2019-09-09 00:06, Andrew Dunstan wrote:\n>> Diagnosing this took quite a lot of time and detective work. For some\n>> reason I don't quite understand, when calling the Windows command\n>> processor in a modern msys2/WindowsServer2019 installation, you need to\n>> double the slash, thus:\n>>\n>> cmd //c foo.bat\n>>\n>> Some Internet postings at least seem to suggest this is by design. (FSVO\n>> \"design\")\n>>\n>> I tested this on older versions and the change appears to work, so I\n>> propose to apply the attached patch.\n> If we're worried about messing things up for non-msys2 environments, we\n> could also set MSYS2_ARG_CONV_EXCL instead; see\n> <https://github.com/msys2/msys2/wiki/Porting#filesystem-namespaces>.\n> According to that page, that would seem to be the more proper way to do it.\n>\n\n\nNice find, thanks, I'll do it that way.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 9 Sep 2019 08:39:20 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: msys2 vs pg_upgrade/test.sh"
}
] |
[
{
"msg_contents": "Hi\n\nWhen I played with vertical cursor support I got badly displayed last\ncolumns when border was not 2. Only when border is 2, then psql displays\nlast column with same width for each row.\n\nI think so we can force column width alignment for any border styles today\n(for alignment and wrapping styles) or as minimum this behave can be\noptional.\n\nI wrote a patch with pset option \"final_spaces\", but I don't see a reason\nwhy we trim rows today.\n\nRegards\n\nPavel",
"msg_date": "Mon, 9 Sep 2019 11:25:32 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "patch: psql - enforce constant width of last column"
},
{
"msg_contents": "Hi Pavel,\n\nI have been trying to reproduce the case of badly displaying last columns\nof a query result-set. I played around with the legal values for psql\nborder variable but not able to find a case where last columns are badly\ndisplayed. Can you please share an example that I can use to reproduce this\nproblem. I will try out your patch once I am able to reproduce the problem.\n\nThanks,\n\n-- Ahsan\n\n\nOn Mon, Sep 9, 2019 at 2:32 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> When I played with vertical cursor support I got badly displayed last\n> columns when border was not 2. Only when border is 2, then psql displays\n> last column with same width for each row.\n>\n> I think so we can force column width alignment for any border styles today\n> (for alignment and wrapping styles) or as minimum this behave can be\n> optional.\n>\n> I wrote a patch with pset option \"final_spaces\", but I don't see a reason\n> why we trim rows today.\n>\n> Regards\n>\n> Pavel\n>",
"msg_date": "Tue, 17 Sep 2019 20:06:09 +0500",
"msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: patch: psql - enforce constant width of last column"
},
{
"msg_contents": "út 17. 9. 2019 v 17:06 odesílatel Ahsan Hadi <ahsan.hadi@gmail.com> napsal:\n\n> Hi Pavel,\n>\n> I have been trying to reproduce the case of badly displaying last columns\n> of a query result-set. I played around with the legal values for psql\n> border variable but not able to find a case where last columns are badly\n> displayed. Can you please share an example that I can use to reproduce this\n> problem. I will try out your patch once I am able to reproduce the problem.\n>\n\nyou need to use pspg, and vertical cursor.\n\nhttps://github.com/okbob/pspg\nvertical cursor should be active\n\n\\pset border 1\n\\pset linestyle ascii\n\\pset pager always\n\nselect * from generate_series(1,3);\n\nRegards\n\nPavel\n\n\n> Thanks,\n>\n> -- Ahsan\n>\n>\n> On Mon, Sep 9, 2019 at 2:32 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> When I played with vertical cursor support I got badly displayed last\n>> columns when border was not 2. Only when border is 2, then psql displays\n>> last column with same width for each row.\n>>\n>> I think so we can force column width alignment for any border styles\n>> today (for alignment and wrapping styles) or as minimum this behave can be\n>> optional.\n>>\n>> I wrote a patch with pset option \"final_spaces\", but I don't see a reason\n>> why we trim rows today.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>",
"msg_date": "Tue, 17 Sep 2019 17:15:42 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: patch: psql - enforce constant width of last column"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 8:16 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> út 17. 9. 2019 v 17:06 odesílatel Ahsan Hadi <ahsan.hadi@gmail.com>\n> napsal:\n>\n>> Hi Pavel,\n>>\n>> I have been trying to reproduce the case of badly displaying last columns\n>> of a query result-set. I played around with the legal values for psql\n>> border variable but not able to find a case where last columns are badly\n>> displayed. Can you please share an example that I can use to reproduce this\n>> problem. I will try out your patch once I am able to reproduce the problem.\n>>\n>\n> you need to use pspg, and vertical cursor.\n>\n> https://github.com/okbob/pspg\n> vertical cursor should be active\n>\n\nokay thanks for the info. I don't think it was possible to figure this out\nby reading the initial post. I will check it out.\n\ndoes this patch have any value for psql without pspg?\n\n\n> \\pset border 1\n> \\pset linestyle ascii\n> \\pset pager always\n>\n> select * from generate_series(1,3);\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Thanks,\n>>\n>> -- Ahsan\n>>\n>>\n>> On Mon, Sep 9, 2019 at 2:32 PM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>> Hi\n>>>\n>>> When I played with vertical cursor support I got badly displayed last\n>>> columns when border was not 2. Only when border is 2, then psql displays\n>>> last column with same width for each row.\n>>>\n>>> I think so we can force column width alignment for any border styles\n>>> today (for alignment and wrapping styles) or as minimum this behave can be\n>>> optional.\n>>>\n>>> I wrote a patch with pset option \"final_spaces\", but I don't see a\n>>> reason why we trim rows today.\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>",
"msg_date": "Wed, 18 Sep 2019 15:52:25 +0500",
"msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: patch: psql - enforce constant width of last column"
},
{
"msg_contents": "st 18. 9. 2019 v 12:52 odesílatel Ahsan Hadi <ahsan.hadi@gmail.com> napsal:\n\n>\n>\n> On Tue, Sep 17, 2019 at 8:16 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> út 17. 9. 2019 v 17:06 odesílatel Ahsan Hadi <ahsan.hadi@gmail.com>\n>> napsal:\n>>\n>>> Hi Pavel,\n>>>\n>>> I have been trying to reproduce the case of badly displaying last\n>>> columns of a query result-set. I played around with the legal values for\n>>> psql border variable but not able to find a case where last columns are\n>>> badly displayed. Can you please share an example that I can use to\n>>> reproduce this problem. I will try out your patch once I am able to\n>>> reproduce the problem.\n>>>\n>>\n>> you need to use pspg, and vertical cursor.\n>>\n>> https://github.com/okbob/pspg\n>> vertical cursor should be active\n>>\n>\n> okay thanks for the info. I don't think it was possible to figure this out\n> by reading the initial post. I will check it out.\n>\n> does this patch have any value for psql without pspg?\n>\n\nThe benefit of this patch is just for pspg users today.\n\nPavel\n\n\n\n>\n>> \\pset border 1\n>> \\pset linestyle ascii\n>> \\pset pager always\n>>\n>> select * from generate_series(1,3);\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>> Thanks,\n>>>\n>>> -- Ahsan\n>>>\n>>>\n>>> On Mon, Sep 9, 2019 at 2:32 PM Pavel Stehule <pavel.stehule@gmail.com>\n>>> wrote:\n>>>\n>>>> Hi\n>>>>\n>>>> When I played with vertical cursor support I got badly displayed last\n>>>> columns when border was not 2. Only when border is 2, then psql displays\n>>>> last column with same width for each row.\n>>>>\n>>>> I think so we can force column width alignment for any border styles\n>>>> today (for alignment and wrapping styles) or as minimum this behave can be\n>>>> optional.\n>>>>\n>>>> I wrote a patch with pset option \"final_spaces\", but I don't see a\n>>>> reason why we trim rows today.\n>>>>\n>>>> Regards\n>>>>\n>>>> Pavel\n>>>>\n>>>",
"msg_date": "Wed, 18 Sep 2019 14:51:09 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: patch: psql - enforce constant width of last column"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 05:15:42PM +0200, Pavel Stehule wrote:\n> \n> \n> út 17. 9. 2019 v 17:06 odesílatel Ahsan Hadi <ahsan.hadi@gmail.com> napsal:\n> \n> Hi Pavel,\n> \n> I have been trying to reproduce the case of badly displaying last columns\n> of a query result-set. I played around with the legal values for psql\n> border variable but not able to find a case where last columns are badly\n> displayed. Can you please share an example that I can use to reproduce this\n> problem. I will try out your patch once I am able to reproduce the problem.\n> \n> \n> you need to use pspg, and vertical cursor.\n> \n> https://github.com/okbob/pspg\n> vertical cursor should be active\n> \n> \\pset border 1\n> \\pset linestyle ascii\n> \\pset pager always\n> \n> select * from generate_series(1,3);\n\nI was able to reproduce the failure, but with a little more work:\n\n\t$ export PSQL_PAGER='pspg --vertical-cursor'\n\t$ psql test\n\t\\pset border 1\n\t\\pset linestyle ascii\n\t\\pset pager always\n\tselect * from generate_series(1,3);\n\nLine '1' has highlighted trailing space.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 27 Sep 2019 13:47:33 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: patch: psql - enforce constant width of last column"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> st 18. 9. 2019 v 12:52 odesílatel Ahsan Hadi <ahsan.hadi@gmail.com> napsal:\n>> does this patch have any value for psql without pspg?\n\n> The benefit of this patch is just for pspg users today.\n\nTBH, I think we should just reject this patch. It makes psql's\ntable-printing behavior even more complicated than it was before.\nAnd I don't see how pspg gets any benefit --- you'll still have\nto deal with the old code, for an indefinite time into the future.\n\nMoreover, *other* programs that pay close attention to the output\nformat will be forced to deal with the possibility that this flag\nhas been turned on, which typically they wouldn't even have a way\nto find out. So I think you're basically trying to export your\nproblems onto everyone else.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Nov 2019 15:55:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch: psql - enforce constant width of last column"
},
{
"msg_contents": "Hi\n\npo 4. 11. 2019 v 21:55 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > st 18. 9. 2019 v 12:52 odesílatel Ahsan Hadi <ahsan.hadi@gmail.com>\n> napsal:\n> >> does this patch have any value for psql without pspg?\n>\n> > The benefit of this patch is just for pspg users today.\n>\n> TBH, I think we should just reject this patch. It makes psql's\n> table-printing behavior even more complicated than it was before.\n> And I don't see how pspg gets any benefit --- you'll still have\n> to deal with the old code, for an indefinite time into the future.\n>\n\nI don't think so it increase printing rules too much. A default value\n\"auto\" doesn't any change against current state, \"always\" ensure same line\nwidth of any row.\n\nThe problem, that this patch try to solve, is different width of rows -\nalthough the result is aligned.\n\nPersonally I think so current behave is not correct. Correct solution\nshould be set \"finalspaces true\" every time - for aligned output. But I\ndon't know a motivation of authors and as solution with minimal impacts I\nwrote a possibility to set (it's not default) to finalspace to \"always\" as\nfix of some possible visual artefact (although these artefacts are almost\ntime invisible).\n\nThe patch maybe looks not trivial (although it is trivial), but it is due I\ntry to reduce possible impact on any other application to zero.\n\n\n>\n> Moreover, *other* programs that pay close attention to the output\n> format will be forced to deal with the possibility that this flag\n> has been turned on, which typically they wouldn't even have a way\n> to find out. So I think you're basically trying to export your\n> problems onto everyone else.\n>\n\nI try to fix this issue where this issue coming. 
For this patch is\nimportant to get a agreement (or not) if this problem is a issue that\nshould be fixed.\n\nI think so in aligned mode all rows should to have same width.\n\nOn second hand, really I don't know why the last space is not printed, and\nif some applications had a problem with it. I have not any idea. Current\ncode where last spaces are not printed is little bit complex than if the\nalign was really complete.\n\nSure, \"the issue of last invisible space\" is not big issue (it's\ntriviality) - I really think so it should be fixed on psql side, but if\nthere will not be a agreement, I can fix it on pspg side (although it will\nnot be elegant - because I have to print chars that doesn't exists).\n\nIs here any man who remember this implementation, who can say, why the code\nis implemented how it is implemented?\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>",
"msg_date": "Mon, 4 Nov 2019 23:00:40 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: patch: psql - enforce constant width of last column"
}
] |
[
{
"msg_contents": "Hi all,\n(Andrew G. in CC)\n\nWe have the following set of header files in src/common/:\ndigit_table.h\nd2s_full_table.h\nd2s_intrinsics.h\nryu_common.h\n\nShouldn't all these files be in src/include/common/ instead? HEAD is\nnot really consistent with the common practice here.\n\nThanks,\n--\nMichael",
"msg_date": "Mon, 9 Sep 2019 21:07:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Set of header files for Ryu floating-point stuff in src/common/"
},
{
"msg_contents": ">>>>> \"Michael\" == Michael Paquier <michael@paquier.xyz> writes:\n\n Michael> Hi all,\n Michael> (Andrew G. in CC)\n\n Michael> We have the following set of header files in src/common/:\n Michael> digit_table.h\n Michael> d2s_full_table.h\n Michael> d2s_intrinsics.h\n Michael> ryu_common.h\n\n Michael> Shouldn't all these files be in src/include/common/ instead?\n\nNo.\n\na) They are implementation, not interface.\n\nb) Most of them are just data tables.\n\nc) The ones that define inline functions have some specializations (e.g.\nlimits on ranges or shift counts) that make it unwise to expose more\ngenerally.\n\nThey are kept as separate files primarily because upstream had them that\nway (and having the data tables out of the way makes the code more\nreadable). But it's explicitly not a good idea for them to be installed\nanywhere or to have any additional code depending on them, since it is\nconceivable that they might have to change without warning or disappear\nin the event that we choose to track some upstream change (or replace\nRyu entirely).\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 09 Sep 2019 15:29:36 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Set of header files for Ryu floating-point stuff in src/common/"
}
] |
[
{
"msg_contents": "pgjdbc has a bug report which is as follows:\n\nThe database has a table that has a description and a constraint.\nThe constraint also has a description.\n\nsomehow the constraint and the table end up with the same OID's after\npg_upgrade.\n\nMy understanding of pg_upgrade suggests that shouldn't happen ? I realize\noids are not guaranteed to be unique, but this seems to be quite a\ncoincidence.\n\n\nDave Cramer",
"msg_date": "Mon, 9 Sep 2019 10:58:02 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade issues"
},
{
"msg_contents": "On Mon, Sep 9, 2019 at 10:58:02AM -0400, Dave Cramer wrote:\n> pgjdbc has a bug report which is as follows:\n> \n> The database has a table that has a description and a constraint.\n> The constraint also has a description.\n> \n> somehow the constraint and the table end up with the same OID's after\n> pg_upgrade.\n> \n> My understanding of pg_upgrade suggests that shouldn't happen ? I realize oids\n> are not guaranteed to be unique, but this seems to be quite a coincidence.\n\nUh, the table and the table constraint have the same pg_description oid?\npg_upgrade just restores the pg_description descriptions and doesn't\nmodify them. Do you get an error on restore because of the duplicate\npg_description oids?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 27 Sep 2019 13:53:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade issues"
}
] |
[
{
"msg_contents": "Hi,I just want know does PostgreSQL support debian Linux with ARM CPU Platform,Thank you!",
"msg_date": "Mon, 09 Sep 2019 23:07:25 +0800",
"msg_from": "<gc_11@sina.cn>",
"msg_from_op": true,
"msg_subject": "Does PostgreSQL support debian Linux on Arm CPU Platform?"
},
{
"msg_contents": "\nOn 9/9/19 11:07 AM, gc_11@sina.cn wrote:\n> Hi,I just want know does PostgreSQL support debian Linux with ARM CPU\n> Platform,Thank you!\n\n\nSee\n<https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=mereswine&br=HEAD>\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 9 Sep 2019 13:32:30 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Does PostgreSQL support debian Linux on Arm CPU Platform?"
},
{
"msg_contents": "<gc_11@sina.cn> writes:\n\n> Hi,I just want know does PostgreSQL support debian Linux with ARM CPU Platform,Thank you!\n\nThe PostgreSQL community provided packages (https://apt.postgresql.org/)\nare only built for amd64, i386 and ppc64el, but Debian itself ships\nPostgreSQL on every architecture it supports.\n\nEach Debian release only ships one major version of PostgreSQL (the\ncurrent stable release has PostgreSQL 11), but if you need other\nversions you could build them from the apt.postgresql.org source\npackages.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n",
"msg_date": "Tue, 10 Sep 2019 11:08:31 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker)",
"msg_from_op": false,
"msg_subject": "Re: Does PostgreSQL support debian Linux on Arm CPU Platform?"
}
] |
[
{
"msg_contents": "I wondered about this transient failure:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2019-09-09%2008%3A48%3A25\n\nThe only information that was captured was\n\n# Running: /usr/sbin/slapd -f /home/bf/bfr/root/HEAD/pgsql.build/src/test/ldap/tmp_check/slapd.conf -h ldap://localhost:57797 ldaps://localhost:57798\nBail out! system /usr/sbin/slapd failed\n\nSo, slapd failed without writing anything to stderr *or* its configured\nlog file, which is pretty unhelpful. But, casting about for a possible\nexplanation, I noticed the port setup code earlier in the script:\n\nmy $ldap_port = get_free_port();\nmy $ldaps_port = $ldap_port + 1;\n\nTranslated: we're finding one free port and just assuming that\nthe next one will be free (or even exists :-().\n\nSure enough, I can reproduce the failure seen on crake if I force\nthe $ldaps_port to be a port number that something has in use.\nPresumably that transient failure occurred because something was\nusing 57798.\n\nSo this code needs to be\n\nmy $ldap_port = get_free_port();\nmy $ldaps_port = get_free_port();\n\nI'll go fix that in a moment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Sep 2019 14:05:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Sloppy port assignment in src/test/ldap/"
}
] |
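The two-port bug above is easy to reproduce outside the Perl TAP framework. Below is a minimal Python sketch of the difference between guessing `port + 1` and asking the kernel for each port independently; the helper name mirrors the TAP suite's `get_free_port()`, but this is an illustration, not the PostgreSQL test code.

```python
import socket

def get_free_port():
    """Ask the kernel for a port that is free right now.

    Binding to port 0 makes the OS pick an unused port.  Note the small
    race: after close() another process could grab it, which the real
    test helper also cannot fully prevent.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]
    s.close()
    return port

# Fragile pattern from the old test script: assume the adjacent port
# is also free, which fails transiently when something occupies it.
ldap_port = get_free_port()
ldaps_port_guess = ldap_port + 1  # may already be in use

# Fixed pattern, as committed: acquire each port independently.
ldaps_port = get_free_port()

print(ldap_port, ldaps_port)
```

The fix in the thread is exactly this one-line change: call the allocator twice instead of doing arithmetic on its result.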
[
{
"msg_contents": "As far as I can see, PostgreSQL BuildFarm is used to detect build failures on a large collection of platforms and configurations. Is it necessary to do function and performance tests on new platforms, and are there any tools to do this? Thank you very much. \n----- Original Message -----\nFrom: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>\nTo: gc_11@sina.cn, pgsql-hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Does PostgreSQL support debian Linux on Arm CPU Platform?\nDate: 2019-09-10 01:32\n\n\nOn 9/9/19 11:07 AM, gc_11@sina.cn wrote:\n> Hi,I just want know does PostgreSQL support debian Linux with ARM CPU\n> Platform,Thank you!\nSee\n<https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=mereswine&br=HEAD>\ncheers\nandrew\n-- \nAndrew Dunstan                https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 10 Sep 2019 10:28:07 +0800",
"msg_from": "<gc_11@sina.cn>",
"msg_from_op": true,
"msg_subject": "Reply: Re: Does PostgreSQL support debian Linux on Arm CPU Platform?"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently we do not try to pull up sub-select of type ANY_SUBLINK if it\nrefers to any Vars of the parent query, as indicated in the code snippet\nbelow:\n\nJoinExpr *\nconvert_ANY_sublink_to_join(PlannerInfo *root, SubLink *sublink,\n                            Relids available_rels)\n{\n    ...\n\n    if (contain_vars_of_level((Node *) subselect, 1))\n        return NULL;\n\n\nWhy do we have this check?\n\nCan we try to pull up direct-correlated ANY SubLink with the help of\nLATERAL? That is, do the pull up in the same way as uncorrelated ANY\nSubLink, by adding the SubLink's subselect to the query's rangetable,\nbut explicitly set LATERAL for its RangeTblEntry, like:\n\n--- a/src/backend/optimizer/plan/subselect.c\n+++ b/src/backend/optimizer/plan/subselect.c\n@@ -1226,13 +1226,6 @@ convert_ANY_sublink_to_join(PlannerInfo *root,\nSubLink *sublink,\n     Assert(sublink->subLinkType == ANY_SUBLINK);\n\n     /*\n-     * The sub-select must not refer to any Vars of the parent query.\n(Vars of\n-     * higher levels should be okay, though.)\n-     */\n-    if (contain_vars_of_level((Node *) subselect, 1))\n-        return NULL;\n-\n-    /*\n      * The test expression must contain some Vars of the parent query,\nelse\n      * it's not gonna be a join.  (Note that it won't have Vars\nreferring to\n      * the subquery, rather Params.)\n@@ -1267,7 +1260,7 @@ convert_ANY_sublink_to_join(PlannerInfo *root,\nSubLink *sublink,\n     rte = addRangeTableEntryForSubquery(pstate,\n                                         subselect,\n                                         makeAlias(\"ANY_subquery\", NIL),\n-                                        false,\n+                                        contain_vars_of_level((Node *)\nsubselect, 1), /* lateral */\n                                         false);\n     parse->rtable = lappend(parse->rtable, rte);\n     rtindex = list_length(parse->rtable);\n\n\nBy this way, we can convert the query:\n\nselect * from a where a.i = ANY(select i from b where a.j > b.j);\n\nTo:\n\nselect * from a SEMI JOIN lateral(select * from b where a.j > b.j) sub on\na.i = sub.i;\n\n\nDoes this make sense?\n\nThanks\nRichard",
"msg_date": "Tue, 10 Sep 2019 15:26:47 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "Richard Guo <riguo@pivotal.io> wrote:\n\n> Can we try to pull up direct-correlated ANY SubLink with the help of\n> LATERAL?\n\n> By this way, we can convert the query:\n> \n> select * from a where a.i = ANY(select i from b where a.j > b.j);\n> \n> To:\n> \n> select * from a SEMI JOIN lateral(select * from b where a.j > b.j)\n> sub on a.i = sub.i;\n> \n\nI tried this a few years ago. This is where the problems started:\n\nhttps://www.postgresql.org/message-id/1386716782.5203.YahooMailNeo%40web162905.mail.bf1.yahoo.com\n\nI'm not sure I remember enough, but the problem has something to do with one\npossible strategy to plan SEMI JOIN: unique-ify the inner path and then\nperform plain INNER JOIN instead.\n\nI think the problem was that the WHERE clause of the subquery didn't\nparticipate in the SEMI JOIN evaluation and was used as filter instead. Thus\nthe clause's Vars were not used in unique keys of the inner path and so the\nSEMI JOIN didn't work well.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 10 Sep 2019 10:31:48 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "Richard Guo <riguo@pivotal.io> writes:\n> Currently we do not try to pull up sub-select of type ANY_SUBLINK if it\n> refers to any Vars of the parent query, as indicated in the code snippet\n> below:\n> if (contain_vars_of_level((Node *) subselect, 1))\n> return NULL;\n> Why do we have this check?\n\nBecause the result would not be a join between two independent tables.\n\n> Can we try to pull up direct-correlated ANY SubLink with the help of\n> LATERAL?\n\nPerhaps. But what's the argument that you'd end up with a better\nplan? LATERAL pretty much constrains things to use a nestloop,\nso I'm not sure there's anything fundamentally different.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Sep 2019 09:48:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "Hi Antonin,\n\nOn Tue, Sep 10, 2019 at 4:31 PM Antonin Houska <ah@cybertec.at> wrote:\n\n> Richard Guo <riguo@pivotal.io> wrote:\n>\n> > Can we try to pull up direct-correlated ANY SubLink with the help of\n> > LATERAL?\n>\n> > By this way, we can convert the query:\n> >\n> > select * from a where a.i = ANY(select i from b where a.j > b.j);\n> >\n> > To:\n> >\n> > select * from a SEMI JOIN lateral(select * from b where a.j > b.j)\n> > sub on a.i = sub.i;\n> >\n>\n> I tried this a few years ago. This is where the problems started:\n>\n>\n> https://www.postgresql.org/message-id/1386716782.5203.YahooMailNeo%40web162905.mail.bf1.yahoo.com\n\n\nThank you for this link. Good to know the discussions years ago.\n\n\n> I'm not sure I remember enough, but the problem has something to do with\n> one\n> possible strategy to plan SEMI JOIN: unique-ify the inner path and then\n> perform plain INNER JOIN instead.\n>\n> I think the problem was that the WHERE clause of the subquery didn't\n> participate in the SEMI JOIN evaluation and was used as filter instead.\n> Thus\n> the clause's Vars were not used in unique keys of the inner path and so the\n> SEMI JOIN didn't work well.\n>\n\nThis used to be a problem until it was fixed by commit 043f6ff0, which\nincludes the postponed qual from a LATERAL subquery into the quals seen\nby make_outerjoininfo().\n\nThanks\nRichard",
"msg_date": "Wed, 11 Sep 2019 15:19:09 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > Can we try to pull up direct-correlated ANY SubLink with the help of\n> > LATERAL?\n> \n> Perhaps. But what's the argument that you'd end up with a better\n> plan? LATERAL pretty much constrains things to use a nestloop,\n> so I'm not sure there's anything fundamentally different.\n\nI think that subquery pull-up is most beneficial when the queries (both the\nsubquery and the upper query) contain more than a few tables. In such a case,\nif only a few tables reference the upper query (or if just a single one does),\nthe constraints imposed by LATERAL might be less significant.\n\nNevertheless, I don't know how to overcome the problems that I mentioned\nupthread.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Wed, 11 Sep 2019 09:25:05 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "Hi Antonin,\n\nOn Wed, Sep 11, 2019 at 3:25 PM Antonin Houska <ah@cybertec.at> wrote:\n\n>\n> Nevertheless, I don't know how to overcome the problems that I mentioned\n> upthread.\n>\n\nDo you mean the problem \"the WHERE clause of the subquery didn't\nparticipate in the SEMI JOIN evaluation\"? Good news is it has been fixed\nby commit 043f6ff0 as I mentioned upthread.\n\nThanks\nRichard",
"msg_date": "Wed, 11 Sep 2019 16:32:33 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "Hi Tom,\n\nOn Tue, Sep 10, 2019 at 9:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> > Can we try to pull up direct-correlated ANY SubLink with the help of\n> > LATERAL?\n>\n> Perhaps.  But what's the argument that you'd end up with a better\n> plan?  LATERAL pretty much constrains things to use a nestloop,\n> so I'm not sure there's anything fundamentally different.\n>\n\nThis is a point I didn't think of. In that case if the pull-up mostly\nresults in a nestloop then we cannot make sure we will get a better\nplan. Thank you for pointing it out.\n\nThanks\nRichard",
"msg_date": "Thu, 12 Sep 2019 10:59:24 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "Richard Guo <riguo@pivotal.io> wrote:\n\n> On Wed, Sep 11, 2019 at 3:25 PM Antonin Houska <ah@cybertec.at>\n> wrote:\n> \n> \n> Nevertheless, I don't know how to overcome the problems that I\n> mentioned\n> upthread.\n> \n> \n> Do you mean the problem \"the WHERE clause of the subquery didn't\n> participate in the SEMI JOIN evaluation\"? Good news is it has been\n> fixed\n> by commit 043f6ff0 as I mentioned upthread.\n\nDo you say that my old patch (rebased) no longer breaks the regression tests?\n\n(I noticed your other email in the thread which seems to indicate that you're\nno longer interested to work on the feature, but asking out of curiosity.)\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 12 Sep 2019 17:35:21 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 11:35 PM Antonin Houska <ah@cybertec.at> wrote:\n\n> Richard Guo <riguo@pivotal.io> wrote:\n>\n> > On Wed, Sep 11, 2019 at 3:25 PM Antonin Houska <ah@cybertec.at>\n> > wrote:\n> >\n> >\n> > Nevertheless, I don't know how to overcome the problems that I\n> > mentioned\n> > upthread.\n> >\n> >\n> > Do you mean the problem \"the WHERE clause of the subquery didn't\n> > participate in the SEMI JOIN evaluation\"? Good news is it has been\n> > fixed\n> > by commit 043f6ff0 as I mentioned upthread.\n>\n> Do you say that my old patch (rebased) no longer breaks the regression\n> tests?\n>\n\nI think so.\n\n\n>\n> (I noticed your other email in the thread which seems to indicate that\n> you're\n> no longer interested to work on the feature, but asking out of\n> curiosity.)\n>\n\nTom pointed out that even if we pull up the subquery with the help of\nLATERAL, we cannot make sure we will end up with a better plan, since\nLATERAL pretty much constrains things to use a nestloop. Hmm, I think\nwhat he said makes sense.\n\nThanks\nRichard",
"msg_date": "Tue, 17 Sep 2019 16:41:34 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 4:41 PM Richard Guo <riguo@pivotal.io> wrote:\n\n>\n> On Thu, Sep 12, 2019 at 11:35 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n>> Richard Guo <riguo@pivotal.io> wrote:\n>>\n>> > On Wed, Sep 11, 2019 at 3:25 PM Antonin Houska <ah@cybertec.at>\n>> > wrote:\n>> >\n>> >\n>> > Nevertheless, I don't know how to overcome the problems that I\n>> > mentioned\n>> > upthread.\n>> >\n>> >\n>> > Do you mean the problem \"the WHERE clause of the subquery didn't\n>> > participate in the SEMI JOIN evaluation\"? Good news is it has been\n>> > fixed\n>> > by commit 043f6ff0 as I mentioned upthread.\n>>\n>> Do you say that my old patch (rebased) no longer breaks the regression\n>> tests?\n>>\n>\n> I think so.\n>\n>\n>>\n>> (I noticed your other email in the thread which seems to indicate that\n>> you're\n>> no longer interested to work on the feature, but asking out of\n>> curiosity.)\n>>\n>\n> Tom pointed out that even if we pull up the subquery with the help of\n> LATERAL, we cannot make sure we will end up with a better plan, since\n> LATERAL pretty much constrains things to use a nestloop. Hmm, I think\n> what he said makes sense.\n>\n> Thanks\n> Richard\n>\n>\n\nEven if we can't do this for the general case, I still think we can do\nsomething\nfor some special cases, for example:\nselect count(*) from j1 where (i) in (select i from j2 where j2.im5 =\nj1.im5);\ncan be converted to\nselect count(*) from t1 where (i, im5) in (select i, im5 from j2);\n\nThe conversion can happen just before the convert_ANY_sublink_to_join.\n\n@@ -399,6 +483,7 @@ pull_up_sublinks_qual_recurse(PlannerInfo *root, Node\n*node,\n     /* Is it a convertible ANY or EXISTS clause? */\n     if (sublink->subLinkType == ANY_SUBLINK)\n     {\n+        reduce_sublink_correlation_exprs(root, sublink);\n         if ((j = convert_ANY_sublink_to_join(root, sublink,\n\n available_rels1)) != NULL)\n\nHowever we have to do lots of pre checking for this, the below is\nsomething I can think of for now.\n\n1). It must be an in-subquery.\n2). The op in correlation_expr must be a mergeable op.\n3). no aggregation call in subquery->targetList and subquery->havingQual.\n4). no limit/offset clause.\n5). No volatile function involved, for safety.\n\nI can't tell how often it is; I just ran into this on my own, and searching the\nmaillist I got only 1 report [1]. Is it something worth doing, or do we\nhave\na better strategy to handle it? Thanks!\n\n[1] https://www.postgresql.org/message-id/3691.1342650974@sss.pgh.pa.us\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 19 Aug 2020 13:55:16 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "On Tue, Sep 10, 2019 at 9:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > Can we try to pull up direct-correlated ANY SubLink with the help of\n> > LATERAL?\n>\n> Perhaps.  But what's the argument that you'd end up with a better\n> plan?  LATERAL pretty much constrains things to use a nestloop,\n> so I'm not sure there's anything fundamentally different.\n\n\nSorry for the noise replying to such an old thread, but recently I\nrealized that pulling up direct-correlated ANY SubLink with LATERAL may\ncause another problem: we cannot find any legal join order due to\nthe constraints imposed by LATERAL references. Below is an example:\n\nselect * from A where exists\n  (select * from B where A.i in (select C.i from C where C.j = B.j));\n\nFor this query, after we convert the ANY SubLink to a LATERAL\nsubquery, the subquery cannot be pulled up further into the parent query\nbecause its qual contains a lateral reference to 'B', which is outside a\nhigher semi join. When considering the join of 'A' and the 'subquery',\nwe decide it's not legal due to the LATERAL reference. As a result, we\nend up with not finding any legal join order for level 2.\n\nThanks\nRichard",
"msg_date": "Thu, 21 Jul 2022 15:37:04 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
},
{
"msg_contents": "Hi All:\n\nOn Tue, Sep 10, 2019 at 9:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <riguo@pivotal.io> writes:\n> > Currently we do not try to pull up sub-select of type ANY_SUBLINK if it\n> > refers to any Vars of the parent query, as indicated in the code snippet\n> > below:\n> > if (contain_vars_of_level((Node *) subselect, 1))\n> > return NULL;\n> > Why do we have this check?\n>\n> Because the result would not be a join between two independent tables.\n>\n\nI think this situation is caused by the fact that we pull up the ANY-sublink in 2\nsteps: the first step is to pull up the sublink as a subquery, and the\nnext step is to pull up the subquery if that is allowed. The benefits of\nthis method are obvious: pulling up the subquery has more requirements, so\neven if we can only finish the first step, we still get huge benefits.\nHowever, the bad stuff happens when varlevelsup = 1 is involved: step 1 fails!\n\nThe solution here is to use a lateral join to connect the two\nindependent tables; the issues with this solution include:\n\n1. LATERAL pretty much constrains things to use a nestloop like below,\nbut this reason is questioned, since if we can pull up the subquery, the\nconstraint is gone. [1]\n2. It has something to do with unique-ifying the inner path [2], but Richard\nthought\nit should be fixed, though without agreement from all people [3].\n3. Richard [4] found it would fail to get a plan for some queries (the\nerror is\nbelow, per my testing).\n\n> ERROR: failed to build any 3-way joins\n\nSo, back to the root cause of this issue: IIUC, when varlevelsup = 1 is\ninvolved,\ncan we just bypass the 2-step method, just as we do for EXISTS\nsublinks? If so, we just need to convert the ANY-SUBLINK to an EXISTS-SUBLINK\nin that case.\n\nAttached is a commit which includes the 2 methods discussed\nhere, each controlled by a separate GUC, for easy testing. Per my\ntest,\nQuery 2 chose the Unique Join with the IN-to-EXISTS method, but not\nwith the Lateral method, and query 3 raises an error with the lateral method,\nbut not with the IN-to-EXISTS method.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/60794.1568104308%40antos#365d5ec69fd605a8569a2674a33909a1\n[2] https://www.postgresql.org/message-id/60794.1568104308%40antos\n[3]\nhttps://www.postgresql.org/message-id/CAN_9JTzqa-3RmHAw3wZv099Rk8xX480YdEvGy%2BJAdVw8dTnHRA%40mail.gmail.com\n[4]\nhttps://www.postgresql.org/message-id/CAMbWs49cvkF9akbomz_fCCKS%3DD5TY%3D4KGHEQcfHPZCXS1GVhkA%40mail.gmail.com\n\n\n\n\n> > Can we try to pull up direct-correlated ANY SubLink with the help of\n> > LATERAL?\n>\n> Perhaps.  But what's the argument that you'd end up with a better\n> plan?  LATERAL pretty much constrains things to use a nestloop,\n> so I'm not sure there's anything fundamentally different.\n>\n> regards, tom lane\n>\n>\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sun, 30 Oct 2022 15:28:38 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pulling up direct-correlated ANY_SUBLINK"
}
] |
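The rewrite Richard proposes — turning a correlated `ANY`/`IN` sublink into a semi-join — can at least be checked for semantic equivalence on toy data. The sketch below uses Python's sqlite3 with made-up rows (it exercises SQL semantics only, not PostgreSQL's planner or the join-order problems discussed above), and spells the semi-join with `EXISTS` since SQLite has no explicit `SEMI JOIN` or `LATERAL` syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (i INT, j INT)")
cur.execute("CREATE TABLE b (i INT, j INT)")
# Hypothetical rows, chosen so the correlation clause a.j > b.j matters.
cur.executemany("INSERT INTO a VALUES (?, ?)", [(1, 5), (2, 1), (3, 9), (1, 2)])
cur.executemany("INSERT INTO b VALUES (?, ?)", [(1, 3), (2, 2), (4, 0), (1, 8)])

# Original form: direct-correlated ANY sublink (a.j is referenced inside).
original = cur.execute(
    "SELECT * FROM a WHERE a.i IN "
    "(SELECT b.i FROM b WHERE a.j > b.j) ORDER BY 1, 2"
).fetchall()

# Decorrelated form: the semi-join the proposed LATERAL pull-up targets,
# with the test expression (b.i = a.i) and the correlated qual combined.
rewritten = cur.execute(
    "SELECT * FROM a WHERE EXISTS "
    "(SELECT 1 FROM b WHERE b.i = a.i AND a.j > b.j) ORDER BY 1, 2"
).fetchall()

assert original == rewritten
print(original)
```

This only demonstrates that the two spellings return the same rows; whether the planner produces a cheaper plan for the join form is exactly the question Tom raises in the thread.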
[
{
"msg_contents": "Hi,\n\nwhen the environment variable POSTGRESQL_UPGRADE_PGUPGRADE_OPTIONS is\nused to specify options for pg_upgrade, options related to\nunix_socket_directory/ies are being overridden by hardcoded options,\nmaking it difficult to upgrade in some usecases.\n\nThe attached patch changes the order of those options so that the\nhardcoded ones are eventually overridden by the user specified\noptions.\n\nAs I can see that in PostgreSQL 12 this issue has been solved by\nimplementing the -socketdir argument, my questions would be as\nfollows:\n\n1) Could such change break something that I might have missed?\n2) Would you be willing to accept this patch for versions prior to 12?\n\nThanks in advance.\n\n-- \nPatrik Novotný\n\nAssociate Software Engineer\n\nRed Hat\n\npanovotn@redhat.com\n\n\n",
"msg_date": "Tue, 10 Sep 2019 14:37:19 +0200",
"msg_from": "Patrik Novotny <panovotn@redhat.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Move user options to the end of the command in pg_upgrade"
},
{
"msg_contents": "Patrik Novotny <panovotn@redhat.com> writes:\n> when the environment variable POSTGRESQL_UPGRADE_PGUPGRADE_OPTIONS is\n> used to specify options for pg_upgrade, options related to\n> unix_socket_directory/ies are being overridden by hardcoded options,\n> making it difficult to upgrade in some usecases.\n\n> The attached patch changes the order of those options so that the\n> hardcoded ones are eventually overridden by the user specified\n> options.\n\nHi Patrik,\n\nIt looks like you forgot to attach the patch? But in any case,\nI see no references to POSTGRESQL_UPGRADE_PGUPGRADE_OPTIONS in\nany community Postgres code, so I'm wondering if this is just\nchanging some script that Red Hat supplies as part of packaging.\nThat would make it not our concern, really.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Sep 2019 10:13:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Move user options to the end of the command in pg_upgrade"
},
{
"msg_contents": "Hi Tom,\n\nthanks for you reply. You're right, and I apologise for the confusion.\nOptions I was talking about are specified via the `--old-options`\nparameter of the pg_upgrade (ex.: --old-options '-c\nunix_socket_directories=/run')\nMentioning of the environment variable came only from my own\nconfusion. I also attached the mentioned patch.\n\n\n\nRegards,\n\nOn Tue, Sep 10, 2019 at 4:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Patrik Novotny <panovotn@redhat.com> writes:\n> > when the environment variable POSTGRESQL_UPGRADE_PGUPGRADE_OPTIONS is\n> > used to specify options for pg_upgrade, options related to\n> > unix_socket_directory/ies are being overridden by hardcoded options,\n> > making it difficult to upgrade in some usecases.\n>\n> > The attached patch changes the order of those options so that the\n> > hardcoded ones are eventually overridden by the user specified\n> > options.\n>\n> Hi Patrik,\n>\n> It looks like you forgot to attach the patch? But in any case,\n> I see no references to POSTGRESQL_UPGRADE_PGUPGRADE_OPTIONS in\n> any community Postgres code, so I'm wondering if this is just\n> changing some script that Red Hat supplies as part of packaging.\n> That would make it not our concern, really.\n>\n> regards, tom lane\n\n\n\n-- \n\nPatrik Novotný\n\nAssociate Software Engineer\n\nRed Hat\n\npanovotn@redhat.com",
"msg_date": "Tue, 10 Sep 2019 16:46:58 +0200",
"msg_from": "Patrik Novotny <panovotn@redhat.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Move user options to the end of the command in pg_upgrade"
},
{
"msg_contents": "Patrik Novotny <panovotn@redhat.com> writes:\n> thanks for you reply. You're right, and I apologise for the confusion.\n> Options I was talking about are specified via the `--old-options`\n> parameter of the pg_upgrade (ex.: --old-options '-c\n> unix_socket_directories=/run')\n> Mentioning of the environment variable came only from my own\n> confusion. I also attached the mentioned patch.\n\nAh, now I see what you're on about. I agree that this is a good\nchange ... and we probably should add a comment reminding people\nto keep the user options at the end, because somebody[1] broke this\nthrough add-at-the-end syndrome.\n\n\t\t\tregards, tom lane\n\n[1] ... me, in fact\n\n\n",
"msg_date": "Tue, 10 Sep 2019 11:06:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Move user options to the end of the command in pg_upgrade"
},
{
"msg_contents": "I wrote:\n> Patrik Novotny <panovotn@redhat.com> writes:\n>> thanks for you reply. You're right, and I apologise for the confusion.\n>> Options I was talking about are specified via the `--old-options`\n>> parameter of the pg_upgrade (ex.: --old-options '-c\n>> unix_socket_directories=/run')\n>> Mentioning of the environment variable came only from my own\n>> confusion. I also attached the mentioned patch.\n\n> Ah, now I see what you're on about. I agree that this is a good\n> change ... and we probably should add a comment reminding people\n> to keep the user options at the end, because somebody[1] broke this\n> through add-at-the-end syndrome.\n\nActually ... now that I look more carefully, I'm not sure this change\nwould improve matters. You can't just reach in and select a different\nsocket directory behind pg_upgrade's back; if you try, the connection\nattempts later are going to fail, because pg_upgrade will be telling\npg_dump to use what it thinks the socket directory is.\n\nYou might be better off back-patching the addition of the --socketdir\noption (commit 2d34ad84303181111c6f0747186857ff50106267).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Sep 2019 15:34:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Move user options to the end of the command in pg_upgrade"
}
] |
[
{
"msg_contents": "I have figured out another solution to the problem that macOS SIP\ndefeats the use of DYLD_LIBRARY_PATH for running the temp-install\nregression tests. It's not without problems either, but it might show a\npath forward.\n\nFirst of all, I think I now know the exact mechanism by which this\nbreakage happens.\n\nThe precise issue is that /bin/sh filters out DYLD_* environment\nvariables (presumably all, but at least the ones we care about) when it\nstarts. If you use a shell other than /bin/sh (say, a Homebrew\ninstallation of bash or dash), there is no problem.\n\nBut /bin/sh is hardcoded in the system() library call, so in order to\nfix that, you need to override that library call. Attached is a patch\nthat shows how this could be done. It uses the DYLD_INSERT_LIBRARIES\nenvironment variable (equivalent to LD_PRELOAD) to substitute another\nversion of system(), which I hacked to allow overriding /bin/sh with\nanother shell using the environment variable PG_REGRESS_SHELL. That works.\n\nThere are also some other places where PostgreSQL code itself hardcodes\n/bin/sh as part of system()-like functionality. These have to be fixed\nup similarly, but that's easier.\n\nThe problem now is that DYLD_INSERT_LIBRARIES requires the \"flat\nnamespace\", which isn't the default. You can either build PostgreSQL\nwith -Wl,-flat_namespace, which works, but it's probably weird as a\nproper solution, or you can set the environment variable\nDYLD_FORCE_FLAT_NAMESPACE at run time, which also works but makes\neverything brutally slow.\n\nI think the way forward here is to get rid of all uses of system() for\ncalling between PostgreSQL programs. There are only a handful of those,\nand we already have well-tested replacement code like spawn_process() in\npg_regress.c that could be used. 
(Perhaps we could also use that\nopportunity to get rid of the need for shell quoting?)\n\nThere is a minor second issue, namely that /usr/bin/perl also filters\nout DYLD_* environment variables. This can be worked around again by\nusing a third-party installation of Perl. You just need to make sure\nthat the \"prove\" program calls that installation instead of the system\none. (I just manually edited the shebang line. There is probably a\nproper way to do it.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 10 Sep 2019 19:14:19 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "another look at macOS SIP"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I have figured out another solution to the problem that macOS SIP\n> defeats the use of DYLD_LIBRARY_PATH for running the temp-install\n> regression tests. It's not without problems either, but it might show a\n> path forward.\n> ...\n> The precise issue is that /bin/sh filters out DYLD_* environment\n> variables (presumably all, but at least the ones we care about) when it\n> starts.\n\nYeah, that was pretty much what we'd speculated.\n\n> I think the way forward here is to get rid of all uses of system() for\n> calling between PostgreSQL programs.\n\nWe could do that perhaps, but how are you going to get make to not use\n/bin/sh while spawning subprocesses? I don't think we want to also\nreimplement make ...\n\n> There is a minor second issue, namely that /usr/bin/perl also filters\n> out DYLD_* environment variables. This can be worked around again by\n> using a third-party installation of Perl.\n\nThis is not sounding better than just turning off SIP :-(\n\nWe could, however, probably fix things so that our Perl test scripts\nre-establish those environment variables internally. We don't need\nthe perl processes themselves to load test libraries, just their\ndescendants.\n\nMaybe a similar workaround is possible for the \"make\" issue?\nI have a feeling it would be less flexible than what we have\ntoday, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Sep 2019 13:26:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: another look at macOS SIP"
},
{
"msg_contents": "On 2019-09-10 19:26, Tom Lane wrote:\n>> I think the way forward here is to get rid of all uses of system() for\n>> calling between PostgreSQL programs.\n> \n> We could do that perhaps, but how are you going to get make to not use\n> /bin/sh while spawning subprocesses? I don't think we want to also\n> reimplement make ...\n\nmake is not a problem if the DYLD_* assignments are in a makefile rule\n(as currently), because then make just calls a shell with a string\n\"DYLD_*=foo some command\", which is not affected by any filtering. It\nwould be a problem if you do the variable assignments in a makefile\noutside a rule or outside a makefile.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Sep 2019 20:59:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: another look at macOS SIP"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-10 19:14:19 +0200, Peter Eisentraut wrote:\n> I think the way forward here is to get rid of all uses of system() for\n> calling between PostgreSQL programs. There are only a handful of those,\n> and we already have well-tested replacement code like spawn_process() in\n> pg_regress.c that could be used. (Perhaps we could also use that\n> opportunity to get rid of the need for shell quoting?)\n\nYea, I think that'd be good, regardless of SIP.\n\n\n> There is a minor second issue, namely that /usr/bin/perl also filters\n> out DYLD_* environment variables. This can be worked around again by\n> using a third-party installation of Perl. You just need to make sure\n> that the \"prove\" program calls that installation instead of the system\n> one. (I just manually edited the shebang line. There is probably a\n> proper way to do it.)\n\nHm, could we just have perl code set DYLD_* again? I assume we don't\nneed prove itself to have it set, and for the testscripts we could just\nset it in TestLib.pm or such?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Sep 2019 10:52:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: another look at macOS SIP"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-09-10 19:14:19 +0200, Peter Eisentraut wrote:\n>> There is a minor second issue, namely that /usr/bin/perl also filters\n>> out DYLD_* environment variables. This can be worked around again by\n>> using a third-party installation of Perl.\n\n> Hm, could we just have perl code set DYLD_* again? I assume we don't\n> need prove itself to have it set, and for the testscripts we could just\n> set it in TestLib.pm or such?\n\nYeah, that's what I was suggesting. \"Use another copy of Perl\" doesn't\nseem like an acceptable answer, or at least it's hardly better than\n\"turn off SIP\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Sep 2019 15:43:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: another look at macOS SIP"
},
{
"msg_contents": "On 2019-09-17 21:43, Tom Lane wrote:\n> Yeah, that's what I was suggesting. \"Use another copy of Perl\" doesn't\n> seem like an acceptable answer, or at least it's hardly better than\n> \"turn off SIP\".\n\nIn my mind, the Perl aspect of this is the most trivial part of the\nproblem. \"brew install perl\" is probably faster than writing out this\nemail. But I suppose everyone has their own workflows.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Sep 2019 22:12:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: another look at macOS SIP"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-09-17 21:43, Tom Lane wrote:\n>> Yeah, that's what I was suggesting. \"Use another copy of Perl\" doesn't\n>> seem like an acceptable answer, or at least it's hardly better than\n>> \"turn off SIP\".\n\n> In my mind, the Perl aspect of this is the most trivial part of the\n> problem. \"brew install perl\" is probably faster than writing out this\n> email. But I suppose everyone has their own workflows.\n\nThere's a not-insignificant contingent that don't wish to install\neither homebrew or macports.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Sep 2019 16:14:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: another look at macOS SIP"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 1:52 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-09-10 19:14:19 +0200, Peter Eisentraut wrote:\n> > I think the way forward here is to get rid of all uses of system() for\n> > calling between PostgreSQL programs. There are only a handful of those,\n> > and we already have well-tested replacement code like spawn_process() in\n> > pg_regress.c that could be used. (Perhaps we could also use that\n> > opportunity to get rid of the need for shell quoting?)\n>\n> Yea, I think that'd be good, regardless of SIP.\n\n+1, and making some progress on the SIP issue would be good, too, even\nif we don't fix everything right away. It seems entirely possible\nthat Apple will make this even more annoying to disable than it\nalready is.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Sep 2019 09:01:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: another look at macOS SIP"
}
] |
[
{
"msg_contents": "Hi,\n\nPlease see attached draft of the PG12 Beta 4 press release.\n\nI went through the list of open items that were resolved before beta\n4[1] for the detailed please. Please let me know if I described any of\nthem incorrectly, or if you believe that any other fixes should be on\nthe list.\n\nThanks!\n\nJonathan\n\n[1]\nhttps://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items#resolved_before_12beta4",
"msg_date": "Tue, 10 Sep 2019 21:37:33 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PG12 Beta 4 Press Release"
}
] |
[
{
"msg_contents": "Thank you very much.\n----- Original Message -----\nFrom: ilmari@ilmari.org (Dagfinn Ilmari Mannsåker)\nTo: <gc_11@sina.cn>\nCc: \"pgsql-hackers\" <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Does PostgreSQL support debian Linux on Arm CPU Platform?\nDate: 2019-09-10 18:08\n\n\n<gc_11@sina.cn> writes:\n> Hi,I just want know does PostgreSQL support debian Linux with ARM CPU Platform,Thank you!\nThe PostgreSQL community provided packages (https://apt.postgresql.org/)\nare only built for amd64, i386 and ppc64el, but Debian itself ships\nPostgreSQL on every architecture it supports.\nEach Debian release only ships one major version of PostgreSQL (the\ncurrent stable release has PostgreSQL 11), but if you need other\nversions you could build them from the apt.postgresql.org source\npackages.\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl",
"msg_date": "Wed, 11 Sep 2019 13:37:40 +0800",
"msg_from": "<gc_11@sina.cn>",
"msg_from_op": true,
"msg_subject": "Reply: Re: Does PostgreSQL support debian Linux on Arm CPU Platform?"
}
] |
[
{
"msg_contents": "Hi\r\n\r\ncreate table omega(a int);\r\ncreate view omega_view as select * from omega;\r\ninsert into omega values(10);\r\n\r\npostgres=# select table_type, table_name from information_schema.tables\r\nwhere table_name like 'omega%';\r\n┌────────────┬────────────┐\r\n│ table_type │ table_name │\r\n╞════════════╪════════════╡\r\n│ BASE TABLE │ omega │\r\n│ VIEW │ omega_view │\r\n└────────────┴────────────┘\r\n(2 rows)\r\n\r\npostgres=# create materialized view omega_m_view as select * from omega;\r\nSELECT 1\r\npostgres=# select table_type, table_name from information_schema.tables\r\nwhere table_name like 'omega%';\r\n┌────────────┬────────────┐\r\n│ table_type │ table_name │\r\n╞════════════╪════════════╡\r\n│ BASE TABLE │ omega │\r\n│ VIEW │ omega_view │\r\n└────────────┴────────────┘\r\n(2 rows)\r\n\r\npostgres=# refresh materialized view omega_m_view ;\r\nREFRESH MATERIALIZED VIEW\r\npostgres=# select table_type, table_name from information_schema.tables\r\nwhere table_name like 'omega%';\r\n┌────────────┬────────────┐\r\n│ table_type │ table_name │\r\n╞════════════╪════════════╡\r\n│ BASE TABLE │ omega │\r\n│ VIEW │ omega_view │\r\n└────────────┴────────────┘\r\n(2 rows)\r\n\r\nIs it expected behave? 
Tested on master branch.\r\n\r\nPavel",
"msg_date": "Wed, 11 Sep 2019 08:14:21 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "don't see materialized views in information_schema"
},
{
"msg_contents": "On 2019-09-11 08:14, Pavel Stehule wrote:\n> Hi\n> \n> [matviews not showing up in information_schema.tables]\n> \n> Is it expected behave? Tested on master branch.\n\nI think it is; it has been like this all along.\n\n( matviews are in pg_matviews. )\n\n\n\n",
"msg_date": "Wed, 11 Sep 2019 09:49:11 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: don't see materialized views in information_schema"
},
{
"msg_contents": "st 11. 9. 2019 v 9:49 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> On 2019-09-11 08:14, Pavel Stehule wrote:\n> > Hi\n> >\n> > [matviews not showing up in information_schema.tables]\n> >\n> > Is it expected behave? Tested on master branch.\n>\n> I think it is; it has been like this all along.\n>\n> ( matviews are in pg_matviews. )\n>\n\nMinimally I miss a entry in information_schema.views\n\nTo today I expected so any object should be listed somewhere in\ninformation_schema.\n\nPavel",
"msg_date": "Wed, 11 Sep 2019 10:02:21 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: don't see materialized views in information_schema"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 10:03 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> st 11. 9. 2019 v 9:49 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n>>\n>> On 2019-09-11 08:14, Pavel Stehule wrote:\n>> > Hi\n>> >\n>> > [matviews not showing up in information_schema.tables]\n>> >\n>> > Is it expected behave? Tested on master branch.\n>>\n>> I think it is; it has been like this all along.\n>>\n>> ( matviews are in pg_matviews. )\n>\n>\n> Minimally I miss a entry in information_schema.views\n>\n> To today I expected so any object should be listed somewhere in information_schema.\n>\n\nThere has been previous discussion about this topic:\n\nhttps://www.postgresql.org/message-id/3794.1412980686@sss.pgh.pa.us\n\n\nRegards,\n\nJuan José Santamaría Flecha\n\n\n",
"msg_date": "Wed, 11 Sep 2019 10:52:18 +0200",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: don't see materialized views in information_schema"
},
{
"msg_contents": "st 11. 9. 2019 v 10:52 odesílatel Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> napsal:\n\n> On Wed, Sep 11, 2019 at 10:03 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > st 11. 9. 2019 v 9:49 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n> >>\n> >> On 2019-09-11 08:14, Pavel Stehule wrote:\n> >> > Hi\n> >> >\n> >> > [matviews not showing up in information_schema.tables]\n> >> >\n> >> > Is it expected behave? Tested on master branch.\n> >>\n> >> I think it is; it has been like this all along.\n> >>\n> >> ( matviews are in pg_matviews. )\n> >\n> >\n> > Minimally I miss a entry in information_schema.views\n> >\n> > To today I expected so any object should be listed somewhere in\n> information_schema.\n> >\n>\n> There has been previous discussion about this topic:\n>\n> https://www.postgresql.org/message-id/3794.1412980686@sss.pgh.pa.us\n\n\nunderstand now.\n\nThank you\n\nPavel\n\n\n>\n>\n> Regards,\n>\n> Juan José Santamaría Flecha\n>",
"msg_date": "Wed, 11 Sep 2019 10:54:42 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: don't see materialized views in information_schema"
}
] |
[
{
"msg_contents": "Dear ALL,\n\nI want to report and consult about DECLARE STATEMENT.\nThis feature, committed last February, has some bugs.\n\n* This is not thread-independent.\n* If some cursors are declared for the same SQL identifier, \n only one cursor you declared at last is enabled.\n* This syntax does not have oracle compatibility.\n\nIn order to modify bugs, I think many designs, implementations, \nand specifications should be changed.\nAny operations will be done at the preprocessor process, like #define\nmacro in C.\n\nIt will take about 2 or 3 weeks to make a patch.\nIs it acceptable for PG12?\n\nBest Regards,\n\nHayato Kuroda\nFUJITSU LIMITED\nE-Mail:kuroda.hayato@fujitsu.com\n\n\n\n",
"msg_date": "Wed, 11 Sep 2019 09:46:38 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "A problem presentaion about ECPG, DECLARE STATEMENT"
},
{
"msg_contents": "Hi Kuroda-san,\n\n> This feature, committed last February, has some bugs.\n> ...\n> * This syntax does not have oracle compatibility.\n\nThis in itself is not a bug. If the syntax is not standard compliant,\nthen it's a bug. That of course does not mean we would not like to be\nOracle compatible where possible.\n\n> In order to modify bugs, I think many designs, implementations, \n> and specifications should be changed.\n\nI hope the authors of said patch speak up and explain why they\nimplemented it as is.\n\n> Is it acceptable for PG12?\n\nIn general absolutely.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n\n",
"msg_date": "Wed, 11 Sep 2019 14:32:05 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: A problem presentaion about ECPG, DECLARE STATEMENT"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> Hi Kuroda-san,\n>> In order to modify bugs, I think many designs, implementations, \n>> and specifications should be changed.\n\n> I hope the authors of said patch speak up and explain why they\n> implemented it as is.\n\n>> Is it acceptable for PG12?\n\n> In general absolutely.\n\nIt seems far too late to be considering any major rewrite for v12;\neven assuming that there was consensus on the rewrite being an\nimprovement, which I bet there won't be.\n\n\"Two or three weeks from now\" we'll be thinking about pushing 12.0\nout the door. We can't be accepting major definitional changes\nat that point.\n\nIf a solid case can be made that ECPG's DECLARE STATEMENT was done\nwrong, we'd be better off to just revert the feature out of v12\nand try again, under less time pressure, for v13.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Sep 2019 09:54:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A problem presentaion about ECPG, DECLARE STATEMENT"
},
{
"msg_contents": "> > > Is it acceptable for PG12?\n> > In general absolutely.\n> \n> It seems far too late to be considering any major rewrite for v12;\n> even assuming that there was consensus on the rewrite being an\n> improvement, which I bet there won't be.\n\nOops, I read 13. Yes, it's obviously way too late for 12. Sorry for the\nnoise.\n\nIn this case I'd like to details about what is wrong with the\nimplementation.\n\nThanks.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n\n",
"msg_date": "Wed, 11 Sep 2019 18:04:03 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: A problem presentaion about ECPG, DECLARE STATEMENT"
},
{
"msg_contents": "On 2019-09-11 18:04, Michael Meskes wrote:\n>>>> Is it acceptable for PG12?\n>>> In general absolutely.\n>>\n>> It seems far too late to be considering any major rewrite for v12;\n>> even assuming that there was consensus on the rewrite being an\n>> improvement, which I bet there won't be.\n> \n> Oops, I read 13. Yes, it's obviously way too late for 12. Sorry for the\n> noise.\n> \n> In this case I'd like to details about what is wrong with the\n> implementation.\n\nI tried finding some information about where the idea for this statement\ncame from but couldn't find any. The documentation references Oracle\nand DB2, and while they indeed do have this statement, it doesn't seem\nto be used for the same purpose. The only purpose in ECPG appears to be\nto associate a statement with a connection, but for example the DB2\nimplementation doesn't even have the AT clause, so I don't see how that\ncould be the same.\n\nMoreover, I've been wondering about the behavior detail given in the\ntable at\n<https://www.postgresql.org/docs/devel/ecpg-sql-declare-statement.html>.\n In scenario 3, the declare statement says con1 but the subsequent\ndynamic statement says con2, and as a result of that, con1 is used.\nThis is not intuitive, I'd say, but given that there is no indication of\nwhere this statement came from or whose idea it follows, it's unclear\nhow to evaluate that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Sep 2019 13:12:17 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: A problem presentaion about ECPG, DECLARE STATEMENT"
},
{
"msg_contents": "Dear all, \r\n\r\nHi, thank you for replying.\r\n\r\n> It seems far too late to be considering any major rewrite for v12;\r\n\r\n> If a solid case can be made that ECPG's DECLARE STATEMENT was done\r\n> wrong, we'd be better off to just revert the feature out of v12\r\n> and try again, under less time pressure, for v13.\r\n\r\nI see, I'll propose this at the next commitfest.\r\nBut I'm now considering this commit should be reverted in order to avoid \r\nthe confusion.\r\n\r\nIn oracle and postgres, this statement is used for the purpose of designating\r\na connection easily. If two functions have a similar goal, these ones should be\r\nused by same way. Some specifications denoted in the document follow oracle's one.\r\nMaybe it's not indicated in the oracle manual, and I understand it should be \r\ndiscussed more.\r\n\r\nNow, one of the major difference of usage between these DBMSs is namespace.\r\nThe current namespace unit of postgres is a process, however, oracle ensures \r\nthat SQL identifiers are unique only within the file. This means that only \r\npostgres user cannot recycle identifier. This distinction might be inconvenient, \r\nand it makes more confusing to change a namespace after releasing Postgres 12.\r\n\r\nI'm now planning remake this function and change namespace unit from a process \r\nto a file.\r\nSo I recommend you to throw this away temporally.\r\n\r\nI want to hear your opinion.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFujitsu LIMITED\r\n\r\n",
"msg_date": "Wed, 18 Sep 2019 11:41:33 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: A problem presentaion about ECPG, DECLARE STATEMENT"
},
{
    "msg_contents": "Dear Peter,\r\n\r\nI want to add a note about another purpose:\r\ndeclaring an SQL identifier.\r\n\r\nIn Oracle (and maybe DB2), the following example is not allowed:\r\n\r\n...\r\nEXEC SQL DECLARE cursor CURSOR FOR stmt;\r\n\t\t\t\t ^^^^\r\nEXEC SQL PREPARE stmt FOR \"SELECT ...\"\r\n...\r\n\r\nThis happens because these preprocessors cannot recognize stmt as an SQL identifier and\r\nthrow an error.\r\nI think DB2 might focus on this, so the AT clause is not important for them.\r\nBut ECPG can accept these statements, so it has no meaning for Postgres.\r\nThat is why I did not mention it and focused instead on the omission of the AT clause.\r\n\r\nHayato Kuroda\r\nFujitsu LIMITED\r\n\r\n",
"msg_date": "Wed, 18 Sep 2019 11:46:17 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: A problem presentaion about ECPG, DECLARE STATEMENT"
},
{
"msg_contents": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> writes:\n>> If a solid case can be made that ECPG's DECLARE STATEMENT was done\n>> wrong, we'd be better off to just revert the feature out of v12\n>> and try again, under less time pressure, for v13.\n\n> I see, I'll propose this at the next commitfest.\n> But I'm now considering this commit should be reverted in order to avoid \n> the confusion.\n\nPer this discussion, I've reverted DECLARE STATEMENT out of v12 and HEAD\nbranches.\n\nOne thing that could use more eyeballs on it is the changes in\necpg_register_prepared_stmt(); that was added after DECLARE STATEMENT\nso there was no prior state to revert to, and I had to guess a bit.\nWhat I guessed, seeing that the lone caller of that function is\nalready using stmt->connection, was that it was completely bogus\nfor ecpg_register_prepared_stmt() to be doing its own new connection\nlookup and it should just use stmt->connection. But I might be wrong\nsince I'm not too clear about where connection lookups are supposed\nto be done in this library.\n\nAnother comment is that this was one of the more painful reverts\nI've ever had to do. Some of the pain was unavoidable because\nthere were later commits (mostly the PREPARE AS one) changing\nadjacent code ... but a lot of it was due to flat-out sloppiness\nin the DECLARE STATEMENT patch, particularly with respect to\nwhitespace. Please run the next submission through pgindent\nbeforehand. Also please pay attention to the documentation cleanups\nthat other people made after the initial patch. We don't want to\nhave to repeat that cleanup work a second time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Sep 2019 12:59:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A problem presentaion about ECPG, DECLARE STATEMENT"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 01:12:17PM +0200, Peter Eisentraut wrote:\n> Moreover, I've been wondering about the behavior detail given in the\n> table at\n> <https://www.postgresql.org/docs/devel/ecpg-sql-declare-statement.html>.\n> In scenario 3, the declare statement says con1 but the subsequent\n> dynamic statement says con2, and as a result of that, con1 is used.\n> This is not intuitive, I'd say, but given that there is no indication of\n> where this statement came from or whose idea it follows, it's unclear\n> how to evaluate that.\n\nFYI, I was totally confused by this also when researching this for the\nPG 12 release notes. I am glad we are going to redo it for PG 13.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 27 Sep 2019 14:14:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: A problem presentaion about ECPG, DECLARE STATEMENT"
}
] |
[
{
    "msg_contents": "Hi,\n\nI reproduced the error \"exceeded maxAllocatedDescs (492) while trying\nto open file ...\", which was also discussed in the thread [1].\nThis issue is similar but not exactly the same as [1]. In [1], the\nfile for which this error used to show up was\n\"pg_logical/mappings/map....\", while here it's the .spill file. And\nhere the issue, in short, seems to be: the .spill file does not get\nclosed there and then, unlike in [1] where there was a file descriptor\nleak.\n\nI could reproduce it using a transaction containing a long series of\nsub-transactions (it possibly could be reproduced without\nsub-transactions, but looking at the code I could come up with this\nscript using sub-transactions easily):\n\ncreate table tab(id int);\n\n-- Function that does huge changes in a single transaction\ncreate or replace function f(id int) returns int as\n$$\nbegin\n    -- Iterate this more than 492 times (max transient file\ndescriptors PG would typically allow)\n    -- This will create that many sub-transactions due to presence of\nexception block.\n    FOR i IN 1..600 LOOP\n\n    BEGIN\n        -- Iterate more than 4096 times (so that changes spill to\ndisk: max_changes_in_memory)\n        FOR j IN 1..5000 LOOP\n            insert into tab values (1);\n        END LOOP;\n    EXCEPTION\n        when division_by_zero then perform 'dummy';\n    END;\n\n    END LOOP;\n\n    return id;\nend $$ language plpgsql;\n\nSELECT * FROM pg_create_logical_replication_slot('logical', 'test_decoding');\n\nbegin;\nselect f(1); -- Do huge changes in a single transaction\ncommit;\n\n\\! 
pg_recvlogical -d postgres --slot=logical --verbose --start -f -\n\npg_recvlogical: starting log streaming at 0/0 (slot logical)\npg_recvlogical: streaming initiated\npg_recvlogical: confirming write up to 0/0, flush to 0/0 (slot logical)\nBEGIN 1869\npg_recvlogical: confirming write up to 0/1B6D6E38, flush to 0/1B6D6E38\n(slot logical)\npg_recvlogical: error: unexpected termination of replication stream:\nERROR: exceeded maxAllocatedDescs (492) while trying to open file\n\"pg_replslot/logical/xid-2362-lsn-0-24000000.spill\"\npg_recvlogical: disconnected; waiting 5 seconds to try again\n\nLooking at the code, what might be happening is this:\nReorderBufferIterTXNInit()=>ReorderBufferRestoreChanges() opens the\nfiles, but leaves them open if end of file is not reached. Eventually\nif end of file is reached, it gets closed. The function returns\nwithout closing the file descriptor if the reorder buffer changes being\nrestored are more than max_changes_in_memory. Probably later on, the\nrest of the changes get restored in another\nReorderBufferRestoreChanges() call. But meanwhile, if there are a lot\nof such files open, we can run out of the max files that PG decides\nto keep open (it has some logic that takes into account system files\nallowed to be open, and already opened).\n\nOffhand, what I am thinking is, we need to close the file descriptor\nbefore returning from ReorderBufferRestoreChanges(), and keep track of\nthe file offset and file path, so that next time we can resume reading\nfrom there.\n\nComments?\n\n[1] https://www.postgresql.org/message-id/flat/738a590a-2ce5-9394-2bef-7b1caad89b37%402ndquadrant.com\n\n--\nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Wed, 11 Sep 2019 16:14:18 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On 2019-Sep-11, Amit Khandekar wrote:\n\n> I reproduced the error \"exceeded maxAllocatedDescs (492) while trying\n> to open file ...\", which was also discussed about in the thread [1].\n> This issue is similar but not exactly the same as [1]. In [1], the\n> file for which this error used to show up was\n> \"pg_logical/mappings/map....\" , while here it's the .spill file. And\n> here the issue , in short, seems to be : The .spill file does not get\n> closed there and then, unlike in [1] where there was a file descriptor\n> leak.\n\nUh-oh :-( Thanks for the reproducer -- I confirm it breaks things.\n\n> Looking at the code, what might be happening is,\n> ReorderBufferIterTXNInit()=>ReorderBufferRestoreChanges() opens the\n> files, but leaves them open if end of file is not reached. Eventually\n> if end of file is reached, it gets closed. The function returns back\n> without closing the file descriptor if reorder buffer changes being\n> restored are more than max_changes_in_memory. Probably later on, the\n> rest of the changes get restored in another\n> ReorderBufferRestoreChanges() call. But meanwhile, if there are a lot\n> of such files opened, we can run out of the max files that PG decides\n> to keep open (it has some logic that takes into account system files\n> allowed to be open, and already opened).\n\nMakes sense.\n\n> Offhand, what I am thinking is, we need to close the file descriptor\n> before returning from ReorderBufferRestoreChanges(), and keep track of\n> the file offset and file path, so that next time we can resume reading\n> from there.\n\nI think doing this all the time would make restore very slow -- there's\na reason we keep the files open, after all. It would be better if we\ncan keep the descriptors open as much as possible, and only close them\nif there's trouble. 
I was under the impression that using\nOpenTransientFile was already taking care of that, but that's evidently\nnot the case.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Sep 2019 09:51:40 -0300",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 09:51:40AM -0300, Alvaro Herrera from 2ndQuadrant wrote:\n>On 2019-Sep-11, Amit Khandekar wrote:\n>\n>> I reproduced the error \"exceeded maxAllocatedDescs (492) while trying\n>> to open file ...\", which was also discussed about in the thread [1].\n>> This issue is similar but not exactly the same as [1]. In [1], the\n>> file for which this error used to show up was\n>> \"pg_logical/mappings/map....\" , while here it's the .spill file. And\n>> here the issue , in short, seems to be : The .spill file does not get\n>> closed there and then, unlike in [1] where there was a file descriptor\n>> leak.\n>\n>Uh-oh :-( Thanks for the reproducer -- I confirm it breaks things.\n>\n>> Looking at the code, what might be happening is,\n>> ReorderBufferIterTXNInit()=>ReorderBufferRestoreChanges() opens the\n>> files, but leaves them open if end of file is not reached. Eventually\n>> if end of file is reached, it gets closed. The function returns back\n>> without closing the file descriptor if reorder buffer changes being\n>> restored are more than max_changes_in_memory. Probably later on, the\n>> rest of the changes get restored in another\n>> ReorderBufferRestoreChanges() call. But meanwhile, if there are a lot\n>> of such files opened, we can run out of the max files that PG decides\n>> to keep open (it has some logic that takes into account system files\n>> allowed to be open, and already opened).\n>\n>Makes sense.\n>\n>> Offhand, what I am thinking is, we need to close the file descriptor\n>> before returning from ReorderBufferRestoreChanges(), and keep track of\n>> the file offset and file path, so that next time we can resume reading\n>> from there.\n>\n>I think doing this all the time would make restore very slow -- there's a\n>reason we keep the files open, after all.\n\nHow much slower? 
It certainly will have a hit, but maybe it's negligible\ncompared to all the other stuff happening in this code?\n\n>It would be better if we can keep the descriptors open as much as\n>possible, and only close them if there's trouble. I was under the\n>impression that using OpenTransientFile was already taking care of that,\n>but that's evidently not the case.\n>\n\nI don't see how the current API could do that transparently - it does\ntrack the files, but the user only gets a file descriptor. With just a\nfile descriptor, how could the code know to do reopen/seek when it's going\njust through the regular fopen/fclose?\n\nAnyway, I agree we need to do something, to fix this corner case (many\nserialized in-progress transactions). ISTM we have two options - either do\nsomething in the context of reorderbuffer.c, or extend the transient file\nAPI somehow. I'd say the second option is the right thing going forward,\nbecause it does allow doing it transparently and without leaking details\nabout maxAllocatedDescs etc. There are two issues, though - it does\nrequire changes / extensions to the API, and it's not backpatchable.\n\nSo maybe we should start with the localized fix in reorderbuffer, and I\nagree tracking offset seems reasonable.\n\nAs a sidenote - in the other thread about streaming, one of the patches\ndoes change how we log subxact assignments. In the end, this allows using\njust a single file for the top-level transaction, instead of having one\nfile per subxact. That would also solve this.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 12 Sep 2019 11:30:55 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On 2019-Sep-12, Tomas Vondra wrote:\n\n> On Wed, Sep 11, 2019 at 09:51:40AM -0300, Alvaro Herrera from 2ndQuadrant wrote:\n> > On 2019-Sep-11, Amit Khandekar wrote:\n\n> > I think doing this all the time would make restore very slow -- there's a\n> > reason we keep the files open, after all.\n> \n> How much slower? It certainly will have a hit, but maybe it's negligible\n> compared to all the other stuff happening in this code?\n\nI dunno -- that's a half-assed guess based on there being many more\nsyscalls, and on the fact that the API is how it is in the first place.\n(Andres did a lot of perf benchmarking and tweaking back when he was\nwriting this ... I just point out that I have a colleague that had to\ninvent *a lot* of new MemCxt infrastructure in order to make some of\nAndres' perf optimizations cleaner, just as a semi-related data point.\nAnyway, I digress.)\n\nAnyway, such a fix would pessimize all cases, including every single\ncase that works today (which evidently is almost every single usage of\nthis feature, since we hadn't heard of this problem until yesterday), in\norder to solve a problem that you can only find in very rare ones.\n\nAnother point of view is that we should make it work first, then make it\nfast. But the point remains that it works fine and fast for 99.99% of\ncases.\n\n> > It would be better if we can keep the descriptors open as much as\n> > possible, and only close them if there's trouble. I was under the\n> > impression that using OpenTransientFile was already taking care of that,\n> > but that's evidently not the case.\n> \n> I don't see how the current API could do that transparently - it does\n> track the files, but the user only gets a file descriptor. 
With just a\n> file descriptor, how could the code know to do reopen/seek when it's going\n> just through the regular fopen/fclose?\n\nYeah, I don't know what was in Amit's mind, but it seemed obvious to me\nthat such a fix required changing that API so that a seekpos is kept\ntogether with the fd. ReorderBufferRestoreChange is static in\nreorderbuffer.c so it's not a problem to change its ABI.\n\nI agree with trying to get a reorderbuffer-localized back-patchable fix\nfirst, then see how to improve from that.\n\n> As a sidenote - in the other thread about streaming, one of the patches\n> does change how we log subxact assignments. In the end, this allows using\n> just a single file for the top-level transaction, instead of having one\n> file per subxact. That would also solve this.\n\n:-( While I would love to get that patch done and get rid of the issue,\nit's not a backpatchable fix either. ... however, it does mean we maybe\ncan get away with the reorderbuffer.c-local fix, and then just use your\nstreaming stuff to get rid of the perf problem forever?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Sep 2019 09:57:55 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 5:31 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I don't see how the current API could do that transparently - it does\n> track the files, but the user only gets a file descriptor. With just a\n> file descriptor, how could the code know to do reopen/seek when it's going\n> just through the regular fopen/fclose?\n>\n> Anyway, I agree we need to do something, to fix this corner case (many\n> serialized in-progress transactions). ISTM we have two options - either do\n> something in the context of reorderbuffer.c, or extend the transient file\n> API somehow. I'd say the second option is the right thing going forward,\n> because it does allow doing it transparently and without leaking details\n> about maxAllocatedDescs etc. There are two issues, though - it does\n> require changes / extensions to the API, and it's not backpatchable.\n>\n> So maybe we should start with the localized fix in reorderbuffer, and I\n> agree tracking offset seems reasonable.\n\nWe've already got code that knows how to track this sort of thing.\nYou just need to go through the File abstraction (PathNameOpenFile or\nPathNameOpenFilePerm or OpenTemporaryFile) rather than using the\nfunctions that deal directly with fds (OpenTransientFile,\nBasicOpenFile, etc.). It seems like it would be better to reuse the\nexisting VFD layer than to invent a whole new one specific to logical\nreplication.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Sep 2019 09:41:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-12 09:41:02 -0400, Robert Haas wrote:\n> On Thu, Sep 12, 2019 at 5:31 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> > I don't see how the current API could do that transparently - it does\n> > track the files, but the user only gets a file descriptor. With just a\n> > file descriptor, how could the code know to do reopen/seek when it's going\n> > just through the regular fopen/fclose?\n> >\n> > Anyway, I agree we need to do something, to fix this corner case (many\n> > serialized in-progress transactions). ISTM we have two options - either do\n> > something in the context of reorderbuffer.c, or extend the transient file\n> > API somehow. I'd say the second option is the right thing going forward,\n> > because it does allow doing it transparently and without leaking details\n> > about maxAllocatedDescs etc. There are two issues, though - it does\n> > require changes / extensions to the API, and it's not backpatchable.\n> >\n> > So maybe we should start with the localized fix in reorderbuffer, and I\n> > agree tracking offset seems reasonable.\n> \n> We've already got code that knows how to track this sort of thing.\n> You just need to go through the File abstraction (PathNameOpenFile or\n> PathNameOpenFilePerm or OpenTemporaryFile) rather than using the\n> functions that deal directly with fds (OpenTransientFile,\n> BasicOpenFile, etc.). It seems like it would be better to reuse the\n> existing VFD layer than to invent a whole new one specific to logical\n> replication.\n\nYea, I agree that that is the right fix.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 Sep 2019 11:31:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-12 09:57:55 -0300, Alvaro Herrera wrote:\n> On 2019-Sep-12, Tomas Vondra wrote:\n> \n> > On Wed, Sep 11, 2019 at 09:51:40AM -0300, Alvaro Herrera from 2ndQuadrant wrote:\n> > > On 2019-Sep-11, Amit Khandekar wrote:\n> \n> > > I think doing this all the time would make restore very slow -- there's a\n> > > reason we keep the files open, after all.\n> > \n> > How much slower? It certainly will have a hit, but maybe it's negligible\n> > compared to all the other stuff happening in this code?\n\nI'd expect it to be significant.\n\n\n> > As a sidenote - in the other thread about streaming, one of the patches\n> > does change how we log subxact assignments. In the end, this allows using\n> > just a single file for the top-level transaction, instead of having one\n> > file per subxact. That would also solve this.\n\nUhm, how is rollback to savepoint going to be handled in that case? I\ndon't think it's great to just retain space for all rolled back\nsubtransactions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 Sep 2019 11:34:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 11:34:01AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-09-12 09:57:55 -0300, Alvaro Herrera wrote:\n>> On 2019-Sep-12, Tomas Vondra wrote:\n>>\n>> > On Wed, Sep 11, 2019 at 09:51:40AM -0300, Alvaro Herrera from 2ndQuadrant wrote:\n>> > > On 2019-Sep-11, Amit Khandekar wrote:\n>>\n>> > > I think doing this all the time would make restore very slow -- there's a\n>> > > reason we keep the files open, after all.\n>> >\n>> > How much slower? It certainly will have a hit, but maybe it's negligible\n>> > compared to all the other stuff happening in this code?\n>\n>I'd expect it to be significant.\n>\n>\n>> > As a sidenote - in the other thread about streaming, one of the patches\n>> > does change how we log subxact assignments. In the end, this allows using\n>> > just a single file for the top-level transaction, instead of having one\n>> > file per subxact. That would also solve this.\n>\n>Uhm, how is rollback to savepoint going to be handled in that case? I\n>don't think it's great to just retain space for all rolled back\n>subtransactions.\n>\n\nThe code can just do ftruncate() to the position of the subtransaction.\nThat's what the patch [1] does.\n\n[1] https://commitfest.postgresql.org/24/1927/\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 13 Sep 2019 16:57:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
    "msg_contents": "On Thu, 12 Sep 2019 at 19:11, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Sep 12, 2019 at 5:31 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> > I don't see how the current API could do that transparently - it does\n> > track the files, but the user only gets a file descriptor. With just a\n> > file descriptor, how could the code know to do reopen/seek when it's going\n> > just through the regular fopen/fclose?\n> >\n> > Anyway, I agree we need to do something, to fix this corner case (many\n> > serialized in-progress transactions). ISTM we have two options - either do\n> > something in the context of reorderbuffer.c, or extend the transient file\n> > API somehow. I'd say the second option is the right thing going forward,\n> > because it does allow doing it transparently and without leaking details\n> > about maxAllocatedDescs etc. There are two issues, though - it does\n> > require changes / extensions to the API, and it's not backpatchable.\n> >\n> > So maybe we should start with the localized fix in reorderbuffer, and I\n> > agree tracking offset seems reasonable.\n>\n\n> We've already got code that knows how to track this sort of thing.\n\nYou mean tracking excess kernel fds, right? Yeah, we can use VFDs so\nthat excess fds are automatically closed. But Alvaro seems to be\ntalking in the context of tracking the file seek position. VFD does not\nhave a mechanism to track file offsets if one of the VFD-cached files\nis closed and reopened. Robert, are you suggesting adding this\ncapability to VFD? I agree that we could do it, but for\nback-patching, offhand I couldn't think of a simpler way.\n\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Fri, 13 Sep 2019 21:28:12 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Amit Khandekar <amitdkhan.pg@gmail.com> writes:\n> You mean tracking excess kernel fds right ? Yeah, we can use VFDs so\n> that excess fds are automatically closed. But Alvaro seems to be\n> talking in context of tracking of file seek position. VFD does not\n> have a mechanism to track file offsets if one of the vfd cached file\n> is closed and reopened.\n\nHm. It used to, but somebody got rid of that on the theory that\nwe could use pread/pwrite instead. I'm inclined to think that that\nwas the right tradeoff, but it'd mean that getting logical decoding\nto adhere to the VFD API requires extra work to track file position\non the caller side.\n\nAgain, though, the advice that's been given here is that we should\nfix logical decoding to use the VFD API as it stands, not change\nthat API. I concur with that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Sep 2019 12:14:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 12:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Khandekar <amitdkhan.pg@gmail.com> writes:\n> > You mean tracking excess kernel fds right ? Yeah, we can use VFDs so\n> > that excess fds are automatically closed. But Alvaro seems to be\n> > talking in context of tracking of file seek position. VFD does not\n> > have a mechanism to track file offsets if one of the vfd cached file\n> > is closed and reopened.\n>\n> Hm. It used to, but somebody got rid of that on the theory that\n> we could use pread/pwrite instead. I'm inclined to think that that\n> was the right tradeoff, but it'd mean that getting logical decoding\n> to adhere to the VFD API requires extra work to track file position\n> on the caller side.\n\nOops. I forgot that we'd removed that.\n\n> Again, though, the advice that's been given here is that we should\n> fix logical decoding to use the VFD API as it stands, not change\n> that API. I concur with that.\n\nA reasonable position. So I guess logical decoding has to track the\nfile position itself, but perhaps use the VFD layer for managing FD\npooling.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 13 Sep 2019 12:31:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
    "msg_contents": "On Fri, 13 Sep 2019 at 22:01, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Sep 13, 2019 at 12:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Again, though, the advice that's been given here is that we should\n> > fix logical decoding to use the VFD API as it stands, not change\n> > that API. I concur with that.\n>\n> A reasonable position. So I guess logical decoding has to track the\n> file position itself, but perhaps use the VFD layer for managing FD\n> pooling.\n\nYeah, something like the attached patch. I think this tracking of\noffsets would have been cleaner if we added in-built support in VFD. But\nyeah, for back branches at least, we need to handle it outside of VFD.\nOr maybe we would add it if we find one more use case.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company",
"msg_date": "Sat, 14 Sep 2019 20:35:25 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
    "msg_contents": "Amit Khandekar <amitdkhan.pg@gmail.com> writes:\n> On Fri, 13 Sep 2019 at 22:01, Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Fri, Sep 13, 2019 at 12:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Again, though, the advice that's been given here is that we should\n>>> fix logical decoding to use the VFD API as it stands, not change\n>>> that API. I concur with that.\n\n>> A reasonable position. So I guess logical decoding has to track the\n>> file position itself, but perhaps use the VFD layer for managing FD\n>> pooling.\n\n> Yeah, something like the attached patch. I think this tracking of\n> offsets would have been cleaner if we added in-built support in VFD. But\n> yeah, for back branches at least, we need to handle it outside of VFD.\n> Or maybe we would add it if we find one more use case.\n\nAgain, we had that and removed it, for what seem to me to be solid\nreasons. It adds cycles when we're forced to close/reopen a file,\nand it also adds failure modes that we could do without (ie, failure\nof either the ftell or the lseek, which are particularly nasty because\nthey shouldn't happen according to the VFD abstraction). I do not\nthink there is going to be any argument strong enough to make us\nput it back, especially not for non-mainstream callers like logical\ndecoding.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 14 Sep 2019 14:34:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-14 14:34:21 -0400, Tom Lane wrote:\n> Amit Khandekar <amitdkhan.pg@gmail.com> writes:\n> > On Fri, 13 Sep 2019 at 22:01, Robert Haas <robertmhaas@gmail.com> wrote:\n> >> On Fri, Sep 13, 2019 at 12:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Again, though, the advice that's been given here is that we should\n> >>> fix logical decoding to use the VFD API as it stands, not change\n> >>> that API. I concur with that.\n> \n> >> A reasonable position. So I guess logical decoding has to track the\n> >> file position itself, but perhaps use the VFD layer for managing FD\n> >> pooling.\n> \n> > Yeah, something like the attached patch. I think this tracking of\n> > offsets would have been cleaner if we add in-built support in VFD. But\n> > yeah, for bank branches at least, we need to handle it outside of VFD.\n> > Or may be we would add it if we find one more use-case.\n> \n> Again, we had that and removed it, for what seem to me to be solid\n> reasons. It adds cycles when we're forced to close/reopen a file,\n> and it also adds failure modes that we could do without (ie, failure\n> of either the ftell or the lseek, which are particularly nasty because\n> they shouldn't happen according to the VFD abstraction). I do not\n> think there is going to be any argument strong enough to make us\n> put it back, especially not for non-mainstream callers like logical\n> decoding.\n\nYea, I think that's the right call. Avoiding kernel seeks is quite\nworthwhile, and we shouldn't undo it just because of this usecase. And\nthat'll become more and more important performance-wise (and has already\ndone so, with all the intel fixes making syscalls much slower).\n\nI could see an argument for adding a separate generic layer providing\nposition tracking ontop of the VFD abstraction however. Seems quite\npossible that there's some other parts of the system that could benefit\nfrom using VFDs rather than plain fds. 
And they'd probably also need the\npositional tracking.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Sep 2019 08:49:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, 17 Sep 2019 at 21:19, Andres Freund <andres@anarazel.de> wrote:\n> On 2019-09-14 14:34:21 -0400, Tom Lane wrote:\n> > Amit Khandekar <amitdkhan.pg@gmail.com> writes:\n> > > Yeah, something like the attached patch. I think this tracking of\n> > > offsets would have been cleaner if we add in-built support in VFD. But\n> > > yeah, for bank branches at least, we need to handle it outside of VFD.\n> > > Or may be we would add it if we find one more use-case.\n> >\n> > Again, we had that and removed it, for what seem to me to be solid\n> > reasons. It adds cycles when we're forced to close/reopen a file,\n> > and it also adds failure modes that we could do without (ie, failure\n> > of either the ftell or the lseek, which are particularly nasty because\n> > they shouldn't happen according to the VFD abstraction).\n\nOk. So you mean, when the caller would call FileRead() for sequential\nreading, underneath VFD would do a pread(), but if pread() returns\nerror, the errno can belong to read() or it might as well belong to\nlseek(). If it's due to lseek(), it's not expected from the caller\nbecause for the caller it's just a sequential read. Yeah, makes sense.\n\n>> I do not\n> > think there is going to be any argument strong enough to make us\n> > put it back, especially not for non-mainstream callers like logical\n> > decoding.\n\nOk. Also, more about putting back is in the below comments ...\n\n>\n> Yea, I think that's the right call. Avoiding kernel seeks is quite\n> worthwhile, and we shouldn't undo it just because of this usecase. And\n> that'll become more and more important performance-wise (and has already\n> done so, with all the intel fixes making syscalls much slower).\n\nBy the way, I was not thinking about adding back the read() and\nlseek() calls. I was saying we continue to use the pread() call, so\nit's just a single system call. 
FileReadAt(..., offset) would do\npread() with user-supplied offset, and FileRead() would do pread()\nusing internally tracked offset. So for the user, FileReadAt() is like\npread(), and FileRead() would be like read().\n\nBut I agree with Tom's objection about having to unnecessarily handle\nlseek error codes.\n\n>\n> I could see an argument for adding a separate generic layer providing\n> position tracking ontop of the VFD abstraction however. Seems quite\n> possible that there's some other parts of the system that could benefit\n> from using VFDs rather than plain fds. And they'd probably also need the\n> positional tracking.\n\nYeah, that also could be done.\n\nProbably, for now at least, what everyone seems to agree is to take my\nearlier attached patch forward.\n\nI am going to see if I can add a TAP test for the patch, and will add\nthe patch into the commitfest soon.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Wed, 18 Sep 2019 12:24:50 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, 18 Sep 2019 at 12:24, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> Probably, for now at least, what everyone seems to agree is to take my\n> earlier attached patch forward.\n>\n> I am going to see if I can add a TAP test for the patch, and will add\n> the patch into the commitfest soon.\n\nAttached is an updated patch v2. Has a new test scenario added in\ncontrib/test_decoding/sql/spill test, and some minor code cleanup.\n\nGoing to add this into Nov commitfest.\n\n\n\n\n\n--\nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company",
"msg_date": "Thu, 3 Oct 2019 16:47:19 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Oct 3, 2019 at 4:48 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Wed, 18 Sep 2019 at 12:24, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > Probably, for now at least, what everyone seems to agree is to take my\n> > earlier attached patch forward.\n> >\n> > I am going to see if I can add a TAP test for the patch, and will add\n> > the patch into the commitfest soon.\n>\n> Attached is an updated patch v2.\n>\n\nI see that you have made changes in ReorderBufferRestoreChanges to use\nPathNameOpenFile, but not in ReorderBufferSerializeTXN. Is there a\nreason for the same? In my test environment, with the test provided\nby you, I got the error (reported in this thread) via\nReorderBufferSerializeTXN. See call stack below:\n\n!errfinish(int dummy=0, ...) Line 441 C\n!OpenTransientFilePerm(const char * fileName=0x012deeac, int\nfileFlags=33033, unsigned short fileMode=384) Line 2272 + 0x57 bytes\nC\n!OpenTransientFile(const char * fileName=0x012deeac, int\nfileFlags=33033) Line 2256 + 0x15 bytes C\n!ReorderBufferSerializeTXN(ReorderBuffer * rb=0x01ee4d80,\nReorderBufferTXN * txn=0x1f9a6ce8) Line 2302 + 0x11 bytes C\n!ReorderBufferIterTXNInit(ReorderBuffer * rb=0x01ee4d80,\nReorderBufferTXN * txn=0x01f08f80) Line 1044 + 0xd bytes C\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Nov 2019 17:20:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Mon, 18 Nov 2019 at 17:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I see that you have made changes in ReorderBufferRestoreChanges to use\n> PathNameOpenFile, but not in ReorderBufferSerializeTXN. Is there a\n> reason for the same? In my test environment, with the test provided\n> by you, I got the error (reported in this thread) via\n> ReorderBufferSerializeTXN.\n\nYou didn't get this error with the patch applied, did you ?\n\nIf you were debugging this without the patch applied, I suspect that\nthe reason why ReorderBufferSerializeTXN() => OpenTransientFile() is\ngenerating this error is because the max limit must be already crossed\nbecause of earlier calls to ReorderBufferRestoreChanges().\n\nNote that in ReorderBufferSerializeTXN(), OpenTransientFile() is\nsufficient because the code in that function has made sure the fd gets\nclosed there itself.\n\nIf you are getting this error even with the patch applied, then this\nneeds investigation.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Mon, 18 Nov 2019 17:50:22 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Mon, Nov 18, 2019 at 5:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 3, 2019 at 4:48 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > On Wed, 18 Sep 2019 at 12:24, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > > Probably, for now at least, what everyone seems to agree is to take my\n> > > earlier attached patch forward.\n> > >\n> > > I am going to see if I can add a TAP test for the patch, and will add\n> > > the patch into the commitfest soon.\n> >\n> > Attached is an updated patch v2.\n> >\n>\n> I see that you have made changes in ReorderBufferRestoreChanges to use\n> PathNameOpenFile, but not in ReorderBufferSerializeTXN. Is there a\n> reason for the same?\n>\n\nI have one more question regarding this patch. It seems to me that\nthe files opened via OpenTransientFile or OpenTemporaryFile are\nautomatically closed at transaction end(abort), but that doesn't seem\nto be the case for files opened with PathNameOpenFile. See\nAtEOXact_Files and AtEOSubXact_Files. So, now with the change\nproposed by this patch, don't we need to deal it in some other way?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Nov 2019 17:52:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Mon, 18 Nov 2019 at 17:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I have one more question regarding this patch. It seems to me that\n> the files opened via OpenTransientFile or OpenTemporaryFile are\n> automatically closed at transaction end(abort), but that doesn't seem\n> to be the case for files opened with PathNameOpenFile. See\n> AtEOXact_Files and AtEOSubXact_Files. So, now with the change\n> proposed by this patch, don't we need to deal it in some other way?\n\nFor the API's that use VFDs (like PathNameOpenFile), the files opened\nare always recorded in the VfdCache array. So it is not required to do\nthe cleanup at (sub)transaction end, because the kernel fds get closed\ndynamically in ReleaseLruFiles() whenever they reach max_safe_fds\nlimit. So if a transaction aborts, the fds might remain open, but\nthose will get cleaned up whenever we require more fds, through\nReleaseLruFiles(). Whereas, for files opened through\nOpenTransientFile(), VfdCache is not involved, so this needs\ntransaction end cleanup.\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Mon, 18 Nov 2019 18:29:43 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Mon, Nov 18, 2019 at 5:50 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Mon, 18 Nov 2019 at 17:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I see that you have made changes in ReorderBufferRestoreChanges to use\n> > PathNameOpenFile, but not in ReorderBufferSerializeTXN. Is there a\n> > reason for the same? In my test environment, with the test provided\n> > by you, I got the error (reported in this thread) via\n> > ReorderBufferSerializeTXN.\n>\n> You didn't get this error with the patch applied, did you ?\n>\n\nNo, I got this before applying the patch. However, after applying the\npatch, I got below error in the same test:\n\npostgres=# SELECT 1 from\npg_logical_slot_get_changes('regression_slot', NULL,NULL) LIMIT 1;\nERROR: could not read from reorderbuffer spill file: Invalid argument\n\nIt seems to me that FileRead API used in the patch can return value <\n0 on EOF. See the API usage in BufFileLoadBuffer. I got this error\non a windows machine and in the server log the message was \"LOG:\nunrecognized win32 error code: 38\" which indicates \"Reached the end of\nthe file.\"\n\n> If you were debugging this without the patch applied, I suspect that\n> the reason why ReorderBufferSerializeTXN() => OpenTransientFile() is\n> generating this error is because the max limit must be already crossed\n> because of earlier calls to ReorderBufferRestoreChanges().\n>\n> Note that in ReorderBufferSerializeTXN(), OpenTransientFile() is\n> sufficient because the code in that function has made sure the fd gets\n> closed there itself.\n>\n\nOkay, then we might not need it there, but we should at least add a\ncomment in ReorderBufferRestoreChanges to explain why we have used a\ndifferent function to operate on the file at that place.\n\n>\n> For the API's that use VFDs (like PathNameOpenFile), the files opened\n> are always recorded in the VfdCache array. 
So it is not required to do\n> the cleanup at (sub)transaction end, because the kernel fds get closed\n> dynamically in ReleaseLruFiles() whenever they reach max_safe_fds\n> limit. So if a transaction aborts, the fds might remain open, but\n> those will get cleaned up whenever we require more fds, through\n> ReleaseLruFiles(). Whereas, for files opened through\n> OpenTransientFile(), VfdCache is not involved, so this needs\n> transaction end cleanup.\n>\n\nHave you tried by injecting some error? After getting the error\nmentioned above in email, when I retried the same query, I got the\nbelow message.\n\npostgres=# SELECT 1 from\npg_logical_slot_get_changes('regression_slot', NULL,NULL) LIMIT 1;\nERROR: could not remove file\n\"pg_replslot/regression_slot/xid-1693-lsn-0-18000000.spill\" during\nremoval of pg_replslot/regression_slot/xid*: Permission denied\n\nAnd, then I tried to drop the replication slot and I got below error.\npostgres=# SELECT * FROM pg_drop_replication_slot('regression_slot');\nERROR: could not rename file \"pg_replslot/regression_slot\" to\n\"pg_replslot/regression_slot.tmp\": Permission denied\n\nIt might be something related to Windows, but you can once try by\ninjecting some error after reading a few files in the code path and\nsee the behavior.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Nov 2019 14:07:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, 19 Nov 2019 at 14:07, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 18, 2019 at 5:50 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > On Mon, 18 Nov 2019 at 17:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I see that you have made changes in ReorderBufferRestoreChanges to use\n> > > PathNameOpenFile, but not in ReorderBufferSerializeTXN. Is there a\n> > > reason for the same? In my test environment, with the test provided\n> > > by you, I got the error (reported in this thread) via\n> > > ReorderBufferSerializeTXN.\n> >\n> > You didn't get this error with the patch applied, did you ?\n> >\n>\n> No, I got this before applying the patch. However, after applying the\n> patch, I got below error in the same test:\n>\n> postgres=# SELECT 1 from\n> pg_logical_slot_get_changes('regression_slot', NULL,NULL) LIMIT 1;\n> ERROR: could not read from reorderbuffer spill file: Invalid argument\n>\n> It seems to me that FileRead API used in the patch can return value <\n> 0 on EOF. See the API usage in BufFileLoadBuffer. I got this error\n> on a windows machine and in the server log the message was \"LOG:\n> unrecognized win32 error code: 38\" which indicates \"Reached the end of\n> the file.\"\n\nOn Windows, it is documented that ReadFile() (which is called by\npg_pread) will return false on EOF but only when the file is open for\nasynchronous reads/writes. But here we are just dealing with usual\nsynchronous reads. So pg_pread() code should indeed return 0 on EOF on\nWindows. Not yet able to figure out how FileRead() managed to return\nthis error on Windows. But from your symptoms, it does look like\npg_pread()=>ReadFile() returned false (despite doing asynchronous\nreads), and so _dosmaperr() gets called, and then it does not find the\neof error in doserrors[], so the \"unrecognized win32 error code\"\nmessage is printed. 
May have to dig up more on this.\n\n\n>\n> > If you were debugging this without the patch applied, I suspect that\n> > the reason why ReorderBufferSerializeTXN() => OpenTransientFile() is\n> > generating this error is because the max limit must be already crossed\n> > because of earlier calls to ReorderBufferRestoreChanges().\n> >\n> > Note that in ReorderBufferSerializeTXN(), OpenTransientFile() is\n> > sufficient because the code in that function has made sure the fd gets\n> > closed there itself.\n> >\n>\n> Okay, then we might not need it there, but we should at least add a\n> comment in ReorderBufferRestoreChanges to explain why we have used a\n> different function to operate on the file at that place.\n\nYeah, that might make sense.\n\n>\n> >\n> > For the API's that use VFDs (like PathNameOpenFile), the files opened\n> > are always recorded in the VfdCache array. So it is not required to do\n> > the cleanup at (sub)transaction end, because the kernel fds get closed\n> > dynamically in ReleaseLruFiles() whenever they reach max_safe_fds\n> > limit. So if a transaction aborts, the fds might remain open, but\n> > those will get cleaned up whenever we require more fds, through\n> > ReleaseLruFiles(). Whereas, for files opened through\n> > OpenTransientFile(), VfdCache is not involved, so this needs\n> > transaction end cleanup.\n> >\n>\n> Have you tried by injecting some error? 
After getting the error\n> mentioned above in email, when I retried the same query, I got the\n> below message.\n>\n> postgres=# SELECT 1 from\n> pg_logical_slot_get_changes('regression_slot', NULL,NULL) LIMIT 1;\n> ERROR: could not remove file\n> \"pg_replslot/regression_slot/xid-1693-lsn-0-18000000.spill\" during\n> removal of pg_replslot/regression_slot/xid*: Permission denied\n>\n> And, then I tried to drop the replication slot and I got below error.\n> postgres=# SELECT * FROM pg_drop_replication_slot('regression_slot');\n> ERROR: could not rename file \"pg_replslot/regression_slot\" to\n> \"pg_replslot/regression_slot.tmp\": Permission denied\n>\n> It might be something related to Windows\n\nOh ok, I missed the fact that on Windows we can't delete the files\nthat are already open, unlike Linux/Unix.\nI guess, I may have to use FD_CLOSE_AT_EOXACT flags; or simply use\nOpenTemporaryFile(). I wonder though if this same issue might come up\nfor the other use-case of PathNameOpenFile() :\nlogical_rewrite_log_mapping().\n\n> but you can once try by\n> injecting some error after reading a few files in the code path and\n> see the behavior.\nYeah, will check the behaviour, although on Linux, I think I won't get\nthis error. But yes, like I mentioned above, I think we might have to\narrange for something.\n\n\n\n--\nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Tue, 19 Nov 2019 16:58:18 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 12:28 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> On Tue, 19 Nov 2019 at 14:07, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > No, I got this before applying the patch. However, after applying the\n> > patch, I got below error in the same test:\n> >\n> > postgres=# SELECT 1 from\n> > pg_logical_slot_get_changes('regression_slot', NULL,NULL) LIMIT 1;\n> > ERROR: could not read from reorderbuffer spill file: Invalid argument\n> >\n> > It seems to me that FileRead API used in the patch can return value <\n> > 0 on EOF. See the API usage in BufFileLoadBuffer. I got this error\n> > on a windows machine and in the server log the message was \"LOG:\n> > unrecognized win32 error code: 38\" which indicates \"Reached the end of\n> > the file.\"\n>\n> On Windows, it is documented that ReadFile() (which is called by\n> pg_pread) will return false on EOF but only when the file is open for\n> asynchronous reads/writes. But here we are just dealing with usual\n> synchronous reads. So pg_pread() code should indeed return 0 on EOF on\n> Windows. Not yet able to figure out how FileRead() managed to return\n> this error on Windows. But from your symptoms, it does look like\n> pg_pread()=>ReadFile() returned false (despite doing asynchronous\n> reads), and so _dosmaperr() gets called, and then it does not find the\n> eof error in doserrors[], so the \"unrecognized win32 error code\"\n> message is printed. May have to dig up more on this.\n\nHmm. See also this report:\n\nhttps://www.postgresql.org/message-id/flat/CABuU89MfEvJE%3DWif%2BHk7SCqjSOF4rhgwJWW6aR3hjojpGqFbjQ%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 20 Nov 2019 00:49:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 12:49 PM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n\n> On Wed, Nov 20, 2019 at 12:28 AM Amit Khandekar <amitdkhan.pg@gmail.com>\n> wrote:\n> > On Tue, 19 Nov 2019 at 14:07, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > > No, I got this before applying the patch. However, after applying the\n> > > patch, I got below error in the same test:\n> > >\n> > > postgres=# SELECT 1 from\n> > > pg_logical_slot_get_changes('regression_slot', NULL,NULL) LIMIT 1;\n> > > ERROR: could not read from reorderbuffer spill file: Invalid argument\n> > >\n> > > It seems to me that FileRead API used in the patch can return value <\n> > > 0 on EOF. See the API usage in BufFileLoadBuffer. I got this error\n> > > on a windows machine and in the server log the message was \"LOG:\n> > > unrecognized win32 error code: 38\" which indicates \"Reached the end of\n> > > the file.\"\n> >\n> > On Windows, it is documented that ReadFile() (which is called by\n> > pg_pread) will return false on EOF but only when the file is open for\n> > asynchronous reads/writes. But here we are just dealing with usual\n> > synchronous reads. So pg_pread() code should indeed return 0 on EOF on\n> > Windows. Not yet able to figure out how FileRead() managed to return\n> > this error on Windows. But from your symptoms, it does look like\n> > pg_pread()=>ReadFile() returned false (despite doing asynchronous\n> > reads), and so _dosmaperr() gets called, and then it does not find the\n> > eof error in doserrors[], so the \"unrecognized win32 error code\"\n> > message is printed. May have to dig up more on this.\n>\n> Hmm. See also this report:\n>\n>\n> https://www.postgresql.org/message-id/flat/CABuU89MfEvJE%3DWif%2BHk7SCqjSOF4rhgwJWW6aR3hjojpGqFbjQ%40mail.gmail.com\n>\n>\nThe files from pgwin32_open() are open for synchronous access, while\npg_pread() uses the asynchronous functionality to offset the read. 
Under\nthese circunstances, a read past EOF will return ERROR_HANDLE_EOF (38), as\nexplained in:\n\nhttps://devblogs.microsoft.com/oldnewthing/20150121-00/?p=44863\n\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Tue, Nov 19, 2019 at 12:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:On Wed, Nov 20, 2019 at 12:28 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> On Tue, 19 Nov 2019 at 14:07, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > No, I got this before applying the patch. However, after applying the\n> > patch, I got below error in the same test:\n> >\n> > postgres=# SELECT 1 from\n> > pg_logical_slot_get_changes('regression_slot', NULL,NULL) LIMIT 1;\n> > ERROR: could not read from reorderbuffer spill file: Invalid argument\n> >\n> > It seems to me that FileRead API used in the patch can return value <\n> > 0 on EOF. See the API usage in BufFileLoadBuffer. I got this error\n> > on a windows machine and in the server log the message was \"LOG:\n> > unrecognized win32 error code: 38\" which indicates \"Reached the end of\n> > the file.\"\n>\n> On Windows, it is documented that ReadFile() (which is called by\n> pg_pread) will return false on EOF but only when the file is open for\n> asynchronous reads/writes. But here we are just dealing with usual\n> synchronous reads. So pg_pread() code should indeed return 0 on EOF on\n> Windows. Not yet able to figure out how FileRead() managed to return\n> this error on Windows. But from your symptoms, it does look like\n> pg_pread()=>ReadFile() returned false (despite doing asynchronous\n> reads), and so _dosmaperr() gets called, and then it does not find the\n> eof error in doserrors[], so the \"unrecognized win32 error code\"\n> message is printed. May have to dig up more on this.\n\nHmm. 
See also this report:\n\nhttps://www.postgresql.org/message-id/flat/CABuU89MfEvJE%3DWif%2BHk7SCqjSOF4rhgwJWW6aR3hjojpGqFbjQ%40mail.gmail.com\nThe files from pgwin32_open() are open for synchronous access, while pg_pread() uses the asynchronous functionality to offset the read. Under these circunstances, a read past EOF will return ERROR_HANDLE_EOF (38), as explained in:https://devblogs.microsoft.com/oldnewthing/20150121-00/?p=44863 Regards,Juan José Santamaría Flecha",
"msg_date": "Tue, 19 Nov 2019 13:14:22 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 1:14 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> On Tue, Nov 19, 2019 at 12:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Wed, Nov 20, 2019 at 12:28 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>> > On Windows, it is documented that ReadFile() (which is called by\n>> > pg_pread) will return false on EOF but only when the file is open for\n>> > asynchronous reads/writes. But here we are just dealing with usual\n>> > synchronous reads. So pg_pread() code should indeed return 0 on EOF on\n>> > Windows. Not yet able to figure out how FileRead() managed to return\n>> > this error on Windows. But from your symptoms, it does look like\n>> > pg_pread()=>ReadFile() returned false (despite doing asynchronous\n>> > reads), and so _dosmaperr() gets called, and then it does not find the\n>> > eof error in doserrors[], so the \"unrecognized win32 error code\"\n>> > message is printed. May have to dig up more on this.\n>>\n>> Hmm. See also this report:\n>>\n>> https://www.postgresql.org/message-id/flat/CABuU89MfEvJE%3DWif%2BHk7SCqjSOF4rhgwJWW6aR3hjojpGqFbjQ%40mail.gmail.com\n>>\n>\n> The files from pgwin32_open() are open for synchronous access, while pg_pread() uses the asynchronous functionality to offset the read. Under these circunstances, a read past EOF will return ERROR_HANDLE_EOF (38), as explained in:\n\nOh, thanks.\n\n> https://devblogs.microsoft.com/oldnewthing/20150121-00/?p=44863\n\n!?!\n\nAmit, since it looks like you are Windows-enabled and have a repro,\nwould you mind confirming that this fixes the problem?\n\n--- a/src/port/pread.c\n+++ b/src/port/pread.c\n@@ -41,6 +41,9 @@ pg_pread(int fd, void *buf, size_t size, off_t offset)\n overlapped.Offset = offset;\n if (!ReadFile(handle, buf, size, &result, &overlapped))\n {\n+ if (GetLastError() == ERROR_HANDLE_EOF)\n+ return 0;\n+\n _dosmaperr(GetLastError());\n return -1;\n }\n\n\n",
"msg_date": "Wed, 20 Nov 2019 07:58:16 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 7:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Nov 20, 2019 at 1:14 AM Juan José Santamaría Flecha\n> > https://devblogs.microsoft.com/oldnewthing/20150121-00/?p=44863\n>\n> !?!\n\nOne thing I don't understand (besides, apparently, the documentation):\nhow did this problem escape detection by check-world for such a long\ntime? Surely we expect to hit the end of various temporary files in\nvarious tests. Is it intermittent, or dependent on Windows version,\nor something like that?\n\n\n",
"msg_date": "Wed, 20 Nov 2019 08:34:39 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, 20 Nov 2019 at 01:05, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Nov 20, 2019 at 7:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Wed, Nov 20, 2019 at 1:14 AM Juan José Santamaría Flecha\n> > > https://devblogs.microsoft.com/oldnewthing/20150121-00/?p=44863\n> >\n> > !?!\n\nThanks Juan and Thomas for pointing to these links where already this\nwas discussed.\n\n>\n> One thing I don't understand (besides, apparently, the documentation):\n> how did this problem escape detection by check-world for such a long\n> time? Surely we expect to hit the end of various temporary files in\n> various tests. Is it intermittent, or dependent on Windows version,\n> or something like that?\n\nPossibly there aren't any callers who try to pread() at end-of-file\nusing FileRead/pg_pread :\n\n- mdread() seems to read from an offset which it seems to know that it\nis inside the end-of file, including the whole BLCKSZ.\n- BufFileLoadBuffer() seems to deliberately ignore FileRead()'s return\nvalue if it is -1\n if (file->nbytes < 0) file->nbytes = 0;\n- XLogPageRead() also seems to know that the offset is a valid offset.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Wed, 20 Nov 2019 09:24:03 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 12:28 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Nov 20, 2019 at 1:14 AM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> > On Tue, Nov 19, 2019 at 12:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >> On Wed, Nov 20, 2019 at 12:28 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >> > On Windows, it is documented that ReadFile() (which is called by\n> >> > pg_pread) will return false on EOF but only when the file is open for\n> >> > asynchronous reads/writes. But here we are just dealing with usual\n> >> > synchronous reads. So pg_pread() code should indeed return 0 on EOF on\n> >> > Windows. Not yet able to figure out how FileRead() managed to return\n> >> > this error on Windows. But from your symptoms, it does look like\n> >> > pg_pread()=>ReadFile() returned false (despite doing asynchronous\n> >> > reads), and so _dosmaperr() gets called, and then it does not find the\n> >> > eof error in doserrors[], so the \"unrecognized win32 error code\"\n> >> > message is printed. May have to dig up more on this.\n> >>\n> >> Hmm. See also this report:\n> >>\n> >> https://www.postgresql.org/message-id/flat/CABuU89MfEvJE%3DWif%2BHk7SCqjSOF4rhgwJWW6aR3hjojpGqFbjQ%40mail.gmail.com\n> >>\n> >\n> > The files from pgwin32_open() are open for synchronous access, while pg_pread() uses the asynchronous functionality to offset the read. 
Under these circunstances, a read past EOF will return ERROR_HANDLE_EOF (38), as explained in:\n>\n> Oh, thanks.\n>\n> > https://devblogs.microsoft.com/oldnewthing/20150121-00/?p=44863\n>\n> !?!\n>\n> Amit, since it looks like you are Windows-enabled and have a repro,\n> would you mind confirming that this fixes the problem?\n>\n> --- a/src/port/pread.c\n> +++ b/src/port/pread.c\n> @@ -41,6 +41,9 @@ pg_pread(int fd, void *buf, size_t size, off_t offset)\n> overlapped.Offset = offset;\n> if (!ReadFile(handle, buf, size, &result, &overlapped))\n> {\n> + if (GetLastError() == ERROR_HANDLE_EOF)\n> + return 0;\n> +\n> _dosmaperr(GetLastError());\n> return -1;\n> }\n\nYes, this works for me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Nov 2019 10:14:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 4:54 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> - BufFileLoadBuffer() seems to deliberately ignore FileRead()'s return\n> value if it is -1\n> if (file->nbytes < 0) file->nbytes = 0;\n\nOk, that's a different problem we need to fix then. But it does\nexplain how we didn't know. And sure enough there is \"unrecognized\nwin32 error code: 38\" LOG-spam on the build farm, at places where\ntuplestores are expected:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=whelk&dt=2019-11-20%2002%3A41%3A41&stg=check\n\n\n",
"msg_date": "Wed, 20 Nov 2019 17:48:41 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 4:58 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Tue, 19 Nov 2019 at 14:07, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Nov 18, 2019 at 5:50 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > >\n> > > For the API's that use VFDs (like PathNameOpenFile), the files opened\n> > > are always recorded in the VfdCache array. So it is not required to do\n> > > the cleanup at (sub)transaction end, because the kernel fds get closed\n> > > dynamically in ReleaseLruFiles() whenever they reach max_safe_fds\n> > > limit. So if a transaction aborts, the fds might remain open, but\n> > > those will get cleaned up whenever we require more fds, through\n> > > ReleaseLruFiles(). Whereas, for files opened through\n> > > OpenTransientFile(), VfdCache is not involved, so this needs\n> > > transaction end cleanup.\n> > >\n> >\n> > Have you tried by injecting some error? After getting the error\n> > mentioned above in email, when I retried the same query, I got the\n> > below message.\n> >\n> > postgres=# SELECT 1 from\n> > pg_logical_slot_get_changes('regression_slot', NULL,NULL) LIMIT 1;\n> > ERROR: could not remove file\n> > \"pg_replslot/regression_slot/xid-1693-lsn-0-18000000.spill\" during\n> > removal of pg_replslot/regression_slot/xid*: Permission denied\n> >\n> > And, then I tried to drop the replication slot and I got below error.\n> > postgres=# SELECT * FROM pg_drop_replication_slot('regression_slot');\n> > ERROR: could not rename file \"pg_replslot/regression_slot\" to\n> > \"pg_replslot/regression_slot.tmp\": Permission denied\n> >\n> > It might be something related to Windows\n>\n> Oh ok, I missed the fact that on Windows we can't delete the files\n> that are already open, unlike Linux/Unix.\n> I guess, I may have to use FD_CLOSE_AT_EOXACT flags; or simply use\n> OpenTemporaryFile().\n>\n\nI think setting FD_CLOSE_AT_EOXACT won't work unless you also set\nhave_xact_temporary_files because it 
checks that flag in\nCleanupTempFiles. Also, OpenTemporaryFile() doesn't take the input\nfile path, so how will you use it?\n\n> I wonder though if this same issue might come up\n> for the other use-case of PathNameOpenFile() :\n> logical_rewrite_log_mapping().\n>\n\nIt is possible, but I haven't tested that path.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Nov 2019 10:46:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Wed, Nov 20, 2019 at 12:28 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > + if (GetLastError() == ERROR_HANDLE_EOF)\n> > + return 0;\n\n> Yes, this works for me.\n\nThanks, pushed.\n\n\n",
"msg_date": "Wed, 20 Nov 2019 18:33:44 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 4:58 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Tue, 19 Nov 2019 at 14:07, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Have you tried by injecting some error? After getting the error\n> > mentioned above in email, when I retried the same query, I got the\n> > below message.\n> >\n> > postgres=# SELECT 1 from\n> > pg_logical_slot_get_changes('regression_slot', NULL,NULL) LIMIT 1;\n> > ERROR: could not remove file\n> > \"pg_replslot/regression_slot/xid-1693-lsn-0-18000000.spill\" during\n> > removal of pg_replslot/regression_slot/xid*: Permission denied\n> >\n> > And, then I tried to drop the replication slot and I got below error.\n> > postgres=# SELECT * FROM pg_drop_replication_slot('regression_slot');\n> > ERROR: could not rename file \"pg_replslot/regression_slot\" to\n> > \"pg_replslot/regression_slot.tmp\": Permission denied\n> >\n> > It might be something related to Windows\n>\n> Oh ok, I missed the fact that on Windows we can't delete the files\n> that are already open, unlike Linux/Unix.\n>\n\nSee comment in pgunlink() \"We need to loop because even though\nPostgreSQL uses flags that allow unlink while the file is open, other\napplications might have the file\nopen without those flags.\". Can you once see if there is any flag\nthat you have missed to pass to allow this? If there is nothing we\ncan do about it, then we might need to use some different API or maybe\ndefine a new API that can handle this.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Nov 2019 13:10:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, 20 Nov 2019 at 13:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 19, 2019 at 4:58 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > On Tue, 19 Nov 2019 at 14:07, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Have you tried by injecting some error? After getting the error\n> > > mentioned above in email, when I retried the same query, I got the\n> > > below message.\n> > >\n> > > postgres=# SELECT 1 from\n> > > pg_logical_slot_get_changes('regression_slot', NULL,NULL) LIMIT 1;\n> > > ERROR: could not remove file\n> > > \"pg_replslot/regression_slot/xid-1693-lsn-0-18000000.spill\" during\n> > > removal of pg_replslot/regression_slot/xid*: Permission denied\n> > >\n> > > And, then I tried to drop the replication slot and I got below error.\n> > > postgres=# SELECT * FROM pg_drop_replication_slot('regression_slot');\n> > > ERROR: could not rename file \"pg_replslot/regression_slot\" to\n> > > \"pg_replslot/regression_slot.tmp\": Permission denied\n> > >\n> > > It might be something related to Windows\n> >\n> > Oh ok, I missed the fact that on Windows we can't delete the files\n> > that are already open, unlike Linux/Unix.\n> >\n>\n> See comment in pgunlink() \"We need to loop because even though\n> PostgreSQL uses flags that allow unlink while the file is open, other\n> applications might have the file\n> open without those flags.\". Can you once see if there is any flag\n> that you have missed to pass to allow this? If there is nothing we\n> can do about it, then we might need to use some different API or maybe\n> define a new API that can handle this.\n\nHmm, looks like there is one such flag: FILE_SHARE_DELETE. When file\nis opened with this flag, other processes can delete as well as rename\nthe file.\n\nBut it turns out that in pgwin32_open(), we already use\nFILE_SHARE_DELETE. 
So, this is again confusing.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Wed, 20 Nov 2019 14:02:33 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, 20 Nov 2019 at 13:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> See comment in pgunlink() \"We need to loop because even though\n> PostgreSQL uses flags that allow unlink while the file is open, other\n> applications might have the file\n> open without those flags.\". Can you once see if there is any flag\n> that you have missed to pass to allow this?\n\n> If there is nothing we\n> can do about it, then we might need to use some different API or maybe\n> define a new API that can handle this.\n\nThere were objections against modifying the vfd api only for this\nreplication-related use-case. Having a new API will require all the\nchanges required to enable the virtual FDs feature that we need from\nvfd. If nothing works out from the FILE_SHARE_DELETE thing, I am\nthinking, we can use VFD, plus we can keep track of per-subtransaction\nvfd handles, and do something similar to AtEOSubXact_Files().\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Wed, 20 Nov 2019 14:18:08 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 9:48 AM Amit Khandekar <amitdkhan.pg@gmail.com>\nwrote:\n\n> On Wed, 20 Nov 2019 at 13:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > See comment in pgunlink() \"We need to loop because even though\n> > PostgreSQL uses flags that allow unlink while the file is open, other\n> > applications might have the file\n> > open without those flags.\". Can you once see if there is any flag\n> > that you have missed to pass to allow this?\n>\n> > If there is nothing we\n> > can do about it, then we might need to use some different API or maybe\n> > define a new API that can handle this.\n>\n> There were objections against modifying the vfd api only for this\n> replication-related use-case. Having a new API will require all the\n> changes required to enable the virtual FDs feature that we need from\n> vfd. If nothing works out from the FILE_SHARE_DELETE thing, I am\n> thinking, we can use VFD, plus we can keep track of per-subtransaction\n> vfd handles, and do something similar to AtEOSubXact_Files().\n>\n>\nThe comment about \"other applications might have the file open without\nthose flags.\" is surely due to systems working with an antivirus touching\nPostgres files.\n\nI was not able to reproduce the Permission denied error with current HEAD,\nuntil I opened another CMD inside the \"pg_replslot/regression_slot\" folder.\nThis will be problematic, is the deletion of the folder actually needed?\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Wed, 20 Nov 2019 13:11:26 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On 2019-Nov-20, Juan José Santamaría Flecha wrote:\n\n> I was not able to reproduce the Permission denied error with current HEAD,\n> until I opened another CMD inside the \"pg_replslot/regression_slot\" folder.\n> This will be problematic, is the deletion of the folder actually needed?\n\nYes :-( The code assumes that if the directory is there, then it's\nvalid. Trying to remove that assumption is probably a more invasive\nfix.\n\nI think ReplicationSlotDropAcquired is too pessimistic (no recourse if\nthe rename fails) and too optimistic (this will almost never happen).\nWe could change it so that the rename is retried a few times, and avoid\nthe failure. (Naturally, the rmtree should also be retried.) The code\nseems written with the POSIX semantics in mind, but it seems easy to\nimprove.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 20 Nov 2019 10:54:11 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, 20 Nov 2019 at 19:24, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Nov-20, Juan José Santamaría Flecha wrote:\n>\n> > I was not able to reproduce the Permission denied error with current HEAD,\n> > until I opened another CMD inside the \"pg_replslot/regression_slot\" folder.\n> > This will be problematic, is the deletion of the folder actually needed?\n>\n> Yes :-( The code assumes that if the directory is there, then it's\n> valid. Trying to remove that assumption is probably a more invasive\n> fix.\n>\n> I think ReplicationSlotDropAcquired is too pessimistic (no recourse if\n> the rename fails) and too optimistic (this will almost never happen).\n> We could change it so that the rename is retried a few times, and avoid\n> the failure. (Naturally, the rmtree should also be retried.) The code\n> seems written with the POSIX semantics in mind, but it seems easy to\n> improve.\n\nJust to be clear, there are two issues being discussed here :\n\n1. Issue with the patch, where pg_replslot/slotname/xid-*.spill files\ncan't be removed because the same backend process has left these files\nopened because of an abort. This is happening despite the file being\nopened using FILE_SHARE_DELETE flag. I am going to investigate\n(possibly the flag is not applicable in case a single process is\ninvolved)\n\n2. This existing issue where pg_replslot/slotname directory removal\nwill fail if someone else is accessing this directory. This has\nnothing to do with the patch.\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Thu, 21 Nov 2019 08:54:40 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 5:41 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n> On Wed, Nov 20, 2019 at 9:48 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>>\n>> On Wed, 20 Nov 2019 at 13:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > See comment in pgunlink() \"We need to loop because even though\n>> > PostgreSQL uses flags that allow unlink while the file is open, other\n>> > applications might have the file\n>> > open without those flags.\". Can you once see if there is any flag\n>> > that you have missed to pass to allow this?\n>>\n>> > If there is nothing we\n>> > can do about it, then we might need to use some different API or maybe\n>> > define a new API that can handle this.\n>>\n>> There were objections against modifying the vfd api only for this\n>> replication-related use-case. Having a new API will require all the\n>> changes required to enable the virtual FDs feature that we need from\n>> vfd. If nothing works out from the FILE_SHARE_DELETE thing, I am\n>> thinking, we can use VFD, plus we can keep track of per-subtransaction\n>> vfd handles, and do something similar to AtEOSubXact_Files().\n>>\n>\n> The comment about \"other applications might have the file open without those flags.\" is surely due to systems working with an antivirus touching Postgres files.\n>\n> I was not able to reproduce the Permission denied error with current HEAD,\n>\n\nI am not sure what exactly you tried. Can you share the steps and\nyour environment details?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Nov 2019 09:32:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 2:18 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Wed, 20 Nov 2019 at 13:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > See comment in pgunlink() \"We need to loop because even though\n> > PostgreSQL uses flags that allow unlink while the file is open, other\n> > applications might have the file\n> > open without those flags.\". Can you once see if there is any flag\n> > that you have missed to pass to allow this?\n>\n> > If there is nothing we\n> > can do about it, then we might need to use some different API or maybe\n> > define a new API that can handle this.\n>\n> There were objections against modifying the vfd api only for this\n> replication-related use-case. Having a new API will require all the\n> changes required to enable the virtual FDs feature that we need from\n> vfd. If nothing works out from the FILE_SHARE_DELETE thing,\n>\n\nWhile experimenting with FILE_SHARE_DELETE, I think you can once try\nto open/close the file before unlink. If you read the specs [1] of\nthis flag, it seems they allow you to open the file with delete access\neven when it is already opened by someone else. I am not sure if that\nis helpful, but at least we can try out.\n\n> I am\n> thinking, we can use VFD, plus we can keep track of per-subtransaction\n> vfd handles, and do something similar to AtEOSubXact_Files().\n>\n\nI think if we can't make the current API work, then it is better to\nsketch the design for this approach and probably the design of new API\nusing existing infrastructure. Then we can see which approach people\nprefer.\n\n\n[1] - Enables subsequent open operations on a file or device to\nrequest delete access. Otherwise, other processes cannot open the\nfile or device if they request delete access.\n(https://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-createfilea)\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Nov 2019 10:26:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 5:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Nov 20, 2019 at 5:41 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> >\n> > On Wed, Nov 20, 2019 at 9:48 AM Amit Khandekar <amitdkhan.pg@gmail.com>\n> wrote:\n> >>\n> >> On Wed, 20 Nov 2019 at 13:10, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >> > See comment in pgunlink() \"We need to loop because even though\n> >> > PostgreSQL uses flags that allow unlink while the file is open, other\n> >> > applications might have the file\n> >> > open without those flags.\". Can you once see if there is any flag\n> >> > that you have missed to pass to allow this?\n> >>\n> >> > If there is nothing we\n> >> > can do about it, then we might need to use some different API or maybe\n> >> > define a new API that can handle this.\n> >>\n> >> There were objections against modifying the vfd api only for this\n> >> replication-related use-case. Having a new API will require all the\n> >> changes required to enable the virtual FDs feature that we need from\n> >> vfd. If nothing works out from the FILE_SHARE_DELETE thing, I am\n> >> thinking, we can use VFD, plus we can keep track of per-subtransaction\n> >> vfd handles, and do something similar to AtEOSubXact_Files().\n> >>\n> >\n> > The comment about \"other applications might have the file open without\n> those flags.\" is surely due to systems working with an antivirus touching\n> Postgres files.\n> >\n> > I was not able to reproduce the Permission denied error with current\n> HEAD,\n> >\n>\n> I am not sure what exactly you tried. Can you share the steps and\n> your environment details?\n>\n>\nSure, I was trying to reproduce the Permission denied error after\nthe ERROR_HANDLE_EOF fix.\n\n1. Using a clean environment [1] the spill.sql script produces the expected\noutput.\n2. 
I manually injected a negative value for readBytes after @@ -2611,10\n+2627,11 @@ ReorderBufferRestoreChanges(ReorderBuffer *rb, ReorderBufferTXN\n*txn. In doing so pg_logical_slot_get_changes() failed, but following\nexecutions did not run into Permission denied.\n3. During the cleanup of some of the tests, pg_drop_replication_slot()\nfailed because the \"pg_replslot/regression_slot\" folder was in use.\n\n[1] Win10 (1903) MSVC 19.22.27905\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 21 Nov 2019 16:02:18 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 8:32 PM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n> On Thu, Nov 21, 2019 at 5:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, Nov 20, 2019 at 5:41 PM Juan José Santamaría Flecha\n>> <juanjo.santamaria@gmail.com> wrote:\n>> >\n>> > On Wed, Nov 20, 2019 at 9:48 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>> >>\n>> >> On Wed, 20 Nov 2019 at 13:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >> > See comment in pgunlink() \"We need to loop because even though\n>> >> > PostgreSQL uses flags that allow unlink while the file is open, other\n>> >> > applications might have the file\n>> >> > open without those flags.\". Can you once see if there is any flag\n>> >> > that you have missed to pass to allow this?\n>> >>\n>> >> > If there is nothing we\n>> >> > can do about it, then we might need to use some different API or maybe\n>> >> > define a new API that can handle this.\n>> >>\n>> >> There were objections against modifying the vfd api only for this\n>> >> replication-related use-case. Having a new API will require all the\n>> >> changes required to enable the virtual FDs feature that we need from\n>> >> vfd. If nothing works out from the FILE_SHARE_DELETE thing, I am\n>> >> thinking, we can use VFD, plus we can keep track of per-subtransaction\n>> >> vfd handles, and do something similar to AtEOSubXact_Files().\n>> >>\n>> >\n>> > The comment about \"other applications might have the file open without those flags.\" is surely due to systems working with an antivirus touching Postgres files.\n>> >\n>> > I was not able to reproduce the Permission denied error with current HEAD,\n>> >\n>>\n>> I am not sure what exactly you tried. 
Can you share the steps and\n>> your environment details?\n>>\n>\n> Sure, I was trying to reproduce the Permission denied error after the ERROR_HANDLE_EOF fix.\n>\n\nHave you tried before that fix? If not, can you once try by\ntemporarily reverting that fix in your environment and share the\noutput of each step? After you get the error due to EOF, check that\nyou have .spill files in pg_replslot/<slot_name>/ and then again try\nto get changes by pg_logical_slot_get_changes(). If you want, you\ncan use the test provided in Amit Khandekar's patch.\n\n>\n> [1] Win10 (1903) MSVC 19.22.27905\n>\n\nI have tested this on Windows7. I am not sure if it is due to a\ndifferent version of Windows, but I think we can't rule out that\npossibility.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Nov 2019 09:07:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, 22 Nov 2019 at 09:08, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Have you tried before that fix , if not, can you once try by\n> temporarily reverting that fix in your environment and share the\n> output of each step? After you get the error due to EOF, check that\n> you have .spill files in pg_replslot/<slot_name>/ and then again try\n> to get changes by pg_logical_slot_get_changes(). If you want, you\n> can use the test provided in Amit Khandekar's patch.\n\nOn my Linux machine, I added elog() in ReorderBufferRestoreChanges(),\njust after FileRead() returns 0. This results in an error. But the thing is, in\nReorderBufferCommit(), the error is already handled using PG_CATCH :\n\nPG_CATCH();\n{\n.....\n AbortCurrentTransaction();\n.......\n if (using_subtxn)\n RollbackAndReleaseCurrentSubTransaction();\n........\n........\n /* remove potential on-disk data, and deallocate */\n ReorderBufferCleanupTXN(rb, txn);\n}\n\nSo ReorderBufferCleanupTXN() removes all the .spill files using unlink().\n\nAnd on Windows, what should happen is : unlink() should succeed\nbecause the file is opened using FILE_SHARE_DELETE. But the files\nshould still remain there because these are still open. It is just\nmarked for deletion until there is no one having opened the file. That\nis my conclusion from running the attached sample program, test.c.\nBut what you are seeing is \"Permission denied\" errors. Not sure why\nunlink() is failing.\n\nThe thing that is still a problem is : On Windows, if the file remains\nopen, and later even when the unlink() succeeds, the file will be left\nthere until it is closed. So subsequent operations will open the same\nold file. Not sure what happens if we open a file that is marked for\ndeletion.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company",
"msg_date": "Fri, 22 Nov 2019 10:59:34 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 11:00 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Fri, 22 Nov 2019 at 09:08, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Have you tried before that fix , if not, can you once try by\n> > temporarily reverting that fix in your environment and share the\n> > output of each step? After you get the error due to EOF, check that\n> > you have .spill files in pg_replslot/<slot_name>/ and then again try\n> > to get changes by pg_logical_slot_get_changes(). If you want, you\n> > can use the test provided in Amit Khandekar's patch.\n>\n> On my Linux machine, I added elog() in ReorderBufferRestoreChanges(),\n> just after FileRead() returns 0. This results in error. But the thing is, in\n> ReorderBufferCommit(), the error is already handled using PG_CATCH :\n>\n> PG_CATCH();\n> {\n> .....\n> AbortCurrentTransaction();\n> .......\n> if (using_subtxn)\n> RollbackAndReleaseCurrentSubTransaction();\n> ........\n> ........\n> /* remove potential on-disk data, and deallocate */\n> ReorderBufferCleanupTXN(rb, txn);\n> }\n>\n> So ReorderBufferCleanupTXN() removes all the .spill files using unlink().\n>\n> And on Windows, what should happen is : unlink() should succeed\n> because the file is opened using FILE_SHARE_DELETE. But the files\n> should still remain there because these are still open. It is just\n> marked for deletion until there is no one having opened the file. That\n> is what is my conclusion from running a sample attached program test.c\n>\n\nI think this is exactly the reason for the problem. In my test [1],\nthe error \"permission denied\" occurred when I second time executed\npg_logical_slot_get_changes() which means on first execution the\nunlink would have been successful but the files are still not removed\nas they were not closed. 
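In other words (a Win32-only sketch of the sequence being described, not runnable here; the spill file name is borrowed from the earlier error message, and error 5 is ERROR_ACCESS_DENIED):\n\n```c\n#include <windows.h>\n#include <stdio.h>\n\nint\nmain(void)\n{\n\t/* Open as pgwin32_open() does, sharing delete access. */\n\tHANDLE\t\th = CreateFileA(\"xid-1693-lsn-0-18000000.spill\",\n\t\t\t\t\t\t\t\tGENERIC_READ | GENERIC_WRITE,\n\t\t\t\t\t\t\t\tFILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,\n\t\t\t\t\t\t\t\tNULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);\n\n\t/* First execution's cleanup: succeeds, file goes into \"delete pending\". */\n\tprintf(\"first unlink ok: %d\\n\", DeleteFileA(\"xid-1693-lsn-0-18000000.spill\"));\n\n\t/*\n\t * Second execution's cleanup: the handle is still open, so the name is\n\t * still visible but delete-pending, and this fails with\n\t * GetLastError() == 5 (ERROR_ACCESS_DENIED).\n\t */\n\tif (!DeleteFileA(\"xid-1693-lsn-0-18000000.spill\"))\n\t\tprintf(\"second unlink error: %lu\\n\", GetLastError());\n\n\tCloseHandle(h);\t\t\t\t/* only now is the file really gone */\n\treturn 0;\n}\n```\n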
Then on second execution, it gets an error\n\"Permission denied\" when it again tries to unlink files via\nReorderBufferCleanupSerializedTXNs().\n\n> But what you are seeing is \"Permission denied\" errors. Not sure why\n> unlink() is failing.\n>\n\nIn your test program, if you try to unlink the file a second time, you\nshould see the error \"Permission denied\".\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2Bcey6i6a0zD9kk_eaDXb4RPNZqu4UwXO9LbHAgMpMBkg%40mail.gmail.com\n\n\n\n--\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Nov 2019 16:26:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, 22 Nov 2019 at 4:26 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Nov 22, 2019 at 11:00 AM Amit Khandekar <amitdkhan.pg@gmail.com>\n> wrote:\n> >\n> > On Fri, 22 Nov 2019 at 09:08, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > > Have you tried before that fix , if not, can you once try by\n> > > temporarily reverting that fix in your environment and share the\n> > > output of each step? After you get the error due to EOF, check that\n> > > you have .spill files in pg_replslot/<slot_name>/ and then again try\n> > > to get changes by pg_logical_slot_get_changes(). If you want, you\n> > > can use the test provided in Amit Khandekar's patch.\n> >\n> > On my Linux machine, I added elog() in ReorderBufferRestoreChanges(),\n> > just after FileRead() returns 0. This results in error. But the thing\n> is, in\n> > ReorderBufferCommit(), the error is already handled using PG_CATCH :\n> >\n> > PG_CATCH();\n> > {\n> > .....\n> > AbortCurrentTransaction();\n> > .......\n> > if (using_subtxn)\n> > RollbackAndReleaseCurrentSubTransaction();\n> > ........\n> > ........\n> > /* remove potential on-disk data, and deallocate */\n> > ReorderBufferCleanupTXN(rb, txn);\n> > }\n> >\n> > So ReorderBufferCleanupTXN() removes all the .spill files using unlink().\n> >\n> > And on Windows, what should happen is : unlink() should succeed\n> > because the file is opened using FILE_SHARE_DELETE. But the files\n> > should still remain there because these are still open. It is just\n> > marked for deletion until there is no one having opened the file. That\n> > is what is my conclusion from running a sample attached program test.c\n> >\n>\n> I think this is exactly the reason for the problem. In my test [1],\n> the error \"permission denied\" occurred when I second time executed\n> pg_logical_slot_get_changes() which means on first execution the\n> unlink would have been successful but the files are still not removed\n> as they were not closed. 
Then on second execution, it gets an error\n> \"Permission denied\" when it again tries to unlink files via\n> ReorderBufferCleanupSerializedTXNs().\n>\n>\n> .\n> > But what you are seeing is \"Permission denied\" errors. Not sure why\n> > unlink() is failing.\n> >\n>\n> In your test program, if you try to unlink the file second time, you\n> should see the error \"Permission denied\".\n\n I tested using the sample program and indeed I got the error 5 (access\ndenied) when I called unlink the second time.\n\n>\n>\n> [1] -\n> https://www.postgresql.org/message-id/CAA4eK1%2Bcey6i6a0zD9kk_eaDXb4RPNZqu4UwXO9LbHAgMpMBkg%40mail.gmail.com\n>\n>\n>\n> --\n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company",
"msg_date": "Fri, 22 Nov 2019 19:38:20 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
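The delete-pending behaviour of Windows discussed above contrasts with POSIX semantics, where unlinking an open file removes the name immediately and a second unlink fails with ENOENT rather than "Permission denied". A minimal sketch of the POSIX side, not the thread's actual test.c (the file name is hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* POSIX unlink-while-open: the name disappears at once, the open
 * descriptor keeps working, and a second unlink of the same name fails
 * with ENOENT (not an access-denied error as on Windows). */
static void demo_posix_unlink(void)
{
    const char *path = "demo.spill";    /* hypothetical file name */
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
    assert(fd >= 0);
    assert(write(fd, "abc", 3) == 3);

    assert(unlink(path) == 0);          /* first unlink succeeds */

    struct stat st;
    assert(stat(path, &st) == -1 && errno == ENOENT);   /* name is gone */

    char buf[4] = {0};
    assert(pread(fd, buf, 3, 0) == 3);  /* data still readable via the fd */
    assert(strcmp(buf, "abc") == 0);

    assert(unlink(path) == -1 && errno == ENOENT);      /* second unlink: ENOENT */
    close(fd);
}
```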
{
"msg_contents": "On Fri, Nov 22, 2019 at 4:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Nov 21, 2019 at 8:32 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> >\n> > [1] Win10 (1903) MSVC 19.22.27905\n> >\n>\n> I have tested this on Windows7. I am not sure if it is due to a\n> different version of windows, but I think we can't rule out that\n> possibility.\n>\n>\nThis seems to be the case. The unexpected behaviour is on my end, which is\nworking as described in FILE_DISPOSITION_POSIX_SEMANTICS [1].\n\nThe expected behaviour is what you have already diagnosed.\n\n[1]\nhttps://docs.microsoft.com/es-es/windows-hardware/drivers/ddi/ntddk/ns-ntddk-_file_disposition_information_ex\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 25 Nov 2019 16:06:46 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 7:38 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Fri, 22 Nov 2019 at 4:26 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> I think this is exactly the reason for the problem. In my test [1],\n>> the error \"permission denied\" occurred when I second time executed\n>> pg_logical_slot_get_changes() which means on first execution the\n>> unlink would have been successful but the files are still not removed\n>> as they were not closed. Then on second execution, it gets an error\n>> \"Permission denied\" when it again tries to unlink files via\n>> ReorderBufferCleanupSerializedTXNs().\n>>\n>>\n>> .\n>> > But what you are seeing is \"Permission denied\" errors. Not sure why\n>> > unlink() is failing.\n>> >\n>>\n>> In your test program, if you try to unlink the file second time, you\n>> should see the error \"Permission denied\".\n>\n> I tested using the sample program and indeed I got the error 5 (access denied) when I called unlink the second time.\n>\n\nSo, what is the next step here? How about if we somehow check whether\nthe file exists before doing unlink, say by using stat? If that\ndoesn't work, I think we might need to go in the direction of tracking\nfile handles in some way, so that they can be closed during an abort.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Nov 2019 10:49:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
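The "check whether the file exists before doing unlink, say by using stat" idea floated above can be sketched as follows. This is an illustrative sketch only, not code from the thread, and as the follow-up messages note it is inherently racy (and on Windows a delete-pending file may still stat successfully):

```c
#include <errno.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch of stat-before-unlink: treat a missing file as already
 * cleaned up instead of an error.  Racy by construction: the file can
 * appear or vanish between the stat() and the unlink(). */
static int unlink_if_exists(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return (errno == ENOENT) ? 0 : -1;  /* already gone is OK */
    return unlink(path);
}
```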
{
"msg_contents": "On Tue, 26 Nov 2019 at 10:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 22, 2019 at 7:38 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > On Fri, 22 Nov 2019 at 4:26 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> I think this is exactly the reason for the problem. In my test [1],\n> >> the error \"permission denied\" occurred when I second time executed\n> >> pg_logical_slot_get_changes() which means on first execution the\n> >> unlink would have been successful but the files are still not removed\n> >> as they were not closed. Then on second execution, it gets an error\n> >> \"Permission denied\" when it again tries to unlink files via\n> >> ReorderBufferCleanupSerializedTXNs().\n> >>\n> >>\n> >> .\n> >> > But what you are seeing is \"Permission denied\" errors. Not sure why\n> >> > unlink() is failing.\n> >> >\n> >>\n> >> In your test program, if you try to unlink the file second time, you\n> >> should see the error \"Permission denied\".\n> >\n> > I tested using the sample program and indeed I got the error 5 (access denied) when I called unlink the second time.\n> >\n>\n> So, what is the next step here? How about if we somehow check whether\n> the file exists before doing unlink, say by using stat?\nBut the thing is, the behaviour is so much in a grey area, that we\ncannot reliably say for instance that when stat() says there is no\nsuch file, there is indeed no such file, and if we re-create the same\nfile when it is still open, it is always going to open a new file,\netc.\n\n> If that doesn't work, I think we might need to go in the direction of tracking\n> file handles in some way, so that they can be closed during an abort.\nYeah, that is one way. I am still working on different approaches.\nWill get back with proposals.\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Tue, 26 Nov 2019 11:18:53 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, Nov 26, 2019 at 11:19 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Tue, 26 Nov 2019 at 10:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > So, what is the next step here? How about if we somehow check whether\n> > the file exists before doing unlink, say by using stat?\n> But the thing is, the behaviour is so much in a grey area, that we\n> cannot reliably say for instance that when stat() says there is no\n> such file, there is indeed no such file,\n>\n\nWhy so?\n\n> and if we re-create the same\n> file when it is still open, it is always going to open a new file,\n> etc.\n>\n\nYeah, or maybe even if we don't create with the same name, there will\nalways be some dangling file which again doesn't sound like a good\nthing.\n\n> > If that doesn't work, I think we might need to go in the direction of tracking\n> > file handles in some way, so that they can be closed during an abort.\n> Yeah, that is one way. I am still working on different approaches.\n> Will get back with proposals.\n>\n\nFair enough. See, if you can also consider an approach that is local\nto ReorderBuffer module wherein we can track those handles in\nReorderBufferTxn or some other place local to that module.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Nov 2019 12:09:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, 26 Nov 2019 at 12:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 26, 2019 at 11:19 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > On Tue, 26 Nov 2019 at 10:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > So, what is the next step here? How about if we somehow check whether\n> > > the file exists before doing unlink, say by using stat?\n> > But the thing is, the behaviour is so much in a grey area, that we\n> > cannot reliably say for instance that when stat() says there is no\n> > such file, there is indeed no such file,\n> >\n>\n> Why so?\nThis was just an example. What I meant was, we are really not sure of\nthe behaviour of file operations when the file is in this state.\nunlink() returning \"Permission denied\" when called twice is itself\nweird enough.\n\n>\n> > and if we re-create the same\n> > file when it is still open, it is always going to open a new file,\n> > etc.\n> >\n>\n> Yeah, or maybe even if we don't create with the same name, there will\n> always be some dangling file which again doesn't sound like a good\n> thing.\n\nRight.\n\n>\n> > > If that doesn't work, I think we might need to go in the direction of tracking\n> > > file handles in some way, so that they can be closed during an abort.\n> > Yeah, that is one way. I am still working on different approaches.\n> > Will get back with proposals.\n> >\n>\n> Fair enough. See, if you can also consider an approach that is local\n> to ReorderBuffer module wherein we can track those handles in\n> ReorderBufferTxn or some other place local to that module.\n\nWhat I found was : We do attempt to close the opened vfds in the\nPG_CATCH block. In ReorderBufferCommit(), ReorderBufferIterTXNFinish\nis called both in PG_TRY and PG_CATCH. This closes all the opened\nvfds. But the issue is : if the ereport() occurs inside\nReorderBufferIterTXNInit(), then iterstate is still NULL. 
So in\nPG_CATCH, ReorderBufferIterTXNFinish() is not called, so the vfds in\nstate->entries[] remain open.\n\nWe can have &iterstate passed to ReorderBufferIterTXNInit() as another\nargument, and initialize it first thing inside the function. This way,\nit will never be NULL. But need to be careful about the possibility of\nhaving a iterstate in a half-cooked state, so cleanup might use some\nuninitialized handles. Will work on it. At least, we can make sure the\niterstate->entries handle doesn't have junk values.\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Wed, 27 Nov 2019 14:16:16 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, 27 Nov 2019 at 14:16, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> What I found was : We do attempt to close the opened vfds in the\n> PG_CATCH block. In ReorderBufferCommit(), ReorderBufferIterTXNFinish\n> is called both in PG_TRY and PG_CATCH. This closes all the opened\n> vfds. But the issue is : if the ereport() occurs inside\n> ReorderBufferIterTXNInit(), then iterstate is still NULL. So in\n> PG_CATCH, ReorderBufferIterTXNFinish() is not called, so the vfds in\n> state->entries[] remain open.\n>\n> We can have &iterstate passed to ReorderBufferIterTXNInit() as another\n> argument, and initialize it first thing inside the function. This way,\n> it will never be NULL. But need to be careful about the possibility of\n> having a iterstate in a half-cooked state, so cleanup might use some\n> uninitialized handles. Will work on it. At least, we can make sure the\n> iterstate->entries handle doesn't have junk values.\n\nDone as stated above; attached v3 patch. I have verified that the file\nhandles do get closed in PG_CATCH block via\nReorderBufferIterTXNFinish().\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company",
"msg_date": "Tue, 3 Dec 2019 11:09:36 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
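The pattern adopted in the v3 patch — publish the iterator state through an out-parameter before any step that can fail, so the error path (PG_CATCH in the real code) can always release what was initialized — can be sketched like this. The struct and function names below are illustrative stand-ins, not PostgreSQL's actual definitions:

```c
#include <stddef.h>
#include <stdlib.h>

/* Simplified model of ReorderBufferIterTXNInit()'s out-parameter idiom:
 * the caller sees the state before anything can fail, and the fields
 * are set to safe values first, so cleanup never touches junk handles. */
typedef struct IterState
{
    int  nr_entries;
    int *entries;               /* stands in for per-txn file handles */
} IterState;

static int iter_init(IterState **iter_state, int fail_midway)
{
    IterState *state = malloc(sizeof(IterState));

    *iter_state = state;        /* publish before any failure point */
    state->nr_entries = 0;
    state->entries = NULL;      /* safe values, no uninitialized junk */

    if (fail_midway)
        return -1;              /* simulated ereport(): the caller still
                                 * holds *iter_state and can clean up */

    state->entries = calloc(4, sizeof(int));
    state->nr_entries = 4;
    return 0;
}

static void iter_finish(IterState *state)   /* the cleanup path */
{
    if (state == NULL)
        return;
    free(state->entries);       /* NULL-safe, like closing only open vfds */
    free(state);
}
```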
{
"msg_contents": "On Tue, Dec 3, 2019 at 11:10 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Wed, 27 Nov 2019 at 14:16, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > What I found was : We do attempt to close the opened vfds in the\n> > PG_CATCH block. In ReorderBufferCommit(), ReorderBufferIterTXNFinish\n> > is called both in PG_TRY and PG_CATCH. This closes all the opened\n> > vfds. But the issue is : if the ereport() occurs inside\n> > ReorderBufferIterTXNInit(), then iterstate is still NULL. So in\n> > PG_CATCH, ReorderBufferIterTXNFinish() is not called, so the vfds in\n> > state->entries[] remain open.\n> >\n> > We can have &iterstate passed to ReorderBufferIterTXNInit() as another\n> > argument, and initialize it first thing inside the function. This way,\n> > it will never be NULL. But need to be careful about the possibility of\n> > having a iterstate in a half-cooked state, so cleanup might use some\n> > uninitialized handles. Will work on it. At least, we can make sure the\n> > iterstate->entries handle doesn't have junk values.\n>\n> Done as stated above; attached v3 patch. I have verified that the file\n> handles do get closed in PG_CATCH block via\n> ReorderBufferIterTXNFinish().\n>\n\nI couldn't reproduce the original problem (on HEAD) reported with the\ntest case in the patch. So, I can't verify the fix. I think it is\nbecause of recent commits cec2edfa7859279f36d2374770ca920c59c73dd8 and\n9290ad198b15d6b986b855d2a58d087a54777e87. It seems you need to either\nchange the value of logical_decoding_work_mem or change the test in\nsome way so that the original problem can be reproduced.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 5 Dec 2019 16:20:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Dec 5, 2019 at 4:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 3, 2019 at 11:10 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> >\n> > Done as stated above; attached v3 patch. I have verified that the file\n> > handles do get closed in PG_CATCH block via\n> > ReorderBufferIterTXNFinish().\n> >\n>\n> I couldn't reproduce the original problem (on HEAD) reported with the\n> test case in the patch. So, I can't verify the fix. I think it is\n> because of recent commits cec2edfa7859279f36d2374770ca920c59c73dd8 and\n> 9290ad198b15d6b986b855d2a58d087a54777e87. It seems you need to either\n> change the value of logical_decoding_work_mem or change the test in\n> some way so that the original problem can be reproduced.\n>\n\nFew comments:\n----------------------\n\n1.\n+ /* Now that the state fields are initialized, it is safe to return it. */\n+ *iter_state = state;\n+\n /* allocate heap */\n state->heap =\nbinaryheap_allocate(state->nr_txns,\n ReorderBufferIterCompare,\n\nIs there a reason for not initializing iter_state after\nbinaryheap_allocate? If we do so, then we don't need additional check\nyou have added in ReorderBufferIterTXNFinish.\n\n2.\n/* No harm in resetting the offset even in case of failure */\nfile->curOffset = 0;\n\nThe above comment is not clear because you are not setting it in case\nof error rather this is a success path.\n\n3.\n+ *\n+ * Note: The iterator state is returned through iter_state parameter rather\n+ * than the function's return value. This is because the state gets cleaned up\n+ * in a PG_CATCH block, so we want to make sure the caller gets back the state\n+ * even if this function throws an exception, so that the state resources can\n+ * be cleaned up.\n\nHow about changing it slightly as follows to make it more clear.\n\"Note: The iterator state is returned through iter_state parameter\nrather than the function's return value. 
This is because the state\ngets cleaned up in a PG_CATCH block in the caller, so we want to make\nsure the caller gets back the state even if this function throws an\nexception.\"\n\n4. I think we should also check how much time increase will happen for\ntest_decoding regression test after the test added by this patch?\n\n5. One naive question about the usage of PathNameOpenFile(). When it\nreaches the max limit, it will automatically close one of the files,\nbut how will that be reflected in the data structure (TXNEntryFile)\nyou are managing. Basically, when PathNameOpenFile closes some file,\nhow will the corresponding vfd in TXNEntryFile be changed. Because if\nit is not changed, then won't it start pointing to some wrong\nfilehandle?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 Dec 2019 15:40:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, 6 Dec 2019 at 15:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 5, 2019 at 4:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Dec 3, 2019 at 11:10 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > >\n> > >\n> > > Done as stated above; attached v3 patch. I have verified that the file\n> > > handles do get closed in PG_CATCH block via\n> > > ReorderBufferIterTXNFinish().\n> > >\n> >\n> > I couldn't reproduce the original problem (on HEAD) reported with the\n> > test case in the patch. So, I can't verify the fix. I think it is\n> > because of recent commits cec2edfa7859279f36d2374770ca920c59c73dd8 and\n> > 9290ad198b15d6b986b855d2a58d087a54777e87. It seems you need to either\n> > change the value of logical_decoding_work_mem or change the test in\n> > some way so that the original problem can be reproduced.\n\nYeah, it does seem like the commit cec2edfa78592 must have caused the\ntest to not reproduce on your env, although the test does fail for me\nstill. Setting logical_decoding_work_mem to a low value does sound\nlike a good idea. Will work on it.\n\n> >\n>\n> Few comments:\n> ----------------------\n>\n> 1.\n> + /* Now that the state fields are initialized, it is safe to return it. */\n> + *iter_state = state;\n> +\n> /* allocate heap */\n> state->heap =\n> binaryheap_allocate(state->nr_txns,\n> ReorderBufferIterCompare,\n>\n> Is there a reason for not initializing iter_state after\n> binaryheap_allocate? 
If we do so, then we don't need additional check\n> you have added in ReorderBufferIterTXNFinish.\n\nIf iter_state is initialized *after* binaryheap_allocate, then we\nwon't be able to close the vfds if binaryheap_allocate() ereports().\n\n>\n> 2.\n> /* No harm in resetting the offset even in case of failure */\n> file->curOffset = 0;\n>\n> The above comment is not clear because you are not setting it in case\n> of error rather this is a success path.\n\nI meant, even if PathNameOpenFile() failed, it is ok to set\nfile->curOffset to 0, so need not set it only in case of *fd >= 0. In\nmost of the cases, fd would be valid, so just set file->curOffset to 0\nalways.\n\n>\n> 3.\n> + *\n> + * Note: The iterator state is returned through iter_state parameter rather\n> + * than the function's return value. This is because the state gets cleaned up\n> + * in a PG_CATCH block, so we want to make sure the caller gets back the state\n> + * even if this function throws an exception, so that the state resources can\n> + * be cleaned up.\n>\n> How about changing it slightly as follows to make it more clear.\n> \"Note: The iterator state is returned through iter_state parameter\n> rather than the function's return value. This is because the state\n> gets cleaned up in a PG_CATCH block in the caller, so we want to make\n> sure the caller gets back the state even if this function throws an\n> exception.\"\n\nAgreed. Will do that in the next patch version.\n\n>\n> 4. I think we should also check how much time increase will happen for\n> test_decoding regression test after the test added by this patch?\nYeah, it currently takes noticeably longer compared to the others.\nLet's see if setting logical_decoding_work_mem to a min value allows\nus to reproduce the test with much lesser number of inserts.\n\n>\n> 5. One naive question about the usage of PathNameOpenFile(). 
When it\n> reaches the max limit, it will automatically close one of the files,\n> but how will that be reflected in the data structure (TXNEntryFile)\n> you are managing. Basically, when PathNameOpenFile closes some file,\n> how will the corresponding vfd in TXNEntryFile be changed. Because if\n> it is not changed, then won't it start pointing to some wrong\n> filehandle?\n\nIn PathNameOpenFile(), excess kernel fds could be closed\n(ReleaseLruFiles). But with that, the vfds themselves don't get\ninvalidated. Only the underlying kernel fd gets closed, and the\nvfd->fd is marked VFD_CLOSED. The vfd array element remains valid (a\nnon-null vfd->fileName means the vfd slot is valid; check\nFileIsValid). So later, when FileRead(vfd1) is called and that vfd1\nhappens to be the one that had got its kernel fd closed, it gets\nopened again through FileAccess().\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Fri, 6 Dec 2019 17:00:10 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Dec 6, 2019 at 5:00 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Fri, 6 Dec 2019 at 15:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Few comments:\n> > ----------------------\n> >\n> > 1.\n> > + /* Now that the state fields are initialized, it is safe to return it. */\n> > + *iter_state = state;\n> > +\n> > /* allocate heap */\n> > state->heap =\n> > binaryheap_allocate(state->nr_txns,\n> > ReorderBufferIterCompare,\n> >\n> > Is there a reason for not initializing iter_state after\n> > binaryheap_allocate? If we do so, then we don't need additional check\n> > you have added in ReorderBufferIterTXNFinish.\n>\n> If iter_state is initialized *after* binaryheap_allocate, then we\n> won't be able to close the vfds if binaryheap_allocate() ereports().\n>\n\nIs it possible to have vfds opened before binaryheap_allocate(), if so how?\n\n> >\n> > 5. One naive question about the usage of PathNameOpenFile(). When it\n> > reaches the max limit, it will automatically close one of the files,\n> > but how will that be reflected in the data structure (TXNEntryFile)\n> > you are managing. Basically, when PathNameOpenFile closes some file,\n> > how will the corresponding vfd in TXNEntryFile be changed. Because if\n> > it is not changed, then won't it start pointing to some wrong\n> > filehandle?\n>\n> In PathNameOpenFile(), excess kernel fds could be closed\n> (ReleaseLruFiles). But with that, the vfds themselves don't get\n> invalidated. Only the underlying kernel fd gets closed, and the\n> vfd->fd is marked VFD_CLOSED. The vfd array element remains valid (a\n> non-null vfd->fileName means the vfd slot is valid; check\n> FileIsValid). 
So later, when FileRead(vfd1) is called and that vfd1\n> happens to be the one that had got its kernel fd closed, it gets\n> opened again through FileAccess().\n>\n\nI was under impression that once the fd is closed due to excess kernel\nfds that are opened, the slot in VfdCache array could be reused by\nsomeone else, but on closer inspection that is not true. It will be\nonly available for reuse after we explicitly call FileClose, right?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 7 Dec 2019 11:37:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
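The virtual-file-descriptor behaviour described in this exchange — the LRU step closes only the kernel fd, the slot stays valid as long as its file name is set, and the next access transparently reopens it — can be modeled with a toy sketch. This is a deliberately simplified illustration, not PostgreSQL's fd.c:

```c
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/* Toy model of a vfd slot: fd == -1 plays the role of VFD_CLOSED, and
 * a non-NULL fileName means the slot is in use (like FileIsValid). */
typedef struct Vfd
{
    int         fd;         /* kernel fd, or -1 if temporarily closed */
    const char *fileName;   /* non-NULL means the slot is valid */
} Vfd;

static void vfd_release_lru(Vfd *v)     /* like ReleaseLruFiles() */
{
    if (v->fd >= 0)
    {
        close(v->fd);
        v->fd = -1;          /* kernel fd gone; the slot itself survives */
    }
}

static int vfd_access(Vfd *v)           /* like FileAccess() */
{
    if (v->fd < 0 && v->fileName != NULL)
        v->fd = open(v->fileName, O_RDWR);  /* silently reopen */
    return v->fd;
}
```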
{
"msg_contents": "On Wed, Nov 20, 2019 at 1:14 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> On Tue, Nov 19, 2019 at 12:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Wed, Nov 20, 2019 at 12:28 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>> > On Windows, it is documented that ReadFile() (which is called by\n>> > pg_pread) will return false on EOF but only when the file is open for\n>> > asynchronous reads/writes. But here we are just dealing with usual\n>> > synchronous reads. So pg_pread() code should indeed return 0 on EOF on\n>> > Windows. Not yet able to figure out how FileRead() managed to return\n>> > this error on Windows. But from your symptoms, it does look like\n>> > pg_pread()=>ReadFile() returned false (despite doing asynchronous\n>> > reads), and so _dosmaperr() gets called, and then it does not find the\n>> > eof error in doserrors[], so the \"unrecognized win32 error code\"\n>> > message is printed. May have to dig up more on this.\n>>\n>> Hmm. See also this report:\n>>\n>> https://www.postgresql.org/message-id/flat/CABuU89MfEvJE%3DWif%2BHk7SCqjSOF4rhgwJWW6aR3hjojpGqFbjQ%40mail.gmail.com\n>\n> The files from pgwin32_open() are open for synchronous access, while pg_pread() uses the asynchronous functionality to offset the read. Under these circunstances, a read past EOF will return ERROR_HANDLE_EOF (38), as explained in:\n>\n> https://devblogs.microsoft.com/oldnewthing/20150121-00/?p=44863\n\nFWIW, I sent a pull request to see if the MicrosoftDocs project agrees\nthat the ReadFile page is misleading on this point:\n\nhttps://github.com/MicrosoftDocs/sdk-api/pull/7\n\n\n",
"msg_date": "Mon, 9 Dec 2019 12:10:33 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
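The convention at issue here — a positional read at or past end-of-file returns 0 on POSIX, whereas Windows ReadFile with an OVERLAPPED offset fails with ERROR_HANDLE_EOF, which a pg_pread()-style wrapper must translate — is easy to observe on the POSIX side. A small sketch (hypothetical file name, and only a thin wrapper around pread):

```c
#include <sys/types.h>
#include <unistd.h>

/* The POSIX behaviour a Windows pg_pread() emulation must preserve:
 * reading at or beyond EOF yields 0, not an error.  On Windows the
 * underlying ReadFile call would instead fail with ERROR_HANDLE_EOF,
 * and the wrapper has to map that failure to a 0 return. */
static ssize_t read_at(int fd, void *buf, size_t n, off_t offset)
{
    return pread(fd, buf, n, offset);
}
```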
{
"msg_contents": "On Sat, 7 Dec 2019 at 11:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 6, 2019 at 5:00 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > On Fri, 6 Dec 2019 at 15:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Few comments:\n> > > ----------------------\n> > >\n> > > 1.\n> > > + /* Now that the state fields are initialized, it is safe to return it. */\n> > > + *iter_state = state;\n> > > +\n> > > /* allocate heap */\n> > > state->heap =\n> > > binaryheap_allocate(state->nr_txns,\n> > > ReorderBufferIterCompare,\n> > >\n> > > Is there a reason for not initializing iter_state after\n> > > binaryheap_allocate? If we do so, then we don't need additional check\n> > > you have added in ReorderBufferIterTXNFinish.\n> >\n> > If iter_state is initialized *after* binaryheap_allocate, then we\n> > won't be able to close the vfds if binaryheap_allocate() ereports().\n> >\n>\n> Is it possible to have vfds opened before binaryheap_allocate(), if so how?\nNo it does not look possible for the vfds to be opened before\nbinaryheap_allocate(). But actually, the idea behind placing the\niter_state at the place where I put is that, we should return back the\niter_state at the *earliest* place in the code where it is safe to\nreturn.\n\n\n> > I couldn't reproduce the original problem (on HEAD) reported with the\n> > test case in the patch. So, I can't verify the fix. I think it is\n> > because of recent commits cec2edfa7859279f36d2374770ca920c59c73dd8 and\n> > 9290ad198b15d6b986b855d2a58d087a54777e87. It seems you need to either\n> > change the value of logical_decoding_work_mem or change the test in\n> > some way so that the original problem can be reproduced.\nAmit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> Yeah, it does seem like the commit cec2edfa78592 must have caused the\n> test to not reproduce on your env, although the test does fail for me\n> still. 
Setting logical_decoding_work_mem to a low value does sound\n> like a good idea. Will work on it.\n\nI checked that setting logical_decoding_work_mem to its min value\n(64KB) causes early serialization. So I think if you set this, you\nshould be able to reproduce with the spill.sql test that has the new\ntestcase. I will anyway set this value in the test. Also check below\n...\n\n>> 4. I think we should also check how much time increase will happen for\n>> test_decoding regression test after the test added by this patch?\nAmit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> Yeah, it currently takes noticeably longer compared to the others.\n> Let's see if setting logical_decoding_work_mem to a min value allows\n> us to reproduce the test with much lesser number of inserts.\n\nThe test in the patch takes around 20 seconds, as compared to the max\ntime of 2 seconds any of the other tests take in that test suite.\n\nBut if we set the max_files_per_process to a very low value (say 26)\nas against the default 1000, we can reproduce the issue with as low as\n20 sub-transactions as against the 600 that I used in spill.sql test.\nAnd with this, the test runs in around 4 seconds, so this is good. But\nthe problem is : max_files_per_process needs server restart. So either\nwe have to shift this test to src/test/recovery in one of the\nlogical_decoding test, or retain it in contrib/test_decoding and let\nit run for 20 seconds. Let me know if you figure out any other\napproach.\n\n\n>\n> > >\n> > > 5. One naive question about the usage of PathNameOpenFile(). When it\n> > > reaches the max limit, it will automatically close one of the files,\n> > > but how will that be reflected in the data structure (TXNEntryFile)\n> > > you are managing. Basically, when PathNameOpenFile closes some file,\n> > > how will the corresponding vfd in TXNEntryFile be changed. 
Because if\n> > > it is not changed, then won't it start pointing to some wrong\n> > > filehandle?\n> >\n> > In PathNameOpenFile(), excess kernel fds could be closed\n> > (ReleaseLruFiles). But with that, the vfds themselves don't get\n> > invalidated. Only the underlying kernel fd gets closed, and the\n> > vfd->fd is marked VFD_CLOSED. The vfd array element remains valid (a\n> > non-null vfd->fileName means the vfd slot is valid; check\n> > FileIsValid). So later, when FileRead(vfd1) is called and that vfd1\n> > happens to be the one that had got its kernel fd closed, it gets\n> > opened again through FileAccess().\n> >\n>\n> I was under the impression that once the fd is closed due to excess kernel\n> fds that are opened, the slot in the VfdCache array could be reused by\n> someone else, but on closer inspection that is not true. It will\n> only be available for reuse after we explicitly call FileClose, right?\n\nYes, that's right.\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Wed, 11 Dec 2019 16:16:43 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 4:17 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Sat, 7 Dec 2019 at 11:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Dec 6, 2019 at 5:00 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > >\n> > > On Fri, 6 Dec 2019 at 15:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > 1.\n> > > > + /* Now that the state fields are initialized, it is safe to return it. */\n> > > > + *iter_state = state;\n> > > > +\n> > > > /* allocate heap */\n> > > > state->heap =\n> > > > binaryheap_allocate(state->nr_txns,\n> > > > ReorderBufferIterCompare,\n> > > >\n> > > > Is there a reason for not initializing iter_state after\n> > > > binaryheap_allocate? If we do so, then we don't need additional check\n> > > > you have added in ReorderBufferIterTXNFinish.\n> > >\n> > > If iter_state is initialized *after* binaryheap_allocate, then we\n> > > won't be able to close the vfds if binaryheap_allocate() ereports().\n> > >\n> >\n> > Is it possible to have vfds opened before binaryheap_allocate(), if so how?\n> No it does not look possible for the vfds to be opened before\n> binaryheap_allocate(). But actually, the idea behind placing the\n> iter_state at the place where I put is that, we should return back the\n> iter_state at the *earliest* place in the code where it is safe to\n> return.\n>\n\nSure, I get that point, but it seems it is equally good to do this\nafter binaryheap_allocate(). It will be slightly better because if\nthere is any error in binaryheap_allocate, then we don't need to even\ncall ReorderBufferIterTXNFinish().\n\n>\n> >> 4. I think we should also check how much time increase will happen for\n> >> test_decoding regression test after the test added by this patch?\n> Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > Yeah, it currently takes noticeably longer compared to the others.\n> > Let's see if setting logical_decoding_work_mem to a min value allows\n> > us to reproduce the test with much lesser number of inserts.\n>\n> The test in the patch takes around 20 seconds, as compared to the max\n> time of 2 seconds any of the other tests take in that test suite.\n>\n> But if we set the max_files_per_process to a very low value (say 26)\n> as against the default 1000, we can reproduce the issue with as low as\n> 20 sub-transactions as against the 600 that I used in spill.sql test.\n> And with this, the test runs in around 4 seconds, so this is good.\n>\n\nDo you get 4s even after setting the minimum value of\nlogical_decoding_work_mem? I think 4s is also too much for this test\nwhich is going to test one extreme scenario. Can you try with some\nbigger rows or something else to reduce this time? I think if we can\nget it close to 2s or whatever maximum time taken by any other logical\ndecoding tests, then good, otherwise, it doesn't seem like a good idea\nto add this test.\n\n> But\n> the problem is : max_files_per_process needs server restart. So either\n> we have to shift this test to src/test/recovery in one of the\n> logical_decoding test, or retain it in contrib/test_decoding and let\n> it run for 20 seconds. Let me know if you figure out any other\n> approach.\n>\n\nI don't think 20s for one test is acceptable. So, we should just give\nup that path, you can try what I suggested above, if we can reduce the\ntest time, then good, otherwise, I suggest to drop the test from the\npatch. Having said that, we should try our best to reduce the test\ntime as it will be good if we can have such a test.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Dec 2019 09:49:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, 12 Dec 2019 at 09:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 11, 2019 at 4:17 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > On Sat, 7 Dec 2019 at 11:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Dec 6, 2019 at 5:00 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > > >\n> > > > On Fri, 6 Dec 2019 at 15:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > 1.\n> > > > > + /* Now that the state fields are initialized, it is safe to return it. */\n> > > > > + *iter_state = state;\n> > > > > +\n> > > > > /* allocate heap */\n> > > > > state->heap =\n> > > > > binaryheap_allocate(state->nr_txns,\n> > > > > ReorderBufferIterCompare,\n> > > > >\n> > > > > Is there a reason for not initializing iter_state after\n> > > > > binaryheap_allocate? If we do so, then we don't need additional check\n> > > > > you have added in ReorderBufferIterTXNFinish.\n> > > >\n> > > > If iter_state is initialized *after* binaryheap_allocate, then we\n> > > > won't be able to close the vfds if binaryheap_allocate() ereports().\n> > > >\n> > >\n> > > Is it possible to have vfds opened before binaryheap_allocate(), if so how?\n> > No it does not look possible for the vfds to be opened before\n> > binaryheap_allocate(). But actually, the idea behind placing the\n> > iter_state at the place where I put is that, we should return back the\n> > iter_state at the *earliest* place in the code where it is safe to\n> > return.\n> >\n>\n> Sure, I get that point, but it seems it is equally good to do this\n> after binaryheap_allocate(). It will be sligthly better because if\n> there is any error in binaryheap_allocate, then we don't need to even\n> call ReorderBufferIterTXNFinish().\n\nAll right. WIll do that.\n\n>\n> >\n> > >> 4. 
I think we should also check how much time increase will happen for\n> > >> test_decoding regression test after the test added by this patch?\n> > Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > > Yeah, it currently takes noticeably longer compared to the others.\n> > > Let's see if setting logical_decoding_work_mem to a min value allows\n> > > us to reproduce the test with much lesser number of inserts.\n> >\n> > The test in the patch takes around 20 seconds, as compared to the max\n> > time of 2 seconds any of the other tests take in that test suite.\n> >\n> > But if we set the max_files_per_process to a very low value (say 26)\n> > as against the default 1000, we can reproduce the issue with as low as\n> > 20 sub-transactions as against the 600 that I used in spill.sql test.\n> > And with this, the test runs in around 4 seconds, so this is good.\n> >\n>\n> Do you get 4s even after setting the minimum value of\n> logical_decoding_work_mem? I think 4s is also too much for this test\n> which is going to test one extreme scenario. Can you try with some\n> bigger rows or something else to reduce this time? I think if we can\n> get it close to 2s or whatever maximum time taken by any other logical\n> decoding tests, then good, otherwise, it doesn't seem like a good idea\n> to add this test.\n\n>\n> > But\n> > the problem is : max_files_per_process needs server restart. So either\n> > we have to shift this test to src/test/recovery in one of the\n> > logical_decoding test, or retain it in contrib/test_decoding and let\n> > it run for 20 seconds. Let me know if you figure out any other\n> > approach.\n> >\n>\n> I don't think 20s for one test is acceptable. So, we should just give\n> up that path, you can try what I suggested above, if we can reduce the\n> test time, then good, otherwise, I suggest to drop the test from the\n> patch. 
Having said that, we should try our best to reduce the test\n> time as it will be good if we can have such a test.\n\nI tried; it is actually roughly 3.4 seconds. Note that reducing\nlogical_decoding_work_mem does not reduce the test time. It causes the\nserialization to start early, and so increases the chance of\nreproducing the problem. During restore of the serialized data, we\nstill use max_changes_in_memory. So max_changes_in_memory is the one\nthat allows us to reduce the number of transactions required, so we\ncan cut down on the outer loop iterations and make the test finish\nmuch earlier.\n\nBut also note that, we can't use the test suite in\ncontrib/test_decoding, because max_changes_in_memory needs server\nrestart. So we need to shift this test to src/test/recovery. And\nthere, I guess it is not that critical for the testcase to be very\nquick because the tests in general are much slower than the ones in\ncontrib/test_decoding, although it would be nice to make it fast. What\nI propose is to modify max_changes_in_memory, do a server restart\n(which takes hardly a sec), run the testcase (3.5 sec) and then\nrestart after resetting the guc. So totally it will be around 4-5\nseconds.\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Thu, 12 Dec 2019 11:34:10 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, 12 Dec 2019 at 11:34, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n\n> So max_changes_in_memory is the one\n> that allows us to reduce the number of transactions required, so we\n> can cut down on the outer loop iterations and make the test finish\n> much earlier.\n\n>\n> But also note that, we can't use the test suite in\n> contrib/test_decoding, because max_changes_in_memory needs server\n> restart. So we need to shift this test to src/test/recovery. And\n> there, I guess it is not that critical for the testcase to be very\n> quick because the tests in general are much slower than the ones in\n> contrib/test_decoding, although it would be nice to make it fast. What\n> I propose is to modify max_changes_in_memory, do a server restart\n> (which takes hardly a sec), run the testcase (3.5 sec) and then\n> restart after resetting the guc. So totally it will be around 4-5\n> seconds.\n\nSorry I meant max_files_per_process. We need to reduce\nmax_files_per_process, so that it causes max_safe_fds to be reduced,\nand so only a few transactions are sufficient to reproduce the\nproblem, because the reserveAllocatedDesc() will return false much\nsooner due to low max_safe_fds.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Thu, 12 Dec 2019 11:52:47 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 11:53 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Thu, 12 Dec 2019 at 11:34, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> > So max_changes_in_memory is the one\n> > that allows us to reduce the number of transactions required, so we\n> > can cut down on the outer loop iterations and make the test finish\n> > much earlier.\n>\n> >\n> > But also note that, we can't use the test suite in\n> > contrib/test_decoding, because max_changes_in_memory needs server\n> > restart. So we need to shift this test to src/test/recovery. And\n> > there, I guess it is not that critical for the testcase to be very\n> > quick because the tests in general are much slower than the ones in\n> > contrib/test_decoding, although it would be nice to make it fast. What\n> > I propose is to modify max_changes_in_memory, do a server restart\n> > (which takes hardly a sec), run the testcase (3.5 sec) and then\n> > restart after resetting the guc. So totally it will be around 4-5\n> > seconds.\n>\n> Sorry I meant max_files_per_process.\n>\n\nOkay, what time other individual tests take in that directory on your\nmachine? How about providing a separate test patch for this case so\nthat I can also test it? I think we can take the opinion of others as\nwell if they are fine with adding this test, otherwise, we can go\nahead with the main patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Dec 2019 14:18:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, 12 Dec 2019 at 14:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 12, 2019 at 11:53 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > On Thu, 12 Dec 2019 at 11:34, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > > So max_changes_in_memory is the one\n> > > that allows us to reduce the number of transactions required, so we\n> > > can cut down on the outer loop iterations and make the test finish\n> > > much earlier.\n> >\n> > >\n> > > But also note that, we can't use the test suite in\n> > > contrib/test_decoding, because max_changes_in_memory needs server\n> > > restart. So we need to shift this test to src/test/recovery. And\n> > > there, I guess it is not that critical for the testcase to be very\n> > > quick because the tests in general are much slower than the ones in\n> > > contrib/test_decoding, although it would be nice to make it fast. What\n> > > I propose is to modify max_changes_in_memory, do a server restart\n> > > (which takes hardly a sec), run the testcase (3.5 sec) and then\n> > > restart after resetting the guc. So totally it will be around 4-5\n> > > seconds.\n> >\n> > Sorry I meant max_files_per_process.\n> >\n>\n> Okay, what time other individual tests take in that directory on your\n> machine?\nFor src/test/recovery directory, on average, a test takes about 4-5 seconds.\n\n> How about providing a separate test patch for this case so\n> that I can also test it?\nAttached is a v4 patch that also addresses your code comments so far.\nI have included the test case in 006_logical_decoding.pl. I observed\nthat the test case just adds only about 0.5 to 1 sec time. 
Please\nverify on your env also, and also whether the test reproduces the\nissue without the code changes.\n\n> I think we can take the opinion of others as\n> well if they are fine with adding this test, otherwise, we can go\n> ahead with the main patch.\nSure.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company",
"msg_date": "Thu, 12 Dec 2019 21:50:12 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Dec 12, 2019 at 9:50 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> Attached is a v4 patch that also addresses your code comments so far.\n> I have included the test case in 006_logical_decoding.pl. I observed\n> that the test case just adds only about 0.5 to 1 sec time. Please\n> verify on your env also, and also whether the test reproduces the\n> issue without the code changes.\n>\n\nIt takes roughly the same time on my machine as well. I have checked\non Windows as well, it increases the time from 14 to 16 (17) seconds\nfor this test. I don't think this is any big increase considering the\ntiming of other tests and it would be good to have a test for such\nboundary conditions. I have slightly changed the comments in the patch\nand ran pgindent. Attached, find the patch with a proposed commit\nmessage.\n\nI have also made minor changes related to below code in patch:\n- else if (readBytes != sizeof(ReorderBufferDiskChange))\n+\n+ file->curOffset += readBytes;\n+\n+ if (readBytes !=\nsizeof(ReorderBufferDiskChange))\n\nWhy the size is added before the error check? I think it should be\nafter that check, so changed accordingly. Similarly, I don't see why\nwe need to change 'else if' to 'if' in this code, so changed back.\n\nI think we need to change/tweak the test for back branches as there we\ndon't have logical_decoding_work_mem. Can you please look into that\nand see if you can run perltidy for the test file.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 14 Dec 2019 11:59:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Sat, 14 Dec 2019 at 11:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 12, 2019 at 9:50 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > Attached is a v4 patch that also addresses your code comments so far.\n> > I have included the test case in 006_logical_decoding.pl. I observed\n> > that the test case just adds only about 0.5 to 1 sec time. Please\n> > verify on your env also, and also whether the test reproduces the\n> > issue without the code changes.\n> >\n>\n> It takes roughly the same time on my machine as well. I have checked\n> on Windows as well, it increases the time from 14 to 16 (17) seconds\n> for this test. I don't think this is any big increase considering the\n> timing of other tests and it would be good to have a test for such\n> boundary conditions. I have slightly changed the comments in the patch\n> and ran pgindent. Attached, find the patch with a proposed commit\n> message.\n>\n> I have also made minor changes related to below code in patch:\n> - else if (readBytes != sizeof(ReorderBufferDiskChange))\n> +\n> + file->curOffset += readBytes;\n> +\n> + if (readBytes !=\n> sizeof(ReorderBufferDiskChange))\n>\n> Why the size is added before the error check?\nThe logic was: even though it's an error that the readBytes does not\nmatch the expected size, the file read is successful so update the vfd\noffset as early as possible. In our case, this might not matter much,\nbut who knows, in the future, in the exception block (say, in\nReorderBufferIterTXNFinish), someone assumes that the file offset is\ncorrect and does something with that, then we will get in trouble,\nalthough I agree that it's very unlikely. But IMO, because we want to\nsimulate the file offset support in vfd, we should update the file\noffset immediately after a file read is known to have succeeded.\n\n> I think it should be\n> after that check, so changed accordingly. Similarly, I don't see why\n> we need to change 'else if' to 'if' in this code, so changed back.\nSince for adding the size before the error check I had to remove the\nelse-if, to be consistent I also removed the else-if at surrounding\nplaces.\n\n>\n> I think we need to change/tweak the test for back branches as there we\n> don't have logical_decoding_work_mem. Can you please look into that\nYeah, I believe we need to backport up to PG 9.4 where logical\ndecoding was introduced, so I am first trying out with the 9.4 branch.\n\n> and see if you can run perltidy for the test file.\nHmm, I tried perltidy, and it seems to mostly add a space after ( and\na space before ) if there's one already; so \"('postgres',\" is replaced by\n\"(<space> 'postgres',\". And this is going to be inconsistent with\nother places. And it replaces tabs with spaces. Do you think we should\ntry perltidy, or have we been using this tool for the tap tests\nbefore?\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Mon, 16 Dec 2019 15:25:52 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 3:26 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Sat, 14 Dec 2019 at 11:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I have also made minor changes related to below code in patch:\n> > - else if (readBytes != sizeof(ReorderBufferDiskChange))\n> > +\n> > + file->curOffset += readBytes;\n> > +\n> > + if (readBytes !=\n> > sizeof(ReorderBufferDiskChange))\n> >\n> > Why the size is added before the error check?\n> The logic was: even though it's an error that the readBytes does not\n> match the expected size, the file read is successful so update the vfd\n> offset as early as possible. In our case, this might not matter much,\n> but who knows, in the future, in the exception block (say, in\n> ReorderBufferIterTXNFinish), someone assumes that the file offset is\n> correct and does something with that, then we will get in trouble,\n> although I agree that it's very unlikely.\n>\n\nI am not sure if there is any such need, but even if it is there, I\nthink updating after a *short* read (read less than expected) doesn't\nseem like a good idea because there is clearly some problem with the\nread call. Also, in the case below that case where we read the actual\nchange data, the offset is updated after the check of *short* read. I\ndon't see any advantage in such an inconsistency. I still feel it is\nbetter to update the offset after all error checks.\n\n>\n> >\n> > and see if you can run perltidy for the test file.\n> Hmm, I tried perltidy, and it seems to mostly add a space after ( and\n> a space before ) if there's one already; so \"('postgres',\" is replaced by\n> \"(<space> 'postgres',\". And this is going to be inconsistent with\n> other places. And it replaces tabs with spaces. Do you think we should\n> try perltidy, or have we been using this tool for the tap tests\n> before?\n> >\n>\n\nSee text in src/test/perl/README (Note that all tests and test tools\nshould have perltidy run on them before patches are submitted, using\nperltidy --profile=src/tools/pgindent/perltidyrc). It is recommended\nto use perltidy.\n\nNow, if it is making the added code inconsistent with nearby code,\nthen I suggest to leave it.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Dec 2019 16:52:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Mon, 16 Dec 2019 at 16:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 16, 2019 at 3:26 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > On Sat, 14 Dec 2019 at 11:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I have also made minor changes related to below code in patch:\n> > > - else if (readBytes != sizeof(ReorderBufferDiskChange))\n> > > +\n> > > + file->curOffset += readBytes;\n> > > +\n> > > + if (readBytes !=\n> > > sizeof(ReorderBufferDiskChange))\n> > >\n> > > Why the size is added before the error check?\n> > The logic was : even though it's an error that the readBytes does not\n> > match the expected size, the file read is successful so update the vfd\n> > offset as early as possible. In our case, this might not matter much,\n> > but who knows, in the future, in the exception block (say, in\n> > ReorderBufferIterTXNFinish, someone assumes that the file offset is\n> > correct and does something with that, then we will get in trouble,\n> > although I agree that it's very unlikely.\n> >\n>\n> I am not sure if there is any such need, but even if it is there, I\n> think updating after a *short* read (read less than expected) doesn't\n> seem like a good idea because there is clearly some problem with the\n> read call. Also, in the case below that case where we read the actual\n> change data, the offset is updated after the check of *short* read. I\n> don't see any advantage in such an inconsistency. I still feel it is\n> better to update the offset after all error checks.\nOk, no problem; I don't see any harm in doing the updates after the size checks.\n\nBy the way, the backport patch is turning out to be simpler. 
It's\nbecause in pre-12 versions, the file offset is part of the Vfd\nstructure, so all the offset handling is not required.\n\n>\n> >\n> > > and see if you can run perltidy for the test file.\n> > Hmm, I tried perltidy, and it seems to mostly add a space after ( and\n> > a space before ) if there's already; so \"('postgres',\" is replaced by\n> > \"(<space> 'postgres',\". And this is going to be inconsistent with\n> > other places. And it replaces tab with spaces. Do you think we should\n> > try perltidy, or have we before been using this tool for the tap tests\n> > ?\n> >\n>\n> See text in src/test/perl/README (Note that all tests and test tools\n> should have perltidy run on them before patches are submitted, using\n> perltidy - profile=src/tools/pgindent/perltidyrc). It is recommended\n> to use perltidy.\n>\n> Now, if it is making the added code inconsistent with nearby code,\n> then I suggest to leave it.\nIn many places, it is becoming inconsistent, but will see if there are\nsome places where it does make sense and does not break consistency.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Tue, 17 Dec 2019 17:40:58 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, 17 Dec 2019 at 17:40, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> By the way, the backport patch is turning out to be simpler. It's\n> because in pre-12 versions, the file offset is part of the Vfd\n> structure, so all the offset handling is not required.\n\nPlease have a look at the attached backport patch for the PG 11 branch.\nOnce you are ok with the patch, I will port it to the other branches.\nNote that in the patch, wherever applicable I have renamed the fd\nvariable to vfd to signify that it is a vfd, and not the kernel fd. If\nwe don't do the renaming, the patch would be still smaller, but I\nthink the renaming makes sense.\n\nThe recovery TAP tests don't seem to be there on the 9.4 and 9.5 branches,\nso I think it's ok to not have any tests with the patches on these\nbranches that don't have the tap tests.\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company",
"msg_date": "Wed, 18 Dec 2019 12:33:39 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 12:34 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Tue, 17 Dec 2019 at 17:40, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > By the way, the backport patch is turning out to be simpler. It's\n> > because in pre-12 versions, the file offset is part of the Vfd\n> > structure, so all the offset handling is not required.\n>\n> Please have a look at the attached backport patch for PG 11. branch.\n> Once you are ok with the patch, I will port it on other branches.\n> Note that in the patch, wherever applicable I have renamed the fd\n> variable to vfd to signify that it is a vfd, and not the kernel fd. If\n> we don't do the renaming, the patch would be still smaller, but I\n> think the renaming makes sense.\n>\n\nThe other usage of PathNameOpenFile in md.c is already using 'fd' as a\nvariable name (also, if you see example in fd.h, that also uses fd as\nvariable name), so I don't see any problem with using fd especially if\nthat leads to lesser changes. Apart from that, your patch LGTM.\n\n> The recovery TAP tests don't seem to be there on 9.4 and 9.5 branch,\n> so I think it's ok to not have any tests with the patches on these\n> branches that don't have the tap tests.\n>\n\nYeah, that is fine.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Dec 2019 11:59:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, 19 Dec 2019 at 11:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 18, 2019 at 12:34 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> >\n> > On Tue, 17 Dec 2019 at 17:40, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > > By the way, the backport patch is turning out to be simpler. It's\n> > > because in pre-12 versions, the file offset is part of the Vfd\n> > > structure, so all the offset handling is not required.\n> >\n> > Please have a look at the attached backport patch for PG 11. branch.\n> > Once you are ok with the patch, I will port it on other branches.\n> > Note that in the patch, wherever applicable I have renamed the fd\n> > variable to vfd to signify that it is a vfd, and not the kernel fd. If\n> > we don't do the renaming, the patch would be still smaller, but I\n> > think the renaming makes sense.\n> >\n>\n> The other usage of PathNameOpenFile in md.c is already using 'fd' as a\n> variable name (also, if you see example in fd.h, that also uses fd as\n> variable name), so I don't see any problem with using fd especially if\n> that leads to lesser changes.\n\nOk. I have retained the fd name.\n\n> Apart from that, your patch LGTM.\nAttached are the patches from master back up to the 9.4 branch.\n\nPG 9.4 and 9.5 have a common patch to be applied:\npg94_95_use_vfd_for_logrep.patch\nFrom PG 9.6 onwards, each version has a separate patch.\n\nFor PG 9.6, there is no logical decoding perl test file. So I have\nmade a new file 006_logical_decoding_spill.pl that has only the\nspecific testcase. Also, for building the test_decoding.so, I had to\nadd the EXTRA_INSTALL=contrib/test_decoding line in the\nsrc/test/recovery/Makefile, because this is the first time we are\nusing the plugin in the 9.6 tap test.\n\nFrom PG 10 onwards, pgstat_report_*() calls around read() are removed\nin the patch, because FileRead() itself reports the wait events.\n\nFrom PG 12 onwards, the vfd offset handling had to be added, because\nthe offset is not present in the Vfd structure.\n\nIn master, logical_decoding_work_mem is used in the test file.\n\n\n--\nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company",
"msg_date": "Fri, 20 Dec 2019 09:31:10 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 9:31 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> Attached are the patches from master back up to 94 branch.\n>\n> PG 9.4 and 9.5 have a common patch to be applied :\n> pg94_95_use_vfd_for_logrep.patch\n> From PG 9.6 onwards, each version has a separate patch.\n>\n> For PG 9.6, there is no logical decoding perl test file. So I have\n> made a new file 006_logical_decoding_spill.pl that has only the\n> specific testcase. Also, for building the test_decoding.so, I had to\n> add the EXTRA_INSTALL=contrib/test_decoding line in the\n> src/test/recovery/Makefile, because this is the first time we are\n> using the plugin in the 9.6 tap test.\n>\n\nI am not sure if we need to go that far for 9.6 branch. If the other\ntests for logical decoding are not present, then I don't see why we\nneed to create a new test file for this test only. Also, I think this\nwill make the patch the same for 9.4,9.5 and 9.6.\n\n> From PG 10 onwards, pgstat_report_*() calls around read() are removed\n> in the patch, because FileRead() itself reports the wait events.\n>\n\nWhy there are different patches for 10 and 11?\n\nWe should try to minimize the difference between patches in different branches.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Dec 2019 10:41:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, 24 Dec 2019 at 10:41, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 20, 2019 at 9:31 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > Attached are the patches from master back up to 94 branch.\n> >\n> > PG 9.4 and 9.5 have a common patch to be applied :\n> > pg94_95_use_vfd_for_logrep.patch\n> > From PG 9.6 onwards, each version has a separate patch.\n> >\n> > For PG 9.6, there is no logical decoding perl test file. So I have\n> > made a new file 006_logical_decoding_spill.pl that has only the\n> > specific testcase. Also, for building the test_decoding.so, I had to\n> > add the EXTRA_INSTALL=contrib/test_decoding line in the\n> > src/test/recovery/Makefile, because this is the first time we are\n> > using the plugin in the 9.6 tap test.\n> >\n>\n> I am not sure if we need to go that far for 9.6 branch. If the other\n> tests for logical decoding are not present, then I don't see why we\n> need to create a new test file for this test only. Also, I think this\n> will make the patch the same for 9.4,9.5 and 9.6.\n\nOk. I tested pg94_95_use_vfd_for_logrep.patch for 9.6 branch, and it\nworks there. So please use this patch for all the three branches.\n\n>\n> > From PG 10 onwards, pgstat_report_*() calls around read() are removed\n> > in the patch, because FileRead() itself reports the wait events.\n> >\n>\n> Why there are different patches for 10 and 11?\nFor PG10, OpenTransientFile() and PathNameOpenFile() each have an\nextra parameter for specifying file creation modes such as S_IRUSR or\nS_IWUSR. For 11, these functions don't accept the flags, rather the\nfile is always opened with PG_FILE_MODE_OWNER. Because of these\ndifferences in the calls, the PG 10 patch does not apply on 11.\n\n\n>\n> We should try to minimize the difference between patches in different branches.\nOk.\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Tue, 24 Dec 2019 14:31:02 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 2:31 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Tue, 24 Dec 2019 at 10:41, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Dec 20, 2019 at 9:31 AM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n> > > Attached are the patches from master back up to 94 branch.\n> > >\n> > > PG 9.4 and 9.5 have a common patch to be applied :\n> > > pg94_95_use_vfd_for_logrep.patch\n> > > From PG 9.6 onwards, each version has a separate patch.\n> > >\n> > > For PG 9.6, there is no logical decoding perl test file. So I have\n> > > made a new file 006_logical_decoding_spill.pl that has only the\n> > > specific testcase. Also, for building the test_decoding.so, I had to\n> > > add the EXTRA_INSTALL=contrib/test_decoding line in the\n> > > src/test/recovery/Makefile, because this is the first time we are\n> > > using the plugin in the 9.6 tap test.\n> > >\n> >\n> > I am not sure if we need to go that far for 9.6 branch. If the other\n> > tests for logical decoding are not present, then I don't see why we\n> > need to create a new test file for this test only. Also, I think this\n> > will make the patch the same for 9.4,9.5 and 9.6.\n>\n> Ok. I tested pg94_95_use_vfd_for_logrep.patch for 9.6 branch, and it\n> works there. So please use this patch for all the three branches.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 Jan 2020 17:44:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Dec 24, 2019 at 2:31 PM Amit Khandekar <amitdkhan.pg@gmail.com>\n> wrote:\n> >\n> >\n> > Ok. I tested pg94_95_use_vfd_for_logrep.patch for 9.6 branch, and it\n> > works there. So please use this patch for all the three branches.\n> >\n>\n> Pushed!\n>\n\n\nI see one failure in REL_10_STABLE [1] which seems to be due to this commit:\n\nTest Summary Report\n-------------------\nt/006_logical_decoding.pl (Wstat: 7424 Tests: 10 Failed: 0)\n Non-zero exit status: 29\n Parse errors: Bad plan. You planned 11 tests but ran 10.\nFiles=14, Tests=122, 1968 wallclock secs ( 0.10 usr 0.03 sys + 19.00 cusr\n21.98 csys = 41.11 CPU)\nResult: FAIL\nMakefile:19: recipe for target 'check' failed\nmake: *** [check] Error 1\n\n\nSee below snippet from 006_logical_decoding_master.log\n..\n..\n2020-01-03 01:30:48.254 UTC [12189836:9] t/006_logical_decoding.pl\nSTATEMENT: SELECT data from pg_logical_slot_get_changes('test_slot', NULL,\nNULL)\n WHERE data LIKE '%INSERT%' ORDER BY lsn LIMIT 1;\n2020-01-03 01:30:51.990 UTC [6882174:3] LOG: server process (PID 12189836)\nwas terminated by signal 11\n2020-01-03 01:30:51.990 UTC [6882174:4] DETAIL: Failed process was\nrunning: SELECT data from pg_logical_slot_get_changes('test_slot', NULL,\nNULL)\n WHERE data LIKE '%INSERT%' ORDER BY lsn LIMIT 1;\n2020-01-03 01:30:51.990 UTC [6882174:5] LOG: terminating any other active\nserver processes\n\nThe strange thing is that the same test passes on master on the same\nmachine [2] and for 10 as well, it passes on other machines, so not sure\nwhat could cause this. 
Any clue?\n\n[1] -\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2020-01-02%2023%3A36%3A31\n[2] -\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2020-01-02%2010%3A37%3A33\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Jan 2, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Tue, Dec 24, 2019 at 2:31 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n>\n> Ok. I tested pg94_95_use_vfd_for_logrep.patch for 9.6 branch, and it\n> works there. So please use this patch for all the three branches.\n>\n\nPushed!I see one failure in REL_10_STABLE [1] which seems to be due to this commit:Test Summary Report-------------------t/006_logical_decoding.pl (Wstat: 7424 Tests: 10 Failed: 0) Non-zero exit status: 29 Parse errors: Bad plan. You planned 11 tests but ran 10.Files=14, Tests=122, 1968 wallclock secs ( 0.10 usr 0.03 sys + 19.00 cusr 21.98 csys = 41.11 CPU)Result: FAILMakefile:19: recipe for target 'check' failedmake: *** [check] Error 1See below snippet from 006_logical_decoding_master.log....2020-01-03 01:30:48.254 UTC [12189836:9] t/006_logical_decoding.pl STATEMENT: SELECT data from pg_logical_slot_get_changes('test_slot', NULL, NULL)\t WHERE data LIKE '%INSERT%' ORDER BY lsn LIMIT 1;2020-01-03 01:30:51.990 UTC [6882174:3] LOG: server process (PID 12189836) was terminated by signal 112020-01-03 01:30:51.990 UTC [6882174:4] DETAIL: Failed process was running: SELECT data from pg_logical_slot_get_changes('test_slot', NULL, NULL)\t WHERE data LIKE '%INSERT%' ORDER BY lsn LIMIT 1;2020-01-03 01:30:51.990 UTC [6882174:5] LOG: terminating any other active server processesThe strange thing is that the same test passes on master on the same machine [2] and for 10 as well, it passes on other machines, so not sure what could cause this. 
Any clue?[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2020-01-02%2023%3A36%3A31[2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2020-01-02%2010%3A37%3A33-- With Regards,Amit Kapila.EnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 3 Jan 2020 08:29:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 8:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Jan 2, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>> On Tue, Dec 24, 2019 at 2:31 PM Amit Khandekar <amitdkhan.pg@gmail.com>\n>> wrote:\n>> >\n>> >\n>> > Ok. I tested pg94_95_use_vfd_for_logrep.patch for 9.6 branch, and it\n>> > works there. So please use this patch for all the three branches.\n>> >\n>>\n>> Pushed!\n>>\n>\n>\n> I see one failure in REL_10_STABLE [1] which seems to be due to this\n> commit:\n>\n>\nI tried this test on my CentOs and Power8 machine more than 50 times, but\ncouldn't reproduce it. So, adding Noah to see if he can try this test [1]\non his machine (tern) and get stack track or some other information?\n\n[1] - make -C src/test/recovery/ check PROVE_TESTS=t/006_logical_decoding.pl\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Fri, Jan 3, 2020 at 8:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:On Thu, Jan 2, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Tue, Dec 24, 2019 at 2:31 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n>\n> Ok. I tested pg94_95_use_vfd_for_logrep.patch for 9.6 branch, and it\n> works there. So please use this patch for all the three branches.\n>\n\nPushed!I see one failure in REL_10_STABLE [1] which seems to be due to this commit:I tried this test on my CentOs and Power8 machine more than 50 times, but couldn't reproduce it. So, adding Noah to see if he can try this test [1] on his machine (tern) and get stack track or some other information?[1] - make -C src/test/recovery/ check PROVE_TESTS=t/006_logical_decoding.pl-- With Regards,Amit Kapila.EnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 3 Jan 2020 10:19:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, 3 Jan 2020 at 10:19, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 3, 2020 at 8:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Thu, Jan 2, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>\n>>> On Tue, Dec 24, 2019 at 2:31 PM Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>>> >\n>>> >\n>>> > Ok. I tested pg94_95_use_vfd_for_logrep.patch for 9.6 branch, and it\n>>> > works there. So please use this patch for all the three branches.\n>>> >\n>>>\n>>> Pushed!\n>>\n>>\n>>\n>> I see one failure in REL_10_STABLE [1] which seems to be due to this commit:\n>>\n>\n> I tried this test on my CentOs and Power8 machine more than 50 times, but couldn't reproduce it. So, adding Noah to see if he can try this test [1] on his machine (tern) and get stack track or some other information?\n>\n> [1] - make -C src/test/recovery/ check PROVE_TESTS=t/006_logical_decoding.pl\n\nI also tested multiple times using PG 10 branch; also tried to inject\nan error so that PG_CATCH related code also gets covered, but\nunfortunately didn't get the crash on my machine. I guess, we will\nhave to somehow get the stacktrace.\n\n>\n> --\n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n\n-- \nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 14:20:09 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Jan 03, 2020 at 02:20:09PM +0530, Amit Khandekar wrote:\n> On Fri, 3 Jan 2020 at 10:19, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Jan 3, 2020 at 8:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> I see one failure in REL_10_STABLE [1] which seems to be due to this commit:\n> >\n> > I tried this test on my CentOs and Power8 machine more than 50 times, but couldn't reproduce it. So, adding Noah to see if he can try this test [1] on his machine (tern) and get stack track or some other information?\n> >\n> > [1] - make -C src/test/recovery/ check PROVE_TESTS=t/006_logical_decoding.pl\n> \n> I also tested multiple times using PG 10 branch; also tried to inject\n> an error so that PG_CATCH related code also gets covered, but\n> unfortunately didn't get the crash on my machine. I guess, we will\n> have to somehow get the stacktrace.\n\nI have buildfarm member tern running this test in a loop. In the 290\niterations so far, it hasn't failed. I've leave it running for another week\nor so.\n\nThe buildfarm client can capture stack traces, but it currently doesn't do so\nfor TAP test suites (search the client code for get_stack_trace). If someone\nfeels like writing a fix for that, it would be a nice improvement. Perhaps,\nrather than having the client code know all the locations where core files\nmight appear, failed runs should walk the test directory tree for core files?\n\n\n",
"msg_date": "Sat, 4 Jan 2020 10:51:48 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 12:21 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Fri, Jan 03, 2020 at 02:20:09PM +0530, Amit Khandekar wrote:\n> > On Fri, 3 Jan 2020 at 10:19, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Fri, Jan 3, 2020 at 8:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >> I see one failure in REL_10_STABLE [1] which seems to be due to this commit:\n> > >\n> > > I tried this test on my CentOs and Power8 machine more than 50 times, but couldn't reproduce it. So, adding Noah to see if he can try this test [1] on his machine (tern) and get stack track or some other information?\n> > >\n> > > [1] - make -C src/test/recovery/ check PROVE_TESTS=t/006_logical_decoding.pl\n> >\n> > I also tested multiple times using PG 10 branch; also tried to inject\n> > an error so that PG_CATCH related code also gets covered, but\n> > unfortunately didn't get the crash on my machine. I guess, we will\n> > have to somehow get the stacktrace.\n>\n> I have buildfarm member tern running this test in a loop. In the 290\n> iterations so far, it hasn't failed. I've leave it running for another week\n> or so.\n>\n\nOkay, thanks! FYI, your other machine 'mandril' also exhibits the\nexact same behavior and on v10. Both the machines (tern and mandril)\nseem to have the same specs which seems to be the reason that they are\nfailing in the same way. The thing that bothers me is that the fix\nand test are the same for v11 and test passes for v11 on both\nmachines. Does this indicate any random behavior or maybe some other\nbug in v10 which is discovered by this test?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 5 Jan 2020 10:29:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Sun, 5 Jan 2020 at 00:21, Noah Misch <noah@leadboat.com> wrote:\n> The buildfarm client can capture stack traces, but it currently doesn't do so\n> for TAP test suites (search the client code for get_stack_trace). If someone\n> feels like writing a fix for that, it would be a nice improvement. Perhaps,\n> rather than having the client code know all the locations where core files\n> might appear, failed runs should walk the test directory tree for core files?\n\nI think this might end up having the same code to walk the directory\nspread out on multiple files. Instead, I think in the build script, in\nget_stack_trace(), we can do an equivalent of \"find <inputdir> -name\n\"*core*\" , as against the current way in which it looks for core files\nonly in the specific data directory. So get_stack_trace(bindir,\ndatadir) would change to get_stack_trace(bindir, input_dir) where\ninput_dir can be any directory that can contain multiple data\ndirectories. E.g. a recovery test can create multiple instances so\nthere would be multiple data directories inside the test directory.\n\nNoah, is it possible to run a patch'ed build script once I submit a\npatch, so that we can quickly get the stack trace ? I mean, can we do\nthis before getting the patch committed ? I guess, we can run the\nbuild script with a single branch specified, right ?\n\n\n\n\n--\nThanks,\n-Amit Khandekar\nEnterpriseDB Corporation\nThe Postgres Database Company\n\n\n",
"msg_date": "Wed, 8 Jan 2020 14:50:53 +0530",
"msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 10:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jan 5, 2020 at 12:21 AM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > On Fri, Jan 03, 2020 at 02:20:09PM +0530, Amit Khandekar wrote:\n> > > On Fri, 3 Jan 2020 at 10:19, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > On Fri, Jan 3, 2020 at 8:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >> I see one failure in REL_10_STABLE [1] which seems to be due to this commit:\n> > > >\n> > > > I tried this test on my CentOs and Power8 machine more than 50 times, but couldn't reproduce it. So, adding Noah to see if he can try this test [1] on his machine (tern) and get stack track or some other information?\n> > > >\n> > > > [1] - make -C src/test/recovery/ check PROVE_TESTS=t/006_logical_decoding.pl\n> > >\n> > > I also tested multiple times using PG 10 branch; also tried to inject\n> > > an error so that PG_CATCH related code also gets covered, but\n> > > unfortunately didn't get the crash on my machine. I guess, we will\n> > > have to somehow get the stacktrace.\n> >\n> > I have buildfarm member tern running this test in a loop. In the 290\n> > iterations so far, it hasn't failed. I've leave it running for another week\n> > or so.\n> >\n>\n> Okay, thanks! FYI, your other machine 'mandril' also exhibits the\n> exact same behavior and on v10. Both the machines (tern and mandril)\n> seem to have the same specs which seems to be the reason that they are\n> failing in the same way. The thing that bothers me is that the fix\n> and test are the same for v11 and test passes for v11 on both\n> machines. Does this indicate any random behavior or maybe some other\n> bug in v10 which is discovered by this test?\n>\n\nAnother thing to notice here is that on buildfarm 'tern' (for v10), it\nis getting reproduced, whereas when you ran it independently, then the\nproblem is not reproduced even after so many runs. 
What could be the\ndifference which is causing this?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Jan 2020 07:42:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Wed, Jan 08, 2020 at 02:50:53PM +0530, Amit Khandekar wrote:\n> On Sun, 5 Jan 2020 at 00:21, Noah Misch <noah@leadboat.com> wrote:\n> > The buildfarm client can capture stack traces, but it currently doesn't do so\n> > for TAP test suites (search the client code for get_stack_trace). If someone\n> > feels like writing a fix for that, it would be a nice improvement. Perhaps,\n> > rather than having the client code know all the locations where core files\n> > might appear, failed runs should walk the test directory tree for core files?\n> \n> I think this might end up having the same code to walk the directory\n> spread out on multiple files. Instead, I think in the build script, in\n> get_stack_trace(), we can do an equivalent of \"find <inputdir> -name\n> \"*core*\" , as against the current way in which it looks for core files\n> only in the specific data directory.\n\nAgreed.\n\n> Noah, is it possible to run a patch'ed build script once I submit a\n> patch, so that we can quickly get the stack trace ? I mean, can we do\n> this before getting the patch committed ? I guess, we can run the\n> build script with a single branch specified, right ?\n\nYes to all questions, but it would not have helped in this case. First, v10\ndeletes PostgresNode base directories at the end of this test file, despite\nthe failure[1]. Second, the stack trace was minimal:\n\n (gdb) bt \n #0 0xd011119c in extend_brk () from /usr/lib/libc.a(shr.o)\n\nEven so, a web search for \"extend_brk\" led to the answer. By default, 32-bit\nAIX binaries get only 256M of RAM for stack and sbrk. The new regression test\nused more than that, hence this crash. Setting LDR_CNTRL=MAXDATA=0x80000000\nin the environment cured the crash. 
I've put that in the buildfarm member\nconfiguration and started a new run.\n\n(PostgreSQL documentation actually covers this problem:\nhttps://www.postgresql.org/docs/devel/installation-platform-notes.html#INSTALLATION-NOTES-AIX)\n\n\n[1] It has the all_tests_passing() logic in an attempt to stop this. I'm\nguessing it didn't help because the file failed by calling die \"connection\nerror: ...\", not by reporting a failure to Test::More via ok(0) or similar.\n\n\n",
"msg_date": "Wed, 8 Jan 2020 21:37:04 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> Even so, a web search for \"extend_brk\" led to the answer. By default, 32-bit\n> AIX binaries get only 256M of RAM for stack and sbrk. The new regression test\n> used more than that, hence this crash.\n\nHm, so\n\n(1) Why did we get a crash and not some more-decipherable out-of-resources\nerror? Can we improve that experience?\n\n(2) Should we be dialing back the resource consumption of this test?\nEven on machines where it doesn't fail outright, I'd imagine that it's\ncosting a lot of buildfarm cycles. Is it actually worth that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 00:45:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 11:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Noah Misch <noah@leadboat.com> writes:\n> > Even so, a web search for \"extend_brk\" led to the answer. By default, 32-bit\n> > AIX binaries get only 256M of RAM for stack and sbrk. The new regression test\n> > used more than that, hence this crash.\n>\n> Hm, so\n>\n> (1) Why did we get a crash and not some more-decipherable out-of-resources\n> error? Can we improve that experience?\n>\n> (2) Should we be dialing back the resource consumption of this test?\n>\n\nIn HEAD, we have a guc variable 'logical_decoding_work_mem' by which\nwe can control the memory usage of changes and we have used that, but\nfor back branches, we don't have such a control.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Jan 2020 11:47:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 11:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Noah Misch <noah@leadboat.com> writes:\n> > Even so, a web search for \"extend_brk\" led to the answer. By default, 32-bit\n> > AIX binaries get only 256M of RAM for stack and sbrk. The new regression test\n> > used more than that, hence this crash.\n>\n> Hm, so\n>\n> (1) Why did we get a crash and not some more-decipherable out-of-resources\n> error? Can we improve that experience?\n>\n> (2) Should we be dialing back the resource consumption of this test?\n> Even on machines where it doesn't fail outright, I'd imagine that it's\n> costing a lot of buildfarm cycles. Is it actually worth that?\n>\n\nAfter the latest changes by Noah, the tern and mandrill both are\ngreen. I will revert the test added by this patch unless there is\nsome strong argument to keep it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Jan 2020 16:21:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On 2020-Jan-09, Amit Kapila wrote:\n\n> In HEAD, we have a guc variable 'logical_decoding_work_mem' by which\n> we can control the memory usage of changes and we have used that, but\n> for back branches, we don't have such a control.\n\n> After the latest changes by Noah, the tern and mandrill both are\n> green. I will revert the test added by this patch unless there is\n> some strong argument to keep it.\n\nHmm, so why not revert the test only in the back branches, given that\nit's not so onerous in master?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 Jan 2020 11:44:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Hmm, so why not revert the test only in the back branches, given that\n> it's not so onerous in master?\n\nI grow tired of repeating myself, but: it's purely accidental that this\ntest passes in master for the existing set of buildfarm members.\n\nIf I have to do so to prove my point, I will set up a buildfarm member\nthat uses USE_NAMED_POSIX_SEMAPHORES, and then insist that the patch\ncope with that.\n\nBut the real issue is that the test is abusing max_files_per_process\nto do something it was never intended for. What it was intended for,\nand works well at, is to constrain the total FD consumption of a\ncollection of backends. It doesn't work well to constrain the maximum\nallocatedDescs consumption, because there's too much variability in\nour demand for other FDs. If we feel that we should have a test that\nis constraining that, then we need to invent some other mechanism to\ndo it with. If we're not willing to invent an appropriate mechanism\nto support the test, then we should drop the test, because a\nhalf-baked test is worse than none.\n\nAn appropriate mechanism, perhaps, would be some way to constrain\nmax_safe_fds directly, without any platform- or environment-dependent\neffects in the way. It could be as simple as\n\n\n\t/*\n\t * Take off the FDs reserved for system() etc.\n\t */\n\tmax_safe_fds -= NUM_RESERVED_FDS;\n\n+\t/*\n+\t * Apply debugging limit, if defined.\n+\t */\n+#ifdef MAX_SAFE_FDS_LIMIT\n+\tmax_safe_fds = Min(max_safe_fds, MAX_SAFE_FDS_LIMIT);\n+#endif\n+\n\t/*\n\t * Make sure we still have enough to get by.\n\t */\n\nand then somebody who was concerned about this could run a buildfarm\nmember with \"-DMAX_SAFE_FDS_LIMIT=10\" or so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 10:25:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On 2020-Jan-09, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Hmm, so why not revert the test only in the back branches, given that\n> > it's not so onerous in master?\n> \n> I grow tired of repeating myself, but: it's purely accidental that this\n> test passes in master for the existing set of buildfarm members.\n\nOh, I forgot we had that problem. I agree with reverting the test,\nrather than building all the extra functionality needed to make it more\nstable, in that case.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 Jan 2020 12:30:58 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Thu, Jan 9, 2020 at 11:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Noah Misch <noah@leadboat.com> writes:\n>>> Even so, a web search for \"extend_brk\" led to the answer. By default, 32-bit\n>>> AIX binaries get only 256M of RAM for stack and sbrk. The new regression test\n>>> used more than that, hence this crash.\n\n>> Hm, so\n>> (1) Why did we get a crash and not some more-decipherable out-of-resources\n>> error? Can we improve that experience?\n>> (2) Should we be dialing back the resource consumption of this test?\n\n> In HEAD, we have a guc variable 'logical_decoding_work_mem' by which\n> we can control the memory usage of changes and we have used that, but\n> for back branches, we don't have such a control.\n\nI poked into this a bit more by running the src/test/recovery tests under\nrestrictive ulimit settings. I used\n\nulimit -s 1024\nulimit -v 250000\n\n(At least on my 64-bit RHEL6 box, reducing ulimit -v much below this\ncauses initdb to fail, apparently because the post-bootstrap process\ntries to load all our tsearch and encoding conversion shlibs at once,\nand it hasn't got enough VM space to do so. Someday we may have to\nimprove that.)\n\nI did not manage to duplicate Noah's crash this way. What I see in\nthe v10 branch is that the new 006_logical_decoding.pl test fails,\nbut with a clean \"out of memory\" error. 
The memory map dump that\nthat produces fingers the culprit pretty unambiguously:\n\n...\n ReorderBuffer: 223302560 total in 26995 blocks; 7056 free (3 chunks); 223295504 used\n ReorderBufferByXid: 24576 total in 2 blocks; 11888 free (3 chunks); 12688 used\n Slab: TXN: 8192 total in 1 blocks; 5208 free (21 chunks); 2984 used\n Slab: Change: 2170880 total in 265 blocks; 2800 free (35 chunks); 2168080 used\n...\nGrand total: 226714720 bytes in 27327 blocks; 590888 free (785 chunks); 226123832 used\n\nThe test case is only inserting 50K fairly-short rows, so this seems\nlike an unreasonable amount of memory to be consuming for that; and\neven if you think it's reasonable, it clearly isn't going to scale\nto large production transactions.\n\nNow, the good news is that v11 and later get through\n006_logical_decoding.pl just fine under the same restriction.\nSo we did something in v11 to fix this excessive memory consumption.\nHowever, unless we're willing to back-port whatever that was, this\ntest case is clearly consuming excessive resources for the v10 branch.\n\nWe're not out of the woods either. I also observe that v12 and HEAD\nfall over, under these same test conditions, with a stack-overflow\nerror in the 012_subtransactions.pl test. This seems to be due to\nsomebody's decision to use a heavily recursive function to generate a\nbunch of subtransactions. Is there a good reason for hs_subxids() to\nuse recursion instead of a loop? If there is, what's the value of\nusing 201 levels rather than, say, 10?\n\nAnyway it remains unclear why Noah's machine got a crash instead of\nsomething more user-friendly. But the reason why it's only in the\nv10 branch seems non-mysterious.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 18:51:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "I wrote:\n> ReorderBuffer: 223302560 total in 26995 blocks; 7056 free (3 chunks); 223295504 used\n\n> The test case is only inserting 50K fairly-short rows, so this seems\n> like an unreasonable amount of memory to be consuming for that; and\n> even if you think it's reasonable, it clearly isn't going to scale\n> to large production transactions.\n\n> Now, the good news is that v11 and later get through\n> 006_logical_decoding.pl just fine under the same restriction.\n> So we did something in v11 to fix this excessive memory consumption.\n> However, unless we're willing to back-port whatever that was, this\n> test case is clearly consuming excessive resources for the v10 branch.\n\nI dug around a little in the git history for backend/replication/logical/,\nand while I find several commit messages mentioning memory leaks and\nfaulty spill logic, they all claim to have been back-patched as far\nas 9.4.\n\nIt seems reasonably likely to me that this result is telling us about\nan actual bug, ie, faulty back-patching of one or more of those fixes\ninto v10 and perhaps earlier branches.\n\nI don't know this code well enough to take point on looking for the\nproblem, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 19:40:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Jan 09, 2020 at 12:45:41AM -0500, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > Even so, a web search for \"extend_brk\" led to the answer. By default, 32-bit\n> > AIX binaries get only 256M of RAM for stack and sbrk. The new regression test\n> > used more than that, hence this crash.\n> \n> Hm, so\n> \n> (1) Why did we get a crash and not some more-decipherable out-of-resources\n> error? Can we improve that experience?\n\nBy default, 32-bit AIX binaries have maxdata:0x00000000. Specifying\nmaxdata:0x10000000 provides the same 256M of RAM, yet it magically changes the\nSIGSEGV to ENOMEM:\n\n$ OBJECT_MODE=32 gcc maxdata.c && ./a.out\nSegmentation fault\n$ OBJECT_MODE=32 gcc -Wl,-bmaxdata:0x00000000 maxdata.c && ./a.out\nSegmentation fault\n$ OBJECT_MODE=32 gcc -Wl,-bmaxdata:0x10000000 maxdata.c && ./a.out\ndone at 255 MiB: Not enough space\n\nWe could add -Wl,-bmaxdata:0x10000000 (or a higher value) to LDFLAGS when\nbuilding for 32-bit AIX.\n\n> (2) Should we be dialing back the resource consumption of this test?\n> Even on machines where it doesn't fail outright, I'd imagine that it's\n> costing a lot of buildfarm cycles. Is it actually worth that?\n\nThe test's resource usage, being quite low, should not be a factor in the\ntest's fate. On my usual development machine, the entire\n006_logical_decoding.pl file takes just 3s and ~250 MiB of RAM.\n\n\n",
"msg_date": "Thu, 9 Jan 2020 17:57:36 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 6:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > ReorderBuffer: 223302560 total in 26995 blocks; 7056 free (3 chunks); 223295504 used\n>\n> > The test case is only inserting 50K fairly-short rows, so this seems\n> > like an unreasonable amount of memory to be consuming for that; and\n> > even if you think it's reasonable, it clearly isn't going to scale\n> > to large production transactions.\n>\n> > Now, the good news is that v11 and later get through\n> > 006_logical_decoding.pl just fine under the same restriction.\n> > So we did something in v11 to fix this excessive memory consumption.\n> > However, unless we're willing to back-port whatever that was, this\n> > test case is clearly consuming excessive resources for the v10 branch.\n>\n> I dug around a little in the git history for backend/replication/logical/,\n> and while I find several commit messages mentioning memory leaks and\n> faulty spill logic, they all claim to have been back-patched as far\n> as 9.4.\n>\n> It seems reasonably likely to me that this result is telling us about\n> an actual bug, ie, faulty back-patching of one or more of those fixes\n> into v10 and perhaps earlier branches.\n>\n\nI think it would be good to narrow down this problem, but it seems we\ncan do this separately. I think to avoid forgetting about this, can\nwe track it somewhere as an open issue (In Older Bugs section of\nPostgreSQL 12 Open Items or some other place)?\n\nIt seems to me that this test has found a problem in back-branches, so\nwe might want to keep it after removing the max_files_per_process\nrestriction. However, unless we narrow down this memory leak it is\nnot a good idea to keep it at least not in v10. 
So, we have the below\noptions:\n(a) remove this test entirely from all branches and once we found the\nmemory leak problem in back-branches, then consider adding it again\nwithout max_files_per_process restriction.\n(b) keep this test without max_files_per_process restriction till v11\nand once the memory leak issue in v10 is found, we can back-patch to\nv10 as well.\n\nSuggestions?\n\n> If I have to do so to prove my point, I will set up a buildfarm member\n> that uses USE_NAMED_POSIX_SEMAPHORES, and then insist that the patch\n> cope with that.\n>\n\nShall we document that under USE_NAMED_POSIX_SEMAPHORES, we consume\nadditional fd? I thought about it because the minimum limit for\nmax_files_per_process is 25 and the system won't even start if someone\nhas used a platform where USE_NAMED_POSIX_SEMAPHORES is enabled.\nAlso, if it would have been explicitly mentioned, then I think this\ntest wouldn't have tried to become so optimistic about\nmax_files_per_process.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Jan 2020 09:31:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Thu, Jan 09, 2020 at 12:45:41AM -0500, Tom Lane wrote:\n>> (1) Why did we get a crash and not some more-decipherable out-of-resources\n>> error? Can we improve that experience?\n\n> By default, 32-bit AIX binaries have maxdata:0x00000000. Specifying\n> maxdata:0x10000000 provides the same 256M of RAM, yet it magically changes the\n> SIGSEGV to ENOMEM:\n> ...\n> We could add -Wl,-bmaxdata:0x10000000 (or a higher value) to LDFLAGS when\n> building for 32-bit AIX.\n\n+1, seems like that would improve matters considerably on that platform.\n\n>> (2) Should we be dialing back the resource consumption of this test?\n>> Even on machines where it doesn't fail outright, I'd imagine that it's\n>> costing a lot of buildfarm cycles. Is it actually worth that?\n\n> The test's resource usage, being quite low, should not be a factor in the\n> test's fate. On my usual development machine, the entire\n> 006_logical_decoding.pl file takes just 3s and ~250 MiB of RAM.\n\nYeah, as I noted downthread, it appears that initdb itself can't\nsucceed with less than ~250MB these days. My old-school self\nfeels like that's excessive, but I must admit I'm not motivated\nto go reduce it right now. But I think it's a clear win to fail\nwith \"out of memory\" rather than \"SIGSEGV\", so I think we ought\nto adjust the AIX build options as you suggest.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jan 2020 00:16:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 9:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 10, 2020 at 6:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > I wrote:\n> > > ReorderBuffer: 223302560 total in 26995 blocks; 7056 free (3 chunks); 223295504 used\n> >\n> > > The test case is only inserting 50K fairly-short rows, so this seems\n> > > like an unreasonable amount of memory to be consuming for that; and\n> > > even if you think it's reasonable, it clearly isn't going to scale\n> > > to large production transactions.\n> >\n> > > Now, the good news is that v11 and later get through\n> > > 006_logical_decoding.pl just fine under the same restriction.\n> > > So we did something in v11 to fix this excessive memory consumption.\n> > > However, unless we're willing to back-port whatever that was, this\n> > > test case is clearly consuming excessive resources for the v10 branch.\n> >\n> > I dug around a little in the git history for backend/replication/logical/,\n> > and while I find several commit messages mentioning memory leaks and\n> > faulty spill logic, they all claim to have been back-patched as far\n> > as 9.4.\n> >\n> > It seems reasonably likely to me that this result is telling us about\n> > an actual bug, ie, faulty back-patching of one or more of those fixes\n> > into v10 and perhaps earlier branches.\n> >\n>\n> I think it would be good to narrow down this problem, but it seems we\n> can do this separately. I think to avoid forgetting about this, can\n> we track it somewhere as an open issue (In Older Bugs section of\n> PostgreSQL 12 Open Items or some other place)?\n>\n> It seems to me that this test has found a problem in back-branches, so\n> we might want to keep it after removing the max_files_per_process\n> restriction. However, unless we narrow down this memory leak it is\n> not a good idea to keep it at least not in v10. 
So, we have the below\n> options:\n> (a) remove this test entirely from all branches and once we found the\n> memory leak problem in back-branches, then consider adding it again\n> without max_files_per_process restriction.\n> (b) keep this test without max_files_per_process restriction till v11\n> and once the memory leak issue in v10 is found, we can back-patch to\n> v10 as well.\n>\n\nI am planning to go with option (a) and attached are patches to revert\nthe entire test on HEAD and back branches. I am planning to commit\nthese by Tuesday unless someone has a better idea.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 11 Jan 2020 11:06:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Fri, Jan 10, 2020 at 9:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> ... So, we have the below\n>> options:\n>> (a) remove this test entirely from all branches and once we found the\n>> memory leak problem in back-branches, then consider adding it again\n>> without max_files_per_process restriction.\n>> (b) keep this test without max_files_per_process restriction till v11\n>> and once the memory leak issue in v10 is found, we can back-patch to\n>> v10 as well.\n\n> I am planning to go with option (a) and attached are patches to revert\n> the entire test on HEAD and back branches. I am planning to commit\n> these by Tuesday unless someone has a better idea.\n\nMakes sense to me. We've certainly found out something interesting\nfrom this test, but not what it was expecting to find ;-). I think\nthat there could be scope for two sorts of successor tests:\n\n* I still like my idea of directly constraining max_safe_fds through\nsome sort of debug option. But to my mind, we want to run the entire\nregression suite with that restriction, not just one small test.\n\n* The seeming bug in v10 suggests that we aren't testing large enough\nlogical-decoding cases, or at least aren't noticing leaks in that\narea. I'm not sure what a good design is for testing that. I'm not\nthrilled with just using a larger (and slower) test case, but it's\nnot clear to me how else to attack it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jan 2020 00:46:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Sat, Jan 11, 2020 at 11:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Fri, Jan 10, 2020 at 9:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> ... So, we have the below\n> >> options:\n> >> (a) remove this test entirely from all branches and once we found the\n> >> memory leak problem in back-branches, then consider adding it again\n> >> without max_files_per_process restriction.\n> >> (b) keep this test without max_files_per_process restriction till v11\n> >> and once the memory leak issue in v10 is found, we can back-patch to\n> >> v10 as well.\n>\n> > I am planning to go with option (a) and attached are patches to revert\n> > the entire test on HEAD and back branches. I am planning to commit\n> > these by Tuesday unless someone has a better idea.\n>\n> Makes sense to me. We've certainly found out something interesting\n> from this test, but not what it was expecting to find ;-). I think\n> that there could be scope for two sorts of successor tests:\n>\n> * I still like my idea of directly constraining max_safe_fds through\n> some sort of debug option. But to my mind, we want to run the entire\n> regression suite with that restriction, not just one small test.\n>\n\nGood idea.\n\n> * The seeming bug in v10 suggests that we aren't testing large enough\n> logical-decoding cases, or at least aren't noticing leaks in that\n> area. I'm not sure what a good design is for testing that. I'm not\n> thrilled with just using a larger (and slower) test case, but it's\n> not clear to me how else to attack it.\n>\n\nIt is not clear to me either at this stage, but I think we can decide\nthat after chasing the issue in v10. My current plan is to revert\nthis test and make a note of the memory leak problem found (probably\ntrack in Older Bugs section of PostgreSQL 12 Open Items). 
I think\nonce we found the issue id v10, we might be in a better position to\ndecide if the test on the lines of the current test would make sense\nor we need something else.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 12 Jan 2020 08:18:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Thu, Jan 09, 2020 at 07:40:12PM -0500, Tom Lane wrote:\n>I wrote:\n>> ReorderBuffer: 223302560 total in 26995 blocks; 7056 free (3 chunks); 223295504 used\n>\n>> The test case is only inserting 50K fairly-short rows, so this seems\n>> like an unreasonable amount of memory to be consuming for that; and\n>> even if you think it's reasonable, it clearly isn't going to scale\n>> to large production transactions.\n>\n>> Now, the good news is that v11 and later get through\n>> 006_logical_decoding.pl just fine under the same restriction.\n>> So we did something in v11 to fix this excessive memory consumption.\n>> However, unless we're willing to back-port whatever that was, this\n>> test case is clearly consuming excessive resources for the v10 branch.\n>\n>I dug around a little in the git history for backend/replication/logical/,\n>and while I find several commit messages mentioning memory leaks and\n>faulty spill logic, they all claim to have been back-patched as far\n>as 9.4.\n>\n>It seems reasonably likely to me that this result is telling us about\n>an actual bug, ie, faulty back-patching of one or more of those fixes\n>into v10 and perhaps earlier branches.\n>\n>I don't know this code well enough to take point on looking for the\n>problem, though.\n>\n\nWell, one thing we did in 11 is introduction of the Generation context.\nIn 10 we're still stashing all tuple data into the main AllocSet. I\nwonder if backporting a4ccc1cef5a04cc054af83bc4582a045d5232cb3 and a\ncouple of follow-up fixes would make the issue go away.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 12 Jan 2020 04:39:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Thu, Jan 09, 2020 at 07:40:12PM -0500, Tom Lane wrote:\n>> It seems reasonably likely to me that this result is telling us about\n>> an actual bug, ie, faulty back-patching of one or more of those fixes\n>> into v10 and perhaps earlier branches.\n\n> Well, one thing we did in 11 is introduction of the Generation context.\n> In 10 we're still stashing all tuple data into the main AllocSet. I\n> wonder if backporting a4ccc1cef5a04cc054af83bc4582a045d5232cb3 and a\n> couple of follow-up fixes would make the issue go away.\n\nHm. I'm loath to back-port Generation contexts. But looking at\na4ccc1cef5a04cc054af83bc4582a045d5232cb3, I see that (a) the\ncommit message mentions space savings, but (b) the replaced code\nin reorderbuffer.c doesn't look like it really would move the needle\nmuch in that regard. The old code had a one-off slab allocator\nthat we got rid of, but I don't see any actual leak there ...\nremind me where the win came from, exactly?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jan 2020 22:53:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Sat, Jan 11, 2020 at 10:53:57PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Thu, Jan 09, 2020 at 07:40:12PM -0500, Tom Lane wrote:\n>>> It seems reasonably likely to me that this result is telling us about\n>>> an actual bug, ie, faulty back-patching of one or more of those fixes\n>>> into v10 and perhaps earlier branches.\n>\n>> Well, one thing we did in 11 is introduction of the Generation context.\n>> In 10 we're still stashing all tuple data into the main AllocSet. I\n>> wonder if backporting a4ccc1cef5a04cc054af83bc4582a045d5232cb3 and a\n>> couple of follow-up fixes would make the issue go away.\n>\n>Hm. I'm loath to back-port Generation contexts.\n\nYeah, I agree. My suggestion was to try backpatching it and see if it\nresolves the issue.\n\n>But looking at\n>a4ccc1cef5a04cc054af83bc4582a045d5232cb3, I see that (a) the\n>commit message mentions space savings, but (b) the replaced code\n>in reorderbuffer.c doesn't look like it really would move the needle\n>much in that regard. The old code had a one-off slab allocator\n>that we got rid of, but I don't see any actual leak there ...\n>remind me where the win came from, exactly?\n>\n\nWell, the problem is that in 10 we allocate tuple data in the main\nmemory ReorderBuffer context, and when the transaction gets decoded we\npfree() it. But in AllocSet that only moves the data to the freelists,\nit does not release it entirely. So with the right allocation pattern\n(sufficiently diverse chunk sizes) this can easily result in allocation\nof large amount of memory that is never released.\n\nI don't know if this is what's happening in this particular test, but I\nwouldn't be surprised by it.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 12 Jan 2020 05:10:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sat, Jan 11, 2020 at 10:53:57PM -0500, Tom Lane wrote:\n>> remind me where the win came from, exactly?\n\n> Well, the problem is that in 10 we allocate tuple data in the main\n> memory ReorderBuffer context, and when the transaction gets decoded we\n> pfree() it. But in AllocSet that only moves the data to the freelists,\n> it does not release it entirely. So with the right allocation pattern\n> (sufficiently diverse chunk sizes) this can easily result in allocation\n> of large amount of memory that is never released.\n\n> I don't know if this is what's happening in this particular test, but I\n> wouldn't be surprised by it.\n\nNah, don't think I believe that: the test inserts a bunch of tuples,\nbut they look like they will all be *exactly* the same size.\n\nCREATE TABLE decoding_test(x integer, y text);\n...\n\n FOR i IN 1..10 LOOP\n BEGIN\n INSERT INTO decoding_test(x) SELECT generate_series(1,5000);\n EXCEPTION\n when division_by_zero then perform 'dummy';\n END;\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jan 2020 23:20:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Sun, Jan 12, 2020 at 8:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jan 11, 2020 at 11:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n>\n> > * The seeming bug in v10 suggests that we aren't testing large enough\n> > logical-decoding cases, or at least aren't noticing leaks in that\n> > area. I'm not sure what a good design is for testing that. I'm not\n> > thrilled with just using a larger (and slower) test case, but it's\n> > not clear to me how else to attack it.\n> >\n>\n> It is not clear to me either at this stage, but I think we can decide\n> that after chasing the issue in v10. My current plan is to revert\n> this test and make a note of the memory leak problem found (probably\n> track in Older Bugs section of PostgreSQL 12 Open Items).\n>\n\nPushed the revert and added an open item in the 'Older Bugs' section\nof PostgreSQL 12 Open Items [1].\n\n\n[1] - https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Jan 2020 09:58:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 9:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jan 12, 2020 at 8:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Jan 11, 2020 at 11:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> >\n> > > * The seeming bug in v10 suggests that we aren't testing large enough\n> > > logical-decoding cases, or at least aren't noticing leaks in that\n> > > area. I'm not sure what a good design is for testing that. I'm not\n> > > thrilled with just using a larger (and slower) test case, but it's\n> > > not clear to me how else to attack it.\n> > >\n> >\n> > It is not clear to me either at this stage, but I think we can decide\n> > that after chasing the issue in v10. My current plan is to revert\n> > this test and make a note of the memory leak problem found (probably\n> > track in Older Bugs section of PostgreSQL 12 Open Items).\n> >\n>\n> Pushed the revert\n>\n\nSidewinder is green now on back branches.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Jan 2020 07:10:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Forking thread \"logical decoding : exceeded maxAllocatedDescs for .spill\nfiles\" for this side issue:\n\nOn Wed, Jan 08, 2020 at 09:37:04PM -0800, Noah Misch wrote:\n> v10\n> deletes PostgresNode base directories at the end of this test file, despite\n> the failure[1].\n\n> [1] It has the all_tests_passing() logic in an attempt to stop this. I'm\n> guessing it didn't help because the file failed by calling die \"connection\n> error: ...\", not by reporting a failure to Test::More via ok(0) or similar.\n\nThat is what happened. We should test the exit status to decide whether to\nkeep temporaries, as attached. PostgresNode does that, since commit 90627cf\n(thread https://postgr.es/m/flat/6205.1492883490%40sss.pgh.pa.us). That\nthread already discussed $SUBJECT[1] and the __DIE__ handler being\nredundant[2]. I plan to back-patch, since it's most useful for v10 and v9.6.\n\n[1] https://postgr.es/m/CAMsr+YFyFU=+MVFZqhthfMW22x5-h517e6ck6ET+DT+X4bUO7g@mail.gmail.com\n[2] https://postgr.es/m/FEA925B2-C3AE-4BA9-9194-5F5616AD0794@yesql.se",
"msg_date": "Sun, 2 Feb 2020 09:01:55 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "TestLib condition for deleting temporary directories"
},
{
"msg_contents": "> On 2 Feb 2020, at 18:01, Noah Misch <noah@leadboat.com> wrote:\n> \n> Forking thread \"logical decoding : exceeded maxAllocatedDescs for .spill\n> files\" for this side issue:\n\nThanks, I hadn't seen this.\n\n> On Wed, Jan 08, 2020 at 09:37:04PM -0800, Noah Misch wrote:\n>> v10\n>> deletes PostgresNode base directories at the end of this test file, despite\n>> the failure[1].\n> \n>> [1] It has the all_tests_passing() logic in an attempt to stop this. I'm\n>> guessing it didn't help because the file failed by calling die \"connection\n>> error: ...\", not by reporting a failure to Test::More via ok(0) or similar.\n> \n> That is what happened. We should test the exit status to decide whether to\n> keep temporaries, as attached. PostgresNode does that, since commit 90627cf\n> (thread https://postgr.es/m/flat/6205.1492883490%40sss.pgh.pa.us). That\n> thread already discussed $SUBJECT[1] and the __DIE__ handler being\n> redundant[2]. I plan to back-patch, since it's most useful for v10 and v9.6.\n\nI'm travelling and haven't been able to test, but this makes sense from\nreading. +1 on backpatching.\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 2 Feb 2020 18:19:04 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: TestLib condition for deleting temporary directories"
},
{
"msg_contents": "On Sun, Jan 12, 2020 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > On Sat, Jan 11, 2020 at 10:53:57PM -0500, Tom Lane wrote:\n> >> remind me where the win came from, exactly?\n>\n> > Well, the problem is that in 10 we allocate tuple data in the main\n> > memory ReorderBuffer context, and when the transaction gets decoded we\n> > pfree() it. But in AllocSet that only moves the data to the freelists,\n> > it does not release it entirely. So with the right allocation pattern\n> > (sufficiently diverse chunk sizes) this can easily result in allocation\n> > of large amount of memory that is never released.\n>\n> > I don't know if this is what's happening in this particular test, but I\n> > wouldn't be surprised by it.\n>\n> Nah, don't think I believe that: the test inserts a bunch of tuples,\n> but they look like they will all be *exactly* the same size.\n>\n> CREATE TABLE decoding_test(x integer, y text);\n> ...\n>\n> FOR i IN 1..10 LOOP\n> BEGIN\n> INSERT INTO decoding_test(x) SELECT generate_series(1,5000);\n> EXCEPTION\n> when division_by_zero then perform 'dummy';\n> END;\n>\nI performed the same test in pg11 and reproduced the issue on the\ncommit prior to a4ccc1cef5a04 (Generational memory allocator).\n\nulimit -s 1024\nulimit -v 300000\n\nwal_level = logical\nmax_replication_slots = 4\n\nAnd executed the following code snippet (shared by Amit Khandekar\nearlier in the thread).\n\nSELECT pg_create_logical_replication_slot('test_slot',\n'test_decoding');\n\nCREATE TABLE decoding_test(x integer, y text);\ndo $$\nBEGIN\n FOR i IN 1..10 LOOP\n BEGIN\n INSERT INTO decoding_test(x) SELECT\ngenerate_series(1,3000);\n EXCEPTION\n when division_by_zero then perform 'dummy';\n END;\n END LOOP;\nEND $$;\n\nSELECT data from pg_logical_slot_get_changes('test_slot', NULL, NULL) LIMIT 10;\n\nI got the following error:\nERROR: out of memory\nDETAIL: Failed on request of size 8208.\n\nAfter that, I 
applied the \"Generational memory allocator\" patch and\nthat solved the issue. From the error message, it is evident that the\nunderlying code is trying to allocate a MaxTupleSize memory for each\ntuple. So, I re-introduced the following lines (which are removed by\na4ccc1cef5a04) on top of the patch:\n\n--- a/src/backend/replication/logical/reorderbuffer.c\n+++ b/src/backend/replication/logical/reorderbuffer.c\n@@ -417,6 +417,9 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len)\n\n alloc_len = tuple_len + SizeofHeapTupleHeader;\n\n+ if (alloc_len < MaxHeapTupleSize)\n+ alloc_len = MaxHeapTupleSize;\n\nAnd, the issue got reproduced with the same error:\nWARNING: problem in Generation Tuples: number of free chunks 0 in\nblock 0x7fe9e9e74010 exceeds 1018 allocated\n.....\nERROR: out of memory\nDETAIL: Failed on request of size 8208.\n\nI don't understand the code well enough to comment whether we can\nback-patch only this part of the code. But, this seems to allocate a\nhuge amount of memory per chunk although the tuple is small.\n\nThoughts?\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 Feb 2020 10:15:01 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, Feb 4, 2020 at 10:15 AM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>\n> On Sun, Jan 12, 2020 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > > On Sat, Jan 11, 2020 at 10:53:57PM -0500, Tom Lane wrote:\n> > >> remind me where the win came from, exactly?\n> >\n> > > Well, the problem is that in 10 we allocate tuple data in the main\n> > > memory ReorderBuffer context, and when the transaction gets decoded we\n> > > pfree() it. But in AllocSet that only moves the data to the freelists,\n> > > it does not release it entirely. So with the right allocation pattern\n> > > (sufficiently diverse chunk sizes) this can easily result in allocation\n> > > of large amount of memory that is never released.\n> >\n> > > I don't know if this is what's happening in this particular test, but I\n> > > wouldn't be surprised by it.\n> >\n> > Nah, don't think I believe that: the test inserts a bunch of tuples,\n> > but they look like they will all be *exactly* the same size.\n> >\n> > CREATE TABLE decoding_test(x integer, y text);\n> > ...\n> >\n> > FOR i IN 1..10 LOOP\n> > BEGIN\n> > INSERT INTO decoding_test(x) SELECT generate_series(1,5000);\n> > EXCEPTION\n> > when division_by_zero then perform 'dummy';\n> > END;\n> >\n> I performed the same test in pg11 and reproduced the issue on the\n> commit prior to a4ccc1cef5a04 (Generational memory allocator).\n>\n> ulimit -s 1024\n> ulimit -v 300000\n>\n> wal_level = logical\n> max_replication_slots = 4\n>\n> And executed the following code snippet (shared by Amit Khandekar\n> earlier in the thread).\n>\n..\n>\n> SELECT data from pg_logical_slot_get_changes('test_slot', NULL, NULL) LIMIT 10;\n>\n> I got the following error:\n> ERROR: out of memory\n> DETAIL: Failed on request of size 8208.\n>\n> After that, I applied the \"Generational memory allocator\" patch and\n> that solved the issue. 
From the error message, it is evident that the\n> underlying code is trying to allocate a MaxTupleSize memory for each\n> tuple. So, I re-introduced the following lines (which are removed by\n> a4ccc1cef5a04) on top of the patch:\n>\n> --- a/src/backend/replication/logical/reorderbuffer.c\n> +++ b/src/backend/replication/logical/reorderbuffer.c\n> @@ -417,6 +417,9 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len)\n>\n> alloc_len = tuple_len + SizeofHeapTupleHeader;\n>\n> + if (alloc_len < MaxHeapTupleSize)\n> + alloc_len = MaxHeapTupleSize;\n>\n> And, the issue got reproduced with the same error:\n> WARNING: problem in Generation Tuples: number of free chunks 0 in\n> block 0x7fe9e9e74010 exceeds 1018 allocated\n> .....\n> ERROR: out of memory\n> DETAIL: Failed on request of size 8208.\n>\n> I don't understand the code well enough to comment whether we can\n> back-patch only this part of the code.\n>\n\nI don't think we can just back-patch that part of code as it is linked\nto the way we are maintaining a cache (~8MB) for frequently allocated\nobjects. See the comments around the definition of\nmax_cached_tuplebufs. But probably, we can do something once we reach\nsuch a limit, basically, once we know that we have already allocated\nmax_cached_tuplebufs number of tuples of size MaxHeapTupleSize, we\ndon't need to allocate more of that size. Does this make sense?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 Feb 2020 14:40:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, Feb 4, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I don't think we can just back-patch that part of code as it is linked\n> to the way we are maintaining a cache (~8MB) for frequently allocated\n> objects. See the comments around the definition of\n> max_cached_tuplebufs. But probably, we can do something once we reach\n> such a limit, basically, once we know that we have already allocated\n> max_cached_tuplebufs number of tuples of size MaxHeapTupleSize, we\n> don't need to allocate more of that size. Does this make sense?\n>\n\nYeah, this makes sense. I've attached a patch that implements the\nsame. It solves the problem reported earlier. This solution will at\nleast slow down the process of going OOM even for very small sized\ntuples.\n\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 7 Feb 2020 17:31:47 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-04 10:15:01 +0530, Kuntal Ghosh wrote:\n> And, the issue got reproduced with the same error:\n> WARNING: problem in Generation Tuples: number of free chunks 0 in\n> block 0x7fe9e9e74010 exceeds 1018 allocated\n\nThat seems like a problem in generation.c - because this should be\nunreachable, I think?\n\nTomas?\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Fri, 7 Feb 2020 10:33:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-04 10:15:01 +0530, Kuntal Ghosh wrote:\n> I performed the same test in pg11 and reproduced the issue on the\n> commit prior to a4ccc1cef5a04 (Generational memory allocator).\n> \n> ulimit -s 1024\n> ulimit -v 300000\n> \n> wal_level = logical\n> max_replication_slots = 4\n> \n> [...]\n\n> After that, I applied the \"Generational memory allocator\" patch and\n> that solved the issue. From the error message, it is evident that the\n> underlying code is trying to allocate a MaxTupleSize memory for each\n> tuple. So, I re-introduced the following lines (which are removed by\n> a4ccc1cef5a04) on top of the patch:\n\n> --- a/src/backend/replication/logical/reorderbuffer.c\n> +++ b/src/backend/replication/logical/reorderbuffer.c\n> @@ -417,6 +417,9 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len)\n> \n> alloc_len = tuple_len + SizeofHeapTupleHeader;\n> \n> + if (alloc_len < MaxHeapTupleSize)\n> + alloc_len = MaxHeapTupleSize;\n\nMaybe I'm being slow here - but what does this actually prove? Before\nthe generation contexts were introduced we avoided fragmentation (which\nwould make things unusably slow) using a a brute force method (namely\nforcing all tuple allocations to be of the same/maximum size).\n\nWhich means that yes, we'll need more memory than necessary. Do you\nthink you see anything but that here?\n\nIt's good that the situation is better now, but I don't think this means\nwe need to necessarily backpatch something nontrivial?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Feb 2020 10:40:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Feb 07, 2020 at 10:33:48AM -0800, Andres Freund wrote:\n>Hi,\n>\n>On 2020-02-04 10:15:01 +0530, Kuntal Ghosh wrote:\n>> And, the issue got reproduced with the same error:\n>> WARNING: problem in Generation Tuples: number of free chunks 0 in\n>> block 0x7fe9e9e74010 exceeds 1018 allocated\n>\n>That seems like a problem in generation.c - because this should be\n>unreachable, I think?\n>\n>Tomas?\n>\n\nThat's rather strange. How could we print this message? The code looks\nlike this\n\n if (block->nfree >= block->nchunks)\n elog(WARNING, \"problem in Generation %s: number of free chunks %d in block %p exceeds %d allocated\",\n name, block->nfree, block, block->nchunks);\n\nso this says 0 >= 1018. Or am I missing something?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Feb 2020 20:02:01 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-11 23:20:56 -0500, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Nah, don't think I believe that: the test inserts a bunch of tuples,\n> but they look like they will all be *exactly* the same size.\n>\n> CREATE TABLE decoding_test(x integer, y text);\n> ...\n>\n> FOR i IN 1..10 LOOP\n> BEGIN\n> INSERT INTO decoding_test(x) SELECT generate_series(1,5000);\n> EXCEPTION\n> when division_by_zero then perform 'dummy';\n> END;\n\nI think the issue this triggers higher memory usage in in older versions\nis that before\n\ncommit cec2edfa7859279f36d2374770ca920c59c73dd8\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2019-11-16 17:49:33 +0530\n\n Add logical_decoding_work_mem to limit ReorderBuffer memory usage.\n\nwe enforced how many changes to keep in memory (vs on disk)\n\n/*\n * Maximum number of changes kept in memory, per transaction. After that,\n * changes are spooled to disk.\n *\n * The current value should be sufficient to decode the entire transaction\n * without hitting disk in OLTP workloads, while starting to spool to disk in\n * other workloads reasonably fast.\n *\n * At some point in the future it probably makes sense to have a more elaborate\n * resource management here, but it's not entirely clear what that would look\n * like.\n */\nstatic const Size max_changes_in_memory = 4096;\n\non a per-transaction basis. And that subtransactions are *different*\ntransactions for that purpose (as they can be rolled back\nseparately). As the test generates loads of records for different\nsubtransactions, they each end up having quite a few changes (including\nthe tuples pointed to!) in memory at the same time.\n\nDue to the way the limit of 4096 interacts with the 5000 rows inserted\nabove, we only hit the out of memory error when loading. 
That's because\nwhen decoding (before the commit has been seen), we spill after 4096 changes:\n\n2020-02-07 11:18:22.399 PST [1136134][3/2] DEBUG: spill 4096 changes in XID 585 to disk\n2020-02-07 11:18:22.419 PST [1136134][3/2] DEBUG: spill 4096 changes in XID 586 to disk\n2020-02-07 11:18:22.431 PST [1136134][3/2] DEBUG: spill 4096 changes in XID 587 to disk\n2020-02-07 11:18:22.443 PST [1136134][3/2] DEBUG: spill 4096 changes in XID 588 to disk\n2020-02-07 11:18:22.454 PST [1136134][3/2] DEBUG: spill 4096 changes in XID 589 to disk\n2020-02-07 11:18:22.465 PST [1136134][3/2] DEBUG: spill 4096 changes in XID 590 to disk\n2020-02-07 11:18:22.477 PST [1136134][3/2] DEBUG: spill 4096 changes in XID 591 to disk\n2020-02-07 11:18:22.488 PST [1136134][3/2] DEBUG: spill 4096 changes in XID 592 to disk\n2020-02-07 11:18:22.499 PST [1136134][3/2] DEBUG: spill 4096 changes in XID 593 to disk\n2020-02-07 11:18:22.511 PST [1136134][3/2] DEBUG: spill 4096 changes in XID 594 to disk\n\nso there's each 5000 - 4096 changes in memory, times 10. But when\nactually calling the output plugin (at the commit record), we start with\nloading changes back into memory from the start of each\nsubtransaction. 
That first entails spilling the tail of that transaction\nto disk, and then loading the start:\n\n2020-02-07 11:18:22.515 PST [1136134][3/2] DEBUG: StartSubTransaction(1) name: unnamed; blockState: STARTED; state: INPROGR, xid/subid/cid: 0/1/0\n2020-02-07 11:18:22.515 PST [1136134][3/2] DEBUG: StartSubTransaction(2) name: replay; blockState: SUB BEGIN; state: INPROGR, xid/subid/cid: 0/2/0\n2020-02-07 11:18:22.515 PST [1136134][3/2] DEBUG: spill 904 changes in XID 585 to disk\n2020-02-07 11:18:22.524 PST [1136134][3/2] DEBUG: restored 4096 changes in XID 585 into memory\n2020-02-07 11:18:22.524 PST [1136134][3/2] DEBUG: spill 904 changes in XID 586 to disk\n2020-02-07 11:18:22.534 PST [1136134][3/2] DEBUG: restored 4096 changes in XID 586 into memory\n2020-02-07 11:18:22.534 PST [1136134][3/2] DEBUG: spill 904 changes in XID 587 to disk\n2020-02-07 11:18:22.544 PST [1136134][3/2] DEBUG: restored 4096 changes in XID 587 into memory\n2020-02-07 11:18:22.544 PST [1136134][3/2] DEBUG: spill 904 changes in XID 588 to disk\n2020-02-07 11:18:22.554 PST [1136134][3/2] DEBUG: restored 4096 changes in XID 588 into memory\n2020-02-07 11:18:22.554 PST [1136134][3/2] DEBUG: spill 904 changes in XID 589 to disk\nTopMemoryContext: 161440 total in 7 blocks; 80240 free (68 chunks); 81200 used\n...\n\nBecause each transaction has 4096 changes in memory, we actually need\nmore memory here than we did during the decoding phase, where all but\nthe \"current\" subtransaction only have 5000 - 4096 changes in memory.\n\nIf we instead change the test to insert 4096*2 - 1 tuples each, we run\nout of memory earlier:\n2020-02-07 11:23:20.540 PST [1136134][3/12] DEBUG: spill 4096 changes in XID 610 to disk\n2020-02-07 11:23:20.565 PST [1136134][3/12] DEBUG: spill 4096 changes in XID 611 to disk\n2020-02-07 11:23:20.587 PST [1136134][3/12] DEBUG: spill 4096 changes in XID 612 to disk\n2020-02-07 11:23:20.608 PST [1136134][3/12] DEBUG: spill 4096 changes in XID 613 to disk\n2020-02-07 
11:23:20.630 PST [1136134][3/12] DEBUG: spill 4096 changes in XID 614 to disk\nTopMemoryContext: 161440 total in 7 blocks; 79264 free (82 chunks); 82176 used\n...\n2020-02-07 11:23:20.655 PST [1136134][3/12] ERROR: out of memory\n2020-02-07 11:23:20.655 PST [1136134][3/12] DETAIL: Failed on request of size 8208.\n2020-02-07 11:23:20.655 PST [1136134][3/12] STATEMENT: SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL);\n\n\nThe reason that the per-subxact enforcement of max_changes_in_memory\nisn't as noticable in 11 is that there we have the generational\ncontext. Which means that each of the 4096*10 tuples we have in memory\ndoesn't allocate MaxHeapTupleSize, but instead something like ~30 bytes.\n\n\nI wonder if we, in the backbranches that don't have generation context,\nshould just reduce the size of slab allocated tuples to be ~1024 bytes\ninstead of MaxHeapTupleSize. That's an almost trivial change, as we\nalready have to support tuples above that limit (in cases the oldtuple\nin an update/delete contains toasted columns that we \"inlined\"). POC for\nthat attached.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 7 Feb 2020 11:34:34 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-07 20:02:01 +0100, Tomas Vondra wrote:\n> On Fri, Feb 07, 2020 at 10:33:48AM -0800, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2020-02-04 10:15:01 +0530, Kuntal Ghosh wrote:\n> > > And, the issue got reproduced with the same error:\n> > > WARNING: problem in Generation Tuples: number of free chunks 0 in\n> > > block 0x7fe9e9e74010 exceeds 1018 allocated\n> > \n> > That seems like a problem in generation.c - because this should be\n> > unreachable, I think?\n\n> That's rather strange. How could we print this message? The code looks\n> like this\n> \n> if (block->nfree >= block->nchunks)\n> elog(WARNING, \"problem in Generation %s: number of free chunks %d in block %p exceeds %d allocated\",\n> name, block->nfree, block, block->nchunks);\n> \n> so this says 0 >= 1018. Or am I missing something?\n\nIndeed, it's pretty weird. I can't reproduce it either. Kuntal, which\nexact git version did you repro this on? What precise settings /\nplatform?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Feb 2020 11:47:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Sat, Feb 8, 2020 at 12:10 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-02-04 10:15:01 +0530, Kuntal Ghosh wrote:\n> > I performed the same test in pg11 and reproduced the issue on the\n> > commit prior to a4ccc1cef5a04 (Generational memory allocator).\n> >\n> > ulimit -s 1024\n> > ulimit -v 300000\n> >\n> > wal_level = logical\n> > max_replication_slots = 4\n> >\n> > [...]\n>\n> > After that, I applied the \"Generational memory allocator\" patch and\n> > that solved the issue. From the error message, it is evident that the\n> > underlying code is trying to allocate a MaxTupleSize memory for each\n> > tuple. So, I re-introduced the following lines (which are removed by\n> > a4ccc1cef5a04) on top of the patch:\n>\n> > --- a/src/backend/replication/logical/reorderbuffer.c\n> > +++ b/src/backend/replication/logical/reorderbuffer.c\n> > @@ -417,6 +417,9 @@ ReorderBufferGetTupleBuf(ReorderBuffer *rb, Size tuple_len)\n> >\n> > alloc_len = tuple_len + SizeofHeapTupleHeader;\n> >\n> > + if (alloc_len < MaxHeapTupleSize)\n> > + alloc_len = MaxHeapTupleSize;\n>\n> Maybe I'm being slow here - but what does this actually prove? Before\n> the generation contexts were introduced we avoided fragmentation (which\n> would make things unusably slow) using a a brute force method (namely\n> forcing all tuple allocations to be of the same/maximum size).\n>\n\nIt seems for this we formed a cache of max_cached_tuplebufs number of\nobjects and we don't need to allocate more than that number of tuples\nof size MaxHeapTupleSize because we will anyway return that memory to\naset.c.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 9 Feb 2020 09:18:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "Hello,\n\nOn Sat, Feb 8, 2020 at 1:18 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-02-07 20:02:01 +0100, Tomas Vondra wrote:\n> > On Fri, Feb 07, 2020 at 10:33:48AM -0800, Andres Freund wrote:\n> > > Hi,\n> > >\n> > > On 2020-02-04 10:15:01 +0530, Kuntal Ghosh wrote:\n> > > > And, the issue got reproduced with the same error:\n> > > > WARNING: problem in Generation Tuples: number of free chunks 0 in\n> > > > block 0x7fe9e9e74010 exceeds 1018 allocated\n> > >\n> > > That seems like a problem in generation.c - because this should be\n> > > unreachable, I think?\n>\n> > That's rather strange. How could we print this message? The code looks\n> > like this\n> >\n> > if (block->nfree >= block->nchunks)\n> > elog(WARNING, \"problem in Generation %s: number of free chunks %d in block %p exceeds %d allocated\",\n> > name, block->nfree, block, block->nchunks);\n> >\n> > so this says 0 >= 1018. Or am I missing something?\n>\n> Indeed, it's pretty weird. I can't reproduce it either. Kuntal, which\n> exact git version did you repro this on? What precise settings /\n> platform?\n>\n\nI've used the following steps:\n\nPlatform:\nOS: Ubuntu 64-bit 18.04.2\nVMWare Fusion Version 8.5.10 on my MacOS 10.14.6\n16GB RAM with 8 core processors\n\ngit checkout -b pg11 remotes/origin/REL_11_STABLE\ngit reset --hard a4ccc1cef5a04cc (Generational memory allocator)\ngit apply Set-alloc_len-as-MaxHeapTupleSize.patch\n\nThen, I performed the same test. I've attached the test.sql file for the same.\n\nWith this test, I wanted to check how the generational memory\nallocator patch is solving the issue. As reported by you as well\nearlier in the thread, each of the 4096*10 in-memory tuples\ndoesn't allocate MaxHeapTupleSize, but instead something like ~30\nbytes with this patch. IMHO, this is the change that is making the\ndifference. 
If we allocate MaxHeapTupleSize instead for all the\ntuples, we'll encounter the same out-of-memory issue.\n\nI haven't looked into the issue in generational tuple context yet.\nIt's possible that that the changes I've done in the attached patch\ndon't make sense and break the logic of generational memory allocator.\n:-)\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 10 Feb 2020 11:00:26 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Sun, Feb 9, 2020 at 9:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> It seems for this we formed a cache of max_cached_tuplebufs number of\n> objects and we don't need to allocate more than that number of tuples\n> of size MaxHeapTupleSize because we will anyway return that memory to\n> aset.c.\n>\nIn the approach suggested by Amit (approach 1), once we allocate the\nmax_cached_tuplebufs number of MaxHeapTupleSize, we can use the actual\nlength of the tuple for allocating memory. So, if we have m\nsubtransactions, the memory usage at worst case will be,\n\n(max_cached_tuplebufs * MaxHeapTupleSize) cache +\n(Maximum changes in a subtransaction before spilling) * m * (Actual tuple size)\n\n= 64 MB cache + 4095 * m * (Actual tuple size)\n\nIn the approach suggested by Andres (approach 2), we're going to\nreduce the size of a cached tuple to 1024 bytes. So, if we have m\nsub-transactions, the memory usage at worst case will be,\n\n(max_cached_tuplebufs * 1024 bytes) cache + (Maximum changes in a\nsubtransaction before spilling) * m * 1024 bytes\n\n= 8 MB cache + 4095 * m * 1024 (considering the size of the tuple is\nless than 1024 bytes)\n\nOnce the cache is filled, for 1000 sub-transactions operating on tuple\nsize, say 100 bytes, approach 1 will allocate 390 MB of memory\n(approx.) whereas approach 2 will allocate 4GB of memory\napproximately. If there is no obvious error that I'm missing, I think\nwe should implement the first approach.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 14 Feb 2020 16:05:59 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Feb 7, 2020 at 5:32 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>\n> On Tue, Feb 4, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I don't think we can just back-patch that part of code as it is linked\n> > to the way we are maintaining a cache (~8MB) for frequently allocated\n> > objects. See the comments around the definition of\n> > max_cached_tuplebufs. But probably, we can do something once we reach\n> > such a limit, basically, once we know that we have already allocated\n> > max_cached_tuplebufs number of tuples of size MaxHeapTupleSize, we\n> > don't need to allocate more of that size. Does this make sense?\n> >\n>\n> Yeah, this makes sense. I've attached a patch that implements the\n> same. It solves the problem reported earlier. This solution will at\n> least slow down the process of going OOM even for very small sized\n> tuples.\n>\n\nThe patch seems to be in right direction and the test at my end shows\nthat it resolves the issue. One minor comment:\n * those. Thus always allocate at least MaxHeapTupleSize. Note that tuples\n * generated for oldtuples can be bigger, as they don't have out-of-line\n * toast columns.\n+ *\n+ * But, if we've already allocated the memory required for building the\n+ * cache later, we don't have to allocate memory more than the size of the\n+ * tuple.\n */\n\nHow about modifying the existing comment as: \"Most tuples are below\nMaxHeapTupleSize, so we use a slab allocator for those. Thus always\nallocate at least MaxHeapTupleSize till the slab cache is filled. Note\nthat tuples generated for oldtuples can be bigger, as they don't have\nout-of-line toast columns.\"?\n\nHave you tested this in 9.6 and 9.5?\n\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Feb 2020 11:17:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Fri, Feb 14, 2020 at 4:06 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>\n> On Sun, Feb 9, 2020 at 9:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > It seems for this we formed a cache of max_cached_tuplebufs number of\n> > objects and we don't need to allocate more than that number of tuples\n> > of size MaxHeapTupleSize because we will anyway return that memory to\n> > aset.c.\n> >\n> In the approach suggested by Amit (approach 1), once we allocate the\n> max_cached_tuplebufs number of MaxHeapTupleSize, we can use the actual\n> length of the tuple for allocating memory. So, if we have m\n> subtransactions, the memory usage at worst case will be,\n>\n> (max_cached_tuplebufs * MaxHeapTupleSize) cache +\n> (Maximum changes in a subtransaction before spilling) * m * (Actual tuple size)\n>\n> = 64 MB cache + 4095 * m * (Actual tuple size)\n>\n> In the approach suggested by Andres (approach 2), we're going to\n> reduce the size of a cached tuple to 1024 bytes. So, if we have m\n> sub-transactions, the memory usage at worst case will be,\n>\n> (max_cached_tuplebufs * 1024 bytes) cache + (Maximum changes in a\n> subtransaction before spilling) * m * 1024 bytes\n>\n> = 8 MB cache + 4095 * m * 1024 (considering the size of the tuple is\n> less than 1024 bytes)\n>\n> Once the cache is filled, for 1000 sub-transactions operating on tuple\n> size, say 100 bytes, approach 1 will allocate 390 MB of memory\n> (approx.) whereas approach 2 will allocate 4GB of memory\n> approximately. If there is no obvious error that I'm missing, I think\n> we should implement the first approach.\n>\n\nYour calculation seems correct to me. So, I think we should proceed\nwith the patch written by you.\n\nAndres, any objections on proceeding with Kuntal's patch for\nback-branches (10, 9.6 and 9.5)?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Feb 2020 11:20:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On 2020-02-18 11:20:17 +0530, Amit Kapila wrote:\n> Andres, any objections on proceeding with Kuntal's patch for\n> back-branches (10, 9.6 and 9.5)?\n\nYes. In my past experiments that lead to *terrible* allocator\nperformance due to fragmentation. Like, up to 90% of the time spent in\naset.c. Try a workload with a number of overlapping transactions that\nhave different tuple sizes.\n\nI'm not even sure it's the right thing to do anything in the back\nbranches to be honest. If somebody hits this badly they likely have done\nso before, and they at least have the choice to upgrade, but if we\nregress performance for more people...\n\n\n",
"msg_date": "Mon, 17 Feb 2020 22:03:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 11:33 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-02-18 11:20:17 +0530, Amit Kapila wrote:\n> > Andres, any objections on proceeding with Kuntal's patch for\n> > back-branches (10, 9.6 and 9.5)?\n>\n> Yes. In my past experiments that lead to *terrible* allocator\n> performance due to fragmentation. Like, up to 90% of the time spent in\n> aset.c. Try a workload with a number of overlapping transactions that\n> have different tuple sizes.\n>\n\nI thought slab-cache would have addressed it. But, it is possible if\nthere are many such overlapping transactions, then that might lead\nto performance regression. OTOH, the current code also might lead to\nworse performance for transactions with multiple subtransactions as\nthey would frequently need to malloc.\n\n> I'm not even sure it's the right thing to do anything in the back\n> branches to be honest. If somebody hits this badly they likely have done\n> so before, and they at least have the choice to upgrade, but if we\n> regress performance for more people...\n\nI could see that for some cases the current code might give better\nperformance, but OTOH, consuming memory at a high rate for some other\ncases is also not good either. But you are right that we can always\nask such users to upgrade (which again sometimes is painful for some\nof the users), so maybe the right thing is to do nothing here. Anyone\nelse has any opinion on this?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Feb 2020 13:46:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding : exceeded maxAllocatedDescs for .spill files"
}
] |
[
{
"msg_contents": "There are only four subsystems which require a callback at the\nbeginning of each subtransaction: the relevant functions are\nAtSubStart_Memory, AtSubStart_ResourceOwner, AtSubStart_Notify, and\nAfterTriggerBeginSubXact. The AtSubStart_Memory and\nAtSubStart_ResourceOwner callbacks seem relatively unobjectionable,\nbecause almost every subtransaction is going to allocate memory and\nacquire some resource managed by a resource owner, but the others\nrepresent initialization that has to be done whether or not the\ncorresponding feature is used.\n\nGenerally, a subsystem can avoid needing a callback at subtransaction\nstart (or transaction start) by detecting new levels of\nsubtransactions at time of use. A typical practice is to maintain a\nstack which has entries only for those transaction nesting levels\nwhere the functionality was used. The attached patch implements this\nmethod for async.c. I was a little surprised to find that it makes a\npretty noticeable performance difference when starting and ending\ntrivial subtransactions. I used this test case:\n\n\\timing\ndo $$begin for i in 1 .. 10000000 loop begin null; exception when\nothers then null; end; end loop; end;$$;\n\nI ran the test four times with and without the patch and took the\nmedian of the last three. This was an attempt to exclude effects due\nto starting up the database cluster. With the patch, the result was\n3127.377 ms; without the patch, it was 3527.285 ms. That's a big\nenough difference that I'm wondering whether I did something wrong\nwhile testing this, so feel free to check my work and tell me whether\nI'm all wet. Still, I don't find it wholly unbelievable, because I've\nobserved in the past that these code paths are lean enough that a few\npalloc() calls can make a noticeable difference, and the effect of\nthis patch is to remove a few palloc() calls.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 11 Sep 2019 08:52:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "let's kill AtSubStart_Notify"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 6:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> There are only four subsystems which require a callback at the\n> beginning of each subtransaction: the relevant functions are\n> AtSubStart_Memory, AtSubStart_ResourceOwner, AtSubStart_Notify, and\n> AfterTriggerBeginSubXact. The AtSubStart_Memory and\n> AtSubStart_ResourceOwner callbacks seem relatively unobjectionable,\n> because almost every subtransaction is going to allocate memory and\n> acquire some resource managed by a resource owner, but the others\n> represent initialization that has to be done whether or not the\n> corresponding feature is used.\n>\n> Generally, a subsystem can avoid needing a callback at subtransaction\n> start (or transaction start) by detecting new levels of\n> subtransactions at time of use. A typical practice is to maintain a\n> stack which has entries only for those transaction nesting levels\n> where the functionality was used. The attached patch implements this\n> method for async.c. I was a little surprised to find that it makes a\n> pretty noticeable performance difference when starting and ending\n> trivial subtransactions. I used this test case:\n>\n> \\timing\n> do $$begin for i in 1 .. 10000000 loop begin null; exception when\n> others then null; end; end loop; end;$$;\n>\n> I ran the test four times with and without the patch and took the\n> median of the last three. This was an attempt to exclude effects due\n> to starting up the database cluster. With the patch, the result was\n> 3127.377 ms; without the patch, it was 3527.285 ms. That's a big\n> enough difference that I'm wondering whether I did something wrong\n> while testing this, so feel free to check my work and tell me whether\n> I'm all wet. 
Still, I don't find it wholly unbelievable, because I've\n> observed in the past that these code paths are lean enough that a few\n> palloc() calls can make a noticeable difference, and the effect of\n> this patch is to remove a few palloc() calls.\n\nI did not read the patch but run the same case what you have given and\nI can see the similar improvement with the patch.\nWith the patch 8832.988, without the patch 10252.701ms (median of three reading)\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Sep 2019 09:44:49 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: let's kill AtSubStart_Notify"
},
{
"msg_contents": "At Thu, 12 Sep 2019 09:44:49 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in <CAFiTN-u8sp=1X+zk0hBPcYhZVYS6k1DcT+R3p+fucKu3iS7NHQ@mail.gmail.com>\n> On Wed, Sep 11, 2019 at 6:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > trivial subtransactions. I used this test case:\n> >\n> > \\timing\n> > do $$begin for i in 1 .. 10000000 loop begin null; exception when\n> > others then null; end; end loop; end;$$;\n> >\n> > I ran the test four times with and without the patch and took the\n> > median of the last three. This was an attempt to exclude effects due\n> > to starting up the database cluster. With the patch, the result was\n> > 3127.377 ms; without the patch, it was 3527.285 ms. That's a big\n> > enough difference that I'm wondering whether I did something wrong\n> > while testing this, so feel free to check my work and tell me whether\n> > I'm all wet. Still, I don't find it wholly unbelievable, because I've\n> > observed in the past that these code paths are lean enough that a few\n> > palloc() calls can make a noticeable difference, and the effect of\n> > this patch is to remove a few palloc() calls.\n> \n> I did not read the patch but run the same case what you have given and\n> I can see the similar improvement with the patch.\n> With the patch 8832.988, without the patch 10252.701ms (median of three reading)\n\nI see the similar result. The patch let it run faster by about\n25%. The gain is reduced to 3-6% by a crude check by adding { (in\nTopTxCxt) lcons(0, p1); lcons(0, p2); } to the place where\nAtSubStart_Notify was called and respective list_delete_first's\njust after the call to AtSubCommit_Notify. At least around 20% of\nthe gain seems to be the result of removing palloc/pfree's.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 Sep 2019 19:23:06 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: let's kill AtSubStart_Notify"
},
{
"msg_contents": "Hi Robert,\n\nGenerally, a subsystem can avoid needing a callback at subtransaction\n> start (or transaction start) by detecting new levels of\n> subtransactions at time of use.\n\n\nYes I agree with this argument.\n\n\n> A typical practice is to maintain a\n> stack which has entries only for those transaction nesting levels\n> where the functionality was used. The attached patch implements this\n> method for async.c.\n\n\nI have reviewed your patch, and it seems correctly implementing the\nactions per subtransactions using stack. At least I could not find\nany flaw with your implementation here.\n\n\n> I was a little surprised to find that it makes a\n> pretty noticeable performance difference when starting and ending\n> trivial subtransactions. I used this test case:\n>\n> \\timing\n> do $$begin for i in 1 .. 10000000 loop begin null; exception when\n> others then null; end; end loop; end;$$;\n>\n\nI ran your testcase and on my VM I get numbers like 3593.801 ms\nwithout patch and 3593.801 with the patch, average of 5 runs each.\nThe runs were quite consistent.\n\nFurther make check also passing well.\n\nRegards,\nJeevan Ladhe",
"msg_date": "Fri, 27 Sep 2019 15:11:02 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: let's kill AtSubStart_Notify"
},
{
"msg_contents": ">\n> I did not read the patch but run the same case what you have given and\n> I can see the similar improvement with the patch.\n> With the patch 8832.988, without the patch 10252.701ms (median of three\n> reading)\n>\n\nPossibly you had debug symbols enabled? With debug symbols enabled\nI also get about similar number 10136.839 with patch vs 12900.044 ms\nwithout the patch.\n\nRegards,\nJeevan Ladhe",
"msg_date": "Fri, 27 Sep 2019 15:13:25 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: let's kill AtSubStart_Notify"
},
{
"msg_contents": "Correction -\n\nOn Fri, Sep 27, 2019 at 3:11 PM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>\nwrote:\n\n> I ran your testcase and on my VM I get numbers like 3593.801 ms\n> without patch and 3593.801 with the patch, average of 5 runs each.\n> The runs were quite consistent.\n>\n\n 3593.801 ms without patch and 3213.809 with the patch,\napprox. 10% gain.\n\nRegards,\nJeevan Ladhe",
"msg_date": "Fri, 27 Sep 2019 15:19:40 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: let's kill AtSubStart_Notify"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 5:41 AM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n> I have reviewed your patch, and it seems correctly implementing the\n> actions per subtransactions using stack. Atleast I could not find\n> any flaw with your implementation here.\n\nThanks for the review. Based on this and other positive comments made\non this thread, I have committed the patch.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 4 Oct 2019 08:25:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: let's kill AtSubStart_Notify"
}
]
[
{
"msg_contents": "Hi,\n\nThe 'locale' or 'lc_collate/lc_ctype' argument of an ICU collation may\nhave a complicated syntax, especially with non-deterministic\ncollations, and input mistakes in these names will not necessarily be\ndetected as such by ICU.\n\nThe \"display name\" of a locale is a simple way to get human-readable\nfeedback about the characteristics of that locale.\npg_import_system_collations() already push these as comments when\ncreating locales en masse.\n\nI think it would be nice to have CREATE COLLATION report this\ninformation as feedback in the form of a NOTICE message.\nPFA a simple patch implementing that.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite",
"msg_date": "Wed, 11 Sep 2019 16:53:16 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Create collation reporting the ICU locale display name"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 04:53:16PM +0200, Daniel Verite wrote:\n> I think it would be nice to have CREATE COLLATION report this\n> information as feedback in the form of a NOTICE message.\n> PFA a simple patch implementing that.\n\nWhy is that better than the descriptions provided with \\dO[S]+ in\npsql?\n--\nMichael",
"msg_date": "Thu, 12 Sep 2019 10:33:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "\tMichael Paquier wrote:\n\n> On Wed, Sep 11, 2019 at 04:53:16PM +0200, Daniel Verite wrote:\n> > I think it would be nice to have CREATE COLLATION report this\n> > information as feedback in the form of a NOTICE message.\n> > PFA a simple patch implementing that.\n> \n> Why is that better than the descriptions provided with \\dO[S]+ in\n> psql?\n\nThere is no description for collations created outside of\npg_import_system_collations().\n\nExample:\n\ndb=# create collation mycoll(provider=icu, locale='fr-FR-u-ks-level1');\nNOTICE: ICU locale: \"French (France, colstrength=primary)\"\n\ndb=# \\x auto\n\ndb=# \\dO+\nList of collations\n-[ RECORD 1 ]--+------------------\nSchema\t | public\nName\t | mycoll\nCollate | fr-FR-u-ks-level1\nCtype\t | fr-FR-u-ks-level1\nProvider | icu\nDeterministic? | yes\nDescription | \n\nThe NOTICE above is with the patch. Otherwise, the \"display name\"\nis never shown nor stored anywhere AFAICS.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Thu, 12 Sep 2019 13:55:37 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 7:53 AM Daniel Verite <daniel@manitou-mail.org> wrote:\n> The 'locale' or 'lc_collate/lc_ctype' argument of an ICU collation may\n> have a complicated syntax, especially with non-deterministic\n> collations, and input mistakes in these names will not necessarily be\n> detected as such by ICU.\n\nThat's a real problem.\n\n> The \"display name\" of a locale is a simple way to get human-readable\n> feedback about the characteristics of that locale.\n> pg_import_system_collations() already push these as comments when\n> creating locales en masse.\n>\n> I think it would be nice to have CREATE COLLATION report this\n> information as feedback in the form of a NOTICE message.\n> PFA a simple patch implementing that.\n\nI like this idea.\n\nI wonder if it's possible to display a localized version of the\ndisplay string in the NOTICE message? Does that work, or could it? For\nexample, do you see the message in French?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 12 Sep 2019 11:30:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 11:30 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I wonder if it's possible to display a localized version of the\n> display string in the NOTICE message? Does that work, or could it? For\n> example, do you see the message in French?\n\nBTW, I already know for sure that ICU supports localized display\nnames. The question is whether or not this patch can take advantage of\nthat.\n\nThe way that we use display name in pg_import_system_collations() is\nan ugly hack. It insists on only storing ASCII-safe strings in\npg_collation.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 12 Sep 2019 11:35:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "On 2019-Sep-12, Daniel Verite wrote:\n\n> \tMichael Paquier wrote:\n> \n> > On Wed, Sep 11, 2019 at 04:53:16PM +0200, Daniel Verite wrote:\n> > > I think it would be nice to have CREATE COLLATION report this\n> > > information as feedback in the form of a NOTICE message.\n> > > PFA a simple patch implementing that.\n> > \n> > Why is that better than the descriptions provided with \\dO[S]+ in\n> > psql?\n> \n> There is no description for collations created outside of\n> pg_import_system_collations().\n\nHmm, sounds like the collation should automatically acquire the display\nname as a comment even when created via CREATE COLLATION.\n\nI wonder if INFO is better than NOTICE (I think it is).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Sep 2019 15:56:03 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I wonder if INFO is better than NOTICE (I think it is).\n\nYou're just waving a red flag in front of a bull, you know.\n\nI don't especially like the idea of having this emit a NOTICE;\nit's ugly and in-your-face. INFO is right out.\n\nThe idea of having CREATE COLLATION automatically create a comment\nis sort of interesting, although it seems pretty orthogonal to\nnormal command behavior. I wonder whether the seeming need for\nthis indicates that we should add a descriptive field to pg_collation\nproper, and not usurp the user-oriented comment feature for that.\n\nThe difficulty with localization is that whatever we put into\ntemplate1 has got to be ASCII-only, so that the template DB\ncan be copied to other encodings. I suppose we could consider\nhaving CREATE COLLATION act differently during initdb than\nlater, but that seems ugly too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Sep 2019 15:03:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 03:03:43PM -0400, Tom Lane wrote:\n> The idea of having CREATE COLLATION automatically create a comment\n> is sort of interesting, although it seems pretty orthogonal to\n> normal command behavior. I wonder whether the seeming need for\n> this indicates that we should add a descriptive field to pg_collation\n> proper, and not usurp the user-oriented comment feature for that.\n> \n> The difficulty with localization is that whatever we put into\n> template1 has got to be ASCII-only, so that the template DB\n> can be copied to other encodings. I suppose we could consider\n> having CREATE COLLATION act differently during initdb than\n> later, but that seems ugly too.\n\nOr could it make sense to provide a system function which returns a\ncollation description for at least an ICU-provided one? We could make\nuse of that in psql for example.\n--\nMichael",
"msg_date": "Fri, 13 Sep 2019 12:23:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Sep 12, 2019 at 03:03:43PM -0400, Tom Lane wrote:\n>> The idea of having CREATE COLLATION automatically create a comment\n>> is sort of interesting, although it seems pretty orthogonal to\n>> normal command behavior. I wonder whether the seeming need for\n>> this indicates that we should add a descriptive field to pg_collation\n>> proper, and not usurp the user-oriented comment feature for that.\n>> \n>> The difficulty with localization is that whatever we put into\n>> template1 has got to be ASCII-only, so that the template DB\n>> can be copied to other encodings. I suppose we could consider\n>> having CREATE COLLATION act differently during initdb than\n>> later, but that seems ugly too.\n\n> Or could it make sense to provide a system function which returns a\n> collation description for at least an ICU-provided one? We could make\n> use of that in psql for example.\n\nOh, that seems like a good way to tackle it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Sep 2019 00:31:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "\tMichael Paquier wrote:\n\n> Or could it make sense to provide a system function which returns a\n> collation description for at least an ICU-provided one? We could make\n> use of that in psql for example.\n\nIf we prefer having a function over the instant feedback effect of\nthe NOTICE, the function might look like icu_collation_attributes() [1]\nfrom the icu_ext extension. It returns a set of (attribute,value)\ntuples, among which the displayname is one of the values.\n\nAn advantage of this approach is that you may execute the\nfunction before creating the collation, instead of creating the\ncollation, realizing there was something wrong in your\nlocale/lc_collate argument, dropping the collation and trying again.\n\nAnother advantage would be the possibility of localizing the\ndisplay name, leaving the localization as a choice to the user.\nCurrently get_icu_locale_comment() forces \"en\" as the language because\nit want results in US-ASCII, but a user-callable function could have the\nlanguage code as an optional argument. When not being forced, the\nlanguage has a default value obtained by ICU from the environment\n(so that would be from where the postmaster is started in our case),\nand is also settable with uloc_setDefault().\n\nExample with icu_ext functions:\n\ntest=> select icu_set_default_locale('es');\n icu_set_default_locale \n------------------------\n es\n\ntest=> select value from icu_collation_attributes('en-US-u-ka-shifted')\n where attribute='displayname';\n\t\t value\t\t \n--------------------------------------------\n inglés (Estados Unidos, alternate=shifted)\n\nThis output tend to reveal mistakes with tags, which is why I thought\nto expose it as a NOTICE. It addresses the case of a user\nwho wouldn't suspect an error, so the \"in-your-face\" effect is\nintentional. 
With the function approach, the user must be\nproactive.\n\nAn example of mistake I found myself doing is forgetting the '-u-' before\nthe collation tags, which doesn't error out but is detected relatively\neasily with the display name.\n\n-- wrong\ntest=> select value from icu_collation_attributes('de-DE-ks-level1') \n where attribute='displayname';\n\t value\t \n-----------------------------\n German (Germany, KS_LEVEL1)\n\n-- right\ntest=> select value from icu_collation_attributes('de-DE-u-ks-level1')\t\n where attribute='displayname';\n\t\t value\t\t \n---------------------------------------\n German (Germany, colstrength=primary)\n\n\n[1] https://github.com/dverite/icu_ext#icu_collation_attributes\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Fri, 13 Sep 2019 15:57:10 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 9:57 AM Daniel Verite <daniel@manitou-mail.org> wrote:\n> An advantage of this approach is that you may execute the\n> function before creating the collation, instead of creating the\n> collation, realizing there was something wrong in your\n> locale/lc_collate argument, dropping the collation and trying again.\n\nThat would be really nice.\n\nI also think that the documentation is entirely inadequate in this\narea. https://www.postgresql.org/docs/11/collation.html#COLLATION-CREATE\ngives some examples, but those don't help you understand the general\nprinciples very much, and it has some links to the ICU documentation,\nwhich helps less than one might think. For example it links to\nhttp://userguide.icu-project.org/locale which describes locales like\nen_IE@currency=IEP and es__TRADITIONAL, but if you can figure out what\nall the valid possibilities are by reading that page, you are much\nsmarter than me. Then, too, according to the PostgreSQL documentation\nyou ought to prefer forms using the newer syntax, which looks like a\nbunch of dash-separated things, e.g. de-u-co-phonebk. But neither the\nPostgreSQL documentation itself nor either of the links to ICU\nincluded there actually describe the rules for that syntax. They just\nsay things like 'use BCP-47', which doesn't help at all. So I am just\nreduced to trying a bunch of things and seeing which collations appear\nto behave differently when I use them.\n\nThis proposal wouldn't fix the problem that I have to guess what\nstrings to use, but at least it might be clearer when I have or have\nnot guessed correctly.\n\nI seriously hate this stuff with a fiery passion that cannot be\nquenched. How does anybody manage to use software that seems to have\nno usable documentation and doesn't even tell whether or not you\nsupplied it with input that it thinks is valid?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 13 Sep 2019 10:59:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Sep 13, 2019 at 9:57 AM Daniel Verite <daniel@manitou-mail.org> wrote:\n>> An advantage of this approach is that you may execute the\n>> function before creating the collation, instead of creating the\n>> collation, realizing there was something wrong in your\n>> locale/lc_collate argument, dropping the collation and trying again.\n\n> That would be really nice.\n\nI think that's a useful function, but it's a different function from\nthe one first proposed, which was to tell you the properties of a\ncollation you already installed (which might not be ICU, even).\nPerhaps we should have both.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Sep 2019 11:09:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> This output tend to reveal mistakes with tags, which is why I thought\n> to expose it as a NOTICE. It addresses the case of a user\n> who wouldn't suspect an error, so the \"in-your-face\" effect is\n> intentional. With the function approach, the user must be\n> proactive.\n\nThat argument presupposes (a) manual execution of the creation query,\nand (b) that the user pays close attention to the NOTICE output.\nUnfortunately, I think our past over-use of notices has trained\npeople to ignore them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Sep 2019 11:12:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 11:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That argument presupposes (a) manual execution of the creation query,\n> and (b) that the user pays close attention to the NOTICE output.\n> Unfortunately, I think our past over-use of notices has trained\n> people to ignore them.\n\nOur past overuse aside, it's just easy to ignore chatter. It often\nhappens to me that I realize 10 minutes after I did something that I\ndidn't look carefully enough at the output ... which is usually\nfollowed by an attempt to scroll back through my terminal buffer to\nfind it. But after a few thousand lines of subsequent output that's\nhard. So I like the idea of making the information available\non-demand, rather than only at creation time.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 13 Sep 2019 11:18:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 11:09:52AM -0400, Tom Lane wrote:\n> I think that's a useful function, but it's a different function from\n> the one first proposed, which was to tell you the properties of a\n> collation you already installed (which might not be ICU, even).\n> Perhaps we should have both.\n\nPerhaps. Having a default description for the collations imported by\ninitdb is nice to have, but because of the gap with collations defined\nafter initialization it seems to me that there is an argument to\nswitch to that function for psql instead of grepping the default\ndescription added to pg_description. Enforcing a comment for a\ncollation manually created based on what libicu tells us does not\nfeel right either, as we don't enforce a comment for the creation of\nother objects.\n--\nMichael",
"msg_date": "Sat, 14 Sep 2019 11:30:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "\tTom Lane wrote:\n\n> I think that's a useful function, but it's a different function from\n> the one first proposed, which was to tell you the properties of a\n> collation you already installed (which might not be ICU, even).\n> Perhaps we should have both.\n\nThe pre-create use case would look like:\n SELECT * FROM describe_collation(locale_string text, collprovider \"char\")\n\nPost-creation, one could do:\n SELECT * FROM describe_collation(collcollate, collprovider)\n FROM pg_catalog.pg_collation WHERE oid = :OID;\n\nPossibly it could exists as SELECT * FROM describe_collation(oid)\nbut that's essentially the same function.\nOr I'm missing something about why we'd need two functions.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Sat, 14 Sep 2019 15:23:04 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "\tTom Lane wrote:\n\n> > This output tend to reveal mistakes with tags, which is why I thought\n> > to expose it as a NOTICE. It addresses the case of a user\n> > who wouldn't suspect an error, so the \"in-your-face\" effect is\n> > intentional. With the function approach, the user must be\n> > proactive.\n> \n> That argument presupposes (a) manual execution of the creation query,\n> and (b) that the user pays close attention to the NOTICE output.\n> Unfortunately, I think our past over-use of notices has trained\n> people to ignore them.\n\nWhat about DEBUG1 as the level?\nSurely we can draw a line somewhere beyond which the benefit of\ngetting that information surpasses the annoyance factor that\nyou're foreseeing?\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Sat, 14 Sep 2019 15:51:03 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> \tTom Lane wrote:\n>> I think that's a useful function, but it's a different function from\n>> the one first proposed, which was to tell you the properties of a\n>> collation you already installed (which might not be ICU, even).\n>> Perhaps we should have both.\n\n> The pre-create use case would look like:\n> SELECT * FROM describe_collation(locale_string text, collprovider \"char\")\n\n> Post-creation, one could do:\n> SELECT * FROM describe_collation(collcollate, collprovider)\n> FROM pg_catalog.pg_collation WHERE oid = :OID;\n\n> Possibly it could exists as SELECT * FROM describe_collation(oid)\n> but that's essentially the same function.\n\nThe advantage of describe_collation(oid) is that we would not be\nbuilding knowledge into the callers about which columns of pg_collation\nmatter for this purpose. I'm not even convinced that the two you posit\nhere are sufficient --- the encoding seems relevant, for instance.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 14 Sep 2019 11:13:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 8:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The advantage of describe_collation(oid) is that we would not be\n> building knowledge into the callers about which columns of pg_collation\n> matter for this purpose. I'm not even convinced that the two you posit\n> here are sufficient --- the encoding seems relevant, for instance.\n\n+1. It seems like a good idea to consider the ICU display name to be\njust that -- a display name. It should be considered a dynamic thing.\nFor one thing, it is subject to localization, so it isn't fixed even\nwhen nothing changes internally. But there is also the question of\nexternal changes. Internationalization is inherently a squishy\nbusiness.\n\nI believe that the main goal of BCP 47 (i.e. ICU's CREATE COLLATION\nlocale strings) is to fail gracefully when cultural or political\ndevelopments occur that change the expectations of users. BCP 47 is\nactually an IETF standard -- it's not from the Unicode consortium, or\nfrom ICU. It is supposed to be highly forgiving -- this is a feature,\nnot a bug. Of course, many facets of a locale control things that we\ndon't care about, or at least don't involve ICU with. For example,\nlocale controls the default currency symbol.\n\nThere are pg_upgrade scenarios in which the display string for a\ncollation will legitimately change due to external changes. For\nexample, somebody that lived in Serbia and Montenegro (a country which\nceased to exist in 2006) could have used a locale string with \"cs\" (an\nISO 3166-1 code), which has been deprecated [1]. If memory serves,\nthere is a 5 year grace period codified by some ISO standard or other,\nso recent ICU versions know nothing about Serbia and Montenegro\nspecifically. But they'll still recognize the Serbian language code,\nas well as language codes for minority languages spoken in Serbia and\nMontenegro. 
So, for the most part, the impact of sticking with this\nold/somewhat inaccurate locale definition string is minimal.\n(Actually, maybe downgrade scenarios are more interesting in\npractice.)\n\n[1] https://en.wikipedia.org/wiki/ISO_3166-2:CS#Codes_deleted_in_Newsletter_I-8\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 14 Sep 2019 13:46:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 03:51:03PM +0200, Daniel Verite wrote:\n> What about DEBUG1 as the level?\n> Surely we can draw a line somewhere beyond which the benefit of\n> getting that information surpasses the annoyance factor that\n> you're foreseeing?\n\nDEBUG1 is even more chatty. I agree with the others that making only\nthis information available at creation time is a no-go.\n--\nMichael",
"msg_date": "Sun, 15 Sep 2019 23:29:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Create collation reporting the ICU locale display name"
}
]
[
{
"msg_contents": "Hello,\n\nWhile thinking about looping hash joins (an alternative strategy for\nlimiting hash join memory usage currently being investigated by\nMelanie Plageman in a nearby thread[1]), the topic of parallel query\ndeadlock hazards came back to haunt me. I wanted to illustrate the\nproblems I'm aware of with the concrete code where I ran into this\nstuff, so here is a new-but-still-broken implementation of $SUBJECT.\nThis was removed from the original PHJ submission when I got stuck and\nran out of time in the release cycle for 11. Since the original\ndiscussion is buried in long threads and some of it was also a bit\nconfused, here's a fresh description of the problems as I see them.\nHopefully these thoughts might help Melanie's project move forward,\nbecause it's closely related, but I didn't want to dump another patch\ninto that other thread. Hence this new thread.\n\nI haven't succeeded in actually observing a deadlock with the attached\npatch (though I did last year, very rarely), but I also haven't tried\nvery hard. The patch seems to produce the right answers and is pretty\nscalable, so it's really frustrating not to be able to get it over the\nline.\n\nTuple queue deadlock hazard:\n\nIf the leader process is executing the subplan itself and waiting for\nall processes to arrive in ExecParallelHashEndProbe() (in this patch)\nwhile another process has filled up its tuple queue and is waiting for\nthe leader to read some tuples an unblock it, they will deadlock\nforever. That can't happen in the the committed version of PHJ,\nbecause it never waits for barriers after it has begun emitting\ntuples.\n\nSome possible ways to fix this:\n\n1. You could probably make it so that the PHJ_BATCH_SCAN_INNER phase\nin this patch (the scan for unmatched tuples) is executed by only one\nprocess, using the \"detach-and-see-if-you-were-last\" trick. Melanie\nproposed that for an equivalent problem in the looping hash join. 
I\nthink it probably works, but it gives up a lot of parallelism and thus\nwon't scale as nicely as the attached patch.\n\n2. You could probably make it so that only the leader process drops\nout of executing the inner unmatched scan, and then I think you\nwouldn't have this very specific problem at the cost of losing some\n(but not all) parallelism (ie the leader), but there might be other\nvariants of the problem. For example, a GatherMerge leader process\nmight be blocked waiting for the next tuple for a tuple from P1, while\nP2 is try to write to a full queue, and P1 waits for P2.\n\n3. You could introduce some kind of overflow for tuple queues, so\nthat tuple queues can never block because they're full (until you run\nout of extra memory buffers or disk and error out). I haven't\nseriously looked into this but I'm starting to suspect it's the\nindustrial strength general solution to the problem and variants of it\nthat show up in other parallelism projects (Parallel Repartition). As\nRobert mentioned last time I talked about this[2], you'd probably only\nwant to allow spooling (rather than waiting) when the leader is\nactually waiting for other processes; I'm not sure how exactly to\ncontrol that.\n\n4. <thinking-really-big>Goetz Graefe's writing about parallel sorting\ncomes close to this topic, which he calls flow control deadlocks. He\nmentions the possibility of infinite spooling like (3) as a solution.\nHe's describing a world where producers and consumers are running\nconcurrently, and the consumer doesn't just decide to start running\nthe subplan (what we call \"leader participation\"), so he doesn't\nactually have a problem like Gather deadlock. He describes\nplanner-enforced rules that allow deadlock free execution even with\nfixed-size tuple queue flow control by careful controlling where\norder-forcing operators are allowed to appear, so he doesn't have a\nproblem like Gather Merge deadlock. 
I'm not proposing we should\ncreate a whole bunch of producer and consumer processes to run\ndifferent plan fragments, but I think you can virtualise the general\nidea in an async executor with \"streams\", and that also solves other\nproblems when you start working with partitions in a world where it's\nnot even sure how many workers will show up. I see this as a long\nterm architectural goal requiring vast amounts of energy to achieve,\nhence my new interest in (3) for now.</thinking-really-big>\n\nHypothetical inter-node deadlock hazard:\n\nRight now I think it is the case the whenever any node begins pulling\ntuples from a subplan, it continues to do so until either the query\nends early or the subplan runs out of tuples. For example, Append\nprocesses its subplans one at a time until they're done -- it doesn't\njump back and forth. Parallel Append doesn't necessarily run them in\nthe order that they appear in the plan, but it still runs each one to\ncompletion before picking another one. If we ever had a node that\ndidn't adhere to that rule, then two Parallel Full Hash Join nodes\ncould dead lock, if some of the workers were stuck waiting in one\nwhile some were stuck waiting in the other.\n\nIf we were happy to decree that that is a rule of the current\nPostgreSQL executor, then this hypothetical problem would go away.\nFor example, consider the old patch I recently rebased[3] to allow\nAppend over a bunch of FDWs representing remote shards to return\ntuples as soon as they're ready, not necessarily sequentially (and I\nthink several others have worked on similar patches). 
To be\ncommittable under such a rule that applies globally to the whole\nexecutor, that patch would only be allowed to *start* them in any\norder, but once it's started pulling tuples from a given subplan it'd\nhave to pull them all to completion before considering another node.\n\n(Again, that problem goes away in an async model like (4), which will\nalso be able to do much more interesting things with FDWs, and it's\nthe FDW thing that I think generates more interest in async execution\nthan my rambling about abstract parallel query problems.)\n\nSome other notes on the patch:\n\nAside from the deadlock problem, there are some minor details to tidy\nup (handling of late starters probably not quite right, rescans not\nyet considered). There is a fun hard-coded parameter that controls\nthe parallel step size in terms of cache lines for the unmatched scan;\nI found that 8 was a lot faster than 4, but no slower than 128 on my\nlaptop, so I set it to 8. More thoughts along those micro-optimistic\nlines: instead of match bit in the header, you could tag the pointer\nand sometimes avoid having to follow it, and you could prefetch next\nnon-matching tuple's cacheline by looking ahead a bit.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGKWWmf%3DWELLG%3DaUGbcugRaSQbtm0tKYiBut-B2rVKX63g%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA%2BTgmoY4LogYcg1y5JPtto_fL-DBUqvxRiZRndDC70iFiVsVFQ%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/flat/CA%2BhUKGLBRyu0rHrDCMC4%3DRn3252gogyp1SjOgG8SEKKZv%3DFwfQ%40mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Thu, 12 Sep 2019 17:56:00 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Parallel Full Hash Join"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 11:23 PM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n\n>\n> While thinking about looping hash joins (an alternative strategy for\n> limiting hash join memory usage currently being investigated by\n> Melanie Plageman in a nearby thread[1]), the topic of parallel query\n> deadlock hazards came back to haunt me. I wanted to illustrate the\n> problems I'm aware of with the concrete code where I ran into this\n> stuff, so here is a new-but-still-broken implementation of $SUBJECT.\n> This was removed from the original PHJ submission when I got stuck and\n> ran out of time in the release cycle for 11. Since the original\n> discussion is buried in long threads and some of it was also a bit\n> confused, here's a fresh description of the problems as I see them.\n> Hopefully these thoughts might help Melanie's project move forward,\n> because it's closely related, but I didn't want to dump another patch\n> into that other thread. Hence this new thread.\n>\n> I haven't succeeded in actually observing a deadlock with the attached\n> patch (though I did last year, very rarely), but I also haven't tried\n> very hard. The patch seems to produce the right answers and is pretty\n> scalable, so it's really frustrating not to be able to get it over the\n> line.\n>\n> Tuple queue deadlock hazard:\n>\n> If the leader process is executing the subplan itself and waiting for\n> all processes to arrive in ExecParallelHashEndProbe() (in this patch)\n> while another process has filled up its tuple queue and is waiting for\n> the leader to read some tuples and unblock it, they will deadlock\n> forever. That can't happen in the committed version of PHJ,\n> because it never waits for barriers after it has begun emitting\n> tuples.\n>\n> Some possible ways to fix this:\n>\n> 1. 
You could probably make it so that the PHJ_BATCH_SCAN_INNER phase\n> in this patch (the scan for unmatched tuples) is executed by only one\n> process, using the \"detach-and-see-if-you-were-last\" trick. Melanie\n> proposed that for an equivalent problem in the looping hash join. I\n> think it probably works, but it gives up a lot of parallelism and thus\n> won't scale as nicely as the attached patch.\n>\n\nI have attached a patch which implements this\n(v1-0001-Parallel-FOJ-ROJ-single-worker-scan-buckets.patch).\n\nFor starters, in order to support parallel FOJ and ROJ, I re-enabled\nsetting the match bit for the tuples in the hashtable which\n3e4818e9dd5be294d97c disabled. I did so using the code suggested in [1],\nreading the match bit to see if it is already set before setting it.\n\nThen, workers except for the last worker detach after exhausting the\nouter side of a batch, leaving one worker to proceed to HJ_FILL_INNER\nand do the scan of the hash table and emit unmatched inner tuples.\n\nI have also attached a variant on this patch which I am proposing to\nreplace it (v1-0001-Parallel-FOJ-ROJ-single-worker-scan-chunks.patch)\nwhich has a new ExecParallelScanHashTableForUnmatched() in which the\nsingle worker doing the unmatched scan scans one HashMemoryChunk at a\ntime and then frees them as it goes. I thought this might perform better\nthan the version which uses the buckets because 1) it should do a bit\nless pointer chasing and 2) it frees each chunk of the hash table as it\nscans it which (maybe) would save a bit of time during\nExecHashTableDetachBatch() when it goes through and frees the hash\ntable, but, my preliminary tests showed a negligible difference between\nthis and the version using buckets. I will do a bit more testing,\nthough.\n\nI tried a few other variants of these patches, including one in which\nthe workers detach from the batch inside of the batch loading and\nprobing phase machine, ExecParallelHashJoinNewBatch(). 
This meant that\nall workers transition to HJ_FILL_INNER and then HJ_NEED_NEW_BATCH in\norder to detach in the batch phase machine. This, however, involved\nadding a lot of new variables to distinguish whether or not the\nunmatched outer scan was already done, whether or not the current worker\nwas the worker elected to do the scan, etc. Overall, it is probably\nincorrect to use the HJ_NEED_NEW_BATCH state in this way. I had\noriginally tried this to avoid operating on the batch_barrier in the\nmain hash join state machine. I've found that the more different places\nwe add code attaching and detaching to the batch_barrier (and other PHJ\nbarriers, for that matter), the harder it is to debug the code, however,\nI think in this case it is required.\n\n\n> 2. You could probably make it so that only the leader process drops\n> out of executing the inner unmatched scan, and then I think you\n> wouldn't have this very specific problem at the cost of losing some\n> (but not all) parallelism (ie the leader), but there might be other\n> variants of the problem. For example, a GatherMerge leader process\n> might be blocked waiting for the next tuple from P1, while\n> P2 is trying to write to a full queue, and P1 waits for P2.\n>\n> 3. You could introduce some kind of overflow for tuple queues, so\n> that tuple queues can never block because they're full (until you run\n> out of extra memory buffers or disk and error out). I haven't\n> seriously looked into this but I'm starting to suspect it's the\n> industrial strength general solution to the problem and variants of it\n> that show up in other parallelism projects (Parallel Repartition). As\n> Robert mentioned last time I talked about this[2], you'd probably only\n> want to allow spooling (rather than waiting) when the leader is\n> actually waiting for other processes; I'm not sure how exactly to\n> control that.\n>\n> 4. 
<thinking-really-big>Goetz Graefe's writing about parallel sorting\n> comes close to this topic, which he calls flow control deadlocks. He\n> mentions the possibility of infinite spooling like (3) as a solution.\n> He's describing a world where producers and consumers are running\n> concurrently, and the consumer doesn't just decide to start running\n> the subplan (what we call \"leader participation\"), so he doesn't\n> actually have a problem like Gather deadlock. He describes\n> planner-enforced rules that allow deadlock-free execution even with\n> fixed-size tuple queue flow control by carefully controlling where\n> order-forcing operators are allowed to appear, so he doesn't have a\n> problem like Gather Merge deadlock. I'm not proposing we should\n> create a whole bunch of producer and consumer processes to run\n> different plan fragments, but I think you can virtualise the general\n> idea in an async executor with \"streams\", and that also solves other\n> problems when you start working with partitions in a world where it's\n> not even sure how many workers will show up. I see this as a long\n> term architectural goal requiring vast amounts of energy to achieve,\n> hence my new interest in (3) for now.</thinking-really-big>\n>\n> Hypothetical inter-node deadlock hazard:\n>\n> Right now I think it is the case that whenever any node begins pulling\n> tuples from a subplan, it continues to do so until either the query\n> ends early or the subplan runs out of tuples. For example, Append\n> processes its subplans one at a time until they're done -- it doesn't\n> jump back and forth. Parallel Append doesn't necessarily run them in\n> the order that they appear in the plan, but it still runs each one to\n> completion before picking another one. 
If we ever had a node that\n> didn't adhere to that rule, then two Parallel Full Hash Join nodes\n> could deadlock, if some of the workers were stuck waiting in one\n> while some were stuck waiting in the other.\n>\n> If we were happy to decree that that is a rule of the current\n> PostgreSQL executor, then this hypothetical problem would go away.\n> For example, consider the old patch I recently rebased[3] to allow\n> Append over a bunch of FDWs representing remote shards to return\n> tuples as soon as they're ready, not necessarily sequentially (and I\n> think several others have worked on similar patches). To be\n> committable under such a rule that applies globally to the whole\n> executor, that patch would only be allowed to *start* them in any\n> order, but once it's started pulling tuples from a given subplan it'd\n> have to pull them all to completion before considering another node.\n>\n> (Again, that problem goes away in an async model like (4), which will\n> also be able to do much more interesting things with FDWs, and it's\n> the FDW thing that I think generates more interest in async execution\n> than my rambling about abstract parallel query problems.)\n>\n>\nThe leader exclusion tactics and the spooling idea don't solve the\nexecution order deadlock possibility, so, this \"all except last detach\nand last does unmatched inner scan\" seems like the best way to solve\nboth types of deadlock.\nThere is another option that could maintain some parallelism for the\nunmatched inner scan.\n\nThis method is exactly like the \"all except last detach and last does\nunmatched inner scan\" method from the perspective of the main hash join\nstate machine. The difference is in ExecParallelHashJoinNewBatch(). In\nthe batch_barrier phase machine, workers loop around looking for batches\nthat are not done.\n\nIn this \"detach for now\" method, all workers except the last one detach\nfrom a batch after exhausting the outer side. 
They will mark the batch\nthey were just working on as \"provisionally done\" (as opposed to\n\"done\"). The last worker advances the batch_barrier from\nPHJ_BATCH_PROBING to PHJ_BATCH_SCAN_INNER.\n\nAll detached workers then proceed to HJ_NEED_NEW_BATCH and try to find\nanother batch to work on. If there are no batches that are neither\n\"done\" nor \"provisionally done\", then the worker will re-attach to\nbatches that are \"provisionally done\" and attempt to join in conducting\nthe unmatched inner scan. Once it finishes its work there, it will\nreturn to HJ_NEED_NEW_BATCH, enter ExecParallelHashJoinNewBatch() and\nmark the batch as \"done\".\n\nBecause the worker detached from the batch, this method solves the tuple\nqueue flow control deadlock issue--this worker could not be attempting\nto emit a tuple while the leader waits at the barrier for it. There is\nno waiting at the barrier.\n\nHowever, it is unclear to me whether or not this method will be at risk\nof inter-node deadlock/execution order deadlock. It seems like this is\nnot more at risk than the existing code is for this issue.\n\nIf a worker never returns to the HashJoin after leaving to emit a tuple,\nin any of the methods (and in master), the query would not finish\ncorrectly because the workers are attached to the batch_barrier while\nemitting tuples and, though they may not wait at this barrier again, the\nhashtable is cleaned up by the last participant to detach, and this\nwould not happen if it doesn't return to the batch phase machine. 
I'm\nnot sure if this exhibits the problematic behavior detailed above, but,\nif it does, it is not unique to this method.\n\nSome other notes on the patch:\n>\n> Aside from the deadlock problem, there are some minor details to tidy\n> up (handling of late starters probably not quite right, rescans not\n> yet considered).\n\n\nThese would not be an issue with only one worker doing the scan but\nwould have to be handled in a potential new parallel-enabled solution\nlike I suggested above.\n\n\n> There is a fun hard-coded parameter that controls\n> the parallel step size in terms of cache lines for the unmatched scan;\n> I found that 8 was a lot faster than 4, but no slower than 128 on my\n> laptop, so I set it to 8.\n\n\nI didn't add this cache line optimization to my chunk scanning method. I\ncould do so. Do you think it is more relevant, less relevant, or the\nsame if only one worker is doing the unmatched inner scan?\n\nMore thoughts along those micro-optimistic\n> lines: instead of match bit in the header, you could tag the pointer\n> and sometimes avoid having to follow it, and you could prefetch next\n> non-matching tuple's cacheline by looking a head a bit.\n>\n\nI would be happy to try doing this once we get the rest of the patch\nironed out so that seeing how much of a performance difference it makes\nis more straightforward.\n\n\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGKWWmf%3DWELLG%3DaUGbcugRaSQbtm0tKYiBut-B2rVKX63g%40mail.gmail.com\n> [2]\n> https://www.postgresql.org/message-id/CA%2BTgmoY4LogYcg1y5JPtto_fL-DBUqvxRiZRndDC70iFiVsVFQ%40mail.gmail.com\n> [3]\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGLBRyu0rHrDCMC4%3DRn3252gogyp1SjOgG8SEKKZv%3DFwfQ%40mail.gmail.com\n>\n>\n>\n[1]\nhttps://www.postgresql.org/message-id/0F44E799048C4849BAE4B91012DB910462E9897A%40SHSMSX103.ccr.corp.intel.com\n\n-- Melanie",
"msg_date": "Mon, 21 Sep 2020 13:49:17 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Tue, Sep 22, 2020 at 8:49 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Wed, Sep 11, 2019 at 11:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> 1. You could probably make it so that the PHJ_BATCH_SCAN_INNER phase\n>> in this patch (the scan for unmatched tuples) is executed by only one\n>> process, using the \"detach-and-see-if-you-were-last\" trick. Melanie\n>> proposed that for an equivalent problem in the looping hash join. I\n>> think it probably works, but it gives up a lot of parallelism and thus\n>> won't scale as nicely as the attached patch.\n>\n> I have attached a patch which implements this\n> (v1-0001-Parallel-FOJ-ROJ-single-worker-scan-buckets.patch).\n\nHi Melanie,\n\nThanks for working on this! I have a feeling this is going to be much\neasier to land than the mighty hash loop patch. And it's good to get\none of our blocking design questions nailed down for both patches.\n\nI took it for a very quick spin and saw simple cases working nicely,\nbut TPC-DS queries 51 and 97 (which contain full joins) couldn't be\nconvinced to use it. Hmm.\n\n> For starters, in order to support parallel FOJ and ROJ, I re-enabled\n> setting the match bit for the tuples in the hashtable which\n> 3e4818e9dd5be294d97c disabled. I did so using the code suggested in [1],\n> reading the match bit to see if it is already set before setting it.\n\nCool. 
I'm quite keen to add a \"fill_inner\" parameter for\nExecHashJoinImpl() and have an N-dimensional lookup table of\nExecHashJoin variants, so that this and much other related branching\ncan be constant-folded out of existence by the compiler in common\ncases, which is why I think this is all fine, but that's for another\nday...\n\n> Then, workers except for the last worker detach after exhausting the\n> outer side of a batch, leaving one worker to proceed to HJ_FILL_INNER\n> and do the scan of the hash table and emit unmatched inner tuples.\n\n+1\n\nDoing better is pretty complicated within our current execution model,\nand I think this is a good compromise for now.\n\nCosting for uneven distribution is tricky; depending on your plan\nshape, specifically whether there is something else to do afterwards\nto pick up the slack, it might or might not affect the total run time\nof the query. It seems like there's not much we can do about that.\n\n> I have also attached a variant on this patch which I am proposing to\n> replace it (v1-0001-Parallel-FOJ-ROJ-single-worker-scan-chunks.patch)\n> which has a new ExecParallelScanHashTableForUnmatched() in which the\n> single worker doing the unmatched scan scans one HashMemoryChunk at a\n> time and then frees them as it goes. I thought this might perform better\n> than the version which uses the buckets because 1) it should do a bit\n> less pointer chasing and 2) it frees each chunk of the hash table as it\n> scans it which (maybe) would save a bit of time during\n> ExecHashTableDetachBatch() when it goes through and frees the hash\n> table, but, my preliminary tests showed a negligible difference between\n> this and the version using buckets. 
I will do a bit more testing,\n> though.\n\n+1\n\nI agree that it's the better of those two options.\n\n>> [stuff about deadlocks]\n>\n> The leader exclusion tactics and the spooling idea don't solve the\n> execution order deadlock possibility, so, this \"all except last detach\n> and last does unmatched inner scan\" seems like the best way to solve\n> both types of deadlock.\n\nAgreed (at least as long as our threads of query execution are made\nout of C call stacks and OS processes that block).\n\n>> Some other notes on the patch:\n>>\n>> Aside from the deadlock problem, there are some minor details to tidy\n>> up (handling of late starters probably not quite right, rescans not\n>> yet considered).\n>\n> These would not be an issue with only one worker doing the scan but\n> would have to be handled in a potential new parallel-enabled solution\n> like I suggested above.\n\nMakes sense. Not sure why I thought anything special was needed for rescans.\n\n>> There is a fun hard-coded parameter that controls\n>> the parallel step size in terms of cache lines for the unmatched scan;\n>> I found that 8 was a lot faster than 4, but no slower than 128 on my\n>> laptop, so I set it to 8.\n>\n> I didn't add this cache line optimization to my chunk scanning method. I\n> could do so. 
Do you think it is more relevant, less relevant, or the\n> same if only one worker is doing the unmatched inner scan?\n\nYeah it's irrelevant for a single process, and even more irrelevant if\nwe go with your chunk-based version.\n\n>> More thoughts along those micro-optimistic\n>> lines: instead of match bit in the header, you could tag the pointer\n>> and sometimes avoid having to follow it, and you could prefetch next\n>> non-matching tuple's cacheline by looking ahead a bit.\n>\n> I would be happy to try doing this once we get the rest of the patch\n> ironed out so that seeing how much of a performance difference it makes\n> is more straightforward.\n\nIgnore that, I have no idea if the maintenance overhead for such an\nevery-tuple-in-this-chain-is-matched tag bit would be worth it, it was\njust an idle thought. I think your chunk-scan plan seems sensible for\nnow.\n\n From a quick peek:\n\n+/*\n+ * Upon arriving at the barrier, if this worker is not the last\nworker attached,\n+ * detach from the barrier and return false. If this worker is the last worker,\n+ * remain attached and advance the phase of the barrier, return true\nto indicate\n+ * you are the last or \"elected\" worker who is still attached to the barrier.\n+ * Another name I considered was BarrierUniqueify or BarrierSoloAssign\n+ */\n+bool\n+BarrierDetachOrElect(Barrier *barrier)\n\nI tried to find some existing naming in writing about\nbarriers/phasers, but nothing is jumping out at me. I think a lot of\nthis stuff comes from super computing where I guess \"make all of the\nthreads give up except one\" isn't a primitive they'd be too excited\nabout :-)\n\nBarrierArriveAndElectOrDetach()... 
gah, no.\n\n+ last = BarrierDetachOrElect(&batch->batch_barrier);\n\nIt'd be nice to add some assertions after that, in the 'last' path,\nthat there's only one participant and that the phase is as expected,\njust to make it even clearer to the reader, and a comment in the other\npath that we are no longer attached.\n\n+ hjstate->hj_AllocatedBucketRange = 0;\n...\n+ pg_atomic_uint32 bucket; /* bucket allocator for unmatched inner scan */\n...\n+ //volatile int mybp = 0; while (mybp == 0)\n\nSome leftover fragments of the bucket-scan version and debugging stuff.\n\n\n",
"msg_date": "Tue, 22 Sep 2020 15:33:54 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Mon, Sep 21, 2020 at 8:34 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Tue, Sep 22, 2020 at 8:49 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > On Wed, Sep 11, 2019 at 11:23 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n>\n> I took it for a very quick spin and saw simple cases working nicely,\n> but TPC-DS queries 51 and 97 (which contain full joins) couldn't be\n> convinced to use it. Hmm.\n>\n\nThanks for taking a look, Thomas!\n\nBoth query 51 and query 97 have full outer joins of two CTEs, each of\nwhich is an aggregate query.\n\nDuring planning when constructing the joinrel and choosing paths, in\nhash_inner_and_outer(), we don't consider parallel hash\njoin paths because the outerrel and innerrel do not have\npartial_pathlists.\n\nThis code\n\n if (joinrel->consider_parallel &&\n save_jointype != JOIN_UNIQUE_OUTER &&\n outerrel->partial_pathlist != NIL &&\n bms_is_empty(joinrel->lateral_relids))\n\ngates the code to generate partial paths for hash join.\n\nMy understanding of this is that if the inner and outerrel don't have\npartial paths, then they can't be executed in parallel, so the join\ncould not be executed in parallel.\n\nFor the two TPC-DS queries, even if they use parallel aggs, the finalize\nagg will have to be done by a single worker, so I don't think they could\nbe joined with a parallel hash join.\n\nI added some logging inside the \"if\" statement and ran join_hash.sql in\nregress to see what nodes were typically in the pathlist and partial\npathlist. All of them had basically just sequential scans as the outer\nand inner rel paths. 
regress examples are definitely meant to be\nminimal, so this probably wasn't the best place to look for examples of\nmore complex rels that can be joined with a parallel hash join.\n\n\n>\n> >> Some other notes on the patch:\n>\n> From a quick peek:\n>\n> +/*\n> + * Upon arriving at the barrier, if this worker is not the last\n> worker attached,\n> + * detach from the barrier and return false. If this worker is the last\n> worker,\n> + * remain attached and advance the phase of the barrier, return true\n> to indicate\n> + * you are the last or \"elected\" worker who is still attached to the\n> barrier.\n> + * Another name I considered was BarrierUniqueify or BarrierSoloAssign\n> + */\n> +bool\n> +BarrierDetachOrElect(Barrier *barrier)\n>\n> I tried to find some existing naming in writing about\n> barriers/phasers, but nothing is jumping out at me. I think a lot of\n> this stuff comes from super computing where I guess \"make all of the\n> threads give up except one\" isn't a primitive they'd be too excited\n> about :-)\n>\n> BarrierArriveAndElectOrDetach()... 
gah, no.\n>\n\nYou're right that Arrive should be in there.\nSo, I went with BarrierArriveAndDetachExceptLast()\nIt's specific, if not clever.\n\n\n>\n> + last = BarrierDetachOrElect(&batch->batch_barrier);\n>\n> I'd be nice to add some assertions after that, in the 'last' path,\n> that there's only one participant and that the phase is as expected,\n> just to make it even clearer to the reader, and a comment in the other\n> path that we are no longer attached.\n>\n\nAssert and comment added to the single worker path.\nThe other path is just back to HJ_NEED_NEW_BATCH and workers will detach\nthere as before, so I'm not sure where we could add the comment about\nthe other workers detaching.\n\n\n>\n> + hjstate->hj_AllocatedBucketRange = 0;\n> ...\n> + pg_atomic_uint32 bucket; /* bucket allocator for unmatched inner\n> scan */\n> ...\n> + //volatile int mybp = 0; while (mybp == 0)\n>\n> Some leftover fragments of the bucket-scan version and debugging stuff.\n>\n\ncleaned up (and rebased).\n\nI also changed ExecScanHashTableForUnmatched() to scan HashMemoryChunks\nin the hashtable instead of using the buckets to align parallel and\nserial hash join code.\n\nOriginally, I had that code freeing the chunks of the hashtable after\nfinishing scanning them, however, I noticed this query from regress\nfailing:\n\nselect * from\n(values (1, array[10,20]), (2, array[20,30])) as v1(v1x,v1ys)\nleft join (values (1, 10), (2, 20)) as v2(v2x,v2y) on v2x = v1x\nleft join unnest(v1ys) as u1(u1y) on u1y = v2y;\n\nIt is because the hash join gets rescanned and because there is only one\nbatch, ExecReScanHashJoin reuses the same hashtable.\n\n QUERY PLAN\n-------------------------------------------------------------\n Nested Loop Left Join\n -> Values Scan on \"*VALUES*\"\n -> Hash Right Join\n Hash Cond: (u1.u1y = \"*VALUES*_1\".column2)\n Filter: (\"*VALUES*_1\".column1 = \"*VALUES*\".column1)\n -> Function Scan on unnest u1\n -> Hash\n -> Values Scan on \"*VALUES*_1\"\n\nI was 
freeing the hashtable as I scanned each chunk, which clearly\ndoesn't work for a single batch hash join which gets rescanned.\n\nI don't see anything specific to parallel hash join in ExecReScanHashJoin(),\nso, it seems like the same rules apply to parallel hash join. So, I will\nhave to remove the logic that frees the hash table after scanning each\nchunk from the parallel function as well.\n\nIn addition, I still need to go through the patch with a fine-tooth comb\n(refine the comments and variable names and such) but just wanted to\ncheck that these changes were in line with what you were thinking first.\n\nRegards,\nMelanie (Microsoft)",
"msg_date": "Tue, 29 Sep 2020 17:45:23 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "I've attached a patch with the corrections I mentioned upthread.\nI've gone ahead and run pgindent, though, I can't say that I'm very\nhappy with the result.\n\nI'm still not quite happy with the name\nBarrierArriveAndDetachExceptLast(). It's so literal. As you said, there\nprobably isn't a nice name for this concept, since it is a function with\nthe purpose of terminating parallelism.\n\nRegards,\nMelanie (Microsoft)",
"msg_date": "Wed, 4 Nov 2020 14:33:58 -0800",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "Hi Melanie,\n\nOn Thu, Nov 5, 2020 at 7:34 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> I've attached a patch with the corrections I mentioned upthread.\n> I've gone ahead and run pgindent, though, I can't say that I'm very\n> happy with the result.\n>\n> I'm still not quite happy with the name\n> BarrierArriveAndDetachExceptLast(). It's so literal. As you said, there\n> probably isn't a nice name for this concept, since it is a function with\n> the purpose of terminating parallelism.\n\nYou sent in your patch, v3-0001-Support-Parallel-FOJ-and-ROJ.patch to\npgsql-hackers on Nov 5, but you did not post it to the next\nCommitFest[1]. If this was intentional, then you need to take no\naction. However, if you want your patch to be reviewed as part of the\nupcoming CommitFest, then you need to add it yourself before\n2021-01-01 AOE[2]. Also, rebasing to the current HEAD may be required\nas almost two months have passed since this patch was submitted. Thanks\nfor your contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Dec 2020 17:48:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Mon, Dec 28, 2020 at 9:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Thu, Nov 5, 2020 at 7:34 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I've attached a patch with the corrections I mentioned upthread.\n> > I've gone ahead and run pgindent, though, I can't say that I'm very\n> > happy with the result.\n> >\n> > I'm still not quite happy with the name\n> > BarrierArriveAndDetachExceptLast(). It's so literal. As you said, there\n> > probably isn't a nice name for this concept, since it is a function with\n> > the purpose of terminating parallelism.\n>\n> You sent in your patch, v3-0001-Support-Parallel-FOJ-and-ROJ.patch to\n> pgsql-hackers on Nov 5, but you did not post it to the next\n> CommitFest[1]. If this was intentional, then you need to take no\n> action. However, if you want your patch to be reviewed as part of the\n> upcoming CommitFest, then you need to add it yourself before\n> 2021-01-01 AOE[2]. Also, rebasing to the current HEAD may be required\n> as almost two months passed since when this patch is submitted. Thanks\n> for your contributions.\n\nThanks for this reminder Sawada-san. I had some feedback I meant to\npost in November but didn't get around to:\n\n+bool\n+BarrierArriveAndDetachExceptLast(Barrier *barrier)\n\nI committed this part (7888b099). I've attached a rebase of the rest\nof Melanie's v3 patch.\n\n+ WAIT_EVENT_HASH_BATCH_PROBE,\n\nThat new wait event isn't needed (we can't and don't wait).\n\n * PHJ_BATCH_PROBING -- all probe\n- * PHJ_BATCH_DONE -- end\n+\n+ * PHJ_BATCH_DONE -- queries not requiring inner fill done\n+ * PHJ_BATCH_FILL_INNER_DONE -- inner fill completed, all queries done\n\nWould it be better/tidier to keep _DONE as the final phase? That is,\nto switch around these two final phases. 
Or does that make it too\nhard to coordinate the detach-and-cleanup logic?\n\n+/*\n+ * ExecPrepHashTableForUnmatched\n+ * set up for a series of ExecScanHashTableForUnmatched calls\n+ * return true if this worker is elected to do the\nunmatched inner scan\n+ */\n+bool\n+ExecParallelPrepHashTableForUnmatched(HashJoinState *hjstate)\n\nComment name doesn't match function name.",
"msg_date": "Tue, 29 Dec 2020 15:28:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Tue, Dec 29, 2020 at 03:28:12PM +1300, Thomas Munro wrote:\n> I had some feedback I meant to\n> post in November but didn't get around to:\n> \n> * PHJ_BATCH_PROBING -- all probe\n> - * PHJ_BATCH_DONE -- end\n> +\n> + * PHJ_BATCH_DONE -- queries not requiring inner fill done\n> + * PHJ_BATCH_FILL_INNER_DONE -- inner fill completed, all queries done\n> \n> Would it be better/tidier to keep _DONE as the final phase? That is,\n> to switch around these two final phases. Or does that make it too\n> hard to coordinate the detach-and-cleanup logic?\n\nI updated this to use your suggestion. My rationale for having\nPHJ_BATCH_DONE and then PHJ_BATCH_FILL_INNER_DONE was that, for a worker\nattaching to the batch for the first time, it might be confusing that it\nis in the PHJ_BATCH_FILL_INNER state (not the DONE state) and yet that\nworker still just detaches and moves on. It didn't seem intuitive.\nAnyway, I think that is all sort of confusing and unnecessary. I changed\nit to PHJ_BATCH_FILLING_INNER -- then when a worker who hasn't ever been\nattached to this batch before attaches, it will be in the\nPHJ_BATCH_FILLING_INNER phase, which it cannot help with and it will\ndetach and move on.\n\n> \n> +/*\n> + * ExecPrepHashTableForUnmatched\n> + * set up for a series of ExecScanHashTableForUnmatched calls\n> + * return true if this worker is elected to do the\n> unmatched inner scan\n> + */\n> +bool\n> +ExecParallelPrepHashTableForUnmatched(HashJoinState *hjstate)\n> \n> Comment name doesn't match function name.\n\nUpdated -- and a few other comment updates too.\n\nI just attached the diff.",
"msg_date": "Thu, 11 Feb 2021 17:02:18 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Fri, Feb 12, 2021 at 11:02 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I just attached the diff.\n\nSquashed into one patch for the cfbot to chew on, with a few minor\nadjustments to a few comments.",
"msg_date": "Tue, 2 Mar 2021 23:27:19 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 11:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Feb 12, 2021 at 11:02 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I just attached the diff.\n>\n> Squashed into one patch for the cfbot to chew on, with a few minor\n> adjustments to a few comments.\n\nI did some more minor tidying of comments and naming. It's been on my\nto-do-list to update some phase names after commit 3048898e, and while\ndoing that I couldn't resist the opportunity to change DONE to FREE,\nwhich somehow hurts my brain less, and makes much more obvious sense\nafter the bugfix in CF #3031 that splits DONE into two separate\nphases. It also pairs obviously with ALLOCATE. I include a copy of\nthat bugix here too as 0001, because I'll likely commit that first, so\nI rebased the stack of patches that way. 0002 includes the renaming I\npropose (master only). Then 0003 is Melanie's patch, using the name\nSCAN for the new match bit scan phase. I've attached an updated\nversion of my \"phase diagram\" finger painting, to show how it looks\nwith these three patches. \"scan*\" is new.",
"msg_date": "Sat, 6 Mar 2021 14:30:33 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 8:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Mar 2, 2021 at 11:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Fri, Feb 12, 2021 at 11:02 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > > I just attached the diff.\n> >\n> > Squashed into one patch for the cfbot to chew on, with a few minor\n> > adjustments to a few comments.\n>\n> I did some more minor tidying of comments and naming. It's been on my\n> to-do-list to update some phase names after commit 3048898e, and while\n> doing that I couldn't resist the opportunity to change DONE to FREE,\n> which somehow hurts my brain less, and makes much more obvious sense\n> after the bugfix in CF #3031 that splits DONE into two separate\n> phases. It also pairs obviously with ALLOCATE. I include a copy of\n> that bugix here too as 0001, because I'll likely commit that first, so\n> I rebased the stack of patches that way. 0002 includes the renaming I\n> propose (master only). Then 0003 is Melanie's patch, using the name\n> SCAN for the new match bit scan phase. I've attached an updated\n> version of my \"phase diagram\" finger painting, to show how it looks\n> with these three patches. \"scan*\" is new.\n\nFeedback on\nv6-0002-Improve-the-naming-of-Parallel-Hash-Join-phases.patch\n\nI like renaming DONE to FREE and ALLOCATE TO REALLOCATE in the grow\nbarriers. FREE only makes sense for the Build barrier if you keep the\nadded PHJ_BUILD_RUN phase, though, I assume you would change this patch\nif you decided not to add the new build barrier phase.\n\nI like the addition of the asterisks to indicate a phase is executed by\na single arbitrary process. I was thinking, shall we add one of these to\nHJ_FILL_INNER since it is only done by one process in parallel right and\nfull hash join? Maybe that's confusing because serial hash join uses\nthat state machine too, though. Maybe **? 
Maybe we should invent a\ncomplicated symbolic language :)\n\nOne tiny, random, unimportant thing: The function prototype for\nExecParallelHashJoinPartitionOuter() calls its parameter \"node\" and, in\nthe definition, it is called \"hjstate\". This feels like a good patch to\nthrow in that tiny random change to make the name the same.\n\nstatic void ExecParallelHashJoinPartitionOuter(HashJoinState *node);\n\nstatic void\nExecParallelHashJoinPartitionOuter(HashJoinState *hjstate)\n\n\n",
"msg_date": "Fri, 2 Apr 2021 13:29:38 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "Hi,\nFor v6-0003-Parallel-Hash-Full-Right-Outer-Join.patch\n\n+ * current_chunk_idx: index in current HashMemoryChunk\n\nThe above comment seems to be better fit\nfor ExecScanHashTableForUnmatched(), instead\nof ExecParallelPrepHashTableForUnmatched.\nI wonder where current_chunk_idx should belong (considering the above\ncomment and what the code does).\n\n+ while (hashtable->current_chunk_idx <\nhashtable->current_chunk->used)\n...\n+ next = hashtable->current_chunk->next.unshared;\n+ hashtable->current_chunk = next;\n+ hashtable->current_chunk_idx = 0;\n\nEach time we advance to the next chunk, current_chunk_idx is reset. It\nseems current_chunk_idx can be placed inside chunk.\nMaybe the consideration is that, with the current formation we save space\nby putting current_chunk_idx field at a higher level.\nIf that is the case, a comment should be added.\n\nCheers\n\nOn Fri, Mar 5, 2021 at 5:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Tue, Mar 2, 2021 at 11:27 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > On Fri, Feb 12, 2021 at 11:02 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > > I just attached the diff.\n> >\n> > Squashed into one patch for the cfbot to chew on, with a few minor\n> > adjustments to a few comments.\n>\n> I did some more minor tidying of comments and naming. It's been on my\n> to-do-list to update some phase names after commit 3048898e, and while\n> doing that I couldn't resist the opportunity to change DONE to FREE,\n> which somehow hurts my brain less, and makes much more obvious sense\n> after the bugfix in CF #3031 that splits DONE into two separate\n> phases. It also pairs obviously with ALLOCATE. I include a copy of\n> that bugix here too as 0001, because I'll likely commit that first, so\n> I rebased the stack of patches that way. 0002 includes the renaming I\n> propose (master only). Then 0003 is Melanie's patch, using the name\n> SCAN for the new match bit scan phase. 
I've attached an updated\n> version of my \"phase diagram\" finger painting, to show how it looks\n> with these three patches. \"scan*\" is new.\n>",
"msg_date": "Fri, 2 Apr 2021 12:09:12 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 3:06 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> For v6-0003-Parallel-Hash-Full-Right-Outer-Join.patch\n>\n> + * current_chunk_idx: index in current HashMemoryChunk\n>\n> The above comment seems to be better fit for ExecScanHashTableForUnmatched(), instead of ExecParallelPrepHashTableForUnmatched.\n> I wonder where current_chunk_idx should belong (considering the above comment and what the code does).\n>\n> + while (hashtable->current_chunk_idx < hashtable->current_chunk->used)\n> ...\n> + next = hashtable->current_chunk->next.unshared;\n> + hashtable->current_chunk = next;\n> + hashtable->current_chunk_idx = 0;\n>\n> Each time we advance to the next chunk, current_chunk_idx is reset. It seems current_chunk_idx can be placed inside chunk.\n> Maybe the consideration is that, with the current formation we save space by putting current_chunk_idx field at a higher level.\n> If that is the case, a comment should be added.\n>\n\nThank you for the review. I think that moving the current_chunk_idx into\nthe HashMemoryChunk would probably take up too much space.\n\nOther places that we loop through the tuples in the chunk, we are able\nto just keep a local idx, like here in\nExecParallelHashIncreaseNumBuckets():\n\ncase PHJ_GROW_BUCKETS_REINSERTING:\n...\n while ((chunk = ExecParallelHashPopChunkQueue(hashtable, &chunk_s)))\n {\n size_t idx = 0;\n\n while (idx < chunk->used)\n\nbut, since we cannot do that while also emitting tuples, I thought,\nlet's just stash the index in the hashtable for use in serial hash join\nand the batch accessor for parallel hash join. A comment to this effect\nsounds good to me.\n\n\n",
"msg_date": "Tue, 6 Apr 2021 14:59:23 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Tue, Apr 6, 2021 at 11:59 AM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n\n> On Fri, Apr 2, 2021 at 3:06 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n> > For v6-0003-Parallel-Hash-Full-Right-Outer-Join.patch\n> >\n> > + * current_chunk_idx: index in current HashMemoryChunk\n> >\n> > The above comment seems to be better fit for\n> ExecScanHashTableForUnmatched(), instead of\n> ExecParallelPrepHashTableForUnmatched.\n> > I wonder where current_chunk_idx should belong (considering the above\n> comment and what the code does).\n> >\n> > + while (hashtable->current_chunk_idx <\n> hashtable->current_chunk->used)\n> > ...\n> > + next = hashtable->current_chunk->next.unshared;\n> > + hashtable->current_chunk = next;\n> > + hashtable->current_chunk_idx = 0;\n> >\n> > Each time we advance to the next chunk, current_chunk_idx is reset. It\n> seems current_chunk_idx can be placed inside chunk.\n> > Maybe the consideration is that, with the current formation we save\n> space by putting current_chunk_idx field at a higher level.\n> > If that is the case, a comment should be added.\n> >\n>\n> Thank you for the review. I think that moving the current_chunk_idx into\n> the HashMemoryChunk would probably take up too much space.\n>\n> Other places that we loop through the tuples in the chunk, we are able\n> to just keep a local idx, like here in\n> ExecParallelHashIncreaseNumBuckets():\n>\n> case PHJ_GROW_BUCKETS_REINSERTING:\n> ...\n> while ((chunk = ExecParallelHashPopChunkQueue(hashtable,\n> &chunk_s)))\n> {\n> size_t idx = 0;\n>\n> while (idx < chunk->used)\n>\n> but, since we cannot do that while also emitting tuples, I thought,\n> let's just stash the index in the hashtable for use in serial hash join\n> and the batch accessor for parallel hash join. A comment to this effect\n> sounds good to me.\n>\n\n From the way HashJoinTable is used, I don't have better idea w.r.t. 
the\nlocation of current_chunk_idx.\nIt is not worth introducing another level of mapping between HashJoinTable\nand the chunk index.\n\nSo the current formation is fine with additional comment\nin ParallelHashJoinBatchAccessor (current comment doesn't explicitly\nmention current_chunk_idx).\n\nCheers",
"msg_date": "Tue, 6 Apr 2021 13:56:19 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Sat, Mar 6, 2021 at 12:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Mar 2, 2021 at 11:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Fri, Feb 12, 2021 at 11:02 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > > I just attached the diff.\n> >\n> > Squashed into one patch for the cfbot to chew on, with a few minor\n> > adjustments to a few comments.\n>\n> I did some more minor tidying of comments and naming. It's been on my\n> to-do-list to update some phase names after commit 3048898e, and while\n> doing that I couldn't resist the opportunity to change DONE to FREE,\n> which somehow hurts my brain less, and makes much more obvious sense\n> after the bugfix in CF #3031 that splits DONE into two separate\n> phases. It also pairs obviously with ALLOCATE. I include a copy of\n> that bugix here too as 0001, because I'll likely commit that first, so\n> I rebased the stack of patches that way. 0002 includes the renaming I\n> propose (master only). Then 0003 is Melanie's patch, using the name\n> SCAN for the new match bit scan phase. I've attached an updated\n> version of my \"phase diagram\" finger painting, to show how it looks\n> with these three patches. \"scan*\" is new.\n\nPatches 0002, 0003 no longer apply to the master branch, seemingly\nbecause of subsequent changes to pgstat, so need rebasing.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 31 May 2021 15:17:33 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Mon, May 31, 2021 at 10:47 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Sat, Mar 6, 2021 at 12:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Tue, Mar 2, 2021 at 11:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > On Fri, Feb 12, 2021 at 11:02 AM Melanie Plageman\n> > > <melanieplageman@gmail.com> wrote:\n> > > > I just attached the diff.\n> > >\n> > > Squashed into one patch for the cfbot to chew on, with a few minor\n> > > adjustments to a few comments.\n> >\n> > I did some more minor tidying of comments and naming. It's been on my\n> > to-do-list to update some phase names after commit 3048898e, and while\n> > doing that I couldn't resist the opportunity to change DONE to FREE,\n> > which somehow hurts my brain less, and makes much more obvious sense\n> > after the bugfix in CF #3031 that splits DONE into two separate\n> > phases. It also pairs obviously with ALLOCATE. I include a copy of\n> > that bugix here too as 0001, because I'll likely commit that first, so\n> > I rebased the stack of patches that way. 0002 includes the renaming I\n> > propose (master only). Then 0003 is Melanie's patch, using the name\n> > SCAN for the new match bit scan phase. I've attached an updated\n> > version of my \"phase diagram\" finger painting, to show how it looks\n> > with these three patches. \"scan*\" is new.\n>\n> Patches 0002, 0003 no longer apply to the master branch, seemingly\n> because of subsequent changes to pgstat, so need rebasing.\n\nI am changing the status to \"Waiting on Author\" as the patch does not\napply on Head.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 10 Jul 2021 18:43:14 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Sat, Jul 10, 2021 at 9:13 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, May 31, 2021 at 10:47 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Sat, Mar 6, 2021 at 12:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 2, 2021 at 11:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > > On Fri, Feb 12, 2021 at 11:02 AM Melanie Plageman\n> > > > <melanieplageman@gmail.com> wrote:\n> > > > > I just attached the diff.\n> > > >\n> > > > Squashed into one patch for the cfbot to chew on, with a few minor\n> > > > adjustments to a few comments.\n> > >\n> > > I did some more minor tidying of comments and naming. It's been on my\n> > > to-do-list to update some phase names after commit 3048898e, and while\n> > > doing that I couldn't resist the opportunity to change DONE to FREE,\n> > > which somehow hurts my brain less, and makes much more obvious sense\n> > > after the bugfix in CF #3031 that splits DONE into two separate\n> > > phases. It also pairs obviously with ALLOCATE. I include a copy of\n> > > that bugix here too as 0001, because I'll likely commit that first, so\n> > > I rebased the stack of patches that way. 0002 includes the renaming I\n> > > propose (master only). Then 0003 is Melanie's patch, using the name\n> > > SCAN for the new match bit scan phase. I've attached an updated\n> > > version of my \"phase diagram\" finger painting, to show how it looks\n> > > with these three patches. \"scan*\" is new.\n> >\n> > Patches 0002, 0003 no longer apply to the master branch, seemingly\n> > because of subsequent changes to pgstat, so need rebasing.\n>\n> I am changing the status to \"Waiting on Author\" as the patch does not\n> apply on Head.\n>\n> Regards,\n> Vignesh\n>\n>\n\nRebased patches attached. I will change status back to \"Ready for Committer\"",
"msg_date": "Fri, 30 Jul 2021 16:34:34 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 04:34:34PM -0400, Melanie Plageman wrote:\n> On Sat, Jul 10, 2021 at 9:13 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, May 31, 2021 at 10:47 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > >\n> > > On Sat, Mar 6, 2021 at 12:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > >\n> > > > On Tue, Mar 2, 2021 at 11:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > > > On Fri, Feb 12, 2021 at 11:02 AM Melanie Plageman\n> > > > > <melanieplageman@gmail.com> wrote:\n> > > > > > I just attached the diff.\n> > > > >\n> > > > > Squashed into one patch for the cfbot to chew on, with a few minor\n> > > > > adjustments to a few comments.\n> > > >\n> > > > I did some more minor tidying of comments and naming. It's been on my\n> > > > to-do-list to update some phase names after commit 3048898e, and while\n> > > > doing that I couldn't resist the opportunity to change DONE to FREE,\n> > > > which somehow hurts my brain less, and makes much more obvious sense\n> > > > after the bugfix in CF #3031 that splits DONE into two separate\n> > > > phases. It also pairs obviously with ALLOCATE. I include a copy of\n> > > > that bugix here too as 0001, because I'll likely commit that first, so\n\n\nHi Thomas,\n\nDo you intend to commit 0001 soon? Specially if this apply to 14 should\nbe committed in the next days.\n\n> > > > I rebased the stack of patches that way. 0002 includes the renaming I\n> > > > propose (master only). Then 0003 is Melanie's patch, using the name\n> > > > SCAN for the new match bit scan phase. I've attached an updated\n> > > > version of my \"phase diagram\" finger painting, to show how it looks\n> > > > with these three patches. \"scan*\" is new.\n> > >\n\n0002: my only concern is that this will cause innecesary pain in\nbackpatch-ing future code... 
but not doing that myself will let that to\nthe experts\n\n0003: i'm testing this now, not at a big scale but just to try to find\nproblems\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Mon, 20 Sep 2021 16:29:26 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 9:29 AM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n> Do you intend to commit 0001 soon? Specially if this apply to 14 should\n> be committed in the next days.\n\nThanks for the reminder. Yes, I'm looking at this now, and looking\ninto the crash of this patch set on CI:\n\nhttps://cirrus-ci.com/task/5282889613967360\n\nUnfortunately, cfbot is using very simple and old CI rules which don't\nhave a core dump analysis step on that OS. :-( (I have a big upgrade\nto all this CI stuff in the pipeline to fix that, get full access to\nall logs, go faster, and many other improvements, after learning a lot\nof tricks about running these types of systems over the past year --\nmore soon.)\n\n> 0003: i'm testing this now, not at a big scale but just to try to find\n> problems\n\nThanks!\n\n\n",
"msg_date": "Fri, 1 Oct 2021 11:57:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "> Rebased patches attached. I will change status back to \"Ready for Committer\"\n\nThe CI showed a crash on freebsd, which I reproduced.\nhttps://cirrus-ci.com/task/5203060415791104\n\nThe crash is evidenced in 0001 - but only ~15% of the time.\n\nI think it's the same thing which was committed and then reverted here, so\nmaybe I'm not saying anything new.\n\nhttps://commitfest.postgresql.org/33/3031/\nhttps://www.postgresql.org/message-id/flat/20200929061142.GA29096@paquier.xyz\n\n(gdb) p pstate->build_barrier->phase \nCannot access memory at address 0x7f82e0fa42f4\n\n#1 0x00007f13de34f801 in __GI_abort () at abort.c:79\n#2 0x00005638e6a16d28 in ExceptionalCondition (conditionName=conditionName@entry=0x5638e6b62850 \"!pstate || BarrierPhase(&pstate->build_barrier) >= PHJ_BUILD_RUN\",\n errorType=errorType@entry=0x5638e6a6f00b \"FailedAssertion\", fileName=fileName@entry=0x5638e6b625be \"nodeHash.c\", lineNumber=lineNumber@entry=3305) at assert.c:69\n#3 0x00005638e678085b in ExecHashTableDetach (hashtable=0x5638e8e6ca88) at nodeHash.c:3305\n#4 0x00005638e6784656 in ExecShutdownHashJoin (node=node@entry=0x5638e8e57cb8) at nodeHashjoin.c:1400\n#5 0x00005638e67666d8 in ExecShutdownNode (node=0x5638e8e57cb8) at execProcnode.c:812\n#6 ExecShutdownNode (node=0x5638e8e57cb8) at execProcnode.c:772\n#7 0x00005638e67cd5b1 in planstate_tree_walker (planstate=planstate@entry=0x5638e8e58580, walker=walker@entry=0x5638e6766680 <ExecShutdownNode>, context=context@entry=0x0) at nodeFuncs.c:4009\n#8 0x00005638e67666b2 in ExecShutdownNode (node=0x5638e8e58580) at execProcnode.c:792\n#9 ExecShutdownNode (node=0x5638e8e58580) at execProcnode.c:772\n#10 0x00005638e67cd5b1 in planstate_tree_walker (planstate=planstate@entry=0x5638e8e58418, walker=walker@entry=0x5638e6766680 <ExecShutdownNode>, context=context@entry=0x0) at nodeFuncs.c:4009\n#11 0x00005638e67666b2 in ExecShutdownNode (node=0x5638e8e58418) at execProcnode.c:792\n#12 ExecShutdownNode 
(node=node@entry=0x5638e8e58418) at execProcnode.c:772\n#13 0x00005638e675f518 in ExecutePlan (execute_once=<optimized out>, dest=0x5638e8df0058, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT,\n use_parallel_mode=<optimized out>, planstate=0x5638e8e58418, estate=0x5638e8e57a10) at execMain.c:1658\n#14 standard_ExecutorRun () at execMain.c:410\n#15 0x00005638e6763e0a in ParallelQueryMain (seg=0x5638e8d823d8, toc=0x7f13df4e9000) at execParallel.c:1493\n#16 0x00005638e663f6c7 in ParallelWorkerMain () at parallel.c:1495\n#17 0x00005638e68542e4 in StartBackgroundWorker () at bgworker.c:858\n#18 0x00005638e6860f53 in do_start_bgworker (rw=<optimized out>) at postmaster.c:5883\n#19 maybe_start_bgworkers () at postmaster.c:6108\n#20 0x00005638e68619e5 in sigusr1_handler (postgres_signal_arg=<optimized out>) at postmaster.c:5272\n#21 <signal handler called>\n#22 0x00007f13de425ff7 in __GI___select (nfds=nfds@entry=7, readfds=readfds@entry=0x7ffef03b8400, writefds=writefds@entry=0x0, exceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7ffef03b8360)\n at ../sysdeps/unix/sysv/linux/select.c:41\n#23 0x00005638e68620ce in ServerLoop () at postmaster.c:1765\n#24 0x00005638e6863bcc in PostmasterMain () at postmaster.c:1473\n#25 0x00005638e658fd00 in main (argc=8, argv=0x5638e8d54730) at main.c:198\n\n\n",
"msg_date": "Sat, 6 Nov 2021 22:04:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Sat, Nov 6, 2021 at 11:04 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> > Rebased patches attached. I will change status back to \"Ready for Committer\"\n>\n> The CI showed a crash on freebsd, which I reproduced.\n> https://cirrus-ci.com/task/5203060415791104\n>\n> The crash is evidenced in 0001 - but only ~15% of the time.\n>\n> I think it's the same thing which was committed and then reverted here, so\n> maybe I'm not saying anything new.\n>\n> https://commitfest.postgresql.org/33/3031/\n> https://www.postgresql.org/message-id/flat/20200929061142.GA29096@paquier.xyz\n>\n> (gdb) p pstate->build_barrier->phase\n> Cannot access memory at address 0x7f82e0fa42f4\n>\n> #1 0x00007f13de34f801 in __GI_abort () at abort.c:79\n> #2 0x00005638e6a16d28 in ExceptionalCondition (conditionName=conditionName@entry=0x5638e6b62850 \"!pstate || BarrierPhase(&pstate->build_barrier) >= PHJ_BUILD_RUN\",\n> errorType=errorType@entry=0x5638e6a6f00b \"FailedAssertion\", fileName=fileName@entry=0x5638e6b625be \"nodeHash.c\", lineNumber=lineNumber@entry=3305) at assert.c:69\n> #3 0x00005638e678085b in ExecHashTableDetach (hashtable=0x5638e8e6ca88) at nodeHash.c:3305\n> #4 0x00005638e6784656 in ExecShutdownHashJoin (node=node@entry=0x5638e8e57cb8) at nodeHashjoin.c:1400\n> #5 0x00005638e67666d8 in ExecShutdownNode (node=0x5638e8e57cb8) at execProcnode.c:812\n> #6 ExecShutdownNode (node=0x5638e8e57cb8) at execProcnode.c:772\n> #7 0x00005638e67cd5b1 in planstate_tree_walker (planstate=planstate@entry=0x5638e8e58580, walker=walker@entry=0x5638e6766680 <ExecShutdownNode>, context=context@entry=0x0) at nodeFuncs.c:4009\n> #8 0x00005638e67666b2 in ExecShutdownNode (node=0x5638e8e58580) at execProcnode.c:792\n> #9 ExecShutdownNode (node=0x5638e8e58580) at execProcnode.c:772\n> #10 0x00005638e67cd5b1 in planstate_tree_walker (planstate=planstate@entry=0x5638e8e58418, walker=walker@entry=0x5638e6766680 <ExecShutdownNode>, context=context@entry=0x0) at nodeFuncs.c:4009\n> 
#11 0x00005638e67666b2 in ExecShutdownNode (node=0x5638e8e58418) at execProcnode.c:792\n> #12 ExecShutdownNode (node=node@entry=0x5638e8e58418) at execProcnode.c:772\n> #13 0x00005638e675f518 in ExecutePlan (execute_once=<optimized out>, dest=0x5638e8df0058, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT,\n> use_parallel_mode=<optimized out>, planstate=0x5638e8e58418, estate=0x5638e8e57a10) at execMain.c:1658\n> #14 standard_ExecutorRun () at execMain.c:410\n> #15 0x00005638e6763e0a in ParallelQueryMain (seg=0x5638e8d823d8, toc=0x7f13df4e9000) at execParallel.c:1493\n> #16 0x00005638e663f6c7 in ParallelWorkerMain () at parallel.c:1495\n> #17 0x00005638e68542e4 in StartBackgroundWorker () at bgworker.c:858\n> #18 0x00005638e6860f53 in do_start_bgworker (rw=<optimized out>) at postmaster.c:5883\n> #19 maybe_start_bgworkers () at postmaster.c:6108\n> #20 0x00005638e68619e5 in sigusr1_handler (postgres_signal_arg=<optimized out>) at postmaster.c:5272\n> #21 <signal handler called>\n> #22 0x00007f13de425ff7 in __GI___select (nfds=nfds@entry=7, readfds=readfds@entry=0x7ffef03b8400, writefds=writefds@entry=0x0, exceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7ffef03b8360)\n> at ../sysdeps/unix/sysv/linux/select.c:41\n> #23 0x00005638e68620ce in ServerLoop () at postmaster.c:1765\n> #24 0x00005638e6863bcc in PostmasterMain () at postmaster.c:1473\n> #25 0x00005638e658fd00 in main (argc=8, argv=0x5638e8d54730) at main.c:198\n\nYes, this looks like that issue.\n\nI've attached a v8 set with the fix I suggested in [1] included.\n(I added it to 0001).\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/20200929061142.GA29096%40paquier.xyz",
"msg_date": "Wed, 17 Nov 2021 13:45:06 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "small mistake in v8.\nv9 attached.\n\n- Melanie",
"msg_date": "Wed, 17 Nov 2021 16:03:13 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 01:45:06PM -0500, Melanie Plageman wrote:\n> On Sat, Nov 6, 2021 at 11:04 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > > Rebased patches attached. I will change status back to \"Ready for Committer\"\n> >\n> > The CI showed a crash on freebsd, which I reproduced.\n> > https://cirrus-ci.com/task/5203060415791104\n> >\n> > The crash is evidenced in 0001 - but only ~15% of the time.\n> >\n> > I think it's the same thing which was committed and then reverted here, so\n> > maybe I'm not saying anything new.\n> >\n> > https://commitfest.postgresql.org/33/3031/\n> > https://www.postgresql.org/message-id/flat/20200929061142.GA29096@paquier.xyz\n> \n> Yes, this looks like that issue.\n> \n> I've attached a v8 set with the fix I suggested in [1] included.\n> (I added it to 0001).\n\nThis is still crashing :(\nhttps://cirrus-ci.com/task/6738329224871936\nhttps://cirrus-ci.com/task/4895130286030848\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 20 Nov 2021 21:48:48 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Sun, Nov 21, 2021 at 4:48 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Wed, Nov 17, 2021 at 01:45:06PM -0500, Melanie Plageman wrote:\n> > Yes, this looks like that issue.\n> >\n> > I've attached a v8 set with the fix I suggested in [1] included.\n> > (I added it to 0001).\n>\n> This is still crashing :(\n> https://cirrus-ci.com/task/6738329224871936\n> https://cirrus-ci.com/task/4895130286030848\n\nI added a core file backtrace to cfbot's CI recipe a few days ago, so\nnow we have:\n\nhttps://cirrus-ci.com/task/5676480098205696\n\n#3 0x00000000009cf57e in ExceptionalCondition (conditionName=0x29cae8\n\"BarrierParticipants(&accessor->shared->batch_barrier) == 1\",\nerrorType=<optimized out>, fileName=0x2ae561 \"nodeHash.c\",\nlineNumber=lineNumber@entry=2224) at assert.c:69\nNo locals.\n#4 0x000000000071575e in ExecParallelScanHashTableForUnmatched\n(hjstate=hjstate@entry=0x80a60a3c8,\necontext=econtext@entry=0x80a60ae98) at nodeHash.c:2224\n\n\n",
"msg_date": "Sat, 27 Nov 2021 09:11:21 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Fri, Nov 26, 2021 at 3:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sun, Nov 21, 2021 at 4:48 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Wed, Nov 17, 2021 at 01:45:06PM -0500, Melanie Plageman wrote:\n> > > Yes, this looks like that issue.\n> > >\n> > > I've attached a v8 set with the fix I suggested in [1] included.\n> > > (I added it to 0001).\n> >\n> > This is still crashing :(\n> > https://cirrus-ci.com/task/6738329224871936\n> > https://cirrus-ci.com/task/4895130286030848\n>\n> I added a core file backtrace to cfbot's CI recipe a few days ago, so\n> now we have:\n>\n> https://cirrus-ci.com/task/5676480098205696\n>\n> #3 0x00000000009cf57e in ExceptionalCondition (conditionName=0x29cae8\n> \"BarrierParticipants(&accessor->shared->batch_barrier) == 1\",\n> errorType=<optimized out>, fileName=0x2ae561 \"nodeHash.c\",\n> lineNumber=lineNumber@entry=2224) at assert.c:69\n> No locals.\n> #4 0x000000000071575e in ExecParallelScanHashTableForUnmatched\n> (hjstate=hjstate@entry=0x80a60a3c8,\n> econtext=econtext@entry=0x80a60ae98) at nodeHash.c:2224\n\nI believe this assert can be safely removed.\n\nIt is possible for a worker to attach to the batch barrier after the\n\"last\" worker was elected to scan and emit unmatched inner tuples. 
This\nis safe because the batch barrier is already in phase PHJ_BATCH_SCAN\nand this newly attached worker will simply detach from the batch\nbarrier and look for a new batch to work on.\n\nThe order of events would be as follows:\n\nW1: advances batch to PHJ_BATCH_SCAN\nW2: attaches to batch barrier in ExecParallelHashJoinNewBatch()\nW1: calls ExecParallelScanHashTableForUnmatched() (2 workers attached to\nbarrier at this point)\nW2: detaches from the batch barrier\n\nThe attached v10 patch removes this assert and updates the comment in\nExecParallelScanHashTableForUnmatched().\n\nI'm not sure if I should add more detail about this scenario in\nExecParallelHashJoinNewBatch() under PHJ_BATCH_SCAN or if the detail in\nExecParallelScanHashTableForUnmatched() is sufficient.\n\n- Melanie",
"msg_date": "Tue, 11 Jan 2022 16:30:37 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 10:30 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Fri, Nov 26, 2021 at 3:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > #3 0x00000000009cf57e in ExceptionalCondition (conditionName=0x29cae8\n> > \"BarrierParticipants(&accessor->shared->batch_barrier) == 1\",\n> > errorType=<optimized out>, fileName=0x2ae561 \"nodeHash.c\",\n> > lineNumber=lineNumber@entry=2224) at assert.c:69\n> > No locals.\n> > #4 0x000000000071575e in ExecParallelScanHashTableForUnmatched\n> > (hjstate=hjstate@entry=0x80a60a3c8,\n> > econtext=econtext@entry=0x80a60ae98) at nodeHash.c:2224\n>\n> I believe this assert can be safely removed.\n\nAgreed.\n\nI was looking at this with a view to committing it, but I need more\ntime. This will be at the front of my queue when the tree reopens.\nI'm trying to find the tooling I had somewhere that could let you test\nattaching and detaching at every phase.\n\nThe attached version is just pgindent'd.",
"msg_date": "Fri, 8 Apr 2022 23:29:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 20:30 Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jan 12, 2022 at 10:30 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > On Fri, Nov 26, 2021 at 3:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > #3 0x00000000009cf57e in ExceptionalCondition (conditionName=0x29cae8\n> > > \"BarrierParticipants(&accessor->shared->batch_barrier) == 1\",\n> > > errorType=<optimized out>, fileName=0x2ae561 \"nodeHash.c\",\n> > > lineNumber=lineNumber@entry=2224) at assert.c:69\n> > > No locals.\n> > > #4 0x000000000071575e in ExecParallelScanHashTableForUnmatched\n> > > (hjstate=hjstate@entry=0x80a60a3c8,\n> > > econtext=econtext@entry=0x80a60ae98) at nodeHash.c:2224\n> >\n> > I believe this assert can be safely removed.\n>\n> Agreed.\n>\n> I was looking at this with a view to committing it, but I need more\n> time. This will be at the front of my queue when the tree reopens.\n> I'm trying to find the tooling I had somewhere that could let you test\n> attaching and detaching at every phase.\n>\n> The attached version is just pgindent'd.\n\nHi Thomas\n\nThis patch is marked as \"Waiting for Committer\" in the current commitfest [1]\nwith yourself as committer; do you have any plans to move ahead with this?\n\n[1] https://commitfest.postgresql.org/40/2903/\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Thu, 17 Nov 2022 13:21:54 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Thu, Nov 17, 2022 at 5:22 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> This patch is marked as \"Waiting for Committer\" in the current commitfest [1]\n> with yourself as committer; do you have any plans to move ahead with this?\n\nYeah, sorry for lack of progress. Aiming to get this in shortly.\n\n\n",
"msg_date": "Thu, 17 Nov 2022 20:14:07 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "Here is a rebased and lightly hacked-upon version that I'm testing.\n\n0001-Scan-for-unmatched-hash-join-tuples-in-memory-order.patch\n\n * this change can stand on its own, separately from any PHJ changes\n * renamed hashtable->current_chunk[_idx] to unmatched_scan_{chunk,idx}\n * introduced a local variable to avoid some x->y->z stuff\n * removed some references to no-longer-relevant hj_XXX variables in\nthe Prep function\n\nI haven't attempted to prove anything about the performance of this\none yet, but it seems fairly obvious that it can't be worse than what\nwe're doing today. I have suppressed the urge to look into improving\nlocality and software prefetching.\n\n0002-Parallel-Hash-Full-Join.patch\n\n * reuse the same unmatched_scan_{chunk,idx} variables as above\n * rename the list of chunks to scan to work_queue\n * fix race/memory leak if we see PHJ_BATCH_SCAN when we attach (it\nwasn't OK to just fall through)\n\nThat \"work queue\" name/concept already exists in other places that\nneed to process every chunk, namely rebucketing and repartitioning.\nIn later work, I'd like to harmonise these work queues, but I'm not\ntrying to increase the size of this patch set at this time, I just\nwant to use consistent naming.\n\nI don't love the way that both ExecHashTableDetachBatch() and\nExecParallelPrepHashTableForUnmatched() duplicate logic relating to\nthe _SCAN/_FREE protocol, but I'm struggling to find a better idea.\nPerhaps I just need more coffee.\n\nI think your idea of opportunistically joining the scan if it's\nalready running makes sense to explore for a later step, ie to make\nmulti-batch PHFJ fully fair, and I think that should be a fairly easy\ncode change, and I put in some comments where changes would be needed.\n\nContinuing to test, more soon.",
"msg_date": "Sat, 25 Mar 2023 09:21:34 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 09:21:34AM +1300, Thomas Munro wrote:\n> * reuse the same umatched_scan_{chunk,idx} variables as above\n> * rename the list of chunks to scan to work_queue\n> * fix race/memory leak if we see PHJ_BATCH_SCAN when we attach (it\n> wasn't OK to just fall through)\n\nah, good catch.\n\n> I don't love the way that both ExecHashTableDetachBatch() and\n> ExecParallelPrepHashTableForUnmatched() duplicate logic relating to\n> the _SCAN/_FREE protocol, but I'm struggling to find a better idea.\n> Perhaps I just need more coffee.\n\nI'm not sure if I have strong feelings either way.\nTo confirm I understand, though: in ExecHashTableDetachBatch(), the call\nto BarrierArriveAndDetachExceptLast() serves only to advance the barrier\nphase through _SCAN, right? It doesn't really matter if this worker is\nthe last worker since BarrierArriveAndDetach() handles that for us.\nThere isn't another barrier function to do this (and I mostly think it\nis fine), but I did have to think on it for a bit.\n\nOh, and, unrelated, but it is maybe worth updating the BarrierAttach()\nfunction comment to mention BarrierArriveAndDetachExceptLast().\n\n> I think your idea of opportunistically joining the scan if it's\n> already running makes sense to explore for a later step, ie to make\n> multi-batch PHFJ fully fair, and I think that should be a fairly easy\n> code change, and I put in some comments where changes would be needed.\n\nmakes sense.\n\nI have some very minor pieces of feedback, mainly about extraneous\ncommas that made me uncomfortable ;)\n\n> From 8b526377eb4a4685628624e75743aedf37dd5bfe Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Fri, 24 Mar 2023 14:19:07 +1300\n> Subject: [PATCH v12 1/2] Scan for unmatched hash join tuples in memory order.\n> \n> In a full/right outer join, we need to scan every tuple in the hash\n> table to find the ones that were not matched while probing, so that we\n\nGiven how you are 
using the word \"so\" here, I think that comma before it\nis not needed.\n\n> @@ -2083,58 +2079,45 @@ bool\n> ExecScanHashTableForUnmatched(HashJoinState *hjstate, ExprContext *econtext)\n> {\n> \tHashJoinTable hashtable = hjstate->hj_HashTable;\n> -\tHashJoinTuple hashTuple = hjstate->hj_CurTuple;\n> +\tHashMemoryChunk chunk;\n> \n> -\tfor (;;)\n> +\twhile ((chunk = hashtable->unmatched_scan_chunk))\n> \t{\n> -\t\t/*\n> -\t\t * hj_CurTuple is the address of the tuple last returned from the\n> -\t\t * current bucket, or NULL if it's time to start scanning a new\n> -\t\t * bucket.\n> -\t\t */\n> -\t\tif (hashTuple != NULL)\n> -\t\t\thashTuple = hashTuple->next.unshared;\n> -\t\telse if (hjstate->hj_CurBucketNo < hashtable->nbuckets)\n> -\t\t{\n> -\t\t\thashTuple = hashtable->buckets.unshared[hjstate->hj_CurBucketNo];\n> -\t\t\thjstate->hj_CurBucketNo++;\n> -\t\t}\n> -\t\telse if (hjstate->hj_CurSkewBucketNo < hashtable->nSkewBuckets)\n> +\t\twhile (hashtable->unmatched_scan_idx < chunk->used)\n> \t\t{\n> -\t\t\tint\t\t\tj = hashtable->skewBucketNums[hjstate->hj_CurSkewBucketNo];\n> +\t\t\tHashJoinTuple hashTuple = (HashJoinTuple)\n> +\t\t\t(HASH_CHUNK_DATA(hashtable->unmatched_scan_chunk) +\n> +\t\t\t hashtable->unmatched_scan_idx);\n> \n> -\t\t\thashTuple = hashtable->skewBucket[j]->tuples;\n> -\t\t\thjstate->hj_CurSkewBucketNo++;\n> -\t\t}\n> -\t\telse\n> -\t\t\tbreak;\t\t\t\t/* finished all buckets */\n> +\t\t\tMinimalTuple tuple = HJTUPLE_MINTUPLE(hashTuple);\n> +\t\t\tint\t\t\thashTupleSize = (HJTUPLE_OVERHEAD + tuple->t_len);\n> \n> -\t\twhile (hashTuple != NULL)\n> -\t\t{\n> -\t\t\tif (!HeapTupleHeaderHasMatch(HJTUPLE_MINTUPLE(hashTuple)))\n> -\t\t\t{\n> -\t\t\t\tTupleTableSlot *inntuple;\n> +\t\t\t/* next tuple in this chunk */\n> +\t\t\thashtable->unmatched_scan_idx += MAXALIGN(hashTupleSize);\n> \n> -\t\t\t\t/* insert hashtable's tuple into exec slot */\n> -\t\t\t\tinntuple = ExecStoreMinimalTuple(HJTUPLE_MINTUPLE(hashTuple),\n> -\t\t\t\t\t\t\t\t\t\t\t\t 
hjstate->hj_HashTupleSlot,\n> -\t\t\t\t\t\t\t\t\t\t\t\t false);\t/* do not pfree */\n> -\t\t\t\tecontext->ecxt_innertuple = inntuple;\n> +\t\t\tif (HeapTupleHeaderHasMatch(HJTUPLE_MINTUPLE(hashTuple)))\n> +\t\t\t\tcontinue;\n> \n> -\t\t\t\t/*\n> -\t\t\t\t * Reset temp memory each time; although this function doesn't\n> -\t\t\t\t * do any qual eval, the caller will, so let's keep it\n> -\t\t\t\t * parallel to ExecScanHashBucket.\n> -\t\t\t\t */\n> -\t\t\t\tResetExprContext(econtext);\n\nI don't think I had done this before. Good call.\n\n> +\t\t\t/* insert hashtable's tuple into exec slot */\n> +\t\t\tecontext->ecxt_innertuple =\n> +\t\t\t\tExecStoreMinimalTuple(HJTUPLE_MINTUPLE(hashTuple),\n> +\t\t\t\t\t\t\t\t\t hjstate->hj_HashTupleSlot,\n> +\t\t\t\t\t\t\t\t\t false);\n\n> From 6f4e82f0569e5b388440ca0ef268dd307388e8f8 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Fri, 24 Mar 2023 15:23:14 +1300\n> Subject: [PATCH v12 2/2] Parallel Hash Full Join.\n> \n> Full and right outer joins were not supported in the initial\n> implementation of Parallel Hash Join, because of deadlock hazards (see\n\nno comma needed before the \"because\" here\n\n> discussion). Therefore FULL JOIN inhibited page-based parallelism,\n> as the other join strategies can't do it either.\n\nI actually don't quite understand what this means? It's been awhile for\nme, so perhaps I'm being dense, but what is page-based parallelism?\nAlso, I would put a comma after \"Therefore\" :)\n\n> Add a new PHJ phase PHJ_BATCH_SCAN that scans for unmatched tuples on\n> the inner side of one batch's hash table. For now, sidestep the\n> deadlock problem by terminating parallelism there. 
The last process to\n> arrive at that phase emits the unmatched tuples, while others detach and\n> are free to go and work on other batches, if there are any, but\n> otherwise they finish the join early.\n> \n> That unfairness is considered acceptable for now, because it's better\n> than no parallelism at all. The build and probe phases are run in\n> parallel, and the new scan-for-unmatched phase, while serial, is usually\n> applied to the smaller of the two relations and is either limited by\n> some multiple of work_mem, or it's too big and is partitioned into\n> batches and then the situation is improved by batch-level parallelism.\n> In future work on deadlock avoidance strategies, we may find a way to\n> parallelize the new phase safely.\n\nIs it worth mentioning something about parallel-oblivious parallel hash\njoin not being able to do this still? Or is that obvious?\n\n> *\n> @@ -2908,6 +3042,12 @@ ExecParallelHashTupleAlloc(HashJoinTable hashtable, size_t size,\n> \tchunk->next.shared = hashtable->batches[curbatch].shared->chunks;\n> \thashtable->batches[curbatch].shared->chunks = chunk_shared;\n> \n> +\t/*\n> +\t * Also make this the head of the work_queue list. This is used as a\n> +\t * cursor for scanning all chunks in the batch.\n> +\t */\n> +\thashtable->batches[curbatch].shared->work_queue = chunk_shared;\n> +\n> \tif (size <= HASH_CHUNK_THRESHOLD)\n> \t{\n> \t\t/*\n> @@ -3116,18 +3256,31 @@ ExecHashTableDetachBatch(HashJoinTable hashtable)\n> \t{\n> \t\tint\t\t\tcurbatch = hashtable->curbatch;\n> \t\tParallelHashJoinBatch *batch = hashtable->batches[curbatch].shared;\n> +\t\tbool\t\tattached = true;\n> \n> \t\t/* Make sure any temporary files are closed. */\n> \t\tsts_end_parallel_scan(hashtable->batches[curbatch].inner_tuples);\n> \t\tsts_end_parallel_scan(hashtable->batches[curbatch].outer_tuples);\n> \n> -\t\t/* Detach from the batch we were last working on. 
*/\n> -\t\tif (BarrierArriveAndDetach(&batch->batch_barrier))\n> +\t\t/* After attaching we always get at least to PHJ_BATCH_PROBE. */\n> +\t\tAssert(BarrierPhase(&batch->batch_barrier) == PHJ_BATCH_PROBE ||\n> +\t\t\t BarrierPhase(&batch->batch_barrier) == PHJ_BATCH_SCAN);\n> +\n> +\t\t/*\n> +\t\t * Even if we aren't doing a full/right outer join, we'll step through\n> +\t\t * the PHJ_BATCH_SCAN phase just to maintain the invariant that freeing\n> +\t\t * happens in PHJ_BATCH_FREE, but that'll be wait-free.\n> +\t\t */\n> +\t\tif (BarrierPhase(&batch->batch_barrier) == PHJ_BATCH_PROBE)\n\nfull/right joins should never fall into this code path, right?\n\nIf so, would we be able to assert about that? Maybe it doesn't make\nsense, though...\n\n> +\t\t\tattached = BarrierArriveAndDetachExceptLast(&batch->batch_barrier);\n> +\t\tif (attached && BarrierArriveAndDetach(&batch->batch_barrier))\n> \t\t{\n> \t\t\t/*\n> -\t\t\t * Technically we shouldn't access the barrier because we're no\n> -\t\t\t * longer attached, but since there is no way it's moving after\n> -\t\t\t * this point it seems safe to make the following assertion.\n> +\t\t\t * We are not longer attached to the batch barrier, but we're the\n> +\t\t\t * process that was chosen to free resources and it's safe to\n> +\t\t\t * assert the current phase. The ParallelHashJoinBatch can't go\n> +\t\t\t * away underneath us while we are attached to the build barrier,\n> +\t\t\t * making this access safe.\n> \t\t\t */\n> \t\t\tAssert(BarrierPhase(&batch->batch_barrier) == PHJ_BATCH_FREE);\n\nOtherwise, LGTM.\n\n- Melanie\n\n\n",
"msg_date": "Sat, 25 Mar 2023 16:51:59 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Sun, Mar 26, 2023 at 9:52 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I have some very minor pieces of feedback, mainly about extraneous\n> commas that made me uncomfortable ;)\n\nOffensive punctuation removed.\n\n> > discussion). Therefore FULL JOIN inhibited page-based parallelism,\n> > as the other join strategies can't do it either.\n>\n> I actually don't quite understand what this means? It's been awhile for\n> me, so perhaps I'm being dense, but what is page-based parallelism?\n\nReworded. I just meant our usual kind of \"partial path\" parallelism\n(the kind when you don't know anything at all about the values of the\ntuples that each process sees, and typically it's chopped up by\nstorage pages at the scan level).\n\n> > That unfairness is considered acceptable for now, because it's better\n> > than no parallelism at all. The build and probe phases are run in\n> > parallel, and the new scan-for-unmatched phase, while serial, is usually\n> > applied to the smaller of the two relations and is either limited by\n> > some multiple of work_mem, or it's too big and is partitioned into\n> > batches and then the situation is improved by batch-level parallelism.\n> > In future work on deadlock avoidance strategies, we may find a way to\n> > parallelize the new phase safely.\n>\n> Is it worth mentioning something about parallel-oblivious parallel hash\n> join not being able to do this still? Or is that obvious?\n\nThat's kind of what I meant above.\n\n> > @@ -3116,18 +3256,31 @@ ExecHashTableDetachBatch(HashJoinTable hashtable)\n\n> full/right joins should never fall into this code path, right?\n\nYeah, this is the normal way we detach from a batch. This is reached\nwhen shutting down the executor early, or when moving to the next\nbatch, etc.\n\n***\n\nI found another problem. I realised that ... FULL JOIN ... 
LIMIT n\nmight be able to give wrong answers with unlucky scheduling.\nUnfortunately I have been unable to reproduce the phenomenon I am\nimagining yet but I can't think of any mechanism that prevents the\nfollowing sequence of events:\n\nP0 probes, pulls n tuples from the outer relation and then the\nexecutor starts to shut down (see commit 3452dc52 which pushed down\nLIMIT), but just then P1 attaches, right before P0 does. P1\ncontinues, and finds < n outer tuples while probing and then runs out\nso it enters the unmatched scan phase, and starts emitting bogusly\nunmatched tuples. Some outer tuples we needed to get the complete set\nof match bits and thus the right answer were buffered inside P0's\nsubplan and abandoned.\n\nI've attached a simple fixup for this problem. Short version: if\nyou're abandoning your PHJ_BATCH_PROBE phase without reaching the end,\nyou must be shutting down, so the executor must think it's OK to\nabandon tuples this process has buffered, so it must also be OK to\nthrow all unmatched tuples out the window too, as if this process was\nabout to emit them. Right?\n\n***\n\nWith all the long and abstract discussion of hard to explain problems\nin this thread and related threads, I thought I should take a step\nback and figure out a way to demonstrate what this thing really does\nvisually. I wanted to show that this is a very useful feature that\nunlocks previously unobtainable parallelism, and to show the\ncompromise we've had to make so far in an intuitive way. With some\nextra instrumentation hacked up locally, I produced the attached\n\"progress\" graphs for a very simple query: SELECT COUNT(*) FROM r FULL\nJOIN s USING (i). 
Imagine a time axis along the bottom, but I didn't\nbother to add numbers because I'm just trying to convey the 'shape' of\nexecution with relative times and synchronisation points.\n\nFigures 1-3 show that phases 'h' (hash) and 'p' (probe) are\nparallelised and finish sooner as we add more processes to help out,\nbut 's' (= the unmatched inner tuple scan) is not. Note that if all\ninner tuples are matched, 's' becomes extremely small and the\nparallelism is almost as fair as a plain old inner join, but here I've\nmaximised it: all inner tuples were unmatched, because the two\nrelations have no matches at all. Even if we achieve perfect linear\nscalability for the other phases, the speedup will be governed by\nhttps://en.wikipedia.org/wiki/Amdahl%27s_law and the only thing that\ncan mitigate that is if there is more useful work those early-quitting\nprocesses could do somewhere else in your query plan.\n\nFigure 4 shows that it gets a lot fairer in a multi-batch join,\nbecause there is usually useful work to do on other batches of the\nsame join. Notice how processes initially work on loading, probing\nand scanning different batches to reduce contention, but they are\ncapable of ganging up to load and/or probe the same batch if there is\nnothing else left to do (for example P2 and P3 both work on p5 near\nthe end). For now, they can't do that for the s phases. (BTW, the\nlittle gaps before loading is the allocation phase that I didn't\nbother to plot because they can't fit a label on them; this\nvisualisation technique is a WIP.)\n\nWith the \"opportunistic\" change we are discussing for later work,\nfigure 4 would become completely fair (P0 and P2 would be able to join\nin and help out with s6 and s7), but single-batch figures 1-3 would\nnot (that would require a different executor design AFAICT, or a\neureka insight we haven't had yet; see long-winded discussion).\n\nThe last things I'm thinking about now: Are the planner changes\nright? Are the tests enough? 
I suspect we'll finish up changing that\nchunk-based approach yet again in future work on memory efficiency,\nbut I'm OK with that; this change suits the current problem and we\ndon't know what we'll eventually settle on with more research.",
"msg_date": "Tue, 28 Mar 2023 12:03:42 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 7:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I found another problem. I realised that ... FULL JOIN ... LIMIT n\n> might be able to give wrong answers with unlucky scheduling.\n> Unfortunately I have been unable to reproduce the phenomenon I am\n> imagining yet but I can't think of any mechanism that prevents the\n> following sequence of events:\n>\n> P0 probes, pulls n tuples from the outer relation and then the\n> executor starts to shut down (see commit 3452dc52 which pushed down\n> LIMIT), but just then P1 attaches, right before P0 does. P1\n> continues, and finds < n outer tuples while probing and then runs out\n> so it enters the unmatched scan phase, and starts emitting bogusly\n> unmatched tuples. Some outer tuples we needed to get the complete set\n> of match bits and thus the right answer were buffered inside P0's\n> subplan and abandoned.\n>\n> I've attached a simple fixup for this problem. Short version: if\n> you're abandoning your PHJ_BATCH_PROBE phase without reaching the end,\n> you must be shutting down, so the executor must think it's OK to\n> abandon tuples this process has buffered, so it must also be OK to\n> throw all unmatched tuples out the window too, as if this process was\n> about to emit them. Right?\n\nI understand the scenario you are thinking of, however, I question how\nthose incorrectly formed tuples would ever be returned by the query. The\nhashjoin would only start to shutdown once enough tuples had been\nemitted to satisfy the limit, at which point, those tuples buffered in\np0 may be emitted by this worker but wouldn't be included in the query\nresult, no?\n\nI suppose even if what I said is true, we do not want the hashjoin node\nto ever produce incorrect tuples. 
In which case, your fix seems correct to me.\n\n> With all the long and abstract discussion of hard to explain problems\n> in this thread and related threads, I thought I should take a step\n> back and figure out a way to demonstrate what this thing really does\n> visually. I wanted to show that this is a very useful feature that\n> unlocks previously unobtainable parallelism, and to show the\n> compromise we've had to make so far in an intuitive way. With some\n> extra instrumentation hacked up locally, I produced the attached\n> \"progress\" graphs for a very simple query: SELECT COUNT(*) FROM r FULL\n> JOIN s USING (i). Imagine a time axis along the bottom, but I didn't\n> bother to add numbers because I'm just trying to convey the 'shape' of\n> execution with relative times and synchronisation points.\n>\n> Figures 1-3 show that phases 'h' (hash) and 'p' (probe) are\n> parallelised and finish sooner as we add more processes to help out,\n> but 's' (= the unmatched inner tuple scan) is not. Note that if all\n> inner tuples are matched, 's' becomes extremely small and the\n> parallelism is almost as fair as a plain old inner join, but here I've\n> maximised it: all inner tuples were unmatched, because the two\n> relations have no matches at all. Even if we achieve perfect linear\n> scalability for the other phases, the speedup will be governed by\n> https://en.wikipedia.org/wiki/Amdahl%27s_law and the only thing that\n> can mitigate that is if there is more useful work those early-quitting\n> processes could do somewhere else in your query plan.\n>\n> Figure 4 shows that it gets a lot fairer in a multi-batch join,\n> because there is usually useful work to do on other batches of the\n> same join. 
Notice how processes initially work on loading, probing\n> and scanning different batches to reduce contention, but they are\n> capable of ganging up to load and/or probe the same batch if there is\n> nothing else left to do (for example P2 and P3 both work on p5 near\n> the end). For now, they can't do that for the s phases. (BTW, the\n> little gaps before loading is the allocation phase that I didn't\n> bother to plot because they can't fit a label on them; this\n> visualisation technique is a WIP.)\n>\n> With the \"opportunistic\" change we are discussing for later work,\n> figure 4 would become completely fair (P0 and P2 would be able to join\n> in and help out with s6 and s7), but single-batch figures 1-3 would\n> not (that would require a different executor design AFAICT, or a\n> eureka insight we haven't had yet; see long-winded discussion).\n\nCool diagrams!\n\n> The last things I'm thinking about now: Are the planner changes\n> right?\n\nI think the current changes are correct. I wonder if we have to change\nanything in initial/final_cost_hashjoin to account for the fact that\nfor a single batch full/right parallel hash join, part of the\nexecution is serial. And, if so, do we need to consider the estimated\nnumber of unmatched tuples to be emitted?\n\n> Are the tests enough?\n\nSo, the tests currently in the patch set cover the unmatched tuple scan\nphase for single batch parallel full hash join. I've attached the\ndumbest possible addition to that which adds in a multi-batch full\nparallel hash join case. I did not do any checking to ensure I picked\nthe case which would add the least execution time to the test, etc.\n\nOf course, this does leave the skip_unmatched code you added uncovered,\nbut I think if we had the testing infrastructure to test that, we would\nbe on a beach somewhere reading a book instead of beating our heads\nagainst the wall trying to determine if there are any edge cases we are\nmissing in adding this feature.\n\n- Melanie",
"msg_date": "Thu, 30 Mar 2023 15:23:15 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Fri, Mar 31, 2023 at 8:23 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I understand the scenario you are thinking of, however, I question how\n> those incorrectly formed tuples would ever be returned by the query. The\n> hashjoin would only start to shutdown once enough tuples had been\n> emitted to satisfy the limit, at which point, those tuples buffered in\n> p0 may be emitted by this worker but wouldn't be included in the query\n> result, no?\n\nYeah, I think I must have been confused by that too early on. The\nthing is, Gather asks every worker process for n tuples so that any\none of them could satisfy the LIMIT if required, but it's unknown\nwhich process's output the Gather node will receive first (or might\nmake it into intermediate nodes and affect the results). I guess to\nsee bogus unmatched tuples actually escaping anywhere (with the\nearlier patches) you'd need parallel leader off + diabolical\nscheduling?\n\nI thought about 3 solutions before settling on #3: (1)\nHypothetically, P1 could somehow steal/finish P0's work, but our\nexecutor has no mechanism for anything like that. (2) P0 isn't\nallowed to leave the probe early, instead it has to keep going but\nthrow away the tuples it'd normally emit, so we are sure we have all\nthe match bits in shared memory. (3) P0 seizes responsibility for\nemitting those tuples, but then does nothing because the top level\nexecutor doesn't want more tuples, which in practice looks like a flag\ntelling everyone else not to bother.\n\nIdea #1 would probably require shared address space (threads) and a\nnon-recursive executor, as speculated about a few times before, and\nthat type of magic could address several kinds of deadlock risks, but\nin this case we still wouldn't want to do that even if we could; it's\nwork that is provably (by idea #3's argument) a waste of time. 
Idea\n#2 is a horrible pessimisation of idea #1 within our existing executor\ndesign, but it helped me think about what it really means to be\nauthorised to throw away tuples from on high.\n\n> I suppose even if what I said is true, we do not want the hashjoin node\n> to ever produce incorrect tuples. In which case, your fix seems correct to me.\n\nYeah, that's a good way to put it.\n\n> > The last things I'm thinking about now: Are the planner changes\n> > right?\n>\n> I think the current changes are correct. I wonder if we have to change\n> anything in initial/final_cost_hashjoin to account for the fact that\n> for a single batch full/right parallel hash join, part of the\n> execution is serial. And, if so, do we need to consider the estimated\n> number of unmatched tuples to be emitted?\n\nI have no idea how to model that, and I'm assuming the existing model\nshould continue to work as well as it does today \"on average\". The\nexpected number of tuples will be the same across all workers, it's\njust an unfortunate implementation detail that the distribution sucks\n(but is still much better than a serial plan). I wondered if\nget_parallel_divisor() might provide some inspiration but that's\ndealing with a different problem: a partial extra process that will\ntake some of the work (ie tuples) away from the other processes, and\nthat's not the case here.\n\n> > Are the tests enough?\n>\n> So, the tests currently in the patch set cover the unmatched tuple scan\n> phase for single batch parallel full hash join. I've attached the\n> dumbest possible addition to that which adds in a multi-batch full\n> parallel hash join case. I did not do any checking to ensure I picked\n> the case which would add the least execution time to the test, etc.\n\nThanks, added.\n\nI should probably try to figure out how to get the join_hash tests to\nrun with smaller tables. It's one of the slower tests and this adds\nto it. 
I vaguely recall it was hard to get the batch counts to be\nstable across the build farm, which makes me hesitant to change the\ntests but perhaps I can figure out how to screw it down...\n\nI decided to drop the scan order change for now (0001 in v13). Yes,\nit's better than what we have now, but it seems to cut off some other\npossible ideas to do even better, so it feels premature to change it\nwithout more work. I changed the parallel unmatched scan back to\nbeing as similar as possible to the serial one for now.\n\nI committed the main patch.\n\nHere are a couple of ideas that came up while working on this, for future study:\n\n* the \"opportunistic help\" thing you once suggested to make it a\nlittle fairer in multi-batch cases. Quick draft attached, for future\nexperimentation. Seems to work pretty well, but could definitely be\ntidier and there may be holes in it. Pretty picture attached.\n\n* should we pass HJ_FILL_INNER(hjstate) into a new parameter\nfill_inner to ExecHashJoinImpl(), so that we can make specialised hash\njoin routines for the yes and no cases, so that we can remove\nbranching and memory traffic related to match bits?\n\n* could we use tagged pointers to track matched tuples? Tuples are\nMAXALIGNed, so bits 0 and 1 of pointers to them are certainly always\n0. Perhaps we could use bit 0 for \"matched\" and bit 1 for \"I am not\nthe last tuple in my chain, you'll have to check the next one too\".\nThen you could scan for unmatched without following many pointers, if\nyou're lucky. You could skip the required masking etc for that if\n!fill_inner.\n\n* should we use software prefetching to smooth over the random memory\norder problem when you do have to follow them? Though it's hard to\nprefetch chains, here we have an array full of pointers at least to\nthe first tuples in each chain. 
This probably goes along with the\ngeneral hash join memory prefetching work that I started a couple of\nyears back and need to restart for 17.\n\n* this idea is probably stupid overkill, but it's something that\nv13-0001 made me think about: could it be worth the effort to sample a\nfraction of the match bits in the hash table buckets (with the scheme\nabove), and determine whether you'll be emitting a high fraction of\nthe tuples, and then switch to chunk based so that you can do it in\nmemory order if so? That requires having the match flag in *two*\nplaces, which seems silly; you'd need some experimental evidence that\nany of this is worth bothering with\n\n* currently, the \"hash inner\" phase only loads tuples into batch 0's\nhash table (the so-called \"hybrid Grace\" technique), but if there are\n(say) 4 processes, you could actually load batches 0-3 into memory\nduring that phase, to avoid having to dump 1-3 out to disk and then\nimmediately load them back in again; you'd get to skip \"l1\", \"l2\",\n\"l3\" on those diagrams and finish a good bit faster",
"msg_date": "Fri, 31 Mar 2023 11:55:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I committed the main patch.\n\nThis left the following code in hash_inner_and_outer (joinpath.c):\n\n /*\n * If the joinrel is parallel-safe, we may be able to consider a\n * partial hash join. However, we can't handle JOIN_UNIQUE_OUTER,\n * because the outer path will be partial, and therefore we won't be\n * able to properly guarantee uniqueness. Similarly, we can't handle\n * JOIN_FULL and JOIN_RIGHT, because they can produce false null\n * extended rows. Also, the resulting path must not be parameterized.\n */\n if (joinrel->consider_parallel &&\n save_jointype != JOIN_UNIQUE_OUTER &&\n outerrel->partial_pathlist != NIL &&\n bms_is_empty(joinrel->lateral_relids))\n {\n\nThe comment is no longer in sync with the code: this if-test used to\nreject JOIN_FULL and JOIN_RIGHT, and no longer does so, but the comment\nstill claims it should. Shouldn't we drop the sentence beginning\n\"Similarly\"? (I see that there's now one sub-section that still rejects\nsuch cases, but it no longer seems correct to claim that they're rejected\noverall.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Apr 2023 15:37:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 7:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The comment is no longer in sync with the code: this if-test used to\n> reject JOIN_FULL and JOIN_RIGHT, and no longer does so, but the comment\n> still claims it should. Shouldn't we drop the sentence beginning\n> \"Similarly\"? (I see that there's now one sub-section that still rejects\n> such cases, but it no longer seems correct to claim that they're rejected\n> overall.)\n\nYeah, thanks. Done.\n\n\n",
"msg_date": "Wed, 5 Apr 2023 09:53:19 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I committed the main patch.\n\nBTW, it was easy to miss in all the buildfarm noise from\nlast-possible-minute patches, but chimaera just showed something\nthat looks like a bug in this code [1]:\n\n2023-04-08 12:25:28.709 UTC [18027:321] pg_regress/join_hash LOG: statement: savepoint settings;\n2023-04-08 12:25:28.709 UTC [18027:322] pg_regress/join_hash LOG: statement: set local max_parallel_workers_per_gather = 2;\n2023-04-08 12:25:28.710 UTC [18027:323] pg_regress/join_hash LOG: statement: explain (costs off)\n\t select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);\n2023-04-08 12:25:28.710 UTC [18027:324] pg_regress/join_hash LOG: statement: select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);\nTRAP: failed Assert(\"BarrierParticipants(&batch->batch_barrier) == 1\"), File: \"nodeHash.c\", Line: 2118, PID: 19147\npostgres: parallel worker for PID 18027 (ExceptionalCondition+0x84)[0x10ae2bfa4]\npostgres: parallel worker for PID 18027 (ExecParallelPrepHashTableForUnmatched+0x224)[0x10aa67544]\npostgres: parallel worker for PID 18027 (+0x3db868)[0x10aa6b868]\npostgres: parallel worker for PID 18027 (+0x3c4204)[0x10aa54204]\npostgres: parallel worker for PID 18027 (+0x3c81b8)[0x10aa581b8]\npostgres: parallel worker for PID 18027 (+0x3b3d28)[0x10aa43d28]\npostgres: parallel worker for PID 18027 (standard_ExecutorRun+0x208)[0x10aa39768]\npostgres: parallel worker for PID 18027 (ParallelQueryMain+0x2bc)[0x10aa4092c]\npostgres: parallel worker for PID 18027 (ParallelWorkerMain+0x660)[0x10a874870]\npostgres: parallel worker for PID 18027 (StartBackgroundWorker+0x2a8)[0x10ab8abf8]\npostgres: parallel worker for PID 18027 (+0x50290c)[0x10ab9290c]\npostgres: parallel worker for PID 18027 (+0x5035e4)[0x10ab935e4]\npostgres: parallel worker for PID 18027 (PostmasterMain+0x1304)[0x10ab96334]\npostgres: parallel worker for PID 18027 
(main+0x86c)[0x10a79daec]\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chimaera&dt=2023-04-08%2012%3A07%3A08\n\n\n",
"msg_date": "Sat, 08 Apr 2023 12:33:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 12:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I committed the main patch.\n>\n> BTW, it was easy to miss in all the buildfarm noise from\n> last-possible-minute patches, but chimaera just showed something\n> that looks like a bug in this code [1]:\n>\n> 2023-04-08 12:25:28.709 UTC [18027:321] pg_regress/join_hash LOG: statement: savepoint settings;\n> 2023-04-08 12:25:28.709 UTC [18027:322] pg_regress/join_hash LOG: statement: set local max_parallel_workers_per_gather = 2;\n> 2023-04-08 12:25:28.710 UTC [18027:323] pg_regress/join_hash LOG: statement: explain (costs off)\n> select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);\n> 2023-04-08 12:25:28.710 UTC [18027:324] pg_regress/join_hash LOG: statement: select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);\n> TRAP: failed Assert(\"BarrierParticipants(&batch->batch_barrier) == 1\"), File: \"nodeHash.c\", Line: 2118, PID: 19147\n> postgres: parallel worker for PID 18027 (ExceptionalCondition+0x84)[0x10ae2bfa4]\n> postgres: parallel worker for PID 18027 (ExecParallelPrepHashTableForUnmatched+0x224)[0x10aa67544]\n> postgres: parallel worker for PID 18027 (+0x3db868)[0x10aa6b868]\n> postgres: parallel worker for PID 18027 (+0x3c4204)[0x10aa54204]\n> postgres: parallel worker for PID 18027 (+0x3c81b8)[0x10aa581b8]\n> postgres: parallel worker for PID 18027 (+0x3b3d28)[0x10aa43d28]\n> postgres: parallel worker for PID 18027 (standard_ExecutorRun+0x208)[0x10aa39768]\n> postgres: parallel worker for PID 18027 (ParallelQueryMain+0x2bc)[0x10aa4092c]\n> postgres: parallel worker for PID 18027 (ParallelWorkerMain+0x660)[0x10a874870]\n> postgres: parallel worker for PID 18027 (StartBackgroundWorker+0x2a8)[0x10ab8abf8]\n> postgres: parallel worker for PID 18027 (+0x50290c)[0x10ab9290c]\n> postgres: parallel worker for PID 18027 (+0x5035e4)[0x10ab935e4]\n> postgres: parallel worker for PID 
18027 (PostmasterMain+0x1304)[0x10ab96334]\n> postgres: parallel worker for PID 18027 (main+0x86c)[0x10a79daec]\n\nHaving not done much debugging on buildfarm animals before, I don't\nsuppose there is any way to get access to the core itself? I'd like to\nsee how many participants the batch barrier had at the time of the\nassertion failure. I assume it was 2, but I just wanted to make sure I\nunderstand the race.\n\n- Melanie\n\n\n",
"msg_date": "Sat, 8 Apr 2023 13:30:24 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> Having not done much debugging on buildfarm animals before, I don't\n> suppose there is any way to get access to the core itself? I'd like to\n> see how many participants the batch barrier had at the time of the\n> assertion failure. I assume it was 2, but I just wanted to make sure I\n> understand the race.\n\nI don't know about chimaera in particular, but buildfarm animals are\nnot typically configured to save any build products. They'd run out\nof disk space after awhile :-(.\n\nIf you think the number of participants would be useful data, I'd\nsuggest replacing that Assert with an elog() that prints what you\nwant to know.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Apr 2023 13:51:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 1:30 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Sat, Apr 8, 2023 at 12:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > I committed the main patch.\n> >\n> > BTW, it was easy to miss in all the buildfarm noise from\n> > last-possible-minute patches, but chimaera just showed something\n> > that looks like a bug in this code [1]:\n> >\n> > 2023-04-08 12:25:28.709 UTC [18027:321] pg_regress/join_hash LOG: statement: savepoint settings;\n> > 2023-04-08 12:25:28.709 UTC [18027:322] pg_regress/join_hash LOG: statement: set local max_parallel_workers_per_gather = 2;\n> > 2023-04-08 12:25:28.710 UTC [18027:323] pg_regress/join_hash LOG: statement: explain (costs off)\n> > select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);\n> > 2023-04-08 12:25:28.710 UTC [18027:324] pg_regress/join_hash LOG: statement: select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);\n> > TRAP: failed Assert(\"BarrierParticipants(&batch->batch_barrier) == 1\"), File: \"nodeHash.c\", Line: 2118, PID: 19147\n\nSo, after staring at this for awhile, I suspect this assertion is just\nplain wrong. BarrierArriveAndDetachExceptLast() contains this code:\n\n if (barrier->participants > 1)\n {\n --barrier->participants;\n SpinLockRelease(&barrier->mutex);\n\n return false;\n }\n Assert(barrier->participants == 1);\n\nSo in between this assertion and the one we tripped,\n\n if (!BarrierArriveAndDetachExceptLast(&batch->batch_barrier))\n {\n ...\n return false;\n }\n\n /* Now we are alone with this batch. */\n Assert(BarrierPhase(&batch->batch_barrier) == PHJ_BATCH_SCAN);\n Assert(BarrierParticipants(&batch->batch_barrier) == 1);\n\nAnother worker attached to the batch barrier, saw that it was in\nPHJ_BATCH_SCAN, marked it done and detached. 
This is fine.\nBarrierArriveAndDetachExceptLast() is meant to ensure no one waits\n(deadlock hazard) and that at least one worker stays to do the unmatched\nscan. It doesn't hurt anything for another worker to join and find out\nthere is no work to do.\n\nWe should simply delete this assertion.\n\n- Melanie\n\n\n",
"msg_date": "Sat, 8 Apr 2023 14:19:54 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Sat, Apr 08, 2023 at 02:19:54PM -0400, Melanie Plageman wrote:\n> Another worker attached to the batch barrier, saw that it was in\n> PHJ_BATCH_SCAN, marked it done and detached. This is fine.\n> BarrierArriveAndDetachExceptLast() is meant to ensure no one waits\n> (deadlock hazard) and that at least one worker stays to do the unmatched\n> scan. It doesn't hurt anything for another worker to join and find out\n> there is no work to do.\n> \n> We should simply delete this assertion.\n\nI have added an open item about that. This had better be tracked.\n--\nMichael",
"msg_date": "Mon, 10 Apr 2023 08:33:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Full Hash Join"
},
{
"msg_contents": "On Mon, Apr 10, 2023 at 11:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sat, Apr 08, 2023 at 02:19:54PM -0400, Melanie Plageman wrote:\n> > Another worker attached to the batch barrier, saw that it was in\n> > PHJ_BATCH_SCAN, marked it done and detached. This is fine.\n> > BarrierArriveAndDetachExceptLast() is meant to ensure no one waits\n> > (deadlock hazard) and that at least one worker stays to do the unmatched\n> > scan. It doesn't hurt anything for another worker to join and find out\n> > there is no work to do.\n> >\n> > We should simply delete this assertion.\n\nAgreed, and pushed. Thanks!\n\n> I have added an open item about that. This had better be tracked.\n\nThanks, will update.\n\n\n",
"msg_date": "Thu, 13 Apr 2023 09:40:02 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Full Hash Join"
}
] |
[
{
"msg_contents": "Hi!\n\nA few months ago we encountered a situation where some quite big log files were still held open by Postgres despite being deleted. This affects free space calculation in our managed PostgreSQL instances. Currently I'm investigating this issue. We traced some roots to unclosed descriptors in the Perl code of postgresql-common [0], but we still face some unclosed descriptors pointing to the log file.\n\nThe Debian tools from postgresql-common start pg_ctl piped to the logfile. The descriptor is piped to the logfile and blocks it from deletion. That is why we can't delete logfile.1 after logrotate. And after a second logrotate, logfile.1 is in \"deleted\" status, but in fact can't be deleted.\n\nHere I attach a patch with a possible solution. In this patch the stdout and stderr pipes are simply closed in the syslogger.\n\n--\nSviatoslav Ermilin\nYandex\n\n[0] https://salsa.debian.org/postgresql/postgresql-common/commit/580aa0677edc222ebaf6e1031cf3929f847f27fb",
"msg_date": "Thu, 12 Sep 2019 16:10:26 +0500",
"msg_from": "Святослав Ермилин <munakoiso@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Close stdout and stderr in syslogger"
},
{
"msg_contents": "=?utf-8?B?0KHQstGP0YLQvtGB0LvQsNCyINCV0YDQvNC40LvQuNC9?= <munakoiso@yandex-team.ru> writes:\n> <div><div>Hi!</div><div> </div><div>Few months ago we have encountered situation when some quite big open log files were open by Postres despite being deleted.</div><div>This affects free space caluculation in out managed PostgreSQL instances.</div><div>Currently I'm investigating this issue.</div><div>We traced some roots to unclosed descriptors in Perl code of postgresql-common [0], but we still face some unclosed descriptors pointing to log file.</div><div> </div><div>Debian tools from postgresql-common starts pg_ctl piped to logfile. Descriptor is piped to logfile and block it for delete.</div><div>That is why we can't delete logfile.1 after logrotate.</div><div>And after second logrotate logfile.1 is in \"deleted\" status, but can't be deleted in fact.</div><div> </div><div>Here I apply path with possible solution. In this patch stdout and stderr pipes are just closed in syslogger.</div><div> </div><div>--</div><div>Sviatoslav Ermilin</div><div>Yandex</div><div> </div><div>[0] https://salsa.debian.org/postgresql/postgresql-common/commit/580aa0677edc222ebaf6e1031cf3929f847f27fb</div></div>\n\nI'm quite certain that the current behavior is intentional, if only\nbecause closing the syslogger's stderr would make it impossible to\ndebug problems inside the syslogger. Why is it a problem that we\nleave it open? I don't believe either that the file will grow much\n(in normal cases anyway), or that it's impossible to unlink it\n(except on Windows, but you didn't say anything about Windows).\n\nIn any case, the proposed patch almost certainly introduces new\nproblems, in that you dropped the fcloses's into code that\nexecutes repeatedly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Sep 2019 09:32:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Close stdout and stderr in syslogger"
},
{
"msg_contents": "Hi Tom,\n\nThank you for quick reply.\n\n> I'm quite certain that the current behavior is intentional, if only\n> because closing the syslogger's stderr would make it impossible to\n> debug problems inside the syslogger.\n\nA developer who debugs the syslogger can probably remove the fclose() calls.\nMaybe we can have a switch for this?\nAs far as I understand there is no stdout printing in the syslogger?\n\n> Why is it a problem that we leave it open?\n\nFor us it's a problem because logfiles are rotated by size.\nThe size of a file could be on the order of a few GBs.\nWe can unlink the logfile, logrotate is doing so, but the file will not be deleted\nfrom the file system. We can use copy-truncate log rotation, which solves the problem,\nbut such a solution seems outdated.\n\n> I don't believe either that the file will grow much (in normal cases anyway)\n\nWe are not in control of what users write to logs. We provide managed PostgreSQL.\nSome users can set log_statement = 'all' and get tons of logs.\nOur goal is to make the logging system accountable to free space monitoring and\nefficient (no log copy, no extra occupied space).\n\nWe could also avoid pointing the pg_ctl log to the same logfile the syslogger writes to.\nBut that's how postgresql-common works and it is easier for our users to see\nthis output among the usual logs to understand what was going on with the database.\n\n> In any case, the proposed patch almost certainly introduces new\n> problems, in that you dropped the fcloses's into code that\n> executes repeatedly.\n\nProbably, I should place fclose() right after successful syslogger start?\n\n_____\n\nTo reproduce the problem one can follow these steps:\n\n1) Use these settings:\nlog_destination = 'stderr,csvlog'\nlogging_collector = on\nlog_filename = 'postgresql-11-data.log'\nlog_file_mode = '0640'\nlog_truncate_on_rotation = off\n\n2) pg_ctl -D DemoDb --log=/private/tmp/pg/postgresql-11-data.log start\n\n3) Check the open descriptors (look for types 1w - stdout and 2w - stderr)\nlsof | grep postgres | grep log\npostgres 7968 munakoiso 1w REG 1,4 1170 3074156 /private/tmp/pg/postgresql-11-data.log\npostgres 7968 munakoiso 2w REG 1,4 1170 3074156 /private/tmp/pg/postgresql-11-data.log\npostgres 7968 munakoiso 12w REG 1,4 1170 3074156 /private/tmp/pg/postgresql-11-data.log\n\n4) Delete the file /private/tmp/pg/postgresql-11-data.log to imitate logrotate, then\npsql postgres -c \"select pg_rotate_logfile();\"\n\n5) Check the open descriptors again;\nwithout this patch you should find lines with 1w and 2w here,\nwith it you should not:\n\nlsof | grep postgres | grep log\npostgres 8082 munakoiso 5w REG 1,4 2451 3074156 /private/tmp/pg/postgresql-11-data.log\n\n______________\nSviatoslav Ermilin\nYandex\n\n12.09.2019, 18:33, \"Tom Lane\" <tgl@sss.pgh.pa.us>:\nСвятослав Ермилин <munakoiso@yandex-team.ru> writes:\n<div><div>Hi!</div><div> </div><div>Few months ago we have encountered situation when some quite big open log files were open by Postres despite being deleted.</div><div>This affects free space caluculation in out managed PostgreSQL instances.</div><div>Currently I'm investigating this issue.</div><div>We traced some roots to unclosed descriptors in Perl code of postgresql-common [0], but we still face some unclosed descriptors pointing to log file.</div><div> </div><div>Debian tools from postgresql-common starts pg_ctl piped to logfile. Descriptor is piped to logfile and block it for delete.</div><div>That is why we can't delete logfile.1 after logrotate.</div><div>And after second logrotate logfile.1 is in \"deleted\" status, but can't be deleted in fact.</div><div> </div><div>Here I apply path with possible solution. In this patch stdout and stderr pipes are just closed in syslogger.</div><div> </div><div>--</div><div>Sviatoslav Ermilin</div><div>Yandex</div><div> </div><div>[0] https://salsa.debian.org/postgresql/postgresql-common/commit/580aa0677edc222ebaf6e1031cf3929f847f27fb</div></div>\nI'm quite certain that the current behavior is intentional, if only\nbecause closing the syslogger's stderr would make it impossible to\ndebug problems inside the syslogger. Why is it a problem that we\nleave it open? I don't believe either that the file will grow much\n(in normal cases anyway), or that it's impossible to unlink it\n(except on Windows, but you didn't say anything about Windows).\n\nIn any case, the proposed patch almost certainly introduces new\nproblems, in that you dropped the fcloses's into code that\nexecutes repeatedly.\n\n regards, tom lane",
"msg_date": "Fri, 13 Sep 2019 11:41:11 +0500",
"msg_from": "Святослав Ермилин <munakoiso@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Close stdout and stderr in syslogger"
},
{
"msg_contents": "Hi Tom,\n\nThank you for quick reply.\n\n> I'm quite certain that the current behavior is intentional, if only\n> because closing the syslogger's stderr would make it impossible to\n> debug problems inside the syslogger.\n\nA developer who debugs the syslogger can probably remove the fclose() calls.\nMaybe we can have a switch for this?\nAs far as I understand there is no stdout printing in the syslogger?\n\n> Why is it a problem that we leave it open?\n\nFor us it's a problem because logfiles are rotated by size.\nThe size of a file could be on the order of a few GBs.\nWe can unlink the logfile, logrotate is doing so, but the file will not be deleted\nfrom the file system. We can use copy-truncate log rotation, which solves the problem,\nbut such a solution seems outdated.\n\n> I don't believe either that the file will grow much (in normal cases anyway)\n\nWe are not in control of what users write to logs. We provide managed PostgreSQL.\nSome users can set log_statement = 'all' and get tons of logs.\nOur goal is to make the logging system accountable to free space monitoring and\nefficient (no log copy, no extra occupied space).\n\nWe could also avoid pointing the pg_ctl log to the same logfile the syslogger writes to.\nBut that's how postgresql-common works and it is easier for our users to see\nthis output among the usual logs to understand what was going on with the database.\n\n> In any case, the proposed patch almost certainly introduces new\n> problems, in that you dropped the fcloses's into code that\n> executes repeatedly.\n\nProbably, I should place fclose() right after successful syslogger start?\n\n_____\n\nTo reproduce the problem one can follow these steps:\n\n1) Use these settings:\nlog_destination = 'stderr,csvlog'\nlogging_collector = on\nlog_filename = 'postgresql-11-data.log'\nlog_file_mode = '0640'\nlog_truncate_on_rotation = off\n\n2) pg_ctl -D DemoDb --log=/private/tmp/pg/postgresql-11-data.log start\n\n3) Check the open descriptors (look for types 1w - stdout and 2w - stderr)\nlsof | grep postgres | grep log\npostgres 7968 munakoiso 1w REG 1,4 1170 3074156 /private/tmp/pg/postgresql-11-data.log\npostgres 7968 munakoiso 2w REG 1,4 1170 3074156 /private/tmp/pg/postgresql-11-data.log\npostgres 7968 munakoiso 12w REG 1,4 1170 3074156 /private/tmp/pg/postgresql-11-data.log\n\n4) Delete the file /private/tmp/pg/postgresql-11-data.log to imitate logrotate, then\npsql postgres -c \"select pg_rotate_logfile();\"\n\n5) Check the open descriptors again;\nwithout this patch you should find lines with 1w and 2w here,\nwith it you should not:\n\nlsof | grep postgres | grep log\npostgres 8082 munakoiso 5w REG 1,4 2451 3074156 /private/tmp/pg/postgresql-11-data.log\n\n\n______________\nSviatoslav Ermilin\nYandex\n\n\n",
"msg_date": "Sat, 14 Sep 2019 12:27:38 +0500",
"msg_from": "Святослав Ермилин <munakoiso@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Close stdout and stderr in syslogger"
},
{
"msg_contents": "Hi,\n\n> In any case, the proposed patch almost certainly introduces new\n> problems, in that you dropped the fcloses's into code that\n> executes repeatedly.\n\nI guess it's better to place fclose() right after a successful syslogger start.\nIn that case we close the descriptors just once, but that's enough to solve the problem.\nDevelopers who debug the syslogger generally should see problems before we close the descriptors.\nOtherwise they can edit the syslogger code.\n\nThere is another way to solve this problem:\nWe can give users the opportunity to leave or close descriptors.\nI.e. config switch for this. But I think that it's too complicated.\n\n\nOne can reproduce the problem by following the steps in the previous messages.\n\n_________\n\nRegards, Sviatoslav Ermilin",
"msg_date": "Thu, 03 Oct 2019 17:30:40 +0500",
"msg_from": "Святослав Ермилин <munakoiso@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Close stdout and stderr in syslogger"
},
{
"msg_contents": "On Thu, Oct 3, 2019 at 8:30 AM Святослав Ермилин\n<munakoiso@yandex-team.ru> wrote:\n> There is another way to solve this problem:\n> We can give users the opportunity to leave or close descriptors.\n> I.e. config switch for this. But I think that it's too complicated.\n\nThe typical solution to this problem is to send the stdout/stderr to a\nlogfile that isn't rotated because it never gets very large, and\nsubsequent output to the real logfile. Seems like that would be the\nlow-stress way to go here, rather than trying to change the code.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 4 Oct 2019 09:23:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Close stdout and stderr in syslogger"
}
] |
[
{
"msg_contents": "Hi!\n\nThis is a thread to discuss an amcheck feature started in another thread [0].\n\nCurrently amcheck scans every B-tree level. If verification is done with ShareLock, amcheck will test that each page's leftlink points to a page whose rightlink points back to it.\nThis is an important invariant; in our experience it has proved to be good at detecting various corruptions.\nBut a violation of this invariant can be detected only if neither page is modified (e.g. split) concurrently.\n\nPFA a patch that, in case of suspicion, locks the two pages and retests that invariant. This allows detection of several corruptions on standbys.\n\nThis patch violates one of amcheck design principles: current code does not ever take more than one page lock. I do not know: should we hold this rule or should we use more deep check?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/DA9B33AC-53CB-4643-96D4-7A0BBC037FA1@yandex-team.ru",
"msg_date": "Thu, 12 Sep 2019 18:07:57 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On 2019-Sep-12, Andrey Borodin wrote:\n\n> This patch violates one of amcheck design principles: current code\n> does not ever take more than one page lock. I do not know: should we\n> hold this rule or should we use more deep check?\n\nThe check does seem worthwhile to me.\n\nAs far as I know, in btree you can lock the right sibling of a page\nwhile keeping lock on the page itself, without risk of deadlock. So I'm\nnot sure what's the issue with that. This is in the README:\n\n In most cases we release our lock and pin on a page before attempting\n to acquire pin and lock on the page we are moving to. In a few places\n it is necessary to lock the next page before releasing the current one.\n This is safe when moving right or up, but not when moving left or down\n (else we'd create the possibility of deadlocks).\n\nI suppose Peter was concerned about being able to run amcheck without\ncausing any trouble at all for concurrent operation; maybe we can retain\nthat property by making this check optional.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Sep 2019 10:16:12 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 10:16:12AM -0300, Alvaro Herrera wrote:\n>On 2019-Sep-12, Andrey Borodin wrote:\n>\n>> This patch violates one of amcheck design principles: current code\n>> does not ever take more than one page lock. I do not know: should we\n>> hold this rule or should we use more deep check?\n>\n>The check does seem worthwhile to me.\n>\n>As far as I know, in btree you can lock the right sibling of a page\n>while keeping lock on the page itself, without risk of deadlock. So I'm\n>not sure what's the issue with that. This is in the README:\n>\n> In most cases we release our lock and pin on a page before attempting\n> to acquire pin and lock on the page we are moving to. In a few places\n> it is necessary to lock the next page before releasing the current one.\n> This is safe when moving right or up, but not when moving left or down\n> (else we'd create the possibility of deadlocks).\n>\n>I suppose Peter was concerned about being able to run amcheck without\n>causing any trouble at all for concurrent operation; maybe we can retain\n>that property by making this check optional.\n>\n\nPeter, any opinion on this proposed amcheck patch? In the other thread\n[1] you seemed to agree this is worth checking, and Alvaro's proposal to\nmake this check optional seems like a reasonable compromise with respect\nto the locking.\n\n[1] https://www.postgresql.org/message-id/flat/DA9B33AC-53CB-4643-96D4-7A0BBC037FA1@yandex-team.ru\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 11 Jan 2020 02:45:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 5:45 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Peter, any opinion on this proposed amcheck patch? In the other thread\n> [1] you seemed to agree this is worth checking, and Alvaro's proposal to\n> make this check optional seems like a reasonable compromise with respect\n> to the locking.\n\nIt's a good idea, and it probably doesn't even need to be made\noptional -- lock coupling to the right is safe on a primary, and\nshould also be safe on standbys (though I should triple check the REDO\nroutines to be sure). The patch only does lock coupling when it proves\nnecessary, which ought to only happen when there is a concurrent page\nsplit, which ought to be infrequent. Maybe there is no need to\ncompromise.\n\nI'm curious why Andrey's corruption problems were not detected by the\ncross-page amcheck test, though. We compare the first non-pivot tuple\non the right sibling leaf page with the last one on the target page,\ntowards the end of bt_target_page_check() -- isn't that almost as good\nas what you have here in practice? I probably would have added\nsomething like this myself earlier, if I had reason to think that\nverification would be a lot more effective that way.\n\nTo be clear, I believe that Andrey wrote this patch for a reason -- I\nassume that it makes a noticeable and consistent difference. I would\nlike to gain a better understanding of why that was for my own\nbenefit, though. For example, it might be that page deletion was a\nfactor that made the test I mentioned less effective. I care about the\nspecifics.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 10 Jan 2020 18:49:33 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 06:49:33PM -0800, Peter Geoghegan wrote:\n>On Fri, Jan 10, 2020 at 5:45 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> Peter, any opinion on this proposed amcheck patch? In the other thread\n>> [1] you seemed to agree this is worth checking, and Alvaro's proposal to\n>> make this check optional seems like a reasonable compromise with respect\n>> to the locking.\n>\n>It's a good idea, and it probably doesn't even need to be made\n>optional -- lock coupling to the right is safe on a primary, and\n>should also be safe on standbys (though I should triple check the REDO\n>routines to be sure). The patch only does lock coupling when it proves\n>necessary, which ought to only happen when there is a concurrent page\n>split, which ought to be infrequent. Maybe there is no need to\n>compromise.\n>\n\nOK, that makes sense.\n\n>I'm curious why Andrey's corruption problems were not detected by the\n>cross-page amcheck test, though. We compare the first non-pivot tuple\n>on the right sibling leaf page with the last one on the target page,\n>towards the end of bt_target_page_check() -- isn't that almost as good\n>as what you have here in practice? I probably would have added\n>something like this myself earlier, if I had reason to think that\n>verification would be a lot more effective that way.\n>\n>To be clear, I believe that Andrey wrote this patch for a reason -- I\n>assume that it makes a noticeable and consistent difference. I would\n>like to gain a better understanding of why that was for my own\n>benefit, though. For example, it might be that page deletion was a\n>factor that made the test I mentioned less effective. I care about the\n>specifics.\n>\n\nUnderstood. Is that a reason to not commit this patch now, though?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 11 Jan 2020 13:25:01 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Sat, Jan 11, 2020 at 4:25 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Understood. Is that a reason to not commit this patch now, though?\n\nIt could use some polishing. Are you interested in committing it?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 13 Jan 2020 15:49:40 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 03:49:40PM -0800, Peter Geoghegan wrote:\n>On Sat, Jan 11, 2020 at 4:25 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> Understood. Is that a reason to not commit this patch now, though?\n>\n>It could use some polishing. Are you interested in committing it?\n>\n\nNot really - as a CFM I was trying to revive patches that seem in good\nshape but are not moving.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 14 Jan 2020 02:07:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "Hi Peter! Sorry for taking so long to answer.\n\n> 11 янв. 2020 г., в 7:49, Peter Geoghegan <pg@bowt.ie> написал(а):\n> \n> I'm curious why Andrey's corruption problems were not detected by the\n> cross-page amcheck test, though. We compare the first non-pivot tuple\n> on the right sibling leaf page with the last one on the target page,\n> towards the end of bt_target_page_check() -- isn't that almost as good\n> as what you have here in practice? I probably would have added\n> something like this myself earlier, if I had reason to think that\n> verification would be a lot more effective that way.\n\nWe were dealing with corruption caused by a lost page update. Consider two pages:\nA->B\nIf A is split into A` and C we get:\nA`->C->B\nBut if update of A is lost we still have\nA->B, but B's backward pointer points to C.\nB's smallest key is bigger than hikey of A`, but this does not violate \ncross-page invariant.\n\nPage updates may be lost due to bug in backup software with incremental \nbackups, bug in storage layer of Aurora-style system, bug in page cache, incorrect\nfsync error handling, bug in ssd firmware etc. And our data checksums do not\ndetect this kind of corruption. BTW I think that it would be better if our\nchecksums were not stored on the page itself, they could detect this kind of fault.\n\nWe were able to timely detect all those problems on primaries in our testing\nenvironment. But much later we found out that some standbys were corrupted,\nthe problem appeared only when they were promoted.\nAlso, in nearby thread Grygory Rylov (g0djan) is trying to enable one more\ninvariant in standby checks.\n\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 14 Jan 2020 09:47:38 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "\n\n> 14 янв. 2020 г., в 9:47, Andrey Borodin <x4mmm@yandex-team.ru> написал(а):\n> \n> Page updates may be lost due to bug in backup software with incremental \n> backups, bug in storage layer of Aurora-style system, bug in page cache, incorrect\n> fsync error handling, bug in ssd firmware etc. And our data checksums do not\n> detect this kind of corruption. BTW I think that it would be better if our\n> checksums were not stored on the page itself, they could detect this kind of fault.\n\nObserved it just now.\nThere is one HA cluster where a node was marked dead. This node was disconnected from the cluster, but due to human error there was still a postgres running.\nThe node managed to install a block-level incremental backup into the chain. And the backup software did not detect that the backup step was taken from a part of the timeline that was not in the actual timeline's history.\nResult of restoration is:\n\nman-w%/%db R # select bt_index_check('%.pk_%');\n bt_index_check \n----------------\n \n(1 row)\n\nTime: 1411.065 ms (00:01.411)\nman-w%/%db R # select patched_index_check('%.pk_%');\nERROR: XX002: left link/right link pair in index \"pk_labels\" not in agreement\nDETAIL: Block=42705 left block=42707 left link from block=45495.\nLOCATION: bt_recheck_block_rightlink, verify_nbtree.c:621\nTime: 671.336 ms\n\n('%' is replacing removed chars)\n\nI understand that this corruption was not introduced by postgres itself, but by a combination of bugs in two 3rd party tools and human error.\nBut I can imagine similar corruptions with different root causes.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 16 Jan 2020 14:50:28 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 8:47 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > 11 янв. 2020 г., в 7:49, Peter Geoghegan <pg@bowt.ie> написал(а):\n> > I'm curious why Andrey's corruption problems were not detected by the\n> > cross-page amcheck test, though. We compare the first non-pivot tuple\n> > on the right sibling leaf page with the last one on the target page,\n> > towards the end of bt_target_page_check() -- isn't that almost as good\n> > as what you have here in practice? I probably would have added\n> > something like this myself earlier, if I had reason to think that\n> > verification would be a lot more effective that way.\n>\n> We were dealing with corruption caused by a lost page update. Consider two pages:\n> A->B\n> If A is split into A` and C we get:\n> A`->C->B\n> But if update of A is lost we still have\n> A->B, but B's backward pointer points to C.\n> B's smallest key is bigger than hikey of A`, but this does not violate\n> cross-page invariant.\n>\n> Page updates may be lost due to bug in backup software with incremental\n> backups, bug in storage layer of Aurora-style system, bug in page cache, incorrect\n> fsync error handling, bug in ssd firmware etc. And our data checksums do not\n> detect this kind of corruption. BTW I think that it would be better if our\n> checksums were not stored on the page itself, they could detect this kind of fault.\n\nI find this argument convincing. I'll try to get this committed soon.\n\nWhile you could have used bt_index_parent_check() or heapallindexed to\ndetect the issue, those two options are a lot more expensive (plus the\nformer option won't work on a standby). Relaxing the principle that\nsays that we shouldn't hold multiple buffer locks concurrently doesn't\nseem like that much to ask for to get such a clear benefit.\n\nI think that this is safe, but page deletion/half-dead pages need more\nthought. In general, the target page could have become \"ignorable\"\nwhen rechecked.\n\n> We were able to timely detect all those problems on primaries in our testing\n> environment. But much later we found out that some standbys were corrupted,\n> the problem appeared only when they were promoted.\n> Also, in nearby thread Grygory Rylov (g0djan) is trying to enable one more\n> invariant in standby checks.\n\nI looked at that thread just now, but Grygory didn't say why this true\nroot check was particularly important, so I can't see much upside.\nPlus that seems riskier than what you have in mind here.\n\nDoes it have something to do with the true root looking like a deleted\npage? The details matter.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 Jan 2020 17:11:03 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 5:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I find this argument convincing. I'll try to get this committed soon.\n>\n> While you could have used bt_index_parent_check() or heapallindexed to\n> detect the issue, those two options are a lot more expensive (plus the\n> former option won't work on a standby). Relaxing the principle that\n> says that we shouldn't hold multiple buffer locks concurrently doesn't\n> seem like that much to ask for to get such a clear benefit.\n\nHaving looked into it some more, I now have my doubts about this\npatch. REDO routine code like btree_xlog_split() and\nbtree_xlog_unlink_page() feels entitled to only lock one page at a\ntime, which invalidates the assumption that we can couple locks on the\nleaf level to verify mutual agreement in left and right sibling links\n(with only an AccessShareLock on bt_index_check()'s target index\nrelation). It would definitely be safe for bt_index_check() to do this\nwere it not running in recovery mode, but that doesn't seem very\nuseful on its own.\n\nI tried to come up with a specific example of how this could be\nunsafe, but my explanation was all over the place (this could have had\nsomething to do with it being Friday evening). Even still, it's up to\nthe patch to justify why it's safe, and that seems even more\ndifficult.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 17 Jan 2020 17:43:04 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 5:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I tried to come up with a specific example of how this could be\n> unsafe, but my explanation was all over the place (this could have had\n> something to do with it being Friday evening). Even still, it's up to\n> the patch to justify why it's safe, and that seems even more\n> difficult.\n\nI can't see a way around this problem, so I'm marking the patch rejected.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Jan 2020 11:59:16 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "Hi!\n\n> 23 янв. 2020 г., в 00:59, Peter Geoghegan <pg@bowt.ie> написал(а):\n> \n> On Fri, Jan 17, 2020 at 5:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> I tried to come up with a specific example of how this could be\n>> unsafe, but my explanation was all over the place (this could have had\n>> something to do with it being Friday evening). Even still, it's up to\n>> the patch to justify why it's safe, and that seems even more\n>> difficult.\n> \n> I can't see a way around this problem, so I'm marking the patch rejected.\n\nIn this thread [0] we decided that lock coupling is necessary for btree_xlog_unlink_page().\nSo, maybe let's revive this patch?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/CAH2-Wzm7T_O%2BVUeo7%3D_NGPncs08z3JEybEwVLZGaASnbfg5vDA%40mail.gmail.com#a4ef597251fed0eb5c2896937bdbd0cc\n\n",
"msg_date": "Mon, 20 Jul 2020 23:46:47 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Mon, Jul 20, 2020 at 11:46 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> In this thread [0] we decided that lock coupling is necessary for btree_xlog_unlink_page().\n> So, maybe let's revive this patch?\n\nYes, let's do that. Can you resubmit it, please?\n\n\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 3 Aug 2020 15:44:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "> 4 авг. 2020 г., в 03:44, Peter Geoghegan <pg@bowt.ie> написал(а):\n> \n> On Mon, Jul 20, 2020 at 11:46 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> In this thread [0] we decided that lock coupling is necessary for btree_xlog_unlink_page().\n>> So, maybe let's revive this patch?\n> \n> Yes, let's do that. Can you resubmit it, please?\n\nPFA v3.\nChanges: fixed a few typos in comments.\n\nBTW, reviewing this patch again, I cannot understand why we verify link coherence only on the leaf level but not for internal pages.\nI do not see any actual problems here.\n\nThe corruption-detection power of leftlink/rightlink checks on internal pages is vanishingly small compared to leaf pages. But there seems to be no reason to exclude internal pages?\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 4 Aug 2020 21:32:57 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Tue, Aug 4, 2020 at 9:33 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> BTW, reviewing this patch again I cannot understand why we verify link coherence only on leaf level but not for internal pages?\n> I do not see any actual problems here.\n\nWell, I thought that it might be a good idea to limit it to the leaf\nlevel, based on the theory that we rarely couple locks on internal\npage levels in general. But yeah, that's probably not a good enough\nreason to avoid lock coupling on internal pages. It's probably better\nto do it everywhere than to explain why we don't do it on the internal\nlevel -- the explanation will probably be confusing. And even if there\nwas a performance issue, it could only happen when there are\nconcurrent internal page splits -- but those are supposed to be rare.\n\nAttached is v4, which now checks internal pages (just like leaf\npages). The main other change in this revised version is that we make\nthe error raised by bt_index_check() match the error used in the old\nbt_index_parent_check() case -- we always want to blame the current\ntarget page when amcheck complains (technically the page we blame when\nthe invariant fails isn't strictly guaranteed to be quite the same\nthing as the target, but it's close enough to not really matter in\nreality). Other adjustments:\n\n* Added _bt_checkpage() calls for buffers, as is standard practice in nbtree.\n\n* Added protection against locking the same page a second time in the\nevent of a faulty sibling link -- we should avoid a self-deadlock in\nthe event of a page that is corrupt in just the wrong way.\n\n* Updated obsolescent comments that claimed that we never couple\nbuffer locks in amcheck.\n\nI would like to commit something like this in the next day or two.\n\nThoughts?\n\n--\nPeter Geoghegan",
"msg_date": "Wed, 5 Aug 2020 16:25:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "\n\n> 6 авг. 2020 г., в 04:25, Peter Geoghegan <pg@bowt.ie> написал(а):\n> \n> * Added _bt_checkpage() calls for buffers, as is standard practice in nbtree.\n> \n> * Added protection against locking the same page a second time in the\n> event of a faulty sibling link -- we should avoid a self-deadlock in\n> the event of a page that is corrupt in just the wrong way.\n> \n> * Updated obsolescent comments that claimed that we never couple\n> buffer locks in amcheck.\nCool, thanks!\nThere's a minor typo: missing space in \"of_bt_check_unique\".\n\n> \n> I would like to commit something like this in the next day or two.\n> \n> Thoughts?\n\nSounds great! Thanks!\n\nBest regards, Andrey Borodin.\n\n\n\n",
"msg_date": "Thu, 6 Aug 2020 09:50:40 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Wed, Aug 5, 2020 at 9:50 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> Sounds great! Thanks!\n\nI'm afraid that there is another problem, this time with\nbtree_xlog_split(). It's possible to get false positives when running\nthe new test continually on a standby. You can see this by running\nverification on a standby continually, while the primary runs with a\nworkload that gets many page splits.\n\nIt's easy to see if you apply this patch:\n\n--- a/src/backend/access/nbtree/nbtxlog.c\n+++ b/src/backend/access/nbtree/nbtxlog.c\n@@ -435,6 +435,9 @@ btree_xlog_split(bool newitemonleft,\nXLogReaderState *record)\n UnlockReleaseBuffer(lbuf);\n UnlockReleaseBuffer(rbuf);\n\n+ /* trick */\n+ pg_usleep(10 * 1000L);\n+\n\nThe only thing that we can do is adjust the locking in\nbtree_xlog_split() to match the primary (kind of like commit 9a9db08a,\nexcept with page splits instead of page deletion). Attached is a\nrevised version of the patch, along with the changes that we'd need to\nREDO to make the amcheck patch really work.\n\nI'm not sure if this change to the REDO routine is worth the overhead\nor trouble, though. I have to think about it some more.\n\nBTW, the first patch in the series now has a new check for page\ndeletion -- that was missing from v4.\n\n-- \nPeter Geoghegan",
"msg_date": "Thu, 6 Aug 2020 09:38:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "\n\n> 6 авг. 2020 г., в 21:38, Peter Geoghegan <pg@bowt.ie> написал(а):\n> \n> On Wed, Aug 5, 2020 at 9:50 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> Sounds great! Thanks!\n> \n> I'm afraid that there is another problem, this time with\n> btree_xlog_split(). It's possible to get false positives when running\n> the new test continually on a standby. You can see this by running\n> verification on a standby continually, while the primary runs with a\n> workload that gets many page splits.\nYes, I see the problem...\n> \n> The only thing that we can do is adjust the locking in\n> btree_xlog_split() to match the primary (kind of like commit 9a9db08a,\n> except with page splits instead of page deletion). Attached is a\n> revised version of the patch, along with the changes that we'd need to\n> REDO to make the amcheck patch really work.\n> \n> I'm not sure if this change to the REDO routine is worth the overhead\n> or trouble, though. I have to think about it some more.\nIf we want to check relations between pages we must either apply them together (under locks) or tolerate some fraction of false positives. I understand that mitigating and tolerating false positives is nonsense in a mathematical sense, but from a practical point of view it's just OK.\n\nBut having a complete solution with no false positives seems much better.\n\n> \n> BTW, the first patch in the series now has a new check for page\n> deletion -- that was missing from v4.\nYes, seems like that was a bug.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n\n",
"msg_date": "Fri, 7 Aug 2020 10:59:26 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "On Thu, Aug 6, 2020 at 10:59 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> But having a complete solution with no false positives seems much better.\n\nAgreed. I know that you didn't pursue this for no reason -- having the\ncheck available makes bt_index_check() a lot more valuable in\npractice. It detects what is actually a classic example of subtle\nB-Tree corruption (left link corruption), which appears in Modern\nB-Tree techniques in its discussion of corruption detection. It's\nactually the canonical example of how B-Tree corruption can be very\nsubtle in the real world.\n\nI pushed a cleaned up version of this patch just now. I added some\ncommentary about this canonical example in header comments for the new\nfunction.\n\nThanks\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 8 Aug 2020 11:14:10 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
},
{
"msg_contents": "\n\n> 8 авг. 2020 г., в 23:14, Peter Geoghegan <pg@bowt.ie> написал(а):\n> \n> I pushed a cleaned up version of this patch just now. I added some\n> commentary about this canonical example in header comments for the new\n> function.\n\nThanks for working on this!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 9 Aug 2020 11:08:52 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Amcheck: do rightlink verification with lock coupling"
}
] |
[
{
"msg_contents": "Hi,\n\nOur customer encountered a curious scenario. They have a table with a GIN\nindex on an expression which performs multiple joins with the table\nitself. These joins employ another btree index for efficiency.\nVACUUM FULL on this table fails with an error like\n\nERROR: could not read block 3534 in file \"base/41366676/56697497\": read only 0 of 8192 bytes\n\nIt happens because the order in which indexes are rebuilt is not specified.\nThe GIN index is being rebuilt while the btree index is not reconstructed yet;\nan attempt to use the old index with the rewritten heap crashes.\n\nA problem of similar nature can be reproduced with the following\nstripped-down scenario:\n\nCREATE TABLE pears(f1 int primary key, f2 int);\nINSERT INTO pears SELECT i, i+1 FROM generate_series(1, 100) i;\nCREATE OR REPLACE FUNCTION pears_f(i int) RETURNS int LANGUAGE SQL IMMUTABLE AS $$\n SELECT f1 FROM pears WHERE pears.f2 = 42\n$$;\nCREATE index ON pears ((pears_f(f1)));\n\nHere, usage of the not-yet-created index on pears_f(f1) for its own\nconstruction is pointless; however, the planner in principle considers it in\nget_relation_info, tries to get the btree height (_bt_getrootheight) -- and\nfails.\n\n\nThere is already a mechanism which prevents usage of indexes during\nreindex -- ReindexIsProcessingIndex et al. However, to the contrary of\nwhat index.c:3664 comment say, these protect only indexes on system\ncatalogs, not user tables: the only real caller is genam.c.\n\nAttached patch extends it: the same check is added to\nget_relation_info. Also SetReindexProcessing is cocked in index_create\nto defend from index self usage during creation as in stripped example\nabove. There are some other still unprotected callers of index_build;\nconcurrent index creation doesn't need it because index is\n'not indisvalid' during the build, and in RelationTruncateIndexes\ntable is empty, so it looks like it can be omitted.\n\n\nOne might argue that function selecting from table can hardly be called\nimmutable, and immutability is required for index expressions. However,\nif user is sure table contents doesn't change, why not? Also, the\npossibility of triggering a \"could not read block\" error with plain SQL is\ndefinitely not nice.\n\n\n\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 12 Sep 2019 17:52:05 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "(Re)building index using itself or another index of the same table"
},
{
"msg_contents": "Arseny Sher <a.sher@postgrespro.ru> writes:\n> A problem of similar nature can be reproduced with the following\n> stripped-down scenario:\n\n> CREATE TABLE pears(f1 int primary key, f2 int);\n> INSERT INTO pears SELECT i, i+1 FROM generate_series(1, 100) i;\n> CREATE OR REPLACE FUNCTION pears_f(i int) RETURNS int LANGUAGE SQL IMMUTABLE AS $$\n> SELECT f1 FROM pears WHERE pears.f2 = 42\n> $$;\n> CREATE index ON pears ((pears_f(f1)));\n\nWe've seen complaints about this sort of thing before, and rejected\nthem because, as you say, that function is NOT immutable. When you\nlie to the system like that, you should not be surprised if things\nbreak.\n\n> There is already a mechanism which prevents usage of indexes during\n> reindex -- ReindexIsProcessingIndex et al. However, to the contrary of\n> what index.c:3664 comment say, these protect only indexes on system\n> catalogs, not user tables: the only real caller is genam.c.\n> Attached patch extends it: the same check is added to\n> get_relation_info. Also SetReindexProcessing is cocked in index_create\n> to defend from index self usage during creation as in stripped example\n> above. There are some other still unprotected callers of index_build;\n> concurrent index creation doesn't need it because index is\n> 'not indisvalid' during the build, and in RelationTruncateIndexes\n> table is empty, so it looks like it can be omitted.\n\nI have exactly no faith that this fixes things enough to make such\ncases supportable. And I have no interest in opening that can of\nworms anyway. I'd rather put in some code to reject database\naccesses in immutable functions.\n\n> One might argue that function selecting from table can hardly be called\n> immutable, and immutability is required for index expressions. However,\n> if user is sure table contents doesn't change, why not?\n\nIf the table contents never change, why are you doing VACUUM FULL on it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Sep 2019 11:08:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: (Re)building index using itself or another index of the same\n table"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 11:08:28AM -0400, Tom Lane wrote:\n>Arseny Sher <a.sher@postgrespro.ru> writes:\n>> A problem of similar nature can be reproduced with the following\n>> stripped-down scenario:\n>\n>> CREATE TABLE pears(f1 int primary key, f2 int);\n>> INSERT INTO pears SELECT i, i+1 FROM generate_series(1, 100) i;\n>> CREATE OR REPLACE FUNCTION pears_f(i int) RETURNS int LANGUAGE SQL IMMUTABLE AS $$\n>> SELECT f1 FROM pears WHERE pears.f2 = 42\n>> $$;\n>> CREATE index ON pears ((pears_f(f1)));\n>\n>We've seen complaints about this sort of thing before, and rejected\n>them because, as you say, that function is NOT immutable. When you\n>lie to the system like that, you should not be surprised if things\n>break.\n>\n>> There is already a mechanism which prevents usage of indexes during\n>> reindex -- ReindexIsProcessingIndex et al. However, to the contrary of\n>> what index.c:3664 comment say, these protect only indexes on system\n>> catalogs, not user tables: the only real caller is genam.c.\n>> Attached patch extends it: the same check is added to\n>> get_relation_info. Also SetReindexProcessing is cocked in index_create\n>> to defend from index self usage during creation as in stripped example\n>> above. There are some other still unprotected callers of index_build;\n>> concurrent index creation doesn't need it because index is\n>> 'not indisvalid' during the build, and in RelationTruncateIndexes\n>> table is empty, so it looks like it can be omitted.\n>\n>I have exactly no faith that this fixes things enough to make such\n>cases supportable. And I have no interest in opening that can of\n>worms anyway. I'd rather put in some code to reject database\n>accesses in immutable functions.\n>\n\nSame here. My hunch is a non-trivial fraction of applications using\nthis \"trick\" is silently broken in various subtle ways.\n\n>> One might argue that function selecting from table can hardly be called\n>> immutable, and immutability is required for index expressions. However,\n>> if user is sure table contents doesn't change, why not?\n>\n>If the table contents never change, why are you doing VACUUM FULL on it?\n>\n\nIt's possible the columns referenced by the index expression are not\nchanging, but some additional columns are updated.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 15 Sep 2019 22:02:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: (Re)building index using itself or another index of the same\n table"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n\n> On Thu, Sep 12, 2019 at 11:08:28AM -0400, Tom Lane wrote:\n\n>>I have exactly no faith that this fixes things enough to make such\n>>cases supportable. And I have no interest in opening that can of\n>>worms anyway. I'd rather put in some code to reject database\n>>accesses in immutable functions.\n>>\n>\n> Same here. My hunch is a non-trivial fraction of applications using\n> this \"trick\" is silently broken in various subtle ways.\n\nOk, I see the point. However, a \"could not read block\" error might seem\nquite scary to users; it looks like data corruption. How about\nERRORing out then in get_relation_info instead of skipping reindexing\nindexes, like in the attached? Even if this doesn't cover all cases, at\nleast the one scenario observed in the field would have a better error\nmessage.\n\nRejecting database access completely in immutable functions would be\nunfortunate for our particular case, because this GIN index on an\nexpression joining the very indexed table multiple times (and thus using the\nbtree index) is, well, useful. Here is a brief description of the\ncase. The indexed table stores postal addresses, which are of a hierarchical\nnature (e.g. country-region-city-street-house). A single row is one element\nof any depth (e.g. region or house); each row stores a link to its parent\nin the parent_guid column, thus establishing the hierarchy\n(e.g. a house has a link to the street).\n\nThe task is to get the full address by typing random parts of it\n(imagine typing hints in Google Maps). For that, FTS is used. The GIN index\nis built on full addresses, and to get the full address the table is climbed\nup about six times (hierarchy depth) by following the parent_guid chain.\n\nWe could materialize full addresses in the table and eliminate the need\nto form them in the index expression, but that would seriously increase the\namount of required storage -- GIN doesn't store indexed columns fully,\nand thus it is cheaper to 'materialize' full addresses inside it only.\n\n\nSurely this is a hack which cheats the system. We might imagine creating\nsome functionality (kind of an index referring to multiple rows of the table\n-- or even rows of different tables) making it unnecessary, but such\nfunctionality doesn't exist today, and the hack is useful, if you\nunderstand the risk.\n\n\n>>> One might argue that function selecting from table can hardly be called\n>>> immutable, and immutability is required for index expressions. However,\n>>> if user is sure table contents doesn't change, why not?\n>>\n>>If the table contents never change, why are you doing VACUUM FULL on it?\n>>\n>\n> It's possible the columns referenced by the index expression are not\n> changing, but some additional columns are updated.\n\nYeah. Also a table can be CLUSTERed without VACUUM FULL.\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 16 Sep 2019 16:24:19 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: (Re)building index using itself or another index of the same\n table"
}
] |
[
{
"msg_contents": "Currently, texteq() and textne() are marked leakproof, while\nsibling operations such as textlt() are not. The argument\nfor that was that those two functions depend only on memcmp()\nso they can be seen to be safe, whereas it's a whole lot less\nclear that text_cmp() should be considered leakproof.\n\nOf course, the addition of nondeterministic collations has\nutterly obliterated that argument: it's now possible for\ntexteq() to call text_cmp(), so if the latter is leaky then\nthe former certainly must be considered so as well.\n\nSeems like we have two choices:\n\n1. Remove the leakproof marking on texteq()/textne(). This\nwould, unfortunately, be catastrophic for performance in\na lot of scenarios where leakproofness matters.\n\n2. Fix text_cmp to actually be leakproof, whereupon we might\nas well mark all the remaining btree comparison operators\nleakproof.\n\nI think #2 is pretty obviously the superior answer, if we\ncan do it.\n\nISTM we can't ship v12 without dealing with this one way\nor the other, so I'll go add an open item.\n\nComments?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Sep 2019 11:51:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Leakproofness of texteq()/textne()"
},
{
"msg_contents": "I wrote:\n> Seems like we have two choices:\n> 1. Remove the leakproof marking on texteq()/textne(). This\n> would, unfortunately, be catastrophic for performance in\n> a lot of scenarios where leakproofness matters.\n> 2. Fix text_cmp to actually be leakproof, whereupon we might\n> as well mark all the remaining btree comparison operators\n> leakproof.\n> I think #2 is pretty obviously the superior answer, if we\n> can do it.\n\nAfter burrowing down further, it's visibly the case that\ntext_cmp and varstr_cmp don't leak in the sense of actually\nreporting any part of their input strings. What they do do,\nin some code paths, is things like\n\n ereport(ERROR,\n (errmsg(\"could not convert string to UTF-16: error code %lu\",\n GetLastError())));\n\nSo this seems to mostly be a question of interpretation:\ndoes an error like this represent enough of an information\nleak to violate the promises of leakproofness? I'm not sure\nthat this error is really capable of disclosing any information\nabout the input strings. (Note that this error occurs only if\nwe failed to convert UTF8 to UTF16, which probably could only\nhappen if the input isn't valid UTF8, which would mean a failure\nof encoding checking somewhere up the line.)\n\nThere are also various pallocs and such that could fail, which\nperhaps could be argued to disclose the lengths of the input\nstrings. But that hazard exists already inside PG_GETARG_TEXT_PP;\nif you want to complain about that, then functions like byteaeq()\naren't legitimately leakproof either.\n\nOn the whole I'm prepared to say that these aren't leakproofness\nviolations, but it would depend a lot on how strict you want to be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Sep 2019 12:19:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After burrowing down further, it's visibly the case that\n> text_cmp and varstr_cmp don't leak in the sense of actually\n> reporting any part of their input strings. What they do do,\n> in some code paths, is things like\n>\n> ereport(ERROR,\n> (errmsg(\"could not convert string to UTF-16: error code %lu\",\n> GetLastError())));\n\nIs this possible? I mean, I'm sure it could happen if the data's\ncorrupted, but we ought to have validated it on the way into the\ndatabase. But maybe this code path also gets used for non-Unicode\nencodings?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Sep 2019 12:44:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Sep 12, 2019 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> After burrowing down further, it's visibly the case that\n>> text_cmp and varstr_cmp don't leak in the sense of actually\n>> reporting any part of their input strings. What they do do,\n>> in some code paths, is things like\n>> ereport(ERROR,\n>> (errmsg(\"could not convert string to UTF-16: error code %lu\",\n>> GetLastError())));\n\n> Is this possible? I mean, I'm sure it could happen if the data's\n> corrupted, but we ought to have validated it on the way into the\n> database. But maybe this code path also gets used for non-Unicode\n> encodings?\n\nNope, the above is inside \n\n#ifdef WIN32\n /* Win32 does not have UTF-8, so we need to map to UTF-16 */\n if (GetDatabaseEncoding() == PG_UTF8\n && (!mylocale || mylocale->provider == COLLPROVIDER_LIBC))\n\nI agree with your point that this is a shouldn't-happen corner case.\nThe question boils down to, if it *does* happen, does that constitute\na meaningful information leak? Up to now we've taken quite a hard\nline about what leakproofness means, so deciding that varstr_cmp\nis leakproof would constitute moving the goalposts a bit. They'd\nstill be in the same stadium, though, IMO.\n\nAnother approach would be to try to remove these failure cases,\nbut I don't really see how we'd do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Sep 2019 13:01:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
},
{
"msg_contents": "I wrote:\n> I agree with your point that this is a shouldn't-happen corner case.\n> The question boils down to, if it *does* happen, does that constitute\n> a meaningful information leak? Up to now we've taken quite a hard\n> line about what leakproofness means, so deciding that varstr_cmp\n> is leakproof would constitute moving the goalposts a bit. They'd\n> still be in the same stadium, though, IMO.\n\nFor most of us it might be more meaningful to look at the non-Windows\ncode paths, for which the question reduces to what we think of this:\n\n UErrorCode status;\n\n status = U_ZERO_ERROR;\n result = ucol_strcollUTF8(mylocale->info.icu.ucol,\n arg1, len1,\n arg2, len2,\n &status);\n if (U_FAILURE(status))\n ereport(ERROR,\n (errmsg(\"collation failed: %s\", u_errorName(status))));\n\nwhich, as it happens, is also a UTF8-encoding-only code path.\nCan this throw an error in practice, and if so does that\nconstitute a meaningful information leak? (For bonus points:\nis this error report up to project standards?)\n\nThumbing through the list of UErrorCode values, it seems like the only\nones that are applicable here and aren't internal-error cases are\nU_INVALID_CHAR_FOUND and the like, so that this boils down to \"one of\nthe strings contains a character that ICU can't cope with\". Maybe that's\nimpossible except with a pre-existing encoding violation, or maybe not.\n\nIn any case, from a purely theoretical viewpoint, such an error message\n*does* constitute a leak of information about the input strings. Whether\nit's a usable leak is very debatable, but that's basically what we've\ngot to decide.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Sep 2019 13:38:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 1:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In any case, from a purely theoretical viewpoint, such an error message\n> *does* constitute a leak of information about the input strings. Whether\n> it's a usable leak is very debatable, but that's basically what we've\n> got to decide.\n\nI'm pretty content to ignore information leaks that can only happen if\nthe database is corrupt anyway. If that's moving the goalposts at\nall, it's about a quarter-inch. I mean, a slightly differently\ncorrupted varlena would could crash the database entirely.\n\nI wouldn't feel comfortable with ignoring information leaks that can\nhappen with some valid strings but not others. That sounds like\nexactly the sort of information leak that we must prevent. The user\ncan write arbitrary stuff in their query, potentially transforming\nstrings so that the result hits the ERROR iff the original string had\nsome arbitrary property P for which they wish to test. Allowing that\nsounds no different than deciding that int4div is leakproof, which it\nsure isn't.\n\nHowever, I wonder if there's any realistic case outside of an encoding\nconversion where such failures can occur. I would expect, perhaps\nnaively, that the set of characters that can be represented by UTF-16\nis the same set as can be represented by UTF-8.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Sep 2019 16:56:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I wouldn't feel comfortable with ignoring information leaks that can\n> happen with some valid strings but not others. That sounds like\n> exactly the sort of information leak that we must prevent. The user\n> can write arbitrary stuff in their query, potentially transforming\n> strings so that the result hits the ERROR iff the original string had\n> some arbitrary property P for which they wish to test.\n\nAgreed.\n\n> However, I wonder if there's any realistic case outside of an encoding\n> conversion where such failures can occur. I would expect, perhaps\n> naively, that the set of characters that can be represented by UTF-16\n> is the same set as can be represented by UTF-8.\n\nUnless Microsoft did something weird, that doesn't seem very likely.\nI'm more worried about the possibility that ICU contains weird exception\ncases that will make it fail to compare specific strings. Noting\nthat ucnv_toUChars has an error indicator but ucol_strcoll does not,\nit seems like again any such cases are going to boil down to\nencoding conversion problems.\n\nHowever, if there is some character C that makes ICU misbehave like\nthat, we are going to have problems with indexing strings containing C,\nwhether we think varstr_cmp is leaky or not. So I'm not sure that\nfocusing our attention on leakiness is a helpful way to proceed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Sep 2019 17:19:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 5:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> However, if there is some character C that makes ICU misbehave like\n> that, we are going to have problems with indexing strings containing C,\n> whether we think varstr_cmp is leaky or not. So I'm not sure that\n> focusing our attention on leakiness is a helpful way to proceed.\n\nThat seems like a compelling argument to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 13 Sep 2019 08:14:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Sep 12, 2019 at 5:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > However, if there is some character C that makes ICU misbehave like\n> > that, we are going to have problems with indexing strings containing C,\n> > whether we think varstr_cmp is leaky or not. So I'm not sure that\n> > focusing our attention on leakiness is a helpful way to proceed.\n> \n> That seems like a compelling argument to me.\n\nAgreed.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 15 Sep 2019 22:04:40 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n>> On Thu, Sep 12, 2019 at 5:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> However, if there is some character C that makes ICU misbehave like\n>>> that, we are going to have problems with indexing strings containing C,\n>>> whether we think varstr_cmp is leaky or not. So I'm not sure that\n>>> focusing our attention on leakiness is a helpful way to proceed.\n\n>> That seems like a compelling argument to me.\n\n> Agreed.\n\nSo it seems that the consensus is that it's okay to mark these\nfunctions leakproof, because if any of the errors they throw\nare truly reachable for other than data-corruption reasons,\nwe would wish to try to prevent such errors. (Maybe through\nupstream validity checks? Hard to say how we'd do it exactly,\nwhen we don't have an idea what the problem is.)\n\nMy inclination is to do the proleakproof changes in HEAD, but\nnot touch v12. The inconsistency in leakproof markings in v12\nis annoying but it's not a regression or security hazard, so\nI'm thinking it's not worth a late catversion bump to fix.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Sep 2019 00:24:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
},
{
"msg_contents": "On 2019-09-16 06:24, Tom Lane wrote:\n> So it seems that the consensus is that it's okay to mark these\n> functions leakproof, because if any of the errors they throw\n> are truly reachable for other than data-corruption reasons,\n> we would wish to try to prevent such errors. (Maybe through\n> upstream validity checks? Hard to say how we'd do it exactly,\n> when we don't have an idea what the problem is.)\n\nYeah, it seems like as we expand our Unicode capabilities, we will see\nmore cases like \"it could fail here in theory, but it shouldn't happen\nfor normal data\", and the answer can't be to call all that untrusted or\nleaky. It's the job of the database software to sort that out.\nObviously, it will require careful evaluation in each case.\n\n> My inclination is to do the proleakproof changes in HEAD, but\n> not touch v12. The inconsistency in leakproof markings in v12\n> is annoying but it's not a regression or security hazard, so\n> I'm thinking it's not worth a late catversion bump to fix.\n\nSounds good, unless we do another catversion bump.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Sep 2019 09:17:21 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-09-16 06:24, Tom Lane wrote:\n>> So it seems that the consensus is that it's okay to mark these\n>> functions leakproof, because if any of the errors they throw\n>> are truly reachable for other than data-corruption reasons,\n>> we would wish to try to prevent such errors. (Maybe through\n>> upstream validity checks? Hard to say how we'd do it exactly,\n>> when we don't have an idea what the problem is.)\n\n> Yeah, it seems like as we expand our Unicode capabilities, we will see\n> more cases like \"it could fail here in theory, but it shouldn't happen\n> for normal data\", and the answer can't be to call all that untrusted or\n> leaky. It's the job of the database software to sort that out.\n> Obviously, it will require careful evaluation in each case.\n\nHere's a proposed patch to mark functions that depend on varstr_cmp\nas leakproof. I think we can apply this to HEAD and then close the\nopen item as \"won't fix for v12\".\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 17 Sep 2019 13:00:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Leakproofness of texteq()/textne()"
}
] |
[
{
"msg_contents": "Although the following problem does not seem to be exposed in the core, I\nthink it's still a problem to fix. (I've hit it when implementing a custom\nparser for extension configuration file.)\n\nIf makeJsonLexContextCstringLen() is passed need_escapes=false,\nJsonLexContext.strval is not initialized, and in turn, functions of\nJsonSemAction which should receive the string token value\n(e.g. object_field_start) receive NULL.\n\nAttached is a patch that fixes the problem. If this approach is acceptable,\nthen it'd probably be worth to also rename the JsonLexContext.strval field to\nsomething that recalls the \"de-escaping\", e.g. \"noesc\"?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Thu, 12 Sep 2019 17:51:50 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "JSON parser discards value of string token"
}
] |
[
{
"msg_contents": "I've been studying At{Sub,}{Abort,Cleanup}_Portals() for the last few\ndays and have come to the conclusion that the code is not entirely up\nto our usual standards. I believe that a good deal of the reason for\nthis is attributable to the poor quality of the code comments in this\narea, although there are perhaps some other contributing factors as\nwell, such as bullheadedness on my part and perhaps others.\n\nThe trouble starts with the header comment for AtAbort_Portals, which\nstates that \"At this point we run the cleanup hook if present, but we\ncan't release the portal's memory until the cleanup call.\" At the time\nthis logic was introduced in commit\nde28dc9a04c4df5d711815b7a518501b43535a26 (2003-05-02),\nAtAbort_Portals() affected all non-held portals without caring whether\nthey were active or not, and, UserAbortTransactionBlock() called\nAbortTransaction() directly, so typing \"ROLLBACK;\" would cause\nAtAbort_Portals() to be reached from within PortalRun(). Even if\nPortalRun() managed to return without crashing, the caller would next\ntry to call PortalDrop() on what was now an invalid pointer. However,\ncommit 8f9f1986034a2273e09ad10671e10d1adda21d1f (2004-09-16) changed\nthings so that UserAbortEndTransaction() just sets things up so that\nthe subsequent call to CommitTransactionCommand() would call\nAbortTransaction() instead of trying to do it right away, and that\ndoesn't happen until after we're done with the portal. As far as I\ncan see, that change made this comment mostly false, but the comment\nhas nevertheless managed to survive for another ~15 years. I think we\ncan, and in fact should, just drop the portal right here.\n\nAs far as actually making that work, there are a few wrinkles. The\nfirst is that we might be in the middle of FATAL. 
In that case, unlike\nthe ROLLBACK case, a call to PortalRun() is still on the stack, but\nwe'll exit the process rather than returning, so the fact that we've\ncreated a dangling pointer for the caller won't matter. However, as\nshown by commit ad9a274778d2d88c46b90309212b92ee7fdf9afe (2018-02-01)\nand the report that led up to it at\nhttps://www.postgresql.org/message-id/20180128034531.h6o4w3727ifof3jy%40alap3.anarazel.de\nit's not a good idea to try to clean up the portal in that case,\nbecause we might've already started shutting down critical systems.\nIt seems not only risky but also unnecessary: our process-local state\nis about to go away, and the executor shouldn't need to clean up any\nshared memory state that won't also get cleaned up by some other\nmechanism. So, it seems to me that if we reach AtAbort_Portals()\nduring FATAL processing, we should either (1) do nothing at all and\njust return or (2) forget about all the existing portals without\ncleaning them up and then return. The second option seems a little\nsafer to me, because it guarantees that if we somehow reach code that\nmight try to look up a portal later, it won't find anything. But I\nthink it's arguable.\n\nThe second wrinkle is that there might be an active portal. Apart\nfrom the FATAL case already mentioned, I think the only way this can\nhappen is some C code that purposefully calls AbortTransaction()\nin the middle of executing a command. It can't be an ERROR, because\nthen the portal would be marked as failed before we get here, and it\ncan't be an explicit ROLLBACK, because as noted above, that case was\nchanged 15 years ago. It's got to be some other case where C code\ncalls AbortTransaction() voluntarily in the middle of a statement. For\nover a decade, there were no cases at all of this type, but the code\nin this function catered to hypothetical cases by marking the portal\nfailed. 
By 2016, Noah had figured out that this was bogus, and that\nany future cases would likely require different handling, but Tom and\nI shouted him down:\n\nhttp://postgr.es/m/67674.1454259004@sss.pgh.pa.us\n\nThe end result of that discussion was commit\n41baee7a9312eefb315b6b2973ac058c9efaa9cf (2016-02-05) which left the\ncode as it was but added comments noting that it was wrong. It\nactually wasn't entirely wrong, because it handled the FATAL case\nmentioned above by the byzantine mechanism of invoking the portal's\ncleanup callback after first setting the status to PORTAL_FAILED.\nSince the only existing cleanup callback arranges to do nothing if the\nstatus is PORTAL_FAILED, this worked out to a complicated way of\n(correctly) skipping the callback in the FATAL case.\n\nBut, probably because that wasn't documented in any understandable\nway, possibly because nobody really understood it, when commit\n8561e4840c81f7e345be2df170839846814fa004 (2018-01-22) added support\nfor transaction control in procedures, it just removed the code\nmarking active portals as failed, just as Noah had predicted would be\nnecessary ~2 years earlier. Andres noticed that this broke the FATAL\ncase and tracked it back to the removal of this code, resulting in it\ngetting put back, but just for the FATAL case, in commit\nad9a274778d2d88c46b90309212b92ee7fdf9afe (2018-02-01). See also\ndiscussion at:\n\nhttps://www.postgresql.org/message-id/20180128034531.h6o4w3727ifof3jy%40alap3.anarazel.de\n\nI think that the code here still isn't really right. Apart from the\nfact that the comments don't explain anything very clearly and the\nFATAL case is handled in a way that looks overly complicated and\naccidental, the handling in AtSubAbort_Portals() hasn't been updated,\nso it's now inconsistent with AtAbort_Portals() as to both substance\nand comments. 
I think that's only held together because stored\nprocedures don't yet support explicit control over subtransactions,\nonly top-level transactions.\n\nStepping back a bit, stored procedures are a good example of a command\nthat uses multiple transactions internally. We have a few others, such\nas VACUUM, but at present, that case only commits transactions\ninternally; it does not roll them back internally. If it did, it\nwould want the same thing that procedures want, namely, to leave the\nactive portal alone. It doesn't quite work to leave the active portal\ncompletely alone, because the portal has got a pointer to a\nResourceOwner which is about to be freed by end-of-transaction\ncleanup; at the least, we've got to clear the pointer to that so that,\nwhen the multi-transaction statement eventually finishes in some future\ntransaction, it doesn't try to do anything with a dangling pointer.\nBut note that the resource owner can't really be in use in this\nsituation anyway; if it is, then the transaction abort is about to\nblow away resources that the statement still needs. Similarly, the\nstatement can't be using any transaction-local memory, because that\ntoo is about to get blown away. The only thing that can work at all\nhere is a statement that's been carefully designed to be able to\nsurvive starting and ending transactions internally. Such a statement\nmust not rely on any transaction-local resources. The only thing I'm\nsure we have to do is set portal->resowner to NULL. Calling the\ncleanup hook, as the current code does, can't be right, because we'd\nbe cleaning up something that isn't going away. I think it only works\nnow because this is another case where the cleanup hook arranges to do\nnothing in the cases where calling it is wrong in the first place. 
The\ncurrent code also calls PortalReleaseCachedPlan in this case; I'm not\n100% certain whether that's appropriate or not.\n\nAttached is a patch that (1) removes At(Sub)Cleanup_Portals() entirely\nin favor of dropping portals on the spot in At(Sub)Abort_Portals(),\n(2) overhauls the comments in this area, and (3) makes\nAtSubAbort_Portals() consistent with AtAbort_Portals(). One possible\nconcern here - which Andres mentioned to me during off-list discussion\n- is that someone might create a portal for a ROLLBACK statement, and\nthen try to execute that portal after an error has occurred. In\ntheory, keeping the portal around between abort time and cleanup time\nmight allow this to work, but it actually doesn't work, so there's no\nfunctional regression. As noted by Amit Kapila also during off-list\ndiscussion, IsTransactionStmtList() only works if portal->stmts is\nset, and PortalReleaseCachedPlan() clears portal->stmts, so once we've\naborted the transaction, any previously-created portals are\nunrecognizable as transaction statements.\n\nThis area is incredibly confusing to understand, so it's entirely\npossible that I've gotten some things wrong. However, I think it's\nworth the effort to try to do some cleanup here, because I think it's\noverly complex and under-documented. On top of the commits already\nmentioned, some of which I think demonstrate that the issues here\nhaven't been completely understood, I found other bug fix commits that\nlook like bugs that might've never happened in the first place if this\nweren't all so confusing.\n\nFeedback appreciated,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 12 Sep 2019 16:42:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "abort-time portal cleanup"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 2:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I've been studying At{Sub,}{Abort,Cleanup}_Portals() for the last few\n> days and have come to the conclusion that the code is not entirely up\n> to our usual standards. I believe that a good deal of the reason for\n> this is attributable to the poor quality of the code comments in this\n> area, although there are perhaps some other contributing factors as\n> well, such as bullheadedness on my part and perhaps others.\n>\n> The trouble starts with the header comment for AtAbort_Portals, which\n> states that \"At this point we run the cleanup hook if present, but we\n> can't release the portal's memory until the cleanup call.\" At the time\n> this logic was introduced in commit\n> de28dc9a04c4df5d711815b7a518501b43535a26 (2003-05-02),\n> AtAbort_Portals() affected all non-held portals without caring whether\n> they were active or not, and, UserAbortTransactionBlock() called\n> AbortTransaction() directly, so typing \"ROLLBACK;\" would cause\n> AtAbort_Portals() to be reached from within PortalRun(). Even if\n> PortalRun() managed to return without crashing, the caller would next\n> try to call PortalDrop() on what was now an invalid pointer. However,\n> commit 8f9f1986034a2273e09ad10671e10d1adda21d1f (2004-09-16) changed\n> things so that UserAbortEndTransaction() just sets things up so that\n> the subsequent call to CommitTransactionCommand() would call\n> AbortTransaction() instead of trying to do it right away, and that\n> doesn't happen until after we're done with the portal. As far as I\n> can see, that change made this comment mostly false, but the comment\n> has nevertheless managed to survive for another ~15 years. I think we\n> can, and in fact should, just drop the portal right here.\n>\n> As far as actually making that work, there are a few wrinkles. The\n> first is that we might be in the middle of FATAL. 
In that case, unlike\n> the ROLLBACK case, a call to PortalRun() is still on the stack, but\n> we'll exit the process rather than returning, so the fact that we've\n> created a dangling pointer for the caller won't matter. However, as\n> shown by commit ad9a274778d2d88c46b90309212b92ee7fdf9afe (2018-02-01)\n> and the report that led up to it at\n> https://www.postgresql.org/message-id/20180128034531.h6o4w3727ifof3jy%40alap3.anarazel.de\n> it's not a good idea to try to clean up the portal in that case,\n> because we might've already started shutting down critical systems.\n> It seems not only risky but also unnecessary: our process-local state\n> is about to go away, and the executor shouldn't need to clean up any\n> shared memory state that won't also get cleaned up by some other\n> mechanism. So, it seems to me that if we reach AtAbort_Portals()\n> during FATAL processing, we should either (1) do nothing at all and\n> just return or (2) forget about all the existing portals without\n> cleaning them up and then return. The second option seems a little\n> safer to me, because it guarantees that if we somehow reach code that\n> might try to look up a portal later, it won't find anything. 
But I\n> think it's arguable.\n>\n\nI agree with your position on this.\n\n>\n> Attached is a patch that (1) removes At(Sub)Cleanup_Portals() entirely\n> in favor of dropping portals on the spot in At(Sub)Abort_Portals(),\n> (2) overhauls the comments in this area, and (3) makes\n> AtSubAbort_Portals() consistent with AtAbort_Portals().\n\nThe overall idea seems good to me, but I have a few comments on the changes.\n\n1.\n@@ -2756,7 +2756,6 @@ CleanupTransaction(void)\n /*\n * do abort cleanup processing\n */\n- AtCleanup_Portals(); /* now safe to release portal\nmemory */\n AtEOXact_Snapshot(false, true); /* and release the transaction's\nsnapshots */\n\n CurrentResourceOwner = NULL; /* and resource owner */\n@@ -5032,8 +5031,6 @@ CleanupSubTransaction(void)\n elog(WARNING, \"CleanupSubTransaction while in %s state\",\n TransStateAsString(s->state));\n\n- AtSubCleanup_Portals(s->subTransactionId);\n-\n\nAfter this cleanup, I think we don't need At(Sub)Abort_Portals in\nAbortOutOfAnyTransaction() for the states TBLOCK_(SUB)ABORT and\nfriends. This is because AbortTransaction itself would have zapped the\nportal.\n\n2. You seem to have forgotten to remove AtCleanup_Portals() from portal.h\n\n3.\n /*\n- * If it was created in the current transaction, we\ncan't do normal\n- * shutdown on a READY portal either; it might refer to\nobjects\n- * created in the failed transaction. See comments in\n- * AtSubAbort_Portals.\n- */\n- if (portal->status == PORTAL_READY)\n- MarkPortalFailed(portal);\n-\n\nWhy is it safe to remove this check? It has been explained in commit\n7981c342 why we need that check. I don't see any explanation in the email\nor patch which justifies this code removal. Is it because you removed\nPortalCleanup? 
If so, that is still called from PortalDrop?\n\n4.\n-AtCleanup_Portals(void)\n-{\n- HASH_SEQ_STATUS status;\n- PortalHashEnt *hentry;\n-\n- hash_seq_init(&status, PortalHashTable);\n-\n- while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) !=\nNULL)\n- {\n- Portal portal = hentry->portal;\n-\n- /*\n- * Do not touch active portals --- this can only happen\nin the case of\n- * a multi-transaction command.\n+ * If the status is PORTAL_ACTIVE, then we must be\nexecuting a command\n+ * that uses multiple transactions internally. In that\ncase, the\n+ * command in question must be one that does not\ninternally rely on\n+ * any transaction-lifetime resources, because they\nwould disappear\n+ * in the upcoming transaction-wide cleanup.\n */\n if (portal->status == PORTAL_ACTIVE)\n\nI am not able to understand how we can reach here with the portal state\n'active' for a multi-transaction command. It seems that wherever we mark a\nportal as active, we don't relinquish control unless its state is\nchanged. Can you share an example where this can happen?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Sep 2019 15:00:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: abort-time portal cleanup"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 2:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n/*\n* Otherwise, do nothing to cursors held over from a previous\n* transaction.\n*/\nif (portal->createSubid == InvalidSubTransactionId)\ncontinue;\n\n/*\n* Do nothing to auto-held cursors. This is similar to the case of a\n* cursor from a previous transaction, but it could also be that the\n* cursor was auto-held in this transaction, so it wants to live on.\n*/\nif (portal->autoHeld)\ncontinue;\n\nI have one doubt that why do we need the second check. Because before\nsetting portal->autoHeld to true we always call HoldPortal therein we\nset portal->createSubid to InvalidSubTransactionId. So it seems to me\nthat the second condition will never reach. Am I missing something?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Sep 2019 16:04:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: abort-time portal cleanup"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-12 16:42:46 -0400, Robert Haas wrote:\n> The trouble starts with the header comment for AtAbort_Portals, which\n> states that \"At this point we run the cleanup hook if present, but we\n> can't release the portal's memory until the cleanup call.\" At the time\n> this logic was introduced in commit\n> de28dc9a04c4df5d711815b7a518501b43535a26 (2003-05-02),\n> AtAbort_Portals() affected all non-held portals without caring whether\n> they were active or not, and, UserAbortTransactionBlock() called\n> AbortTransaction() directly, so typing \"ROLLBACK;\" would cause\n> AtAbort_Portals() to be reached from within PortalRun(). Even if\n> PortalRun() managed to return without crashing, the caller would next\n> try to call PortalDrop() on what was now an invalid pointer. However,\n> commit 8f9f1986034a2273e09ad10671e10d1adda21d1f (2004-09-16) changed\n> things so that UserAbortEndTransaction() just sets things up so that\n> the subsequent call to CommitTransactionCommand() would call\n> AbortTransaction() instead of trying to do it right away, and that\n> doesn't happen until after we're done with the portal. As far as I\n> can see, that change made this comment mostly false, but the comment\n> has nevertheless managed to survive for another ~15 years. I think we\n> can, and in fact should, just drop the portal right here.\n\nNice digging.\n\n\n> As far as actually making that work, there are a few wrinkles. The\n> first is that we might be in the middle of FATAL. In that case, unlike\n> the ROLLBACK case, a call to PortalRun() is still on the stack, but\n> we'll exit the process rather than returning, so the fact that we've\n> created a dangling pointer for the caller won't matter. 
However, as\n> shown by commit ad9a274778d2d88c46b90309212b92ee7fdf9afe (2018-02-01)\n> and the report that led up to it at\n> https://www.postgresql.org/message-id/20180128034531.h6o4w3727ifof3jy%40alap3.anarazel.de\n> it's not a good idea to try to clean up the portal in that case,\n> because we might've already started shutting down critical systems.\n> It seems not only risky but also unnecessary: our process-local state\n> is about to go away, and the executor shouldn't need to clean up any\n> shared memory state that won't also get cleaned up by some other\n> mechanism. So, it seems to me that if we reach AtAbort_Portals()\n> during FATAL processing, we should either (1) do nothing at all and\n> just return or (2) forget about all the existing portals without\n> cleaning them up and then return. The second option seems a little\n> safer to me, because it guarantees that if we somehow reach code that\n> might try to look up a portal later, it won't find anything. But I\n> think it's arguable.\n\nHm. Doing that cleanup requires digging through all the portals etc. I'd\nrather rely on less state being correct than more during FATAL\nprocessing.\n\n\n> The second wrinkle is that there might be an active portal. Apart\n> from the FATAL case already mentioned, I think the only way this can\n> happen is some C code that calls purposefully calls AbortTransaction()\n> in the middle of executing a command. It can't be an ERROR, because\n> then the portal would be marked as failed before we get here, and it\n> can't be an explicit ROLLBACK, because as noted above, that case was\n> changed 15 years ago. It's got to be some other case where C code\n> calls AbortTransaction() voluntarily in the middle of a statement. For\n> over a decade, there were no cases at all of this type, but the code\n> in this function catered to hypothetical cases by marking the portal\n> failed. 
By 2016, Noah had figured out that this was bogus, and that\n> any future cases would likely require different handling, but Tom and\n> I shouted him down:\n\nHm. But wouldn't doing so run into a ton of problems anyway? I mean we\nneed to do a ton of checks and special case hangups to make\nCommitTransactionCommand();StartTransactionCommand(); work for VACUUM,\nCIC, ...\n\nThe cases where one can use AbortTransaction() (via\nAbortCurrentTransaction() presumably) are ones where either there's no\nsurrounding code relying on the transaction (e.g. autovacuum,\npostgres.c), or where special care has been taken with portals\n(e.g. _SPI_rollback()). We didn't have the pin mechanism back then, so\nI think even if we accept your/Tom's reasoning from back then (I don't\nreally), it's outdated now that the pin mechanism exists.\n\nI'd be happy if we added some defenses against such bogus cases being\nintroduced (i.e. erroring out if we encounter an active portal during\nabort processing).\n\n> Stepping back a bit, stored procedures are a good example of a command\n> that uses multiple transactions internally. We have a few others, such\n> as VACUUM, but at present, that case only commits transactions\n> internally; it does not roll them back internally. If it did, it\n> would want the same thing that procedures want, namely, to leave the\n> active portal alone. It doesn't quite work to leave the active portal\n> completely alone, because the portal has got a pointer to a\n> ResourceOwner which is about to be freed by end-of-transaction\n> cleanup;\n\nWell, that's why _SPI_commit()/_SPI_rollback() do a HoldPinnedPortals(),\nwhich, via HoldPortal(), sets portal->resowner to = NULL.\n\n\n> The current code also calls PortalReleaseCachedPlan in this case; I'm\n> not 100% certain whether that's appropriate or not.\n\nI think it's appropriate, because we cannot guarantee that the plan is\nstill usable. 
Besides normal plan invalidation issues, the catalog\ncontents the plan might rely on might have only existed in the aborted\ntransaction - which seems like a fatal problem. That's why holding\nportals persists the portalstore. Which then also obsoletes the plan.\n\n\n> Attached is a patch that (1) removes At(Sub)Cleanup_Portals() entirely\n> in favor of dropping portals on the spot in At(Sub)Abort_Portals(),\n> (2) overhauls the comments in this area, and (3) makes\n> AtSubAbort_Portals() consistent with AtAbort_Portals(). One possible\n> concern here - which Andres mentioned to me during off-list discussion\n> - is that someone might create a portal for a ROLLBACK statement, and\n> then try to execute that portal after an error has occurred.\n\nBesides not quite happening as I thought, as you reference below, I think\nthat only mattered if a rollback gets executed in a failed transaction -\nthat has to happen after a sync message, because otherwise we'd just\nskip the command. But to be problematic, the bind would have to be from\nbefore the sync, and the exec from after - which'd be an extremely\nabsurd use of the protocol.\n\n\n> /*\n> * Abort processing for portals.\n> *\n> - * At this point we run the cleanup hook if present, but we can't release the\n> - * portal's memory until the cleanup call.\n> + * Most portals don't and shouldn't survive transaction abort, but there are\n> + * some important special cases where they do and must: (1) held portals must\n> + * survive by definition, and (2) any active portal must be part of a command\n> + * that uses multiple transactions internally, and needs to survive until\n> + * execution of that command has completed.\n\nHm. Why are active, rather than pinned, portals relevant here? Normal\nmulti-transactional commands (e.g. vacuum, CIC) shouldn't get here with\nan active portal, as they don't catch errors. And procedures should have\npinned the relevant portals?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 Sep 2019 13:45:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: abort-time portal cleanup"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 4:45 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm. Doing that cleanup requires digging through all the portals etc. I'd\n> rather rely on less state being correct than more during FATAL\n> processing.\n\nI agree with the principle, but the advantage of removing the hash\ntable entries is that it protects against any other code that might\ntry to access the portal machinery later; I thought that was\nworthwhile enough to justify doing this here. I don't feel\nsuper-strongly about it, but I do still like that approach.\n\n> > The second wrinkle is that there might be an active portal. Apart\n> > from the FATAL case already mentioned, I think the only way this can\n> > happen is some C code that calls purposefully calls AbortTransaction()\n> > in the middle of executing a command. It can't be an ERROR, because\n> > then the portal would be marked as failed before we get here, and it\n> > can't be an explicit ROLLBACK, because as noted above, that case was\n> > changed 15 years ago. It's got to be some other case where C code\n> > calls AbortTransaction() voluntarily in the middle of a statement. For\n> > over a decade, there were no cases at all of this type, but the code\n> > in this function catered to hypothetical cases by marking the portal\n> > failed. By 2016, Noah had figured out that this was bogus, and that\n> > any future cases would likely require different handling, but Tom and\n> > I shouted him down:\n>\n> Hm. But wouldn't doing so run into a ton of problems anyway? I mean we\n> need to do a ton of checks and special case hangups to make\n> CommitTransactionCommand();StartTransactionCommand(); work for VACUUM,\n> CIC, ...\n\nRight... those are the kinds of cases I'm talking about here, just for\nabort rather than commit.\n\n> The cases where one can use AbortTransaction() (via\n> AbortCurrentTransaction() presumably) are ones where either there's no\n> surrounding code relying on the transaction (e.g. 
autovacuum,\n> postgres.c), or where special care has been taken with portals\n> (e.g. _SPI_rollback()). We didn't have the pin mechanism back then, so\n> I think even if we accept your/Tom's reasoning from back then (I don't\n> really), it's outdated now that the pin mechanism exists.\n\nIt isn't, actually. To respond to this and also your question below\nabout why I'm looking at active portal rather than pinned portals, try\nadding this debugging code to AtAbort_Portals:\n\n+ if (portal->status == PORTAL_ACTIVE)\n+ elog(NOTICE, \"this portal is ACTIVE and %spinned\",\n+ portal->portalPinned ? \"\" : \"NOT \");\n\nThen run 'make -C src/pl/plpgsql check' and check\nsrc/pl/plpgsql/src/regression.diffs and you'll see a whole lot of\nthis:\n\n+NOTICE: this portal is ACTIVE and NOT pinned\n\nThe PLs pin the portals they generate internally, but they don't force\nthe surrounding portal in which the toplevel query is executing to be\npinned. AFAICT, pinning is mostly guarding against explicit\nuser-initiated drops of portals that were automatically generated by a\nPL, whereas the portal's state is about tracking what the system is\ndoing with the portal.\n\n(I think this could be a lot better documented than it is, but looking\nat the commit history, I'm fairly sure that's what is happening here.)\n\n> I'd be happy if we added some defenses against such bogus cases being\n> introduced (i.e. erroring out if we encounter an active portal during\n> abort processing).\n\nErroring out during error handling is probably a bad idea, but also, see above.\n\n> > Stepping back a bit, stored procedures are a good example of a command\n> > that uses multiple transactions internally. We have a few others, such\n> > as VACUUM, but at present, that case only commits transactions\n> > internally; it does not roll them back internally. If it did, it\n> > would want the same thing that procedures want, namely, to leave the\n> > active portal alone. 
It doesn't quite work to leave the active portal\n> > completely alone, because the portal has got a pointer to a\n> > ResourceOwner which is about to be freed by end-of-transaction\n> > cleanup;\n>\n> Well, that's why _SPI_commit()/_SPI_rollback() do a HoldPinnedPortals(),\n> which, via HoldPortal(), sets portal->resowner to = NULL.\n\nRight, but it's still necessary for AtAbort_Portals() to do the same\nthing, again because the top-level portal isn't pinned. If you apply\nmy patch, comment out the line that does portal->resowner = NULL; for\nan active portal, and run make -C src/pl/plpgsql check, it will seg\nfault inside exec_simple_query -> PortalDrop -> ResourceOwnerRelease\n-> etc.\n\n> > The current code also calls PortalReleaseCachedPlan in this case; I'm\n> > not 100% certain whether that's appropriate or not.\n>\n> I think it's appropriate, because we cannot guarantee that the plan is\n> still usable. Besides normal plan invalidation issues, the catalog\n> contents the plan might rely on might have only existed in the aborted\n> transaction - which seems like a fatal problem. That's why holding\n> portals persists the portalstore. Which then also obsoletes the plan.\n\nIn a case like this, it can't really be an actual planable statement.\nIf the executor were involved, aborting the running transaction and\nstarting a new one would certainly result in a crash, because the\nmemory context in which we had saved all of our working state would be\nvanish out from under us. It's got to be a procedure call or maybe\nsome kind of DDL command that has been specially-crafted to survive a\nmid-command abort. It's not clear to me that the issues for plan\ninvalidation and catalog contents are the same in those kinds of cases\nas they are for planable statements, but perhaps they are.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 27 Sep 2019 10:35:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: abort-time portal cleanup"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 5:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> After this cleanup, I think we don't need At(Sub)Abort_Portals in\n> AbortOutOfAnyTransaction() for the states TBLOCK_(SUB)ABORT and\n> friends. This is because AbortTransaction itself would have zapped the\n> portal.\n\nNot if the ROLLBACK itself failed - in that case, the portal would\nhave been active at the time, and thus not subject to removal. And, as\nthe existing comments in xact.c state, that's exactly why that\nfunction call is there.\n\n> 2. You seem to forgot removing AtCleanup_Portals() from portal.h\n\nOops. Fixed in the attached version.\n\n> - if (portal->status == PORTAL_READY)\n> - MarkPortalFailed(portal);\n>\n> Why it is safe to remove this check? It has been explained in commit\n> 7981c342 why we need that check. I don't see any explanation in email\n> or patch which justifies this code removal. Is it because you removed\n> PortalCleanup? If so, that is still called from PortalDrop?\n\nAll MarkPortalFailed() does is change the status to PORTAL_FAILED and\ncall the cleanup hook. PortalDrop() calls the cleanup hook, and we\ndon't need to change the status if we're removing it completely.\n\n> 4.\n> + * If the status is PORTAL_ACTIVE, then we must be\n> executing a command\n> + * that uses multiple transactions internally. In that\n> case, the\n> + * command in question must be one that does not\n> internally rely on\n> + * any transaction-lifetime resources, because they\n> would disappear\n> + * in the upcoming transaction-wide cleanup.\n> */\n> if (portal->status == PORTAL_ACTIVE)\n>\n> I am not able to understand how we can reach with the portal state as\n> 'active' for a multi-transaction command. It seems wherever we mark\n> portal as active, we don't relinquish the control unless its state is\n> changed. Can you share some example where this can happen?\n\nYeah -- a plpgsql function or procedure that does \"ROLLBACK;\"\ninternally. 
The calling code doesn't relinquish control, but it does\nreach AbortTransaction().\n\nIf you want to see it happen, just put an elog() inside that block and\nrun make -C src/pl/plpgsql check.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 7 Oct 2019 12:14:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: abort-time portal cleanup"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 6:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Fri, Sep 13, 2019 at 2:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> /*\n> * Otherwise, do nothing to cursors held over from a previous\n> * transaction.\n> */\n> if (portal->createSubid == InvalidSubTransactionId)\n> continue;\n>\n> /*\n> * Do nothing to auto-held cursors. This is similar to the case of a\n> * cursor from a previous transaction, but it could also be that the\n> * cursor was auto-held in this transaction, so it wants to live on.\n> */\n> if (portal->autoHeld)\n> continue;\n>\n> I have one doubt that why do we need the second check. Because before\n> setting portal->autoHeld to true we always call HoldPortal therein we\n> set portal->createSubid to InvalidSubTransactionId. So it seems to me\n> that the second condition will never reach. Am I missing something?\n\nNot that I can see, but I don't necessarily think this patch needs to\nchange it, either.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 7 Oct 2019 12:27:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: abort-time portal cleanup"
},
{
"msg_contents": "Hi,\n\nOn 2019-10-07 12:14:52 -0400, Robert Haas wrote:\n> > - if (portal->status == PORTAL_READY)\n> > - MarkPortalFailed(portal);\n> >\n> > Why it is safe to remove this check? It has been explained in commit\n> > 7981c342 why we need that check. I don't see any explanation in email\n> > or patch which justifies this code removal. Is it because you removed\n> > PortalCleanup? If so, that is still called from PortalDrop?\n>\n> All MarkPortalFailed() does is change the status to PORTAL_FAILED and\n> call the cleanup hook. PortalDrop() calls the cleanup hook, and we\n> don't need to change the status if we're removing it completely.\n\nNote that currently PortalCleanup() behaves differently depending on\nwhether the portal is set to failed or not...\n\n- Andres\n\n\n",
"msg_date": "Tue, 8 Oct 2019 11:10:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: abort-time portal cleanup"
},
{
"msg_contents": "On Tue, Oct 8, 2019 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-10-07 12:14:52 -0400, Robert Haas wrote:\n> > > - if (portal->status == PORTAL_READY)\n> > > - MarkPortalFailed(portal);\n> > >\n> > > Why it is safe to remove this check? It has been explained in commit\n> > > 7981c342 why we need that check. I don't see any explanation in email\n> > > or patch which justifies this code removal. Is it because you removed\n> > > PortalCleanup? If so, that is still called from PortalDrop?\n> >\n> > All MarkPortalFailed() does is change the status to PORTAL_FAILED and\n> > call the cleanup hook. PortalDrop() calls the cleanup hook, and we\n> > don't need to change the status if we're removing it completely.\n>\n> Note that currently PortalCleanup() behaves differently depending on\n> whether the portal is set to failed or not...\n\nUrk, yeah, I forgot about that. I think that's a wretched hack that\nsomebody ought to untangle at some point, but maybe for purposes of\nthis patch it makes more sense to just put the MarkPortalFailed call\nback.\n\nIt's unclear to me why there's a special case here specifically for\nPORTAL_READY. Like, why is PORTAL_NEW or PORTAL_DEFINED or\nPORTAL_DONE any different? It seems like if we're aborting the\ntransaction, we should not be calling ExecutorFinish()/ExecutorEnd()\nfor anything. We could achieve that result by just nulling out the\ncleanup hook unconditionally instead of having this complicated dance\nwhere we mark ready portals failed, which calls the cleanup hook,\nwhich decides not to do anything because the portal has been marked\nfailed. It'd be great if there were a few more comments in this file\nexplaining what the thinking behind all this was.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 9 Oct 2019 09:25:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: abort-time portal cleanup"
},
{
"msg_contents": "On Wed, Oct 9, 2019 at 6:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Oct 8, 2019 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-10-07 12:14:52 -0400, Robert Haas wrote:\n> > > > - if (portal->status == PORTAL_READY)\n> > > > - MarkPortalFailed(portal);\n> > > >\n> > > > Why it is safe to remove this check? It has been explained in commit\n> > > > 7981c342 why we need that check. I don't see any explanation in email\n> > > > or patch which justifies this code removal. Is it because you removed\n> > > > PortalCleanup? If so, that is still called from PortalDrop?\n> > >\n> > > All MarkPortalFailed() does is change the status to PORTAL_FAILED and\n> > > call the cleanup hook. PortalDrop() calls the cleanup hook, and we\n> > > don't need to change the status if we're removing it completely.\n> >\n> > Note that currently PortalCleanup() behaves differently depending on\n> > whether the portal is set to failed or not...\n> >\n\nYeah, this is the reason, I mentioned it.\n\n> Urk, yeah, I forgot about that. I think that's a wretched hack that\n> somebody ought to untangle at some point, but maybe for purposes of\n> this patch it makes more sense to just put the MarkPortalFailed call\n> back.\n>\n\n+1.\n\n> It's unclear to me why there's a special case here specifically for\n> PORTAL_READY. Like, why is PORTAL_NEW or PORTAL_DEFINED or\n> PORTAL_DONE any different?\n>\n\nIf read the commit message of commit 7981c34279 [1] which introduced\nthis, then we might get some clue. It is quite possible that we need\nsame handling for PORTAL_NEW, PORTAL_DEFINED, etc. but it seems we\njust hit the problem mentioned in commit 7981c34279 for PORTAL_READY\nstate. 
I think as per commit, if we don't mark it failed, then with\nauto_explain things can go wrong.\n\n[1] -\ncommit 7981c34279fbddc254cfccb9a2eec4b35e692a12\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Thu Feb 18 03:06:46 2010 +0000\n\nForce READY portals into FAILED state when a transaction or\nsubtransaction is aborted, if they were created within the failed\nxact. This prevents ExecutorEnd from being run on them, which is a\ngood idea because they may contain references to tables or other\nobjects that no longer exist. In particular this is hazardous when\nauto_explain is active, but it's really rather surprising that nobody\nhas seen an issue with this before. I'm back-patching this to 8.4,\nsince that's the first version that contains auto_explain or an\nExecutorEnd hook, but I wonder whether we shouldn't back-patch\nfurther.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 10 Oct 2019 18:03:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: abort-time portal cleanup"
}
] |
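Editor's aside on the thread above: the rule the patch converges on -- at abort, keep an active portal (detached from its dying ResourceOwner) and drop everything else outright, with no intermediate FAILED step -- can be sketched with toy structures. This is illustrative only; `ToyPortal`, `ToyStatus`, and `toy_at_abort` are hypothetical stand-ins, not the real PostgreSQL portal machinery, and the sketch omits the cached-plan release and hash-table bookkeeping the patch also does.

```c
#include <assert.h>
#include <stddef.h>
#include <stdbool.h>

/* Toy model of the portal states discussed above -- NOT the real
 * PostgreSQL Portal/ResourceOwner structures, just enough to show why an
 * ACTIVE portal (e.g. one running a procedure that issued an internal
 * ROLLBACK) must outlive the abort while the rest can be dropped. */
typedef enum { P_NEW, P_DEFINED, P_READY, P_ACTIVE, P_FAILED } ToyStatus;

typedef struct
{
    ToyStatus status;
    bool      dropped;
    void     *resowner;         /* stands in for portal->resowner */
} ToyPortal;

/* Sketch of the patched AtAbort_Portals() rule: active portals survive but
 * are detached from the about-to-be-freed ResourceOwner; everything else
 * is dropped on the spot instead of being marked failed for later cleanup. */
static void
toy_at_abort(ToyPortal *portal)
{
    if (portal->status == P_ACTIVE)
    {
        portal->resowner = NULL;    /* resowner dies with the transaction */
        return;                     /* leave it for the running command */
    }
    portal->dropped = true;
}
```

The point of the sketch is only the asymmetry between the ACTIVE case and the rest, which is what the subthread about pinned-vs-active portals is arguing over.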
[
{
"msg_contents": "Hello,\n\nIn the following code in execTuples.c, shouldn't srcdesc point to the source slot's tuple descriptor? The attached fix passes make check. What kind of failure could this cause?\n\nBTW, I thought that in PostgreSQL coding convention, local variables should be defined at the top of blocks, but this function writes \"for (int natts;\". I didn't modify it because many other source files also write in that way.\n\n\n--------------------------------------------------\nstatic void\ntts_virtual_copyslot(TupleTableSlot *dstslot, TupleTableSlot *srcslot)\n{\n TupleDesc srcdesc = dstslot->tts_tupleDescriptor;\n\n Assert(srcdesc->natts <= dstslot->tts_tupleDescriptor->natts);\n\n tts_virtual_clear(dstslot);\n\n slot_getallattrs(srcslot);\n\n for (int natt = 0; natt < srcdesc->natts; natt++)\n {\n dstslot->tts_values[natt] = srcslot->tts_values[natt];\n dstslot->tts_isnull[natt] = srcslot->tts_isnull[natt];\n }\n--------------------------------------------------\n\n\nRegards\nTakayuki Tsunakawa",
"msg_date": "Fri, 13 Sep 2019 00:21:15 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[bug fix??] Fishy code in tts_virtual_copyslot()"
},
{
"msg_contents": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com> writes:\n> In the following code in execTuples.c, shouldn' srcdesc point to the source slot's tuple descriptor? The attached fix passes make check. What kind of failure could this cause?\n\nYeah, sure looks like a typo to me too.\n\nI temporarily changed the Assert to be \"==\" rather than \"<=\", and\nit still passed check-world, so evidently we are not testing any\ncases where the descriptors are of different lengths. This explains\nthe lack of symptoms. It's still a bug though, so pushed.\n\n> BTW, I thought that in PostgreSQL coding convention, local variables should be defined at the top of blocks, but this function writes \"for (int natts;\".\n\nYeah, we've agreed to join the 21st century to the extent of allowing\nlocal for-loop variables.\n\nThanks for the report!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Sep 2019 14:24:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix??] Fishy code in tts_virtual_copyslot()"
},
{
"msg_contents": "From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> I temporarily changed the Assert to be \"==\" rather than \"<=\", and\n> it still passed check-world, so evidently we are not testing any\n> cases where the descriptors are of different lengths. This explains\n> the lack of symptoms. It's still a bug though, so pushed.\n\nThank you for committing.\n\n> > BTW, I thought that in PostgreSQL coding convention, local variables\n> should be defined at the top of blocks, but this function writes \"for (int\n> natts;\".\n> \n> Yeah, we've agreed to join the 21st century to the extent of allowing\n> local for-loop variables.\n\nThat's good news. It'll help a bit to code comfortably.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n",
"msg_date": "Mon, 23 Sep 2019 23:59:07 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [bug fix??] Fishy code in tts_virtual_copyslot()"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-22 14:24:36 -0400, Tom Lane wrote:\n> \"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com> writes:\n> > In the following code in execTuples.c, shouldn' srcdesc point to the source slot's tuple descriptor? The attached fix passes make check. What kind of failure could this cause?\n> \n> Yeah, sure looks like a typo to me too.\n\nIndeed, thanks for catching and pushing.\n\n\n> I temporarily changed the Assert to be \"==\" rather than \"<=\", and\n> it still passed check-world, so evidently we are not testing any\n> cases where the descriptors are of different lengths. This explains\n> the lack of symptoms.\n\nI have a hard time seeing cases where it'd be a good idea to copy slots\nof a smaller natts into a slot with larger natts. So i'm not too\nsurprised.\n\n\n> It's still a bug though, so pushed.\n\nIndeed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 Sep 2019 14:57:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [bug fix??] Fishy code in tts_virtual_copyslot()"
}
] |
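Editor's aside: the fix committed in the thread above is a one-word change -- take `srcdesc` from the *source* slot's tuple descriptor and use it as the loop bound. A minimal, self-contained model of the corrected copy loop follows; `MockSlot` and `mock_copyslot` are hypothetical stand-ins (the real `TupleTableSlot` and `tts_virtual_copyslot` are far richer), kept only to make the loop-bound logic executable.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for the slot fields tts_virtual_copyslot() touches; "natts"
 * plays the role of tts_tupleDescriptor->natts. */
typedef struct
{
    int  natts;
    long tts_values[4];
    bool tts_isnull[4];
} MockSlot;

/* Corrected logic: iterate over the SOURCE descriptor's natts (the fix),
 * not the destination's (the typo); a shorter source may legitimately be
 * copied into a wider destination, hence the <= assertion. */
static void
mock_copyslot(MockSlot *dstslot, MockSlot *srcslot)
{
    assert(srcslot->natts <= dstslot->natts);

    for (int natt = 0; natt < srcslot->natts; natt++)
    {
        dstslot->tts_values[natt] = srcslot->tts_values[natt];
        dstslot->tts_isnull[natt] = srcslot->tts_isnull[natt];
    }
}
```

As Tom's experiment in the thread shows, with the typo the code only misbehaved when the two descriptors had different lengths, which the regression tests never exercised -- the sketch's wider destination is exactly that untested case.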
[
{
"msg_contents": "Dear all\n\nWhile developing MobilityDB we needed to extend the range type operators so\nthey cope with elements. In the same way that currently the range types\nsupport both\n- @> contains range/element\n- <@ element/range is contained by\nwe extended the left (<<), overleft (&<), right (>>), and overright (&>)\noperators so they can cope with both elements and ranges at the left- or\nright-hand side. These can be seen in github\nhttps://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/rangetypes_ext.c\n\nIf you think that these extensions could be useful for the community at\nlarge, I can prepare a PR. Please let me know.\n\nEsteban",
"msg_date": "Fri, 13 Sep 2019 08:50:18 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Extending range type operators to cope with elements"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 08:50:18AM +0200, Esteban Zimanyi wrote:\n>Dear all\n>\n>While developing MobilityDB we needed to extend the range type operators so\n>they cope with elements. In the same way that currently the range types\n>support both\n>- @> contains range/element\n>- <@ element/range is contained by\n>we extended the left (<<), overleft (&<), right (>>), and overright (&>)\n>operators so they can cope with both elements and ranges at the left- or\n>right-hand side. These can be seen in github\n>https://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/rangetypes_ext.c\n>\n>If you think that these extensions could be useful for the community at\n>large, I can prepare a PR. Please let me know.\n>\n\nWell, we don't really use pull requests, but other than that I don't see\nwhy not to at least consider such improvement.\n\nI'm not a heavy user or range types, so I can't really judge how useful\nthat is in practice, but it seems like a fairly natural extension of the\nexisting operators. I mean, if I understand it correctly, the proposed\nbehavior is equal to treating the element as a \"collapsed range\".\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 14 Sep 2019 23:09:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending range type operators to cope with elements"
},
{
"msg_contents": ">\n>\n> >- @> contains range/element\n> >- <@ element/range is contained by\n>\n\n\nI'm not a heavy user or range types, so I can't really judge how useful\n> that is in practice, but it seems like a fairly natural extension of the\n> existing operators. I mean, if I understand it correctly, the proposed\n> behavior is equal to treating the element as a \"collapsed range\".\n>\n\nI used to give a talk on ranges and partitioning, prior to postgresql\ngetting native partitioning (see:\nhttps://wiki.postgresql.org/images/1/1b/Ranges%2C_Partitioning_and_Limitations.pdf\n )\nIn that talk, I mention the need for exactly these operators, specifically\nfor an extension called range_partitioning which had some logic for \"If I\nwere to insert a row with this value, what partition would it end up in?\"\nwhich allowed for a subsequent COPY operation directly to that partition.\nThat logic essentially binary-searched a series of ranges, so it needed an\n\"elem <@ range\" as well as << and >>.\n\nYes, constructing a collapsed range was the work-around I used in the\nabsence of real functions.\n\nThat extension has been replaced by real table partitioning and the planner\nitself now does similar logic for partition pruning.\n\nSo yes, I've had a need for those operators in the past. What I don't know\nis whether adding these functions will be worth the catalog clutter.",
"msg_date": "Sat, 14 Sep 2019 18:35:52 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending range type operators to cope with elements"
},
{
"msg_contents": ">\n>\n> So yes, I've had a need for those operators in the past. What I don't know\n> is whether adding these functions will be worth the catalog clutter.\n>\n\nThe operators are tested and running within MobilityDB. It concerns lines\n231-657 for the C code in file\nhttps://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/rangetypes_ext.c\n\nand lines 32-248 for the SQL code in file\nhttps://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/sql/07_rangetypes_ext.in.sql\n\n\nSince you don't really use PR, please let me know whether I can be of\nany help.\n\nRegards\n\nEsteban\n\n-- \n------------------------------------------------------------\nProf. Esteban Zimanyi\nDepartment of Computer & Decision Engineering (CoDE) CP 165/15\nUniversite Libre de Bruxelles\nAvenue F. D. Roosevelt 50\nB-1050 Brussels, Belgium\nfax: + 32.2.650.47.13\ntel: + 32.2.650.31.85\ne-mail: ezimanyi@ulb.ac.be\nInternet: http://code.ulb.ac.be/\n------------------------------------------------------------\n\nSo yes, I've had a need for those operators in the past. What I don't know is whether adding these functions will be worth the catalog clutter.The operators are tested and running within MobilityDB. It concerns lines 231-657 for the C code in filehttps://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/rangetypes_ext.c and lines 32-248 for the SQL code in filehttps://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/sql/07_rangetypes_ext.in.sql Since you don't really use PR, please let me know whether I can be of any help.RegardsEsteban-- ------------------------------------------------------------Prof. Esteban ZimanyiDepartment of Computer & Decision Engineering (CoDE) CP 165/15 Universite Libre de Bruxelles Avenue F. D. Roosevelt 50 B-1050 Brussels, Belgium fax: + 32.2.650.47.13tel: + 32.2.650.31.85e-mail: ezimanyi@ulb.ac.beInternet: http://code.ulb.ac.be/------------------------------------------------------------",
"msg_date": "Sun, 15 Sep 2019 16:18:38 +0200",
"msg_from": "Esteban Zimanyi <estebanzimanyi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending range type operators to cope with elements"
},
{
"msg_contents": "> So yes, I've had a need for those operators in the past. What I don't\nknow is whether adding these functions will be worth the catalog clutter.\n\nThe operators are tested and running within MobilityDB. It concerns lines\n231-657 for the C code in file\nhttps://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/rangetypes_ext.c\n\nand lines 32-248 for the SQL code in file\nhttps://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/sql/07_rangetypes_ext.in.sql\n\n\nSince you don't really use PR, please let me know whether I can be of\nany help.\n\nRegards\nEsteban\n\n-- \n------------------------------------------------------------\nProf. Esteban Zimanyi\nDepartment of Computer & Decision Engineering (CoDE) CP 165/15\nUniversite Libre de Bruxelles\nAvenue F. D. Roosevelt 50\nB-1050 Brussels, Belgium\nfax: + 32.2.650.47.13\ntel: + 32.2.650.31.85\ne-mail: ezimanyi@ulb.ac.be\nInternet: http://code.ulb.ac.be/\n------------------------------------------------------------\n\n> So yes, I've had a need for those operators in the past. What I don't know is whether adding these functions will be worth the catalog clutter.The operators are tested and running within MobilityDB. It concerns lines 231-657 for the C code in filehttps://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/rangetypes_ext.c and lines 32-248 for the SQL code in filehttps://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/sql/07_rangetypes_ext.in.sql Since you don't really use PR, please let me know whether I can be of any help.RegardsEsteban-- ------------------------------------------------------------Prof. Esteban ZimanyiDepartment of Computer & Decision Engineering (CoDE) CP 165/15 Universite Libre de Bruxelles Avenue F. D. Roosevelt 50 B-1050 Brussels, Belgium fax: + 32.2.650.47.13tel: + 32.2.650.31.85e-mail: ezimanyi@ulb.ac.beInternet: http://code.ulb.ac.be/------------------------------------------------------------",
"msg_date": "Sun, 15 Sep 2019 16:30:52 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Fwd: Extending range type operators to cope with elements"
},
{
"msg_contents": "On Sun, Sep 15, 2019 at 04:30:52PM +0200, Esteban Zimanyi wrote:\n> > So yes, I've had a need for those operators in the past. What I don't\n> know is whether adding these functions will be worth the catalog clutter.\n> \n> The operators are tested and running within MobilityDB. It concerns lines\n> 231-657 for the C code in file\n> https://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/rangetypes_ext.c\n> \n> and lines 32-248 for the SQL code in file\n> https://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/sql/07_rangetypes_ext.in.sql\n> \n> Since you don't really use PR, please let me know whether I can be of\n> any help.\n\nIt's not done by pull request at this time. Instead, it is done by sending\npatches to this mailing list.\n\nhttp://wiki.postgresql.org/wiki/Development_information\nhttp://wiki.postgresql.org/wiki/Submitting_a_Patch\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\nhttp://www.interdb.jp/pg/\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 17 Sep 2019 05:18:26 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Extending range type operators to cope with elements"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 5:18 AM David Fetter <david@fetter.org> wrote:\n> It's not done by pull request at this time. Instead, it is done by sending\n> patches to this mailing list.\n\nDear all\n\nYou will find enclosed the patch that extends the range type operators so\nthey cope with elements.\n\nAny comments most welcome.\n\nEsteban",
"msg_date": "Sat, 21 Sep 2019 17:52:50 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Extending range type operators to cope with elements"
},
{
"msg_contents": "Dear all\n\nAfter a long time (as you can imagine, this year everything has been upside\ndown ...), you will find enclosed the patch for extending the range\noperators so they can cope with range <op> element and element <op> range\nin addition to the existing range <op> range.\n\nBest regards\n\nEsteban\n\n------------------------------------------------------------\nProf. Esteban Zimanyi\nDepartment of Computer & Decision Engineering (CoDE) CP 165/15\nUniversite Libre de Bruxelles\nAvenue F. D. Roosevelt 50\nB-1050 Brussels, Belgium\nfax: + 32.2.650.47.13\ntel: + 32.2.650.31.85\ne-mail: ezimanyi@ulb.ac.be\nInternet: http://cs.ulb.ac.be/members/esteban/\n------------------------------------------------------------\n\nOn Tue, Sep 17, 2019 at 5:18 AM David Fetter <david@fetter.org> wrote:\n\n> On Sun, Sep 15, 2019 at 04:30:52PM +0200, Esteban Zimanyi wrote:\n> > > So yes, I've had a need for those operators in the past. What I don't\n> > know is whether adding these functions will be worth the catalog clutter.\n> >\n> > The operators are tested and running within MobilityDB. It concerns lines\n> > 231-657 for the C code in file\n> >\n> https://github.com/MobilityDB/MobilityDB/blob/master/src/rangetypes_ext.c\n> <https://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/rangetypes_ext.c>\n> >\n> > and lines 32-248 for the SQL code in file\n> >\n> https://github.com/ULB-CoDE-WIT/MobilityDB/blob/master/src/sql/07_rangetypes_ext.in.sql\n> >\n> > Since you don't really use PR, please let me know whether I can be of\n> > any help.\n>\n> It's not done by pull request at this time. 
Instead, it is done by sending\n> patches to this mailing list.\n>\n> http://wiki.postgresql.org/wiki/Development_information\n> http://wiki.postgresql.org/wiki/Submitting_a_Patch\n> https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n> http://www.interdb.jp/pg/\n>\n> Best,\n> David.\n> --\n> David Fetter <david(at)fetter(dot)org> http://fetter.org/\n> Phone: +1 415 235 3778\n>\n> Remember to vote!\n> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>",
"msg_date": "Sun, 27 Sep 2020 16:00:37 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Extending range type operators to cope with elements"
},
{
"msg_contents": "Esteban Zimanyi <ezimanyi@ulb.ac.be> writes:\n> After a long time (as you can imagine, this year everything has been upside\n> down ...), you will find enclosed the patch for extending the range\n> operators so they can cope with range <op> element and element <op> range\n> in addition to the existing range <op> range.\n\nCool. Please add this to the open commitfest list [1] to ensure we don't\nlose track of it.\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/30/\n\n\n",
"msg_date": "Sun, 27 Sep 2020 13:06:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Extending range type operators to cope with elements"
},
{
"msg_contents": "Hi,\r\n\r\nthank you for your contribution.\r\n\r\nI did notice that the cfbot [1] is failing for this patch.\r\nPlease try to address the issues if you can for the upcoming commitfest.\r\n\r\nCheers,\r\n//Georgios\r\n\r\n[1] http://cfbot.cputube.org/esteban-zimanyi.html",
"msg_date": "Fri, 30 Oct 2020 16:01:27 +0000",
"msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending range type operators to cope with elements"
},
{
"msg_contents": "Hi,\n\nOn Fri, Oct 30, 2020 at 04:01:27PM +0000, Georgios Kokolatos wrote:\n>Hi,\n>\n>thank you for your contribution.\n>\n>I did notice that the cfbot [1] is failing for this patch.\n>Please try to address the issues if you can for the upcoming commitfest.\n>\n\nI took a look at the patch today - the regression failure was trivial,\nthe expected output for one query was added to the wrong place, a couple\nlines off the proper place. Attached is an updated version of the patch,\nfixing that.\n\nI also reviewed the code - it seems pretty clean and in line with the\nsurrounding code in rangetypes.c. Good job Esteban! I'll do a bit more\nreview next week, and I'll see if I can get it committed.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 30 Oct 2020 23:08:19 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending range type operators to cope with elements"
},
{
"msg_contents": "On 31.10.2020 01:08, Tomas Vondra wrote:\n> Hi,\n>\n> On Fri, Oct 30, 2020 at 04:01:27PM +0000, Georgios Kokolatos wrote:\n>> Hi,\n>>\n>> thank you for your contribution.\n>>\n>> I did notice that the cfbot [1] is failing for this patch.\n>> Please try to address the issues if you can for the upcoming commitfest.\n>>\n>\n> I took a look at the patch today - the regression failure was trivial,\n> the expected output for one query was added to the wrong place, a couple\n> lines off the proper place. Attached is an updated version of the patch,\n> fixing that.\n>\n> I also reviewed the code - it seems pretty clean and in line with the\n> surrounding code in rangetypes.c. Good job Esteban! I'll do a bit more\n> review next week, and I'll see if I can get it committed.\n>\n> regards\n>\n\nCFM reminder. Just in case you forgot about this thread)\nThe commitfest is heading to the end. Tomas, will you have time to push \nthis patch?\n\nThe patch still applies and passes all cfbot checks. I also took a quick \nlook at the code and everything looks good to me.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 27 Nov 2020 13:38:58 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Extending range type operators to cope with elements"
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 11:08:19PM +0100, Tomas Vondra wrote:\n> Hi,\n> \n> + <row>\n> + <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> + <type>anyelement</type> <literal>>></literal> <type>anyrange</type>\n> + <returnvalue>boolean</returnvalue>\n> + </para>\n> + <para>\n> + Is the element strictly right of the element?\n> + </para>\n\nshould say \"of the range\" ?\n\n> +++ b/src/backend/utils/adt/rangetypes.c\n\n> +\t/* An empty range is neither left nor right any other range */\n> +\t/* An empty range is neither left nor right any element */\n> +\t/* An empty range is neither left nor right any other range */\n> +\t/* An empty range is neither left nor right any element */\n> +\t/* An empty range is neither left nor right any element */\n> +\t/* An empty range is neither left nor right any element */\n\nI these comments should all say \".. left nor right OF any ...\"\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 27 Feb 2021 14:35:49 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending range type operators to cope with elements"
},
{
"msg_contents": "On Sun, Feb 28, 2021 at 1:36 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Fri, Oct 30, 2020 at 11:08:19PM +0100, Tomas Vondra wrote:\n> > Hi,\n> >\n> > + <row>\n> > + <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> > + <type>anyelement</type> <literal>>></literal>\n> <type>anyrange</type>\n> > + <returnvalue>boolean</returnvalue>\n> > + </para>\n> > + <para>\n> > + Is the element strictly right of the element?\n> > + </para>\n>\n> should say \"of the range\" ?\n>\n> > +++ b/src/backend/utils/adt/rangetypes.c\n>\n> > + /* An empty range is neither left nor right any other range */\n> > + /* An empty range is neither left nor right any element */\n> > + /* An empty range is neither left nor right any other range */\n> > + /* An empty range is neither left nor right any element */\n> > + /* An empty range is neither left nor right any element */\n> > + /* An empty range is neither left nor right any element */\n>\n> I these comments should all say \".. 
left nor right OF any ...\"\n>\n> --\n> Justin\n>\n>\n>\nThis patch set no longer applies.\n\nhttp://cfbot.cputube.org/patch_32_2747.log\n\nCan we get a rebase?\n\nI am marking the patch \"Waiting on Author\"\n\n--\nIbrar Ahmed\n\nOn Sun, Feb 28, 2021 at 1:36 AM Justin Pryzby <pryzby@telsasoft.com> wrote:On Fri, Oct 30, 2020 at 11:08:19PM +0100, Tomas Vondra wrote:\n> Hi,\n> \n> + <row>\n> + <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> + <type>anyelement</type> <literal>>></literal> <type>anyrange</type>\n> + <returnvalue>boolean</returnvalue>\n> + </para>\n> + <para>\n> + Is the element strictly right of the element?\n> + </para>\n\nshould say \"of the range\" ?\n\n> +++ b/src/backend/utils/adt/rangetypes.c\n\n> + /* An empty range is neither left nor right any other range */\n> + /* An empty range is neither left nor right any element */\n> + /* An empty range is neither left nor right any other range */\n> + /* An empty range is neither left nor right any element */\n> + /* An empty range is neither left nor right any element */\n> + /* An empty range is neither left nor right any element */\n\nI these comments should all say \".. left nor right OF any ...\"\n\n-- \nJustin\n\n\nThis patch set no longer applies.http://cfbot.cputube.org/patch_32_2747.logCan we get a rebase? I am marking the patch \"Waiting on Author\"--Ibrar Ahmed",
"msg_date": "Thu, 4 Mar 2021 16:11:54 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending range type operators to cope with elements"
},
{
"msg_contents": "On 3/4/21 6:11 AM, Ibrar Ahmed wrote:\n> \n> This patch set no longer applies.\n> \n> http://cfbot.cputube.org/patch_32_2747.log \n> <http://cfbot.cputube.org/patch_32_2747.log>\n> \n> Can we get a rebase?\n> \n> I am marking the patch \"Waiting on Author\"\n\nThis patch needs updates and a rebase and there has been no new patch \nsix months, so marking Returned with Feedback.\n\nPlease resubmit to the next CF when you have a new patch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 8 Apr 2021 11:12:17 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Extending range type operators to cope with elements"
}
] |
[
{
"msg_contents": "On 2019-Sep-13, Fabien COELHO wrote:\n\n> Hello Alvaro,\n> \n> > I think the TestLib.pm changes should be done separately, not together\n> > with the rest of the hacking in this patch.\n> > \n> > Mostly, because I think they're going to cause trouble. Adding a\n> > parameter in the middle of the list may cause trouble for third-party\n> > users of TestLib.\n> \n> That is also what I thought, however, see below.\n\nI see. But you seem to have skipped my suggestion without considering\nit.\n\nI think the current API of these functions where they just receive a\nplain array of arguments, and all callers have to be patched in unison,\nis not very convenient. Also, I *think* your new icommand_checks method\nis the same as command_checks_all, except that you also have the \"init\"\npart. So you're duplicating code because the original doesn't have\nfunctionality you need? But why do that, if you could have *one*\nfunction that does both things? If some callers don't have the \"init\"\npart, just omit it from the parameters.\n\n(Whether it's implemented using Expect or not should not matter. Either\nExpect works everywhere, and we can use it, or it doesn't and we can't.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 13 Sep 2019 09:46:57 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: psql - improve test coverage from 41% to 88%"
},
{
"msg_contents": "\nHello Alvaro,\n\n>>> I think the TestLib.pm changes should be done separately, not together\n>>> with the rest of the hacking in this patch.\n>>>\n>>> Mostly, because I think they're going to cause trouble. Adding a\n>>> parameter in the middle of the list may cause trouble for third-party\n>>> users of TestLib.\n>>\n>> That is also what I thought, however, see below.\n>\n> I see. But you seem to have skipped my suggestion without considering\n> it.\n\nI did understand it, but as Tom did not want simple hocus-pocus, ISTM that \ndynamically checking the argument type would not be considered a very good \nidea either.\n\n> I think the current API of these functions where they just receive a\n> plain array of arguments, and all callers have to be patched in unison,\n> is not very convenient.\n\nI agree, but the no diff solution was rejected. I can bring one back, but \ngoing against Tom's views has not proven a good move in the past.\n\n> Also, I *think* your new icommand_checks method is the same as \n> command_checks_all, except that you also have the \"init\" part.\n\nNope, it is an interactive version based on Expect, which sends input and \nwaits for output, the process is quite different from a simple one shot no \ntimeout exec version.\n\n> So you're duplicating code because the original doesn't have \n> functionality you need?\n\nYes, I'm creating a interactive validation variant.\n\n> But why do that, if you could have *one* function that does both things? \n> If some callers don't have the \"init\" part, just omit it from the \n> parameters.\n\nAlthough it could be abstracted somehow, I do not think that having one \nfunction behaving so differently under the hood is a good idea. It is not \njust the question of the init part.\n\n> (Whether it's implemented using Expect or not should not matter. 
Either\n> Expect works everywhere, and we can use it, or it doesn't and we can't.)\n\nFor me the question is not about Expect dependency, it is more about how \nthe test behaves.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 13 Sep 2019 19:39:38 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: psql - improve test coverage from 41% to 88%"
}
] |
[
{
"msg_contents": "Implemented the Logical Streaming Replication thing are working fine I see\nthe XLogData message appearing and I'm able to parse them.\n\nBut I haven't see any \"Primary Keepalive message\" yet. I had tried setting\nthe *tcp_keepalive_interval*, *tcp_keepalives_idle* both from client\nruntime paramter and well as from postgresql.conf still no clue of it.\n\nAny information around it?\n\nImplemented the Logical Streaming Replication thing are working fine I see the XLogData message appearing and I'm able to parse them.But I haven't see any \"Primary Keepalive message\" yet. I had tried setting the tcp_keepalive_interval, tcp_keepalives_idle both from client runtime paramter and well as from postgresql.conf still no clue of it.Any information around it?",
"msg_date": "Fri, 13 Sep 2019 18:41:24 +0530",
"msg_from": "Virendra Negi <viren.negi@teliax.com>",
"msg_from_op": true,
"msg_subject": "Primary keepalive message not appearing in Logical Streaming\n Replication"
},
{
"msg_contents": "I forgot to mention the plugin I have been using along with logical\nreplication\n\nits wal2json.\n\nOn Friday, September 13, 2019, Virendra Negi <viren.negi@teliax.com> wrote:\n\n> Implemented the Logical Streaming Replication thing are working fine I see\n> the XLogData message appearing and I'm able to parse them.\n>\n> But I haven't see any \"Primary Keepalive message\" yet. I had tried\n> setting the *tcp_keepalive_interval*, *tcp_keepalives_idle* both from\n> client runtime paramter and well as from postgresql.conf still no clue of\n> it.\n>\n> Any information around it?\n>\n>\n>\n>\n\nI forgot to mention the plugin I have been using along with logical replication its wal2json.On Friday, September 13, 2019, Virendra Negi <viren.negi@teliax.com> wrote:Implemented the Logical Streaming Replication thing are working fine I see the XLogData message appearing and I'm able to parse them.But I haven't see any \"Primary Keepalive message\" yet. I had tried setting the tcp_keepalive_interval, tcp_keepalives_idle both from client runtime paramter and well as from postgresql.conf still no clue of it.Any information around it?",
"msg_date": "Sun, 15 Sep 2019 00:10:22 +0530",
"msg_from": "Virendra Negi <viren.negi@teliax.com>",
"msg_from_op": true,
"msg_subject": "Re: Primary keepalive message not appearing in Logical Streaming\n Replication"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 07:12 Virendra Negi <viren.negi@teliax.com> wrote:\n\n> Implemented the Logical Streaming Replication thing are working fine I see\n> the XLogData message appearing and I'm able to parse them.\n>\n> But I haven't see any \"Primary Keepalive message\" yet. I had tried\n> setting the *tcp_keepalive_interval*, *tcp_keepalives_idle* both from\n> client runtime paramter and well as from postgresql.conf still no clue of\n> it.\n>\n> Any information around it?\n>\n\nBoth of these options are not in the Pg protocol. They are within the OS\nTCP stack and are not visible to the applications at all.\n\n>\n>\n>\n> --\n\n\"Genius might be described as a supreme capacity for getting its possessors\ninto trouble of all kinds.\"\n-- Samuel Butler\n\nOn Fri, Sep 13, 2019 at 07:12 Virendra Negi <viren.negi@teliax.com> wrote:Implemented the Logical Streaming Replication thing are working fine I see the XLogData message appearing and I'm able to parse them.But I haven't see any \"Primary Keepalive message\" yet. I had tried setting the tcp_keepalive_interval, tcp_keepalives_idle both from client runtime paramter and well as from postgresql.conf still no clue of it.Any information around it?Both of these options are not in the Pg protocol. They are within the OS TCP stack and are not visible to the applications at all.\n-- \"Genius might be described as a supreme capacity for getting its possessorsinto trouble of all kinds.\"-- Samuel Butler",
"msg_date": "Sun, 15 Sep 2019 08:09:13 -0600",
"msg_from": "Michael Loftis <mloftis@wgops.com>",
"msg_from_op": false,
"msg_subject": "Re: Primary keepalive message not appearing in Logical Streaming\n Replication"
},
{
"msg_contents": "Agreed but why is there a message specification for it describe in the\ndocumentation and it ask to client reply back if a particular *bit* is\nset.(1 means that the client should reply to this message as soon as\npossible, to avoid a timeout disconnect. 0 otherwise)\n\n\nPrimary keepalive message (B)\nByte1('k')\n\nIdentifies the message as a sender keepalive.\nInt64\n\nThe current end of WAL on the server.\nInt64\n\nThe server's system clock at the time of transmission, as microseconds\nsince midnight on 2000-01-01.\nByte1\n\n1 means that the client should reply to this message as soon as possible,\nto avoid a timeout disconnect. 0 otherwise.\n\nThe receiving process can send replies back to the sender at any time,\nusing one of the following message formats (also in the payload of a\nCopyData message):\n\n\nOn Sun, Sep 15, 2019 at 7:39 PM Michael Loftis <mloftis@wgops.com> wrote:\n\n>\n>\n> On Fri, Sep 13, 2019 at 07:12 Virendra Negi <viren.negi@teliax.com> wrote:\n>\n>> Implemented the Logical Streaming Replication thing are working fine I\n>> see the XLogData message appearing and I'm able to parse them.\n>>\n>> But I haven't see any \"Primary Keepalive message\" yet. I had tried\n>> setting the *tcp_keepalive_interval*, *tcp_keepalives_idle* both from\n>> client runtime paramter and well as from postgresql.conf still no clue of\n>> it.\n>>\n>> Any information around it?\n>>\n>\n> Both of these options are not in the Pg protocol. They are within the OS\n> TCP stack and are not visible to the applications at all.\n>\n>>\n>>\n>>\n>> --\n>\n> \"Genius might be described as a supreme capacity for getting its possessors\n> into trouble of all kinds.\"\n> -- Samuel Butler\n>\n\nAgreed but why is there a message specification for it describe in the documentation and it ask to client reply back if a particular *bit* is set.(1 means that the client should reply to this message as soon as possible, to avoid a timeout disconnect. 
0 otherwise)Primary keepalive message (B)Byte1('k')Identifies the message as a sender keepalive.Int64The current end of WAL on the server.Int64The server's system clock at the time of transmission, as microseconds since midnight on 2000-01-01.Byte11 means that the client should reply to this message as soon as possible, to avoid a timeout disconnect. 0 otherwise.The receiving process can send replies back to the sender at any time, using one of the following message formats (also in the payload of a CopyData message):On Sun, Sep 15, 2019 at 7:39 PM Michael Loftis <mloftis@wgops.com> wrote:On Fri, Sep 13, 2019 at 07:12 Virendra Negi <viren.negi@teliax.com> wrote:Implemented the Logical Streaming Replication thing are working fine I see the XLogData message appearing and I'm able to parse them.But I haven't see any \"Primary Keepalive message\" yet. I had tried setting the tcp_keepalive_interval, tcp_keepalives_idle both from client runtime paramter and well as from postgresql.conf still no clue of it.Any information around it?Both of these options are not in the Pg protocol. They are within the OS TCP stack and are not visible to the applications at all.\n-- \"Genius might be described as a supreme capacity for getting its possessorsinto trouble of all kinds.\"-- Samuel Butler",
"msg_date": "Sun, 15 Sep 2019 20:05:51 +0530",
"msg_from": "Virendra Negi <viren.negi@teliax.com>",
"msg_from_op": true,
"msg_subject": "Re: Primary keepalive message not appearing in Logical Streaming\n Replication"
},
{
"msg_contents": "Oh I miss the documentation link there you go\nhttps://www.postgresql.org/docs/9.5/protocol-replication.html\n\nOn Sun, Sep 15, 2019 at 8:05 PM Virendra Negi <viren.negi@teliax.com> wrote:\n\n> Agreed but why is there a message specification for it describe in the\n> documentation and it ask to client reply back if a particular *bit* is\n> set.(1 means that the client should reply to this message as soon as\n> possible, to avoid a timeout disconnect. 0 otherwise)\n>\n>\n> Primary keepalive message (B)\n> Byte1('k')\n>\n> Identifies the message as a sender keepalive.\n> Int64\n>\n> The current end of WAL on the server.\n> Int64\n>\n> The server's system clock at the time of transmission, as microseconds\n> since midnight on 2000-01-01.\n> Byte1\n>\n> 1 means that the client should reply to this message as soon as possible,\n> to avoid a timeout disconnect. 0 otherwise.\n>\n> The receiving process can send replies back to the sender at any time,\n> using one of the following message formats (also in the payload of a\n> CopyData message):\n>\n>\n> On Sun, Sep 15, 2019 at 7:39 PM Michael Loftis <mloftis@wgops.com> wrote:\n>\n>>\n>>\n>> On Fri, Sep 13, 2019 at 07:12 Virendra Negi <viren.negi@teliax.com>\n>> wrote:\n>>\n>>> Implemented the Logical Streaming Replication thing are working fine I\n>>> see the XLogData message appearing and I'm able to parse them.\n>>>\n>>> But I haven't see any \"Primary Keepalive message\" yet. I had tried\n>>> setting the *tcp_keepalive_interval*, *tcp_keepalives_idle* both from\n>>> client runtime paramter and well as from postgresql.conf still no clue of\n>>> it.\n>>>\n>>> Any information around it?\n>>>\n>>\n>> Both of these options are not in the Pg protocol. 
They are within the OS\n>> TCP stack and are not visible to the applications at all.\n>>\n>>>\n>>>\n>>>\n>>> --\n>>\n>> \"Genius might be described as a supreme capacity for getting its\n>> possessors\n>> into trouble of all kinds.\"\n>> -- Samuel Butler\n>>\n>\n\nOh I miss the documentation link there you go https://www.postgresql.org/docs/9.5/protocol-replication.htmlOn Sun, Sep 15, 2019 at 8:05 PM Virendra Negi <viren.negi@teliax.com> wrote:Agreed but why is there a message specification for it describe in the documentation and it ask to client reply back if a particular *bit* is set.(1 means that the client should reply to this message as soon as possible, to avoid a timeout disconnect. 0 otherwise)Primary keepalive message (B)Byte1('k')Identifies the message as a sender keepalive.Int64The current end of WAL on the server.Int64The server's system clock at the time of transmission, as microseconds since midnight on 2000-01-01.Byte11 means that the client should reply to this message as soon as possible, to avoid a timeout disconnect. 0 otherwise.The receiving process can send replies back to the sender at any time, using one of the following message formats (also in the payload of a CopyData message):On Sun, Sep 15, 2019 at 7:39 PM Michael Loftis <mloftis@wgops.com> wrote:On Fri, Sep 13, 2019 at 07:12 Virendra Negi <viren.negi@teliax.com> wrote:Implemented the Logical Streaming Replication thing are working fine I see the XLogData message appearing and I'm able to parse them.But I haven't see any \"Primary Keepalive message\" yet. I had tried setting the tcp_keepalive_interval, tcp_keepalives_idle both from client runtime paramter and well as from postgresql.conf still no clue of it.Any information around it?Both of these options are not in the Pg protocol. They are within the OS TCP stack and are not visible to the applications at all.\n-- \"Genius might be described as a supreme capacity for getting its possessorsinto trouble of all kinds.\"-- Samuel Butler",
"msg_date": "Sun, 15 Sep 2019 20:06:24 +0530",
"msg_from": "Virendra Negi <viren.negi@teliax.com>",
"msg_from_op": true,
"msg_subject": "Re: Primary keepalive message not appearing in Logical Streaming\n Replication"
},
{
"msg_contents": "On Sun, Sep 15, 2019 at 08:36 Virendra Negi <viren.negi@teliax.com> wrote:\n\n> Oh I miss the documentation link there you go\n> https://www.postgresql.org/docs/9.5/protocol-replication.html\n>\n> On Sun, Sep 15, 2019 at 8:05 PM Virendra Negi <viren.negi@teliax.com>\n> wrote:\n>\n>> Agreed but why is there a message specification for it describe in the\n>> documentation and it ask to client reply back if a particular *bit* is\n>> set.(1 means that the client should reply to this message as soon as\n>> possible, to avoid a timeout disconnect. 0 otherwise)\n>>\n>\nThis is unrelated to TCP keepalive. I honestly don't know where the knob is\nto turn these on but the configuration variables you quoted earlier I am\nfamiliar with and they are not it. Perhaps someone else can chime in with\nhow to enable the protocol level keepalive in replication.\n\n\n>>\n>> Primary keepalive message (B)\n>> Byte1('k')\n>>\n>> Identifies the message as a sender keepalive.\n>> Int64\n>>\n>> The current end of WAL on the server.\n>> Int64\n>>\n>> The server's system clock at the time of transmission, as microseconds\n>> since midnight on 2000-01-01.\n>> Byte1\n>>\n>> 1 means that the client should reply to this message as soon as possible,\n>> to avoid a timeout disconnect. 0 otherwise.\n>>\n>> The receiving process can send replies back to the sender at any time,\n>> using one of the following message formats (also in the payload of a\n>> CopyData message):\n>>\n>>\n>> On Sun, Sep 15, 2019 at 7:39 PM Michael Loftis <mloftis@wgops.com> wrote:\n>>\n>>>\n>>>\n>>> On Fri, Sep 13, 2019 at 07:12 Virendra Negi <viren.negi@teliax.com>\n>>> wrote:\n>>>\n>>>> Implemented the Logical Streaming Replication thing are working fine I\n>>>> see the XLogData message appearing and I'm able to parse them.\n>>>>\n>>>> But I haven't see any \"Primary Keepalive message\" yet. 
I had tried\n>>>> setting the *tcp_keepalive_interval*, *tcp_keepalives_idle* both from\n>>>> client runtime paramter and well as from postgresql.conf still no clue of\n>>>> it.\n>>>>\n>>>> Any information around it?\n>>>>\n>>>\n>>> Both of these options are not in the Pg protocol. They are within the OS\n>>> TCP stack and are not visible to the applications at all.\n>>>\n>>>>\n>>>>\n>>>>\n>>>> --\n>>>\n>>> \"Genius might be described as a supreme capacity for getting its\n>>> possessors\n>>> into trouble of all kinds.\"\n>>> -- Samuel Butler\n>>>\n>> --\n\n\"Genius might be described as a supreme capacity for getting its possessors\ninto trouble of all kinds.\"\n-- Samuel Butler",
"msg_date": "Sun, 15 Sep 2019 09:44:14 -0600",
"msg_from": "Michael Loftis <mloftis@wgops.com>",
"msg_from_op": false,
"msg_subject": "Re: Primary keepalive message not appearing in Logical Streaming\n Replication"
},
{
"msg_contents": "On Sun, Sep 15, 2019 at 09:44:14AM -0600, Michael Loftis wrote:\n>On Sun, Sep 15, 2019 at 08:36 Virendra Negi <viren.negi@teliax.com> wrote:\n>\n>> Oh I miss the documentation link there you go\n>> https://www.postgresql.org/docs/9.5/protocol-replication.html\n>>\n>> On Sun, Sep 15, 2019 at 8:05 PM Virendra Negi <viren.negi@teliax.com>\n>> wrote:\n>>\n>>> Agreed but why is there a message specification for it describe in the\n>>> documentation and it ask to client reply back if a particular *bit* is\n>>> set.(1 means that the client should reply to this message as soon as\n>>> possible, to avoid a timeout disconnect. 0 otherwise)\n>>>\n>>\n>This is unrelated to TCP keepalive. I honestly don't know where the knob is\n>to turn these on but the configuration variables you quoted earlier I am\n>familiar with and they are not it. Perhaps someone else can chime in with\n>how to enable the protocol level keepalive in replication.\n>\n\nPretty sure it's wal_sender_timeout. Which by default is 60s, but if you\ntune it down it should send keepalives more often.\n\nSee WalSndKeepaliveIfNecessary in [1]:\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/replication/walsender.c#L3425\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 15 Sep 2019 18:01:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Primary keepalive message not appearing in Logical Streaming\n Replication"
},
{
"msg_contents": "On Sun, Sep 15, 2019 at 11:44 AM Michael Loftis <mloftis@wgops.com> wrote:\n\n>\n>\n> On Sun, Sep 15, 2019 at 08:36 Virendra Negi <viren.negi@teliax.com> wrote:\n>\n>> Oh I miss the documentation link there you go\n>> https://www.postgresql.org/docs/9.5/protocol-replication.html\n>>\n>> On Sun, Sep 15, 2019 at 8:05 PM Virendra Negi <viren.negi@teliax.com>\n>> wrote:\n>>\n>>> Agreed but why is there a message specification for it describe in the\n>>> documentation and it ask to client reply back if a particular *bit* is\n>>> set.(1 means that the client should reply to this message as soon as\n>>> possible, to avoid a timeout disconnect. 0 otherwise)\n>>>\n>>\n> This is unrelated to TCP keepalive. I honestly don't know where the knob\n> is to turn these on but the configuration variables you quoted earlier I am\n> familiar with and they are not it. Perhaps someone else can chime in with\n> how to enable the protocol level keepalive in replication.\n>\n\nProtocol-level keepalives are governed by \"wal_sender_timeout\"\n\nCheers,\n\nJeff",
"msg_date": "Sun, 15 Sep 2019 12:30:52 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Primary keepalive message not appearing in Logical Streaming\n Replication"
}
] |
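As Tomas Vondra and Jeff Janes note in the thread above, the Primary keepalive message is driven by `wal_sender_timeout` on the server, not by the TCP keepalive settings. The wire layout quoted in the thread (Byte1('k'), Int64 current end of WAL, Int64 microseconds since 2000-01-01, Byte1 reply flag, carried in a CopyData payload) can be decoded with a short sketch; the helper name and the unsigned reading of the Int64 fields here are illustrative assumptions, not part of any client library:

```python
import struct
from datetime import datetime, timedelta, timezone

# Replication-protocol timestamps count microseconds from 2000-01-01,
# not from the Unix epoch.
PG_EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)

def parse_keepalive(payload: bytes) -> dict:
    """Parse a Primary keepalive message ('k') from a CopyData payload.

    Wire layout per the streaming-replication protocol docs:
    Byte1('k'), Int64 end of WAL, Int64 microseconds since
    2000-01-01, Byte1 reply-requested flag.
    """
    if payload[:1] != b"k":
        raise ValueError("not a Primary keepalive message")
    wal_end, sent_us, reply = struct.unpack("!QQB", payload[1:18])
    return {
        "wal_end": wal_end,
        "sent_at": PG_EPOCH + timedelta(microseconds=sent_us),
        # 1 means: reply soon or risk a timeout disconnect.
        "reply_requested": bool(reply),
    }

# Example: a keepalive with the reply flag set, as the walsender sends
# when wal_sender_timeout is close to expiring.
msg = b"k" + struct.pack("!QQB", 0x16B374D84, 0, 1)
info = parse_keepalive(msg)
```

A client that sees `reply_requested` set should promptly send a Standby status update (one of the reply formats listed in the same protocol section) to avoid the timeout disconnect.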
[
{
"msg_contents": "It struck me that the real reason that we keep getting gripes about\nthe weird behavior of CHAR(n) is that these functions (and, hence,\ntheir corresponding operators) fail to obey the \"trailing blanks\naren't significant\" rule:\n\n regprocedure | prosrc \n-------------------------------------------+----------------------\n bpcharlike(character,text) | textlike\n bpcharnlike(character,text) | textnlike\n bpcharicregexeq(character,text) | texticregexeq\n bpcharicregexne(character,text) | texticregexne\n bpcharregexeq(character,text) | textregexeq\n bpcharregexne(character,text) | textregexne\n bpchariclike(character,text) | texticlike\n bpcharicnlike(character,text) | texticnlike\n\nThey're just relying on binary compatibility of bpchar to text ...\nbut of course textlike etc. think trailing blanks are significant.\n\nEvery other primitive operation we have for bpchar correctly ignores\nthe trailing spaces.\n\nWe could fix this, and save some catalog space too, if we simply\ndeleted these functions/operators and let such calls devolve\ninto implicit casts to text.\n\nThis might annoy people who are actually writing trailing spaces\nin their patterns to make such cases work. But I think there\nare probably not too many such people, and having real consistency\nhere is worth something.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Sep 2019 10:43:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Modest proposal for making bpchar less inconsistent"
},
{
"msg_contents": "Dne pá 13. 9. 2019 16:43 uživatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> It struck me that the real reason that we keep getting gripes about\n> the weird behavior of CHAR(n) is that these functions (and, hence,\n> their corresponding operators) fail to obey the \"trailing blanks\n> aren't significant\" rule:\n>\n> regprocedure | prosrc\n> -------------------------------------------+----------------------\n> bpcharlike(character,text) | textlike\n> bpcharnlike(character,text) | textnlike\n> bpcharicregexeq(character,text) | texticregexeq\n> bpcharicregexne(character,text) | texticregexne\n> bpcharregexeq(character,text) | textregexeq\n> bpcharregexne(character,text) | textregexne\n> bpchariclike(character,text) | texticlike\n> bpcharicnlike(character,text) | texticnlike\n>\n> They're just relying on binary compatibility of bpchar to text ...\n> but of course textlike etc. think trailing blanks are significant.\n>\n> Every other primitive operation we have for bpchar correctly ignores\n> the trailing spaces.\n>\n> We could fix this, and save some catalog space too, if we simply\n> deleted these functions/operators and let such calls devolve\n> into implicit casts to text.\n>\n> This might annoy people who are actually writing trailing spaces\n> in their patterns to make such cases work. But I think there\n> are probably not too many such people, and having real consistency\n> here is worth something.\n>\n\nhas sense\n\nPavel\n\n>\n> regards, tom lane\n>\n>\n>",
"msg_date": "Fri, 13 Sep 2019 21:50:10 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modest proposal for making bpchar less inconsistent"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 09:50:10PM +0200, Pavel Stehule wrote:\n> \n> \n> Dne pá 13. 9. 2019 16:43 uživatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> \n> It struck me that the real reason that we keep getting gripes about\n> the weird behavior of CHAR(n) is that these functions (and, hence,\n> their corresponding operators) fail to obey the \"trailing blanks\n> aren't significant\" rule:\n> \n> regprocedure | prosrc \n> -------------------------------------------+----------------------\n> bpcharlike(character,text) | textlike\n> bpcharnlike(character,text) | textnlike\n> bpcharicregexeq(character,text) | texticregexeq\n> bpcharicregexne(character,text) | texticregexne\n> bpcharregexeq(character,text) | textregexeq\n> bpcharregexne(character,text) | textregexne\n> bpchariclike(character,text) | texticlike\n> bpcharicnlike(character,text) | texticnlike\n> \n> They're just relying on binary compatibility of bpchar to text ...\n> but of course textlike etc. think trailing blanks are significant.\n> \n> Every other primitive operation we have for bpchar correctly ignores\n> the trailing spaces.\n> \n> We could fix this, and save some catalog space too, if we simply\n> deleted these functions/operators and let such calls devolve\n> into implicit casts to text.\n> \n> This might annoy people who are actually writing trailing spaces\n> in their patterns to make such cases work. But I think there\n> are probably not too many such people, and having real consistency\n> here is worth something.\n> \n> \n> has sense\n\nYes, I think this is a great idea!\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 28 Sep 2019 08:22:22 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Modest proposal for making bpchar less inconsistent"
},
{
"msg_contents": "At Sat, 28 Sep 2019 08:22:22 -0400, Bruce Momjian <bruce@momjian.us> wrote in <20190928122222.GA26853@momjian.us>\r\n> On Fri, Sep 13, 2019 at 09:50:10PM +0200, Pavel Stehule wrote:\r\n> > \r\n> > \r\n> > Dne pá 13. 9. 2019 16:43 uživatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\r\n> > \r\n> > It struck me that the real reason that we keep getting gripes about\r\n> > the weird behavior of CHAR(n) is that these functions (and, hence,\r\n> > their corresponding operators) fail to obey the \"trailing blanks\r\n> > aren't significant\" rule:\r\n> > \r\n> > regprocedure | prosrc \r\n> > -------------------------------------------+----------------------\r\n> > bpcharlike(character,text) | textlike\r\n> > bpcharnlike(character,text) | textnlike\r\n> > bpcharicregexeq(character,text) | texticregexeq\r\n> > bpcharicregexne(character,text) | texticregexne\r\n> > bpcharregexeq(character,text) | textregexeq\r\n> > bpcharregexne(character,text) | textregexne\r\n> > bpchariclike(character,text) | texticlike\r\n> > bpcharicnlike(character,text) | texticnlike\r\n> > \r\n> > They're just relying on binary compatibility of bpchar to text ...\r\n> > but of course textlike etc. think trailing blanks are significant.\r\n> > \r\n> > Every other primitive operation we have for bpchar correctly ignores\r\n> > the trailing spaces.\r\n> > \r\n> > We could fix this, and save some catalog space too, if we simply\r\n> > deleted these functions/operators and let such calls devolve\r\n> > into implicit casts to text.\r\n> > \r\n> > This might annoy people who are actually writing trailing spaces\r\n> > in their patterns to make such cases work. But I think there\r\n> > are probably not too many such people, and having real consistency\r\n> > here is worth something.\r\n> > \r\n> > \r\n> > has sense\r\n> \r\n> Yes, I think this is a great idea!\r\n\r\nI totally agree.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Tue, 01 Oct 2019 18:56:33 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Modest proposal for making bpchar less inconsistent"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Sat, 28 Sep 2019 08:22:22 -0400, Bruce Momjian <bruce@momjian.us> wrote in <20190928122222.GA26853@momjian.us>\n>> On Fri, Sep 13, 2019 at 09:50:10PM +0200, Pavel Stehule wrote:\n>>> Dne pá 13. 9. 2019 16:43 uživatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>>>> It struck me that the real reason that we keep getting gripes about\n>>>> the weird behavior of CHAR(n) is that these functions (and, hence,\n>>>> their corresponding operators) fail to obey the \"trailing blanks\n>>>> aren't significant\" rule:\n>>>> ...\n>>>> We could fix this, and save some catalog space too, if we simply\n>>>> deleted these functions/operators and let such calls devolve\n>>>> into implicit casts to text.\n\n>> Yes, I think this is a great idea!\n\n> I totally agree.\n\nI experimented with this, as per the attached simple patch. If you just\napply the catalog deletions, no regression test results change, which\nsays more about our lack of test coverage in this area than anything else.\nSo I added a few simple test cases too.\n\nHowever, playing with this more, I'm not sure it's the direction we want\nto go. I realized that the BPCHAR-related code paths in like_support.c\nare dead code with this patch, because it's no longer possible to match\na LIKE/regex operator to a bpchar column. 
For example, in existing\nreleases you can do\n\nregression=# create table t(f1 char(20) unique);\nCREATE TABLE\nregression=# explain select * from t where f1 like 'abcdef';\n QUERY PLAN \n------------------------------------------------------------------------\n Index Only Scan using t_f1_key on t (cost=0.15..8.17 rows=1 width=24)\n Index Cond: (f1 = 'abcdef'::bpchar)\n Filter: (f1 ~~ 'abcdef'::text)\n(3 rows)\n\nregression=# explain select * from t where f1 like 'abcdef%';\n QUERY PLAN \n----------------------------------------------------------------------------\n Bitmap Heap Scan on t (cost=4.23..14.39 rows=8 width=24)\n Filter: (f1 ~~ 'abcdef%'::text)\n -> Bitmap Index Scan on t_f1_key (cost=0.00..4.23 rows=8 width=0)\n Index Cond: ((f1 >= 'abcdef'::bpchar) AND (f1 < 'abcdeg'::bpchar))\n(4 rows)\n\nBut with this patch, you just get dumb seqscan plans because the\nexpression trees now look like \"f1::text ~~ constant\" which doesn't\nmatch to an index on the bare column f1.\n\nIf we wanted to preserve these index optimizations while still\nredefining the pattern match operators as ignoring trailing whitespace,\nwe could keep the operators/functions but change them to point at new\nC functions that strip trailing blanks before invoking the pattern\nmatch machinery. Some thought would need to be given as well to whether\nlike_fixed_prefix et al need to behave differently to agree with this\nbehavior. (Offhand it seems like they might need to strip trailing\nblanks from what would otherwise be the fixed prefix, but I'm not\nquite sure.)\n\nThat would be much more work than this patch of course (though still\nnot an enormous amount), and I'm not quite sure if it's worth the\ntrouble. Is this a case that anyone is using in practice?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 01 Oct 2019 18:09:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Modest proposal for making bpchar less inconsistent"
}
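The EXPLAIN output in Tom Lane's last message shows the index optimization at stake: the planner rewrites `f1 LIKE 'abcdef%'` into the btree range `f1 >= 'abcdef' AND f1 < 'abcdeg'`, where the upper bound comes from incrementing the last character of the pattern's fixed prefix. A minimal sketch of that idea follows; the real code in like_support.c must also handle collations, multibyte encodings, and characters that cannot be incremented:

```python
def greater_prefix_bound(prefix: str) -> str:
    """Smallest string that sorts after every string starting with `prefix`.

    Sketch of the planner's prefix-range rewrite: bump the last
    character of the fixed prefix to form an exclusive upper bound.
    """
    return prefix[:-1] + chr(ord(prefix[-1]) + 1)

lower = "abcdef"
upper = greater_prefix_bound(lower)
# Every value matching 'abcdef%' falls in the half-open range [lower, upper).
assert lower <= "abcdefxyz" < upper
```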
] |
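The behavior change discussed in this thread can be modeled outside the server: today `bpcharlike` simply reuses `textlike`, so the blank padding of a `char(n)` value is significant to LIKE; with the operators deleted, the implicit cast from `bpchar` to `text` (which strips trailing blanks) would run first. The sketch below uses hypothetical helper names and ignores LIKE's escape-character handling; it only illustrates the semantics, not the catalog change itself:

```python
import re

def like_match(value: str, pattern: str) -> bool:
    # Translate a SQL LIKE pattern into an anchored regex:
    # % matches any run of characters, _ matches one character.
    regex = "".join(
        ".*" if c == "%" else "." if c == "_" else re.escape(c)
        for c in pattern
    )
    return re.fullmatch(regex, value, flags=re.S) is not None

def bpchar_like_today(value: str, pattern: str) -> bool:
    # Current behavior: bpcharlike points straight at textlike,
    # so trailing blanks in the char(n) value are significant.
    return like_match(value, pattern)

def bpchar_like_proposed(value: str, pattern: str) -> bool:
    # Proposed behavior: the implicit bpchar -> text cast strips
    # trailing blanks before the pattern match sees the value.
    return like_match(value.rstrip(" "), pattern)

value = "abc  "  # 'abc'::char(5), blank-padded to the declared length
```

Under the proposal, `'abc'::char(5) LIKE 'abc'` would become true, matching how every other bpchar operation already treats trailing blanks.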
[
{
"msg_contents": "The pg_rewind docs assert that the state of the target's data directory\nafter rewind is equivalent to the source's data directory. But that\nisn't true both because the base state is further back in time and\nbecause the target's data directory will include the current state on\nthe source of any copied blocks.\n\nSo I've attached a patch to summarize more correctly as well as\ndocument clearly the state of the cluster after the operation and also\nthe operation sequencing dangers caused by copying configuration\nfiles from the source.\n\nJames Coleman",
"msg_date": "Fri, 13 Sep 2019 13:47:03 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_rewind docs correction"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 01:47:03PM -0400, James Coleman wrote:\n> So I've attached a patch to summarize more correctly as well as\n> document clearly the state of the cluster after the operation and also\n> the operation sequencing dangers caused by copying configuration\n> files from the source.\n\n+ After a successful rewind, the target data directory is equivalent\nto the\n+ to the state of the data directory at the point at which the\nsource and\n+ target diverged plus the current state on the source of any blocks\nchanged\n+ on the target after that divergence. While only changed blocks\nfrom relation\n+ files are copied; all other files are copied in full, including\nconfiguration\n+ files and WAL segments. The advantage of\n<application>pg_rewind</application>\n+ over taking a new base backup, or tools like\n<application>rsync</application>,\n+ is that <application>pg_rewind</application> does not require\ncomparing or\n+ copying unchanged relation blocks in the cluster. As such the\nrewind operation\n+ is significantly faster than other approaches when the database is\nlarge and\n+ only a small fraction of blocks differ between the clusters.\n\nThe point of divergence could be defined as the LSN position where WAL\nhas forked on the new timeline, but the block diffs are copied from\nactually the last checkpoint just before WAL has forked. So this new\nparagraph brings confusion about the actual divergence point.\n\nRegarding the relation files, if the file does not exist on the target\nbut does exist on the source, it is also copied fully, so the second\nsentence is wrong here to mention as relation files could also be\ncopied fully.\n--\nMichael",
"msg_date": "Sat, 14 Sep 2019 13:20:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 12:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 13, 2019 at 01:47:03PM -0400, James Coleman wrote:\n> > So I've attached a patch to summarize more correctly as well as\n> > document clearly the state of the cluster after the operation and also\n> > the operation sequencing dangers caused by copying configuration\n> > files from the source.\n>\n> + After a successful rewind, the target data directory is equivalent\n> to the\n> + to the state of the data directory at the point at which the\n> source and\n> + target diverged plus the current state on the source of any blocks\n> changed\n> + on the target after that divergence. While only changed blocks\n> from relation\n> + files are copied; all other files are copied in full, including\n> configuration\n> + files and WAL segments. The advantage of\n> <application>pg_rewind</application>\n> + over taking a new base backup, or tools like\n> <application>rsync</application>,\n> + is that <application>pg_rewind</application> does not require\n> comparing or\n> + copying unchanged relation blocks in the cluster. As such the\n> rewind operation\n> + is significantly faster than other approaches when the database is\n> large and\n> + only a small fraction of blocks differ between the clusters.\n>\n> The point of divergence could be defined as the LSN position where WAL\n> has forked on the new timeline, but the block diffs are copied from\n> actually the last checkpoint just before WAL has forked. So this new\n> paragraph brings confusion about the actual divergence point.\n>\n> Regarding the relation files, if the file does not exist on the target\n> but does exist on the source, it is also copied fully, so the second\n> sentence is wrong here to mention as relation files could also be\n> copied fully.\n\nUpdated (plus some additional wordsmithing).\n\nJames Coleman",
"msg_date": "Sat, 14 Sep 2019 19:00:54 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 07:00:54PM -0400, James Coleman wrote:\n> Updated (plus some additional wordsmithing).\n\n+ The rewind operation is not expected to result in a consistent data\n+ directory state either internally to the node or with respect to the rest\n+ of the cluster. Instead the resulting data directory will only be consistent\n+ after WAL replay has completed to at least the LSN at which changed blocks\n+ copied from the source were originally written on the source.\n\nThat's not necessarily true. pg_rewind enforces in the control file\nof the target the minimum consistency LSN to be\npg_current_wal_insert_lsn() when using a live source or the last\ncheckpoint LSN for a stopped source, so while that sounds true from\nthe point of view of all the blocks copied, the control file may still\ncause a complain that the target recovering has not reached its\nconsistent point even if all the blocks are already at a position\nnot-so-far from what has been registered in the control file.\n\n+ the point at which the WAL timelines of the source and target diverged plus\n+ the current state on the source of any blocks changed on the target after\n+ that divergence. While only changed blocks from existing relation files are\n\nAnd here we could mention that all the blocks copied from the source\nare the ones which are found in the WAL records of the target until\nthe end of WAL of its timeline. Still, that's basically what is\nmentioned in the first part of \"How It Works\", which explains things\nbetter. I honestly don't really see that all this paragraph is an\nimprovement over the simplicity of the original when it comes to\nunderstand the global idea of what pg_rewind does.\n\n+ <para>\n+ Because <application>pg_rewind</application> copies configuration files\n+ entirely from the source, correcting recovery configuration options before\n+ restarting the server is necessary if you intend to re-introduce the target\n+ as a replica of the source. 
If you restart the server after the rewind\n+ operation has finished but without configuring recovery, the target will\n+ again diverge from the primary.\n+ </para>\n\nNo objections regarding that part. Now it seems to me that we had\nbetter apply that to the last part of \"How it works\" instead? I kind\nof agree that the last paragraph could provide more details regarding\nthe risks of overwriting the wanted configuration. The existing docs\nalso mention that pg_rewind only creates a backup_label file to start\nrecovery, perhaps we could mention up to which point recovery happens\nin this section? There is a bit more here than just \"apply the WAL\".\n--\nMichael",
"msg_date": "Sun, 15 Sep 2019 23:25:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Sun, Sep 15, 2019 at 10:25 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Sep 14, 2019 at 07:00:54PM -0400, James Coleman wrote:\n> > Updated (plus some additional wordsmithing).\n>\n> + The rewind operation is not expected to result in a consistent data\n> + directory state either internally to the node or with respect to the rest\n> + of the cluster. Instead the resulting data directory will only be consistent\n> + after WAL replay has completed to at least the LSN at which changed blocks\n> + copied from the source were originally written on the source.\n>\n> That's not necessarily true. pg_rewind enforces in the control file\n> of the target the minimum consistency LSN to be\n> pg_current_wal_insert_lsn() when using a live source or the last\n> checkpoint LSN for a stopped source, so while that sounds true from\n> the point of view of all the blocks copied, the control file may still\n> cause a complain that the target recovering has not reached its\n> consistent point even if all the blocks are already at a position\n> not-so-far from what has been registered in the control file.\n\nI could just say \"after WAL replay has completed to a consistent state\"?\n\n> + the point at which the WAL timelines of the source and target diverged plus\n> + the current state on the source of any blocks changed on the target after\n> + that divergence. While only changed blocks from existing relation files are\n>\n> And here we could mention that all the blocks copied from the source\n> are the ones which are found in the WAL records of the target until\n> the end of WAL of its timeline. Still, that's basically what is\n> mentioned in the first part of \"How It Works\", which explains things\n> better. 
I honestly don't really see that all this paragraph is an\n> improvement over the simplicity of the original when it comes to\n> understand the global idea of what pg_rewind does.\n\nThe problem with the original is that while simple, it's actually\nincorrect in that simplicity. Pg_rewind does *not* result in the data\ndirectory on the target matching the data directory on the source.\n\n> + <para>\n> + Because <application>pg_rewind</application> copies configuration files\n> + entirely from the source, correcting recovery configuration options before\n> + restarting the server is necessary if you intend to re-introduce the target\n> + as a replica of the source. If you restart the server after the rewind\n> + operation has finished but without configuring recovery, the target will\n> + again diverge from the primary.\n> + </para>\n>\n> No objections regarding that part. Now it seems to me that we had\n> better apply that to the last part of \"How it works\" instead? I kind\n> of agree that the last paragraph could provide more details regarding\n> the risks of overwriting the wanted configuration. The existing docs\n> also mention that pg_rewind only creates a backup_label file to start\n> recovery, perhaps we could mention up to which point recovery happens\n> in this section? There is a bit more here than just \"apply the WAL\".\n\nI'll look to see if there's a better place to put this.\n\nJames Coleman\n\n\n",
"msg_date": "Sun, 15 Sep 2019 10:36:04 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Sun, Sep 15, 2019 at 10:36:04AM -0400, James Coleman wrote:\n> On Sun, Sep 15, 2019 at 10:25 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> + The rewind operation is not expected to result in a consistent data\n>> + directory state either internally to the node or with respect to the rest\n>> + of the cluster. Instead the resulting data directory will only be consistent\n>> + after WAL replay has completed to at least the LSN at which changed blocks\n>> + copied from the source were originally written on the source.\n>>\n>> That's not necessarily true. pg_rewind enforces in the control file\n>> of the target the minimum consistency LSN to be\n>> pg_current_wal_insert_lsn() when using a live source or the last\n>> checkpoint LSN for a stopped source, so while that sounds true from\n>> the point of view of all the blocks copied, the control file may still\n>> cause a complain that the target recovering has not reached its\n>> consistent point even if all the blocks are already at a position\n>> not-so-far from what has been registered in the control file.\n> \n> I could just say \"after WAL replay has completed to a consistent state\"?\n\nI still would not change this paragraph. The first sentence means\nthat we have an equivalency, because that's the case if you think\nabout it as we make sure that the target is able to sync with the\nsource, and the target gets into a state where it has an on-disk state\nequivalent to the source up to the minimum consistency point defined\nin the control file once the tool has done its work (this last point\nis too precise to be included in a global description to be honest).\nAnd the second sentence makes clear what the actual diffs are.\n\n>> + the point at which the WAL timelines of the source and target diverged plus\n>> + the current state on the source of any blocks changed on the target after\n>> + that divergence. 
While only changed blocks from existing relation files are\n>>\n>> And here we could mention that all the blocks copied from the source\n>> are the ones which are found in the WAL records of the target until\n>> the end of WAL of its timeline. Still, that's basically what is\n>> mentioned in the first part of \"How It Works\", which explains things\n>> better. I honestly don't really see that all this paragraph is an\n>> improvement over the simplicity of the original when it comes to\n>> understand the global idea of what pg_rewind does.\n> \n> The problem with the original is that while simple, it's actually\n> incorrect in that simplicity. Pg_rewind does *not* result in the data\n> directory on the target matching the data directory on the source.\n\nThat's not what I get from the original docs, but I may be too much\nused to it.\n\n>> + <para>\n>> + Because <application>pg_rewind</application> copies configuration files\n>> + entirely from the source, correcting recovery configuration options before\n>> + restarting the server is necessary if you intend to re-introduce the target\n>> + as a replica of the source. If you restart the server after the rewind\n>> + operation has finished but without configuring recovery, the target will\n>> + again diverge from the primary.\n>> + </para>\n>>\n>> No objections regarding that part. Now it seems to me that we had\n>> better apply that to the last part of \"How it works\" instead? I kind\n>> of agree that the last paragraph could provide more details regarding\n>> the risks of overwriting the wanted configuration. The existing docs\n>> also mention that pg_rewind only creates a backup_label file to start\n>> recovery, perhaps we could mention up to which point recovery happens\n>> in this section? There is a bit more here than just \"apply the WAL\".\n> \n> I'll look to see if there's a better place to put this.\n\nThanks. 
From what I can see, we could further improve the doc part\nabout how the tool works in detail, especially regarding the\nconfiguration files which may get overwritten, and be more precise\nabout that.\n--\nMichael",
"msg_date": "Tue, 17 Sep 2019 16:51:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 3:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Sep 15, 2019 at 10:36:04AM -0400, James Coleman wrote:\n> > On Sun, Sep 15, 2019 at 10:25 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> + The rewind operation is not expected to result in a consistent data\n> >> + directory state either internally to the node or with respect to the rest\n> >> + of the cluster. Instead the resulting data directory will only be consistent\n> >> + after WAL replay has completed to at least the LSN at which changed blocks\n> >> + copied from the source were originally written on the source.\n> >>\n> >> That's not necessarily true. pg_rewind enforces in the control file\n> >> of the target the minimum consistency LSN to be\n> >> pg_current_wal_insert_lsn() when using a live source or the last\n> >> checkpoint LSN for a stopped source, so while that sounds true from\n> >> the point of view of all the blocks copied, the control file may still\n> >> cause a complain that the target recovering has not reached its\n> >> consistent point even if all the blocks are already at a position\n> >> not-so-far from what has been registered in the control file.\n> >\n> > I could just say \"after WAL replay has completed to a consistent state\"?\n>\n> I still would not change this paragraph. 
The first sentence means\n> that we have an equivalency, because that's the case if you think\n> about it as we make sure that the target is able to sync with the\n> source, and the target gets into a state where it as an on-disk state\n> equivalent to the target up to the minimum consistency point defined\n> in the control file once the tool has done its work (this last point\n> is too precise to be included in a global description to be honest).\n> And the second sentence makes clear what are the actual diffs are.\n> >> + the point at which the WAL timelines of the source and target diverged plus\n> >> + the current state on the source of any blocks changed on the target after\n> >> + that divergence. While only changed blocks from existing relation files are\n> >>\n> >> And here we could mention that all the blocks copied from the source\n> >> are the ones which are found in the WAL records of the target until\n> >> the end of WAL of its timeline. Still, that's basically what is\n> >> mentioned in the first part of \"How It Works\", which explains things\n> >> better. I honestly don't really see that all this paragraph is an\n> >> improvement over the simplicity of the original when it comes to\n> >> understand the global idea of what pg_rewind does.\n> >\n> > The problem with the original is that while simple, it's actually\n> > incorrect in that simplicity. Pg_rewind does *not* result in the data\n> > directory on the target matching the data directory on the source.\n>\n> That's not what I get from the original docs, but I may be too much\n> used to it.\n\nI don't agree that that's a valid equivalency. I myself spent a lot of\ntime trying to understand how this could possibly be true a while\nback, and even looked at source code to be certain. 
I've asked other\npeople and found the same confusion.\n\nAs I read it the 2nd sentence doesn't actually tell you the\ndifferences; it makes a quick attempt at summarizing *how* the first\nsentence is true, but if the first sentence isn't accurate, then it's\nhard to read the 2nd one as helping.\n\nIf you'd prefer something less detailed at that point in\nthe docs, then something along the lines of \"results in a data\ndirectory state which can then be safely replayed from the source\" or\nsome such.\n\nThe docs shouldn't be correct just for someone who already understands\nthe intricacies. And the end user shouldn't have to read the \"how it\nworks\" (which incidentally is kinda hidden at the bottom underneath\nthe CLI args -- perhaps we could move that?) to extrapolate things in\nthe primary documentation.\n\nJames Coleman\n\n\n",
"msg_date": "Tue, 17 Sep 2019 08:38:18 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 08:38:18AM -0400, James Coleman wrote:\n> I don't agree that that's a valid equivalency. I myself spent a lot of\n> time trying to understand how this could possibly be true a while\n> back, and even looked at source code to be certain. I've asked other\n> people and found the same confusion.\n> \n> As I read it the 2nd second sentence doesn't actually tell you the\n> differences; it makes a quick attempt at summarizing *how* the first\n> sentence is true, but if the first sentence isn't accurate, then it's\n> hard to read the 2nd one as helping.\n\nWell, then it comes back to the part where I am used to the existing\ndocs :)\n\n> If you'd prefer something less detailed at this point at that point in\n> the docs, then something along the lines of \"results in a data\n> directory state which can then be safely replayed from the source\" or\n> some such.\n\nActually this is a good suggestion, and could replace the first\nsentence of this paragraph.\n\n> The docs shouldn't be correct just for someone how already understands\n> the intricacies. And the end user shouldn't have to read the \"how it\n> works\" (which incidentally is kinda hidden at the bottom underneath\n> the CLI args -- perhaps we could move that?) to extrapolate things in\n> the primary documentation.\n\nPerhaps. This doc page is not that long either.\n--\nMichael",
"msg_date": "Wed, 18 Sep 2019 10:41:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 9:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Sep 17, 2019 at 08:38:18AM -0400, James Coleman wrote:\n> > I don't agree that that's a valid equivalency. I myself spent a lot of\n> > time trying to understand how this could possibly be true a while\n> > back, and even looked at source code to be certain. I've asked other\n> > people and found the same confusion.\n> >\n> > As I read it the 2nd second sentence doesn't actually tell you the\n> > differences; it makes a quick attempt at summarizing *how* the first\n> > sentence is true, but if the first sentence isn't accurate, then it's\n> > hard to read the 2nd one as helping.\n>\n> Well, then it comes back to the part where I am used to the existing\n> docs :)\n>\n> > If you'd prefer something less detailed at this point at that point in\n> > the docs, then something along the lines of \"results in a data\n> > directory state which can then be safely replayed from the source\" or\n> > some such.\n>\n> Actually this is a good suggestion, and could replace the first\n> sentence of this paragraph.\n>\n> > The docs shouldn't be correct just for someone how already understands\n> > the intricacies. And the end user shouldn't have to read the \"how it\n> > works\" (which incidentally is kinda hidden at the bottom underneath\n> > the CLI args -- perhaps we could move that?) to extrapolate things in\n> > the primary documentation.\n>\n> Perhaps. This doc page is not that long either.\n>\n\nI'd set this aside for quite a while, but I was looking at it again this\nafternoon, and I've come to see your concern about the opening paragraphs\nremaining relatively simple. 
To that end I believe I've come up with a\npatch that's a good compromise: retaining that simplicity and being more\nclear and accurate at the same time.\n\nIn the first paragraph I've updated it to refer to both \"successful rewind\nand subsequent WAL replay\" and the result I describe as being equivalent to\nthe result of a base backup, since that's more technically correct anyway\n(the current text could be read as implying a full out copy of the data\ndirectory, but that's not really true just as it isn't with pg_basebackup).\n\nI've added the information about how the backup label control file is\nwritten, and updated the How It Works steps to refer to that separately\nfrom restart.\n\nAdditionally the How It Works is updated to include WAL segments and new\nrelation files in the list of files copied wholesale, since that was\npreviously stated but somewhat contradicted there.\n\nI realized I didn't previously add this to the CF; since it's not a new\npatch I've added it to the current CF, but if this is incorrect please let\nme know.\n\nThanks,\nJames",
"msg_date": "Sun, 8 Mar 2020 17:13:21 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Sun, Mar 8, 2020 at 5:13 PM James Coleman <jtc331@gmail.com> wrote:\n\n>\n> I realized I didn't previously add this to the CF; since it's not a new\n> patch I've added it to the current CF, but if this is incorrect please let\n> me know.\n>\n\n Hmm, looks like I can't add it to the current one. I added it to the next\none. I think it could probably go now, since the patch is really 6 months\nold, but either way is fine -- it's just a docs patch.\n\nJames\n\n",
"msg_date": "Sun, 8 Mar 2020 17:17:39 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Sun, Mar 08, 2020 at 05:13:21PM -0400, James Coleman wrote:\n> I've added the information about how the backup label control file is\n> written, and updated the How It Works steps to refer to that separately\n> from restart.\n> \n> Additionally the How It Works is updated to include WAL segments and new\n> relation files in the list of files copied wholesale, since that was\n> previously stated but somewhat contradicted there.\n\n- The result is equivalent to replacing the target data directory with the\n- source one. Only changed blocks from relation files are copied;\n- all other files are copied in full, including configuration files. The\n- advantage of <application>pg_rewind</application> over taking a new base backup, or\n- tools like <application>rsync</application>, is that <application>pg_rewind</application> does\n- not require reading through unchanged blocks in the cluster. This makes\n- it a lot faster when the database is large and only a small\n- fraction of blocks differ between the clusters.\n+ After a successful rewind and subsequent WAL replay, the target data\n+ directory is equivalent to a base backup of the source data directory. While\n+ only changed blocks from existing relation files are copied; all other files\n+ are copied in full, including new relation files, configuration files, and WAL\n+ segments. The advantage of <application>pg_rewind</application> over taking a\n\nThe first sentence you are adding refers to \"subsequent WAL replay\".\nHowever, this paragraph emphasizes the state of the target\ncluster after running pg_rewind but *before* making the target cluster\nstart recovery. 
So shouldn't you just remove the part \"and subsequent\nWAL replay\" from your first new sentence?\n\nIn the same paragraph, I think that you should remove the \"While\" from\n\"While only changed blocks\", as the second part of the sentence refers\nto the other files, WAL segments, etc.\n\nThe second paragraph of the docs regarding timeline lookup is\nunchanged, which is fine.\n\n- When the target server is started for the first time after running\n- <application>pg_rewind</application>, it will go into recovery mode and replay all\n- WAL generated in the source server after the point of divergence.\n+ After running <application>pg_rewind</application> the data directory is\n+ not immediately in a consistent state. However\n+ <application>pg_rewind</application> configures the control file so that when\n+ the target server is started again it will enter recovery mode and replay all\n+ WAL generated in the source server after the point of divergence.\n\nThe second part of the third paragraph is not changed, and the\nmodification you are doing here is about the control file. I am\nstill unconvinced that this is a good change, because mentioning the\ncontrol file would be actually more adapted to the part \"How it\nworks\", where you are adding details about the backup_label file, and\nalready includes details about the minimum consistency LSN itself\nstored in the control file.\n\n+ <para>\n+ Because <application>pg_rewind</application> copies configuration files\n+ entirely from the source, correcting recovery configuration options before\n+ restarting the server is necessary if you intend to re-introduce the target\n+ as a replica of the source. 
If you restart the server after the rewind\n+ operation has finished but without configuring recovery, the target will\n+ again diverge from the primary.\n+ </para>\n\nTrue that this is not outlined enough.\n\n+ The relation files are now to their state at the last checkpoint completed\n+ prior to the point at which the WAL timelines of the source and target\n+ diverged plus the current state on the source of any blocks changed on the\n+ target after that divergence.\n\n\"Relation files are now in a state equivalent to the moment of the\nlast completed checkpoint prior to the point..\"?\n\n- <filename>pg_stat_tmp/</filename>, and\n- <filename>pg_subtrans/</filename> are omitted from the data copied\n- from the source cluster. Any file or directory beginning with\n- <filename>pgsql_tmp</filename> is omitted, as well as are\n+ <filename>pg_stat_tmp/</filename>, and <filename>pg_subtrans/</filename>\n+ are omitted from the data copied from the source cluster. The files\n\nThis is just reorganizing an existing list, why?\n\n+ Create a backup label file to begin WAL replay at the checkpoint created\n+ at failover and a minimum consistency LSN using\n+ <literal>pg_current_wal_insert_lsn()</literal>, when using a live source\n+ and the last checkpoint LSN, when using a stopped source.\n\nNow would be the moment to mention the control file.\n\n> I realized I didn't previously add this to the CF; since it's not a new\n> patch I've added it to the current CF, but if this is incorrect please let\n> me know.\n\nThe last CF of Postgres 13 began at the beginning of February :(\n--\nMichael",
"msg_date": "Mon, 9 Mar 2020 15:59:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Mon, Mar 9, 2020 at 2:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Mar 08, 2020 at 05:13:21PM -0400, James Coleman wrote:\n> > I've added the information about how the backup label control file is\n> > written, and updated the How It Works steps to refer to that separately\n> > from restart.\n> >\n> > Additionally the How It Works is updated to include WAL segments and new\n> > relation files in the list of files copied wholesale, since that was\n> > previously stated but somewhat contradicted there.\n>\n> - The result is equivalent to replacing the target data directory with\n> the\n> - source one. Only changed blocks from relation files are copied;\n> - all other files are copied in full, including configuration files. The\n> - advantage of <application>pg_rewind</application> over taking a new\n> base backup, or\n> - tools like <application>rsync</application>, is that\n> <application>pg_rewind</application> does\n> - not require reading through unchanged blocks in the cluster. This makes\n> - it a lot faster when the database is large and only a small\n> - fraction of blocks differ between the clusters.\n> + After a successful rewind and subsequent WAL replay, the target data\n> + directory is equivalent to a base backup of the source data directory.\n> While\n> + only changed blocks from existing relation files are copied; all other\n> files\n> + are copied in full, including new relation files, configuration files,\n> and WAL\n> + segments. The advantage of <application>pg_rewind</application> over\n> taking a\n>\n> The first sentence you are adding refers to \"subsequent WAL replay\".\n> However, this paragraph emphasizes with the state of the target\n> cluster after running pg_rewind but *before* make the target cluster\n> start recovery. So shouldn't you just remove the part \"and subsequent\n> WAL replay\" from your first new sentence?\n>\n\nI'd originally typed this:\nI'm not sure I follow. 
After pg_rewind but before replay the directory is\n*not* equivalent to a base backup. I don't see how this paragraph is clearly\nlimited to describing what pg_rewind does. While the 2nd sentence is about\npg_rewind steps specifically, the paragraph (even in the original) goes on\nto compare it to a base backup so we're talking about the operation in\ntotality not just the one tool.\n\nBut I realized while typing it that I was probably missing something of\nwhat you were getting at: is the hangup on calling out the WAL replay that\na base backup (or rsync even) *also* requires WAL replay to reach a\nconsistent state? I hadn't thought of that while writing this initially, so\nI've updated the patch to eliminate that part but also to make the analogy\nto base backups more direct, since it's helpful in understanding what\nresult the tool is trying to accomplish and how it differs.\n\n> In the same paragraph, I think that you should remove the \"While\" from\n> \"While only changed blocks\", as the second part of the sentence refers\n> to the other files, WAl segments, etc.\n>\n\nFixed as part of the above.\n\n\n> The second paragraph of the docs regarding timeline lookup is\n> unchanged, which is fine.\n>\n> - When the target server is started for the first time after running\n> - <application>pg_rewind</application>, it will go into recovery mode\n> and replay all\n> - WAL generated in the source server after the point of divergence.\n> + After running <application>pg_rewind</application> the data directory\n> is\n> + not immediately in a consistent state. However\n> + <application>pg_rewind</application> configures the control file so\n> that when\n> + the target server is started again it will enter recovery mode and\n> replay all\n> + WAL generated in the source server after the point of divergence.\n>\n> The second part of the third paragraph is not changed, and the\n> modification you are doing here is about the control file. 
I am\n> still unconvinced that this is a good change, because mentioning the\n> control file would be actually more adapted to the part \"How it\n> works\", where you are adding details about the backup_label file, and\n> already include details about the minimum consistency LSN itself\n> stored in the control file.\n>\n\nI've removed the control file reference and instead continued the analogy\nto base backups.\n\n\n> + <para>\n> + Because <application>pg_rewind</application> copies configuration\n> files\n> + entirely from the source, correcting recovery configuration options\n> before\n> + restarting the server is necessary if you intend to re-introduce the\n> target\n> + as a replica of the source. If you restart the server after the rewind\n> + operation has finished but without configuring recovery, the target\n> will\n> + again diverge from the primary.\n> + </para>\n>\n> True that this is not outlined enough.\n>\n\nThanks.\n\n\n> + The relation files are now to their state at the last checkpoint\n> completed\n> + prior to the point at which the WAL timelines of the source and\n> target\n> + diverged plus the current state on the source of any blocks changed\n> on the\n> + target after that divergence.\n>\n> \"Relation files are now in a state equivalent to the moment of the\n> last completed checkpoint prior to the point..\"?\n>\n\nUpdated.\n\n\n> - <filename>pg_stat_tmp/</filename>, and\n> - <filename>pg_subtrans/</filename> are omitted from the data copied\n> - from the source cluster. Any file or directory beginning with\n> - <filename>pgsql_tmp</filename> is omitted, as well as are\n> + <filename>pg_stat_tmp/</filename>, and\n> <filename>pg_subtrans/</filename>\n> + are omitted from the data copied from the source cluster. 
The files\n>\n> This is just reorganizing an existing list, why?\n>\n\nThe grammar seemed a bit awkward to me, so while I was already reworking\nthis paragraph I tried to clean that up a bit.\n\n\n> + Create a backup label file to begin WAL replay at the checkpoint\n> created\n> + at failover and a minimum consistency LSN using\n> + <literal>pg_current_wal_insert_lsn()</literal>, when using a live\n> source\n> + and the last checkpoint LSN, when using a stopped source.\n>\n> Now would be the moment to mention the control file.\n>\n\nI made that more explicit here, and also referenced the filenames directly\n(and with tags).\n\n\n> > I realized I didn't previously add this to the CF; since it's not a new\n> > patch I've added it to the current CF, but if this is incorrect please\n> let\n> > me know.\n>\n> The last CF of Postgres 13 began at the beginning of February :(\n>\n\nStill ongoing, correct? I guess I mentally think of them as being only one\nmonth, but I guess that's not actually true. Regardless I'm not sure what\npolicy is for patches that have been in flight in hackers for a while but\njust missed being added to the CF app.\n\nThanks,\nJames",
"msg_date": "Mon, 9 Mar 2020 09:26:17 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Mon, Mar 09, 2020 at 09:26:17AM -0400, James Coleman wrote:\n>> - <filename>pg_stat_tmp/</filename>, and\n>> - <filename>pg_subtrans/</filename> are omitted from the data copied\n>> - from the source cluster. Any file or directory beginning with\n>> - <filename>pgsql_tmp</filename> is omitted, as well as are\n>> + <filename>pg_stat_tmp/</filename>, and\n>> <filename>pg_subtrans/</filename>\n>> + are omitted from the data copied from the source cluster. The files\n>>\n>> This is just reorganizing an existing list, why?\n>>\n> \n> The grammar seemed a bit awkward to me, so while I was already reworking\n> this paragraph I tried to clean that up a bit.\n\nThanks for the new patch, and sorry for the delay.\n\nOkay, I saw what you were getting at here, with one sentence for\ndirectories, and one for files.\n\n> Still ongoing, correct? I guess I mentally think of them as being only one\n> month, but I guess that's not actually true. Regardless I'm not sure what\n> policy is for patches that have been in flight in hackers for a while but\n> just missed being added to the CF app.\n\nThis is a documentation patch, so improving this part of the docs now\nis fine by me, particularly as this is an improvement. Here are more\nnotes from me:\n- I have removed the \"As with a base backup\" at the beginning of the\nsecond paragraph you modified. 
The first paragraph modified already\nreferences a base backup, so one reference is enough IMO.\n- WAL replay does not happen from the WAL position where WAL diverged,\nbut from the last checkpoint before WAL diverged.\n- Did some tweaks about the new part for configuration files, as it\nmay actually not be necessary to update the configuration for recovery\nto complete (depending on the settings of the source, the target may\njust require the creation of a standby.signal file in its data\ndirectory particularly with a common archive location for multiple\nclusters).\n- Some word-smithing in the step-by-step description.\n\nIs the updated version fine for you?\n--\nMichael",
"msg_date": "Tue, 28 Apr 2020 13:31:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Tue, Apr 28, 2020 at 12:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 09, 2020 at 09:26:17AM -0400, James Coleman wrote:\n> >> - <filename>pg_stat_tmp/</filename>, and\n> >> - <filename>pg_subtrans/</filename> are omitted from the data copied\n> >> - from the source cluster. Any file or directory beginning with\n> >> - <filename>pgsql_tmp</filename> is omitted, as well as are\n> >> + <filename>pg_stat_tmp/</filename>, and\n> >> <filename>pg_subtrans/</filename>\n> >> + are omitted from the data copied from the source cluster. The files\n> >>\n> >> This is just reorganizing an existing list, why?\n> >>\n> >\n> > The grammar seemed a bit awkward to me, so while I was already reworking\n> > this paragraph I tried to clean that up a bit.\n>\n> Thanks for the new patch, and sorry for the delay.\n>\n> Okay, I saw what you were coming at here, with one sentence for\n> directories, and one for files.\n>\n> > Still ongoing, correct? I guess I mentally think of them as being only one\n> > month, but I guess that's not actually true. Regardless I'm not sure what\n> > policy is for patches that have been in flight in hackers for a while but\n> > just missed being added to the CF app.\n>\n> This is a documentation patch, so improving this part of the docs now\n> is fine by me, particularly as this is an improvement. Here are more\n> notes from me:\n> - I have removed the \"As with a base backup\" at the beginning of the\n> second paragraph you modified. 
The first paragraph modified already\n> references a base backup, so one reference is enough IMO.\n> - WAL replay does not happen from the WAL position where WAL diverged,\n> but from the last checkpoint before WAL diverged.\n> - Did some tweaks about the new part for configuration files, as it\n> may actually not be necessary to update the configuration for recovery\n> to complete (depending on the settings of the source, the target may\n> just require the creation of a standby.signal file in its data\n> directory particularly with a common archive location for multiple\n> clusters).\n> - Some word-smithing in the step-by-step description.\n>\n> Is the updated version fine for you?\n\nIn your revised patch, the following paragraph:\n\n+ <para>\n+ As <application>pg_rewind</application> copies configuration files\n+ entirely from the source, it may be required to correct the configuration\n+ used for recovery before restarting the target server, especially the\n+ the target is reintroduced as a standby of the source. If you restart\n+ the server after the rewind operation has finished but without configuring\n+ recovery, the target may again diverge from the primary.\n+ </para>\n\nI think it is missing a word. Instead of \"especially the the target\" it\nshould be \"especially if the target\".\n\nIn this block:\n\n+ Create a <filename>backup_label</filename> file to begin WAL replay at\n+ the checkpoint created at failover and configure the\n+ <filename>pg_control</filename> file with a minimum consistency LSN\n+ defined as the result of <literal>pg_current_wal_insert_lsn()</literal>\n+ when rewinding from a live source and using the last checkpoint LSN\n+ when rewinding from a stopped source.\n+ </para>\n\nPerhaps change \"and using the last checkpoint LSN\" to \"or the last\ncheckpoint LSN\". 
Alternatively you could make the grammar parallel by\nchanging to \"and defined as the last checkpoint LSN\", but that seems\nwordy, and the \"defined as [item or item]\" is already a good grammar\nconstruction.\n\nOther than those two small things, your proposed revision looks good to me.\n\nThanks,\nJames\n\n\n",
"msg_date": "Tue, 28 Apr 2020 12:13:38 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Tue, Apr 28, 2020 at 12:13:38PM -0400, James Coleman wrote:\n> I think is missing a word. Instead of \"especially the the target\"\n> should be \"especially if the target\".\n\nThanks, fixed.\n\n> In this block:\n> \n> + Create a <filename>backup_label</filename> file to begin WAL replay at\n> + the checkpoint created at failover and configure the\n> + <filename>pg_control</filename> file with a minimum consistency LSN\n> + defined as the result of <literal>pg_current_wal_insert_lsn()</literal>\n> + when rewinding from a live source and using the last checkpoint LSN\n> + when rewinding from a stopped source.\n> + </para>\n> \n> Perhaps change \"and using the last checkpoint LSN\" to \"or the last\n> checkpoint LSN\". Alternatively you could make the grammar parallel by\n> changing to \"and defined as the last checkpoint LSN\", but that seems\n> wordy, and the \"defined as [item or item]\" is already a good grammar\n> construction.\n\nUsing your first suggestion of \"or the last checkpoint LSN\" sounds\nmore natural as of this morning, so updated the patch with that.\n\nI am letting that aside for a couple of days to see if others have\nmore comments, and will likely commit it after an extra lookup.\n--\nMichael",
"msg_date": "Wed, 29 Apr 2020 09:15:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Wed, Apr 29, 2020 at 09:15:06AM +0900, Michael Paquier wrote:\n> I am letting that aside for a couple of days to see if others have\n> more comments, and will likely commit it after an extra lookup.\n\nAnd applied after an extra lookup. Thanks for the discussion, James.\n--\nMichael",
"msg_date": "Fri, 1 May 2020 17:45:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind docs correction"
},
{
"msg_contents": "On Fri, May 1, 2020 at 4:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Apr 29, 2020 at 09:15:06AM +0900, Michael Paquier wrote:\n> > I am letting that aside for a couple of days to see if others have\n> > more comments, and will likely commit it after an extra lookup.\n>\n> And applied after an extra lookup. Thanks for the discussion, James.\n\nYep. Thanks for pushing to make sure it was as correct as possible\nwhile improving it.\n\nJames\n\n\n",
"msg_date": "Fri, 1 May 2020 12:32:41 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind docs correction"
}
] |
[
{
"msg_contents": "Hello,\n\nI noticed the tests for range types do this:\n\ncreate table numrange_test2(nr numrange);\ncreate index numrange_test2_hash_idx on numrange_test2 (nr);\n\nDoes that need a `using hash`? It seems like that's the intention. We\nonly use that table for equality comparisons. The script already\ncreates a table with a btree index further up. If I don't drop the\ntable I can see it's not a hash index:\n\nregression=# \\d numrange_test2\n Table \"public.numrange_test2\"\n Column | Type | Collation | Nullable | Default\n--------+----------+-----------+----------+---------\n nr | numrange | | |\nIndexes:\n \"numrange_test2_hash_idx\" btree (nr)\n\nEverything else passes if I change just that one line in the\n{sql,expected} files.\n\nRegards,\nPaul\n\n\n",
"msg_date": "Fri, 13 Sep 2019 12:17:59 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "range test for hash index?"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 12:48 AM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n>\n> Hello,\n>\n> I noticed the tests for range types do this:\n>\n> create table numrange_test2(nr numrange);\n> create index numrange_test2_hash_idx on numrange_test2 (nr);\n>\n> Does that need a `using hash`? It seems like that's the intention.\n>\n\nI also think so. It appears to be added by commit 4429f6a9e3 which\nhas also added support for hash_range. So ideally this index should\nbe there to cover hash_range. I think you can once cross-check if by\ndefault this test-file covers the case of hash_range? If not and the\nchange you are proposing starts covering that code, then there is a\ngood chance that your finding is correct.\n\nIn general, the hash_range is covered by some of the existing test,\nbut I don't which test. See the code coverage report here:\nhttps://coverage.postgresql.org/src/backend/utils/adt/rangetypes.c.gcov.html\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 14 Sep 2019 17:43:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range test for hash index?"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 5:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> In general, the hash_range is covered by some of the existing test,\n> but I don't which test. See the code coverage report here:\n> https://coverage.postgresql.org/src/backend/utils/adt/rangetypes.c.gcov.html\n\nThanks! I did some experimenting, and the current test code *only*\ncalls `hash_range_internal` when we force it like this:\n\nset enable_nestloop=f;\nset enable_hashjoin=t;\nset enable_mergejoin=f;\nselect * from numrange_test natural join numrange_test2 order by nr;\n\nBut if I create that index as a hash index instead, we also call it\nfor these inserts and selects (except for the empty ranges):\n\ncreate table numrange_test2(nr numrange);\ncreate index numrange_test2_hash_idx on numrange_test2 (nr);\n\nINSERT INTO numrange_test2 VALUES('[, 5)');\nINSERT INTO numrange_test2 VALUES(numrange(1.1, 2.2));\nINSERT INTO numrange_test2 VALUES(numrange(1.1, 2.2));\nINSERT INTO numrange_test2 VALUES(numrange(1.1, 2.2,'()'));\nINSERT INTO numrange_test2 VALUES('empty');\n\nselect * from numrange_test2 where nr = 'empty'::numrange;\nselect * from numrange_test2 where nr = numrange(1.1, 2.2);\nselect * from numrange_test2 where nr = numrange(1.1, 2.3);\n\n(None of that is surprising, right? :-)\n\nSo that seems like more confirmation that it was always intended to be\na hash index. Would you like a commit for that? Is it a small enough\nchange for a committer to just do it? 
The entire change is simply\n(also attached as a file):\n\ndiff --git a/src/test/regress/expected/rangetypes.out\nb/src/test/regress/expected/rangetypes.out\nindex 60d875e898..6fd16bddd1 100644\n--- a/src/test/regress/expected/rangetypes.out\n+++ b/src/test/regress/expected/rangetypes.out\n@@ -519,7 +519,7 @@ select numrange(1.0, 2.0) * numrange(2.5, 3.0);\n (1 row)\n\n create table numrange_test2(nr numrange);\n-create index numrange_test2_hash_idx on numrange_test2 (nr);\n+create index numrange_test2_hash_idx on numrange_test2 using hash (nr);\n INSERT INTO numrange_test2 VALUES('[, 5)');\n INSERT INTO numrange_test2 VALUES(numrange(1.1, 2.2));\n INSERT INTO numrange_test2 VALUES(numrange(1.1, 2.2));\ndiff --git a/src/test/regress/sql/rangetypes.sql\nb/src/test/regress/sql/rangetypes.sql\nindex 9fdb1953df..8960add976 100644\n--- a/src/test/regress/sql/rangetypes.sql\n+++ b/src/test/regress/sql/rangetypes.sql\n@@ -119,7 +119,7 @@ select numrange(1.0, 2.0) * numrange(1.5, 3.0);\n select numrange(1.0, 2.0) * numrange(2.5, 3.0);\n\n create table numrange_test2(nr numrange);\n-create index numrange_test2_hash_idx on numrange_test2 (nr);\n+create index numrange_test2_hash_idx on numrange_test2 using hash (nr);\n\n INSERT INTO numrange_test2 VALUES('[, 5)');\n INSERT INTO numrange_test2 VALUES(numrange(1.1, 2.2));\n\nYours,\nPaul",
"msg_date": "Sun, 15 Sep 2019 18:52:49 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: range test for hash index?"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 7:23 AM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n>\n> On Sat, Sep 14, 2019 at 5:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > In general, the hash_range is covered by some of the existing test,\n> > but I don't which test. See the code coverage report here:\n> > https://coverage.postgresql.org/src/backend/utils/adt/rangetypes.c.gcov.html\n>\n> Thanks! I did some experimenting, and the current test code *only*\n> calls `hash_range_internal` when we force it like this:\n>\n\nI don't see this function on the master branch. Is this function name\ncorrect? Are you looking at some different branch?\n\n> set enable_nestloop=f;\n> set enable_hashjoin=t;\n> set enable_mergejoin=f;\n> select * from numrange_test natural join numrange_test2 order by nr;\n>\n> But if I create that index as a hash index instead, we also call it\n> for these inserts and selects (except for the empty ranges):\n>\n> create table numrange_test2(nr numrange);\n> create index numrange_test2_hash_idx on numrange_test2 (nr);\n>\n> INSERT INTO numrange_test2 VALUES('[, 5)');\n> INSERT INTO numrange_test2 VALUES(numrange(1.1, 2.2));\n> INSERT INTO numrange_test2 VALUES(numrange(1.1, 2.2));\n> INSERT INTO numrange_test2 VALUES(numrange(1.1, 2.2,'()'));\n> INSERT INTO numrange_test2 VALUES('empty');\n>\n> select * from numrange_test2 where nr = 'empty'::numrange;\n> select * from numrange_test2 where nr = numrange(1.1, 2.2);\n> select * from numrange_test2 where nr = numrange(1.1, 2.3);\n>\n> (None of that is surprising, right? :-)\n>\n> So that seems like more confirmation that it was always intended to be\n> a hash index.\n\nYes, it indicates that.\n\nJeff/Heikki, to me the issue pointed by Paul looks like an oversight\nin commit 4429f6a9e3. Can you think of any other reason? If not, I\ncan commit this patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Sep 2019 17:58:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range test for hash index?"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 5:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I don't see this function on the master branch. Is this function name\n> correct? Are you looking at some different branch?\n\nSorry about that! You're right, I was on my multirange branch. But I\nsee the same thing on latest master (but calling hash_range instead of\nhash_range_internal).\n\nPaul\n\n\n",
"msg_date": "Mon, 16 Sep 2019 10:54:03 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: range test for hash index?"
},
{
"msg_contents": "Hello,\n\nI've done some code coverage testing by running make check-world. It\ndoesn't show any difference in the test coverage. The patch looks good to\nme.\n\n-- \nThanks & Regards,\nMahendra Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 17 Sep 2019 18:45:06 +0530",
"msg_from": "Mahendra Singh <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range test for hash index?"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 11:24 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n>\n> On Mon, Sep 16, 2019 at 5:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I don't see this function on the master branch. Is this function name\n> > correct? Are you looking at some different branch?\n>\n> Sorry about that! You're right, I was on my multirange branch. But I\n> see the same thing on latest master (but calling hash_range instead of\n> hash_range_internal).\n>\n\nNo problem, attached is a patch with a proposed commit message. I\nwill wait for a few days to see if Heikki/Jeff or anyone else responds\nback, otherwise will commit and backpatch this early next week.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 18 Sep 2019 09:30:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range test for hash index?"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 9:30 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 16, 2019 at 11:24 PM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n> >\n> > On Mon, Sep 16, 2019 at 5:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I don't see this function on the master branch. Is this function name\n> > > correct? Are you looking at some different branch?\n> >\n> > Sorry about that! You're right, I was on my multirange branch. But I\n> > see the same thing on latest master (but calling hash_range instead of\n> > hash_range_internal).\n> >\n>\n> No problem, attached is a patch with a proposed commit message. I\n> will wait for a few days to see if Heikki/Jeff or anyone else responds\n> back, otherwise will commit and backpatch this early next week.\n>\n\nToday, while I was trying to backpatch, I realized that hash indexes\nwere not WAL-logged before 10 and they give warning \"WARNING: hash\nindexes are not WAL-logged and their use is discouraged\". However,\nthis test has nothing to do with the durability of hash-indexes, so I\nthink we can safely backpatch, but still, I thought it is better to\ncheck if anybody thinks that is not a good idea. In back-branches,\nwe are already using hash-index in regression tests in some cases like\nenum.sql, macaddr.sql, etc., so adding for one more genuine case\nshould be fine. OTOH, we can back-patch till 10, but the drawback is\nthe tests will be inconsistent across branches. Does anyone think it\nis not a good idea to backpatch this till 9.4?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Sep 2019 09:07:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range test for hash index?"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 09:07:13AM +0530, Amit Kapila wrote:\n>On Wed, Sep 18, 2019 at 9:30 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Mon, Sep 16, 2019 at 11:24 PM Paul A Jungwirth\n>> <pj@illuminatedcomputing.com> wrote:\n>> >\n>> > On Mon, Sep 16, 2019 at 5:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > > I don't see this function on the master branch. Is this function name\n>> > > correct? Are you looking at some different branch?\n>> >\n>> > Sorry about that! You're right, I was on my multirange branch. But I\n>> > see the same thing on latest master (but calling hash_range instead of\n>> > hash_range_internal).\n>> >\n>>\n>> No problem, attached is a patch with a proposed commit message. I\n>> will wait for a few days to see if Heikki/Jeff or anyone else responds\n>> back, otherwise will commit and backpatch this early next week.\n>>\n>\n>Today, while I was trying to backpatch, I realized that hash indexes\n>were not WAL-logged before 10 and they give warning \"WARNING: hash\n>indexes are not WAL-logged and their use is discouraged\". However,\n>this test has nothing to do with the durability of hash-indexes, so I\n>think we can safely backpatch, but still, I thought it is better to\n>check if anybody thinks that is not a good idea. In back-branches,\n>we are already using hash-index in regression tests in some cases like\n>enum.sql, macaddr.sql, etc., so adding for one more genuine case\n>should be fine. OTOH, we can back-patch till 10, but the drawback is\n>the tests will be inconsistent across branches. Does anyone think it\n>is not a good idea to backpatch this till 9.4?\n>\n\nBy \"inconsistent\" you mean that pre-10 versions will have different\nexpected output than versions with WAL-logged hash indexes? 
I don't see\nwhy that would be a reason not to backpatch to all supported versions,\nconsidering we already have the same difference for other test suites.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 27 Sep 2019 00:33:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: range test for hash index?"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 4:03 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Sep 25, 2019 at 09:07:13AM +0530, Amit Kapila wrote:\n> >On Wed, Sep 18, 2019 at 9:30 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Mon, Sep 16, 2019 at 11:24 PM Paul A Jungwirth\n> >> <pj@illuminatedcomputing.com> wrote:\n> >> >\n> >> > On Mon, Sep 16, 2019 at 5:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> > > I don't see this function on the master branch. Is this function name\n> >> > > correct? Are you looking at some different branch?\n> >> >\n> >> > Sorry about that! You're right, I was on my multirange branch. But I\n> >> > see the same thing on latest master (but calling hash_range instead of\n> >> > hash_range_internal).\n> >> >\n> >>\n> >> No problem, attached is a patch with a proposed commit message. I\n> >> will wait for a few days to see if Heikki/Jeff or anyone else responds\n> >> back, otherwise will commit and backpatch this early next week.\n> >>\n> >\n> >Today, while I was trying to backpatch, I realized that hash indexes\n> >were not WAL-logged before 10 and they give warning \"WARNING: hash\n> >indexes are not WAL-logged and their use is discouraged\". However,\n> >this test has nothing to do with the durability of hash-indexes, so I\n> >think we can safely backpatch, but still, I thought it is better to\n> >check if anybody thinks that is not a good idea. In back-branches,\n> >we are already using hash-index in regression tests in some cases like\n> >enum.sql, macaddr.sql, etc., so adding for one more genuine case\n> >should be fine. OTOH, we can back-patch till 10, but the drawback is\n> >the tests will be inconsistent across branches. 
Does anyone think it\n> >is not a good idea to backpatch this till 9.4?\n> >\n>\n> By \"inconsistent\" you mean that pre-10 versions will have different\n> expected output than versions with WAL-logged hash indexes?\n>\n\nYes.\n\n> I don't see\n> why that would be a reason not to backpatch to all supported versions,\n> considering we already have the same difference for other test suites.\n>\n\nYeah, I also think so. I will do this today.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Sep 2019 06:02:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range test for hash index?"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 6:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 27, 2019 at 4:03 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> >\n> > By \"inconsistent\" you mean that pre-10 versions will have different\n> > expected output than versions with WAL-logged hash indexes?\n> >\n>\n> Yes.\n>\n> > I don't see\n> > why that would be a reason not to backpatch to all supported versions,\n> > considering we already have the same difference for other test suites.\n> >\n>\n> Yeah, I also think so. I will do this today.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Sep 2019 09:48:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range test for hash index?"
}
] |
[
{
"msg_contents": "Hi Alvaro!\n\n\n\n> Hello Tatsuro,\n> On 2019-Aug-13, Tatsuro Yamada wrote:\n> > On 2019/08/02 3:43, Alvaro Herrera wrote:\n> > > Hmm, I'm trying this out now and I don't see the index_rebuild_count\n> > > ever go up. I think it's because the indexes are built using parallel\n> > > index build ... or maybe it was the table AM changes that moved things\n> > > around, not sure. There's a period at the end when the CLUSTER command\n> > > keeps working, but it's gone from pg_stat_progress_cluster.\n> >\n> > Thanks for your report.\n> > I'll investigate it. :)\n>\n\n\nI have fixed it. Can you please verify?\n>\n\n\nThanks! I can review your patch for fix it.\nHowever, I was starting fixing the problem from the last day of PGConf.Asia\n(11 Sep).\nAttached file is WIP patch.In my patch, I added \"command id\" to all APIs of\nprogress reporting to isolate commands. Therefore, it doesn't allow to\ncascade updating system views. And my patch is on WIP so it needs clean-up\nand test.\nI share it anyway. 
:)\n\nHere is a test result of my patch.\nThe last column index_rebuild count is increased.\n========================================\npostgres=# select * from pg_stat_progress_cluster ; \\watch 0.001;\n11636|13591|postgres|16384|CLUSTER|initializing|0|0|0|0|0|0\n11636|13591|postgres|16384|CLUSTER|index scanning heap|16389|251|251|0|0|0\n...\n11636|13591|postgres|16384|CLUSTER|index scanning\nheap|16389|10000|10000|0|0|0\n11636|13591|postgres|16384|CLUSTER|rebuilding\nindex|16389|10000|10000|0|0|0...\n11636|13591|postgres|16384|CLUSTER|rebuilding index|16389|10000|10000|0|0|1\n...\n11636|13591|postgres|16384|CLUSTER|rebuilding index|16389|10000|10000|0|0|2\n...\n11636|13591|postgres|16384|CLUSTER|rebuilding index|16389|10000|10000|0|0|3\n...\n11636|13591|postgres|16384|CLUSTER|rebuilding index|16389|10000|10000|0|0|4\n...\n11636|13591|postgres|16384|CLUSTER|performing final\ncleanup|16389|10000|10000|0|0|5\n========================================\n\nThanks,\nTatsuro Yamada",
"msg_date": "Sat, 14 Sep 2019 13:06:32 +0900",
"msg_from": "Tattsu Yama <yamatattsu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 01:06:32PM +0900, Tattsu Yama wrote:\n> Thanks! I can review your patch for fix it.\n> However, I was starting fixing the problem from the last day of PGConf.Asia\n> (11 Sep).\n> Attached file is WIP patch.In my patch, I added \"command id\" to all APIs of\n> progress reporting to isolate commands. Therefore, it doesn't allow to\n> cascade updating system views. And my patch is on WIP so it needs clean-up\n> and test.\n> I share it anyway. :)\n\n+ if (cmdtype == PROGRESS_COMMAND_INVALID || beentry->st_progress_command == cmdtype)\n+ {\n+ PGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n+ beentry->st_progress_param[index] = val;\n+ PGSTAT_END_WRITE_ACTIVITY(beentry);\n+ }\nYou basically don't need the progress reports if the command ID is\ninvalid, no?\n\nAnother note is that you don't actually fix the problems related to\nthe calls of pgstat_progress_end_command() which have been added for\nREINDEX reporting, so a progress report started for CLUSTER can get\nended earlier than expected, preventing the follow-up progress updates\nto show up.\n--\nMichael",
"msg_date": "Sat, 14 Sep 2019 13:30:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
},
{
"msg_contents": "Hi Michael!\n\n> Attached file is WIP patch.In my patch, I added \"command id\" to all APIs\n> of\n> > progress reporting to isolate commands. Therefore, it doesn't allow to\n> > cascade updating system views. And my patch is on WIP so it needs\n> clean-up\n> > and test.\n> > I share it anyway. :)\n>\n> + if (cmdtype == PROGRESS_COMMAND_INVALID ||\n> beentry->st_progress_command == cmdtype)\n> + {\n> + PGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n> + beentry->st_progress_param[index] = val;\n> + PGSTAT_END_WRITE_ACTIVITY(beentry);\n> + }\n> You basically don't need the progress reports if the command ID is\n> invalid, no?\n>\n\n\nAh, right.\nI'll check and fix that today. :)\n\n\n\n>\n> Another note is that you don't actually fix the problems related to\n> the calls of pgstat_progress_end_command() which have been added for\n> REINDEX reporting, so a progress report started for CLUSTER can get\n> ended earlier than expected, preventing the follow-up progress updates\n> to show up.\n>\n>\n\nHmm... I fixed the problem. 
Please confirm the test result repeated below.\nCLUSTER is able to get the last phase: performing final clean up by using\nthe patch.\n\n# Test result\n========================================\npostgres=# select * from pg_stat_progress_cluster ; \\watch 0.001;\n11636|13591|postgres|16384|CLUSTER|initializing|0|0|0|0|0|0\n11636|13591|postgres|16384|CLUSTER|index scanning heap|16389|251|251|0|0|0\n11636|13591|postgres|16384|CLUSTER|index scanning\nheap|16389|10000|10000|0|0|0\n11636|13591|postgres|16384|CLUSTER|rebuilding\nindex|16389|10000|10000|0|0|0 <== The last column rebuild_index_count is\nincreasing!\n11636|13591|postgres|16384|CLUSTER|rebuilding index|16389|10000|10000|0|0|1\n11636|13591|postgres|16384|CLUSTER|rebuilding index|16389|10000|10000|0|0|2\n11636|13591|postgres|16384|CLUSTER|rebuilding index|16389|10000|10000|0|0|3\n11636|13591|postgres|16384|CLUSTER|rebuilding index|16389|10000|10000|0|0|4\n11636|13591|postgres|16384|CLUSTER|performing final\ncleanup|16389|10000|10000|0|0|5 <== The last phase of CLUSTER!\n========================================\n\nThanks,\nTatsuro Yamada",
"msg_date": "Sun, 15 Sep 2019 12:35:10 +0900",
"msg_from": "Tattsu Yama <yamatattsu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
},
{
"msg_contents": "Hi Michael,\n\n\n>> > Attached file is WIP patch.In my patch, I added \"command id\" to all\n>> APIs of\n>> > progress reporting to isolate commands. Therefore, it doesn't allow to\n>> > cascade updating system views. And my patch is on WIP so it needs\n>> clean-up\n>> > and test.\n>> > I share it anyway. :)\n>>\n>> + if (cmdtype == PROGRESS_COMMAND_INVALID ||\n>> beentry->st_progress_command == cmdtype)\n>> + {\n>> + PGSTAT_BEGIN_WRITE_ACTIVITY(beentry);\n>> + beentry->st_progress_param[index] = val;\n>> + PGSTAT_END_WRITE_ACTIVITY(beentry);\n>> + }\n>> You basically don't need the progress reports if the command ID is\n>> invalid, no?\n>>\n>\n>\n> Ah, right.\n> I'll check and fix that today. :)\n>\n>\n\nI fixed the patch based on your comment.\nPlease find attached file. :)\n\nI should have explained the API changes more. I added cmdtype as a given\nparameter for all functions (See below).\nTherefore, I suppose that my patch is similar to the following fix as you\nmentioned on -hackers.\n\n- Allow only reporting for a given command ID, which would basically\n> require to pass down the command ID to progress update APIs and bypass an\n> update\n> if the command ID provided by caller does not match the existing one\n> started (?).\n\n\n#pgstat.c\npgstat_progress_start_command(ProgressCommandType cmdtype,...)\n - Progress reporter starts when beentry->st_progress_command is\nPROGRESS_COMMAND_INVALID\n\npgstat_progress_end_command(ProgressCommandType cmdtype,...)\n - Progress reporter ends when beentry->st_progress_command equals cmdtype\n\npgstat_progress_update_param(ProgressCommandType cmdtype,...) 
and\npgstat_progress_update_multi_param(ProgressCommandType cmdtype,...)\n - Progress reporter updates parameters if beentry->st_progress_command\nequals cmdtype\n\nNote:\ncmdtype means the ProgressCommandType below:\n\n# pgstat.h\ntypedef enum ProgressCommandType\n{\n PROGRESS_COMMAND_INVALID,\n PROGRESS_COMMAND_VACUUM,\n PROGRESS_COMMAND_CLUSTER,\n PROGRESS_COMMAND_CREATE_INDEX\n} ProgressCommandType;\n\nThanks,\nTatsuro Yamada",
"msg_date": "Mon, 16 Sep 2019 15:26:10 +0900",
"msg_from": "Tattsu Yama <yamatattsu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
},
{
"msg_contents": "On 2019-Sep-16, Tattsu Yama wrote:\n\n> I should have explained the API changes more. I added cmdtype as a given\n> parameter for all functions (See below).\n> Therefore, I suppose that my patch is similar to the following fix as you\n> mentioned on -hackers.\n\nIs this fix strictly necessary for pg12, or is this something that we\ncan leave for pg13?\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Sep 2019 11:12:15 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
},
{
"msg_contents": "Hi Alvaro,\n\n\nOn 2019/09/16 23:12, Alvaro Herrera wrote:\n> On 2019-Sep-16, Tattsu Yama wrote:\n> \n>> I should have explained the API changes more. I added cmdtype as a given\n>> parameter for all functions (See below).\n>> Therefore, I suppose that my patch is similar to the following fix as you\n>> mentioned on -hackers.\n> \n> Is this fix strictly necessary for pg12, or is this something that we\n> can leave for pg13?\n\n\nNot only me but many DBA needs this progress report feature on PG12,\ntherefore I'm trying to fix the problem. If you send other patch to\nfix the problem, and it is more elegant than mine, I can withdraw my patch.\nAnyway, I want to avoid this feature being reverted.\nDo you have any ideas to fix the problem?\n\n\nThanks,\nTatsuro Yamada\n\n\n\n\n\n\n",
"msg_date": "Tue, 17 Sep 2019 11:01:28 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
},
{
"msg_contents": "On 2019-Sep-17, Tatsuro Yamada wrote:\n\n> On 2019/09/16 23:12, Alvaro Herrera wrote:\n\n> > Is this fix strictly necessary for pg12, or is this something that we\n> > can leave for pg13?\n> \n> Not only me but many DBA needs this progress report feature on PG12,\n> therefore I'm trying to fix the problem. If you send other patch to\n> fix the problem, and it is more elegant than mine, I can withdraw my patch.\n> Anyway, I want to avoid this feature being reverted.\n> Do you have any ideas to fix the problem?\n\nI committed a fix for the originally reported problem as da47e43dc32e in\nbranch REL_12_STABLE. Is that insufficient, and if so why?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Sep 2019 23:08:32 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
},
{
"msg_contents": "Hi Alvaro!\n\n>>> Is this fix strictly necessary for pg12, or is this something that we\n>>> can leave for pg13?\n>>\n>> Not only me but many DBA needs this progress report feature on PG12,\n>> therefore I'm trying to fix the problem. If you send other patch to\n>> fix the problem, and it is more elegant than mine, I can withdraw my patch.\n>> Anyway, I want to avoid this feature being reverted.\n>> Do you have any ideas to fix the problem?\n> \n> I committed a fix for the originally reported problem as da47e43dc32e in\n> branch REL_12_STABLE. Is that insufficient, and if so why?\n\n\nOoops, I misunderstood. I now realized you committed your patch to\nfix the problem. Thanks! I'll test it later. :)\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=da47e43dc32e3c5916396f0cbcfa974b371e4875\n\n\nThanks,\nTatsuro Yamada\n\n\n\n",
"msg_date": "Tue, 17 Sep 2019 11:34:21 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
},
{
"msg_contents": "Hi Alvaro!\n\n>>>> Is this fix strictly necessary for pg12, or is this something that we\n>>>> can leave for pg13?\n>>>\n>>> Not only me but many DBA needs this progress report feature on PG12,\n>>> therefore I'm trying to fix the problem. If you send other patch to\n>>> fix the problem, and it is more elegant than mine, I can withdraw my patch.\n>>> Anyway, I want to avoid this feature being reverted.\n>>> Do you have any ideas to fix the problem?\n>>\n>> I committed a fix for the originally reported problem as da47e43dc32e in\n>> branch REL_12_STABLE. Is that insufficient, and if so why?\n> \n> \n> Ooops, I misunderstood. I now realized you committed your patch to\n> fix the problem. Thanks! I'll test it later. :)\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=da47e43dc32e3c5916396f0cbcfa974b371e4875\n\n\nI tested your patch (da47e43d) and it works fine. Thanks! :)\nSo, my patch improving progress reporting API can leave for PG13.\n\n\n#Test scenario\n===================\n[Session #1]\nselect * from pg_stat_progress_cluster ; \\watch 0.0001\n\n[Session #2]\ncreate table hoge as select a from generate_series(1, 100000) a;\ncreate index ind_hoge1 on hoge(a);\ncreate index ind_hoge2 on hoge((a%2));\ncreate index ind_hoge3 on hoge((a%3));\ncreate index ind_hoge4 on hoge((a%4));\ncreate index ind_hoge5 on hoge((a%5));\ncluster hoge using ind_hoge1;\n===================\n\n#Test result\n===================\n22283|13593|postgres|16384|CLUSTER|initializing|0|0|0|0|0|0\n...\n22283|13593|postgres|16384|CLUSTER|rebuilding index|16387|100000|100000|0|0|0 <= Increasing from 0 to 5\n22283|13593|postgres|16384|CLUSTER|rebuilding index|16387|100000|100000|0|0|1\n22283|13593|postgres|16384|CLUSTER|rebuilding index|16387|100000|100000|0|0|2\n22283|13593|postgres|16384|CLUSTER|rebuilding index|16387|100000|100000|0|0|3\n22283|13593|postgres|16384|CLUSTER|rebuilding index|16387|100000|100000|0|0|4\n22283|13593|postgres|16384|CLUSTER|performing final cleanup|16387|100000|100000|0|0|5\n===================\n\n\nThanks,\nTatsuro Yamada\n\n\n\n\n",
"msg_date": "Tue, 17 Sep 2019 12:30:12 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 03:26:10PM +0900, Tattsu Yama wrote:\n> I should have explained the API changes more. I added cmdtype as a given\n> parameter for all functions (See below).\n> Therefore, I suppose that my patch is similar to the following fix as you\n> mentioned on -hackers.\n\nYes, that's an option I mentioned here, but it has drawbacks:\nhttps://www.postgresql.org/message-id/20190914024547.GB15406@paquier.xyz\n\nI have just looked at it again after a small rebase and there are\nissues with the design of your patch:\n- When aborting a transaction, we need to enforce a reset of the\ncommand ID used in st_progress_command to be PROGRESS_COMMAND_INVALID.\nUnfortunately, your patch does not consider the case where an error\nhappens while a command ID is set, causing a command to still be\ntracked with the next transactions of the session. Even worse, it\nprevents pgstat_progress_start_command() to be called again in this\ncase for another command.\n- CLUSTER can rebuild indexes, and we'd likely want to be able to\ntrack some of the information from CREATE INDEX for CLUSTER.\n\nThe second issue is perhaps fine as it is not really straight-forward\nto share the same progress phases across multiple commands, and we\ncould live without it for now, or require a follow-up patch to make\nthe information of CREATE INDEX available to CLUSTER.\n\nNow, the first issue is of another caliber and a no-go :(\n\nOn HEAD, pgstat_progress_end_command() has the limitation to not be\nable to stack multiple commands, so calling it in cascade has the\ndisadvantage to perhaps erase the progress state of a command (and it\nis not designed for that anyway), which is what happens with CLUSTER\nwhen reindex_index() starts a new progress report, but the simplicity\nof the current infrastructure is very safe when it comes to failure\nhandling, to make sure that a reset happens as long as the command ID\nis not invalid. Your patch makes that part unpredictable.\n--\nMichael",
"msg_date": "Tue, 17 Sep 2019 14:13:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
}
] |
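The thread above turns on the fact that each backend has a single, non-nestable progress slot: `pgstat_progress_start_command()` overwrites whatever was being tracked, and a missing reset on transaction abort leaves stale state behind for the next command. A toy model of that behavior (illustrative Python only; the names mimic, but are not, PostgreSQL's C API):

```python
# Toy single-slot progress tracker, sketching the two problems raised in
# the thread: nested starts clobber state, and end_command() must also
# run on abort or stale state leaks into the next command.

PROGRESS_COMMAND_INVALID = None

class BackendProgress:
    def __init__(self):
        self.command = PROGRESS_COMMAND_INVALID
        self.params = {}

    def start_command(self, command):
        # Assumes nothing is being tracked; a nested start silently
        # overwrites the outer command's state.
        self.command = command
        self.params = {}

    def update_param(self, key, value):
        self.params[key] = value

    def end_command(self):
        # Must be reached even on error, otherwise self.command stays set.
        self.command = PROGRESS_COMMAND_INVALID
        self.params = {}

slot = BackendProgress()
slot.start_command("CLUSTER")
slot.update_param("phase", "rebuilding index")

# The index rebuild starting its own report wipes the CLUSTER state:
slot.start_command("CREATE INDEX")
print(slot.command)  # -> CREATE INDEX
```

This is only a model of the failure mode being debated, not how the committed fix (da47e43dc32e) works.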
[
{
"msg_contents": "Hi,\n\nso I got two questions:\n\n1) I have multiple Postgresql Standby servers replicating over WAN, and \nI would like to reduce that to a single connection.\nIs there a utility that can be put in between and store the wal files \nfrom the primary and provide it to the standby server, even if they are \ndelayed by > 1 day or more (provided there is storage?)\n\n2) These standby servers sometimes run very long queries (2 - 3 hours) \nand at some point the replication stops, because I guess some row \nversion which are used are removed on the master.\nI do have hot_standby_feedback \"on\", why does this still happen, \nshouldn't this prevent the removal on the primary and allow replication \nto continue even if queries are active?\n\nThanks\nThomas\n\n\n",
"msg_date": "Sat, 14 Sep 2019 18:03:34 +0200",
"msg_from": "\"Thomas Rosenstein\" <thomas.rosenstein@creamfinance.com>",
"msg_from_op": true,
"msg_subject": "Standby Replication and Replication Delay"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 06:03:34PM +0200, Thomas Rosenstein wrote:\n>Hi,\n>\n>so I got two questions:\n>\n>1) I have multiple Postgresql Standby servers replicating over WAN, \n>and I would like to reduce that to a single connection.\n\nPresumably the standbys are all located on the same LAN / in the same\nDC? Why don't you use cascading replication, then? I.e. one standby\nconnecting to the primary, the rest connecting to the first standby.\n\nYou can also archive the WAL on the first standby (since 9.5) and the\nother standby nodes can get the WAL from the local WAL.\n\n>Is there a utility that can be put in between and store the wal files \n>from the primary and provide it to the standby server, even if they \n>are delayed by > 1 day or more (provided there is storage?)\n>\n\nNot sure what utility you have in mind. The first standby can act as a\nlocal primary, creating a local WAL archive etc.\n\n>2) These standby servers sometimes run very long queries (2 - 3 hours) \n>and at some point the replication stops, because I guess some row \n>version which are used are removed on the master.\n>I do have hot_standby_feedback \"on\", why does this still happen, \n>shouldn't this prevent the removal on the primary and allow \n>replication to continue even if queries are active?\n>\n\nWell, you haven't really told us what \"replication stops\" means.\nhot_standby_feedback does prevent aborts of queries on the standby,\nit should not stop replication AFAIK.\n\nMaybe show us the error messages, tell us which PostgreSQL version are\nyou actually using, etc.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 14 Sep 2019 21:16:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby Replication and Replication Delay"
},
{
"msg_contents": "Hi Tomas,\n\nI'm using Postgresql 10.10 on the standbys and 10.5 on the primary.\n\nOn 14 Sep 2019, at 21:16, Tomas Vondra wrote:\n\n> On Sat, Sep 14, 2019 at 06:03:34PM +0200, Thomas Rosenstein wrote:\n>> Hi,\n>>\n>> so I got two questions:\n>>\n>> 1) I have multiple Postgresql Standby servers replicating over WAN, \n>> and I would like to reduce that to a single connection.\n>\n> Presumably the standbys are all located on the same LAN / in the same\n> DC? Why don't you use cascading replication, then? I.e. one standby\n> connecting to the primary, the rest connecting to the first standby.\n>\n> You can also archive the WAL on the first standby (since 9.5) and the\n> other standby nodes can get the WAL from the local WAL.\n\nYes they are on the same LAN, but if a long running query is executed on \none of them, then the replication lag increases and all of the standbys \nalso increase their replication delay.\nI don't have the free resources to just run a standby with a full \ndataset.\n\nThe wal is archived from the primary anyways, but I would like to have \nto streaming replication as a backup to the wal archival. (and the \nstandbys can restore from that archive)\n\n>\n>> Is there a utility that can be put in between and store the wal files \n>> from the primary and provide it to the standby server, even if they \n>> are delayed by > 1 day or more (provided there is storage?)\n>>\n>\n> Not sure what utility you have in mind. The first standby can act as a\n> local primary, creating a local WAL archive etc.\n\nSee above, Wal archives are anyways available, the idea is as a \nsecondary backup, in case the wal archival lags behind (i.e. issue with \nstorage or the server where the wal archival happens)\n\n>\n>> 2) These standby servers sometimes run very long queries (2 - 3 \n>> hours) and at some point the replication stops, because I guess some \n>> row version which are used are removed on the master.\n>> I do have hot_standby_feedback \"on\", why does this still happen, \n>> shouldn't this prevent the removal on the primary and allow \n>> replication to continue even if queries are active?\n>>\n>\n> Well, you haven't really told us what \"replication stops\" does means.\n> hot_standby_feedback does prevent aborts of of queries on the standby,\n> it should not stop replication AFAIK.\n>\n> Maybe show us the error messages, tell us which PostgreSQL version are\n> you actually using, etc.\n\nReplication stops means that the standby servers do not replay the WAL \narchive and the replication lag increases.\nThere is no error message.\n\nI have also set:\n\nmax_standby_archive_delay = -1\nmax_standby_streaming_delay = -1\n\n\n>\n>\n> regards\n>\n> -- \n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 14 Sep 2019 21:26:26 +0200",
"msg_from": "\"Thomas Rosenstein\" <thomas.rosenstein@creamfinance.com>",
"msg_from_op": true,
"msg_subject": "Re: Standby Replication and Replication Delay"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 09:26:26PM +0200, Thomas Rosenstein wrote:\n>Hi Tomas,\n>\n>I'm using Postgresql 10.10 on the standbys and 10.5 on the primary.\n>\n>On 14 Sep 2019, at 21:16, Tomas Vondra wrote:\n>\n>>On Sat, Sep 14, 2019 at 06:03:34PM +0200, Thomas Rosenstein wrote:\n>>>Hi,\n>>>\n>>>so I got two questions:\n>>>\n>>>1) I have multiple Postgresql Standby servers replicating over \n>>>WAN, and I would like to reduce that to a single connection.\n>>\n>>Presumably the standbys are all located on the same LAN / in the same\n>>DC? Why don't you use cascading replication, then? I.e. one standby\n>>connecting to the primary, the rest connecting to the first standby.\n>>\n>>You can also archive the WAL on the first standby (since 9.5) and the\n>>other standby nodes can get the WAL from the local WAL.\n>\n>Yes they are on the same LAN, but if a long running query is executed \n>on one of them, then the replication lag increases and all of the \n>standbys also increase their replication delay.\n>I don't have the free resources to just run a standby with a full \n>dataset.\n>\n\nBut each existing standby already is a full dataset, the idea was to\nreuse one of those.\n\n>The wal is archived from the primary anyways, but I would like to have \n>to streaming replication as a backup to the wal archival. (and the \n>standbys can restore from that archive)\n>\n\nTBH it's not quite clear to me what problem you're trying to solve. If\nyou want to reduce the number of WAN connections to the primary, you can\nhave a single primary standby connected to it. And then you can either\nconnect the remaining standbys to the first one using streaming, or use\nrecovery from the archive. Also, WAL archive is usually backup for\nstreaming, not the other way around.\n\n>>\n>>>Is there a utility that can be put in between and store the wal \n>>>files from the primary and provide it to the standby server, even \n>>>if they are delayed by > 1 day or more (provided there is \n>>>storage?)\n>>>\n>>\n>>Not sure what utility you have in mind. The first standby can act as a\n>>local primary, creating a local WAL archive etc.\n>\n>See above, Wal archives are anyways available, the idea is as a \n>secondary backup, in case the wal archival lags behind (i.e. issue \n>with storage or the server where the wal archival happens)\n>\n\nWell, as I said, it's usually the other way around - WAL archival is\nconsidered backup for the streaming, in case the standby falls behind\nfor some reason.\n\n>>\n>>>2) These standby servers sometimes run very long queries (2 - 3 \n>>>hours) and at some point the replication stops, because I guess \n>>>some row version which are used are removed on the master.\n>>>I do have hot_standby_feedback \"on\", why does this still happen, \n>>>shouldn't this prevent the removal on the primary and allow \n>>>replication to continue even if queries are active?\n>>>\n>>\n>>Well, you haven't really told us what \"replication stops\" does means.\n>>hot_standby_feedback does prevent aborts of of queries on the standby,\n>>it should not stop replication AFAIK.\n>>\n>>Maybe show us the error messages, tell us which PostgreSQL version are\n>>you actually using, etc.\n>\n>Replication stops means that the standby servers do not replay the WAL \n>archive and the replication lag increases.\n>There is no error message.\n>\n>I have also set:\n>\n>max_standby_archive_delay = -1\n>max_standby_streaming_delay = -1\n>\n\nNot sure.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 14 Sep 2019 22:08:24 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby Replication and Replication Delay"
},
{
"msg_contents": "On 14 Sep 2019, at 22:08, Tomas Vondra wrote:\n\n> On Sat, Sep 14, 2019 at 09:26:26PM +0200, Thomas Rosenstein wrote:\n>> Hi Tomas,\n>>\n>> I'm using Postgresql 10.10 on the standbys and 10.5 on the primary.\n>>\n>> On 14 Sep 2019, at 21:16, Tomas Vondra wrote:\n>>\n>>> On Sat, Sep 14, 2019 at 06:03:34PM +0200, Thomas Rosenstein wrote:\n>>>> Hi,\n>>>>\n>>>> so I got two questions:\n>>>>\n>>>> 1) I have multiple Postgresql Standby servers replicating over WAN, \n>>>> and I would like to reduce that to a single connection.\n>>>\n>>> Presumably the standbys are all located on the same LAN / in the \n>>> same\n>>> DC? Why don't you use cascading replication, then? I.e. one standby\n>>> connecting to the primary, the rest connecting to the first standby.\n>>>\n>>> You can also archive the WAL on the first standby (since 9.5) and \n>>> the\n>>> other standby nodes can get the WAL from the local WAL.\n>>\n>> Yes they are on the same LAN, but if a long running query is executed \n>> on one of them, then the replication lag increases and all of the \n>> standbys also increase their replication delay.\n>> I don't have the free resources to just run a standby with a full \n>> dataset.\n>>\n>\n> But each existing standby already is a full dataset, the idea was to\n> reuse one of those.\n>\n>> The wal is archived from the primary anyways, but I would like to \n>> have to streaming replication as a backup to the wal archival. (and \n>> the standbys can restore from that archive)\n>>\n>\n> TBH it's not quite clear to me what problem you're trying to solve. If\n> you want to reduce the number of WAN connections to the primary, you \n> can\n> have a single primary standby connected to it. And then you can either\n> connect the remaining standbys to the first one using streaming, or \n> use\n> recovery from the archive. 
Also, WAL archive is usually backup for\n> streaming, not the other way around.\n>\n>>>\n>>>> Is there a utility that can be put in between and store the wal \n>>>> files from the primary and provide it to the standby server, even \n>>>> if they are delayed by > 1 day or more (provided there is storage?)\n>>>>\n>>>\n>>> Not sure what utility you have in mind. The first standby can act as \n>>> a\n>>> local primary, creating a local WAL archive etc.\n>>\n>> See above, Wal archives are anyways available, the idea is as a \n>> secondary backup, in case the wal archival lags behind (i.e. issue \n>> with storage or the server where the wal archival happens)\n>>\n>\n> Well, as I said, it's usually the other way around - WAL archival is\n> considered backup for the streaming, in case the standby falls behind\n> for some reason.\n>\n\nWell yes, first the streaming replication should transfer, and if that \nbreaks the WALs should be restored from the archive.\n\nBUT if the replication lag increases too much, then the primary won't \nhave the WALs anymore due to keep wal_keep_segments, then you are forced \nto load it from the archive, if the archive for some reason it slow / \ndown / whatever you are screwed.\n\n---\n\nIf queries are executed on the one standby that is the proxy, then the \nreplication delay incurred on this one will also be incurred on the \nothers, if I replicate directly from primary they are independent.\n\nThe software should just keep the wals to keep the standbys independent, \nbut don't keep the data (> 2 TB)\n\n\n>>>\n>>>> 2) These standby servers sometimes run very long queries (2 - 3 \n>>>> hours) and at some point the replication stops, because I guess \n>>>> some row version which are used are removed on the master.\n>>>> I do have hot_standby_feedback \"on\", why does this still happen, \n>>>> shouldn't this prevent the removal on the primary and allow \n>>>> replication to continue even if queries are active?\n>>>>\n>>>\n>>> Well, you haven't 
really told us what \"replication stops\" does \n>>> means.\n>>> hot_standby_feedback does prevent aborts of of queries on the \n>>> standby,\n>>> it should not stop replication AFAIK.\n>>>\n>>> Maybe show us the error messages, tell us which PostgreSQL version \n>>> are\n>>> you actually using, etc.\n>>\n>> Replication stops means that the standby servers do not replay the \n>> WAL archive and the replication lag increases.\n>> There is no error message.\n>>\n>> I have also set:\n>>\n>> max_standby_archive_delay = -1\n>> max_standby_streaming_delay = -1\n>>\n>\n> Not sure.\n\nSo, anyone else an idea why this happens, or how to track it down? \nReplication just stops at a point in time until all queries are \ncanceled.\n\n>\n>\n> regards\n>\n> -- \n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 14 Sep 2019 22:14:53 +0200",
"msg_from": "\"Thomas Rosenstein\" <thomas.rosenstein@creamfinance.com>",
"msg_from_op": true,
"msg_subject": "Re: Standby Replication and Replication Delay"
}
] |
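The replication thread above hinges on WAL retention: once a lagging standby falls more than `wal_keep_segments` behind, the primary has already recycled the segments the standby needs, and the standby must fetch them from the WAL archive instead of streaming. A minimal sketch of that failure mode (an illustrative model, not PostgreSQL code):

```python
from collections import deque

# Toy model: the primary retains only the last wal_keep_segments WAL
# segments; a standby that has fallen further behind cannot stream and
# has to fall back to restore_command / the archive.

class Primary:
    def __init__(self, wal_keep_segments):
        # deque with maxlen drops the oldest segment on overflow,
        # like segment recycling on the primary.
        self.retained = deque(maxlen=wal_keep_segments)
        self.next_seg = 0

    def write_segment(self):
        self.retained.append(self.next_seg)
        self.next_seg += 1

    def can_stream_from(self, seg):
        return seg in self.retained

primary = Primary(wal_keep_segments=3)
for _ in range(10):          # segments 0..9 written; only 7, 8, 9 retained
    primary.write_segment()

standby_replayed_up_to = 5   # a standby stalled behind a long query
needs = standby_replayed_up_to + 1
source = "stream" if primary.can_stream_from(needs) else "archive"
print(source)                # segment 6 was recycled -> "archive"
```

If the archive is also slow or unavailable at that point, the standby is stuck, which is the scenario the poster is worried about.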
[
{
"msg_contents": "Folks,\n\nPlease find attached a couple of patches intended to $subject.\n\nThis patch set cut the time to copy ten million rows of randomly sized\nint8s (10 of them) by about a third, so at least for that case, it's\npretty decent.\n\nThanks to Andrew Gierth for lots of patient help.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 15 Sep 2019 09:18:49 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Efficient output for integer types"
},
{
"msg_contents": "\n\n> On 15 Sep 2019, at 12:18, David Fetter <david@fetter.org> wrote:\n> \n> Please find attached a couple of patches intended to $subject.\n> \n> This patch set cut the time to copy ten million rows of randomly sized\n> int8s (10 of them) by about a third, so at least for that case, it's\n> pretty decent.\n\nHi! Looks cool.\n\nJust curious if for any fixed base and square here\n\n+\t\twhile(uvalue >= base)\n \t\t{\n+\t\t\tconst int i = (uvalue % square) * 2;\n+\t\t\tuvalue /= square;\n+\t\t\tvallen += 2;\n+\t\t\tmemcpy(convert + sizeof(convert) - vallen, digits + i, 2);\n+\t\t}\n\ncompiler will have a chance to avoid idiv instruction?\nMaybe few specialized functions could work better than generic algorithm?\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 15 Sep 2019 14:06:29 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Sun, Sep 15, 2019 at 02:06:29PM +0500, Andrey Borodin wrote:\n> > 15 сент. 2019 г., в 12:18, David Fetter <david@fetter.org> написал(а):\n> > \n> > Please find attached a couple of patches intended to $subject.\n> > \n> > This patch set cut the time to copy ten million rows of randomly sized\n> > int8s (10 of them) by about a third, so at least for that case, it's\n> > pretty decent.\n> \n> Hi! Looks cool.\n> \n> Just curious if for any fixed base and square here\n> \n> +\t\twhile(uvalue >= base)\n> \t\t{\n> +\t\t\tconst int i = (uvalue % square) * 2;\n> +\t\t\tuvalue /= square;\n> +\t\t\tvallen += 2;\n> +\t\t\tmemcpy(convert + sizeof(convert) - vallen, digits + i, 2);\n> +\t\t}\n> \n> compiler will have a chance to avoid idiv instruction?\n\nThat could very well be. I took the idea (and most of the code) from\nthe Ryū implementation Andrew Gierth committed for 12.\n\n> Maybe few specialized functions could work better than generic\n> algorithm?\n\nCould be. What do you have in mind? I'm guessing that the ones for\ndecimals, that being both the most common case and the least obvious\nas to how to optimize, would give the most benefit.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 15 Sep 2019 18:12:03 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
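The loop Andrey quotes divides by `square` (base squared) and copies two digits per iteration out of a 200-byte lookup table, halving the number of divisions compared with emitting one digit at a time. A rough Python rendering of that two-digits-at-a-time idea (illustrative only; the actual patch is C code, and details of the real implementation may differ):

```python
# "00" "01" ... "99": each value 0..99 maps to its two-character pair,
# mirroring the DIGIT_TABLE used in the patch under discussion.
DIGIT_TABLE = "".join(f"{i:02d}" for i in range(100))

def u32_to_str(value):
    """Convert a non-negative integer to decimal, two digits per division."""
    buf = []
    while value >= 100:
        value, rem = divmod(value, 100)   # one division yields two digits
        i = rem * 2
        buf.append(DIGIT_TABLE[i:i + 2])
    # At most two digits remain; avoid emitting a leading zero.
    if value >= 10:
        buf.append(DIGIT_TABLE[value * 2:value * 2 + 2])
    else:
        buf.append(str(value))
    return "".join(reversed(buf))

print(u32_to_str(1234567))  # -> 1234567
```

In C the same shape lets the compiler turn the division by a constant 100 into a multiply-and-shift, which is the `idiv` avoidance Andrey is asking about.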
{
"msg_contents": "On Sun, Sep 15, 2019 at 09:18:49AM +0200, David Fetter wrote:\n> Folks,\n> \n> Please find attached a couple of patches intended to $subject.\n> \n> This patch set cut the time to copy ten million rows of randomly sized\n> int8s (10 of them) by about a third, so at least for that case, it's\n> pretty decent.\n\nAdded int4 output, removed the sprintf stuff, as it didn't seem to\nhelp in any cases I was testing.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 17 Sep 2019 08:55:05 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 08:55:05AM +0200, David Fetter wrote:\n> On Sun, Sep 15, 2019 at 09:18:49AM +0200, David Fetter wrote:\n> > Folks,\n> > \n> > Please find attached a couple of patches intended to $subject.\n> > \n> > This patch set cut the time to copy ten million rows of randomly sized\n> > int8s (10 of them) by about a third, so at least for that case, it's\n> > pretty decent.\n> \n> Added int4 output, removed the sprintf stuff, as it didn't seem to\n> help in any cases I was testing.\n\nFound a couple of \"whiles\" that should have been \"ifs.\"\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 17 Sep 2019 09:01:57 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 09:01:57AM +0200, David Fetter wrote:\n> On Tue, Sep 17, 2019 at 08:55:05AM +0200, David Fetter wrote:\n> > On Sun, Sep 15, 2019 at 09:18:49AM +0200, David Fetter wrote:\n> > > Folks,\n> > > \n> > > Please find attached a couple of patches intended to $subject.\n> > > \n> > > This patch set cut the time to copy ten million rows of randomly sized\n> > > int8s (10 of them) by about a third, so at least for that case, it's\n> > > pretty decent.\n> > \n> > Added int4 output, removed the sprintf stuff, as it didn't seem to\n> > help in any cases I was testing.\n> \n> Found a couple of \"whiles\" that should have been \"ifs.\"\n\nFactored out some inefficient functions and made the guts use the more\nefficient function.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 18 Sep 2019 05:42:01 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 05:42:01AM +0200, David Fetter wrote:\n> On Tue, Sep 17, 2019 at 09:01:57AM +0200, David Fetter wrote:\n> > On Tue, Sep 17, 2019 at 08:55:05AM +0200, David Fetter wrote:\n> > > On Sun, Sep 15, 2019 at 09:18:49AM +0200, David Fetter wrote:\n> > > > Folks,\n> > > > \n> > > > Please find attached a couple of patches intended to $subject.\n> > > > \n> > > > This patch set cut the time to copy ten million rows of randomly sized\n> > > > int8s (10 of them) by about a third, so at least for that case, it's\n> > > > pretty decent.\n> > > \n> > > Added int4 output, removed the sprintf stuff, as it didn't seem to\n> > > help in any cases I was testing.\n> > \n> > Found a couple of \"whiles\" that should have been \"ifs.\"\n> \n> Factored out some inefficient functions and made the guts use the more\n> efficient function.\n\nFix copy-paste-o that introduced some unneeded 64-bit math.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 18 Sep 2019 07:51:42 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 07:51:42AM +0200, David Fetter wrote:\n> On Wed, Sep 18, 2019 at 05:42:01AM +0200, David Fetter wrote:\n> > On Tue, Sep 17, 2019 at 09:01:57AM +0200, David Fetter wrote:\n> > > On Tue, Sep 17, 2019 at 08:55:05AM +0200, David Fetter wrote:\n> > > > On Sun, Sep 15, 2019 at 09:18:49AM +0200, David Fetter wrote:\n> > > > > Folks,\n> > > > > \n> > > > > Please find attached a couple of patches intended to $subject.\n> > > > > \n> > > > > This patch set cut the time to copy ten million rows of randomly sized\n> > > > > int8s (10 of them) by about a third, so at least for that case, it's\n> > > > > pretty decent.\n> > > > \n> > > > Added int4 output, removed the sprintf stuff, as it didn't seem to\n> > > > help in any cases I was testing.\n> > > \n> > > Found a couple of \"whiles\" that should have been \"ifs.\"\n> > \n> > Factored out some inefficient functions and made the guts use the more\n> > efficient function.\n> \n> Fix copy-paste-o that introduced some unneeded 64-bit math.\n\nRemoved static annotation that shouldn't have been present.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 18 Sep 2019 08:26:35 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "Hello.\n\nAt Wed, 18 Sep 2019 05:42:01 +0200, David Fetter <david@fetter.org> wrote in <20190918034201.GX31596@fetter.org>\n> On Tue, Sep 17, 2019 at 09:01:57AM +0200, David Fetter wrote:\n> > On Tue, Sep 17, 2019 at 08:55:05AM +0200, David Fetter wrote:\n> > > On Sun, Sep 15, 2019 at 09:18:49AM +0200, David Fetter wrote:\n> > > > Folks,\n> > > > \n> > > > Please find attached a couple of patches intended to $subject.\n> > > > \n> > > > This patch set cut the time to copy ten million rows of randomly sized\n> > > > int8s (10 of them) by about a third, so at least for that case, it's\n> > > > pretty decent.\n> > > \n> > > Added int4 output, removed the sprintf stuff, as it didn't seem to\n> > > help in any cases I was testing.\n> > \n> > Found a couple of \"whiles\" that should have been \"ifs.\"\n> \n> Factored out some inefficient functions and made the guts use the more\n> efficient function.\n\nI'm not sure this is on the KISS principle, but looked it and\nhave several random comments.\n\n\n+numutils.o: CFLAGS += $(PERMIT_DECLARATION_AFTER_STATEMENT)\n\nI don't think that we are allowing that as project coding\npolicy. It seems to have been introduced only to accept external\ncode as-is.\n\n\n- char str[23]; /* sign, 21 digits and '\\0' */\n+ char str[MAXINT8LEN];\n\nIt's uneasy that MAXINT8LEN contains tailling NUL. MAXINT8BUFLEN\ncan be so. I think MAXINT8LEN should be 20 and the definition\nshould be str[MAXINT8LEN + 1].\n\n\n\n+static const char DIGIT_TABLE[200] = {\n+\t'0', '0', '0', '1', '0', '2', '0', '3', '0', '4', '0', '5', '0', '6', '0', '7', '0', '8', '0', '9',\n\nWouldn't it be simpler if it were defined as a constant string?\n\nstatic const char DIGIT_TABLE[201] =\n \"000102030405....19\"\n \"202122232425....39\"\n..\n\n\n+pg_ltoa_n(int32 value, char *a)\n...\n+\t/* Compute the result string. */\n+\twhile (value >= 100000000)\n\nWe have only two degits above the value. Isn't the stuff inside\nthe while a waste of cycles?\n\n\n+\t\t/* Expensive 64-bit division. Optimize? */\n\nI believe compiler treats such trivial optimizations. (concretely\nconverts into shifts and subtractons if faster.)\n\n\n+\t\tmemcpy(a + olength - i - 2, DIGIT_TABLE + c0, 2);\n\nMaybe it'd be easy to read if 'a + olength - i' is a single variable.\n\n\n+\ti += adjust;\n+\treturn i;\n\nIf 'a + olength - i' is replaced with a variable, the return\nstatement is replacable with \"return olength + adjust;\".\n\n\n+\treturn t + (v >= PowersOfTen[t]);\n\nI think it's better that if it were 't - (v < POT[t]) + 1; /*\nlog10(v) + 1 */'. At least we need an explanation of the\ndifference. (I'didn't checked the algorithm is truely right,\nthough.)\n\n\n> void\n> pg_lltoa(int64 value, char *a)\n> {\n..\n> \t\tmemcpy(a, \"-9223372036854775808\", 21);\n..\n>\t\tmemcpy(a, \"0\", 2);\n\nThe lines need a comment like \"/* length contains trailing '\\0'\n*/\"\n\n\n+\tif (value >= 0)\n...\n+\telse\n+ {\n+\t\tif (value == PG_INT32_MIN)\n+\t\t\tmemcpy(str, \"-2147483648\", 11);\n+\t\t\treturn str + 11;\n> \t\t}\n+\t\t*str++ = '-';\n+\t\treturn pg_ltostr_zeropad(str, -value, minwidth - 1);\n\nIf then block of the if statement were (values < 0), we won't\nneed to reenter the functaion.\n\n\n+\t\tlen = pg_ltoa_n(value, str);\n+\t\tif (minwidth <= len)\n+\t\t\treturn str + len;\n+\n+\t\tmemmove(str + minwidth - len, str, len);\n\nIf the function had the parameters str with the room only for two\ndigits and a NUL, 2 as minwidth but 1000 as value, the function\nwould overrun the buffer. The original function just ignores\noverflowing digits.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 18 Sep 2019 16:27:46 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 04:27:46PM +0900, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Wed, 18 Sep 2019 05:42:01 +0200, David Fetter <david@fetter.org> wrote in <20190918034201.GX31596@fetter.org>\n> > On Tue, Sep 17, 2019 at 09:01:57AM +0200, David Fetter wrote:\n> > > On Tue, Sep 17, 2019 at 08:55:05AM +0200, David Fetter wrote:\n> > > > On Sun, Sep 15, 2019 at 09:18:49AM +0200, David Fetter wrote:\n> > > > > Folks,\n> > > > > \n> > > > > Please find attached a couple of patches intended to $subject.\n> > > > > \n> > > > > This patch set cut the time to copy ten million rows of randomly sized\n> > > > > int8s (10 of them) by about a third, so at least for that case, it's\n> > > > > pretty decent.\n> > > > \n> > > > Added int4 output, removed the sprintf stuff, as it didn't seem to\n> > > > help in any cases I was testing.\n> > > \n> > > Found a couple of \"whiles\" that should have been \"ifs.\"\n> > \n> > Factored out some inefficient functions and made the guts use the more\n> > efficient function.\n> \n> I'm not sure this is on the KISS principle, but looked it and\n> have several random comments.\n> \n> +numutils.o: CFLAGS += $(PERMIT_DECLARATION_AFTER_STATEMENT)\n> \n> I don't think that we are allowing that as project coding\n> policy. It seems to have been introduced only to accept external\n> code as-is.\n\nChanged to fit current policy.\n\n> - char str[23]; /* sign, 21 digits and '\\0' */\n> + char str[MAXINT8LEN];\n> \n> It's uneasy that MAXINT8LEN contains tailling NUL. MAXINT8BUFLEN\n> can be so. I think MAXINT8LEN should be 20 and the definition\n> should be str[MAXINT8LEN + 1].\n\nDone.\n\n> +static const char DIGIT_TABLE[200] = {\n> +\t'0', '0', '0', '1', '0', '2', '0', '3', '0', '4', '0', '5', '0', '6', '0', '7', '0', '8', '0', '9',\n> \n> Wouldn't it be simpler if it were defined as a constant string?\n> \n> static const char DIGIT_TABLE[201] =\n> \"000102030405....19\"\n> \"202122232425....39\"\n> ..\n\nI thought this might be even clearer:\n\n\"00\" \"01\" \"02\" \"03\" \"04\" \"05\" \"06\" \"07\" \"08\" \"09\"\n\"10\" \"11\" \"12\" \"13\" \"14\" \"15\" \"16\" \"17\" \"18\" \"19\"\n\"20\" \"21\" \"22\" \"23\" \"24\" \"25\" \"26\" \"27\" \"28\" \"29\"\n\"30\" \"31\" \"32\" \"33\" \"34\" \"35\" \"36\" \"37\" \"38\" \"39\"\n\"40\" \"41\" \"42\" \"43\" \"44\" \"45\" \"46\" \"47\" \"48\" \"49\"\n\"50\" \"51\" \"52\" \"53\" \"54\" \"55\" \"56\" \"57\" \"58\" \"59\"\n\"60\" \"61\" \"62\" \"63\" \"64\" \"65\" \"66\" \"67\" \"68\" \"69\"\n\"70\" \"71\" \"72\" \"73\" \"74\" \"75\" \"76\" \"77\" \"78\" \"79\"\n\"80\" \"81\" \"82\" \"83\" \"84\" \"85\" \"86\" \"87\" \"88\" \"89\"\n\"90\" \"91\" \"92\" \"93\" \"94\" \"95\" \"96\" \"97\" \"98\" \"99\";\n\n> +pg_ltoa_n(int32 value, char *a)\n> ...\n> +\t/* Compute the result string. */\n> +\twhile (value >= 100000000)\n> \n> We have only two degits above the value. Isn't the stuff inside\n> the while a waste of cycles?\n\nChanged the while to an if.\n\n> +\t\t/* Expensive 64-bit division. Optimize? */\n> \n> I believe compiler treats such trivial optimizations. (concretely\n> converts into shifts and subtractons if faster.)\n\nComments removed.\n\n> +\t\tmemcpy(a + olength - i - 2, DIGIT_TABLE + c0, 2);\n> \n> Maybe it'd be easy to read if 'a + olength - i' is a single variable.\n\nDone.\n\n> +\ti += adjust;\n> +\treturn i;\n> \n> If 'a + olength - i' is replaced with a variable, the return\n> statement is replacable with \"return olength + adjust;\".\n\nI'm not sure I understand this.\n\n> +\treturn t + (v >= PowersOfTen[t]);\n> \n> I think it's better that if it were 't - (v < POT[t]) + 1; /*\n> log10(v) + 1 */'. At least we need an explanation of the\n> difference. (I'didn't checked the algorithm is truely right,\n> though.)\n\nComments added.\n\n> > void\n> > pg_lltoa(int64 value, char *a)\n> > {\n> ..\n> > \t\tmemcpy(a, \"-9223372036854775808\", 21);\n> ..\n> >\t\tmemcpy(a, \"0\", 2);\n> \n> The lines need a comment like \"/* length contains trailing '\\0'\n> */\"\n\nComments added.\n\n> +\tif (value >= 0)\n> ...\n> +\telse\n> + {\n> +\t\tif (value == PG_INT32_MIN)\n> +\t\t\tmemcpy(str, \"-2147483648\", 11);\n> +\t\t\treturn str + 11;\n> > \t\t}\n> +\t\t*str++ = '-';\n> +\t\treturn pg_ltostr_zeropad(str, -value, minwidth - 1);\n> \n> If then block of the if statement were (values < 0), we won't\n> need to reenter the functaion.\n\nThis is a tail-call recursion, so it's probably optimized already.\n\n> +\t\tlen = pg_ltoa_n(value, str);\n> +\t\tif (minwidth <= len)\n> +\t\t\treturn str + len;\n> +\n> +\t\tmemmove(str + minwidth - len, str, len);\n> \n> If the function had the parameters str with the room only for two\n> digits and a NUL, 2 as minwidth but 1000 as value, the function\n> would overrun the buffer. The original function just ignores\n> overflowing digits.\n\nI believe the original was incorrect.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Fri, 20 Sep 2019 21:14:51 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 09:14:51PM +0200, David Fetter wrote:\n> On Wed, Sep 18, 2019 at 04:27:46PM +0900, Kyotaro Horiguchi wrote:\n> > Hello.\n> > \n> > At Wed, 18 Sep 2019 05:42:01 +0200, David Fetter <david@fetter.org> wrote in <20190918034201.GX31596@fetter.org>\n> > > On Tue, Sep 17, 2019 at 09:01:57AM +0200, David Fetter wrote:\n> > > > On Tue, Sep 17, 2019 at 08:55:05AM +0200, David Fetter wrote:\n> > > > > On Sun, Sep 15, 2019 at 09:18:49AM +0200, David Fetter wrote:\n> > > > > > Folks,\n> > > > > > \n> > > > > > Please find attached a couple of patches intended to $subject.\n> > > > > > \n> > > > > > This patch set cut the time to copy ten million rows of randomly sized\n> > > > > > int8s (10 of them) by about a third, so at least for that case, it's\n> > > > > > pretty decent.\n> > > > > \n> > > > > Added int4 output, removed the sprintf stuff, as it didn't seem to\n> > > > > help in any cases I was testing.\n> > > > \n> > > > Found a couple of \"whiles\" that should have been \"ifs.\"\n> > > \n> > > Factored out some inefficient functions and made the guts use the more\n> > > efficient function.\n> > \n> > I'm not sure this is on the KISS principle, but looked it and\n> > have several random comments.\n> > \n> > +numutils.o: CFLAGS += $(PERMIT_DECLARATION_AFTER_STATEMENT)\n\nOops. Missed a few.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Fri, 20 Sep 2019 23:09:16 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 11:09:16PM +0200, David Fetter wrote:\n> On Fri, Sep 20, 2019 at 09:14:51PM +0200, David Fetter wrote:\n> > On Wed, Sep 18, 2019 at 04:27:46PM +0900, Kyotaro Horiguchi wrote:\n> > > Hello.\n> > > \n> > > At Wed, 18 Sep 2019 05:42:01 +0200, David Fetter <david@fetter.org> wrote in <20190918034201.GX31596@fetter.org>\n> > > > On Tue, Sep 17, 2019 at 09:01:57AM +0200, David Fetter wrote:\n> > > > > On Tue, Sep 17, 2019 at 08:55:05AM +0200, David Fetter wrote:\n> > > > > > On Sun, Sep 15, 2019 at 09:18:49AM +0200, David Fetter wrote:\n> > > > > > > Folks,\n> > > > > > > \n> > > > > > > Please find attached a couple of patches intended to $subject.\n> > > > > > > \n> > > > > > > This patch set cut the time to copy ten million rows of randomly sized\n> > > > > > > int8s (10 of them) by about a third, so at least for that case, it's\n> > > > > > > pretty decent.\n> > > > > > \n> > > > > > Added int4 output, removed the sprintf stuff, as it didn't seem to\n> > > > > > help in any cases I was testing.\n> > > > > \n> > > > > Found a couple of \"whiles\" that should have been \"ifs.\"\n> > > > \n> > > > Factored out some inefficient functions and made the guts use the more\n> > > > efficient function.\n> > > \n> > > I'm not sure this is on the KISS principle, but looked it and\n> > > have several random comments.\n> > > \n> > > +numutils.o: CFLAGS += $(PERMIT_DECLARATION_AFTER_STATEMENT)\n> \n> Oops. Missed a few.\n\nD'oh! Wrong patch.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Fri, 20 Sep 2019 23:18:13 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": ">>>>> \"David\" == David Fetter <david@fetter.org> writes:\n\n David> +\t/* Compute the result string. */\n David> +\tif (value >= 100000000)\n David> +\t{\n David> +\t\tconst\tuint32 value2 = value % 100000000;\n David> +\n David> +\t\tconst uint32 c = value2 % 10000;\n David> +\t\tconst uint32 d = value2 / 10000;\n David> +\t\tconst uint32 c0 = (c % 100) << 1;\n David> +\t\tconst uint32 c1 = (c / 100) << 1;\n David> +\t\tconst uint32 d0 = (d % 100) << 1;\n David> +\t\tconst uint32 d1 = (d / 100) << 1;\n David> +\n David> +\t\tchar *pos = a + olength - i;\n David> +\n David> +\t\tvalue /= 100000000;\n David> +\n David> +\t\tmemcpy(pos - 2, DIGIT_TABLE + c0, 2);\n David> +\t\tmemcpy(pos - 4, DIGIT_TABLE + c1, 2);\n David> +\t\tmemcpy(pos - 6, DIGIT_TABLE + d0, 2);\n David> +\t\tmemcpy(pos - 8, DIGIT_TABLE + d1, 2);\n David> +\t\ti += 8;\n David> +\t}\n\nFor the 32-bit case, there's no point in doing an 8-digit divide\nspecially, it doesn't save any time. It's sufficient to just change\n\n David> +\tif (value >= 10000)\n\nto while(value >= 10000)\n\nin order to process 4 digits at a time.\n\n David> +\t\tfor(int i = 0; i < minwidth - len; i++)\n David> +\t\t{\n David> +\t\t\tmemcpy(str + i, DIGIT_TABLE, 1);\n David> +\t\t}\n\nShould be:\n memset(str, '0', minwidth-len);\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Sat, 21 Sep 2019 03:36:21 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Sat, Sep 21, 2019 at 03:36:21AM +0100, Andrew Gierth wrote:\n> >>>>> \"David\" == David Fetter <david@fetter.org> writes:\n> \n> David> +\t/* Compute the result string. */\n> David> +\tif (value >= 100000000)\n> David> +\t{\n> David> +\t\tconst\tuint32 value2 = value % 100000000;\n> David> +\n> David> +\t\tconst uint32 c = value2 % 10000;\n> David> +\t\tconst uint32 d = value2 / 10000;\n> David> +\t\tconst uint32 c0 = (c % 100) << 1;\n> David> +\t\tconst uint32 c1 = (c / 100) << 1;\n> David> +\t\tconst uint32 d0 = (d % 100) << 1;\n> David> +\t\tconst uint32 d1 = (d / 100) << 1;\n> David> +\n> David> +\t\tchar *pos = a + olength - i;\n> David> +\n> David> +\t\tvalue /= 100000000;\n> David> +\n> David> +\t\tmemcpy(pos - 2, DIGIT_TABLE + c0, 2);\n> David> +\t\tmemcpy(pos - 4, DIGIT_TABLE + c1, 2);\n> David> +\t\tmemcpy(pos - 6, DIGIT_TABLE + d0, 2);\n> David> +\t\tmemcpy(pos - 8, DIGIT_TABLE + d1, 2);\n> David> +\t\ti += 8;\n> David> +\t}\n> \n> For the 32-bit case, there's no point in doing an 8-digit divide\n> specially, it doesn't save any time. It's sufficient to just change\n> \n> David> +\tif (value >= 10000)\n> \n> to while(value >= 10000)\n\nDone.\n\n> in order to process 4 digits at a time.\n> \n> David> +\t\tfor(int i = 0; i < minwidth - len; i++)\n> David> +\t\t{\n> David> +\t\t\tmemcpy(str + i, DIGIT_TABLE, 1);\n> David> +\t\t}\n> \n> Should be:\n> memset(str, '0', minwidth-len);\n\nDone.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sat, 21 Sep 2019 08:08:35 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": ">>>>> \"David\" == David Fetter <david@fetter.org> writes:\n\n David> +static inline uint32\n David> +decimalLength64(const uint64_t v)\n\nShould be uint64, not uint64_t.\n\nAlso return an int, not a uint32.\n\nFor int vs. int32, my own inclination is to use \"int\" where the value is\njust a (smallish) number, especially one that will be used as an index\nor loop count, and use \"int32\" when it actually matters that it's 32\nbits rather than some other size. Other opinions may differ.\n\n David> +{\n David> +\tuint32\t\t\tt;\n David> +\tstatic uint64_t\tPowersOfTen[] = {\n\nuint64 not uint64_t here too.\n\n David> +int32\n David> +pg_ltoa_n(uint32 value, char *a)\n\nIf this is going to handle only unsigned values, it should probably be\nnamed pg_ultoa_n.\n\n David> +\tuint32\ti = 0, adjust = 0;\n\n\"adjust\" is not assigned anywhere else. Presumably that's from previous\nhandling of negative numbers?\n\n David> +\t\tmemcpy(a, \"0\", 1);\n\n *a = '0'; would suffice.\n\n David> +\ti += adjust;\n\nSuperfluous?\n\n David> +\tuint32_t\tuvalue = (uint32_t)value;\n\nuint32 not uint32_t.\n\n David> +\tint32\t\tlen;\n\nSee above re. int vs. int32.\n\n David> +\t\tuvalue = (uint32_t)0 - (uint32_t)value;\n\nShould be uint32 not uint32_t again.\n\nFor anyone wondering, I suggested this to David in place of the ugly\nspecial casing of INT32_MIN. This method avoids the UB of doing (-value)\nwhere value==INT32_MIN, and is nevertheless required to produce the\ncorrect result:\n\n1. If value < 0, then ((uint32)value) is (value + UINT32_MAX + 1)\n2. (uint32)0 - (uint32)value\n becomes (UINT32_MAX+1)-(value+UINT32_MAX+1)\n which is (-value) as required\n\n David> +int32\n David> +pg_lltoa_n(uint64_t value, char *a)\n\nAgain, if this is doing unsigned, then it should be named pg_ulltoa_n\n\n David> +\t\tif (value == PG_INT32_MIN)\n\nThis being inconsistent with the others is not nice.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Sat, 21 Sep 2019 07:29:25 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Sat, Sep 21, 2019 at 07:29:25AM +0100, Andrew Gierth wrote:\n> >>>>> \"David\" == David Fetter <david@fetter.org> writes:\n> \n> David> +static inline uint32\n> David> +decimalLength64(const uint64_t v)\n> \n> Should be uint64, not uint64_t.\n\nFixed.\n\n> Also return an int, not a uint32.\n\nFixed.\n\n> For int vs. int32, my own inclination is to use \"int\" where the value is\n> just a (smallish) number, especially one that will be used as an index\n> or loop count, and use \"int32\" when it actually matters that it's 32\n> bits rather than some other size. Other opinions may differ.\n\nDone with int.\n\n> David> +{\n> David> +\tuint32\t\t\tt;\n> David> +\tstatic uint64_t\tPowersOfTen[] = {\n> \n> uint64 not uint64_t here too.\n\nFixed.\n\n> David> +int32\n> David> +pg_ltoa_n(uint32 value, char *a)\n> \n> If this is going to handle only unsigned values, it should probably be\n> named pg_ultoa_n.\n\nIt does signed values now.\n\n> David> +\tuint32\ti = 0, adjust = 0;\n> \n> \"adjust\" is not assigned anywhere else. Presumably that's from previous\n> handling of negative numbers?\n\nIt was, and now it's gone.\n\n> David> +\t\tmemcpy(a, \"0\", 1);\n> \n> *a = '0'; would suffice.\n\nFixed.\n\n> David> +\ti += adjust;\n> \n> Superfluous?\n\nYep. Gone.\n\n> David> +\tuint32_t\tuvalue = (uint32_t)value;\n> \n> uint32 not uint32_t.\n\nFixed.\n\n> David> +\tint32\t\tlen;\n> \n> See above re. int vs. int32.\n\nDone that way.\n\n> David> +\t\tuvalue = (uint32_t)0 - (uint32_t)value;\n> \n> Should be uint32 not uint32_t again.\n\nDone.\n\n> For anyone wondering, I suggested this to David in place of the ugly\n> special casing of INT32_MIN. This method avoids the UB of doing (-value)\n> where value==INT32_MIN, and is nevertheless required to produce the\n> correct result:\n> \n> 1. If value < 0, then ((uint32)value) is (value + UINT32_MAX + 1)\n> 2. (uint32)0 - (uint32)value\n> becomes (UINT32_MAX+1)-(value+UINT32_MAX+1)\n> which is (-value) as required\n> \n> David> +int32\n> David> +pg_lltoa_n(uint64_t value, char *a)\n> \n> Again, if this is doing unsigned, then it should be named pg_ulltoa_n\n\nRenamed to allow the uint64s that de-special-casing INT32_MIN/INT64_MIN requires.\n\n> David> +\t\tif (value == PG_INT32_MIN)\n> \n> This being inconsistent with the others is not nice.\n\nFixed.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 22 Sep 2019 23:58:04 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "Moin,\n\nOn 2019-09-22 23:58, David Fetter wrote:\n> On Sat, Sep 21, 2019 at 07:29:25AM +0100, Andrew Gierth wrote:\n>> >>>>> \"David\" == David Fetter <david@fetter.org> writes:\n\n> Fixed.\n\nGood work, more performance is sure nice :)\n\nNoticed one more thing in the patch:\n\n> -\t\t*start++ = *a;\n> -\t\t*a-- = swap;\n> +\t\tmemcpy(pos - 2, DIGIT_TABLE + c, 2);\n> +\t\ti += 2;\n> \t}\n> +\telse\n> +\t\t*a = (char) ('0' + value2);\n> +\n> +\treturn olength;\n> }\n\nThe line \"i += 2;\" modifies i, but i is never used again nor returned.\n\nBest regards,\n\nTels\n\n\n",
"msg_date": "Mon, 23 Sep 2019 10:28:09 +0200",
"msg_from": "Tels <nospam-pg-abuse@bloodgate.com>",
"msg_from_op": false,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": ">>>>> \"David\" == David Fetter <david@fetter.org> writes:\n\n David> + return pg_ltostr_zeropad(str, (uint32)0 - (uint32)value, minwidth - 1);\n\nNo, this is just reintroducing the undefined behavior again. Once the\nvalue has been converted to unsigned you can't cast it back to signed or\npass it to a function expecting a signed value, since it will overflow\nin the INT_MIN case. (and in this example would probably output '-'\nsigns until it ran off the end of memory).\n\nHere's how I would do it:\n\nchar *\npg_ltostr_zeropad(char *str, int32 value, int32 minwidth)\n{\n\tint32\t\tlen;\n\tuint32\t\tuvalue = value;\n\n\tAssert(minwidth > 0);\n\n\tif (value >= 0)\n\t{\n\t\tif (value < 100 && minwidth == 2) /* Short cut for common case */\n\t\t{\n\t\t\tmemcpy(str, DIGIT_TABLE + value*2, 2);\n\t\t\treturn str + 2;\n\t\t}\n\t}\n\telse\n\t{\n\t\t*str++ = '-';\n\t\tminwidth -= 1;\n\t\tuvalue = (uint32)0 - uvalue;\n\t}\n\t\t\t\n\tlen = pg_ultoa_n(uvalue, str);\n\tif (len >= minwidth)\n\t\treturn str + len;\n\n\tmemmove(str + minwidth - len, str, len);\n\tmemset(str, '0', minwidth - len);\n\treturn str + minwidth;\n}\n\n David> pg_ltostr(char *str, int32 value)\n David> +\tint32\tlen = pg_ultoa_n(value, str);\n David> +\treturn str + len;\n\nThis seems to have lost its handling of negative numbers entirely (which\ndoesn't say much for the regression test coverage)\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 23 Sep 2019 13:16:36 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 10:28:09AM +0200, Tels wrote:\n> Moin,\n> \n> On 2019-09-22 23:58, David Fetter wrote:\n> > On Sat, Sep 21, 2019 at 07:29:25AM +0100, Andrew Gierth wrote:\n> > > >>>>> \"David\" == David Fetter <david@fetter.org> writes:\n> \n> > Fixed.\n> \n> Good work, more performance is sure nice :)\n> \n> Noticed one more thing in the patch:\n> \n> > -\t\t*start++ = *a;\n> > -\t\t*a-- = swap;\n> > +\t\tmemcpy(pos - 2, DIGIT_TABLE + c, 2);\n> > +\t\ti += 2;\n> > \t}\n> > +\telse\n> > +\t\t*a = (char) ('0' + value2);\n> > +\n> > +\treturn olength;\n> > }\n> \n> The line \"i += 2;\" modifies i, but i is never used again nor returned.\n\nI found a similar one in a similar function, and removed it, too.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 23 Sep 2019 22:25:54 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 01:16:36PM +0100, Andrew Gierth wrote:\n> >>>>> \"David\" == David Fetter <david@fetter.org> writes:\n> \n> David> + return pg_ltostr_zeropad(str, (uint32)0 - (uint32)value, minwidth - 1);\n> \n> No, this is just reintroducing the undefined behavior again. Once the\n> value has been converted to unsigned you can't cast it back to signed or\n> pass it to a function expecting a signed value, since it will overflow\n> in the INT_MIN case. (and in this example would probably output '-'\n> signs until it ran off the end of memory).\n> \n> Here's how I would do it:\n> \n> char *\n> pg_ltostr_zeropad(char *str, int32 value, int32 minwidth)\n> {\n> \tint32\t\tlen;\n> \tuint32\t\tuvalue = value;\n> \n> \tAssert(minwidth > 0);\n> \n> \tif (value >= 0)\n> \t{\n> \t\tif (value < 100 && minwidth == 2) /* Short cut for common case */\n> \t\t{\n> \t\t\tmemcpy(str, DIGIT_TABLE + value*2, 2);\n> \t\t\treturn str + 2;\n> \t\t}\n> \t}\n> \telse\n> \t{\n> \t\t*str++ = '-';\n> \t\tminwidth -= 1;\n> \t\tuvalue = (uint32)0 - uvalue;\n> \t}\n> \t\t\t\n> \tlen = pg_ultoa_n(uvalue, str);\n> \tif (len >= minwidth)\n> \t\treturn str + len;\n> \n> \tmemmove(str + minwidth - len, str, len);\n> \tmemset(str, '0', minwidth - len);\n> \treturn str + minwidth;\n> }\n\nDone pretty much that way.\n\n> David> pg_ltostr(char *str, int32 value)\n> David> +\tint32\tlen = pg_ultoa_n(value, str);\n> David> +\treturn str + len;\n> \n> This seems to have lost its handling of negative numbers entirely\n\nGiven the comment that precedes it and all the use cases in the code,\nI changed the signature to take an unsigned integer instead. It's\npretty clear that the intent was to add digits and only digits to the\npassed-in string.\n\n> (which doesn't say much for the regression test coverage)\n\nI didn't see any obvious way to test functions not surfaced to SQL.\nShould we have one?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 23 Sep 2019 23:35:07 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 11:35:07PM +0200, David Fetter wrote:\n> On Mon, Sep 23, 2019 at 01:16:36PM +0100, Andrew Gierth wrote:\n\nPer discussion on IRC, change some functions to take only unsigned\ninteger types so as not to branch for the case of negative numbers\nthey're never actually called with.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 24 Sep 2019 06:30:18 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 06:30:18AM +0200, David Fetter wrote:\n> On Mon, Sep 23, 2019 at 11:35:07PM +0200, David Fetter wrote:\n> > On Mon, Sep 23, 2019 at 01:16:36PM +0100, Andrew Gierth wrote:\n> \n> Per discussion on IRC, change some functions to take only unsigned\n> integer types so as not to branch for the case of negative numbers\n> they're never actually called with.\n> \n> Best,\n> David.\n\n...and part of a pgindent run\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Tue, 24 Sep 2019 07:26:21 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Efficient output for integer types"
},
{
"msg_contents": "Hi,\n\nAny plans regarding committing this patch? I see the thread is silent\nsince September 24, when the last patch version was posted. The patch is\nalready marked as RFC since December, when David changed the status. I\ndon't have any opinion whether the patch is RFC or not (it might well\nbe), but IMHO it should have been mentioned in this thread.\n\nI did a quick test to see how much more efficient this is, and for a\ntable with 10 bigint columns and 5M random rows the COPY to /dev/null\nwent from 3000 ms to ~2700 ms. That's not the 30% speedup mentioned by\nDavid in the first message, but 10% is still pretty nice.\n\nOf course, for real-world use cases the speedup will be lower because of\nusing other data types too, I/O etc. But it's still nice.\n\nSo, is anyone opposed to pushing this? If not, who'll to do that? I see\nAndrew Gierth was involved in the discussions on IRC and it's related to\nthe Ryu patch, so maybe he want's to take care of this. Andrew?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 11 Jan 2020 14:31:59 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Efficient output for integer types"
}
] |
[
{
"msg_contents": "I'm sending this to hackers, because it is not exactly a bug, and it can't\nbe addressed from userland. I think it is a coding issue, although I\nhaven't identified the exact code.\n\nWhen closing the local session which had used postgres_fdw over an ssl\nconnection, I get log spam on the foreign server saying:\n\nLOG: could not receive data from client: Connection reset by peer\n\nIt is easy to reproduce, but you must be using ssl to do so.\n\nOn searching, I see that a lot of people have run into this issue, with\nconsiderable confusion, but as far as I can see it has never been diagnosed.\n\nIs there anything that can be done about this, other than just learning to\nignore it?\n\nCheers,\n\nJeff",
"msg_date": "Sun, 15 Sep 2019 10:40:49 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "log spam with postgres_fdw"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> When closing the local session which had used postgres_fdw over an ssl\n> connection, I get log spam on the foreign server saying:\n> LOG: could not receive data from client: Connection reset by peer\n> It is easy to reproduce, but you must be using ssl to do so.\n> On searching, I see that a lot of people have run into this issue, with\n> considerable confusion, but as far as I can see it has never been diagnosed.\n\nIn\nhttps://www.postgresql.org/message-id/flat/3DPLMQIC.YU6IFMLY.3PLOWL6W%40FQT5M7HS.IFBAANAE.A7GUPCPM\n\nwe'd concluded that the issue is probably that postgres_fdw has no\nlogic to shut down its external connections when the session closes.\nIt's not very clear why the SSL dependency, but we speculated that\nadding an on_proc_exit callback to close the connection(s) would help.\n\nI imagine dblink has a similar issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Sep 2019 11:14:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: log spam with postgres_fdw"
},
{
"msg_contents": "On Sun, Sep 15, 2019 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jeff Janes <jeff.janes@gmail.com> writes:\n> > When closing the local session which had used postgres_fdw over an ssl\n> > connection, I get log spam on the foreign server saying:\n> > LOG: could not receive data from client: Connection reset by peer\n> > It is easy to reproduce, but you must be using ssl to do so.\n> > On searching, I see that a lot of people have run into this issue, with\n> > considerable confusion, but as far as I can see it has never been\n> diagnosed.\n>\n> In\n>\n> https://www.postgresql.org/message-id/flat/3DPLMQIC.YU6IFMLY.3PLOWL6W%40FQT5M7HS.IFBAANAE.A7GUPCPM\n>\n>\nThanks, I had not spotted that one, I guess because the log message itself\nwas not in the subject so it ranked lower.\n\n\n> we'd concluded that the issue is probably that postgres_fdw has no\n> logic to shut down its external connections when the session closes.\n> It's not very clear why the SSL dependency, but we speculated that\n> adding an on_proc_exit callback to close the connection(s) would help.\n>\n>\nIt is easy to reproduce the ssl dependency without any FDW, just by doing a\nkill -9 on psql. Apparently the backend process for unencrypted connections\nare happy to be ghosted, while ssl ones are not; which seems like an odd\ndistinction to make. So should this be addressed on both sides (the server\nnot whining, and the client doing the on_proc_exit anyway?). 
I can take a\nstab at the client side one, but I'm over my head on the ssl connection\nhandling logic on the server side.\n\nCheers,\n\nJeff\n\nOn Sun, Sep 15, 2019 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Jeff Janes <jeff.janes@gmail.com> writes:\n> When closing the local session which had used postgres_fdw over an ssl\n> connection, I get log spam on the foreign server saying:\n> LOG: could not receive data from client: Connection reset by peer\n> It is easy to reproduce, but you must be using ssl to do so.\n> On searching, I see that a lot of people have run into this issue, with\n> considerable confusion, but as far as I can see it has never been diagnosed.\n\nIn\nhttps://www.postgresql.org/message-id/flat/3DPLMQIC.YU6IFMLY.3PLOWL6W%40FQT5M7HS.IFBAANAE.A7GUPCPM\nThanks, I had not spotted that one, I guess because the log message itself was not in the subject so it ranked lower. \nwe'd concluded that the issue is probably that postgres_fdw has no\nlogic to shut down its external connections when the session closes.\nIt's not very clear why the SSL dependency, but we speculated that\nadding an on_proc_exit callback to close the connection(s) would help.\nIt is easy to reproduce the ssl dependency without any FDW, just by doing a kill -9 on psql. Apparently the backend process for unencrypted connections are happy to be ghosted, while ssl ones are not; which seems like an odd distinction to make. So should this be addressed on both sides (the server not whining, and the client doing the on_proc_exit anyway?). I can take a stab at the client side one, but I'm over my head on the ssl connection handling logic on the server side. Cheers,Jeff",
"msg_date": "Sun, 15 Sep 2019 12:20:28 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: log spam with postgres_fdw"
}
] |
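The fix Tom Lane suggests above is an on_proc_exit callback in postgres_fdw, so that remote connections get a proper terminate message instead of the server discovering an abruptly reset socket. A hedged sketch of the same shutdown-hook pattern using Python's atexit — all names here are illustrative, and the real fix would be C code using the backend's on_proc_exit() machinery:

```python
import atexit

class RemoteConnection:
    """Stand-in for a postgres_fdw connection to a foreign server."""
    def __init__(self, name):
        self.name = name
        self.open = True

    def close(self):
        if self.open:
            # A graceful close sends a terminate message, so the remote
            # server logs nothing; an abrupt process exit leaves the
            # socket to be reset, producing "Connection reset by peer".
            self.open = False

_live_connections = []

def open_remote_connection(name):
    conn = RemoteConnection(name)
    _live_connections.append(conn)
    return conn

def close_remote_connections():
    # Analogue of the proposed on_proc_exit callback: close every
    # still-open remote connection before the process exits.
    for conn in _live_connections:
        conn.close()

# Registered once; runs automatically at normal process exit.
atexit.register(close_remote_connections)
```

Note that, as with on_proc_exit, this only helps for orderly shutdowns — a kill -9 (the case Jeff reproduces without any FDW) bypasses any exit hook, which is why the server-side logging question remains separate.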
[
{
"msg_contents": "As I understand it, the current patch performs immediate IVM using AFTER\nSTATEMENT trigger transition tables.\n\nHowever, multiple tables can be modified *before* AFTER STATEMENT triggers\nare fired.\n\nCREATE TABLE example1 (a int);\nCREATE TABLE example2 (a int);\n\nCREATE INCREMENTAL MATERIALIZED VIEW mv AS\nSELECT example1.a, example2.a\nFROM example1 JOIN example2 ON a;\n\nWITH\n insert1 AS (INSERT INTO example1 VALUES (1)),\n insert2 AS (INSERT INTO example2 VALUES (1))\nSELECT NULL;\n\nChanges to example1 are visible in an AFTER STATEMENT trigger on example2,\nand vice versa. Would this not result in the (1, 1) tuple being\n\"double-counted\"?\n\nIVM needs to either:\n\n(1) Evaluate deltas \"serially' (e.g. EACH ROW triggers)\n\n(2) Have simultaneous access to multiple deltas:\ndelta_mv = example1 x delta_example2 + example2 x delta_example1 -\ndelta_example1 x delta_example2\n\nThis latter method is the \"logged\" approach that has been discussed for\ndeferred evaluation.\n\ntl;dr It seems that AFTER STATEMENT triggers required a deferred-like\nimplementation anyway.\n\nAs I understand it, the current patch performs immediate IVM using AFTER STATEMENT trigger transition tables.However, multiple tables can be modified *before* AFTER STATEMENT triggers are fired.CREATE TABLE example1 (a int);CREATE TABLE example2 (a int);CREATE INCREMENTAL MATERIALIZED VIEW mv ASSELECT example1.a, example2.aFROM example1 JOIN example2 ON a;WITH insert1 AS (INSERT INTO example1 VALUES (1)), insert2 AS (INSERT INTO example2 VALUES (1))SELECT NULL;Changes to example1 are visible in an AFTER STATEMENT trigger on example2, and vice versa. Would this not result in the (1, 1) tuple being \"double-counted\"?IVM needs to either:(1) Evaluate deltas \"serially' (e.g. 
EACH ROW triggers)(2) Have simultaneous access to multiple deltas:delta_mv = example1 x delta_example2 + example2 x delta_example1 - delta_example1 x delta_example2This latter method is the \"logged\" approach that has been discussed for deferred evaluation.tl;dr It seems that AFTER STATEMENT triggers required a deferred-like implementation anyway.",
"msg_date": "Sun, 15 Sep 2019 11:52:22 -0600",
"msg_from": "Paul Draper <paulddraper@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Hi Paul,\n\nThank you for your suggestion.\n\nOn Sun, 15 Sep 2019 11:52:22 -0600\nPaul Draper <paulddraper@gmail.com> wrote:\n\n> As I understand it, the current patch performs immediate IVM using AFTER\n> STATEMENT trigger transition tables.\n> \n> However, multiple tables can be modified *before* AFTER STATEMENT triggers\n> are fired.\n> \n> CREATE TABLE example1 (a int);\n> CREATE TABLE example2 (a int);\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW mv AS\n> SELECT example1.a, example2.a\n> FROM example1 JOIN example2 ON a;\n> \n> WITH\n> insert1 AS (INSERT INTO example1 VALUES (1)),\n> insert2 AS (INSERT INTO example2 VALUES (1))\n> SELECT NULL;\n> \n> Changes to example1 are visible in an AFTER STATEMENT trigger on example2,\n> and vice versa. Would this not result in the (1, 1) tuple being\n> \"double-counted\"?\n> \n> IVM needs to either:\n> \n> (1) Evaluate deltas \"serially' (e.g. EACH ROW triggers)\n> \n> (2) Have simultaneous access to multiple deltas:\n> delta_mv = example1 x delta_example2 + example2 x delta_example1 -\n> delta_example1 x delta_example2\n> \n> This latter method is the \"logged\" approach that has been discussed for\n> deferred evaluation.\n> \n> tl;dr It seems that AFTER STATEMENT triggers required a deferred-like\n> implementation anyway.\n\nYou are right, the latest patch doesn't support the situation where\nmultiple tables are modified in a query. I noticed this when working\non self-join, which also virtually need to handle multiple table\nmodification.\n\nI am now working on this issue and the next patch will enable to handle\nthis situation. I plan to submit the patch during this month. Roughly\nspeaking, in the new implementation, AFTER STATEMENT triggers are used to\ncollect information of modified table and its changes (= transition tables), \nand then the only last trigger updates the view. This will avoid the\ndouble-counting. 
I think this implementation also would be a base of\ndeferred approach implementation in future where \"logs\" are used instead\nof transition tables.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 17 Sep 2019 19:02:40 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "Have you had any thoughts for more than two joined tables?\n\nEither there needs to be an quadratic number of joins, or intermediate join\nresults need to be stored and reused.\n\nOn Tue, Sep 17, 2019 at 8:50 AM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> Hi Paul,\n>\n> Thank you for your suggestion.\n>\n> On Sun, 15 Sep 2019 11:52:22 -0600\n> Paul Draper <paulddraper@gmail.com> wrote:\n>\n> > As I understand it, the current patch performs immediate IVM using AFTER\n> > STATEMENT trigger transition tables.\n> >\n> > However, multiple tables can be modified *before* AFTER STATEMENT\n> triggers\n> > are fired.\n> >\n> > CREATE TABLE example1 (a int);\n> > CREATE TABLE example2 (a int);\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW mv AS\n> > SELECT example1.a, example2.a\n> > FROM example1 JOIN example2 ON a;\n> >\n> > WITH\n> > insert1 AS (INSERT INTO example1 VALUES (1)),\n> > insert2 AS (INSERT INTO example2 VALUES (1))\n> > SELECT NULL;\n> >\n> > Changes to example1 are visible in an AFTER STATEMENT trigger on\n> example2,\n> > and vice versa. Would this not result in the (1, 1) tuple being\n> > \"double-counted\"?\n> >\n> > IVM needs to either:\n> >\n> > (1) Evaluate deltas \"serially' (e.g. EACH ROW triggers)\n> >\n> > (2) Have simultaneous access to multiple deltas:\n> > delta_mv = example1 x delta_example2 + example2 x delta_example1 -\n> > delta_example1 x delta_example2\n> >\n> > This latter method is the \"logged\" approach that has been discussed for\n> > deferred evaluation.\n> >\n> > tl;dr It seems that AFTER STATEMENT triggers required a deferred-like\n> > implementation anyway.\n>\n> You are right, the latest patch doesn't support the situation where\n> multiple tables are modified in a query. I noticed this when working\n> on self-join, which also virtually need to handle multiple table\n> modification.\n>\n> I am now working on this issue and the next patch will enable to handle\n> this situation. 
I plan to submit the patch during this month. Roughly\n> speaking, in the new implementation, AFTER STATEMENT triggers are used to\n> collect information of modified table and its changes (= transition\n> tables),\n> and then the only last trigger updates the view. This will avoid the\n> double-counting. I think this implementation also would be a base of\n> deferred approach implementation in future where \"logs\" are used instead\n> of transition tables.\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n>\n\nHave you had any thoughts for more than two joined tables?Either there needs to be an quadratic number of joins, or intermediate join results need to be stored and reused. On Tue, Sep 17, 2019 at 8:50 AM Yugo Nagata <nagata@sraoss.co.jp> wrote:Hi Paul,\n\nThank you for your suggestion.\n\nOn Sun, 15 Sep 2019 11:52:22 -0600\nPaul Draper <paulddraper@gmail.com> wrote:\n\n> As I understand it, the current patch performs immediate IVM using AFTER\n> STATEMENT trigger transition tables.\n> \n> However, multiple tables can be modified *before* AFTER STATEMENT triggers\n> are fired.\n> \n> CREATE TABLE example1 (a int);\n> CREATE TABLE example2 (a int);\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW mv AS\n> SELECT example1.a, example2.a\n> FROM example1 JOIN example2 ON a;\n> \n> WITH\n> insert1 AS (INSERT INTO example1 VALUES (1)),\n> insert2 AS (INSERT INTO example2 VALUES (1))\n> SELECT NULL;\n> \n> Changes to example1 are visible in an AFTER STATEMENT trigger on example2,\n> and vice versa. Would this not result in the (1, 1) tuple being\n> \"double-counted\"?\n> \n> IVM needs to either:\n> \n> (1) Evaluate deltas \"serially' (e.g. 
EACH ROW triggers)\n> \n> (2) Have simultaneous access to multiple deltas:\n> delta_mv = example1 x delta_example2 + example2 x delta_example1 -\n> delta_example1 x delta_example2\n> \n> This latter method is the \"logged\" approach that has been discussed for\n> deferred evaluation.\n> \n> tl;dr It seems that AFTER STATEMENT triggers required a deferred-like\n> implementation anyway.\n\nYou are right, the latest patch doesn't support the situation where\nmultiple tables are modified in a query. I noticed this when working\non self-join, which also virtually need to handle multiple table\nmodification.\n\nI am now working on this issue and the next patch will enable to handle\nthis situation. I plan to submit the patch during this month. Roughly\nspeaking, in the new implementation, AFTER STATEMENT triggers are used to\ncollect information of modified table and its changes (= transition tables), \nand then the only last trigger updates the view. This will avoid the\ndouble-counting. I think this implementation also would be a base of\ndeferred approach implementation in future where \"logs\" are used instead\nof transition tables.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Tue, 17 Sep 2019 12:03:20 -0600",
"msg_from": "Paul Draper <paulddraper@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "> Have you had any thoughts for more than two joined tables?\n\nI am not sure what you are asking here but if you are asking if IVM\nsupports two or more tables involved in a join, we already support it:\n\nDROP MATERIALIZED VIEW mv1;\nDROP MATERIALIZED VIEW\nDROP TABLE t1;\nDROP TABLE\nDROP TABLE t2;\nDROP TABLE\nDROP TABLE t3;\nDROP TABLE\nCREATE TABLE t1(i int, j int);\nCREATE TABLE\nCREATE TABLE t2(k int, l int);\nCREATE TABLE\nCREATE TABLE t3(m int, n int);\nCREATE TABLE\nINSERT INTO t1 VALUES(1,10),(2,11);\nINSERT 0 2\nINSERT INTO t2 VALUES(1,20),(2,21);\nINSERT 0 2\nINSERT INTO t3 VALUES(1,30),(2,31);\nINSERT 0 2\nCREATE INCREMENTAL MATERIALIZED VIEW mv1 AS SELECT * FROM t1 INNER JOIN t2 ON t1.i = t2.k INNER JOIN t3 ON t1.i = t3.m;\nSELECT 2\nSELECT * FROM mv1;\n i | j | k | l | m | n \n---+----+---+----+---+----\n 1 | 10 | 1 | 20 | 1 | 30\n 2 | 11 | 2 | 21 | 2 | 31\n(2 rows)\n\nUPDATE t1 SET j = 15 WHERE i = 1;\nUPDATE 1\nSELECT * FROM mv1;\n i | j | k | l | m | n \n---+----+---+----+---+----\n 2 | 11 | 2 | 21 | 2 | 31\n 1 | 15 | 1 | 20 | 1 | 30\n(2 rows)\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n> Either there needs to be an quadratic number of joins, or intermediate join\n> results need to be stored and reused.\n> \n> On Tue, Sep 17, 2019 at 8:50 AM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> \n>> Hi Paul,\n>>\n>> Thank you for your suggestion.\n>>\n>> On Sun, 15 Sep 2019 11:52:22 -0600\n>> Paul Draper <paulddraper@gmail.com> wrote:\n>>\n>> > As I understand it, the current patch performs immediate IVM using AFTER\n>> > STATEMENT trigger transition tables.\n>> >\n>> > However, multiple tables can be modified *before* AFTER STATEMENT\n>> triggers\n>> > are fired.\n>> >\n>> > CREATE TABLE example1 (a int);\n>> > CREATE TABLE example2 (a int);\n>> >\n>> > CREATE INCREMENTAL MATERIALIZED VIEW mv AS\n>> > SELECT example1.a, example2.a\n>> > FROM example1 JOIN example2 ON a;\n>> >\n>> > WITH\n>> > insert1 AS (INSERT INTO example1 VALUES (1)),\n>> > insert2 AS (INSERT INTO example2 VALUES (1))\n>> > SELECT NULL;\n>> >\n>> > Changes to example1 are visible in an AFTER STATEMENT trigger on\n>> example2,\n>> > and vice versa. Would this not result in the (1, 1) tuple being\n>> > \"double-counted\"?\n>> >\n>> > IVM needs to either:\n>> >\n>> > (1) Evaluate deltas \"serially' (e.g. EACH ROW triggers)\n>> >\n>> > (2) Have simultaneous access to multiple deltas:\n>> > delta_mv = example1 x delta_example2 + example2 x delta_example1 -\n>> > delta_example1 x delta_example2\n>> >\n>> > This latter method is the \"logged\" approach that has been discussed for\n>> > deferred evaluation.\n>> >\n>> > tl;dr It seems that AFTER STATEMENT triggers required a deferred-like\n>> > implementation anyway.\n>>\n>> You are right, the latest patch doesn't support the situation where\n>> multiple tables are modified in a query. 
I noticed this when working\n>> on self-join, which also virtually need to handle multiple table\n>> modification.\n>>\n>> I am now working on this issue and the next patch will enable to handle\n>> this situation. I plan to submit the patch during this month. Roughly\n>> speaking, in the new implementation, AFTER STATEMENT triggers are used to\n>> collect information of modified table and its changes (= transition\n>> tables),\n>> and then the only last trigger updates the view. This will avoid the\n>> double-counting. I think this implementation also would be a base of\n>> deferred approach implementation in future where \"logs\" are used instead\n>> of transition tables.\n>>\n>> Regards,\n>> Yugo Nagata\n>>\n>> --\n>> Yugo Nagata <nagata@sraoss.co.jp>\n>>\n\n\n",
"msg_date": "Fri, 27 Sep 2019 11:47:40 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
},
{
"msg_contents": "On Tue, 17 Sep 2019 12:03:20 -0600\nPaul Draper <paulddraper@gmail.com> wrote:\n\n> Have you had any thoughts for more than two joined tables?\n> \n> Either there needs to be an quadratic number of joins, or intermediate join\n> results need to be stored and reused.\n\nI don't think that we need to store intermediate join results.\n\nSuppose that we have a view V joining table R,S, and new tuples are inserted\nto each table, dR,dS, and dT respectively.\n\n V = R*S*T\n R_new = R + dR\n S_new = S + dS\n T_new = T + dT\n\nIn this situation, we can calculate the new view state as bellow.\n\nV_new \n= R_new * S_new * T_new\n= (R + dR) * (S + dS) * (T + dT)\n= R*S*T + dR*(S + dS)*(T + dT) + R*dS*(T + dT) + R*S*dT\n= V + dR*(S + dS)*(T + dT) + R*dS*(T + dT) + R*S*dT\n\nAlthough the number of terms is 2^3(=8), if we can use both of pre-update\nstate (eg. R) and post-update state (eg. R+dR), we only need only three joins.\nActually, post-update state is available in AFTER trigger, and pre-update state\ncan be calculated by using delta tables (transition tables) and cmin/xmin system\ncolumns (or snapshot). 
This is the approach my implementation adopts.\n\n\n> \n> On Tue, Sep 17, 2019 at 8:50 AM Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> \n> > Hi Paul,\n> >\n> > Thank you for your suggestion.\n> >\n> > On Sun, 15 Sep 2019 11:52:22 -0600\n> > Paul Draper <paulddraper@gmail.com> wrote:\n> >\n> > > As I understand it, the current patch performs immediate IVM using AFTER\n> > > STATEMENT trigger transition tables.\n> > >\n> > > However, multiple tables can be modified *before* AFTER STATEMENT\n> > triggers\n> > > are fired.\n> > >\n> > > CREATE TABLE example1 (a int);\n> > > CREATE TABLE example2 (a int);\n> > >\n> > > CREATE INCREMENTAL MATERIALIZED VIEW mv AS\n> > > SELECT example1.a, example2.a\n> > > FROM example1 JOIN example2 ON a;\n> > >\n> > > WITH\n> > > insert1 AS (INSERT INTO example1 VALUES (1)),\n> > > insert2 AS (INSERT INTO example2 VALUES (1))\n> > > SELECT NULL;\n> > >\n> > > Changes to example1 are visible in an AFTER STATEMENT trigger on\n> > example2,\n> > > and vice versa. Would this not result in the (1, 1) tuple being\n> > > \"double-counted\"?\n> > >\n> > > IVM needs to either:\n> > >\n> > > (1) Evaluate deltas \"serially' (e.g. EACH ROW triggers)\n> > >\n> > > (2) Have simultaneous access to multiple deltas:\n> > > delta_mv = example1 x delta_example2 + example2 x delta_example1 -\n> > > delta_example1 x delta_example2\n> > >\n> > > This latter method is the \"logged\" approach that has been discussed for\n> > > deferred evaluation.\n> > >\n> > > tl;dr It seems that AFTER STATEMENT triggers required a deferred-like\n> > > implementation anyway.\n> >\n> > You are right, the latest patch doesn't support the situation where\n> > multiple tables are modified in a query. I noticed this when working\n> > on self-join, which also virtually need to handle multiple table\n> > modification.\n> >\n> > I am now working on this issue and the next patch will enable to handle\n> > this situation. I plan to submit the patch during this month. 
Roughly\n> > speaking, in the new implementation, AFTER STATEMENT triggers are used to\n> > collect information of modified table and its changes (= transition\n> > tables),\n> > and then the only last trigger updates the view. This will avoid the\n> > double-counting. I think this implementation also would be a base of\n> > deferred approach implementation in future where \"logs\" are used instead\n> > of transition tables.\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> > --\n> > Yugo Nagata <nagata@sraoss.co.jp>\n> >\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 27 Sep 2019 17:17:42 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Implementing Incremental View Maintenance"
}
] |
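Nagata's expansion above (V_new = V + dR*(S+dS)*(T+dT) + R*dS*(T+dT) + R*S*dT) can be checked mechanically under bag (multiset) semantics, which is what IVM delta maintenance actually uses. A sketch — not the patch's implementation — assuming a natural join on a single column and Counter-based bags, with all relation contents chosen arbitrarily for illustration:

```python
from collections import Counter

def join3(a, b, c):
    """Bag natural join of three single-column relations: a row with
    join value v appears a[v] * b[v] * c[v] times in the result."""
    keys = set(a) | set(b) | set(c)
    return Counter({v: a[v] * b[v] * c[v]
                    for v in keys if a[v] * b[v] * c[v]})

# Base relations and per-statement deltas (rows inserted by one query).
R, S, T = Counter([1, 2]), Counter([1, 2, 2]), Counter([2, 3])
dR, dS, dT = Counter([2]), Counter([3]), Counter([1, 2])

R_new, S_new, T_new = R + dR, S + dS, T + dT

# Incremental delta: three joins instead of recomputing the view,
# mixing pre-update (R, S) and post-update (S_new, T_new) states
# exactly as in the expansion quoted in the thread.
delta = (join3(dR, S_new, T_new)
         + join3(R, dS, T_new)
         + join3(R, S, dT))

V = join3(R, S, T)
assert V + delta == join3(R_new, S_new, T_new)  # matches full recompute
```

The assertion holds for any inputs because, per join key v, (r+dr)(s+ds)(t+dt) = rst + dr(s+ds)(t+dt) + r·ds·(t+dt) + r·s·dt term by term; this is also why double-counting appears if each table's delta is applied against already-updated states on both sides.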
[
{
"msg_contents": "We want to export data from PG to Kafka,\nWe can't rely on extension which we have written as there could be any\nproblems which we are not aware of and PG might break.\nWe don't want our master to go down because of the extension we have\nwritten.\n\nSo, we are okay with having a new PG instance whose work is just to export\ndata, as slave.\n\nWhat we thought of doing is to pause recovery (start-up) process, on any\nvacuum changes on system catalog tables and resume once, our\nlogical-decoding has caught up.\nThat way, we aren't bloating our master.\n\nOur problem is, we aren't able to exactly identify what WAL records are\ncausing Vacuum changes, as far as our understanding goes, `HEAP_2 (rmgr),\nCLEAN` and `BTREE (rmgr), VACUUM` records.\n\nInorder to see our understanding is right or not, IPC (inter-process\ncommunication) between WAL_SENDER and START_UP process, is not efficient\nenough.\n\nI have seen Latches, but I'm not sure exactly how to use them, as from\ncomments, my understanding is START_UP process is not available in PGPROC\narray.\n\nFor efficient inter-process communication, what would be ideal way to\ncommunicate between two processes.\n\nI'm new to PostgreSQL, and C - world.\n\nWhat we are trying to achieve is something similar to this:\n\nSTART_UP process goes to sleep, as soon as it sees any vacuum on catalog.\nWAL_SENDER process will resume recovery (wakeup), as soon as it caught up\nand goes to sleep.\nSTART_UP process will wake up when THERE is something for WAL_SENDER to\nsend.\n\nBasically, IPC, between two processes, where one process generates work,\nand other consumes it.\nproducer should go to sleep, until consumer caught up.\nconsumer should signal producer of it's completion and wake up producer.\ncycle goes on...\n\nAs I have indicated before, I new to C and PostgreSQL, I familiar with\nGoLang, and I have written a sample program in Go (attached below).\n\nAny suggestions and pointers would be greatly helpful.\n\n-- 
\nThank you\n\nWe want to export data from PG to Kafka,We can't rely on extension which we have written as there could be any problems which we are not aware of and PG might break.We don't want our master to go down because of the extension we have written.So, we are okay with having a new PG instance whose work is just to export data, as slave.What we thought of doing is to pause recovery (start-up) process, on any vacuum changes on system catalog tables and resume once, our logical-decoding has caught up.That way, we aren't bloating our master.Our problem is, we aren't able to exactly identify what WAL records are causing Vacuum changes, as far as our understanding goes, `HEAP_2 (rmgr), CLEAN` and `BTREE (rmgr), VACUUM` records.Inorder to see our understanding is right or not, IPC (inter-process communication) between WAL_SENDER and START_UP process, is not efficient enough.I have seen Latches, but I'm not sure exactly how to use them, as from comments, my understanding is START_UP process is not available in PGPROC array.For efficient inter-process communication, what would be ideal way to communicate between two processes.I'm new to PostgreSQL, and C - world.What we are trying to achieve is something similar to this:START_UP process goes to sleep, as soon as it sees any vacuum on catalog.WAL_SENDER process will resume recovery (wakeup), as soon as it caught up and goes to sleep.START_UP process will wake up when THERE is something for WAL_SENDER to send.Basically, IPC, between two processes, where one process generates work, and other consumes it.producer should go to sleep, until consumer caught up.consumer should signal producer of it's completion and wake up producer.cycle goes on...As I have indicated before, I new to C and PostgreSQL, I familiar with GoLang, and I have written a sample program in Go (attached below).Any suggestions and pointers would be greatly helpful.-- Thank you",
"msg_date": "Mon, 16 Sep 2019 11:11:31 +0530",
"msg_from": "nilsocket <nilsocket@gmail.com>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "Extremely, sorry forgot to add attachment.\n\nOn Mon, Sep 16, 2019 at 11:11 AM nilsocket <nilsocket@gmail.com> wrote:\n\n> We want to export data from PG to Kafka,\n> We can't rely on extension which we have written as there could be any\n> problems which we are not aware of and PG might break.\n> We don't want our master to go down because of the extension we have\n> written.\n>\n> So, we are okay with having a new PG instance whose work is just to export\n> data, as slave.\n>\n> What we thought of doing is to pause recovery (start-up) process, on any\n> vacuum changes on system catalog tables and resume once, our\n> logical-decoding has caught up.\n> That way, we aren't bloating our master.\n>\n> Our problem is, we aren't able to exactly identify what WAL records are\n> causing Vacuum changes, as far as our understanding goes, `HEAP_2 (rmgr),\n> CLEAN` and `BTREE (rmgr), VACUUM` records.\n>\n> Inorder to see our understanding is right or not, IPC (inter-process\n> communication) between WAL_SENDER and START_UP process, is not efficient\n> enough.\n>\n> I have seen Latches, but I'm not sure exactly how to use them, as from\n> comments, my understanding is START_UP process is not available in PGPROC\n> array.\n>\n> For efficient inter-process communication, what would be ideal way to\n> communicate between two processes.\n>\n> I'm new to PostgreSQL, and C - world.\n>\n> What we are trying to achieve is something similar to this:\n>\n> START_UP process goes to sleep, as soon as it sees any vacuum on catalog.\n> WAL_SENDER process will resume recovery (wakeup), as soon as it caught up\n> and goes to sleep.\n> START_UP process will wake up when THERE is something for WAL_SENDER to\n> send.\n>\n> Basically, IPC, between two processes, where one process generates work,\n> and other consumes it.\n> producer should go to sleep, until consumer caught up.\n> consumer should signal producer of it's completion and wake up producer.\n> cycle goes on...\n>\n> 
As I have indicated before, I new to C and PostgreSQL, I familiar with\n> GoLang, and I have written a sample program in Go (attached below).\n>\n> Any suggestions and pointers would be greatly helpful.\n>\n> --\n> Thank you\n>\n\n\n-- \nThank you",
"msg_date": "Mon, 16 Sep 2019 11:23:33 +0530",
"msg_from": "nilsocket <nilsocket@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re:"
}
] |
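The pause/resume protocol the poster describes — startup process (producer) sleeps until the walsender (consumer) catches up, walsender sleeps until there is something to send — is a classic condition-variable handshake. In the backend this would be built on latches across processes; the sketch below shows only the shape of the protocol, in-process with threads, and every name is illustrative:

```python
import threading
from collections import deque

queue = deque()            # records the "startup process" has produced
cond = threading.Condition()
DONE = object()            # sentinel marking end of the stream

def startup_process(records):
    # Producer: enqueue one record, wake the consumer, then sleep
    # until the consumer has drained the queue (caught up).
    for rec in records:
        with cond:
            queue.append(rec)
            cond.notify_all()                  # wake the walsender
            cond.wait_for(lambda: not queue)   # pause until caught up
    with cond:
        queue.append(DONE)
        cond.notify_all()

def walsender_process(sent):
    # Consumer: sleep until work arrives, send it, wake the producer.
    while True:
        with cond:
            cond.wait_for(lambda: queue)
            rec = queue.popleft()
            cond.notify_all()                  # resume the producer
        if rec is DONE:
            return
        sent.append(rec)

sent = []
consumer = threading.Thread(target=walsender_process, args=(sent,))
consumer.start()
startup_process(["rec1", "rec2", "rec3"])
consumer.join()
```

The predicate-based `wait_for` avoids lost wakeups, which is the same problem the backend's latch API (SetLatch/WaitLatch with a recheck loop) solves for separate processes; as noted in the thread, the startup process is not reachable through the regular PGPROC-based signalling, so an actual backend implementation would need its own latch plumbing.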
[
{
"msg_contents": "The ecpglib major version (SO_MAJOR_VERSION) was changed in\nbd7c95f0c1a38becffceb3ea7234d57167f6d4bf (Add DECLARE STATEMENT support\nto ECPG.), but I don't see anything in that patch that would warrant\nthat. I think we should undo that change.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Sep 2019 13:14:21 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "ecpglib major version changed"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> The ecpglib major version (SO_MAJOR_VERSION) was changed in\n> bd7c95f0c1a38becffceb3ea7234d57167f6d4bf (Add DECLARE STATEMENT support\n> to ECPG.), but I don't see anything in that patch that would warrant\n> that. I think we should undo that change.\n\nOuch. Yeah, that's a Big Deal from a packaging standpoint.\nWhy would it be necessary?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Sep 2019 09:41:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpglib major version changed"
},
{
"msg_contents": "On 2019-09-16 15:41, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> The ecpglib major version (SO_MAJOR_VERSION) was changed in\n>> bd7c95f0c1a38becffceb3ea7234d57167f6d4bf (Add DECLARE STATEMENT support\n>> to ECPG.), but I don't see anything in that patch that would warrant\n>> that. I think we should undo that change.\n> \n> Ouch. Yeah, that's a Big Deal from a packaging standpoint.\n> Why would it be necessary?\n\nI double-checked this with abidiff and there was nothing changed except\na few added functions. So I've reverted it now.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 19 Sep 2019 09:37:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: ecpglib major version changed"
}
] |
[
{
"msg_contents": "Whilst poking at the leakproofness-of-texteq issue, I realized\nthat there's an independent problem caused by the nondeterminism\npatch. To wit, that the text_pattern_ops btree opclass uses\ntexteq as its equality operator, even though that operator is\nno longer guaranteed to be bitwise equality. That means that\ndepending on which collation happens to get attached to the\noperator, equality might be inconsistent with the other members\nof the opclass, leading to who-knows-what bad results.\n\nbpchar_pattern_ops has the same issue with respect to bpchareq.\n\nThe obvious fix for this is to invent separate new equality operators,\nbut that's actually rather disastrous for performance, because\ntext_pattern_ops indexes would no longer be able to use WHERE clauses\nusing plain equality. That also feeds into whether equality clauses\ndeduced from equivalence classes will work for them (nope, not any\nmore). People using such indexes are just about certain to be\nbitterly unhappy.\n\nWe may not have any choice but to do that, though --- I sure don't\nsee any other easy fix. If we could be certain that the collation\nattached to the operator is deterministic, then it would still work\nwith a pattern_ops index, but that's not a concept that the index\ninfrastructure has got right now.\n\nWhatever we do about this is likely to require a catversion bump,\nmeaning we've got to fix it *now*.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Sep 2019 19:13:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Nondeterministic collations vs. text_pattern_ops"
},
{
"msg_contents": "On 2019-09-17 01:13, Tom Lane wrote:\n> Whilst poking at the leakproofness-of-texteq issue, I realized\n> that there's an independent problem caused by the nondeterminism\n> patch. To wit, that the text_pattern_ops btree opclass uses\n> texteq as its equality operator, even though that operator is\n> no longer guaranteed to be bitwise equality. That means that\n> depending on which collation happens to get attached to the\n> operator, equality might be inconsistent with the other members\n> of the opclass, leading to who-knows-what bad results.\n\nYou can't create a text_pattern_ops index on a column with\nnondeterministic collation:\n\ncreate collation c1 (provider = icu, locale = 'und', deterministic = false);\ncreate table t1 (a int, b text collate c1);\ncreate index on t1 (b text_pattern_ops);\nERROR: nondeterministic collations are not supported for operator class\n\"text_pattern_ops\"\n\nThere is some discussion in internal_text_pattern_compare().\n\nAre there other cases we need to consider?\n\nI notice that there is a hash opclass text_pattern_ops, which I'd\nactually never heard of until now, and I don't see documented. What\nwould we need to do about that?\n\n> The obvious fix for this is to invent separate new equality operators,\n> but that's actually rather disastrous for performance, because\n> text_pattern_ops indexes would no longer be able to use WHERE clauses\n> using plain equality. That also feeds into whether equality clauses\n> deduced from equivalence classes will work for them (nope, not any\n> more). People using such indexes are just about certain to be\n> bitterly unhappy.\n\nWould it help if one created COLLATE \"C\" indexes instead of\ntext_pattern_ops? What are the tradeoffs between the two approaches?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Sep 2019 10:35:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Nondeterministic collations vs. text_pattern_ops"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-09-17 01:13, Tom Lane wrote:\n>> Whilst poking at the leakproofness-of-texteq issue, I realized\n>> that there's an independent problem caused by the nondeterminism\n>> patch. To wit, that the text_pattern_ops btree opclass uses\n>> texteq as its equality operator, even though that operator is\n>> no longer guaranteed to be bitwise equality. That means that\n>> depending on which collation happens to get attached to the\n>> operator, equality might be inconsistent with the other members\n>> of the opclass, leading to who-knows-what bad results.\n\n> You can't create a text_pattern_ops index on a column with\n> nondeterministic collation:\n\n> create collation c1 (provider = icu, locale = 'und', deterministic = false);\n> create table t1 (a int, b text collate c1);\n> create index on t1 (b text_pattern_ops);\n> ERROR: nondeterministic collations are not supported for operator class\n> \"text_pattern_ops\"\n\nOh! I'd seen that error message, but not realized that it'd get\nthrown during index creation.\n\n> There is some discussion in internal_text_pattern_compare().\n\nI don't much like doing it that way: looking up the determinism property\nof the collation over again in every single comparison seems pretty\nexpensive, plus the operator is way exceeding its actual knowledge\nof the call context by throwing an error phrased that way.\n\n> Are there other cases we need to consider?\n\nI think that disallowing indexes with this combination of opclass and\ncollation may actually be sufficient. A query can request equality\nusing any collation, but it won't get matched to an index with a\ndifferent collation, so I think we're safe against index-related\nproblems if we have that restriction.\n\nAFAIR, the only other place in the system where non-default opclasses\ncan be invoked is ORDER BY. 
Somebody could write\n\n\tORDER BY textcol COLLATE \"nondeterm\" USING ~<~\n\nHowever, I think we're actually okay on that one, because although\nthe equality opclass member is out of sync with the rest, it won't\nget consulted during a sort. internal_text_pattern_compare will\nthrow an error for this, but I don't believe it actually needs to.\n\nMy recommendation is to get rid of the run-time checks and instead\nput a hack like this into DefineIndex or some minion thereof:\n\n\tif ((opclass == TEXT_PATTERN_BTREE_CLASS_OID ||\n\t opclass == VARCHAR_PATTERN_BTREE_CLASS_OID ||\n\t opclass == BPCHAR_PATTERN_BTREE_CLASS_OID) &&\n\t !get_collation_isdeterministic(collid))\n\t ereport(ERROR, ...);\n\nHard-wiring that is ugly; maybe someday we'd wish to expose \"doesn't\nallow nondeterminism\" as a more generally-available opclass property.\nBut without some other examples that have a need for it, I think it's\nnot worth the work to create infrastructure for that. It's not like\nthere are no other hard-wired legacy behaviors in DefineIndex...\n\n> I notice that there is a hash opclass text_pattern_ops, which I'd\n> actually never heard of until now, and I don't see documented. What\n> would we need to do about that?\n\nHuh. I wonder why that's there --- I can't see a reason why it'd\nbehave differently from the regular hash text_ops. Maybe we feared\nthat someday it would need to be different? Anyway, I think we can\njust ignore it for this purpose.\n\n> Would it help if one created COLLATE \"C\" indexes instead of\n> text_pattern_ops? 
What are the tradeoffs between the two approaches?\n\nOf course, the pattern_ops opclasses long predate our COLLATE support.\nI suspect if we'd had COLLATE we never would have invented them.\nI believe the pattern_ops are equivalent to a COLLATE \"C\" index, both\ntheoretically and in terms of the optimizations we know about making.\nThere might be some minor performance difference from not having to\nlook up the collation, but not much if we've done the collate-is-c\nfast paths properly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Sep 2019 11:17:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Nondeterministic collations vs. text_pattern_ops"
},
{
"msg_contents": "On 2019-09-17 17:17, Tom Lane wrote:\n> My recommendation is to get rid of the run-time checks and instead\n> put a hack like this into DefineIndex or some minion thereof:\n> \n> \tif ((opclass == TEXT_PATTERN_BTREE_CLASS_OID ||\n> \t opclass == VARCHAR_PATTERN_BTREE_CLASS_OID ||\n> \t opclass == BPCHAR_PATTERN_BTREE_CLASS_OID) &&\n> \t !get_collation_isdeterministic(collid))\n> \t ereport(ERROR, ...);\n\nHere is a draft patch.\n\nIt will require a catversion change because those operator classes don't\nhave assigned OIDs so far.\n\nThe comment block I just moved over for the time being. It should\nprobably be rephrased a bit.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 18 Sep 2019 14:31:39 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Nondeterministic collations vs. text_pattern_ops"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Here is a draft patch.\n\n> It will require a catversion change because those operator classes don't\n> have assigned OIDs so far.\n\nThat's slightly annoying given where we are with v12. We could\navoid it by looking up the opclass's opfamily and seeing if it's\nTEXT_BTREE_FAM_OID etc, which do already have hand-assigned OIDs.\nBut maybe avoiding a catversion bump now is not worth the cost of\nan extra syscache lookup. (It'd give me an excuse to shove the\nleakproofness-marking changes from the other thread into v12, so\nthere's that.)\n\nSpeaking of extra syscache lookups, I don't like that you rearranged\nthe if-test to check nondeterminism before the opclass identity checks.\nThat's usually going to be a wasted lookup.\n\n> The comment block I just moved over for the time being. It should\n> probably be rephrased a bit.\n\nIndeed. Maybe like\n\n * text_pattern_ops uses text_eq as the equality operator, which is\n * fine as long as the collation is deterministic; text_eq then\n * reduces to bitwise equality and so it is semantically compatible\n * with the other operators and functions in the opclass. But with a\n * nondeterministic collation, text_eq could yield results that are\n * incompatible with the actual behavior of the index (which is\n * determined by the opclass's comparison function). 
We prevent\n * such problems by refusing creation of an index with this opclass\n * and a nondeterministic collation.\n *\n * The same applies to varchar_pattern_ops and bpchar_pattern_ops.\n * If we find more cases, we might decide to create a real mechanism\n * for marking opclasses as incompatible with nondeterminism; but\n * for now, this small hack suffices.\n *\n * Another solution is to use a special operator, not text_eq, as the\n * equality opclass member, but that is undesirable because it would\n * prevent index usage in many queries that work fine today.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Sep 2019 11:04:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Nondeterministic collations vs. text_pattern_ops"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Here is a draft patch.\n\nWhere are we on pushing that? I'm starting to get antsy about the\namount of time remaining before rc1. It's a low-risk fix, but still,\nit'd be best to have a complete buildfarm cycle on it before Monday's\nwrap.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Sep 2019 19:24:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Nondeterministic collations vs. text_pattern_ops"
},
{
"msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> Here is a draft patch.\n\n> Where are we on pushing that? I'm starting to get antsy about the\n> amount of time remaining before rc1. It's a low-risk fix, but still,\n> it'd be best to have a complete buildfarm cycle on it before Monday's\n> wrap.\n\nSince time is now really running short, I went ahead and pushed\nthis, after doing a closer review and finding one nitpicky bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Sep 2019 16:30:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Nondeterministic collations vs. text_pattern_ops"
},
{
"msg_contents": "On 2019-09-21 22:30, Tom Lane wrote:\n> I wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>>> Here is a draft patch.\n> \n>> Where are we on pushing that? I'm starting to get antsy about the\n>> amount of time remaining before rc1. It's a low-risk fix, but still,\n>> it'd be best to have a complete buildfarm cycle on it before Monday's\n>> wrap.\n> \n> Since time is now really running short, I went ahead and pushed\n> this, after doing a closer review and finding one nitpicky bug.\n\nGreat! I think that covers all the open issues then.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 21 Sep 2019 23:11:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Nondeterministic collations vs. text_pattern_ops"
}
] |
[
{
"msg_contents": "Hi,\n\n* Commit 7086be6e3 should have documented the limitation that the\ndirect modification is disabled when WCO constraints are present, but\ndidn't, which is definitely my fault.\n\n* Commit fc22b6623 should have documented the limitation that the\ndirect modification is disabled when generated columns are defined,\nbut didn't.\n\nAttached is a patch for updating the documentation.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Tue, 17 Sep 2019 15:45:57 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Documentation updates for direct foreign table modification"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 3:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> * Commit 7086be6e3 should have documented the limitation that the\n> direct modification is disabled when WCO constraints are present, but\n> didn't, which is definitely my fault.\n>\n> * Commit fc22b6623 should have documented the limitation that the\n> direct modification is disabled when generated columns are defined,\n> but didn't.\n>\n> Attached is a patch for updating the documentation.\n\nCommitted.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 18 Sep 2019 20:03:34 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation updates for direct foreign table modification"
}
] |
[
{
"msg_contents": "Hi!\n\nI might have missed prior discussions about this, but I wonder if it\nwould be possible to support binary payloads for NOTIFY/LISTEN? Again\nand again I find it very limiting with just text (have to base64\nencode data, or convert it to JSON).\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n\n",
"msg_date": "Tue, 17 Sep 2019 01:01:32 -0700",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Feature request: binary NOTIFY"
},
{
"msg_contents": "Hi\n\nút 17. 9. 2019 v 10:01 odesílatel Mitar <mmitar@gmail.com> napsal:\n\n> Hi!\n>\n> I might have missed prior discussions about this, but I wonder if it\n> would be possible to support binary payloads for NOTIFY/LISTEN? Again\n> and again I find it very limiting with just text (have to base64\n> encode data, or convert it to JSON).\n>\n\nI think so is not any problem to pass binary data already. Text type \"text\"\nand binary type \"bytea\" is internally very similar.\n\nBut the message doesn't any info about type, so it should be ensure so\nclients understand to message and takes data in binary format.\n\nYou can overwrite pg_notify function for bytea format.\n\nIs not possible to use NOTIFY with binary data, because this statement\ndoesn't allow parametrization\n\nPavel\n\n>\n>\n> Mitar\n>\n> --\n> http://mitar.tnode.com/\n> https://twitter.com/mitar_m\n>\n>\n>\n\nHiút 17. 9. 2019 v 10:01 odesílatel Mitar <mmitar@gmail.com> napsal:Hi!\n\nI might have missed prior discussions about this, but I wonder if it\nwould be possible to support binary payloads for NOTIFY/LISTEN? Again\nand again I find it very limiting with just text (have to base64\nencode data, or convert it to JSON).I think so is not any problem to pass binary data already. Text type \"text\" and binary type \"bytea\" is internally very similar. But the message doesn't any info about type, so it should be ensure so clients understand to message and takes data in binary format.You can overwrite pg_notify function for bytea format.Is not possible to use NOTIFY with binary data, because this statement doesn't allow parametrizationPavel\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m",
"msg_date": "Tue, 17 Sep 2019 10:22:35 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: binary NOTIFY"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 17. 9. 2019 v 10:01 odesílatel Mitar <mmitar@gmail.com> napsal:\n>> I might have missed prior discussions about this, but I wonder if it\n>> would be possible to support binary payloads for NOTIFY/LISTEN? Again\n>> and again I find it very limiting with just text (have to base64\n>> encode data, or convert it to JSON).\n\n> I think so is not any problem to pass binary data already.\n\nYeah it is ... the internal async-queue data structure assumes\nnull-terminated strings. What's a lot worse, so does the\nwire protocol's NotificationResponse message, as well as every\nexisting client that can read it. (For instance, libpq's exposed\nAPI for notify messages hands back the payload as a null-terminated\nstring.) I don't think this is going to happen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Sep 2019 10:10:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: binary NOTIFY"
},
{
"msg_contents": "út 17. 9. 2019 v 16:10 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > út 17. 9. 2019 v 10:01 odesílatel Mitar <mmitar@gmail.com> napsal:\n> >> I might have missed prior discussions about this, but I wonder if it\n> >> would be possible to support binary payloads for NOTIFY/LISTEN? Again\n> >> and again I find it very limiting with just text (have to base64\n> >> encode data, or convert it to JSON).\n>\n> > I think so is not any problem to pass binary data already.\n>\n> Yeah it is ... the internal async-queue data structure assumes\n> null-terminated strings. What's a lot worse, so does the\n> wire protocol's NotificationResponse message, as well as every\n> existing client that can read it. (For instance, libpq's exposed\n> API for notify messages hands back the payload as a null-terminated\n> string.) I don't think this is going to happen.\n>\n\nok, thank you for correction.\n\nRegards\n\nPavel\n\n>\n> regards, tom lane\n>\n\nút 17. 9. 2019 v 16:10 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 17. 9. 2019 v 10:01 odesílatel Mitar <mmitar@gmail.com> napsal:\n>> I might have missed prior discussions about this, but I wonder if it\n>> would be possible to support binary payloads for NOTIFY/LISTEN? Again\n>> and again I find it very limiting with just text (have to base64\n>> encode data, or convert it to JSON).\n\n> I think so is not any problem to pass binary data already.\n\nYeah it is ... the internal async-queue data structure assumes\nnull-terminated strings. What's a lot worse, so does the\nwire protocol's NotificationResponse message, as well as every\nexisting client that can read it. (For instance, libpq's exposed\nAPI for notify messages hands back the payload as a null-terminated\nstring.) I don't think this is going to happen.ok, thank you for correction.RegardsPavel\n\n regards, tom lane",
"msg_date": "Tue, 17 Sep 2019 17:07:43 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: binary NOTIFY"
},
{
"msg_contents": "Hi!\n\nOn Tue, Sep 17, 2019 at 7:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah it is ... the internal async-queue data structure assumes\n> null-terminated strings. What's a lot worse, so does the\n> wire protocol's NotificationResponse message, as well as every\n> existing client that can read it. (For instance, libpq's exposed\n> API for notify messages hands back the payload as a null-terminated\n> string.) I don't think this is going to happen.\n\nAhh. Any particular reason for this design decision at that time?\n\nWhat about adding NOTIFYB and LISTENB commands? And\nNotificationBinaryResponse? For binary?\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n\n",
"msg_date": "Wed, 18 Sep 2019 20:46:25 -0700",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature request: binary NOTIFY"
},
{
"msg_contents": "Mitar <mmitar@gmail.com> writes:\n> What about adding NOTIFYB and LISTENB commands? And\n> NotificationBinaryResponse? For binary?\n\n[ shrug... ] We can put that on the list of things we might want\nto do if the wire protocol ever gets changed. I urgently recommend\nnot holding your breath.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Sep 2019 00:22:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: binary NOTIFY"
},
{
"msg_contents": "Hi!\n\nOn Wed, Sep 18, 2019 at 9:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> [ shrug... ] We can put that on the list of things we might want\n> to do if the wire protocol ever gets changed. I urgently recommend\n> not holding your breath.\n\nWhat is the process to add it to the list?\n\nAnd yes, I will not expect it soon. :-) Thanks.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n\n",
"msg_date": "Wed, 18 Sep 2019 21:32:30 -0700",
"msg_from": "Mitar <mmitar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature request: binary NOTIFY"
}
] |
[
{
"msg_contents": "This patch allows PostgreSQL to pause recovery before PITR target is reached \nif recovery_target_time is specified.\n\nMissing WAL's could then be restored from backup and applied on next restart.\n\nToday PostgreSQL opens the database in read/write on a new timeline even when \nPITR tareg is not reached.\n\nmake check is run with this patch with result \"All 192 tests passed.\"\nSource used is from version 12b4.\n\nFor both examples below \"recovery_target_time = '2019-09-17 09:24:00'\"\n\n_________________________\nLog from todays behavior:\n\n[20875] LOG: starting point-in-time recovery to 2019-09-17 09:24:00+02\n[20875] LOG: restored log file \"000000010000000000000002\" from archive\n[20875] LOG: redo starts at 0/2000028\n[20875] LOG: consistent recovery state reached at 0/2000100\n[20870] LOG: database system is ready to accept read only connections\n[20875] LOG: restored log file \"000000010000000000000003\" from archive\n[20875] LOG: restored log file \"000000010000000000000004\" from archive\ncp: cannot stat '/var/lib/pgsql/12/archivedwal/000000010000000000000005': No such file or directory\n[20875] LOG: redo done at 0/40080C8\n[20875] LOG: last completed transaction was at log time 2019-09-17 09:13:10.524645+02\n[20875] LOG: restored log file \"000000010000000000000004\" from archive\ncp: cannot stat '/var/lib/pgsql/12/archivedwal/00000002.history': No such file or directory\n[20875] LOG: selected new timeline ID: 2\n[20875] LOG: archive recovery complete\ncp: cannot stat '/var/lib/pgsql/12/archivedwal/00000001.history': No such file or directory\n[20870] LOG: database system is ready to accept connections\n\n________________________\nAnd with patched source:\n\n[20899] LOG: starting point-in-time recovery to 2019-09-17 09:24:00+02\n[20899] LOG: restored log file \"000000010000000000000002\" from archive\n[20899] LOG: redo starts at 0/2000028\n[20899] LOG: consistent recovery state reached at 0/20002B0\n[20895] LOG: database system is 
ready to accept read only connections\n[20899] LOG: restored log file \"000000010000000000000003\" from archive\n[20899] LOG: restored log file \"000000010000000000000004\" from archive\ncp: cannot stat '/var/lib/pgsql/12m/archivedwal/000000010000000000000005': No such file or directory\n[20899] LOG: Recovery target not reached but next WAL record culd not be read\n[20899] LOG: redo done at 0/4007D40\n[20899] LOG: last completed transaction was at log time 2019-09-17 09:13:10.539546+02\n[20899] LOG: recovery has paused\n[20899] HINT: Execute pg_wal_replay_resume() to continue.\n\n\nYou could restore WAL in several steps and when target is reached you get this log\n\n[21943] LOG: starting point-in-time recovery to 2019-09-17 09:24:00+02\n[21943] LOG: restored log file \"000000010000000000000005\" from archive\n[21943] LOG: redo starts at 0/5003C38\n[21943] LOG: consistent recovery state reached at 0/6000000\n[21941] LOG: database system is ready to accept read only connections\n[21943] LOG: restored log file \"000000010000000000000006\" from archive\n[21943] LOG: recovery stopping before commit of transaction 859, time 2019-09-17 09:24:02.58576+02\n[21943] LOG: recovery has paused\n[21943] HINT: Execute pg_wal_replay_resume() to continue.\n\nExecute pg_wal_replay_resume() as hinted.\n\n[21943] LOG: redo done at 0/6001830\n[21943] LOG: last completed transaction was at log time 2019-09-17 09:23:57.496945+02\ncp: cannot stat '/var/lib/pgsql/12m/archivedwal/00000002.history': No such file or directory\n[21943] LOG: selected new timeline ID: 2\n[21943] LOG: archive recovery complete\ncp: cannot stat '/var/lib/pgsql/12m/archivedwal/00000001.history': No such file or directory\n[21941] LOG: database system is ready to accept connections\n\n\n\n----------------\n\nLeif Gunnar Erlandsen",
"msg_date": "Tue, 17 Sep 2019 11:23:45 +0000",
"msg_from": "\"Leif Gunnar Erlandsen\" <leif@lako.no>",
"msg_from_op": true,
"msg_subject": "pause recovery if pitr target not reached"
},
{
"msg_contents": "On 2019-09-17 13:23, Leif Gunnar Erlandsen wrote:\n> This patch allows PostgreSQL to pause recovery before PITR target is reached \n> if recovery_target_time is specified.\n> \n> Missing WAL's could then be restored from backup and applied on next restart.\n> \n> Today PostgreSQL opens the database in read/write on a new timeline even when \n> PITR tareg is not reached.\n\nI think this idea is worth thinking about. I don't think this should be\nspecific to a time-based recovery target. This could apply for example\nto a target xid as well. Also, there should be a way to get the old\nbehavior. Perhaps this whole thing should be a new\nrecovery_target_action, say, 'pause_unless_reached'.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 19 Oct 2019 21:45:11 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "On Sat, 2019-10-19 at 21:45 +0200, Peter Eisentraut wrote:\n> On 2019-09-17 13:23, Leif Gunnar Erlandsen wrote:\n> > This patch allows PostgreSQL to pause recovery before PITR target is reached \n> > if recovery_target_time is specified.\n> > \n> > Missing WAL's could then be restored from backup and applied on next restart.\n> > \n> > Today PostgreSQL opens the database in read/write on a new timeline even when \n> > PITR tareg is not reached.\n> \n> I think this idea is worth thinking about. I don't think this should be\n> specific to a time-based recovery target. This could apply for example\n> to a target xid as well. Also, there should be a way to get the old\n> behavior. Perhaps this whole thing should be a new\n> recovery_target_action, say, 'pause_unless_reached'.\n\n+1 for pausing if end-of-logs is reached before the recovery target.\n\nI don't think that we need to add a new \"recovery_target_action\" to\nretain the old behavior, because I think that nobody ever wants that.\nI'd say that this typically happens in two cases:\n\n1. Someone forgot to archive the WAL segment that contains the target.\n In this case the proposed change will solve the problem.\n\n2. Someone specified the recovery target wrong, e.g. used CET rather\n than CEST in the recovery target time, so that the recovery target\n was later than intended.\n In that case the only solution is to start recovery from scratch.\n\nBut perhaps there are use cases I didn't think of.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Sun, 20 Oct 2019 23:06:32 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "On Sun, Oct 20, 2019 at 4:46 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-09-17 13:23, Leif Gunnar Erlandsen wrote:\n> > This patch allows PostgreSQL to pause recovery before PITR target is reached\n> > if recovery_target_time is specified.\n> >\n> > Missing WAL's could then be restored from backup and applied on next restart.\n> >\n> > Today PostgreSQL opens the database in read/write on a new timeline even when\n> > PITR tareg is not reached.\n>\n> I think this idea is worth thinking about. I don't think this should be\n> specific to a time-based recovery target. This could apply for example\n> to a target xid as well. Also, there should be a way to get the old\n> behavior. Perhaps this whole thing should be a new\n> recovery_target_action, say, 'pause_unless_reached'.\n\nProbably we can use standby mode + recovery target setting for\nthe almost same purpose. In this configuration, if end-of-WAL is reached\nbefore recovery target, the startup process keeps waiting for new WAL to\nbe available. Then, if recovery target is reached, the startup process works\nas recovery_target_action indicates.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Mon, 21 Oct 2019 15:44:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "On 2019-10-21 08:44, Fujii Masao wrote:\n> Probably we can use standby mode + recovery target setting for\n> the almost same purpose. In this configuration, if end-of-WAL is reached\n> before recovery target, the startup process keeps waiting for new WAL to\n> be available. Then, if recovery target is reached, the startup process works\n> as recovery_target_action indicates.\n\nSo basically get rid of recovery.signal mode and honor recovery target \nparameters in standby mode? That has some appeal because it simplify \nthis whole space significantly, but perhaps it would be too confusing \nfor end users?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 1 Nov 2019 13:41:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "On Fri, Nov 1, 2019 at 9:41 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-10-21 08:44, Fujii Masao wrote:\n> > Probably we can use standby mode + recovery target setting for\n> > the almost same purpose. In this configuration, if end-of-WAL is reached\n> > before recovery target, the startup process keeps waiting for new WAL to\n> > be available. Then, if recovery target is reached, the startup process works\n> > as recovery_target_action indicates.\n>\n> So basically get rid of recovery.signal mode and honor recovery target\n> parameters in standby mode?\n\nYes, currently not only archive recovery mode but also standby mode honors\nthe recovery target settings.\n\n> That has some appeal because it simplify\n> this whole space significantly, but perhaps it would be too confusing\n> for end users?\n\nThis looks less confusing than extending archive recovery. But I'd like to\nhear more opinions about that.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 5 Nov 2019 11:52:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "On 2019-09-17 13:23, Leif Gunnar Erlandsen wrote:\n> This patch allows PostgreSQL to pause recovery before PITR target is reached\n> if recovery_target_time is specified.\n\nBtw., this discussion/patch seems related: \nhttps://www.postgresql.org/message-id/flat/a3f650f1-fb0f-c913-a000-a4671f12a013%40postgrespro.ru\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 6 Nov 2019 08:32:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": ">\"Peter Eisentraut\" <peter.eisentraut@2ndquadrant.com> skrev 6. november 2019 kl. 08:32:\n>\n>> Btw., this discussion/patch seems related:\n>> https://www.postgresql.org/message-id/flat/a3f650f1-fb0f-c913-a000-a4671f12a013@postgrespro.ru\n\nI have read through this other proposal. As far as I could see in the suggested patch, it does not solve the same problem.\nIt still stops recovery when the recovery process does not find any more WAL. \nI would like the process to pause so administrator get to choose to find more WAL to apply.\n\n\nMy patch should probably be extended to include\nRECOVERY_TARGET_XID, RECOVERY_TARGET_NAME, RECOVERY_TARGET_LSN as well as RECOVERY_TARGET_TIME.\n\n\n---\nLeif Gunnar Erlandsen\n\n\n",
"msg_date": "Wed, 06 Nov 2019 12:24:41 +0000",
"msg_from": "\"Leif Gunnar Erlandsen\" <leif@lako.no>",
"msg_from_op": true,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "After studying this a bit more, I think the current behavior is totally \nbogus and needs a serious rethink.\n\nIf you specify a recovery target and it is reached, recovery pauses \n(depending on recovery_target_action).\n\nIf you specify a recovery target and it is not reached when the end of \nthe archive is reached (i.e., restore_command fails), then recovery ends \nand the server is promoted, without any further information. This is \nclearly wrong in multiple ways.\n\nI think what we should do is if we specify a recovery target and we \ndon't reach it, we should ereport(FATAL). Somewhere around\n\n /*\n * end of main redo apply loop\n */\n\nin StartupXLOG(), where we already check for other conditions that are \nundesirable at the end of recovery. Then a user can make fixes either \nby getting more WAL files to restore and adjusting the recovery target \nand starting again. I don't think pausing is the right behavior, but \nperhaps an argument could be made to offer it as a nondefault behavior.\n\nThere is an interesting overlap with the other thread that wants to make \n\"end of archive\" and explicitly settable recovery target. The current \nbehavior, however, is more like \"recovery time (say) or end of archive, \nwhichever happens first\", which is not a behavior that is currently \nselectable or intended with other methods of recovery target \nspecification. Also, if you want the end of the archive as your \nrecovery target, that currently does not respect the \nrecovery_target_action setting, but perhaps it should.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 21 Nov 2019 13:34:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "Adding another patch which is not only for recovery_target_time but also for xid, name and lsn.\n\n> After studying this a bit more, I think the current behavior is totally bogus and needs a serious\n> rethink.\n> \n> If you specify a recovery target and it is reached, recovery pauses (depending on\n> recovery_target_action).\n> \n> If you specify a recovery target and it is not reached when the end of the archive is reached\n> (i.e., restore_command fails), then recovery ends and the server is promoted, without any further\n> information. This is clearly wrong in multiple ways.\n\nYes, that is why I have created the patch.\n\n> \n> I think what we should do is if we specify a recovery target and we don't reach it, we should\n> ereport(FATAL). Somewhere around\n> \nIf recovery pauses or a FATAL error is reported, is not important, as long as it is possible to get some more WAL and continue recovery. Pause has the benefit of the possibility to inspect tables in the database.\n\n> in StartupXLOG(), where we already check for other conditions that are undesirable at the end of\n> recovery. Then a user can make fixes either by getting more WAL files to restore and adjusting the\n> recovery target and starting again. I don't think pausing is the right behavior, but perhaps an\n> argument could be made to offer it as a nondefault behavior.\n\nPausing was choosen in the patch as pause was the expected behaivior if target was reached.\n\nAnd the patch does not interfere with any other functionality as far as I know.\n\n--\nLeif Gunnar Erlandsen",
"msg_date": "Thu, 21 Nov 2019 12:50:18 +0000",
"msg_from": "\"Leif Gunnar Erlandsen\" <leif@lako.no>",
"msg_from_op": true,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "Hello, Lief, Peter.\n\nAt Thu, 21 Nov 2019 12:50:18 +0000, \"Leif Gunnar Erlandsen\" <leif@lako.no> wrote in \n> Adding another patch which is not only for recovery_target_time but also for xid, name and lsn.\n> \n> > After studying this a bit more, I think the current behavior is totally bogus and needs a serious\n> > rethink.\n> > \n> > If you specify a recovery target and it is reached, recovery pauses (depending on\n> > recovery_target_action).\n> > \n> > If you specify a recovery target and it is not reached when the end of the archive is reached\n> > (i.e., restore_command fails), then recovery ends and the server is promoted, without any further\n> > information. This is clearly wrong in multiple ways.\n> \n> Yes, that is why I have created the patch.\n\nIt seems premising to be used in prepeated trial-and-error recovery by\nwell experiecned operators. When it is used, I think that the target\ngoes back gradually through repetitions so anyway we need to start\nfrom a clean backup for each repetition, in the expected\nusage. Unintended promotion doesn't harm in the case.\n\nIn this persipective, I don't think the behavior is totally wrong but\nFATAL'ing at EO-WAL before target seems good to do.\n\n> > I think what we should do is if we specify a recovery target and we don't reach it, we should\n> > ereport(FATAL). Somewhere around\n> > \n> If recovery pauses or a FATAL error is reported, is not important, as long as it is possible to get some more WAL and continue recovery. Pause has the benefit of the possibility to inspect tables in the database.\n> \n> > in StartupXLOG(), where we already check for other conditions that are undesirable at the end of\n> > recovery. Then a user can make fixes either by getting more WAL files to restore and adjusting the\n> > recovery target and starting again. 
I don't think pausing is the right behavior, but perhaps an\n> > argument could be made to offer it as a nondefault behavior.\n> \n> Pausing was choosen in the patch as pause was the expected behaivior if target was reached.\n> \n> And the patch does not interfere with any other functionality as far as I know.\n\nWith the current behavior, if server promotes without stopping as told\nby target_action variables, it is a sign that something's wrong. But\nif server pauses before reaching target, operators may overlook the\nmessage if they don't know of the behavior. And if server poses in the\ncase, I think there's nothing to do.\n\nSo +1 for FATAL.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Nov 2019 13:26:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "On 2019-11-21 13:50, Leif Gunnar Erlandsen wrote:\n> Pausing was choosen in the patch as pause was the expected behaivior if target was reached.\n\nPausing is the expect behavior when the target is reached because that \nis the default setting of recovery_target_action. Your patch does not \ntake recovery_target_action into account.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 11:50:49 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "\"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> skrev 22. november 2019 kl. 05:26:\n\n> Hello, Lief, Peter.\n> \n> At Thu, 21 Nov 2019 12:50:18 +0000, \"Leif Gunnar Erlandsen\" <leif@lako.no> wrote in \n> \n>> Adding another patch which is not only for recovery_target_time but also for xid, name and lsn.\n>> \n>> After studying this a bit more, I think the current behavior is totally bogus and needs a serious\n>> rethink.\n>> \n>> If you specify a recovery target and it is reached, recovery pauses (depending on\n>> recovery_target_action).\n>> \n>> If you specify a recovery target and it is not reached when the end of the archive is reached\n>> (i.e., restore_command fails), then recovery ends and the server is promoted, without any further\n>> information. This is clearly wrong in multiple ways.\n>> \n>> Yes, that is why I have created the patch.\n> \n> It seems premising to be used in prepeated trial-and-error recovery by\n> well experiecned operators. When it is used, I think that the target\n> goes back gradually through repetitions so anyway we need to start\n> from a clean backup for each repetition, in the expected\n> usage. Unintended promotion doesn't harm in the case.\nIf going back in time and gradually recover less WAL todays behaiviour is adequate.\nThe patch is for circumstances where for some reason you do not have all the WAL's ready at once.\n\n> \n> In this persipective, I don't think the behavior is totally wrong but\n> FATAL'ing at EO-WAL before target seems good to do.\n> \n>> I think what we should do is if we specify a recovery target and we don't reach it, we should\n>> ereport(FATAL). Somewhere around\n>> \n>> If recovery pauses or a FATAL error is reported, is not important, as long as it is possible to get\n>> some more WAL and continue recovery. 
Pause has the benefit of the possibility to inspect tables in\n>> the database.\n>> \n>> in StartupXLOG(), where we already check for other conditions that are undesirable at the end of\n>> recovery. Then a user can make fixes either by getting more WAL files to restore and adjusting the\n>> recovery target and starting again. I don't think pausing is the right behavior, but perhaps an\n>> argument could be made to offer it as a nondefault behavior.\n>> \n>> Pausing was choosen in the patch as pause was the expected behaivior if target was reached.\n>> \n>> And the patch does not interfere with any other functionality as far as I know.\n> \n> With the current behavior, if server promotes without stopping as told\n> by target_action variables, it is a sign that something's wrong. But\n> if server pauses before reaching target, operators may overlook the\n> message if they don't know of the behavior. And if server poses in the\n> case, I think there's nothing to do.\nYes, that is correct. FATAL might be the correct behaiviour.\n> \n> So +1 for FATAL.\n> \n> regards.\n> \n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Nov 2019 11:23:43 +0000",
"msg_from": "\"Leif Gunnar Erlandsen\" <leif@lako.no>",
"msg_from_op": true,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "\"Peter Eisentraut\" <peter.eisentraut@2ndquadrant.com> skrev 22. november 2019 kl. 11:50:\n\n> On 2019-11-21 13:50, Leif Gunnar Erlandsen wrote:\n> \n>> Pausing was choosen in the patch as pause was the expected behaivior if target was reached.\n> \n> Pausing is the expect behavior when the target is reached because that is the default setting of\n> recovery_target_action. Your patch does not take recovery_target_action into account.\n\nNo it does not. It works well to demonstrate its purpose though.\nAnd it might be to stop with FATAL would be more correct.\n\n> \n> -- Peter Eisentraut http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Nov 2019 11:26:59 +0000",
"msg_from": "\"Leif Gunnar Erlandsen\" <leif@lako.no>",
"msg_from_op": true,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 11:26:59AM +0000, Leif Gunnar Erlandsen wrote:\n> No it does not. It works well to demonstrate its purpose though.\n> And it might be to stop with FATAL would be more correct.\n\nThis is still under active discussion. Please note that the latest\npatch does not apply, so a rebase would be nice to have. I have moved\nthe patch to next CF, waiting on author.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 11:08:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "Adding patch written for 13dev from git\n\n\"Michael Paquier\" <michael@paquier.xyz> skrev 1. desember 2019 kl. 03:08:\n\n> On Fri, Nov 22, 2019 at 11:26:59AM +0000, Leif Gunnar Erlandsen wrote:\n> \n>> No it does not. It works well to demonstrate its purpose though.\n>> And it might be to stop with FATAL would be more correct.\n> \n> This is still under active discussion. Please note that the latest\n> patch does not apply, so a rebase would be nice to have. I have moved\n> the patch to next CF, waiting on author.\n> --\n> Michael",
"msg_date": "Wed, 11 Dec 2019 11:40:26 +0000",
"msg_from": "\"Leif Gunnar Erlandsen\" <leif@lako.no>",
"msg_from_op": true,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "On 2019-12-11 12:40, Leif Gunnar Erlandsen wrote:\n> Adding patch written for 13dev from git\n> \n> \"Michael Paquier\" <michael@paquier.xyz> skrev 1. desember 2019 kl. 03:08:\n> \n>> On Fri, Nov 22, 2019 at 11:26:59AM +0000, Leif Gunnar Erlandsen wrote:\n>>\n>>> No it does not. It works well to demonstrate its purpose though.\n>>> And it might be to stop with FATAL would be more correct.\n>>\n>> This is still under active discussion. Please note that the latest\n>> patch does not apply, so a rebase would be nice to have. I have moved\n>> the patch to next CF, waiting on author.\n\nI reworked your patch a bit. I changed the outcome to be an error, as \nwas discussed. I also added tests and documentation. Please take a look.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 14 Jan 2020 21:13:51 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "At Tue, 14 Jan 2020 21:13:51 +0100, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> On 2019-12-11 12:40, Leif Gunnar Erlandsen wrote:\n> > Adding patch written for 13dev from git\n> > \"Michael Paquier\" <michael@paquier.xyz> skrev 1. desember 2019\n> > kl. 03:08:\n> > \n> >> On Fri, Nov 22, 2019 at 11:26:59AM +0000, Leif Gunnar Erlandsen wrote:\n> >>\n> >>> No it does not. It works well to demonstrate its purpose though.\n> >>> And it might be to stop with FATAL would be more correct.\n> >>\n> >> This is still under active discussion. Please note that the latest\n> >> patch does not apply, so a rebase would be nice to have. I have moved\n> >> the patch to next CF, waiting on author.\n> \n> I reworked your patch a bit. I changed the outcome to be an error, as\n> was discussed. I also added tests and documentation. Please take a\n> look.\n\nIt doesn't show how far the last recovery actually reached. I don't\nthink activating resource managers harms. Don't we check the\nnot-reached condition *only* after the else block of the \"if (record\n!= NULL)\" statement?\n\n> /* just have to read next record after CheckPoint */\n> record = ReadRecord(xlogreader, InvalidXLogRecPtr, LOG, false);\n> }\n>\n> if (record != NULL)\n> {\n>\t...\n>\t}\n>\telse\n>\t{\n> /* there are no WAL records following the checkpoint */\n> ereport(LOG,\n> (errmsg(\"redo is not required\")));\n> }\n>\n+ if (recoveryTarget != RECOVERY_TARGET_UNSET && !reachedStopPoint)\n..\n\n\nrecvoery_target_* is not cleared after startup. 
If a server crashed\njust after the last shutdown checkpoint, any recovery_target_* setting\nprevents the server from starting regardless of its value.\n\n> LOG: database system was not properly shut down; automatic recovery in progress\n> LOG: invalid record length at 0/9000420: wanted 24, got 0\n(recovery is skipped)\n> FATAL: recovery ended before configured recovery target was reached\n\nI think we should ignore the setting while crash recovery. Targeted\nrecovery mode is documented as a feature of archive recovery. Perhaps\nArchiveRecoveryRequested is needed in the condition.\n\n> if (ArchiveRecoveryRequested &&\n> recoveryTarget != RECOVERY_TARGET_UNSET && !reachedStopPoint)\n\t \nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Jan 2020 11:02:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "FWIW, I restate this (perhaps) more clearly.\n\nAt Wed, 15 Jan 2020 11:02:24 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> recvoery_target_* is not cleared after startup. If a server crashed\n> just after the last shutdown checkpoint, any recovery_target_* setting\n> prevents the server from starting regardless of its value.\n\nrecvoery_target_* is not automatically cleared after a successful\narchive recovery. After that, if the server crashed just after the\nlast shutdown checkpoint, any recovery_target_* setting prevents the\nserver from starting regardless of its value.\n\n> > LOG: database system was not properly shut down; automatic recovery in progress\n> > LOG: invalid record length at 0/9000420: wanted 24, got 0\n> (recovery is skipped)\n> > FATAL: recovery ended before configured recovery target was reached\n> \n> I think we should ignore the setting while crash recovery. Targeted\n> recovery mode is documented as a feature of archive recovery. Perhaps\n> ArchiveRecoveryRequested is needed in the condition.\n> \n> > if (ArchiveRecoveryRequested &&\n> > recoveryTarget != RECOVERY_TARGET_UNSET && !reachedStopPoint)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Jan 2020 13:02:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "> \"Peter Eisentraut\" <peter.eisentraut@2ndquadrant.com> skrev 14. januar 2020 kl. 21:13:\n> \n> On 2019-12-11 12:40, Leif Gunnar Erlandsen wrote:\n>> Adding patch written for 13dev from git\n>> \"Michael Paquier\" <michael@paquier.xyz> skrev 1. desember 2019 kl. 03:08:\n>> On Fri, Nov 22, 2019 at 11:26:59AM +0000, Leif Gunnar Erlandsen wrote:\n> \n>> No it does not. It works well to demonstrate its purpose though.\n>> And it might be to stop with FATAL would be more correct.\n> \n> This is still under active discussion. Please note that the latest\n> patch does not apply, so a rebase would be nice to have. I have moved\n> the patch to next CF, waiting on author.\n> \n> I reworked your patch a bit. I changed the outcome to be an error, as was discussed. I also added\n> tests and documentation. Please take a look.\n\nThank you, it was not unexpexted for the patch to be a little bit smaller.\nAlthough it would have been nice to log where recover ended before reporting fatal error.\nAnd since you use RECOVERY_TARGET_UNSET, RECOVERY_TARGET_IMMEDIATE also gets included, is this correct?\n\n\n> -- Peter Eisentraut http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 Jan 2020 08:25:17 +0000",
"msg_from": "\"Leif Gunnar Erlandsen\" <leif@lako.no>",
"msg_from_op": true,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "On 2020-01-15 05:02, Kyotaro Horiguchi wrote:\n> FWIW, I restate this (perhaps) more clearly.\n> \n> At Wed, 15 Jan 2020 11:02:24 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>> recvoery_target_* is not cleared after startup. If a server crashed\n>> just after the last shutdown checkpoint, any recovery_target_* setting\n>> prevents the server from starting regardless of its value.\n> \n> recvoery_target_* is not automatically cleared after a successful\n> archive recovery. After that, if the server crashed just after the\n> last shutdown checkpoint, any recovery_target_* setting prevents the\n> server from starting regardless of its value.\n\nThank you for this clarification. Here is a new patch that addresses \nthat and also the other comments raised about my previous patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 27 Jan 2020 12:16:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "Hello.\n\nAt Mon, 27 Jan 2020 12:16:02 +0100, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> On 2020-01-15 05:02, Kyotaro Horiguchi wrote:\n> > FWIW, I restate this (perhaps) more clearly.\n> > At Wed, 15 Jan 2020 11:02:24 +0900 (JST), Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote in\n> >> recvoery_target_* is not cleared after startup. If a server crashed\n> >> just after the last shutdown checkpoint, any recovery_target_* setting\n> >> prevents the server from starting regardless of its value.\n> > recvoery_target_* is not automatically cleared after a successful\n> > archive recovery. After that, if the server crashed just after the\n> > last shutdown checkpoint, any recovery_target_* setting prevents the\n> > server from starting regardless of its value.\n> \n> Thank you for this clarification. Here is a new patch that addresses\n> that and also the other comments raised about my previous patch.\n\nThe code looks fine, renaming reachedStopPoint to\nreachedRecoveryTarget looks very nice. Doc part looks fine, too.\n\n\nPostgresNode.pm\n+Is has_restoring is used, standby mode is used by default. To use\n\n\"Is has_restoring used,\", or \"If has_restoring is used,\" ?\n\n\n+\t$params{standby} = 1 unless defined $params{standby};\n..\n-\t$self->enable_restoring($root_node) if $params{has_restoring};\n+\t$self->enable_restoring($root_node, $params{standby}) if $params{has_restoring};\n...\n+\tif ($standby)\n+\t{\n+\t\t$self->set_standby_mode();\n+\t}\n+\telse\n+\t{\n+\t\t$self->set_recovery_mode();\n+\t}\n\nThe change seems aiming not to break compatibility with external test\nscripts, but it looks quite confusing to me. The problem here is both\nenable_streaming/restoring independently put trigger files, so don't\nwe separte placing of trigger files out of the functions?\n\nAs you know, set_standby_mode and set_recovery_mode are described as\n\"internal\" but actually used by some test scripts. 
I think it's\npreferable that the functions are added in POD rather than change the\ncallers not to used them.\n\nThe attached patch on top of yours does that. It might be too much but\nwhat do you think about that?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 28 Jan 2020 14:01:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "Great job with the patch Peter, it has been even cleaner than before after you moved the check.\n\n\n> \"Peter Eisentraut\" <peter.eisentraut@2ndquadrant.com> skrev 27. januar 2020 kl. 12:16:\n\n\n",
"msg_date": "Tue, 28 Jan 2020 17:12:07 +0000",
"msg_from": "\"Leif Gunnar Erlandsen\" <leif@lako.no>",
"msg_from_op": true,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "On 2020-01-28 06:01, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Mon, 27 Jan 2020 12:16:02 +0100, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in\n>> On 2020-01-15 05:02, Kyotaro Horiguchi wrote:\n>>> FWIW, I restate this (perhaps) more clearly.\n>>> At Wed, 15 Jan 2020 11:02:24 +0900 (JST), Kyotaro Horiguchi\n>>> <horikyota.ntt@gmail.com> wrote in\n>>>> recvoery_target_* is not cleared after startup. If a server crashed\n>>>> just after the last shutdown checkpoint, any recovery_target_* setting\n>>>> prevents the server from starting regardless of its value.\n>>> recvoery_target_* is not automatically cleared after a successful\n>>> archive recovery. After that, if the server crashed just after the\n>>> last shutdown checkpoint, any recovery_target_* setting prevents the\n>>> server from starting regardless of its value.\n>>\n>> Thank you for this clarification. Here is a new patch that addresses\n>> that and also the other comments raised about my previous patch.\n> \n> The code looks fine, renaming reachedStopPoint to\n> reachedRecoveryTarget looks very nice. Doc part looks fine, too.\n> \n> \n> PostgresNode.pm\n> +Is has_restoring is used, standby mode is used by default. To use\n> \n> \"Is has_restoring used,\", or \"If has_restoring is used,\" ?\n\nCommitted with that fix.\n\n> The change seems aiming not to break compatibility with external test\n> scripts, but it looks quite confusing to me. The problem here is both\n> enable_streaming/restoring independently put trigger files, so don't\n> we separte placing of trigger files out of the functions?\n\nYeah, this is all historically grown, but a major refactoring seems out \nof scope for this thread. It seems hard to come up with a more elegant \nway, since after all the underlying mechanisms are also all intertwined. 
\n Your patch adds even more code, so I'm not sure it's an improvement.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 29 Jan 2020 16:01:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
},
{
"msg_contents": "At Wed, 29 Jan 2020 16:01:46 +0100, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> On 2020-01-28 06:01, Kyotaro Horiguchi wrote:\n> > The code looks fine, renaming reachedStopPoint to\n> > reachedRecoveryTarget looks very nice. Doc part looks fine, too.\n> > PostgresNode.pm\n> > +Is has_restoring is used, standby mode is used by default. To use\n> > \"Is has_restoring used,\", or \"If has_restoring is used,\" ?\n> \n> Committed with that fix.\n\nThanks.\n\n> > The change seems aiming not to break compatibility with external test\n> > scripts, but it looks quite confusing to me. The problem here is both\n> > enable_streaming/restoring independently put trigger files, so don't\n> > we separte placing of trigger files out of the functions?\n> \n> Yeah, this is all historically grown, but a major refactoring seems\n> out of scope for this thread. It seems hard to come up with a more\n> elegant way, since after all the underlying mechanisms are also all\n> intertwined. Your patch adds even more code, so I'm not sure it's an\n> improvement.\n\nYeah, as I wrote, I thogut that as too much, but I think at least POD\npart for the internal-but-externally-used routines would be\nneeded. Don't we?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 30 Jan 2020 13:01:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pause recovery if pitr target not reached"
}
] |
[
{
"msg_contents": "Hi,\n\nWhy view needs instead of trigger to be the target of \"copy to\"?\nWith default view rule, the insert would be successful, so it should\nalso works for copy.\n\nThe reason to ask this question is I need to \"copy to\" the view using\n\"replica\" session role. But instead of trigger on view could not be\nset to \"enable always\" or \"enable replica\", because \"alter table\"\nwould error it's not a base table, e.g.\n\ntmp=# alter table foobar2_view enable always trigger foobar2_view_trigger;\nERROR: \"foobar2_view\" is not a table or foreign table\n\nHelp please, thanks.\n\nRegards,\nJinhua Luo\n\n\n",
"msg_date": "Tue, 17 Sep 2019 19:33:19 +0800",
"msg_from": "Jinhua Luo <luajit.io@gmail.com>",
"msg_from_op": true,
"msg_subject": "About copy to view"
}
] |
[
{
"msg_contents": "Hello, hackers!\n\nWe got an error for pg_upgrade check on the branch REL_11_STABLE (commit \n40ad4202513c72f5c1beeb03e26dfbc8890770c0) on Solaris 10 because IIUC the \nargument to the sed command is not enclosed in quotation marks (see \n[1]):\n\n$ gmake -C src/bin/pg_upgrade/ check\n<...>\nMAKE=gmake \nbindir=\"/home/buildfarm/mpolyakova/postgrespro_REL_11_STABLE/inst/bin\" \nlibdir=\"/home/buildfarm/mpolyakova/postgrespro_REL_11_STABLE/inst/lib\" \nEXTRA_REGRESS_OPTS=\"\" /bin/sh test.sh --install\ntest.sh: MSYS/MINGW/: not found\ngmake: *** [check] Error 1\ngmake: Leaving directory \n`/home/buildfarm/mpolyakova/postgrespro_REL_11_STABLE/src/bin/pg_upgrade'\n$ sed: command garbled: s/\n\nAttached diff.patch fixes the problem.\n\nAbout the system: SunOS, Release 5.10, KernelID Generic_141444-09.\nAbout the used shell: according to the manual, it comes from the package \nSUNWcsu.\n\nThanks to Victor Wagner for his help to investigate this issue.\n\n[1] $ man sh\n<...>\nQuoting\nThe following characters have a special meaning to the shell and cause \ntermination of a word unless quoted:\n; & ( ) | ^ < > newline space tab\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 17 Sep 2019 19:07:31 +0300",
"msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "pg_upgrade check fails on Solaris 10"
},
{
"msg_contents": "On 2019-Sep-17, Marina Polyakova wrote:\n\n> Hello, hackers!\n> \n> We got an error for pg_upgrade check on the branch REL_11_STABLE (commit\n> 40ad4202513c72f5c1beeb03e26dfbc8890770c0) on Solaris 10 because IIUC the\n> argument to the sed command is not enclosed in quotation marks (see [1]):\n\nHmm, I'm surprised it has taken this long to detect the problem.\n\n> Attached diff.patch fixes the problem.\n\nI have pushed it to all branches that have src/bin/pg_upgrade (namely,\n9.5 onwards), thanks. I hope this won't make the msys/mingw machines\nangry ;-)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 18 Sep 2019 11:36:00 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade check fails on Solaris 10"
},
{
"msg_contents": "On 2019-09-18 17:36, Alvaro Herrera wrote:\n> On 2019-Sep-17, Marina Polyakova wrote:\n> \n>> Hello, hackers!\n>> \n>> We got an error for pg_upgrade check on the branch REL_11_STABLE \n>> (commit\n>> 40ad4202513c72f5c1beeb03e26dfbc8890770c0) on Solaris 10 because IIUC \n>> the\n>> argument to the sed command is not enclosed in quotation marks (see \n>> [1]):\n> \n> Hmm, I'm surprised it has taken this long to detect the problem.\n\nLooking at the members of buildfarm [1] castoroides and protosciurus - \nIIUC they do not check pg_upgrade. And I was that lucky one who have run \nthe branch with the latest commits at our buildfarm...\n\n>> Attached diff.patch fixes the problem.\n> \n> I have pushed it to all branches that have src/bin/pg_upgrade (namely,\n> 9.5 onwards), thanks. I hope this won't make the msys/mingw machines\n> angry ;-)\n\nThank you! I ran pg_upgrade tests for MSYS, everything is fine.\n\nThe branch REL9_4_STABLE (commit \n8a17afe84be6fefe76d0d2f4d26c5ee075e64487) has the same issue - according \nto the release table [2] it is still supported, isn't it?...\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_members.pl\n[2] https://www.postgresql.org/support/versioning/\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 23 Sep 2019 18:57:04 +0300",
"msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade check fails on Solaris 10"
},
{
"msg_contents": "On 2019-Sep-23, Marina Polyakova wrote:\n\n> On 2019-09-18 17:36, Alvaro Herrera wrote:\n> > On 2019-Sep-17, Marina Polyakova wrote:\n> > \n\n> > > We got an error for pg_upgrade check on the branch REL_11_STABLE\n> > > (commit\n> > > 40ad4202513c72f5c1beeb03e26dfbc8890770c0) on Solaris 10 because IIUC\n> > > the\n> > > argument to the sed command is not enclosed in quotation marks (see\n> > > [1]):\n> > \n> > Hmm, I'm surprised it has taken this long to detect the problem.\n> \n> Looking at the members of buildfarm [1] castoroides and protosciurus - IIUC\n> they do not check pg_upgrade. And I was that lucky one who have run the\n> branch with the latest commits at our buildfarm...\n\nArgh.\n\nBut I meant \"how come nobody runs pg_upgrade tests on old Solaris?\"\n\n> > I have pushed it to all branches that have src/bin/pg_upgrade (namely,\n> > 9.5 onwards), thanks. I hope this won't make the msys/mingw machines\n> > angry ;-)\n> \n> Thank you! I ran pg_upgrade tests for MSYS, everything is fine.\n> \n> The branch REL9_4_STABLE (commit 8a17afe84be6fefe76d0d2f4d26c5ee075e64487)\n> has the same issue - according to the release table [2] it is still\n> supported, isn't it?...\n\nYeah, but pg_upgrade is in contrib/ in 9.4, so nowhere as good as from\n9.5 onwards; and it's going to die in a couple of months anyway, so I'm\nnot thrilled about fixing this there.\n\nIf you *need* to have this fixed in 9.4, we can do that, but do you?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 23 Sep 2019 13:41:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade check fails on Solaris 10"
},
{
"msg_contents": "On 2019-09-23 19:41, Alvaro Herrera wrote:\n> On 2019-Sep-23, Marina Polyakova wrote:\n>> The branch REL9_4_STABLE (commit \n>> 8a17afe84be6fefe76d0d2f4d26c5ee075e64487)\n>> has the same issue - according to the release table [2] it is still\n>> supported, isn't it?...\n> \n> Yeah, but pg_upgrade is in contrib/ in 9.4, so nowhere as good as from\n> 9.5 onwards; and it's going to die in a couple of months anyway, so I'm\n> not thrilled about fixing this there.\n> \n> If you *need* to have this fixed in 9.4, we can do that, but do you?\n\nNo, we don't. I just noticed :-)\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 24 Sep 2019 10:00:08 +0300",
"msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade check fails on Solaris 10"
}
] |
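The quoting bug discussed in the thread above can be sketched in isolation. This is a hypothetical illustration of why an unquoted sed argument breaks, not the actual pg_upgrade test script that was patched:

```shell
# Hypothetical illustration, not the actual pg_upgrade test script:
# a sed script containing spaces must be quoted, or the shell
# word-splits it and sed receives only a truncated script.
script='s/hello world/goodbye all/'

# Quoted: the whole script arrives as one argument and works.
printf '%s\n' "hello world" | sed "$script"    # prints: goodbye all

# Unquoted (buggy): sed would be invoked as
#   sed s/hello world/goodbye all/
# i.e. with three arguments, and fail with an
# "unterminated `s' command" error.
# printf '%s\n' "hello world" | sed $script
```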
[
{
"msg_contents": "Hi\n\nWhen I tested some hypothesis I wrote buggy code. It was surprise how fast\nI lost all free memory\n\ndo $$\nbegin\n  for i in 1..3000000\n  loop\n    begin\n      -- do some error\n      if i then end if;\n    exception when others then\n      -- do nothing\n    end;\n  end loop;\nend;\n$$;\n\nproblem is somewhere in implicit casting inside IF statement. When I use\nexplicit casting -\n\n  IF i::boolean THEN\n\nthen there is not memory leak.\n\nRegards\n\nPavel\n\n",
"msg_date": "Tue, 17 Sep 2019 18:50:35 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "strong memory leak in plpgsql from handled rollback and lazy cast"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> When I tested some hypothesis I wrote buggy code. It was surprise how fast\n> I lost all free memory\n\n> do $$\n> begin\n> for i in 1..3000000\n> loop\n> begin\n> -- do some error\n> if i then end if;\n> exception when others then\n> -- do nothing\n> end;\n> end loop;\n> end;\n> $$;\n\nYeah, this is because an error gets thrown inside the cast-to-boolean.\nIt's intentional that the execution state tree gets thrown away if that\nhappens, per the comment in get_cast_hashentry:\n\n * Prepare the expression for execution, if it's not been done already in\n * the current transaction; also, if it's marked busy in the current\n * transaction, abandon that expression tree and build a new one, so as to\n * avoid potential problems with recursive cast expressions and failed\n * executions. (We will leak some memory intra-transaction if that\n * happens a lot, but we don't expect it to.) It's okay to update the\n\nI'm not convinced that it'd be safe to re-use an ExprState after a\nprevious execution failed (though perhaps Andres has a different opinion?)\nso I think the only way to avoid the intratransaction memory leak would\nbe to set up each new cast ExprState in its own memory context that we\ncould free. That seems like adding quite a lot of overhead to get rid\nof a leak that we've been living with for ages.\n\nMaybe we could pay the extra overhead only after the expression has\nfailed at least once. Seems a bit messy though, and I'm afraid that\nwe'd have to add PG_TRY overhead in any case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Sep 2019 18:43:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: strong memory leak in plpgsql from handled rollback and lazy cast"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-22 18:43:23 -0400, Tom Lane wrote:\n> I'm not convinced that it'd be safe to re-use an ExprState after a\n> previous execution failed (though perhaps Andres has a different\n> opinion?)\n\nI don't immediately see why it'd be problematic to reuse at a later\ntime, as long as it's guaranteed that a) there's only one execution\nhappening at a time b) the failure wasn't in the middle of writing a\nvalue. a) would be problematic regardless of reuse-after-failure, and\nb) should be satisfied by only failing at ereport etc.\n\nMost memory writes during ExprState evaluation are redone from scratch\nevery execution, and the remaining things should only be things like\ntupledesc's being cached at first execution. And that even uses an\nExprContext callback to reset the cache on context shutdown.\n\nThe other piece is that on the first execution of a expression we use\nExecInterpExprStillValid, and we don't on later executions. Not sure if\nthat's relevant here?\n\n\n> so I think the only way to avoid the intratransaction memory leak would\n> be to set up each new cast ExprState in its own memory context that we\n> could free. That seems like adding quite a lot of overhead to get rid\n> of a leak that we've been living with for ages.\n\nHm. I interestingly am working on a patch that merges all the memory\nallocations done for an ExprState into one or two allocations (by\nbasically doing the traversal twice). Then it'd be feasible to just\npfree() the memory, if that helps.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 Sep 2019 16:57:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: strong memory leak in plpgsql from handled rollback and lazy cast"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-09-22 18:43:23 -0400, Tom Lane wrote:\n>> I'm not convinced that it'd be safe to re-use an ExprState after a\n>> previous execution failed (though perhaps Andres has a different\n>> opinion?)\n\n> I don't immediately see why it'd be problematic to reuse at a later\n> time, as long as it's guaranteed that a) there's only one execution\n> happening at a time b) the failure wasn't in the middle of writing a\n> value. a) would be problematic regardless of reuse-after-failure, and\n> b) should be satisfied by only failing at ereport etc.\n\nI think you're considering a much smaller slice of the system than\nwhat seems to me to be at risk here. As an example, an ExprState\ntree would also include any fn_extra-linked state that individual\nfunctions might've set up. We've got very little control over what\nthe validity requirements are for those or how robust the code that\ncreates them is. I *think* that most of the core code that makes\nsuch things is written in a way that it doesn't leave partially-valid\ncache state if setup fails partway through ... but I wouldn't swear\nthat it all is, and I'd certainly bet money on there being third-party\ncode that isn't careful about that.\n\n> Hm. I interestingly am working on a patch that merges all the memory\n> allocations done for an ExprState into one or two allocations (by\n> basically doing the traversal twice). Then it'd be feasible to just\n> pfree() the memory, if that helps.\n\nAgain, that'd do nothing to clean up subsidiary fn_extra state.\nIf we want no leaks, we need a separate context for the tree\nto live in.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Sep 2019 20:16:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: strong memory leak in plpgsql from handled rollback and lazy cast"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-22 20:16:15 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-09-22 18:43:23 -0400, Tom Lane wrote:\n> >> I'm not convinced that it'd be safe to re-use an ExprState after a\n> >> previous execution failed (though perhaps Andres has a different\n> >> opinion?)\n> \n> > I don't immediately see why it'd be problematic to reuse at a later\n> > time, as long as it's guaranteed that a) there's only one execution\n> > happening at a time b) the failure wasn't in the middle of writing a\n> > value. a) would be problematic regardless of reuse-after-failure, and\n> > b) should be satisfied by only failing at ereport etc.\n> \n> I think you're considering a much smaller slice of the system than\n> what seems to me to be at risk here.\n\nYea, I was only referencing the expression eval logic itself, as I\nunderstood your question to aim mainly at that...\n\n\n> As an example, an ExprState tree would also include any\n> fn_extra-linked state that individual functions might've set up.\n> We've got very little control over what the validity requirements are\n> for those or how robust the code that creates them is.  I *think* that\n> most of the core code that makes such things is written in a way that\n> it doesn't leave partially-valid cache state if setup fails partway\n> through ... but I wouldn't swear that it all is, and I'd certainly bet\n> money on there being third-party code that isn't careful about that.\n\nHm. I'd be kinda willing to just declare such code broken. But given\nthat the memory leak situation, as you say, still exists, I don't think\nit matters for now.\n\n\n> > Hm. I interestingly am working on a patch that merges all the memory\n> > allocations done for an ExprState into one or two allocations (by\n> > basically doing the traversal twice). Then it'd be feasible to just\n> > pfree() the memory, if that helps.\n> \n> Again, that'd do nothing to clean up subsidiary fn_extra state.\n> If we want no leaks, we need a separate context for the tree\n> to live in.\n\nWell, it'd presumably leak a lot less :/.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 Sep 2019 18:59:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: strong memory leak in plpgsql from handled rollback and lazy cast"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-22 20:16:15 -0400, Tom Lane wrote:\n> I think you're considering a much smaller slice of the system than\n> what seems to me to be at risk here. As an example, an ExprState\n> tree would also include any fn_extra-linked state that individual\n> functions might've set up.\n\n> > Hm. I interestingly am working on a patch that merges all the memory\n> > allocations done for an ExprState into one or two allocations (by\n> > basically doing the traversal twice). Then it'd be feasible to just\n> > pfree() the memory, if that helps.\n>\n> Again, that'd do nothing to clean up subsidiary fn_extra state.\n> If we want no leaks, we need a separate context for the tree\n> to live in.\n\nAs mentioned, as part of some expression evaluation improvements (both\nw/ and wo/ jit) I now have a prototype patch that moves nearly all the\ndynamically allocated memory, including the FunctionCallInfo, into one\nchunk of memory. That's currently allocated together with the ExprState.\nWith the information collected for that it'd be fairly trivial to reset\nthings like fn_extra in a reasonably efficient manner, without needing\nknowledge about each ExprEvalOp.\n\nObviously that'd not itself change e.g. anything about the lifetime of\nthe memory pointed to by fn_extra etc. But it'd not be particularly hard\nto have FmgrInfo->fn_mcxt point somewhere else.\n\nAs part of the above rework ExecInitExprRec() etc all now pass down a\nnew ExprStateBuilder * object, containing state needed somewhere down in\nthe expression tree. It'd e.g. be quite possible to add an\nExecInitExpr() variant that allows to specify in more detail what memory\ncontext ought to be used for what.\n\nI'm however not at all sure it's worth investing time into doing so\nspecifically for this case. But it seems like it might generally be\nsomething we'll need more infrastructure for in other cases too.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 16 Oct 2019 03:02:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: strong memory leak in plpgsql from handled rollback and lazy cast"
}
] |
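For reference, Pavel's reproducer and the explicit-cast variant from the thread above can be put side by side. The comments restate the diagnosis given in the thread (the failing implicit cast abandons its expression state each iteration), not independent analysis:

```sql
-- Leaks: the implicit integer-to-boolean coercion in IF throws an
-- error every iteration, and each abandoned cast expression tree is
-- retained for the rest of the transaction (per the comment in
-- get_cast_hashentry quoted in the thread).
DO $$
BEGIN
  FOR i IN 1..3000000 LOOP
    BEGIN
      IF i THEN END IF;        -- error thrown inside the cast
    EXCEPTION WHEN OTHERS THEN
      -- swallow the error; memory use grows steadily
    END;
  END LOOP;
END;
$$;

-- Does not leak: the explicit cast succeeds (nonzero maps to true),
-- so the cast expression state is never abandoned.
DO $$
BEGIN
  FOR i IN 1..3000000 LOOP
    IF i::boolean THEN END IF;
  END LOOP;
END;
$$;
```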
[
{
"msg_contents": "> * Client- and server-side encryption for authentication using GSSAPI\n\nThis is on the wire encryption, so I don't know why it says client-side\nand server-side. Proposal:\n\n* Encrypted TCP/IP connections using GSSAPI encryption\n\nin the major features section, and later\n\n* Add GSSAPI encryption support (Robbie Harwood, Stephen Frost)\n\n This allows TCP/IP connections to be encrypted when using GSSAPI\n authentication without having to set up a separate encryption facility\n like SSL.\n\n\n> * Discovery of LDAP servers if PostgreSQL is built with OpenLDAP\n\nI would remove the \"if\" part from the major features list, since it's a\nqualification of minor importance. Instead I'd write something like\n\n* Discovery of LDAP servers using DNS SRV\n\nwhich is a clearer concept that people can easily recognize.\n\n\n> * Allow data type name to use non-C collations\n\nI'm not sure why that is listed in the \"Migration\" section.\n\nIt's also a bit confusing as a release note item relative to PostgreSQL\n11. I believe the changes were that \"name\" was made collation aware and\nthat the collation was set to \"C\" in the system catalogs (which is a\nseparate item later). This group of items could use a reshuffling.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Sep 2019 19:09:09 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "some PostgreSQL 12 release notes comments"
},
{
"msg_contents": "On 9/17/19 1:09 PM, Peter Eisentraut wrote:\n>> * Client- and server-side encryption for authentication using GSSAPI\n> \n> This is on the wire encryption, so I don't know why it says client-side\n> and server-side. Proposal:\n> \n> * Encrypted TCP/IP connections using GSSAPI encryption\n\n+1, though I would s/GSSAPI encryption/ with s/GSSAPI authentcation/\n\n> in the major features section, and later\n> \n> * Add GSSAPI encryption support (Robbie Harwood, Stephen Frost)\n\nPerhaps \"* Add encrypted connection support for GSSAPI authentication\n(Robbie Harwood, Stephen Frost)\"\n\n> This allows TCP/IP connections to be encrypted when using GSSAPI\n> authentication without having to set up a separate encryption facility\n> like SSL.\n\n+1.\n\n>> * Discovery of LDAP servers if PostgreSQL is built with OpenLDAP\n> \n> I would remove the \"if\" part from the major features list, since it's a\n> qualification of minor importance. Instead I'd write something like\n> \n> * Discovery of LDAP servers using DNS SRV\n> \n> which is a clearer concept that people can easily recognize.\n\nI agree it's clearer, I'm not sure if the OpenLDAP semantic above\nchanges things? I'm not sure the relative frequency of PostgreSQL being\nbuilt with OpenLDAP vs. other LDAP libs.\n\nRegardless, I do like your change and would +1 it.\n\nWould you like me to make a patch for it or are you planning to?\n\n>> * Allow data type name to use non-C collations\n> \n> I'm not sure why that is listed in the \"Migration\" section.\n> \n> It's also a bit confusing as a release note item relative to PostgreSQL\n> 11. I believe the changes were that \"name\" was made collation aware and\n> that the collation was set to \"C\" in the system catalogs (which is a\n> separate item later). This group of items could use a reshuffling.\n\nI can't make an informed opinion on this one, so I defer to the experts.\n\nThanks!\n\nJonathan",
"msg_date": "Tue, 17 Sep 2019 15:55:11 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: some PostgreSQL 12 release notes comments"
},
{
"msg_contents": "On 2019-09-17 21:55, Jonathan S. Katz wrote:\n>> * Encrypted TCP/IP connections using GSSAPI encryption\n> \n> +1, though I would s/GSSAPI encryption/ with s/GSSAPI authentcation/\n\nBut you're not encrypting the communication using GSSAPI authentication,\nyou're encrypting it using GSSAPI encryption.\n\n>>> * Discovery of LDAP servers if PostgreSQL is built with OpenLDAP\n>>\n>> I would remove the \"if\" part from the major features list, since it's a\n>> qualification of minor importance. Instead I'd write something like\n>>\n>> * Discovery of LDAP servers using DNS SRV\n>>\n>> which is a clearer concept that people can easily recognize.\n> \n> I agree it's clearer, I'm not sure if the OpenLDAP semantic above\n> changes things? I'm not sure the relative frequency of PostgreSQL being\n> built with OpenLDAP vs. other LDAP libs.\n\nI suppose it's not-Windows vs. Windows.\n\nIt's OK if we mention OpenLDAP in the release notes, but it doesn't seem\nto belong in the major features section.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Sep 2019 22:10:41 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: some PostgreSQL 12 release notes comments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> * Add GSSAPI encryption support (Robbie Harwood, Stephen Frost)\n\n> This allows TCP/IP connections to be encrypted when using GSSAPI\n> authentication without having to set up a separate encryption facility\n> like SSL.\n\nHmm, does that imply that you don't have to have compiled --with-openssl,\nor just that you don't have to bother with setting up SSL certificates?\nBut you already don't have to do the latter. I'd be the first to admit\nthat I know nothing about GSSAPI, but this text still doesn't enlighten\nme about why I should learn.\n\n>> * Discovery of LDAP servers if PostgreSQL is built with OpenLDAP\n\n> I would remove the \"if\" part from the major features list, since it's a\n> qualification of minor importance.\n\nOK\n\n>> * Allow data type name to use non-C collations\n\n> I'm not sure why that is listed in the \"Migration\" section.\n\nI think Bruce was reacting to this comment in the commit log for\n478cacb50:\n\n Prior to v12, if you used a collation-sensitive regex feature in a\n pattern handled by processSQLNamePattern() (for instance, \\d '\\\\w+'\n in psql), the behavior you got matched the database's default collation.\n Since commit 586b98fdf you'd usually get C-collation behavior, because\n the catalog \"name\"-type columns are now marked as COLLATE \"C\". Add\n explicit COLLATE specifications to restore the prior behavior.\n\n (Note for whoever writes the v12 release notes: the need for this shows\n that while 586b98fdf preserved pre-v12 behavior of \"name\" columns for\n simple comparison operators, it changed the behavior of regex operators\n on those columns. Although this patch fixes it for pattern matches\n generated by our own tools, user-written queries will still be affected.\n So we'd better mention this issue as a compatibility item.)\n\nThe existing text for this item doesn't make that aspect clear\nenough, perhaps.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Sep 2019 16:22:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: some PostgreSQL 12 release notes comments"
},
{
"msg_contents": "On 9/17/19 4:10 PM, Peter Eisentraut wrote:\n> On 2019-09-17 21:55, Jonathan S. Katz wrote:\n>>> * Encrypted TCP/IP connections using GSSAPI encryption\n>>\n>> +1, though I would s/GSSAPI encryption/ with s/GSSAPI authentcation/\n> \n> But you're not encrypting the communication using GSSAPI authentication,\n> you're encrypting it using GSSAPI encryption.\n\nAh, I also did a s/using/for/ in my head, but that's still not correct.\n+1 to your suggested wording with no alterations.\n\n>>>> * Discovery of LDAP servers if PostgreSQL is built with OpenLDAP\n>>>\n>>> I would remove the \"if\" part from the major features list, since it's a\n>>> qualification of minor importance. Instead I'd write something like\n>>>\n>>> * Discovery of LDAP servers using DNS SRV\n>>>\n>>> which is a clearer concept that people can easily recognize.\n>>\n>> I agree it's clearer, I'm not sure if the OpenLDAP semantic above\n>> changes things? I'm not sure the relative frequency of PostgreSQL being\n>> built with OpenLDAP vs. other LDAP libs.\n> \n> I suppose it's not-Windows vs. Windows.\n> \n> It's OK if we mention OpenLDAP in the release notes, but it doesn't seem\n> to belong in the major features section.\n\nWell, if you're a Windows user, you may then be disappointed later on to\nfind out it does not work :)\n\nThat said, I'm ok with the more concise wording.\n\nJonathan",
"msg_date": "Tue, 17 Sep 2019 16:36:11 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: some PostgreSQL 12 release notes comments"
},
{
"msg_contents": "On 2019-09-17 22:22, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> * Add GSSAPI encryption support (Robbie Harwood, Stephen Frost)\n>> This allows TCP/IP connections to be encrypted when using GSSAPI\n>> authentication without having to set up a separate encryption facility\n>> like SSL.\n> Hmm, does that imply that you don't have to have compiled --with-openssl,\n> or just that you don't have to bother with setting up SSL certificates?\n> But you already don't have to do the latter. I'd be the first to admit\n> that I know nothing about GSSAPI, but this text still doesn't enlighten\n> me about why I should learn.\n\nIt means, more or less, if you already have the client and the server do\nthe GSS dance for authentication, you just have to turn on an additional\nflag and they'll also encrypt the communication while they're at it.\n\nThis does not require SSL support.\n\nSo if you already have a Kerberos infrastructure set up, you can get\nwire encryption for almost free without having to set up a parallel SSL\nCA infrastructure. Which is great for administration.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 18 Sep 2019 11:16:35 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: some PostgreSQL 12 release notes comments"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 7:09 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n>\n> > * Discovery of LDAP servers if PostgreSQL is built with OpenLDAP\n>\n> I would remove the \"if\" part from the major features list, since it's a\n> qualification of minor importance. Instead I'd write something like\n>\n> * Discovery of LDAP servers using DNS SRV\n>\n\n-> \"DNS SRV records\" I think?\n\n\nwhich is a clearer concept that people can easily recognize.\n>\n\n+1. The \"discovery\" part isn't actually part of LDAP, it's part of DNS, so\nleaving that part out does a disservice.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 18 Sep 2019 12:51:15 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: some PostgreSQL 12 release notes comments"
},
{
"msg_contents": "I've pushed some release note adjustments responding to your points\nabout the GSSAPI and name-collation entries. I see the LDAP text\nis fixed already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Sep 2019 15:26:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: some PostgreSQL 12 release notes comments"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> On 2019-09-17 22:22, Tom Lane wrote:\n> > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> >> * Add GSSAPI encryption support (Robbie Harwood, Stephen Frost)\n> >> This allows TCP/IP connections to be encrypted when using GSSAPI\n> >> authentication without having to set up a separate encryption facility\n> >> like SSL.\n> > Hmm, does that imply that you don't have to have compiled --with-openssl,\n> > or just that you don't have to bother with setting up SSL certificates?\n> > But you already don't have to do the latter. I'd be the first to admit\n> > that I know nothing about GSSAPI, but this text still doesn't enlighten\n> > me about why I should learn.\n> \n> It means, more or less, if you already have the client and the server do\n> the GSS dance for authentication, you just have to turn on an additional\n> flag and they'll also encrypt the communication while they're at it.\n> \n> This does not require SSL support.\n> \n> So if you already have a Kerberos infrastructure set up, you can get\n> wire encryption for almost free without having to set up a parallel SSL\n> CA infrastructure. Which is great for administration.\n\nRight- and more-over, you *do* get mutual authentication between the\nclient and the server when using Kerberos. This is markedly better than\n\"TLS/SSL with snakeoil certs, just to get encryption\"- it's just about\nequivilant to a full PKI environment with client and server validation\nand encryption, but without needing openssl or SSL of any kind.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 2 Oct 2019 03:09:30 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: some PostgreSQL 12 release notes comments"
}
] |
[
{
"msg_contents": "Hi folks,\n\nPrompted originally by a post by Roman Pekar [1], I wanted to share a revised version of a patch that allows REFCURSOR results to be consumed as data in a regular SQL query as well as my thoughts on how to improve the area as a whole.\n\nIn order to be clear about the purpose and how I see it fitting into a broader context, I’ve started a new thread and I’d welcome discussion about it.\n\n\nBackground\n----------\n\nThe ambition of this contribution is to make PostgreSQL able to efficiently support procedural language functions that either produce or consume sets (or both).\n\nPostgreSQL already has some support for this in functions that return SETOFs. However, as my review [3] identified, there are some gaps in PostgreSQL’s current capability, as well as scope for extension to improve its overall capability.\n\nThis first patch addresses only a small part of the overall ambition, but I wanted to share both the patch and the overall ambition as work in progress, and I’d welcome comments on both. (The patch is still based on 12beta2.)\n\n\nProblems to be solved\n---------------------\n\n1. Allow procedural languages (PLs) to efficiently consume sets of records\n\nPostgreSQL does allow PL functions to consume sets, however it does so be feeding records to the function, one row per function invocation. REFCURSORs, however can be supplied as input parameters and their content consumed by the function as it wishes, but only if the PL supports the REFCURSOR concept.\n\nTypically, PLs do allow SQL queries to be executed within the PL function [5, 6, 7]. However REFCURSOR results cannot be effectively consumed in a regular SQL SELECT, significantly limiting their use.\n\n\n2. Allow plpgsql functions to efficiently return sets of records\n\nBy ‘efficiently’, I mean that a large result set should not be required to be staged before the executor is able to process it. 
Staging is not an issue for small sets, but for large sets and especially if they are subject to further processing, intermediate staging it is a performance disadvantage.\n\nPostgreSQL already has some support for this functions that return SETOFs. At present, plpgsql cannot take advantage of this support while also achieving the efficiency criteria because, as the documentation [4] notes, all returned data is staged before it is retuned.\n\nAddressing this limitation could also of benefit to other PLs, however a quick survey finds at least PL Python is already well-adapted to efficiently return SETOFs.\n\n\n3. Allow optimisation of a returned query\n\nplpgsql offers a syntactic shortcut to return the results of a SQL query directly. Despite appearing to return a query, the RETURN QUERY syntax actually returns the /results/ of the query [4]. This means the optimiser has no opportunity to influence its execution, such as by pushing down expressions into its WHERE clause, or taking advantage of alternative indexes to modify its sort order.\n\nOther PLs are similarly constrained. Most PLs lack plpgsql’s syntactic sugar, but even though some PLs are better able to efficiently return SETOFs row-by-row, the optimiser cannot see “inside” the query that the PL executes even if its intent is to return the results directly.\n\nOnly SQL language functions are afforded the luxury of integration into then outer statement’s plan [8], but even SQL language functions are somewhat constrained in the types of dynamism that are permitted.\n\n\n4. Allow a set-returning query to be supplied as an input parameter\n\nIt is possible to supply a scalar value, or function call that returns a scalar value as an input parameter. And, with problems 1 & 2 addressed, sets can be supplied as input parameters. However, a literal set-returning SQL query cannot be supplied as a parameter (not without PostgreSQL invoking the ‘calling’ function for each row in the set). 
REFCURSORs cannot be constructed natively from SQL.\n\nA simplistic response would provide a trivial constructor for REFCURSORs, accepting the query as a text parameter. However it is quite unnatural to supply SQL in textual form, more especially to do so safely. So the challenge is to allow a set-returning subquery to be provided as a parameter in literal form.\n\n\nWhy are these problems important?\n---------------------------------\n\nMy personal wish is for PostgreSQL to offer a feature set that is consistent with itself and without arbitrary limitation. A better argument might be that that it is desirable to match the features of other RDBMSs [9], or for reason of the use cases they address [10] that are new or push the boundaries of what PostgreSQL can do [1], or that are important such as fulfilling a DW/ETL need [11], or to more directly address approaches touted of NoSQL such as Map Reduce [12].\n\n\nDesign and implementation\n-------------------------\n\n1. Set returning function (SRF) for REFCURSOR\n\nTackling first problems 1 and (part of) 2, it seems easy and obvious to allow that REFCURSORs can be consumed in a SELECT query.\n\nPostgreSQL already allows an array to be consumed one record per entry via the UNNEST(anyarray) built-in function [13]. 
Overloading UNNEST() to accept a REFCURSOR argument can be done easily, and the executor’s SRF machinery allows the result set to be consumed efficiently.\n\nWith such an implementation, and given a REFCURSOR-returning function, kv() the following syntax illustrates trivial usage:\n\nSELECT * \n FROM UNNEST (kv ('A')) \n AS (key TEXT, val NUMERIC);\n\nWith this UNNEST() construct, it is possible to consume a returned REFCURSOR inline in a single SQL statement.\n\nTo complete the example, the function kv() might trivially be defined as:\n\nCREATE FUNCTION kv (suffix text) \n RETURNS REFCURSOR \n STABLE LANGUAGE plpgsql \n AS $$ \nDECLARE \n cur REFCURSOR;\nBEGIN \n OPEN cur FOR EXECUTE \n 'SELECT * FROM kv_table_' || suffix;\n RETURN cur;\nEND;\n$$;\n\nOther obvious example setup includes:\n\ncreate table kv_table_a (key text, value numeric);\ninsert into kv_table_a select 'ITEM_A', generate_series (0, 99);\n\nIt is also possible to accept a REFCURSOR as an input parameter:\n\nCREATE FUNCTION limit_val (input_refcur text, val_limit numeric) \n RETURNS REFCURSOR \n STABLE LANGUAGE plpgsql \n AS $$ \nDECLARE \n cur REFCURSOR;\nBEGIN \n OPEN cur FOR SELECT * FROM UNNEST (input_refcur::REFCURSOR) as (key text, value numeric) WHERE value < val_limit;\n RETURN cur;\nEND;\n$$;\n\nSELECT * \n FROM UNNEST (limit_val (kv ('A')::text, 10))\n AS (key TEXT, val NUMERIC);\n\nHaving this construct, it is possible for plpgsql FUNCTION’s to both accept and return REFCURSOR variables. In plpgsql, is would be unnecessary to cast the REFCURSOR to and from text, but other PLs, presumably lacking first class knowledge of the REFCURSOR type, probably need to do so. In above example, limit_val() illustrates how a REFCURSOR can be accepted in text form.\n\nIn my patch, I’ve used the SPI APIs to access the Portal which lies behind the REFCURSOR. 
Although SPI seems to offer an easy interface, and it’s also what plpgsql uses internally, I’m not sure it wouldn’t be better to access the Portal directly.\n\nIt is interesting to note that Oracle names its similar construct TABLE() [9], rather than UNNEST(), and in later releases, its use is optional. TABLE is a reserved word and it would be unusual to overload it, although we could educate the parser to treat it specially. Oracle compatibility is an important consideration, but this is a niche area.\n\nIf we continue to use REFCURSOR, it is difficult to make some function call-like construct optional because it is already syntactically possible to select FROM a REFCURSOR-returning function. For example, SELECT * FROM kv ('A') is a valid and effective expression, despite being of questionable construction and utility.\n\nAn alternative might build on top of existing support for returning SETOFs, which already requires no UNNEST()-like construct. This is attractive in principle, but it makes some of the further extensions discussed below more awkward (in my opinion). \n\n\n2. Query-bound REFCURSORs\n\nProblem 3 could be addressed by educating the planner in how to extract the query inside the Portal behind the REFCURSOR.\n\nAt present, REFCURSORs are only bound to a query once they are OPENed, but when they are OPENed, the query is also fully planned, ready for immediate execution. The ambition is to allow the REFCURSOR’s query to be inlined within the outer query’s plan, so it seems wasteful to expend planner cycles, only for the plan to be thrown away.\n\nThe proposed implementation would (1) create an intermediate BOUND state for REFCURSORs, and (2) educate plpgsql about how to BIND unbound_cursorvar FOR query.\n\nMy first idea was to modify the REFCURSOR type itself, creating a new state, and adding storage for the BOUND query, but this seems infeasible without extensive hackery. 
The REFCURSOR type is a trivial veneer atop a C string (which contains the Portal name), so there is no internal structure to extend.\n\nSo my plan is to retain the direct coupling of REFCURSOR<->Portal, and to allow plpgsql to set the query text at BIND time via PortalDefineQuery(). Existing plpgsql code should be unaffected as it need know nothing about the new BOUND state.\n\nIn order for any of this to work, the planner has to be able to extract the query from the returned Portal. It seems inline_set_returning_function() is the right place to make this extraction. Adding specific recognition for a function call to UNNEST() with a single argument of type REFCURSOR is easy, and existing eval_const_expressions() semantics determine whether the single argument expression can be evaluated at plan time. (Of course, if it cannot, then it falls through to be processed at execution time by the REFCURSOR set returning function (SRF) described above.)\n\nIt feels slightly uncomfortable to have UNNEST(REFCURSOR) operate as a regular function, and also have specific recognition for UNNEST() elsewhere in the planner machinery. Arguably, this is already a kind of specific knowledge that inline_set_returning_function() has for SQL language FUNCTIONs, but the recognition I propose for UNNEST(REFCURSOR) is much narrower. An alternative might be to introduce a new type that inline_set_returning_function() can recognise (for example, INLINEABLE_QUERY), or to entirely separate the SRF case from the inlining case at a syntax level (for example, retaining UNNEST(REFCURSOR) as the SRF, but using INLINE(REFCURSOR) for the case at hand).\n\nI’d welcome input here. Although the implementation seems quite feasible, the SQL and plpgsql syntax is less obvious.\n\n\n3. 
Literal subquery type\n\nProblem 4 could be addressed by educating the parser specially about the REFCURSOR type when faced with a literal query.\n\nConsider this example:\n\nSELECT * \n FROM UNNEST (\n limit_val (\n REFCURSOR (\n SELECT key || '_COPY', value FROM kv_table_a\n ), 25)\n ) AS (key TEXT, val NUMERIC);\n\nThe REFCURSOR(literal_query) construct could be made to result in a BOUND REFCURSOR, in this case with SELECT key || '_COPY', value FROM kv_table_a, and then passed as a constant parameter to limit_val().\n\nUsefully, at present, the construct yields a syntax error: although REFCURSOR(string) is an already valid construct (being equivalent to CAST (string AS REFCURSOR)), it’s not acceptable to provide a literal subquery without parentheses. So, while REFCURSOR ((SELECT 'some_cursor')) is roughly equivalent to CAST ('some_cursor' AS REFCURSOR), the near-alternative of REFCURSOR (SELECT 'some_cursor') is quite simply not valid.\n\nIf I’m right, the task is simply a matter of plumbing special knowledge of a REFCURSOR(literal_subquery) construct through the parser. There are many affected code sites, so it is fiddly, but the work seems uncomplicated.\n\nEducating the parser about special types seems, again, slightly uncomfortable. An alternative might be to create an intermediate construct: for example, QUERY(literal_subquery) might be made to return the parse tree as a pg_node_tree (similar to how VIEWs are exposed in the system catalogue), and REFCURSOR(pg_node_tree) could consume it, yielding a joined-up construct of REFCURSOR(QUERY(literal_subquery)). 
However, we might also simply accept REFCURSOR(literal_subquery) to be special, and if/when another need is found for a literal subquery as a parameter, then this becomes the way to supply it.\n\nFor a point of reference, Oracle seems to have gone the way of making CURSOR(literal_subquery) do something similar, yielding a REF CURSOR, which allows the CURSOR to be passed by reference [2].\n\n\nOther problems\n--------------\n\n1. This contribution does not actually address the limitation in plpgsql, that the “current implementation of RETURN NEXT and RETURN QUERY stores the entire result set before returning from the function” [4]. My original investigation [3] presumed this limitation to apply generally to all PLs, but I now realise this is not the case: at least the Python PL allows efficient return of SETOFs [5]. With this in mind, I see the plpgsql limitation as less encumbering (as plpython is presumably broadly available) but I’d be interested to know if this view is shared.\n\n2. A perhaps more significant problem is the apparent duplication of plpgsql’s RETURN QUERY syntax. One could perhaps conceive that plpgsql supported an additional marker, for example, RETURN [INLINEABLE] QUERY. It is difficult to see this fitting well with other PLs.\n\n3. The current proposal also requires declaring the expected record with an AS (...) construction. This is rather inconvenient, but it is difficult to see how it could be avoided.\n\n4. Other PLs can use REFCURSORs by virtue of REFCURSOR being a thin veneer atop a string. It is coherent if one understands how the PostgreSQL type system works, but quite strange otherwise. Better integration into other PLs and their type systems might be important.\n\n5. As mentioned, SETOF is a near-equivalent for some use cases. There’s no way to cast the results of a function RETURNING SETOF to a REFCURSOR without something like REFCURSOR (SELECT * from <some function>(...)). It might be useful to offer a little syntactic sugar. 
Perhaps we could invent a NEST(SETOF <some RECORD type>) construct that returns a REFCURSOR.\n\n\nReferences\n----------\n\n[1] https://www.postgresql.org/message-id/CAAcdnuzHDnDX73jBb9CZZE%3DSv3gDTk8E6-SGRGYEUZbLAy0QRA%40mail.gmail.com\n[2] https://docs.oracle.com/database/121/SQLRF/expressions006.htm#SQLRF52077\n[3] https://www.postgresql.org/message-id/DE237364-EB7A-4851-9337-F9F6491E46A6%40qqdd.eu\n[4] https://www.postgresql.org/docs/10/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNING\n[5] https://www.postgresql.org/docs/10/plpython-database.html#id-1.8.11.15.3\n[6] https://www.postgresql.org/docs/10/plpgsql-cursors.html#PLPGSQL-CURSOR-DECLARATIONS\n[7] https://github.com/tada/pljava/wiki/Using-jdbc\n[8] https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions\n[9] https://docs.oracle.com/cd/B19306_01/appdev.102/b14289/dcitblfns.htm\n[10] https://www.postgresql.org/message-id/flat/005701c6dc2c%2449011fc0%240a00a8c0%40trivadis.com\n[11] https://oracle-base.com/articles/misc/pipelined-table-functions\n[12] https://blogs.oracle.com/datawarehousing/mapreduce-oracle-tablefunctions\n[13] https://www.postgresql.org/docs/10/functions-array.html",
"msg_date": "Tue, 17 Sep 2019 21:06:08 +0100",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": true,
"msg_subject": "[WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
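[Editor's note: the BIND step proposed in section 2 of the message above can be sketched in plpgsql. This is hypothetical syntax only, since BIND is not implemented; the example reuses the kv_table_ naming from the message's own examples.]

```sql
CREATE FUNCTION kv_bound (suffix text)
  RETURNS REFCURSOR
  STABLE LANGUAGE plpgsql
  AS $$
DECLARE
  cur REFCURSOR;
BEGIN
  -- Hypothetical: BIND would attach the query text to the Portal (via
  -- PortalDefineQuery) without planning it, leaving the cursor in the
  -- proposed intermediate BOUND state so that the outer query's planner
  -- can inline the query rather than execute the cursor's own plan.
  BIND cur FOR EXECUTE 'SELECT * FROM kv_table_' || suffix;
  RETURN cur;
END;
$$;
```

An existing OPEN of the same cursor variable would behave as it does today; only a BOUND-but-not-OPENed cursor would expose its query for inlining.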
{
"msg_contents": "Hi John,\n\nThanks for pushing this, for me it looks like promising start! I need a bit\nmore time to go through the code (and I'm not an expert in Postgres\ninternals in any way) but I really appreciate you doing this.\n\nRoman",
"msg_date": "Fri, 20 Sep 2019 13:41:22 +0200",
"msg_from": "Roman Pekar <roma.pekar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "Hi folks,\n\nI’ve made a revision of this patch. \n\nThe significant change is to access the Portal using Portal APIs rather than through SPI. It seems the persisted state necessary to survive being used to retrieve a row at a time inside an SRF just isn’t a good fit for SPI. \n\nIt turned out there was upstream machinery in the FunctionScan node that prevented Postgres being able to pipeline SRFs, even if they return ValuePerCall. So, in practice, this patch is of limited benefit without another patch that changes that behaviour (see [1]). Nevertheless, the code is independent so I’m submitting the two changes separately. \n\nI’ll push this into the Jan commit fest.\n\ndenty. \n\n[1] https://commitfest.postgresql.org/26/2372/ <https://commitfest.postgresql.org/26/2372/>",
"msg_date": "Sat, 14 Dec 2019 12:09:09 +0000",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": true,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
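[Editor's note: the pipelining limitation described above can be observed with EXPLAIN. A sketch, assuming the kv() example function from the first message and a build without the companion FunctionScan patch:]

```sql
EXPLAIN (ANALYZE, COSTS OFF)
SELECT *
  FROM UNNEST (kv ('A')) AS (key TEXT, val NUMERIC)
 LIMIT 1;
-- Expectation (unverified): the Function Scan node fills its tuplestore
-- with the cursor's full result set even though the LIMIT consumes only
-- one row, which is the materialization the companion patch [1] removes.
```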
{
"msg_contents": "\tDent John wrote:\n\n> I’ve made a revision of this patch. \n\nSome comments:\n\n* the commitfest app did not extract up the patch from the mail,\npossibly because it's buried in the MIME structure of the mail\n(using plain text instead of HTML messages might help with that).\nThe patch has no status in http://commitfest.cputube.org/\nprobably because of this too.\n\n\n* unnest-refcursor-v3.patch needs a slight rebase because this chunk\nin the Makefile fails\n-\tregexp.o regproc.o ri_triggers.o rowtypes.o ruleutils.o \\\n+\trefcursor.o regexp.o regproc.o ri_triggers.o rowtypes.o ruleutils.o \\\n\n\n* I'm under the impression that refcursor_unnest() is functionally\nequivalent to a plpgsql implementation like this:\n\ncreate function unnest(x refcursor) returns setof record as $$\ndeclare\n r record;\nbegin\n loop\n fetch x into r;\n exit when not found;\n return next r;\n end loop;\nend $$ language plpgsql;\n\nbut it would differ in performance, because internally a materialization step\ncould be avoided, but only when the other patch \"Allow FunctionScans to\npipeline results\" gets in?\nOr are performance benefits expected right away with this patch?\n\n*\n--- a/src/backend/utils/adt/arrayfuncs.c\n+++ b/src/backend/utils/adt/arrayfuncs.c\n@@ -5941,7 +5941,7 @@ array_fill_internal(ArrayType *dims, ArrayType *lbs,\n \n \n /*\n- * UNNEST\n+ * UNNEST (array)\n */\n\nThis chunk looks unnecessary?\n\n* some user-facing doc would be needed.\n\n* Is it good to overload \"unnest\" rather than choosing a specific\nfunction name?\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Thu, 09 Jan 2020 18:43:14 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "> On 9 Jan 2020, at 17:43, Daniel Verite <daniel@manitou-mail.org> wrote:\n> \n> […]\n> (using plain text instead of HTML messages might help with that).\n\nThanks. I’ll do that next time.\n\n> […]\n> * unnest-refcursor-v3.patch needs a slight rebase because this chunk\n> in the Makefile fails\n> -\tregexp.o regproc.o ri_triggers.o rowtypes.o ruleutils.o \\\n> +\trefcursor.o regexp.o regproc.o ri_triggers.o rowtypes.o ruleutils.o \\\n\nLikewise I’ll make that rebase in the next version.\n\n> * I'm under the impression that refcursor_unnest() is functionally\n> equivalent to a plpgsql implementation like this:\n> \n> […]\n> \n> but it would differ in performance, because internally a materialization step\n> could be avoided, but only when the other patch \"Allow FunctionScans to\n> pipeline results\" gets in?\n\nYes. That’s at least true if unnest(x) is used in the FROM. If it’s used in the SELECT, actually it can get the performance benefit right away. However, in the SELECT case, there’s a bit of a gotcha because anonymous records can’t easily be manipulated because they have no type information available. So to make a useful performance contribution, it does need to be combined with another change — either to make it FROM pipeline as in my other patch, or perhaps enabling anonymous record types to be cast or otherwise manipulated.\n\n> […]\n> /*\n> - * UNNEST\n> + * UNNEST (array)\n> */\n> \n> This chunk looks unnecessary?\n\nIt was for purpose of disambiguating. But indeed it is unnecessary. Choosing a different name would avoid need for it.\n\n> * some user-facing doc would be needed.\n\nIndeed. I fully intend that. I figured I’d get the concept on a firmer footing first.\n\n> * Is it good to overload \"unnest\" rather than choosing a specific\n> function name?\n\nYeah. I wondered about that. 
A couple of syntactically obvious ideas were:\n\nSELECT … FROM TABLE (x) (which is what I think Oracle does, but is reserved)\n\nSELECT … FROM CURSOR (x) (which seems likely to confuse, but, surprisingly, isn’t actually reserved)\n\nSELECT … FROM FETCH (x) (which I quite like, but is reserved)\n\nSELECT … FROM ROWS_FROM (x) (is okay, but conflicts with our ROWS FROM construct)\n\nSo I kind of landed on UNNEST(x) for lack of a better idea. EXPAND(x) could be an option. Actually ROWS_IN(x) or ROWS_OF(x) might work.\n\nDo you have any preference or suggestion?\n\nThanks a lot for the feedback.\n\ndenty.\n\n",
"msg_date": "Thu, 9 Jan 2020 20:34:22 +0000",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": true,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "\tDent John wrote:\n\n> Yes. That’s at least true if unnest(x) is used in the FROM. If it’s used in\n> the SELECT, actually it can get the performance benefit right away\n\nAt a quick glance, I don't see it called in the select-list in any\nof the regression tests. When trying it, it appears to crash (segfault):\n\npostgres=# begin;\nBEGIN\n\npostgres=# declare c cursor for select oid::int as i, relname::text as r from\npg_class;\nDECLARE CURSOR\n\npostgres=# select unnest('c'::refcursor);\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nThe build is configured with:\n./configure --enable-debug --with-icu --with-perl --enable-depend\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Fri, 10 Jan 2020 16:45:15 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "> On 10 Jan 2020, at 15:45, Daniel Verite <daniel@manitou-mail.org> wrote:\n> \n> At a quick glance, I don't see it called in the select-list in any\n> of the regression tests. […]\n\nYep. I didn’t test it because I figured it wasn’t particularly useful in that context. I’ll add some tests for that too once I get to the root of the problem.\n\n> postgres=# select unnest('c'::refcursor);\n> server closed the connection unexpectedly\n> \tThis probably means the server terminated abnormally\n> \tbefore or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n\nOkay. That’s pretty bad, isn’t it.\n\nIt’s crashing when it’s checking that the returned tuple matches the declared return type in rsinfo->setDesc. Seems rsinfo->setDesc gets overwritten. So I think I have a memory management problem.\n\nTo be honest, I wasn’t fully sure I’d got a clear understanding of what is in what memory context, but things seemed to work so I figured it was close. Seems I was wrong. I need a bit of time to review. Leave it with me, but I guess it’ll take to next weekend before I get more time.\n\ndenty.\n\n",
"msg_date": "Sat, 11 Jan 2020 12:04:05 +0000",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": true,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "\tDent John wrote:\n\n> It’s crashing when it’s checking that the returned tuple matches the\n> declared return type in rsinfo->setDesc. Seems rsinfo->setDesc gets\n> overwritten. So I think I have a memory management problem.\n\nWhat is the expected result anyway? A single column with a \"record\"\ntype? FWIW I notice that with plpgsql, this is not allowed to happen:\n\nCREATE FUNCTION cursor_unnest(x refcursor) returns setof record\nas $$\ndeclare\n r record;\nbegin\n loop\n fetch x into r;\n exit when not found;\n return next r;\n end loop;\nend $$ language plpgsql;\n\nbegin;\n\ndeclare c cursor for select oid::int as i, relname::text as r from pg_class;\n\nselect cursor_unnest('c');\n\nERROR:\tset-valued function called in context that cannot accept a set\nCONTEXT: PL/pgSQL function cursor_unnest(refcursor) line 8 at RETURN NEXT\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Tue, 14 Jan 2020 15:53:37 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
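[Editor's note: for contrast, the plpgsql function above is accepted in the FROM clause, where a column definition list supplies the type information that the select-list call lacks. A sketch reusing cursor_unnest() and the pg_class cursor from the message above:]

```sql
BEGIN;

DECLARE c CURSOR FOR
  SELECT oid::int AS i, relname::text AS r FROM pg_class;

-- Accepted: the AS (...) list resolves the anonymous record type,
-- though plpgsql still materializes the whole result set first.
SELECT i, r
  FROM cursor_unnest('c') AS t (i int, r text);

COMMIT;
```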
{
"msg_contents": "> On 14 Jan 2020, at 14:53, Daniel Verite <daniel@manitou-mail.org> wrote:\n> \n> What is the expected result anyway? A single column with a \"record\"\n> type? FWIW I notice that with plpgsql, this is not allowed to happen:\n\nHmm. How interesting.\n\nI had not really investigated what happens in the case of a function returning SETOF (untyped) RECORD in a SELECT clause because, whatever the result, there’s no mechanism to access the individual fields.\n\nAs you highlight, it doesn’t work at all in plpgsql, and plperl is the same.\n\nHowever, SQL language functions get away with it. For example, inspired by _pg_expandarray():\n\nCREATE OR REPLACE FUNCTION public.my_pg_expandarray(anyarray)\nRETURNS SETOF record\nLANGUAGE sql\nIMMUTABLE PARALLEL SAFE STRICT\nAS $function$\n\tselect $1[s], s - pg_catalog.array_lower($1,1) + 1\n\t\tfrom pg_catalog.generate_series(pg_catalog.array_lower($1,1),\n\t\t\tpg_catalog.array_upper($1,1), 1) as g(s)\n$function$\n\npostgres=# select my_pg_expandarray (array[0, 1, 2, 3, 4]);\n my_pg_expandarray \n-------------------\n (0,1)\n (1,2)\n (2,3)\n (3,4)\n (4,5)\n(5 rows)\n\nBack in the FROM clause, it’s possible to manipulate the individual fields:\n\npostgres=# select b, a from my_pg_expandarray (array[0, 1, 2, 3, 4]) as r(a int, b int);\n b | a \n---+---\n 1 | 0\n 2 | 1\n 3 | 2\n 4 | 3\n 5 | 4\n(5 rows)\n\nIt’s quite interesting. All the other PLs make explicit checks for rsinfo.expectedDesc being non-NULL, but fmgr_sql() explicitly calls out the contrary: “[…] note we do not require caller to provide an expectedDesc.” So I guess either there’s something special about the SQL PL, or perhaps the other PLs are just inheriting a pattern of being cautious.\n\nEither way, though, there’s no way that I can see to \"get at” the fields inside the anonymous record that is returned when the function is in the SELECT list.\n\nBut back to the failure, I still need to make it not crash. 
I guess it doesn’t matter whether I simply refuse to work if called from the SELECT list, or just return an anonymous record, like fmgr_sql() does.\n\nd.\n\n",
"msg_date": "Fri, 17 Jan 2020 22:41:51 +0000",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": true,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "> On 11 Jan 2020, at 12:04, Dent John <denty@QQdd.eu> wrote:\n> \n>> On 10 Jan 2020, at 15:45, Daniel Verite <daniel@manitou-mail.org> wrote:\n>> \n>> postgres=# select unnest('c'::refcursor);\n>> server closed the connection unexpectedly\n>> \tThis probably means the server terminated abnormally\n>> \tbefore or while processing the request.\n>> The connection to the server was lost. Attempting reset: Failed.\n> \n> Okay. That’s pretty bad, isn’t it.\n\nI’ve addressed the issue, which was due to me allocating the TupleDesc in the multi_call_memory_ctx, which seemed quite reasonable, but it actually needs to be in ecxt_per_query_memory. It seems tSRF-mode queries are much more sensitive to the misstep.\n\nA v4 patch is attached, which also renames UNNEST(REFCURSOR) to ROWS_IN(REFCURSOR), adds a test case for use in tSRF mode, and makes some minor fixes to the support function.\n\nI have not yet made steps towards documentation, nor yet rebased, so the Makefile chunk will probably still fail.\n\ndenty.",
"msg_date": "Sun, 19 Jan 2020 22:30:33 +0000",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": true,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "> On 19 Jan 2020, at 22:30, Dent John <denty@QQdd.eu> wrote:\n> \n> I have not yet made steps towards documentation, nor yet rebased, so the Makefile chunk will probably still fail.\n\nAttached patch addresses these points, so should now apply cleanly against dev.\n\nI also changed the OID assigned to ROWS_IN and its support function.\n\nIn passing, I noticed there is one existing function that can consume and make good use of ROWS_IN’s result when used in the target list, which is row_to_json. This is good, as it makes ROWS_IN useful even outside of a change to allow results in the FROM to be pipelined. I’ve called out row_to_json specifically in the documentation change.\n\ndenty.",
"msg_date": "Sat, 25 Jan 2020 10:59:24 +0000",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": true,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
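[Editor's note: the row_to_json pairing called out above might look like the following with the patch applied. A sketch: ROWS_IN is the patch's renamed function, and the cursor c is assumed to be open in the current transaction.]

```sql
-- In the target list, ROWS_IN yields anonymous records; row_to_json
-- can consume them directly, with no AS (...) column list needed.
SELECT row_to_json(ROWS_IN ('c'::refcursor));
```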
{
"msg_contents": "On Sat, Jan 25, 2020 at 11:59 PM Dent John <denty@qqdd.eu> wrote:\n> Attached patch addresses these points, so should now apply cleanly agains dev.\n\n From the trivialities department, I see a bunch of warnings about\nlocal declaration placement (we're still using C90 rules for those by\nproject policy):\n\nrefcursor.c:138:3: error: ISO C90 forbids mixed declarations and code\n[-Werror=declaration-after-statement]\n MemoryContext oldcontext =\nMemoryContextSwitchTo(funcctx->multi_call_memory_ctx);\n ^\n\n\n",
"msg_date": "Tue, 18 Feb 2020 16:03:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "> On 18 Feb 2020, at 03:03, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> From the trivialities department, I see a bunch of warnings about\n> local declaration placement (we're still using C90 rules for those by\n> project policy):\n> \n> refcursor.c:138:3: error: ISO C90 forbids mixed declarations and code\n> [-Werror=declaration-after-statement]\n> MemoryContext oldcontext =\n> MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);\n> ^\n\nThanks for pointing that out.\n\nI have updated the patch.\n\ndenty.",
"msg_date": "Sat, 22 Feb 2020 10:38:30 +0000",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": true,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "> On 22 Feb 2020, at 10:38, Dent John <denty@QQdd.eu> wrote:\n> \n>> On 18 Feb 2020, at 03:03, Thomas Munro <thomas.munro@gmail.com> wrote:\n>> \n>> From the trivialities department, I see a bunch of warnings about\n>> local declaration placement (we're still using C90 rules for those by\n>> project policy):\n>> \n>> […]\n> \n> […]\n\nMy bad. I missed one declaration. \n\nAnother patch attached.\n\nd.",
"msg_date": "Fri, 6 Mar 2020 21:36:15 +0000",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": true,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "This is an interesting feature, but it seems that the author has abandoned\ndevelopment, what happens now? Will this be postponed from commitfest to\ncommitfest and never be taken over by anyone?\n\nMassimo.\n\nOn Fri, 6 Mar 2020 at 22:36, Dent John <denty@qqdd.eu> wrote:\n\n> > On 22 Feb 2020, at 10:38, Dent John <denty@QQdd.eu> wrote:\n> >\n> >> On 18 Feb 2020, at 03:03, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >>\n> >> From the trivialities department, I see a bunch of warnings about\n> >> local declaration placement (we're still using C90 rules for those by\n> >> project policy):\n> >>\n> >> […]\n> >\n> > […]\n>\n> My bad. I missed on declaration.\n>\n> Another patch attached.\n>\n> d.\n>",
"msg_date": "Tue, 19 Jan 2021 00:09:15 +0100",
"msg_from": "Massimo Fidanza <malix0@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "Hi Massimo,\n\nThanks for the interest, and my apologies for the late reply.\n\nI’m not particularly abandoning it, but I don’t have particular reason to make further changes at the moment. Far as I’m concerned it works, and the main question is whether it is acceptable and useful.\n\nI’d be happy if you have feedback that evolves it or might push it up the queue for commitfest review.\n\nd.\n\n> On 18 Jan 2021, at 23:09, Massimo Fidanza <malix0@gmail.com> wrote:\n> \n> This is an interesting feature, but it seems that the author has abandoned development, what happens now? Will this be postponed from commitfest to commitfest and never be taken over by anyone?\n> \n> Massimo.\n> \n> On Fri, 6 Mar 2020 at 22:36, Dent John <denty@qqdd.eu> wrote:\n> > On 22 Feb 2020, at 10:38, Dent John <denty@QQdd.eu> wrote:\n> > \n> >> On 18 Feb 2020, at 03:03, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >> \n> >> From the trivialities department, I see a bunch of warnings about\n> >> local declaration placement (we're still using C90 rules for those by\n> >> project policy):\n> >> \n> >> […]\n> > \n> > […]\n> \n> My bad. I missed on declaration. \n> \n> Another patch attached.\n> \n> d.",
"msg_date": "Sun, 7 Feb 2021 21:35:48 +0000",
"msg_from": "Dent John <denty@QQdd.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "Hi John,\n\nI never build postgresql from source, so I must get some information on how\nto apply your patch and do some test. I can't review your code because I\nknow nothing about Postgresql internals and just basic C. I am mainly a\nPL/SQL programmer, with experience with PHP, Python and Javascript. If I\ncan give some contribution I will be happy, but I need some help.\n\nMassimo\n\nOn Sun, 7 Feb 2021 at 22:35, Dent John <denty@qqdd.co.uk> wrote:\n\n> Hi Massimo,\n>\n> Thanks for the interest, and my apologies for the late reply.\n>\n> I’m not particularly abandoning it, but I don’t have particular reason to\n> make further changes at the moment. Far as I’m concerned it works, and the\n> main question is whether it is acceptable and useful.\n>\n> I’d be happy if you have feedback that evolves it or might push it up the\n> queue for commitfest review.\n>\n> d.\n>\n> On 18 Jan 2021, at 23:09, Massimo Fidanza <malix0@gmail.com> wrote:\n>\n> This is an interesting feature, but it seems that the author has abandoned\n> development, what happens now? Will this be postponed from commitfest to\n> commitfest and never be taken over by anyone?\n>\n> Massimo.\n>\n> On Fri, 6 Mar 2020 at 22:36, Dent John <denty@qqdd.eu> wrote:\n>\n>> > On 22 Feb 2020, at 10:38, Dent John <denty@QQdd.eu> wrote:\n>> >\n>> >> On 18 Feb 2020, at 03:03, Thomas Munro <thomas.munro@gmail.com> wrote:\n>> >>\n>> >> From the trivialities department, I see a bunch of warnings about\n>> >> local declaration placement (we're still using C90 rules for those by\n>> >> project policy):\n>> >>\n>> >> […]\n>> >\n>> > […]\n>>\n>> My bad. I missed on declaration.\n>>\n>> Another patch attached.\n>>\n>> d.\n>>\n>\n>",
"msg_date": "Wed, 10 Feb 2021 09:57:24 +0100",
"msg_from": "Massimo Fidanza <malix0@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "Hi Massimo,\n\nHappy to help. And actually, end user (i.e., developer) feedback on the feature’s usefulness is probably one of the more important contributions.\n\nd.\n\n> On 10 Feb 2021, at 08:57, Massimo Fidanza <malix0@gmail.com> wrote:\n> \n> Hi John,\n> \n> I never build postgresql from source, so I must get some information on how to apply your patch and do some test. I can't review your code because I know nothing about Postgresql internals and just basic C. I am mainly a PL/SQL programmer, with experience with PHP, Python and Javascript. If I can give some contribution I will be happy, but I need some help.\n> \n> Massimo\n> \n> Il giorno dom 7 feb 2021 alle ore 22:35 Dent John <denty@qqdd.co.uk <mailto:denty@qqdd.co.uk>> ha scritto:\n> Hi Massimo,\n> \n> Thanks for the interest, and my apologies for the late reply.\n> \n> I’m not particularly abandoning it, but I don’t have particular reason to make further changes at the moment. Far as I’m concerned it works, and the main question is whether it is acceptable and useful.\n> \n> I’d be happy if you have feedback that evolves it or might push it up the queue for commitfest review.\n> \n> d.\n> \n>> On 18 Jan 2021, at 23:09, Massimo Fidanza <malix0@gmail.com <mailto:malix0@gmail.com>> wrote:\n>> \n>> This is an interesting feature, but it seems that the author has abandoned development, what happens now? Will this be postponed from commitfest to commitfest and never be taken over by anyone?\n>> \n>> Massimo.\n>> \n>> Il giorno ven 6 mar 2020 alle ore 22:36 Dent John <denty@qqdd.eu <mailto:denty@qqdd.eu>> ha scritto:\n>> > On 22 Feb 2020, at 10:38, Dent John <denty@QQdd.eu <mailto:denty@QQdd.eu>> wrote:\n>> > \n>> >> On 18 Feb 2020, at 03:03, Thomas Munro <thomas.munro@gmail.com <mailto:thomas.munro@gmail.com>> wrote:\n>> >> \n>> >> From the trivialities department, I see a bunch of warnings about\n>> >> local declaration placement (we're still using C90 rules for those by\n>> >> project policy):\n>> >> \n>> >> […]\n>> > \n>> > […]\n>> \n>> My bad. I missed on declaration. \n>> \n>> Another patch attached.\n>> \n>> d.\n> \n\n\nHi Massimo,Happy to help. And actually, end user (i.e., developer) feedback on the feature’s usefulness is probably one of the more important contributions.d.On 10 Feb 2021, at 08:57, Massimo Fidanza <malix0@gmail.com> wrote:Hi John,I never build postgresql from source, so I must get some information on how to apply your patch and do some test. I can't review your code because I know nothing about Postgresql internals and just basic C. I am mainly a PL/SQL programmer, with experience with PHP, Python and Javascript. If I can give some contribution I will be happy, but I need some help.Massimo\nIl giorno dom 7 feb 2021 alle ore 22:35 Dent John <denty@qqdd.co.uk> ha scritto:Hi Massimo,Thanks for the interest, and my apologies for the late reply.I’m not particularly abandoning it, but I don’t have particular reason to make further changes at the moment. Far as I’m concerned it works, and the main question is whether it is acceptable and useful.I’d be happy if you have feedback that evolves it or might push it up the queue for commitfest review.d.On 18 Jan 2021, at 23:09, Massimo Fidanza <malix0@gmail.com> wrote:This is an interesting feature, but it seems that the author has abandoned development, what happens now? Will this be postponed from commitfest to commitfest and never be taken over by anyone?Massimo.Il giorno ven 6 mar 2020 alle ore 22:36 Dent John <denty@qqdd.eu> ha scritto:> On 22 Feb 2020, at 10:38, Dent John <denty@QQdd.eu> wrote:\n> \n>> On 18 Feb 2020, at 03:03, Thomas Munro <thomas.munro@gmail.com> wrote:\n>> \n>> From the trivialities department, I see a bunch of warnings about\n>> local declaration placement (we're still using C90 rules for those by\n>> project policy):\n>> \n>> […]\n> \n> […]\n\nMy bad. I missed on declaration. \n\nAnother patch attached.\n\nd.",
"msg_date": "Fri, 19 Feb 2021 09:25:44 +0000",
"msg_from": "Dent John <denty@QQdd.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": " Hi,\n\nTrying the v7a patch, here are a few comments:\n\n* SIGSEGV with ON HOLD cursors.\n\nReproducer:\n\ndeclare c cursor with hold for select oid,relname\n from pg_class order by 1 limit 10;\n\nselect * from rows_in('c') as x(f1 oid,f2 name);\n\nconsumes a bit of time, then crashes and generates a 13 GB core file\nwithout a usable stacktrace:\n\nCore was generated by `postgres: daniel postgres [local] SELECT '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00007f4c5b2f3dc9 in ?? ()\n(gdb) bt\n#0 0x00007f4c5b2f3dc9 in ?? ()\n#1 0x0000564567efc505 in ?? ()\n#2 0x0000000000000001 in ?? ()\n#3 0x000056456a4b28f8 in ?? ()\n#4 0x000056456a4b2908 in ?? ()\n#5 0x000056456a4b2774 in ?? ()\n#6 0x000056456a4ad218 in ?? ()\n#7 0x000056456a4b1590 in ?? ()\n#8 0x0000000000000010 in ?? ()\n#9 0x0000000000000000 in ?? ()\n\n\n* rows_in() does not fetch from the current position of the cursor,\nbut from the start. For instance, I would expect that if doing\nFETCH FROM cursor followed by SELECT * FROM rows_in('cursor'), the first\nrow would be ignored by rows_in(). That seems more convenient and more\nprincipled.\n\n\n* \n+ <para>\n+ This section describes functions that cursors to be manipulated\n+ in normal <command>SELECT</command> queries.\n+ </para>\n\nA verb seems to be missing.\nIt should be \"function that *allow* cursors to be...\" or something\nlike that?\n\n* \n+ The <type>REFCURSOR</type> must be open, and the query must be a\n+ <command>SELECT</command> statement. If the <type>REFCURSOR</type>’s\n+ output does not\n\nAfter </type> there is a fancy quote (codepoint U+2019). There is\ncurrently no codepoint outside of US-ASCII in *.sgml ref/*.sgml, so\nthey're probably not welcome.\n\n\n* Also: does the community wants it as a built-in function in core?\nAs mentioned in a previous round of review, a function like this in\nplpgsql comes close:\n\ncreate function rows_in(x refcursor) returns setof record as $$\ndeclare\n r record;\nbegin\n loop\n fetch x into r;\n exit when not found;\n return next r;\n end loop;\nend $$ language plpgsql;\n\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: https://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Thu, 29 Jul 2021 16:45:37 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
},
{
"msg_contents": "> On 29 Jul 2021, at 16:45, Daniel Verite <daniel@manitou-mail.org> wrote:\n\n> Trying the v7a patch, here are a few comments:\n\nThis thread has stalled with no update or response to the above, and the patch\nerrors out on make check for the plpgsql suite. I'm marking this Returned with\nFeedback, please resubmit an updated patch if you would like to pursue this\nfurther.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 1 Oct 2021 12:20:26 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [WIP] UNNEST(REFCURSOR): allowing SELECT to consume data from a\n REFCURSOR"
}
] |
[
{
"msg_contents": "Hi,\n\nWe're seeing occasional failures like this:\n\nrunning bootstrap script ... 2019-09-13 12:11:26.882 PDT [64926]\nFATAL: could not create semaphores: No space left on device\n2019-09-13 12:11:26.882 PDT [64926] DETAIL: Failed system call was\nsemget(5728001, 17, 03600).\n\nI think you should switch to using \"named\" POSIX semaphores by\nbuilding with USE_NAMED_POSIX_SEMAPHORES (then it'll create a\nsquillion little files under /tmp and mmap() them), or increase the\nnumber of SysV semaphores you can create with sysctl[1], or finish\nwriting your operating system[2] so you can switch to \"unnamed\" POSIX\nsemaphores :-)\n\n[1] https://www.postgresql.org/message-id/flat/27582.1546928073%40sss.pgh.pa.us\n[2] https://github.com/openbsd/src/blob/master/lib/librthread/rthread_sem.c#L112\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Sep 2019 10:52:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "scorpionfly needs more semaphores"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> We're seeing occasional failures like this:\n> running bootstrap script ... 2019-09-13 12:11:26.882 PDT [64926]\n> FATAL: could not create semaphores: No space left on device\n> 2019-09-13 12:11:26.882 PDT [64926] DETAIL: Failed system call was\n> semget(5728001, 17, 03600).\n\n> I think you should switch to using \"named\" POSIX semaphores by\n> building with USE_NAMED_POSIX_SEMAPHORES (then it'll create a\n> squillion little files under /tmp and mmap() them), or increase the\n> number of SysV semaphores you can create with sysctl[1], or finish\n> writing your operating system[2] so you can switch to \"unnamed\" POSIX\n> semaphores :-)\n\nI'd recommend the second option. Since the discussion in [1],\nwe've fixed our docs for OpenBSD to say\n\n In OpenBSD 3.3 and later, IPC parameters can be adjusted using sysctl,\n for example:\n # sysctl kern.seminfo.semmni=100\n To make these settings persist over reboots, modify /etc/sysctl.conf.\n You will usually want to increase kern.seminfo.semmni and\n kern.seminfo.semmns, as OpenBSD's default settings for these are\n uncomfortably small.\n\nScorpionfly also seems to be having problems with its git repo breaking on\na regular basis. I have no idea what's up with that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Sep 2019 00:33:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: scorpionfly needs more semaphores"
},
{
"msg_contents": "Thus said Tom Lane <tgl@sss.pgh.pa.us> on Wed, 18 Sep 2019 00:33:19 -0400\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> We're seeing occasional failures like this:\n>> running bootstrap script ... 2019-09-13 12:11:26.882 PDT [64926]\n>> FATAL: could not create semaphores: No space left on device\n>> 2019-09-13 12:11:26.882 PDT [64926] DETAIL: Failed system call was\n>> semget(5728001, 17, 03600).\n> \n>> I think you should switch to using \"named\" POSIX semaphores by\n>> building with USE_NAMED_POSIX_SEMAPHORES (then it'll create a\n>> squillion little files under /tmp and mmap() them), or increase the\n>> number of SysV semaphores you can create with sysctl[1], or finish\n>> writing your operating system[2] so you can switch to \"unnamed\" POSIX\n>> semaphores :-)\n> \n> I'd recommend the second option. Since the discussion in [1],\n> we've fixed our docs for OpenBSD to say\n> \n> In OpenBSD 3.3 and later, IPC parameters can be adjusted using sysctl,\n> for example:\n> # sysctl kern.seminfo.semmni=100\n> To make these settings persist over reboots, modify /etc/sysctl.conf.\n> You will usually want to increase kern.seminfo.semmni and\n> kern.seminfo.semmns, as OpenBSD's default settings for these are\n> uncomfortably small.\n\n\nThanks, Thomas and Tom for reaching out to me. I certainly don't want to \nrecompile my kernel, as I basically run -current OpenBSD via snapshots.\n\nThat said, I've made the adjustment to the sysctl:\n\n$ sysctl | ag kern.seminfo.semmni\nkern.seminfo.semmni=100\n\n\n> \n> Scorpionfly also seems to be having problems with its git repo breaking on\n> a regular basis. I have no idea what's up with that.\n\nThat is a mystery to me as well. 9.4 stable seems to be the branch with \nthe most problems:\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=scorpionfly&br=REL9_4_STABLE\n\n\nMy cronjobs:\n0 */6 * * * cd /home/pgbuilder/bin/REL10 && ./run_build.pl --verbose\n0 */12 * * * cd /home/pgbuilder/bin/REL10 && ./run_branches.pl --run-all\n\n\nI'm willing to make more tweaks to prevent these false positives, so \nfeel free to continue monitoring to see how things work out over the \nnext several builds.\n\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n\n\n",
"msg_date": "Tue, 17 Sep 2019 21:55:03 -0700",
"msg_from": "jungle boogie <jungleboogie0@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: scorpionfly needs more semaphores"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 4:55 PM jungle boogie <jungleboogie0@gmail.com> wrote:\n> $ sysctl | ag kern.seminfo.semmni\n> kern.seminfo.semmni=100\n\nIt still seems to be happening. Perhaps you need to increase semmns too?\n\n> > Scorpionfly also seems to be having problems with its git repo breaking on\n> > a regular basis. I have no idea what's up with that.\n>\n> That is a mystery to me as well. 9.4 stable seems to be the branch with\n> the most problems:\n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=scorpionfly&br=REL9_4_STABLE\n>\n>\n> My cronjobs:\n> 0 */6 * * * cd /home/pgbuilder/bin/REL10 && ./run_build.pl --verbose\n> 0 */12 * * * cd /home/pgbuilder/bin/REL10 && ./run_branches.pl --run-all\n\nI think you need just the run_branches.pl entry.\n\nBTW I'm sorry for my flippant tone about sem_init() earlier, which\nsomeone pointed out to me was not great cross-project open source.\nWhat I really meant to say was: if you have OpenBSD developer\ncontacts, it would be cool if you could highlight that issue as\nsomething that would make the PostgreSQL-on-OpenBSD experience nicer\nfor end users (and I suspect other multi-process software too). On\nLinux and FreeBSD we now use sem_init()\n(PREFERRED_SEMAPHORES=UNNAMED_POSIX) so users never need to worry\nabout configuring that kernel resource. On at least AIX, we still\nhave to use SysV, but there the default limits are high enough that\n(according to our manual) no adjustment is required. Also, as I\nspeculated in that other thread: based on a quick peek at the\nimplementation, you might get better performance on very large busy\nPostgreSQL clusters from our cache line padded sem_init() array than\nyou do with your more densely packed SysV semas (I could be totally\nwrong about that, but we saw an effect like that on some other\noperating system).\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 Sep 2019 10:29:06 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: scorpionfly needs more semaphores"
},
{
"msg_contents": "On 2019-09-23 00:29, Thomas Munro wrote:\n> On Wed, Sep 18, 2019 at 4:55 PM jungle boogie <jungleboogie0@gmail.com> wrote:\n>> $ sysctl | ag kern.seminfo.semmni\n>> kern.seminfo.semmni=100\n> \n> It still seems to be happening. Perhaps you need to increase semmns too?\n\nI have on my OpenBSD 6.5 /etc/sysctl.conf the following:\n\nkern.seminfo.semmni=200\nkern.seminfo.semmns=4000\n\nand it seems to work fine.\n\n/Mikael\n\n\n",
"msg_date": "Mon, 23 Sep 2019 20:34:24 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: scorpionfly needs more semaphores"
},
{
"msg_contents": "On Mon Sep 23, 2019 at 8:34 PM Mikael Kjellström wrote:\n> On 2019-09-23 00:29, Thomas Munro wrote:\n> > On Wed, Sep 18, 2019 at 4:55 PM jungle boogie <jungleboogie0@gmail.com> wrote:\n> >> $ sysctl | ag kern.seminfo.semmni\n> >> kern.seminfo.semmni=100\n> > \n> > It still seems to be happening. Perhaps you need to increase semmns too?\n> \n> I have on my OpenBSD 6.5 /etc/sysctl.conf the following:\n> \n> kern.seminfo.semmni=200\n> kern.seminfo.semmns=4000\n> \n> and it seems to work fine.\n\nThanks! I've made these adjustments and we'll see what happens.\n\nThomas,\n\nNo offense taken with your input from last week. Your additional input this week\nis welcomed and I'll pass along to some folks.\n\nThank you all for continuing to monitor Postgresql building on OpenBSD.\n\n> \n> /Mikael\n\n\n",
"msg_date": "Mon, 23 Sep 2019 14:43:37 -0700",
"msg_from": "\"Jungle Boogie\" <jungleboogie0@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: scorpionfly needs more semaphores"
}
] |
[
{
"msg_contents": "Hi all,\n\nBased on the current status of the open items and where we are at in\nthe release cycle, the date for the first release candidate of\nPostgreSQL 12 will be 2019-09-26.\n\nIf all goes well with RC1, the PostgreSQL 12.0 GA release will be\n2019-10-03. This is subject to change if we find any issues during\nthe RC1 period that indicate we need to make an additional release\ncandidate prior to going GA.\n\nTo the entire community, patch authors, reviewers and committers,\nthank you for all of your hard work on developing PostgreSQL 12!\n\nOn behalf of the RMT,\n--\nMichael",
"msg_date": "Wed, 18 Sep 2019 10:17:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 12 RC1 + GA Dates"
}
] |
[
{
"msg_contents": "Dear Hackers,\n\nI have identified some OSS code where more compile-time asserts could be added. \n\nMostly these are asserting that arrays have the necessary length to accommodate the enums that are used to index into them.\n\nIn general the code is already commented with warnings such as:\n* \"If you add a new entry, remember to ...\"\n* \"When modifying this enum, update the table in ...\"\n* \"Display names for enums in ...\"\n* etc.\n\nBut comments can be accidentally overlooked, so adding the compile-time asserts can help eliminate human error.\n\nPlease refer to the attached patch.\n\nKind Regards,\nPeter Smith\n---\nFujitsu Australia",
"msg_date": "Wed, 18 Sep 2019 06:46:24 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Proposal: Add more compile-time asserts to expose inconsistencies."
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 06:46:24AM +0000, Smith, Peter wrote:\n> I have identified some OSS code where more compile-time asserts could be added. \n> \n> Mostly these are asserting that arrays have the necessary length to\n> accommodate the enums that are used to index into them.\n> \n> In general the code is already commented with warnings such as:\n> * \"If you add a new entry, remember to ...\"\n> * \"When modifying this enum, update the table in ...\"\n> * \"Display names for enums in ...\"\n> * etc.\n> \n> But comments can be accidentally overlooked, so adding the\n> compile-time asserts can help eliminate human error. \n\nFor some of them it could help, and we could think about a better\nlocation for that stuff than an unused routine. The indentation of\nyour patch is weird, with \"git diff --check\" complaining a lot.\n\nIf you want to discuss more about that, could you add that to the next\ncommit fest? Here it is:\nhttps://commitfest.postgresql.org/25/\n--\nMichael",
"msg_date": "Wed, 18 Sep 2019 16:40:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Wed, Sep 18, 2019 at 06:46:24AM +0000, Smith, Peter wrote:\n>> I have identified some OSS code where more compile-time asserts could be added. \n>> \n>> Mostly these are asserting that arrays have the necessary length to\n>> accommodate the enums that are used to index into them.\n>> \n>> In general the code is already commented with warnings such as:\n>> * \"If you add a new entry, remember to ...\"\n>> * \"When modifying this enum, update the table in ...\"\n>> * \"Display names for enums in ...\"\n>> * etc.\n>> \n>> But comments can be accidentally overlooked, so adding the\n>> compile-time asserts can help eliminate human error. \n>\n> For some of them it could help, and we could think about a better\n> location for that stuff than an unused routine.\n\nPostgres doesn't seem to have it, but it would be possible to define a\nStaticAssertDecl macro that can be used at the file level, outside any\nfunction. See for example Perl's STATIC_ASSERT_DECL:\n\nhttps://github.com/Perl/perl5/blob/v5.30.0/perl.h#L3455-L3488\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n",
"msg_date": "Wed, 18 Sep 2019 16:46:30 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "-----Original Message-----\nFrom: Michael Paquier <michael@paquier.xyz> Sent: Wednesday, 18 September 2019 5:40 PM\n\n> For some of them it could help, and we could think about a better location for that stuff than an unused routine. The indentation of your patch is weird, with \"git diff --check\" complaining a lot.\n>\n> If you want to discuss more about that, could you add that to the next commit fest? Here it is: https://commitfest.postgresql.org/25/\n\nHi Michael,\n\nThanks for your feedback and advice.\n\nI have modified the patch to clean up the whitespace issues, and added it to the next commit fest.\n\nKind Regards,\n---\nPeter Smith\nFujitsu Australia",
"msg_date": "Thu, 19 Sep 2019 00:47:37 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 04:46:30PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Postgres doesn't seem to have it, but it would be possible to define a\n> StaticAssertDecl macro that can be used at the file level, outside any\n> function. See for example Perl's STATIC_ASSERT_DECL:\n> \n> https://github.com/Perl/perl5/blob/v5.30.0/perl.h#L3455-L3488\n\nThat sounds like a cleaner alternative. Thanks for the pointer.\n--\nMichael",
"msg_date": "Thu, 19 Sep 2019 10:07:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "At Thu, 19 Sep 2019 10:07:40 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190919010740.GC22307@paquier.xyz>\n> On Wed, Sep 18, 2019 at 04:46:30PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> > Postgres doesn't seem to have it, but it would be possible to define a\n> > StaticAssertDecl macro that can be used at the file level, outside any\n> > function. See for example Perl's STATIC_ASSERT_DECL:\n> > \n> > https://github.com/Perl/perl5/blob/v5.30.0/perl.h#L3455-L3488\n> \n> That sounds like a cleaner alternative. Thanks for the pointer.\n\nThe cause for StaticAssertStmt not being usable outside of\nfunctions is enclosing do-while, which is needed to avoid \"mixed\ndeclaration\" warnings, which we are inhibiting to use as of\nnow. Therefore just defining another macro defined as just\n_Static_assert() works fine.\n\nI don't find an alternative way for the tool chains that don't\nhave static assertion feature. In the attached diff the macro is\ndefined as nothing. I don't find a way to warn that the assertion\nis ignored.\n\nregards.",
"msg_date": "Thu, 19 Sep 2019 11:45:02 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "-----Original Message-----\nFrom: Michael Paquier <michael@paquier.xyz> Sent: Thursday, 19 September 2019 11:08 AM\n\n>On Wed, Sep 18, 2019 at 04:46:30PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Postgres doesn't seem to have it, but it would be possible to define a \n>> StaticAssertDecl macro that can be used at the file level, outside any \n>> function. See for example Perl's STATIC_ASSERT_DECL:\n>> \n>> https://github.com/Perl/perl5/blob/v5.30.0/perl.h#L3455-L3488\n>\n>That sounds like a cleaner alternative. Thanks for the pointer.\n\nIn the attached patch example I have defined a new macro StaticAssertDecl. A typical usage of it is shown in the relpath.c file.\n\nThe goal was to leave all existing Postgres static assert macros unchanged, but to allow static asserts to be added in the code at file scope without the need for the explicit ct_asserts function.\n\nNotice, in reality the StaticAssertDecl macro still uses a static function as a wrapper for the StaticAssertStmt, but now the function is not visible in the source.\n\nIf this strategy is acceptable I will update my original patch to remove all those ct_asserts functions, and instead put each StaticAssertDecl nearby the array that it is asserting (e.g. just like relpath.c)\n\nKind Regards,\nPeter Smith\n---\nFujitsu Australia",
"msg_date": "Thu, 19 Sep 2019 04:46:27 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Hi,\n\nOn 2019-09-19 04:46:27 +0000, Smith, Peter wrote:\n> In the attached patch example I have defined a new macro\n> StaticAssertDecl. A typical usage of it is shown in the relpath.c\n> file.\n\nI'm in favor of adding that - in fact, when I was working on adding a a\nstatic_assert wrapper, I was advocating for only supporting compilers\nthat know static_assert, so we can put these in the global scope. The\nnumber of compilers that don't support static_assert is pretty small\ntoday, especially compared to 2012, when we added these.\n\nhttps://www.postgresql.org/message-id/E1TIW1p-0002yE-AY%40gemulon.postgresql.org\nhttps://www.postgresql.org/message-id/27446.1349024252%40sss.pgh.pa.us\n\nTom, you were arguing for restricting to file scope to get broader\ncompatibility, are you ok with adding a seperate *Decl version?\n\nOr perhaps it's time to just remove the fallback implementation? I think\nwe'd have to add proper MSVC support, but that seems like something we\nshould do anyway.\n\nBack then I was wondering about using tyepedef to emulate static assert\nthat works both in file and block scope, but that struggles with needing\nunique names.\n\nFWIW, the perl5 implementation has precisely that problem. If it's used\nin multiple headers (or a header and a .c file), two static asserts may\nnot be on the same line... - which one will only notice when using an\nold compiler.\n\nI wonder if defining the fallback static assert code to something like\n extern void static_assert_func(int static_assert_failed[(condition) ? 1 : -1]);\nisn't a solution, however. I *think* that's standard C. Seems to work at\nleast with gcc, clang, msvc, icc.\n\nRe standard: C99's \"6.7 Declarations\" + 6.7.1 defines 'declaration' to\ninclude extern specifiers and in 6.7.1 5) says \"The declaration of an\nidentifier for a function that has block scope shall have no explicit\nstorage-class specifier other than extern.\". And \"6.8 Statements and\nblocks\", via \"6.8.2 Compound statement\" allows declarations in statements.\n\nYou can play with a good few compilers at: https://godbolt.org/z/fl0Mzu\n\n\n> The goal was to leave all existing Postgres static assert macros unchanged, but to allow static asserts to be added in the code at file scope without the need for the explicit ct_asserts function.\n\nIt'd probably worthwhile to move many of the current ones.\n\n\n> Notice, in reality the StaticAssertDecl macro still uses a static function as a wrapper for the StaticAssertStmt, but now the function is not visible in the source.\n\nI think this implementation is not ok, due to the unique-name issue.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Sep 2019 10:14:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-09-19 04:46:27 +0000, Smith, Peter wrote:\n>> In the attached patch example I have defined a new macro\n>> StaticAssertDecl. A typical usage of it is shown in the relpath.c\n>> file.\n\n> I'm in favor of adding that - in fact, when I was working on adding a a\n> static_assert wrapper, I was advocating for only supporting compilers\n> that know static_assert, so we can put these in the global scope. The\n> number of compilers that don't support static_assert is pretty small\n> today, especially compared to 2012, when we added these.\n> https://www.postgresql.org/message-id/E1TIW1p-0002yE-AY%40gemulon.postgresql.org\n> https://www.postgresql.org/message-id/27446.1349024252%40sss.pgh.pa.us\n> Tom, you were arguing for restricting to file scope to get broader\n> compatibility, are you ok with adding a seperate *Decl version?\n\nIt could use another look now that we require C99. I'd be in favor\nof having a decl-level static assert if practical.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Sep 2019 13:20:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Hi, \n\nOn September 30, 2019 10:20:54 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> On 2019-09-19 04:46:27 +0000, Smith, Peter wrote:\n>>> In the attached patch example I have defined a new macro\n>>> StaticAssertDecl. A typical usage of it is shown in the relpath.c\n>>> file.\n>\n>> I'm in favor of adding that - in fact, when I was working on adding a\n>a\n>> static_assert wrapper, I was advocating for only supporting compilers\n>> that know static_assert, so we can put these in the global scope. The\n>> number of compilers that don't support static_assert is pretty small\n>> today, especially compared to 2012, when we added these.\n>>\n>https://www.postgresql.org/message-id/E1TIW1p-0002yE-AY%40gemulon.postgresql.org\n>>\n>https://www.postgresql.org/message-id/27446.1349024252%40sss.pgh.pa.us\n>> Tom, you were arguing for restricting to file scope to get broader\n>> compatibility, are you ok with adding a seperate *Decl version?\n>\n>It could use another look now that we require C99. I'd be in favor\n>of having a decl-level static assert if practical.\n\nWhat do you think about my proposal further down in the email to rely on extern function declarations to have one fallback that works in the relevant scopes (not in expressions, but we already treat that differently)? Seems to work on common compilers and seems standard conform?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Mon, 30 Sep 2019 10:28:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "From: Andres Freund <andres@anarazel.de> Sent: Tuesday, 1 October 2019 3:14 AM\n\n>I wonder if defining the fallback static assert code to something like\n> extern void static_assert_func(int static_assert_failed[(condition) ? 1 : -1]); isn't a solution, however. I *think* that's standard C. Seems to work at least with gcc, clang, msvc, icc.\n>\n>Re standard: C99's \"6.7 Declarations\" + 6.7.1 defines 'declaration' to include extern specifiers and in 6.7.1 5) says \"The declaration of an identifier for a function that has block scope shall have >no explicit storage-class specifier other than extern.\". And \"6.8 Statements and blocks\", via \"6.8.2 Compound statement\" allows declarations in statements.\n>\n>You can play with a good few compilers at: https://godbolt.org/z/fl0Mzu\n\nI liked your idea of using an extern function declaration for implementing the file-scope compile-time asserts. AFAIK it is valid standard C.\n\nThank you for the useful link to that compiler explorer. I tried many scenarios of the new StaticAssertDecl and all seemed to work ok.\nhttps://godbolt.org/z/fDrmXi\n\nThe patch has been updated accordingly. All assertions identified in the original post are now adjacent the global variables they are asserting. \n\nKind Regards\n--\nPeter Smith\nFujitsu Australia",
"msg_date": "Wed, 9 Oct 2019 22:52:41 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
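To make the technique in the message above concrete, here is a minimal, self-contained sketch of the extern-declaration fallback at file scope. The macro and identifier names (`StaticAssertDecl`, `static_assert_func`, `static_assert_failure`) follow the patch discussed in this thread; the demo enum, array, and helper function are invented purely for illustration.

```c
#include <stddef.h>

/*
 * An extern function declaration is a valid declaration at file scope
 * (and at block scope) in standard C, and the array dimension in its
 * parameter type goes negative when the condition is false, forcing a
 * compile-time error.
 */
#define StaticAssertDecl(condition, errmessage) \
	extern void static_assert_func(int static_assert_failure[(condition) ? 1 : -1])

#define lengthof(array) (sizeof(array) / sizeof((array)[0]))

typedef enum
{
	DEMO_TAG_A,
	DEMO_TAG_B,
	DEMO_TAG_C
} DemoTagType;

static const char *const DemoTagNames[] = {"a", "b", "c"};

/*
 * Passes: the array tracks the enum.  Extending the enum without
 * extending the array would make this fail to compile.
 */
StaticAssertDecl(lengthof(DemoTagNames) == (DEMO_TAG_C + 1),
				 "DemoTagNames array length mismatch");

/* Runtime-visible helper so the example can be exercised. */
size_t
demo_tag_names_count(void)
{
	return lengthof(DemoTagNames);
}
```

Because the macro expands to an ordinary declaration, the same line can sit directly next to the global array it guards, which is exactly the file-scope usage the patch is after.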
{
"msg_contents": "On 2019-10-10 00:52, Smith, Peter wrote:\n> I liked your idea of using an extern function declaration for implementing the file-scope compile-time asserts. AFAIK it is valid standard C.\n> \n> Thank you for the useful link to that compiler explorer. I tried many scenarios of the new StaticAssertDecl and all seemed to work ok.\n> https://godbolt.org/z/fDrmXi\n> \n> The patch has been updated accordingly. All assertions identified in the original post are now adjacent the global variables they are asserting. \n> \n\nThe problem with this implementation is that you get a crappy error\nmessage when the assertion fails, namely something like:\n\n../../../../src/include/c.h:862:84: error: size of array\n'static_assert_failure' is negative\n\nIdeally, the implementation should end up calling _Static_assert()\nsomehow, so that we get the compiler's native error message.\n\nWe could do a configure check for whether _Static_assert() works at file\nscope. I don't know what the support for that is, but it seems to work\nin gcc and clang.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 26 Oct 2019 15:06:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
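A short sketch of the alternative raised above: C11 `_Static_assert` is itself a declaration, so it is legal at file scope, and on failure the compiler reports the message string directly instead of a "size of array is negative" complaint. The `__STDC_VERSION__` guard here is a defensive assumption for the example, not PostgreSQL's actual configure test.

```c
#include <stddef.h>

/*
 * File-scope C11 static assertion: gcc and clang print the message
 * string verbatim when it fails, which is the "native error message"
 * being asked for above.
 */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
_Static_assert(sizeof(size_t) >= sizeof(int),
			   "size_t must be at least as wide as int");
#endif

int
demo_static_assert_ok(void)
{
	return 1;
}
```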
{
"msg_contents": "Hi, \n\nOn October 26, 2019 6:06:07 AM PDT, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>On 2019-10-10 00:52, Smith, Peter wrote:\n>> I liked your idea of using an extern function declaration for\n>implementing the file-scope compile-time asserts. AFAIK it is valid\n>standard C.\n>> \n>> Thank you for the useful link to that compiler explorer. I tried many\n>scenarios of the new StaticAssertDecl and all seemed to work ok.\n>> https://godbolt.org/z/fDrmXi\n>> \n>> The patch has been updated accordingly. All assertions identified in\n>the original post are now adjacent the global variables they are\n>asserting. \n>> \n>\n>The problem with this implementation is that you get a crappy error\n>message when the assertion fails, namely something like:\n>\n>../../../../src/include/c.h:862:84: error: size of array\n>'static_assert_failure' is negative\n\nMy proposal for this really was just to use this as a fallback for when static assert isn't available. Which in turn I was just suggesting because Tom wanted a fallback.\n\n\n>Ideally, the implementation should end up calling _Static_assert()\n>somehow, so that we get the compiler's native error message.\n>\n>We could do a configure check for whether _Static_assert() works at\n>file\n>scope. I don't know what the support for that is, but it seems to work\n>in gcc and clang.\n\nI think it should work everywhere that has static assert. So we should need a separate configure check.\n\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sat, 26 Oct 2019 18:02:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "From: Andres Freund <andres@anarazel.de> Sent: Sunday, 27 October 2019 12:03 PM\n\n>>Ideally, the implementation should end up calling _Static_assert() \n>>somehow, so that we get the compiler's native error message.\n\nOK. I can work on that.\n\n>>We could do a configure check for whether _Static_assert() works at \n>>file scope. I don't know what the support for that is, but it seems to \n>>work in gcc and clang\n\n> I think it should work everywhere that has static assert. So we should need a separate configure check.\n\nEr, that's a typo right? I think you meant: \"So we *shouldn't* need a separate configure check\"\n\nKind Regards\n---\nPeter Smith\nFujitsu Australia\n",
"msg_contents": "From: Andres Freund <andres@anarazel.de> Sent: Sunday, 27 October 2019 12:03 PM\n\n>>Ideally, the implementation should end up calling _Static_assert() \n>>somehow, so that we get the compiler's native error message.\n\nOK. I can work on that.\n\n>>We could do a configure check for whether _Static_assert() works at \n>>file scope. I don't know what the support for that is, but it seems to \n>>work in gcc and clang\n\n> I think it should work everywhere that has static assert. So we should need a separate configure check.\n\nEr, that's a typo right? I think you meant: \"So we *shouldn't* need a separate configure check\"\n\nKind Regards\n---\nPeter Smith\nFujitsu Australia\n",
"msg_date": "Sun, 27 Oct 2019 11:44:54 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On 2019-10-27 11:44:54 +0000, Smith, Peter wrote:\n> From: Andres Freund <andres@anarazel.de> Sent: Sunday, 27 October 2019 12:03 PM\n> > I think it should work everywhere that has static assert. So we should need a separate configure check.\n> \n> Er, that's a typo right? I think you meant: \"So we *shouldn't* need a separate configure check\"\n\nYes.\n\n\n",
"msg_date": "Sun, 27 Oct 2019 07:45:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "From: Andres Freund <andres@anarazel.de> Sent: Sunday, 27 October 2019 12:03 PM\n> My proposal for this really was just to use this as a fallback for when static assert isn't available. Which in turn I was just suggesting because Tom wanted a fallback.\n\nThe patch is updated to use the \"extern\" technique only when \"_Static_assert\" is unavailable.\n\nPSA.\n\nKind Regards,\n---\nPeter Smith\nFujitsu Australia",
"msg_date": "Mon, 28 Oct 2019 00:30:11 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Hello.\n\nAt Mon, 28 Oct 2019 00:30:11 +0000, \"Smith, Peter\" <peters@fast.au.fujitsu.com> wrote in \n> From: Andres Freund <andres@anarazel.de> Sent: Sunday, 27 October 2019 12:03 PM\n> > My proposal for this really was just to use this as a fallback for when static assert isn't available. Which in turn I was just suggesting because Tom wanted a fallback.\n> \n> The patch is updated to use \"extern\" technique only when \"_Static_assert\" is unavailable.\n> \n> PSA.\n\nIt is missing the __cplusplus case?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 28 Oct 2019 11:26:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com> Sent: Monday, 28 October 2019 1:26 PM\n\n> It is missing the __cplusplus case?\n\nMy use cases for the macro are only in C code, so that's all I was interested in at this time.\nIf somebody else wants to extend the patch for C++ also (and test it) they can do.\n\nKind Regards,\n---\nPeter Smith\nFujitsu Australia\n\n\n\n\n",
"msg_date": "Mon, 28 Oct 2019 03:42:02 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On Mon, Oct 28, 2019 at 03:42:02AM +0000, Smith, Peter wrote:\n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com> Sent: Monday, 28 October 2019 1:26 PM\n>> It is missing the __cplusplus case?\n> \n> My use cases for the macro are only in C code, so that's all I was interested in at this time.\n> If somebody else wants to extend the patch for C++ also (and test it) they can do.\n\nIt seems to me that there is a good point to be consistent with the\ntreatment of StaticAssertStmt and StaticAssertExpr in c.h, which have\nfallback implementations in *all* the configurations supported.\n\n@@ -858,7 +863,6 @@ extern void ExceptionalCondition(const char\n*conditionName,\n #endif\n #endif /* C++ */\n\n-\n /*\nA nit: noise diffs. (No need to send a new version just for that.)\n--\nMichael",
"msg_date": "Tue, 12 Nov 2019 14:41:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Hi Peter, Peter, :)\n\n\nOn 2019-10-28 00:30:11 +0000, Smith, Peter wrote:\n> From: Andres Freund <andres@anarazel.de> Sent: Sunday, 27 October 2019 12:03 PM\n> > My proposal for this really was just to use this as a fallback for when static assert isn't available. Which in turn I was just suggesting because Tom wanted a fallback.\n>\n> The patch is updated to use \"extern\" technique only when \"_Static_assert\" is unavailable.\n\nCool.\n\n\n> /*\n> * forkname_to_number - look up fork number by name\n> diff --git a/src/include/c.h b/src/include/c.h\n> index d752cc0..3e24ff4 100644\n> --- a/src/include/c.h\n> +++ b/src/include/c.h\n> @@ -838,11 +838,16 @@ extern void ExceptionalCondition(const char *conditionName,\n> \tdo { _Static_assert(condition, errmessage); } while(0)\n> #define StaticAssertExpr(condition, errmessage) \\\n> \t((void) ({ StaticAssertStmt(condition, errmessage); true; }))\n> +/* StaticAssertDecl is suitable for use at file scope. */\n> +#define StaticAssertDecl(condition, errmessage) \\\n> +\t_Static_assert(condition, errmessage)\n> #else\t\t\t\t\t\t\t/* !HAVE__STATIC_ASSERT */\n> #define StaticAssertStmt(condition, errmessage) \\\n> \t((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))\n> #define StaticAssertExpr(condition, errmessage) \\\n> \tStaticAssertStmt(condition, errmessage)\n> +#define StaticAssertDecl(condition, errmessage) \\\n> +\textern void static_assert_func(int static_assert_failure[(condition) ? 1 : -1])\n> #endif\t\t\t\t\t\t\t/* HAVE__STATIC_ASSERT */\n> #else\t\t\t\t\t\t\t/* C++ */\n> #if defined(__cpp_static_assert) && __cpp_static_assert >= 200410\n> @@ -858,7 +863,6 @@ extern void ExceptionalCondition(const char *conditionName,\n> #endif\n> #endif\t\t\t\t\t\t\t/* C++\n> */\n\nPeter Smith:\n\nIs there a reason to not just make StaticAssertExpr and StaticAssertStmt\nbe the same? I don't want to proliferate variants that users have to\nunderstand if there's no compelling need. 
Nor do I think do we really\nneed two different fallback implementation for static asserts.\n\nAs far as I can tell we should be able to use the prototype based\napproach in all the cases where we currently use the \"negative bit-field\nwidth\" approach?\n\nShould then also update\n * Otherwise we fall back on a kluge that assumes the compiler will complain\n * about a negative width for a struct bit-field. This will not include a\n * helpful error message, but it beats not getting an error at all.\n\n\nPeter Eisentraut:\n\nLooking at the cplusplus variant, I'm somewhat surprised to see that you\nmade both fallback and plain version unconditionally use GCC style\ncompound expressions:\n\ncommit a2c8e5cfdb9d82ae6d4bb8f37a4dc7cbeca63ec1\nAuthor: Peter Eisentraut <peter_e@gmx.net>\nDate: 2016-08-30 12:00:00 -0400\n\n Add support for static assertions in C++\n\n...\n\n+#if defined(__cpp_static_assert) && __cpp_static_assert >= 200410\n+#define StaticAssertStmt(condition, errmessage) \\\n+ static_assert(condition, errmessage)\n+#define StaticAssertExpr(condition, errmessage) \\\n+ StaticAssertStmt(condition, errmessage)\n+#else\n+#define StaticAssertStmt(condition, errmessage) \\\n+ do { struct static_assert_struct { int static_assert_failure : (condition) ? 1 : -1; }; } while(0)\n+#define StaticAssertExpr(condition, errmessage) \\\n+ ({ StaticAssertStmt(condition, errmessage); })\n+#endif\n\nWas that intentional? The C version intentionally uses compound\nexpressions only for the _Static_assert case, where configure tests for\nthe compound expression support? As far as I can tell this'll not allow\nusing our headers e.g. with msvc in C++ mode if somebody introduce a\nstatic assertion in a header - which seems like a likely and good\noutcome with the changes proposed here?\n\n\nBtw, it looks to me like msvc supports using the C++ static_assert()\neven in C mode:\nhttps://godbolt.org/z/b_dxDW\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Nov 2019 11:00:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "From: Andres Freund <andres@anarazel.de> Sent: Wednesday, 13 November 2019 6:01 AM\n\n>Peter Smith:\n>\n> Is there a reason to not just make StaticAssertExpr and StaticAssertStmt be the same? I don't want to proliferate variants that users have to understand if there's no compelling \n> need. Nor do I think do we really need two different fallback implementation for static asserts.\n\n>\n> As far as I can tell we should be able to use the prototype based approach in all the cases where we currently use the \"negative bit-field width\" approach?\n\nI also thought that the new \"prototype negative array-dimension\" based approach (i.e. StaticAssertDecl) looked like an improvement over the existing \"negative bit-field width\" approach (i.e. StaticAssertStmt), because it seems to work for more scenarios (e.g. file scope). \n\nBut I did not refactor existing code to use the new way because I was fearful that there might be some subtle reason why the StaticAssertStmt was deliberately made that way (e.g. as do/while), and last thing I want to do was break working code.\n\n> Should then also update\n> * Otherwise we fall back on a kluge that assumes the compiler will complain\n> * about a negative width for a struct bit-field. This will not include a\n> * helpful error message, but it beats not getting an error at all.\n\nKind Regards.\nPeter Smith\n---\nFujitsu Australia\n\n\n\n",
"msg_date": "Wed, 13 Nov 2019 03:23:06 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On 2019-11-12 20:00, Andres Freund wrote:\n> Looking at the cplusplus variant, I'm somewhat surprised to see that you\n> made both fallback and plain version unconditionally use GCC style\n> compound expressions:\n\n> Was that intentional? The C version intentionally uses compound\n> expressions only for the _Static_assert case, where configure tests for\n> the compound expression support? As far as I can tell this'll not allow\n> using our headers e.g. with msvc in C++ mode if somebody introduce a\n> static assertion in a header - which seems like a likely and good\n> outcome with the changes proposed here?\n\nI don't recall all the details anymore, but if you're asking, why is the \nfallback implementation in C++ different from the one in C, then that's \nbecause the C variant didn't work in C++.\n\nI seem to recall that I did this work in order to get an actual \nC++-using extension to compile, so it worked(tm) at some point, but I \nprobably didn't try it with a not-gcc compatible compiler at the time.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 14 Nov 2019 15:54:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Hi,\n\nOn 2019-11-13 03:23:06 +0000, Smith, Peter wrote:\n> From: Andres Freund <andres@anarazel.de> Sent: Wednesday, 13 November 2019 6:01 AM\n> \n> >Peter Smith:\n> >\n> > Is there a reason to not just make StaticAssertExpr and StaticAssertStmt be the same? I don't want to proliferate variants that users have to understand if there's no compelling \n> > need. Nor do I think do we really need two different fallback implementation for static asserts.\n> \n> >\n> > As far as I can tell we should be able to use the prototype based approach in all the cases where we currently use the \"negative bit-field width\" approach?\n> \n> I also thought that the new \"prototype negative array-dimension\" based\n> approach (i.e. StaticAssertDecl) looked like an improvement over the\n> existing \"negative bit-field width\" approach (i.e. StaticAssertStmt),\n> because it seems to work for more scenarios (e.g. file scope).\n> \n> But I did not refactor existing code to use the new way because I was\n> fearful that there might be some subtle reason why the\n> StaticAssertStmt was deliberately made that way (e.g. as do/while),\n> and last thing I want to do was break working code.\n\nThat'll just leave us with cruft. And it's not like this stuff will\nbreak in a subtle way or such....\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 14 Nov 2019 10:07:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Hi Andres,\n\n>>> As far as I can tell we should be able to use the prototype based approach in all the cases where we currently use the \"negative bit-field width\" approach?\n\n>> ...\n>> But I did not refactor existing code to use the new way because I was \n>> fearful that there might be some subtle reason why the \n>> StaticAssertStmt was deliberately made that way (e.g. as do/while), \n>> and last thing I want to do was break working code.\n\n>That'll just leave us with cruft. And it's not like this stuff will break in a subtle way or such....\n\nFYI - I did try, per your suggestion, to replace the existing StaticAssertStmt to also use the same fallback \"extern\" syntax form as the new StaticAssertDecl, but the code broke as I suspected it might do:\n\n====================\npath.c: In function 'first_dir_separator':\n../../src/include/c.h:847:2: error: expected expression before 'extern'\n extern void static_assert_func(int static_assert_failure[(condition) ? 1 : -1])\n ^\n../../src/include/c.h:849:2: note: in expansion of macro 'StaticAssertStmt'\n StaticAssertStmt(condition, errmessage)\n ^\n../../src/include/c.h:1184:3: note: in expansion of macro 'StaticAssertExpr'\n (StaticAssertExpr(__builtin_types_compatible_p(__typeof(expr), const underlying_type), \\\n ^\npath.c:127:11: note: in expansion of macro 'unconstify'\n return unconstify(char *, p);\n ^\n====================\n\nKind Regards.\n---\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 27 Nov 2019 11:50:26 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
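The build failure quoted above can be illustrated in miniature. A declaration is not an expression, so the extern-based macro cannot be substituted into expression contexts such as `unconstify()`; the expression-safe fallback instead abuses `sizeof` over a struct whose bit-field width turns negative on failure. The macro name below is illustrative, not from the patch.

```c
/*
 * Expression-safe compile-time check: defining a struct type inside
 * sizeof is legal in C, and a negative bit-field width is rejected at
 * compile time when the condition is false.
 */
#define StaticAssertExprDemo(condition) \
	((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))

int
demo_expr_context(void)
{
	/* Legal: the sizeof form is an expression, usable mid-expression. */
	return (StaticAssertExprDemo(sizeof(char) == 1), 0);

	/*
	 * By contrast, expanding
	 *     extern void static_assert_func(int static_assert_failure[1]);
	 * at this point would be rejected, exactly like the path.c error
	 * above ("expected expression before 'extern'").
	 */
}
```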
{
"msg_contents": "> It seems to me that there is a good point to be consistent with the treatment of StaticAssertStmt and StaticAssertExpr in c.h, which have fallback implementations in *all* the configurations supported.\n\nConsistency is good, but:\n\n* That is beyond the scope for what I wanted my patch to achieve; my use-cases are C code only\n\n* It is too risky for me to simply cut/paste my C version of StaticAssertDecl and hope it will work OK for C++. It needs lots of testing because there seems to be evidence that bad things can happen. E.g. Peter Eisentraut wrote \"if you're asking, why is the fallback implementation in C++ different from the one in C, then that's because the C variant didn't work in C++.\"\n\n~\n\nI am happy if somebody else with more ability to test C++ properly wants to add the __cplusplus variant of the new macro.\n\nMeanwhile, I've attached the latest rebased version of this patch.\n\nKind Regards.\n--\nPeter Smith\nFujitsu Australia",
"msg_date": "Wed, 27 Nov 2019 12:23:33 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 12:23:33PM +0000, Smith, Peter wrote:\n> * That is beyond the scope for what I wanted my patch to achieve; my\n> * use-cases are C code only.\n\nWell, FWIW, I do have some extensions using __cplusplus and I am\npretty sure that I am not the only one with that. The thing is that\nwith your patch folks would not get any compilation failures *now*\nbecause all the declarations of StaticAssertDecl() are added within\nthe .c files, but once a patch which includes a declaration in a\nheader, something very likely to happen, is merged then we head into\nbreaking suddenly the compilation of those modules. And that's not\nnice. That's also a point raised by Andres upthread.\n\n> I am happy if somebody else with more ability to test C++ properly\n> wants to add the __cplusplus variant of the new macro.\n\nIn short, attached is an updated version of your patch which attempts\nto solve that. I have tested this with some cplusplus stuff, and GCC\nfor both versions (static_assert is available in GCC >= 6, but a\nmanual change of c.h does the trick).\n\nI have edited the patch a bit while on it, your assertions did not use\nproject-style grammar, the use of parenthesis was inconsistent (see\nrelpath.c for example), and pgindent has complained a bit.\n\nAlso, I am bumping the patch to next CF for now. Do others have\nthoughts to share about this version? I would be actually fine to\ncommit that, even if the message generated for the fallback versions\nis a bit crappy with a complain about a negative array size, but\nthat's not new to this patch as we use that as well with\nStaticAssertStmt().\n--\nMichael",
"msg_date": "Fri, 29 Nov 2019 11:11:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Hi,\n\nOn 2019-11-29 11:11:25 +0900, Michael Paquier wrote:\n> On Wed, Nov 27, 2019 at 12:23:33PM +0000, Smith, Peter wrote:\n> > * That is beyond the scope for what I wanted my patch to achieve; my\n> > * use-cases are C code only.\n\nI really don't think that's justification enough for having diverging\nimplementations, nor imcomplete coverage. Following that chain of\narguments we'd just end up with more and more cruft, without ever\nactually cleaning anything up.\n\n\n> diff --git a/src/include/c.h b/src/include/c.h\n> index 00e41ac546..91d6d50e76 100644\n> --- a/src/include/c.h\n> +++ b/src/include/c.h\n> @@ -845,11 +845,16 @@ extern void ExceptionalCondition(const char *conditionName,\n> \tdo { _Static_assert(condition, errmessage); } while(0)\n> #define StaticAssertExpr(condition, errmessage) \\\n> \t((void) ({ StaticAssertStmt(condition, errmessage); true; }))\n> +/* StaticAssertDecl is suitable for use at file scope. */\n> +#define StaticAssertDecl(condition, errmessage) \\\n> +\t_Static_assert(condition, errmessage)\n> #else\t\t\t\t\t\t\t/* !HAVE__STATIC_ASSERT */\n> #define StaticAssertStmt(condition, errmessage) \\\n> \t((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))\n> #define StaticAssertExpr(condition, errmessage) \\\n> \tStaticAssertStmt(condition, errmessage)\n> +#define StaticAssertDecl(condition, errmessage) \\\n> +\textern void static_assert_func(int static_assert_failure[(condition) ? 1 : -1])\n> #endif\t\t\t\t\t\t\t/* HAVE__STATIC_ASSERT */\n\nI think this a) needs an updated comment above, explaining this approach\n(note the explanation for the array approach) b) I still think we ought\nto work towards also using this implementation for StaticAssertStmt.\n\nNow that I'm back from vacation, I'll try to take a stab at b). 
It\nshould definitely doable to use the same approach for StaticAssertStmt,\nthe problematic case might be StaticAssertExpr.\n\n\n> #else\t\t\t\t\t\t\t/* C++ */\n> #if defined(__cpp_static_assert) && __cpp_static_assert >= 200410\n> @@ -857,12 +862,16 @@ extern void ExceptionalCondition(const char *conditionName,\n> \tstatic_assert(condition, errmessage)\n> #define StaticAssertExpr(condition, errmessage) \\\n> \t({ static_assert(condition, errmessage); })\n> -#else\n> +#define StaticAssertDecl(condition, errmessage) \\\n> +\tstatic_assert(condition, errmessage)\n> +#else\t\t\t\t\t\t\t/* !__cpp_static_assert */\n> #define StaticAssertStmt(condition, errmessage) \\\n> \tdo { struct static_assert_struct { int static_assert_failure : (condition) ? 1 : -1; }; } while(0)\n> #define StaticAssertExpr(condition, errmessage) \\\n> \t((void) ({ StaticAssertStmt(condition, errmessage); }))\n> -#endif\n> +#define StaticAssertDecl(condition, errmessage) \\\n> +\textern void static_assert_func(int static_assert_failure[(condition) ? 1 : -1])\n> +#endif\t\t\t\t\t\t\t/* __cpp_static_assert */\n> #endif\t\t\t\t\t\t\t/* C++ */\n\nI wonder if it's worth moving the fallback implementation into an else\nbranch that's \"unified\" between C and C++.\n\n\n> +StaticAssertDecl(lengthof(LockTagTypeNames) == (LOCKTAG_ADVISORY + 1),\n> +\t\t\t\t \"LockTagTypeNames array inconsistency\");\n> +\n\nThese error messages strike me as somewhat unhelpful. I'd probably just\nreword them as \"array length mismatch\" or something like that.\n\n\nI think this patch ought to include at least one StaticAssertDecl in a\nheader, to make sure we get that part right across compilers. E.g. the\none in PageIsVerified()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 Dec 2019 07:55:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On Mon, Dec 02, 2019 at 07:55:45AM -0800, Andres Freund wrote:\n> On 2019-11-29 11:11:25 +0900, Michael Paquier wrote:\n>> diff --git a/src/include/c.h b/src/include/c.h\n>> index 00e41ac546..91d6d50e76 100644\n>> --- a/src/include/c.h\n>> +++ b/src/include/c.h\n>> [...]\n> \n> I think this a) needs an updated comment above, explaining this approach\n> (note the explanation for the array approach) b) I still think we ought\n> to work towards also using this implementation for StaticAssertStmt.\n\nSure. I was not completely sure which addition would be helpful\nexcept than adding in the main comment lock that Decl() is useful at\nfile scope.\n\n> Now that I'm back from vacation, I'll try to take a stab at b). It\n> should definitely doable to use the same approach for StaticAssertStmt,\n> the problematic case might be StaticAssertExpr.\n\nSo you basically want to minimize the amount of code relying on\nexternal compiler expressions? Sounds like a plan. At quick glance,\nit seems that this should work. I haven't tested though. I'll wait\nfor what you come up with then.\n\n>> #else\t\t\t\t\t\t\t/* C++ */\n>> #if defined(__cpp_static_assert) && __cpp_static_assert >= 200410\n>> @@ -857,12 +862,16 @@ extern void ExceptionalCondition(const char *conditionName,\n>> \tstatic_assert(condition, errmessage)\n>> #define StaticAssertExpr(condition, errmessage) \\\n>> \t({ static_assert(condition, errmessage); })\n>>\n>> [...]\n>>\n>> +#define StaticAssertDecl(condition, errmessage) \\\n>> +\textern void static_assert_func(int static_assert_failure[(condition) ? 
1 : -1])\n>> +#endif\t\t\t\t\t\t\t/* __cpp_static_assert */\n>> #endif\t\t\t\t\t\t\t/* C++ */\n> \n> I wonder if it's worth moving the fallback implementation into an else\n> branch that's \"unified\" between C and C++.\n\nI suspect that you would run into problems with StaticAssertExpr() and\nStaticAssertStmt().\n\n>> +StaticAssertDecl(lengthof(LockTagTypeNames) == (LOCKTAG_ADVISORY + 1),\n>> +\t\t\t\t \"LockTagTypeNames array inconsistency\");\n>> +\n> \n> These error messages strike me as somewhat unhelpful. I'd probably just\n> reword them as \"array length mismatch\" or something like that.\n\nThat's indeed better. Now I think that it is useful to have the\nstructure name in the error message as well, no?\n\n> I think this patch ought to include at least one StaticAssertDecl in a\n> header, to make sure we get that part right across compilers. E.g. the\n> one in PageIsVerified()?\n\nNo objections to have one, but I don't think that your suggestion is a\ngood choice. This static assertion is based on size_t and BLCKSZ, and\nis located close to a code path where we have a specific logic based\non both things. If in the future this code gets removed, then we'd\nlikely miss to remove the static assertion if they are separated\nacross multiple files.\n--\nMichael",
"msg_date": "Wed, 4 Dec 2019 15:16:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On 2019-12-02 16:55, Andres Freund wrote:\n>> +StaticAssertDecl(lengthof(LockTagTypeNames) == (LOCKTAG_ADVISORY + 1),\n>> +\t\t\t\t \"LockTagTypeNames array inconsistency\");\n>> +\n> These error messages strike me as somewhat unhelpful. I'd probably just\n> reword them as \"array length mismatch\" or something like that.\n\nI'd prefer it if we could just get rid of the second argument and show \nthe actual expression in the error message, like run-time assertions work.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Dec 2019 10:09:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
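A hypothetical one-argument variant along the lines suggested above: the condition is stringified with the preprocessor so the compiler's failure report echoes the expression itself, much like the run-time `Assert()`. This is a sketch of the idea only, not the interface the patch adds, and the demo macro name is invented.

```c
/*
 * With C11, reuse the condition text as the message via # stringification;
 * otherwise fall back to the extern-declaration trick from this thread.
 * Conditions containing a top-level comma would need extra parentheses.
 */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
#define StaticAssertDeclDemo(condition) \
	_Static_assert(condition, #condition)
#else
#define StaticAssertDeclDemo(condition) \
	extern void static_assert_func(int static_assert_failure[(condition) ? 1 : -1])
#endif

/* On failure, a C11 compiler prints the condition text verbatim. */
StaticAssertDeclDemo(sizeof(short) <= sizeof(int));

int
demo_one_arg_ok(void)
{
	return 1;
}
```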
{
"msg_contents": "On 2019-12-04 15:16:25 +0900, Michael Paquier wrote:\n> On Mon, Dec 02, 2019 at 07:55:45AM -0800, Andres Freund wrote:\n> > Now that I'm back from vacation, I'll try to take a stab at b). It\n> > should definitely doable to use the same approach for StaticAssertStmt,\n> > the problematic case might be StaticAssertExpr.\n> \n> So you basically want to minimize the amount of code relying on\n> external compiler expressions? Sounds like a plan. At quick glance,\n> it seems that this should work. I haven't tested though. I'll wait\n> for what you come up with then.\n\nI don't know what you mean by \"external compiler expressions\"?\n\n\n> >> +StaticAssertDecl(lengthof(LockTagTypeNames) == (LOCKTAG_ADVISORY + 1),\n> >> +\t\t\t\t \"LockTagTypeNames array inconsistency\");\n> >> +\n> > \n> > These error messages strike me as somewhat unhelpful. I'd probably just\n> > reword them as \"array length mismatch\" or something like that.\n> \n> That's indeed better. Now I think that it is useful to have the\n> structure name in the error message as well, no?\n\nNo. I think the cost of having the different error messages is much\nhigher than the cost of not having the struct name in there. Note that\nyou'll commonly get an error message including the actual source code\nfor the offending expression.\n\n\n> > I think this patch ought to include at least one StaticAssertDecl in a\n> > header, to make sure we get that part right across compilers. E.g. the\n> > one in PageIsVerified()?\n> \n> No objections to have one, but I don't think that your suggestion is a\n> good choice. 
This static assertion is based on size_t and BLCKSZ, and\n> is located close to a code path where we have a specific logic based\n> on both things.\n\nWell, but it's a reliance that goes beyond that specific source code\nlocation\n\n\n> If in the future this code gets removed, then we'd likely miss to\n> remove the static assertion if they are separated across multiple\n> files.\n\nIt'll never get removed. There's just plainly no way that we'd use a\nblock size that's not a multiple of size_t.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 4 Dec 2019 09:53:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Hi,\n\nOn 2019-12-04 10:09:28 +0100, Peter Eisentraut wrote:\n> On 2019-12-02 16:55, Andres Freund wrote:\n> > > +StaticAssertDecl(lengthof(LockTagTypeNames) == (LOCKTAG_ADVISORY + 1),\n> > > +\t\t\t\t \"LockTagTypeNames array inconsistency\");\n> > > +\n> > These error messages strike me as somewhat unhelpful. I'd probably just\n> > reword them as \"array length mismatch\" or something like that.\n> \n> I'd prefer it if we could just get rid of the second argument and show the\n> actual expression in the error message, like run-time assertions work.\n\nWell, _Static_assert has an error message, so we got to pass\nsomething. And having something like \"array length mismatch\", without\nreferring again to the variable, doesn't strike me as that bad. We could\nof course just again pass the expression, this time stringified, but\nthat doesn't seem an improvement.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 4 Dec 2019 09:54:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "-----Original Message-----\nFrom: Andres Freund <andres@anarazel.de> Sent: Tuesday, 3 December 2019 2:56 AM\n> +StaticAssertDecl(lengthof(LockTagTypeNames) == (LOCKTAG_ADVISORY + 1),\n> +\t\t\t\t \"LockTagTypeNames array inconsistency\");\n> +\n>\n>These error messages strike me as somewhat unhelpful. I'd probably just reword them as \"array length mismatch\" or something like that.\n\nOK. I have no problem to modify all my current assertion messages to your suggested text (\"array length mismatch\") if you think it is better.\n\nPlease correct me if I am wrong, but I didn't think the error message text is of very great significance here because it is a compile-time issue meaning the *only* person who would see the message is the 1 developer who accidentally introduced a bug just moments beforehand. The compile will fail with a source line number, and when the developer sees the StaticAssertDecl at that source line the cause of the error is anyway self-evident by the condition parameter. \n\nKind Regards\n--\nPeter Smith\nFujitsu Australia\n\n\n\n",
"msg_date": "Thu, 5 Dec 2019 00:51:18 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Hello Michael,\n\n> In short, attached is an updated version of your patch which attempts to solve that. I have tested this with some cplusplus stuff, and GCC for both versions (static_assert is available in GCC >= 6, but a manual change of c.h does the trick).\n> I have edited the patch a bit while on it, your assertions did not use project-style grammar, the use of parenthesis was inconsistent (see relpath.c for example), and pgindent has complained a bit.\n\nThanks for your updates.\n\n~~\n\nHello Andres,\n\n>> +StaticAssertDecl(lengthof(LockTagTypeNames) == (LOCKTAG_ADVISORY + 1),\n>> +\t\t\t\t \"LockTagTypeNames array inconsistency\");\n>> +\n> These error messages strike me as somewhat unhelpful. I'd probably just reword them as \"array length mismatch\" or something like that.\n\nI updated the most recent patch (_5 from Michael) so it now has your suggested error message rewording.\n\nPSA patch _6\n\nKind Regards\n----\nPeter Smith\nFujitsu Australia",
"msg_date": "Fri, 20 Dec 2019 01:08:47 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On Wed, Dec 04, 2019 at 09:54:47AM -0800, Andres Freund wrote:\n> Well, _Static_assert has an error message, so we got to pass\n> something. And having something like \"array length mismatch\", without\n> referring again to the variable, doesn't strike me as that bad. We could\n> of course just again pass the expression, this time stringified, but\n> that doesn't seem an improvement.\n\nYeah, I would rather keep the second argument. I think that's also\nmore helpful as it gives more flexibility to extension authors willing\nto make use of it.\n--\nMichael",
"msg_date": "Mon, 23 Dec 2019 12:45:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 01:08:47AM +0000, Smith, Peter wrote:\n> I updated the most recent patch (_5 from Michael) so it now has your\n> suggested error message rewording.\n\nHmm. Based on the last messages, and this one in particular:\nhttps://www.postgresql.org/message-id/20191202155545.yzbfzuppjritidqr@alap3.anarazel.de\n\nI am still seeing that a couple of points need an extra lookup:\n- Addition of a Decl() in at least one header of the core code.\n- Perhaps unifying the fallback implementation between C and C++, with\na closer lookup in particular at StaticAssertStmt() and\nStaticAssertExpr().\n\nPeter, could you look at that? In order to test the C++ portion, you\ncould use this dummy C++ extension I have just published on my github\naccount:\nhttps://github.com/michaelpq/pg_plugins/tree/master/blackhole_cplusplus\n\nThat's a dirty 15-min hack, but that would be enough to check the C++\npart with PGXS. For now, I am switching the patch as waiting on\nauthor for now.\n--\nMichael",
"msg_date": "Tue, 24 Dec 2019 14:47:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 02:47:29PM +0900, Michael Paquier wrote:\n> I am still seeing that a couple of points need an extra lookup:\n> - Addition of a Decl() in at least one header of the core code.\n\nI agree with the addition of Decl() definition in a header, and could\nnot think about something better than one for bufpage.h for the\nall-zero check case, so I went with that. Attached is a 0001 which\nadds the definition for StaticAssertDecl() for C and C++ for all code\npaths. If there are no objections, I would like to commit this\nversion. There is no fancy refactoring in it, and small progress is\nbetter than no progress. I have also reworked the comments in the\npatch, and did some testing on Windows.\n\n> - Perhaps unifying the fallback implementation between C and C++, with\n> a closer lookup in particular at StaticAssertStmt() and StaticAssertExpr().\n\nSeeing nothing happening on this side. I took a shot at all that, and\nI have hacked my way through it with 0002 which is an attempt to unify\nthe fallback implementation for C and C++. This is not fully baked\nyet, and it is perhaps a matter of taste if this makes the code more\nreadable or not. I think it does, because it reduces the parts\ndedicated to assertion definitions from four to three. Anyway, let's\ndiscuss about that.\n--\nMichael",
"msg_date": "Fri, 31 Jan 2020 11:47:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "At Fri, 31 Jan 2020 11:47:01 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Dec 24, 2019 at 02:47:29PM +0900, Michael Paquier wrote:\n> > I am still seeing that a couple of points need an extra lookup:\n> > - Addition of a Decl() in at least one header of the core code.\n> \n> I agree with the addition of Decl() definition in a header, and could\n> not think about something better than one for bufpage.h for the\n> all-zero check case, so I went with that. Attached is a 0001 which\n> adds the definition for StaticAssertDecl() for C and C++ for all code\n> paths. If there are no objections, I would like to commit this\n> version. There is no fancy refactoring in it, and small progress is\n> better than no progress. I have also reworked the comments in the\n> patch, and did some testing on Windows.\n\nAs a cross check, it cleanly applied and worked as expected. The\nfallback definition of StaticAssertDecl for C worked for gcc 8.3.\n\n\n- * Macros to support compile-time assertion checks.\n+ * Macros to support compile-time and declaration assertion checks.\n\nAll the StaticAssert things check compile-time assertion. I think\nthat the name StaticAssertDecl doesn't mean \"declaration assertion\",\nbut means \"static assertion as a declaration\". Is the change needed?\n\n\n- * If the \"condition\" (a compile-time-constant expression) evaluates to false,\n- * throw a compile error using the \"errmessage\" (a string literal).\n+ * If the \"condition\" (a compile-time-constant or declaration expression)\n+ * evaluates to false, throw a compile error using the \"errmessage\" (a\n+ * string literal).\n\nI'm not sure what the \"declaration expression\" here means. I think\nthe term generally means \"a variable declaration in expressions\",\nsomething like \"r = y * (int x = blah)\". In that sense, the parameter\nfor StaticAssertDecl is just a compile-time constant expression. 
Is it\na right rewrite?\n\n> > - Perhaps unifying the fallback implementation between C and C++, with\n> > a closer lookup in particular at StaticAssertStmt() and StaticAssertExpr().\n> \n> Seeing nothing happening on this side. I took a shot at all that, and\n> I have hacked my way through it with 0002 which is an attempt to unify\n> the fallback implementation for C and C++. This is not fully baked\n> yet, and it is perhaps a matter of taste if this makes the code more\n> readable or not. I think it does, because it reduces the parts\n> dedicated to assertion definitions from four to three. Anyway, let's\n> discuss about that.\n\n+1 as far as the unification is right. I'm not sure, but at least it\nworked for gcc 8.3.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 31 Jan 2020 14:15:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On Fri, Jan 31, 2020 at 02:15:42PM +0900, Kyotaro Horiguchi wrote:\n> As a cross check, it cleanly applied and worked as expected. The\n> fallback definition of StaticAssertDecl for C worked for gcc 8.3.\n\nThanks for the review.\n\n> - * Macros to support compile-time assertion checks.\n> + * Macros to support compile-time and declaration assertion checks.\n> \n> All the StaticAssert things check compile-time assertion. I think\n> that the name StaticAssertDecl doesn't mean \"declaration assertion\",\n> but means \"static assertion as a declaration\". Is the change needed?\n\nHmm. Yeah, that sounds right.\n\n> - * If the \"condition\" (a compile-time-constant expression) evaluates to false,\n> - * throw a compile error using the \"errmessage\" (a string literal).\n> + * If the \"condition\" (a compile-time-constant or declaration expression)\n> + * evaluates to false, throw a compile error using the \"errmessage\" (a\n> + * string literal).\n> \n> I'm not sure what the \"declaration expression\" here means. I think\n> the term generally means \"a variable declaration in expressions\",\n> something like \"r = y * (int x = blah)\". In that sense, the parameter\n> for StaticAssertDecl is just a compile-time constant expression. Is it\n> a right rewrite?\n\nActually, thinking more about it, I'd rather remove this part as well,\nkeeping only the change in the third paragraph of this comment block.\n--\nMichael",
"msg_date": "Fri, 31 Jan 2020 14:46:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On Fri, Jan 31, 2020 at 02:46:16PM +0900, Michael Paquier wrote:\n> Actually, thinking more about it, I'd rather remove this part as well,\n> keeping only the change in the third paragraph of this comment block.\n\nI have tweaked a bit the comments in this area, and applied the\npatch. I'll begin a new thread with the rest of the refactoring.\nThere are a couple of things I'd like to double-check first.\n--\nMichael",
"msg_date": "Mon, 3 Feb 2020 15:55:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "Hi Michael,\n\nSorry I was AWOL for a couple of months. Thanks for taking the patch further, and committing it.\n\nKind Regards\n---\nPeter Smith\nFujitsu Australia\n\n\n\n",
"msg_date": "Tue, 10 Mar 2020 00:22:32 +0000",
"msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Proposal: Add more compile-time asserts to expose\n inconsistencies."
},
{
"msg_contents": "On Tue, Mar 10, 2020 at 12:22:32AM +0000, Smith, Peter wrote:\n> Sorry I was AWOL for a couple of months. Thanks for taking the patch\n> further, and committing it. \n\nNo problem, I am glad to see you around.\n--\nMichael",
"msg_date": "Tue, 10 Mar 2020 11:23:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Add more compile-time asserts to expose\n inconsistencies."
}
]
[
{
"msg_contents": "Hello Hackers,\n\nThis is to address a TODO I found in the JIT expression evaluation\ncode (opcode =\nEEOP_INNER_FETCHSOME/EEOP_OUTER_FETCHSOME/EEOP_SCAN_FETCHSOME):\n\n* TODO: skip nvalid check if slot is fixed and known to\n* be a virtual slot.\n\nNot only should we skip the nvalid check if the tuple is virtual, the\nwhole basic block should be a no-op. There is no deforming to be done,\nJIT (slot_compile_deform) or otherwise (slot_getsomeattrs) for virtual\ntuples.\n\nAttached is a patch 0001 that achieves exactly that by:\n\n1. Performing the check at the very beginning of the case.\n2. Emitting an unconditional branch to the next opblock if we have a\nvirtual tuple. We end up with a block with just the single\nunconditional branch instruction in this case. This block is optimized\naway when we run llvm_optimize_module().\n\nAlso enclosed is another patch 0002 that adds on some minor\nrefactoring of conditionals around whether to JIT deform or not.\n\n---\nSoumyadeep Chakraborty",
"msg_date": "Tue, 17 Sep 2019 23:54:51 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Don't codegen deform code for virtual tuples in expr eval for scan\n fetch"
},
{
"msg_contents": "Hello,\n\nIn my previous patch 0001, the resulting opblock consisted of a single\nbr instruction to it's successor opblock. Such a block represents\nunnecessary overhead. Even though such a block would be optimized\naway, what if optimization is not performed (perhaps due to\njit_optimize_above_cost)? Perhaps we could be more aggressive. We\ncould maybe remove the opblock altogether. However, such a solution\nis not without complexity.\n\nSuch unlinking is non-trivial, and is one of the things the\nsimplify-cfg pass takes care of. One solution could be to run this\npass ad-hoc for the evalexpr function. Or we could perform the\nunlinking during codegen. Such a solution is presented below.\n\nTo unlink the current opblock from\nthe cfg we have to replace all of its uses. From the current state of\nthe code, these uses are either:\n1. Terminator instruction of opblocks[i - 1]\n2. Terminator instruction of some sub-op-block of opblocks[i - 1]. By\nsub-op-block, I mean some block that is directly linked from opblock[i\n- 1] but not opblocks[i].\n3. Terminator instruction of the entry block.\n\nWe should replace all of these uses with opblocks[i + 1] and then and\nonly then can we delete opblocks[i]. My latest patch v2-0001 in my latest\npatch set, achieves this.\n\nI guard LLVMReplaceAllUsesWith() with Assert()s to ensure that we\ndon't accidentally replace non-terminator uses of opblocks[i], should\nthey be introduced in the future. If these asserts fail in the future,\nreplacing these non-terminator instructions with undefs constitutes a\ncommonly adopted solution. 
Otherwise, we can always fall back to making\nopblocks[i] empty with just the unconditional br, as in my previous\npatch 0001.\n\n--\nSoumyadeep\n\nOn Tue, Sep 17, 2019 at 11:54 PM Soumyadeep Chakraborty <\nsoumyadeep2007@gmail.com> wrote:\n\n> Hello Hackers,\n>\n> This is to address a TODO I found in the JIT expression evaluation\n> code (opcode =\n> EEOP_INNER_FETCHSOME/EEOP_OUTER_FETCHSOME/EEOP_SCAN_FETCHSOME):\n>\n> * TODO: skip nvalid check if slot is fixed and known to\n> * be a virtual slot.\n>\n> Not only should we skip the nvalid check if the tuple is virtual, the\n> whole basic block should be a no-op. There is no deforming to be done,\n> JIT (slot_compile_deform) or otherwise (slot_getsomeattrs) for virtual\n> tuples.\n>\n> Attached is a patch 0001 that achieves exactly that by:\n>\n> 1. Performing the check at the very beginning of the case.\n> 2. Emitting an unconditional branch to the next opblock if we have a\n> virtual tuple. We end up with a block with just the single\n> unconditional branch instruction in this case. This block is optimized\n> away when we run llvm_optimize_module().\n>\n> Also enclosed is another patch 0002 that adds on some minor\n> refactoring of conditionals around whether to JIT deform or not.\n>\n> ---\n> Soumyadeep Chakraborty\n>",
"msg_date": "Fri, 20 Sep 2019 22:19:46 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
},
{
"msg_contents": "Hi,\n\nThanks for working on this!\n\nOn 2019-09-17 23:54:51 -0700, Soumyadeep Chakraborty wrote:\n> This is to address a TODO I found in the JIT expression evaluation\n> code (opcode =\n> EEOP_INNER_FETCHSOME/EEOP_OUTER_FETCHSOME/EEOP_SCAN_FETCHSOME):\n> \n> * TODO: skip nvalid check if slot is fixed and known to\n> * be a virtual slot.\n\nI now think this isn't actually the right approach. Instead of doing\nthis optimization just for JIT compilation, we should do it while\ngenerating the ExprState itself. That way we don't just accelerate JITed\nprograms, but also normal interpreted execution.\n\nIOW, wherever ExecComputeSlotInfo() is called, we should only actually\npush the expression step, when ExecComputeSlotInfo does not determine\nthat a) the slot is virtual, b) and fixed (i.e. guaranteed to always be\nthe same type of slot).\n\nDoes that make sense?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Sep 2019 13:02:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-20 22:19:46 -0700, Soumyadeep Chakraborty wrote:\n> In my previous patch 0001, the resulting opblock consisted of a single\n> br instruction to it's successor opblock. Such a block represents\n> unnecessary overhead. Even though such a block would be optimized\n> away, what if optimization is not performed (perhaps due to\n> jit_optimize_above_cost)? Perhaps we could be more aggressive. We\n> could maybe remove the opblock altogether. However, such a solution\n> is not without complexity.\n\nI'm doubtful this is worth the complexity - and not that we already have\nplenty other places with zero length blocks.\n\nWRT jit_optimize_above_cost not triggering, I think we need to replace\nthe \"basic, unoptimized\" codegen path with one that does cheap\noptimizations only. The reason we don't do the full optimizations all\nthe time is that they're expensive, but there's enough optimizations\nthat are pretty cheap. At some point we'll probably need our own\noptimization pipeline, but I don't want to maintain that right now\n(i.e. if some other people want to help with this aspect, cool)...\n\nSee also: https://www.postgresql.org/message-id/20190904152438.pv4vdk3ctuvvnxh3%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Sep 2019 13:06:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
},
{
"msg_contents": "Hello Andres,\n\nThank you very much for reviewing my patch!\n\nOn Wed, Sep 25, 2019 at 1:02 PM Andres Freund <andres@anarazel.de> wrote:\n> IOW, wherever ExecComputeSlotInfo() is called, we should only actually\n> push the expression step, when ExecComputeSlotInfo does not determine\n> that a) the slot is virtual, b) and fixed (i.e. guaranteed to always be\n> the same type of slot).\n>\n> Does that make sense?\n\nThat is a great suggestion and I totally agree. I have attached a patch\nthat achieves this.\n\n--\nSoumyadeep",
"msg_date": "Wed, 25 Sep 2019 22:11:51 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
},
{
"msg_contents": "Hi Andres,\n\nThank you for your insight and the link offered just the context I needed!\n\nOn Wed, Sep 25, 2019 at 1:06 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > I'm doubtful this is worth the complexity - and not that we already have\n> plenty other places with zero length blocks.\n\nAgreed.\n\nOn Wed, Sep 25, 2019 at 1:06 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > WRT jit_optimize_above_cost not triggering, I think we need to replace\n> the \"basic, unoptimized\" codegen path with one that does cheap\n> optimizations only. The reason we don't do the full optimizations all\n> the time is that they're expensive, but there's enough optimizations\n> that are pretty cheap. At some point we'll probably need our own\n> optimization pipeline, but I don't want to maintain that right now\n> (i.e. if some other people want to help with this aspect, cool)...\n\nI would absolutely love to work on this!\n\nI was thinking the same. Perhaps not only replace the default on the\ncompile path, but also introduce additional levels. These levels could then\nbe configured at a granularity lower than the -O0, -O1, .. In other words,\nperhaps we could have levels with ad-hoc pass combinations. We could then\nmaybe have costs associated with each level. I think such a framework\nwould be ideal.\n\nTo summarize this into TODOs:\n1. Tune default compilation path optimizations.\n2. New costs per optimization level.\n3. 
Capacity to define \"levels\" with ad-hoc pass combinations that are\ncosting\nsensitive.\n\nIs this what you meant by \"optimization pipeline\"?\n\n--\nSoumyadeep",
"msg_date": "Wed, 25 Sep 2019 22:12:10 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-25 22:11:51 -0700, Soumyadeep Chakraborty wrote:\n> Thank you very much for reviewing my patch!\n> \n> On Wed, Sep 25, 2019 at 1:02 PM Andres Freund <andres@anarazel.de> wrote:\n> > IOW, wherever ExecComputeSlotInfo() is called, we should only actually\n> > push the expression step, when ExecComputeSlotInfo does not determine\n> > that a) the slot is virtual, b) and fixed (i.e. guaranteed to always be\n> > the same type of slot).\n> >\n> > Does that make sense?\n> \n> That is a great suggestion and I totally agree. I have attached a patch\n> that achieves this.\n\nI think as done, this has the slight disadvantage of removing the\nfast-path for small interpreted expressions, because that explicitly\nmatches for seeing the EEOP_*_FETCHSOME ops. Look at execExprInterp.c,\naround:\n\t/*\n\t * Select fast-path evalfuncs for very simple expressions. \"Starting up\"\n\t * the full interpreter is a measurable overhead for these, and these\n\t * patterns occur often enough to be worth optimizing.\n\t */\n\tif (state->steps_len == 3)\n\t{\n\nSo I think we'd have to add a separate fastpath for virtual slots for\nthat.\n\nWhat do you think about the attached?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 27 Sep 2019 00:22:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
},
{
"msg_contents": "Hey,\n\nI completely agree, that was an important consideration.\n\nI had some purely cosmetic suggestions:\n1. Rename ExecComputeSlotInfo to eliminate the need for the asserts.\n2. Extract return value to a bool variable for slightly better\nreadability.\n3. Taking the opportunity to use TTS_IS_VIRTUAL.\n\nv2 of patch set attached. The first two patches are unchanged, the cosmetic\nchanges are part of v2-0003-Some-minor-cosmetic-changes.patch.\n\n--\nSoumyadeep",
"msg_date": "Fri, 27 Sep 2019 23:01:05 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-27 23:01:05 -0700, Soumyadeep Chakraborty wrote:\n> I completely agree, that was an important consideration.\n> \n> I had some purely cosmetic suggestions:\n> 1. Rename ExecComputeSlotInfo to eliminate the need for the asserts.\n\nHow does renaming it do so? I feel like the asserts are a good idea\nindependent of anything else?\n\n\n> 2. Extract return value to a bool variable for slightly better\n> readability.\n\nTo me that seems clearly worse. The variable doesn't add anything, but\nneeding to track more state.\n\n\n> 3. Taking the opportunity to use TTS_IS_VIRTUAL.\n\nGood point.\n\n- Andres\n\n\n",
"msg_date": "Mon, 30 Sep 2019 01:00:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
},
{
"msg_contents": "Hi Andres,\n\nI don't feel very strongly about the changes I proposed.\n\n> > I completely agree, that was an important consideration.\n> >\n> > I had some purely cosmetic suggestions:\n> > 1. Rename ExecComputeSlotInfo to eliminate the need for the asserts.\n>\n> How does renaming it do so? I feel like the asserts are a good idea\n> independent of anything else?\n\nI felt that encoding the restriction that the function is meant to be called\nonly in the context of fetch operations in the function name itself\nensured that we don't call it from a non-fetch operation - something the\nasserts within ExecComputeSlotInfo() are guarding against.\n\n>\n> > 2. Extract return value to a bool variable for slightly better\n> > readability.\n>\n> To me that seems clearly worse. The variable doesn't add anything, but\n> needing to track more state.>\n\nAgreed.\n\n--\nSoumyadeep",
"msg_date": "Mon, 30 Sep 2019 09:14:45 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-30 09:14:45 -0700, Soumyadeep Chakraborty wrote:\n> I don't feel very strongly about the changes I proposed.\n> \n> > > I completely agree, that was an important consideration.\n> > >\n> > > I had some purely cosmetic suggestions:\n> > > 1. Rename ExecComputeSlotInfo to eliminate the need for the asserts.\n> >\n> > How does renaming it do so? I feel like the asserts are a good idea\n> > independent of anything else?\n> \n> I felt that encoding the restriction that the function is meant to be called\n> only in the context of fetch operations in the function name itself\n> ensured that we don't call it from a non-fetch operation - something the\n> asserts within ExecComputeSlotInfo() are guarding against.\n> \n> >\n> > > 2. Extract return value to a bool variable for slightly better\n> > > readability.\n> >\n> > To me that seems clearly worse. The variable doesn't add anything, but\n> > needing to track more state.>\n> \n> Agreed.\n\nI pushed this to master. Thanks for your contribution!\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Mon, 30 Sep 2019 16:07:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
},
{
"msg_contents": "Awesome! Thanks so much for all the review! :)\n\n--\nSoumyadeep",
"msg_date": "Mon, 30 Sep 2019 17:35:47 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Don't codegen deform code for virtual tuples in expr eval for\n scan fetch"
}
] |
[
{
"msg_contents": "Sybase has a feature to turn off replication at the session level: set \nreplication = off, which can be temporarily turned off when there is a \nmaintenance action on the table. Our users also want this feature.\nI add a new flag bit in xinfo, control it with a session-level variable, \nwhen set to true, this flag is written when the transaction is \ncommitted, and when the logic is decoded it abandons the transaction \nlike aborted transactions. Since PostgreSQL has two types of \nreplication, I call the variable \"logical_replication\" to avoid \nconfusion and default value is true.\n\nSample SQL\n\ninsert into a values(100);\nset logical_replication to off;\ninsert into a values(200);\nreset logical_replication;\ninsert into a values(300);\n\npg_recvlogical output(the second is not output.)\nBEGIN 492\ntable public.a: INSERT: col1[integer]:100\nCOMMIT 492\nBEGIN 494\ntable public.a: INSERT: col1[integer]:300\nCOMMIT 494\n\nI'm not sure this is the most appropriate way. What do you think?\n\nRegards,\nQuan Zongliang",
"msg_date": "Wed, 18 Sep 2019 16:39:31 +0800",
"msg_from": "Quan Zongliang <zongliang.quan@postgresdata.com>",
"msg_from_op": true,
"msg_subject": "Add a GUC variable that control logical replication"
},
{
"msg_contents": "On 2019-09-18 10:39, Quan Zongliang wrote:\n> Sybase has a feature to turn off replication at the session level: set \n> replication = off, which can be temporarily turned off when there is a \n> maintenance action on the table. Our users also want this feature.\n\nThese kinds of feature requests are always dubious because just because\nSybase behaves this way for some implementation or architectural reason\ndoesn't necessarily mean it makes sense for PostgreSQL too.\n\nWhy do you need to turn off replication when there is \"maintenance\" on a\ntable? What does that even mean?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 18 Sep 2019 11:11:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a GUC variable that control logical replication"
},
{
"msg_contents": "On 2019/9/18 17:11, Peter Eisentraut wrote:\n> On 2019-09-18 10:39, Quan Zongliang wrote:\n>> Sybase has a feature to turn off replication at the session level: set\n>> replication = off, which can be temporarily turned off when there is a\n>> maintenance action on the table. Our users also want this feature.\n> \n> These kinds of feature requests are always dubious because just because\n> Sybase behaves this way for some implementation or architectural reason\n> doesn't necessarily mean it makes sense for PostgreSQL too.\n> \nAgree\n> Why do you need to turn off replication when there is \"maintenance\" on a\n> table? What does that even mean?\n> \nIn a table, the user only keep data for a period of time and delete \nexpired records every day, involving about 10 million to 20 million \nrecords at a time. They want to not pass similar bulk operations in \nlogical replication.\nMy English is bad that I use the wrong word “maintenance” in my description.\n\n\n\n",
"msg_date": "Wed, 18 Sep 2019 17:33:44 +0800",
"msg_from": "Quan Zongliang <zongliang.quan@postgresdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add a GUC variable that control logical replication"
},
{
"msg_contents": "On 2019-09-18 11:33, Quan Zongliang wrote:\n> On 2019/9/18 17:11, Peter Eisentraut wrote:\n>> Why do you need to turn off replication when there is \"maintenance\" on a\n>> table? What does that even mean?\n>>\n> In a table, the user only keep data for a period of time and delete \n> expired records every day, involving about 10 million to 20 million \n> records at a time. They want to not pass similar bulk operations in \n> logical replication.\n\nYou can probably achieve that using ALTER PUBLICATION to disable\npublication of deletes or truncates, as the case may be, either\npermanently or just for the duration of the operations you want to skip.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 19 Oct 2019 19:10:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a GUC variable that control logical replication"
},
{
"msg_contents": "Em sáb, 19 de out de 2019 às 14:11, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> escreveu:\n>\n> On 2019-09-18 11:33, Quan Zongliang wrote:\n> > On 2019/9/18 17:11, Peter Eisentraut wrote:\n> >> Why do you need to turn off replication when there is \"maintenance\" on a\n> >> table? What does that even mean?\n> >>\n> > In a table, the user only keep data for a period of time and delete\n> > expired records every day, involving about 10 million to 20 million\n> > records at a time. They want to not pass similar bulk operations in\n> > logical replication.\n>\n> You can probably achieve that using ALTER PUBLICATION to disable\n> publication of deletes or truncates, as the case may be, either\n> permanently or just for the duration of the operations you want to skip.\n>\n... then you are skipping all tables in the publication. I think this\nfeature is not essential for unidirectional logical replication.\nHowever, it is important for multi-master replication. Data\nsynchronization tool will generate transactions with rows that are\nalready in the other node(s) so those transactions can't be replicated\nto avoid (expensive) conflicts.\n\n\n--\n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Sat, 19 Oct 2019 19:23:36 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Add a GUC variable that control logical replication"
},
{
"msg_contents": "On 2019-10-20 00:23, Euler Taveira wrote:\n>> You can probably achieve that using ALTER PUBLICATION to disable\n>> publication of deletes or truncates, as the case may be, either\n>> permanently or just for the duration of the operations you want to skip.\n>>\n> ... then you are skipping all tables in the publication.\n\nYou can group tables into different publications and set the \nsubscription to subscribe to multiple publications if you need this kind \nof granularity.\n\nIn any case, this kind of thing needs to be handled by the decoding \nplugin based on its configuration policies and depending on its needs. \nFor example, let's say you have two decoding plugins running: one for a \nreplication system and one for writing an audit log. It would not be \nappropriate to disable logging for both of them because of some \nperformance optimization for one of them. And it would also not be \nappropriate to do this with a USERSET setting.\n\nIf we need different hooks or more DDL commands do this better, then \nthat can be considered. But this seems to be the wrong way to do it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 1 Nov 2019 13:49:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a GUC variable that control logical replication"
},
{
"msg_contents": "On 2019/11/1 20:49, Peter Eisentraut wrote:\n> On 2019-10-20 00:23, Euler Taveira wrote:\n>>> You can probably achieve that using ALTER PUBLICATION to disable\n>>> publication of deletes or truncates, as the case may be, either\n>>> permanently or just for the duration of the operations you want to skip.\n>>>\n>> ... then you are skipping all tables in the publication.\n> \n> You can group tables into different publications and set the \n> subscription to subscribe to multiple publications if you need this kind \n> of granularity.\n> \n> In any case, this kind of thing needs to be handled by the decoding \n> plugin based on its configuration policies and depending on its needs. \n> For example, let's say you have two decoding plugins running: one for a \n> replication system and one for writing an audit log. It would not be \n> appropriate to disable logging for both of them because of some \n> performance optimization for one of them. And it would also not be \n> appropriate to do this with a USERSET setting.\n> \n> If we need different hooks or more DDL commands do this better, then \n> that can be considered. But this seems to be the wrong way to do it.\n> \n\nWhat the user needs is the same replication link that selectively skips \nsome transactions. And this choice only affects transactions that are \ndoing bulk delete sessions. The operations of other sessions are not \naffected and can continue to output replication messages.\nFor example, session 1 wants to bulk delete 1 million old data from the \nT1 table, which can be done without replication. At the same time, \nsession 2 deletes 10 records from T1, which is expected to be passed on \nthrough replication.\nTherefore, the two decoders can not meet this requirement. 
It is also \ninappropriate to temporarily disable subscriptions because it skips all \ntransactions for a certain period of time.\n\n-- \n权宗亮\n神州飞象(北京)数据科技有限公司\n我们的力量源自最先进的开源数据库PostgreSQL\nzongliang.quan@postgresdata.com\n\n\n\n",
"msg_date": "Wed, 6 Nov 2019 22:01:43 +0800",
"msg_from": "Quan Zongliang <zongliang.quan@postgresdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add a GUC variable that control logical replication"
},
{
"msg_contents": "On Wed, 18 Sep 2019 at 16:39, Quan Zongliang <\nzongliang.quan@postgresdata.com> wrote:\n\n>\n> Sybase has a feature to turn off replication at the session level: set\n> replication = off, which can be temporarily turned off when there is a\n> maintenance action on the table. Our users also want this feature.\n> I add a new flag bit in xinfo, control it with a session-level variable,\n> when set to true, this flag is written when the transaction is\n> committed, and when the logic is decoded it abandons the transaction\n> like aborted transactions. Since PostgreSQL has two types of\n> replication, I call the variable \"logical_replication\" to avoid\n> confusion and default value is true.\n>\n\nThere's something related to this already. You can set the replication\norigin for the transaction to the special value DoNotReplicateId\n(replication origin id 65535). This will suppress replication of the\ntransaction, at least for output plugins that're aware of replication\norigins.\n\nThis isn't presently exposed to SQL, it's there for the use of logical\nreplication extensions. It's possible to expose it with a pretty trivial C\nfunction in an extension.\n\nI think it's a bit of a hack TBH, it's something I perpetrated sometime in\nthe 9.4 series when we needed a way to suppress replication of individual\ntransactions. It originated out of core, so the original design was\nconstrained in how it worked, and maybe it would've actually made more\nsense to use an xlinfo flag. Probably not worth changing now though.\n\nBe extremely careful though. If you're hiding things from logical\nreplication you can get all sorts of confusing and exciting results. 
I very\nstrongly suggest you make anything like this superuser-only.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Wed, 18 Sep 2019 at 16:39, Quan Zongliang <zongliang.quan@postgresdata.com> wrote:\nSybase has a feature to turn off replication at the session level: set \nreplication = off, which can be temporarily turned off when there is a \nmaintenance action on the table. Our users also want this feature.\nI add a new flag bit in xinfo, control it with a session-level variable, \nwhen set to true, this flag is written when the transaction is \ncommitted, and when the logic is decoded it abandons the transaction \nlike aborted transactions. Since PostgreSQL has two types of \nreplication, I call the variable \"logical_replication\" to avoid \nconfusion and default value is true.There's something related to this already. You can set the replication origin for the transaction to the special value DoNotReplicateId (replication origin id 65535). This will suppress replication of the transaction, at least for output plugins that're aware of replication origins.This isn't presently exposed to SQL, it's there for the use of logical replication extensions. It's possible to expose it with a pretty trivial C function in an extension.I think it's a bit of a hack TBH, it's something I perpetrated sometime in the 9.4 series when we needed a way to suppress replication of individual transactions. It originated out of core, so the original design was constrained in how it worked, and maybe it would've actually made more sense to use an xlinfo flag. Probably not worth changing now though.Be extremely careful though. If you're hiding things from logical replication you can get all sorts of confusing and exciting results. I very strongly suggest you make anything like this superuser-only.-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Sun, 10 Nov 2019 17:20:24 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a GUC variable that control logical replication"
},
{
"msg_contents": "On Wed, Nov 06, 2019 at 10:01:43PM +0800, Quan Zongliang wrote:\n> What the user needs is the same replication link that selectively skips some\n> transactions. And this choice only affects transactions that are doing bulk\n> delete sessions. The operations of other sessions are not affected and can\n> continue to output replication messages.\n> For example, session 1 wants to bulk delete 1 million old data from the T1\n> table, which can be done without replication. At the same time, session 2\n> deletes 10 records from T1, which is expected to be passed on through\n> replication.\n> Therefore, the two decoders can not meet this requirement. It is also\n> inappropriate to temporarily disable subscriptions because it skips all\n> transactions for a certain period of time.\n\nHmm. The patch discussed on this thread does not have much support\nfrom Peter and Craig, so I am marking it as RwF.\n--\nMichael",
"msg_date": "Thu, 28 Nov 2019 12:53:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add a GUC variable that control logical replication"
},
{
"msg_contents": "On Thu, 28 Nov 2019 at 11:53, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Nov 06, 2019 at 10:01:43PM +0800, Quan Zongliang wrote:\n> > What the user needs is the same replication link that selectively skips\n> some\n> > transactions. And this choice only affects transactions that are doing\n> bulk\n> > delete sessions. The operations of other sessions are not affected and\n> can\n> > continue to output replication messages.\n> > For example, session 1 wants to bulk delete 1 million old data from the\n> T1\n> > table, which can be done without replication. At the same time, session 2\n> > deletes 10 records from T1, which is expected to be passed on through\n> > replication.\n> > Therefore, the two decoders can not meet this requirement. It is also\n> > inappropriate to temporarily disable subscriptions because it skips all\n> > transactions for a certain period of time.\n>\n> Hmm. The patch discussed on this thread does not have much support\n> from Peter and Craig, so I am marking it as RwF.\n>\n>\nYeah. I'm not against it as such. But I'd like to either see it work by\nexposing the ability to use DoNotReplicateId to SQL or if that's not\nsatisfactory, potentially replace that mechanism with the newly added one\nand emulate DoNotReplicateId for BC.\n\nI don't want two orthogonal ways to say \"don't consider this for logical\nreplication\".\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Thu, 28 Nov 2019 at 11:53, Michael Paquier <michael@paquier.xyz> wrote:On Wed, Nov 06, 2019 at 10:01:43PM +0800, Quan Zongliang wrote:\n> What the user needs is the same replication link that selectively skips some\n> transactions. And this choice only affects transactions that are doing bulk\n> delete sessions. 
The operations of other sessions are not affected and can\n> continue to output replication messages.\n> For example, session 1 wants to bulk delete 1 million old data from the T1\n> table, which can be done without replication. At the same time, session 2\n> deletes 10 records from T1, which is expected to be passed on through\n> replication.\n> Therefore, the two decoders can not meet this requirement. It is also\n> inappropriate to temporarily disable subscriptions because it skips all\n> transactions for a certain period of time.\n\nHmm. The patch discussed on this thread does not have much support\nfrom Peter and Craig, so I am marking it as RwF.Yeah. I'm not against it as such. But I'd like to either see it work by exposing the ability to use DoNotReplicateId to SQL or if that's not satisfactory, potentially replace that mechanism with the newly added one and emulate DoNotReplicateId for BC.I don't want two orthogonal ways to say \"don't consider this for logical replication\".-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Mon, 2 Dec 2019 13:52:28 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a GUC variable that control logical replication"
}
] |
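The skip-by-origin mechanism Craig describes can be modeled outside the server. The Python sketch below is purely illustrative — the record layout and function names are invented here, and only the DoNotReplicateId sentinel value (65535) comes from the thread — but it shows the intended behavior: a commit tagged with the do-not-replicate origin is dropped by decoding exactly like an aborted transaction, while concurrent sessions' changes on the same table still stream out.

```python
# Toy model of origin-aware logical decoding. Only the sentinel value
# 65535 (DoNotReplicateId) is taken from the thread; everything else
# (data shapes, names) is invented for illustration.
DO_NOT_REPLICATE_ID = 65535  # special replication origin: suppress output

def decode(committed_transactions):
    """Return decoded changes, skipping transactions whose session set
    the do-not-replicate origin (treated like aborted transactions)."""
    output = []
    for xact in committed_transactions:
        if xact["origin"] == DO_NOT_REPLICATE_ID:
            continue  # e.g. a bulk-delete session that opted out
        output.extend(xact["changes"])
    return output

# Session 1's bulk delete (xid 493) is suppressed; the ordinary
# transactions 492 and 494 are still replicated.
log = [
    {"xid": 492, "origin": 0, "changes": ["INSERT: col1:100"]},
    {"xid": 493, "origin": DO_NOT_REPLICATE_ID,
     "changes": ["DELETE: old row"] * 3},
    {"xid": 494, "origin": 0, "changes": ["INSERT: col1:300"]},
]
print(decode(log))  # ['INSERT: col1:100', 'INSERT: col1:300']
```

Note that this is per-transaction filtering inside the decoder, which is why it avoids the all-or-nothing effect of disabling a subscription.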
[
{
"msg_contents": "Hi!\n\nUnfortunately, jsonpath lexer, in contrast to jsonpath parser, was written by\nTeodor and me without a proper attention to the stanard. JSON path lexics is\nis borrowed from the external ECMAScript [1], and we did not study it carefully.\n\nThere were numerous deviations from the ECMAScript standard in our jsonpath\nimplementation that were mostly fixed in the attached patch:\n\n1. Identifiers (unquoted JSON key names) should start from the one of (see [2]):\n - Unicode symbol having Unicode property \"ID_Start\" (see [3])\n - Unicode escape sequence '\\uXXXX' or '\\u{X...}'\n - '$'\n - '_'\n\n And they should continue with the one of:\n - Unicode symbol having Unicode property \"ID_Continue\" (see [3])\n - Unicode escape sequence\n - '$'\n - ZWNJ\n - ZWJ\n\n2. '$' is also allowed inside the identifiers, so it is possible to write\n something like '$.a$$b'.\n\n3. Variable references '$var' are regular identifiers simply starting from the\n '$' sign, and there is no syntax like '$\"var\"', because quotes are not\n allowed in identifiers.\n\n4. Even if the Unicode escape sequence '\\uXXXX' is used, it cannot produce\n special symbols or whitespace, because the identifiers are displayed without\n quoting (i.e. '$\\u{20}' is not possible to display as '$\" \"' or even more as\n string '\"$ \"').\n\n5. All codepoints in '\\u{XXXXXX}' greater than 0x10FFFF should be forbidden.\n\n6. 6 single-character escape sequences (\\b \\t \\r \\f \\n \\v) should only be\n supported inside quoted strings.\n\n\nI don't know if it is possible to check Unicode properties \"ID_Start\" and\n\"ID_Continue\" in Postgres, and what ZWNJ/ZWJ is. 
Now, identifier's starting\ncharacter set is simply determined by the exclusion of all recognized special\ncharacters.\n\n\nThe patch is not so simple, but I believe that it's not too late to fix v12.\n\n\n[1] https://www.ecma-international.org/ecma-262/10.0/index.html#sec-ecmascript-language-lexical-grammar\n[2] https://www.ecma-international.org/ecma-262/10.0/index.html#sec-names-and-keywords\n[3] https://unicode.org/reports/tr31/\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 18 Sep 2019 18:10:27 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Fix parsing of identifiers in jsonpath"
},
{
"msg_contents": "On 9/18/19 11:10 AM, Nikita Glukhov wrote:\n\n> 4. Even if the Unicode escape sequence '\\uXXXX' is used, it cannot produce\n> special symbols or whitespace, because the identifiers are displayed\n> ...\n> I don't know if it is possible to check Unicode properties \"ID_Start\" and\n> \"ID_Continue\" in Postgres, and what ZWNJ/ZWJ is.\n\nZWNJ and ZWJ are U+200C and U+200D (mentioned in [1]).\n\nAlso, it's not just that a Unicode escape sequence can't make a\nspecial symbol or whitespace; it can't make any character that's\nnot allowed there by the other rules:\n\n\"A UnicodeEscapeSequence cannot be used to put a code point into an\nIdentifierName that would otherwise be illegal. In other words, if a \\\nUnicodeEscapeSequence sequence were replaced by the SourceCharacter it\ncontributes, the result must still be a valid IdentifierName that has\nthe exact same sequence of SourceCharacter elements as the original\nIdentifierName. All interpretations of IdentifierName within this\nspecification are based upon their actual code points regardless of\nwhether or not an escape sequence was used to contribute any particular\ncode point.\"\n\n\nA brief glance through src/backend/utils/mb/Unicode shows that the\nMakefile does download a bunch of stuff, but maybe not the Unicode\ncharacter data that would allow testing ID_Start and ID_Continue?\nI'm not sure.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 18 Sep 2019 11:29:54 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix parsing of identifiers in jsonpath"
},
{
"msg_contents": "Nikita Glukhov <n.gluhov@postgrespro.ru> writes:\n> I don't know if it is possible to check Unicode properties \"ID_Start\" and\n> \"ID_Continue\" in Postgres, and what ZWNJ/ZWJ is. Now, identifier's starting\n> character set is simply determined by the exclusion of all recognized special\n> characters.\n\nTBH, I think you should simply ignore any aspect of any of these standards\nthat is defined by reference to Unicode. We are not necessarily dealing\nwith a Unicode character set, so at best, references to things like ZWNJ\nare unreachable no-ops in a lot of environments.\n\nAs a relevant example, modern SQL defines whitespace in terms of Unicode[1],\na fact that we have ignored from the start and will likely continue to\ndo so.\n\nYou could do a lot worse than to just consider identifiers to be the same\nstrings as our SQL lexer would do (modulo things like \"$\" that have\nspecial status in the path language).\n\n\t\t\tregards, tom lane\n\n[1] cf 4.2.4 \"Character repertoires\" in SQL:2011\n\n\n",
"msg_date": "Wed, 18 Sep 2019 17:28:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix parsing of identifiers in jsonpath"
},
{
"msg_contents": "Attached v2 patch rebased onto current master.\n\nOn 18.09.2019 18:10, Nikita Glukhov wrote:\n\n> Unfortunately, jsonpath lexer, in contrast to jsonpath parser, was written by\n> Teodor and me without a proper attention to the stanard. JSON path lexics is\n> is borrowed from the external ECMAScript [1], and we did not study it carefully.\n>\n> There were numerous deviations from the ECMAScript standard in our jsonpath\n> implementation that were mostly fixed in the attached patch:\n>\n> 1. Identifiers (unquoted JSON key names) should start from the one of (see [2]):\n> - Unicode symbol having Unicode property \"ID_Start\" (see [3])\n> - Unicode escape sequence '\\uXXXX' or '\\u{X...}'\n> - '$'\n> - '_'\n>\n> And they should continue with the one of:\n> - Unicode symbol having Unicode property \"ID_Continue\" (see [3])\n> - Unicode escape sequence\n> - '$'\n> - ZWNJ\n> - ZWJ\n>\n> 2. '$' is also allowed inside the identifiers, so it is possible to write\n> something like '$.a$$b'.\n>\n> 3. Variable references '$var' are regular identifiers simply starting from the\n> '$' sign, and there is no syntax like '$\"var\"', because quotes are not\n> allowed in identifiers.\n>\n> 4. Even if the Unicode escape sequence '\\uXXXX' is used, it cannot produce\n> special symbols or whitespace, because the identifiers are displayed without\n> quoting (i.e. '$\\u{20}' is not possible to display as '$\" \"' or even more as\n> string '\"$ \"').\n>\n> 5. All codepoints in '\\u{XXXXXX}' greater than 0x10FFFF should be forbidden.\n>\n> 6. 6 single-character escape sequences (\\b \\t \\r \\f \\n \\v) should only be\n> supported inside quoted strings.\n>\n>\n> I don't know if it is possible to check Unicode properties \"ID_Start\" and\n> \"ID_Continue\" in Postgres, and what ZWNJ/ZWJ is. 
Now, identifier's starting\n> character set is simply determined by the exclusion of all recognized special\n> characters.\n>\n>\n> The patch is not so simple, but I believe that it's not too late to fix v12.\n>\n>\n> [1]https://www.ecma-international.org/ecma-262/10.0/index.html#sec-ecmascript-language-lexical-grammar\n> [2]https://www.ecma-international.org/ecma-262/10.0/index.html#sec-names-and-keywords\n> [3]https://unicode.org/reports/tr31/\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 2 Oct 2019 16:10:18 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Fix parsing of identifiers in jsonpath"
}
] |
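The identifier rules enumerated in the first message of this thread can be prototyped in a few lines of Python, because `str.isidentifier()` follows Unicode UAX #31 (XID_Start/XID_Continue), which is close to — though not identical to — ECMAScript's ID_Start/ID_Continue classes. The sketch below is an approximation, not the patch's actual logic: the helper names are invented, ZWNJ/ZWJ handling is omitted, and '$' is handled by mapping it onto '_', which UAX #31 does accept anywhere.

```python
import re

# Matches both escape forms from the thread: \uXXXX and \u{X...}.
_ESC = re.compile(r'\\u(?:([0-9A-Fa-f]{4})|\{([0-9A-Fa-f]+)\})')

def _unescape(raw):
    """Expand Unicode escapes; reject codepoints above U+10FFFF (rule 5)."""
    def repl(m):
        cp = int(m.group(1) or m.group(2), 16)
        if cp > 0x10FFFF:
            raise ValueError("codepoint above U+10FFFF")
        return chr(cp)
    return _ESC.sub(repl, raw)

def is_jsonpath_identifier(raw):
    """Approximate check of the ECMAScript-style identifier rules above.

    Escapes are expanded first, so an escape that yields an illegal
    character (e.g. whitespace, rule 4) makes the whole name invalid,
    mirroring ECMA-262's rule that escapes cannot smuggle in characters
    the plain grammar would reject.
    """
    try:
        s = _unescape(raw)
    except ValueError:
        return False
    if not s:
        return False
    # '$' and '_' are allowed anywhere (rules 1-2); fold '$' to '_' so
    # Python's UAX #31-based isidentifier() can judge the rest.
    return s.replace('$', '_').isidentifier()

print(is_jsonpath_identifier("a$$b"), is_jsonpath_identifier("1abc"))  # True False
```

For example, `$var` and `a$$b` pass, while `\u{20}` fails because the escape expands to a space that no identifier position allows.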
[
{
"msg_contents": "\nHi hackers,\n\nWe have a customer which suffer from Postgres performance degradation \nwhen there are large number of connections performing inserts in the \nsame table.\nIn 2016 Robert Haas has committed optimization of relation extension \n719c84c1:\n\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Fri Apr 8 02:04:46 2016 -0400\n\n Extend relations multiple blocks at a time to improve scalability.\n\n Contention on the relation extension lock can become quite fierce when\n multiple processes are inserting data into the same relation at the \nsame\n time at a high rate. Experimentation shows the extending the relation\n multiple blocks at a time improves scalability.\n\nBut this optimization is applied only for heap relations \n(RelationGetBufferForTuple).\nAnd here most of backends are competing for index relation extension lock:\n\n /*\n * Extend the relation by one page.\n *\n * We have to use a lock to ensure no one else is extending the \nrel at\n * the same time, else we will both try to initialize the same new\n * page. 
We can skip locking for new or temp relations, however,\n * since no one else could be accessing them.\n */\n needLock = !RELATION_IS_LOCAL(rel);\n\n if (needLock)\n LockRelationForExtension(rel, ExclusiveLock);\n\n\n#0 0x00007ff1787065e3 in __epoll_wait_nocancel () from /lib64/libc.so.6\n#1 0x000000000072e39e in WaitEventSetWaitBlock (nevents=1, \noccurred_events=0x7ffd03c0ddf0, cur_timeout=-1, set=0x2cb1838) at \nlatch.c:1048\n#2 WaitEventSetWait (set=set@entry=0x2cb1838, timeout=timeout@entry=-1, \noccurred_events=occurred_events@entry=0x7ffd03c0ddf0, \nnevents=nevents@entry=1, wait_event_info=wait_event_info@entry=50331649) a\nt latch.c:1000\n#3 0x000000000072e7fb in WaitLatchOrSocket (latch=0x2aec0cf5c844, \nwakeEvents=wakeEvents@entry=1, sock=sock@entry=-1, timeout=-1, \ntimeout@entry=0, wait_event_info=50331649) at latch.c:385\n#4 0x000000000072e8b0 in WaitLatch (latch=<optimized out>, \nwakeEvents=wakeEvents@entry=1, timeout=timeout@entry=0, \nwait_event_info=<optimized out>) at latch.c:339\n#5 0x000000000073e2c6 in ProcSleep \n(locallock=locallock@entry=0x2ace708, \nlockMethodTable=lockMethodTable@entry=0x9cee80 <default_lockmethod>) at \nproc.c:1284\n#6 0x0000000000738d92 in WaitOnLock \n(locallock=locallock@entry=0x2ace708, owner=owner@entry=0x28f2d10) at \nlock.c:1750\n#7 0x000000000073a216 in LockAcquireExtended \n(locktag=locktag@entry=0x7ffd03c0e170, lockmode=lockmode@entry=7, \nsessionLock=sessionLock@entry=false, dontWait=dontWait@entry=false, \nreportMemoryError=rep\nortMemoryError@entry=true, locallockp=locallockp@entry=0x0) at lock.c:1032\n#8 0x000000000073a8d4 in LockAcquire \n(locktag=locktag@entry=0x7ffd03c0e170, lockmode=lockmode@entry=7, \nsessionLock=sessionLock@entry=false, dontWait=dontWait@entry=false) at \nlock.c:695\n#9 0x0000000000737c36 in LockRelationForExtension \n(relation=relation@entry=0x3089c30, lockmode=lockmode@entry=7) at lmgr.c:362\n#10 0x00000000004d2209 in _bt_getbuf (rel=rel@entry=0x3089c30, 
\nblkno=blkno@entry=4294967295, access=access@entry=2) at nbtpage.c:829\n#11 0x00000000004d013b in _bt_split (newitemonleft=true, \nnewitem=0x2cb15b8, newitemsz=24, newitemoff=63, firstright=138, cbuf=0, \nbuf=27480727, rel=0x3089c30) at nbtinsert.c:1156\n#12 _bt_insertonpg (rel=rel@entry=0x3089c30, buf=buf@entry=27480727, \ncbuf=cbuf@entry=0, stack=stack@entry=0x2cb1758, \nitup=itup@entry=0x2cb15b8, newitemoff=63, \nsplit_only_page=split_only_page@entry=fals\ne) at nbtinsert.c:909\n#13 0x00000000004d1b1c in _bt_doinsert (rel=rel@entry=0x3089c30, \nitup=itup@entry=0x2cb15b8, \ncheckUnique=checkUnique@entry=UNIQUE_CHECK_NO, \nheapRel=heapRel@entry=0x3088d70) at nbtinsert.c:306\n#14 0x00000000004d4651 in btinsert (rel=0x3089c30, values=<optimized \nout>, isnull=<optimized out>, ht_ctid=0x2bd230c, heapRel=0x3088d70, \ncheckUnique=UNIQUE_CHECK_NO, indexInfo=0x2caf828) at nbtree.c:20\n5\n#15 0x000000000060a41a in ExecInsertIndexTuples \n(slot=slot@entry=0x2c875f0, tupleid=tupleid@entry=0x2bd230c, \nestate=estate@entry=0x2c85dc0, noDupErr=noDupErr@entry=false, \nspecConflict=specConflict@entr\ny=0x0, arbiterIndexes=arbiterIndexes@entry=0x0) at execIndexing.c:386\n#16 0x000000000062d132 in ExecInsert (mtstate=mtstate@entry=0x2c86110, \nslot=0x2c875f0, planSlot=planSlot@entry=0x2c875f0, \nestate=estate@entry=0x2c85dc0, canSetTag=<optimized out>) at \nnodeModifyTable.c:\n535\n#17 0x000000000062e1b9 in ExecModifyTable (pstate=0x2c86110) at \nnodeModifyTable.c:2159\n\n\nI wonder if such optimization should also be used for index relations?\nCan we just make RelationAddExtraBlocks public (non static) and use it \nin B-Tree code in the same way as in hio.c?\nOr it is better to provide some special function for extending arbitrary \nrelation?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 18 Sep 2019 18:38:55 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Relation extension lock bottleneck"
}
] |
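The effect of multi-block extension on extension-lock traffic can be illustrated with a trivial model. This is not PostgreSQL code — just a back-of-the-envelope sketch (with an invented function name) of why commit 719c84c1 helps heaps and why the same idea could help `_bt_getbuf`: each acquisition of the relation extension lock is amortized over the number of blocks added while it is held.

```python
def extension_lock_acquisitions(blocks_needed, extend_by):
    """Count lock round trips when each holder of the relation
    extension lock adds `extend_by` blocks before releasing it.
    Simplified single-writer model; real contention behavior also
    depends on how waiters pile up behind the lock."""
    acquisitions = 0
    allocated = 0
    while allocated < blocks_needed:
        acquisitions += 1        # LockRelationForExtension(rel, ExclusiveLock)
        allocated += extend_by   # add extra blocks while the lock is held
    return acquisitions

# One-block-at-a-time extension takes 1000 lock round trips for 1000
# new blocks; batching 16 at a time cuts that to 63.
print(extension_lock_acquisitions(1000, 1),
      extension_lock_acquisitions(1000, 16))  # 1000 63
```

The same arithmetic is why making something like RelationAddExtraBlocks usable from the B-tree code is attractive: the index extension path above acquires the lock once per page.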
[
{
"msg_contents": "In the lengthy thread on block-level incremental backup,[1] both\nVignesh C[2] and Stephen Frost[3] have suggested storing a manifest as\npart of each backup, somethig that could be useful not only for\nincremental backups but also for full backups. I initially didn't\nthink this was necessary,[4] but some of my colleagues figured out\nthat my design was broken, because my proposal was to detect new\nblocks just using LSN, and that ignores the fact that CREATE DATABASE\nand ALTER TABLE .. SET TABLESPACE do physical copies without bumping\npage LSNs, which I knew but somehow forgot about. Fortunately, some\nof my colleagues realized my mistake in testing.[5] Because of this\nproblem, for an LSN-based approach to work, we'll need to send not\nonly an LSN, but also a list of files (and file sizes) that exist in\nthe previous full backup; so, some kind of backup manifest now seems\nlike a good idea to me.[6] That whole approach might still be dead on\narrival if it's possible to add new blocks with old LSNs to existing\nfiles,[7] but there seems to be room to hope that there are no such\ncases.[8]\n\nSo, let's suppose we invent a backup manifest. What should it contain?\nI imagine that it would consist of a list of files, and the lengths of\nthose files, and a checksum for each file. I think you should have a\nchoice of what kind of checksums to use, because algorithms that used\nto seem like good choices (e.g. MD5) no longer do; this trend can\nprobably be expected to continue. Even if we initially support only\none kind of checksum -- presumably SHA-something since we have code\nfor that already for SCRAM -- I think that it would also be a good\nidea to allow for future changes. And maybe it's best to just allow a\nchoice of SHA-224, SHA-256, SHA-384, and SHA-512 right out of the\ngate, so that we can avoid bikeshedding over which one is secure\nenough. I guess we'll still have to argue about the default. 
I also\nthink that it should be possible to build a manifest with no\nchecksums, so that one need not pay the overhead of computing\nchecksums if one does not wish. Of course, such a manifest is of much\nless utility for checking backup integrity, but you can still check\nthat you've got the right files, which is noticeably better than\nnothing. The manifest should probably also contain a checksum of its\nown contents so that the integrity of the manifest itself can be\nverified. And maybe a few other bits of metadata, but I'm not sure\nexactly what. Ideas?\n\nOnce we invent the concept of a backup manifest, what do we need to do\nwith them? I think we'd want three things initially:\n\n(1) When taking a backup, have the option (perhaps enabled by default)\nto include a backup manifest.\n(2) Given an existing backup that has not got a manifest, construct one.\n(3) Cross-check a manifest against a backup and complain about extra\nfiles, missing files, size differences, or checksum mismatches.\n\nOne thing I'm not quite sure about is where to store the backup\nmanifest. If you take a base backup in tar format, you get base.tar,\npg_wal.tar (unless -Xnone), and an additional tar file per tablespace.\nDoes the backup manifest go into base.tar? Get written into a separate\nfile outside of any tar archive? Something else? And what about a\nplain-format backup? 
I suppose then we should just write the manifest\ninto the top level of the main data directory, but perhaps someone has\nanother idea.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BTgmoYxQLL%3DmVyN90HZgH0X_EUrw%2BaZ0xsXJk7XV3-3LygTvA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CALDaNm310fUZ72nM2n%3DcD0eSHKRAoJPuCyvvR0dhTEZ9Oytyzg%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/20190916143817.GA6962%40tamriel.snowman.net\n[4] https://www.postgresql.org/message-id/CA%2BTgmoaj-zw4Mou4YBcJSkHmQM%2BJA-dAVJnRP8zSASP1S4ZVgw%40mail.gmail.com\n[5] https://www.postgresql.org/message-id/CAM2%2B6%3DXfJX%3DKXvpTgDvgd1rQjya_Am27j4UvJtL3nA%2BJMCTGVQ%40mail.gmail.com\n[6] https://www.postgresql.org/message-id/CA%2BTgmoYg9i8TZhyjf8MqCyU8unUVuW%2B03FeBF1LGDu_-eOONag%40mail.gmail.com\n[7] https://www.postgresql.org/message-id/CA%2BTgmoYT9xODgEB6y6j93hFHqobVcdiRCRCp0dHh%2BfFzZALn%3Dw%40mail.gmail.com\nand nearby messages\n[8] https://www.postgresql.org/message-id/20190916173933.GE6962%40tamriel.snowman.net\n\n\n",
"msg_date": "Wed, 18 Sep 2019 13:48:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "backup manifests"
},
{
"msg_contents": "Hi Robert,\n\nOn 9/18/19 1:48 PM, Robert Haas wrote:\n> That whole approach might still be dead on\n> arrival if it's possible to add new blocks with old LSNs to existing\n> files,[7] but there seems to be room to hope that there are no such\n> cases.[8]\n\nI sure hope there are no such cases, but we should be open to the idea\njust in case.\n\n> So, let's suppose we invent a backup manifest. What should it contain?\n> I imagine that it would consist of a list of files, and the lengths of\n> those files, and a checksum for each file. \n\nThese are essential.\n\nAlso consider adding the timestamp. You have justifiable concerns about\nusing timestamps for deltas and I get that. However, there are a number\nof methods that can be employed to make it *much* safer. I won't go\ninto that here since it is an entire thread in itself. Suffice to say\nwe can detect many anomalies in the timestamps and require a checksum\nbackup when we see them. I'm really interested in scanning the WAL for\nchanged files but that method is very complex and getting it right might\nbe harder than ensuring FS checksums are reliable. Still worth trying,\nthough, since the benefits are enormous. We are planning to use\ntimestamp + size + wal data to do incrementals if we get there.\n\nConsider adding a reference to each file that specifies where the file\ncan be found if it is not in this backup. As I understand the\npg_basebackup proposal, it would only be implementing differential\nbackups, i.e. an incremental that is *only* based on the last full\nbackup. So, the reference can be inferred in this case. However, if\nthe user selects the wrong full backup on restore, and we have labeled\neach backup, then a differential restore with references against the\nwrong full backup would result in a hard error rather than corruption.\n\n> I think you should have a\n> choice of what kind of checksums to use, because algorithms that used\n> to seem like good choices (e.g. 
MD5) no longer do; this trend can\n> probably be expected to continue. Even if we initially support only\n> one kind of checksum -- presumably SHA-something since we have code\n> for that already for SCRAM -- I think that it would also be a good\n> idea to allow for future changes. And maybe it's best to just allow a\n> choice of SHA-224, SHA-256, SHA-384, and SHA-512 right out of the\n> gate, so that we can avoid bikeshedding over which one is secure\n> enough. I guess we'll still have to argue about the default. \n\nBased on my original calculations (which sadly I don't have anymore),\nthe combination of SHA1, size, and file name is *extremely* unlikely to\ngenerate a collision. As in, unlikely to happen before the end of the\nuniverse kind of unlikely. Though, I guess it depends on your\nexpectations for the lifetime of the universe.\n\nThese checksums don't have to be cryptographically secure, in the sense\nthat you could infer the plaintext from the checksum. They just need to\nhave a suitably low collision rate. These days I would choose something\nwith more bits because the computation time is similar, though the\nlarger size requires more storage.\n\n> I also\n> think that it should be possible to build a manifest with no\n> checksums, so that one need not pay the overhead of computing\n> checksums if one does not wish. \n\nOur benchmarks have indicated that checksums only account for about 1%\nof total cpu time when gzip -6 compression is used. Without compression\nthe percentage may be higher of course, but in that case we find network\nlatency is the primary bottleneck.\n\nFor S3 backups we do a SHA1 hash for our manifest, a SHA256 hash for\nauthv4 and a good-old-fashioned MD5 checksum for each upload part. This\nis barely noticeable when compression is enabled.\n\n> Of course, such a manifest is of much\n> less utility for checking backup integrity, but you can still check\n> that you've got the right files, which is noticeably better than\n> nothing. 
\n\nAbsolutely -- and yet. There was a time when we made checksums optional\nbut eventually gave up on that once we profiled and realized how low the\ncost was vs. the benefit.\n\n> The manifest should probably also contain a checksum of its\n> own contents so that the integrity of the manifest itself can be\n> verified. \n\nThis is a good idea. Amazingly we've never seen a manifest checksum\nerror in the field but it's only a matter of time.\n\nAnd maybe a few other bits of metadata, but I'm not sure\n> exactly what. Ideas?\n\nA backup label for sure. You can also use this as the directory/tar\nname to save the user coming up with one. We use YYYYMMDDHH24MMSSF for\nfull backups and YYYYMMDDHH24MMSSF_YYYYMMDDHH24MMSS(D|I) for\nincrementals and have logic to prevent two backups from having the same\nlabel. This is unlikely outside of testing but still a good idea.\n\nKnowing the start/stop time of the backup is useful in all kinds of\nways, especially monitoring and time-targeted PITR. Start/stop LSN is\nalso good. I know this is also in backup_label but having it all in one\nplace is nice.\n\nWe include the version/sysid of the cluster to avoid mixups. It's a\ngreat extra check on top of references to be sure everything is kosher.\n\nA manifest version is good in case we change the format later. I'd\nrecommend JSON for the format since it is so ubiquitous and easily\nhandles escaping which can be gotchas in a home-grown format. We\ncurrently have a format that is a combination of Windows INI and JSON\n(for human-readability in theory) and we have become painfully aware of\nescaping issues. Really, why would you drop files with '=' in their\nname in PGDATA? And yet it happens.\n\n> Once we invent the concept of a backup manifest, what do we need to do\n> with them? 
I think we'd want three things initially:\n> \n> (1) When taking a backup, have the option (perhaps enabled by default)\n> to include a backup manifest.\n\nManifests are cheap to build so I wouldn't make it an option.\n\n> (2) Given an existing backup that has not got a manifest, construct one.\n\nMight be too late to be trusted and we'd have to write extra code for\nit. I'd leave this for a project down the road, if at all.\n\n> (3) Cross-check a manifest against a backup and complain about extra\n> files, missing files, size differences, or checksum mismatches.\n\nVerification is the best part of the manifest. Plus, you can do\nverification pretty cheaply on restore. We also restore pg_control last\nso clusters that have a restore error won't start.\n\n> One thing I'm not quite sure about is where to store the backup\n> manifest. If you take a base backup in tar format, you get base.tar,\n> pg_wal.tar (unless -Xnone), and an additional tar file per tablespace.\n> Does the backup manifest go into base.tar? Get written into a separate\n> file outside of any tar archive? Something else? And what about a\n> plain-format backup? I suppose then we should just write the manifest\n> into the top level of the main data directory, but perhaps someone has\n> another idea.\n\nWe do:\n\n[backup_label]/\n backup.manifest\n pg_data/\n pg_tblspc/\n\nIn general, having the manifest easily accessible is ideal.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 18 Sep 2019 21:11:36 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 9:11 PM David Steele <david@pgmasters.net> wrote:\n> Also consider adding the timestamp.\n\nSounds reasonable, even if only for the benefit of humans who might\nlook at the file. We can decide later whether to use it for anything\nelse (and third-party tools could make different decisions from core).\nI assume we're talking about file mtime here, not file ctime or file\natime or the time the manifest was generated, but let me know if I'm\nwrong.\n\n> Consider adding a reference to each file that specifies where the file\n> can be found in if it is not in this backup. As I understand the\n> pg_basebackup proposal, it would only be implementing differential\n> backups, i.e. an incremental that is *only* based on the last full\n> backup. So, the reference can be inferred in this case. However, if\n> the user selects the wrong full backup on restore, and we have labeled\n> each backup, then a differential restore with references against the\n> wrong full backup would result in a hard error rather than corruption.\n\nI intend that we should be able to support incremental backups based\neither on a previous full backup or based on a previous incremental\nbackup. I am not aware of a technical reason why we need to identify\nthe specific backup that must be used. If incremental backup B is\ntaken based on a pre-existing backup A, then I think that B can be\nrestored using either A or *any other backup taken after A and before\nB*. In the normal case, there probably wouldn't be any such backup,\nbut AFAICS the start-LSNs are a sufficient cross-check that the chosen\nbase backup is legal.\n\n> Based on my original calculations (which sadly I don't have anymore),\n> the combination of SHA1, size, and file name is *extremely* unlikely to\n> generate a collision. As in, unlikely to happen before the end of the\n> universe kind of unlikely. 
Though, I guess it depends on your\n> expectations for the lifetime of the universe.\n\nSomebody once said that we should be prepared for it to end at any\ntime, or not, and that the time at which it actually was due to end\nwould not be disclosed in advance. This is probably good life advice\nwhich I ought to take more frequently than I do, but I think we can\nfinesse the issue for purposes of this discussion. What I'd say is: if\nthe probability of getting a collision is demonstrably many orders of\nmagnitude less than the probability of the disk writing the block\nincorrectly, then I think we're probably reasonably OK. Somebody might\ndiffer, which is perhaps a mild point in favor of LSN-based\napproaches, but as a practical matter, if a bad block is a billion\ntimes more likely to be the result of a disk error than a checksum\nmismatch, then it's a negligible risk.\n\n> And maybe a few other bits of metadata, but I'm not sure\n> > exactly what. Ideas?\n>\n> A backup label for sure. You can also use this as the directory/tar\n> name to save the user coming up with one. We use YYYYMMDDHH24MMSSF for\n> full backups and YYYYMMDDHH24MMSSF_YYYYMMDDHH24MMSS(D|I) for\n> incrementals and have logic to prevent two backups from having the same\n> label. This is unlikely outside of testing but still a good idea.\n>\n> Knowing the start/stop time of the backup is useful in all kinds of\n> ways, especially monitoring and time-targeted PITR. Start/stop LSN is\n> also good. I know this is also in backup_label but having it all in one\n> place is nice.\n>\n> We include the version/sysid of the cluster to avoid mixups. It's a\n> great extra check on top of references to be sure everything is kosher.\n\nI don't think it's a good idea to duplicate the information that's\nalready in the backup_label. 
Storing two copies of the same\ninformation is just an invitation to having to worry about what\nhappens if they don't agree.\n\n> A manifest version is good in case we change the format later.\n\nYeah.\n\n> I'd\n> recommend JSON for the format since it is so ubiquitous and easily\n> handles escaping which can be gotchas in a home-grown format. We\n> currently have a format that is a combination of Windows INI and JSON\n> (for human-readability in theory) and we have become painfully aware of\n> escaping issues. Really, why would you drop files with '=' in their\n> name in PGDATA? And yet it happens.\n\nI am not crazy about JSON because it requires that I get a json parser\ninto src/common, which I could do, but given the possibly-imminent end\nof the universe, I'm not sure it's the greatest use of time. You're\nright that if we pick an ad-hoc format, we've got to worry about\nescaping, which isn't lovely.\n\n> > (1) When taking a backup, have the option (perhaps enabled by default)\n> > to include a backup manifest.\n>\n> Manifests are cheap to builds so I wouldn't make it an option.\n\nHuh. That's an interesting idea. Thanks.\n\n> > (3) Cross-check a manifest against a backup and complain about extra\n> > files, missing files, size differences, or checksum mismatches.\n>\n> Verification is the best part of the manifest. Plus, you can do\n> verification pretty cheaply on restore. We also restore pg_control last\n> so clusters that have a restore error won't start.\n\nThere's no \"restore\" operation here, really. A backup taken by\npg_basebackup can be \"restored\" by copying the whole thing, but it can\nalso be used just where it is. If we were going to build something\ninto some in-core tool to copy backups around, this would be a smart\nway to implement said tool, but I'm not planning on that myself.\n\n> > One thing I'm not quite sure about is where to store the backup\n> > manifest. 
If you take a base backup in tar format, you get base.tar,\n> > pg_wal.tar (unless -Xnone), and an additional tar file per tablespace.\n> > Does the backup manifest go into base.tar? Get written into a separate\n> > file outside of any tar archive? Something else? And what about a\n> > plain-format backup? I suppose then we should just write the manifest\n> > into the top level of the main data directory, but perhaps someone has\n> > another idea.\n>\n> We do:\n>\n> [backup_label]/\n> backup.manifest\n> pg_data/\n> pg_tblspc/\n>\n> In general, having the manifest easily accessible is ideal.\n\nThat's a fine choice for a tool, but I'm talking about something\nthat is part of the actual backup format supported by PostgreSQL, not\nwhat a tool might wrap around it. The choice is whether, for a\ntar-format backup, the manifest goes inside a tar file or as a\nseparate file. To put that another way, a patch adding backup\nmanifests does not get to redesign where pg_basebackup puts anything\nelse; it only gets to decide where to put the manifest.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 19 Sep 2019 09:51:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 9:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I intend that we should be able to support incremental backups based\n> either on a previous full backup or based on a previous incremental\n> backup. I am not aware of a technical reason why we need to identify\n> the specific backup that must be used. If incremental backup B is\n> taken based on a pre-existing backup A, then I think that B can be\n> restored using either A or *any other backup taken after A and before\n> B*. In the normal case, there probably wouldn't be any such backup,\n> but AFAICS the start-LSNs are a sufficient cross-check that the chosen\n> base backup is legal.\n\nScratch that: there can be overlapping backups, so you have to\ncross-check both start and stop LSNs.\n\n> > > (3) Cross-check a manifest against a backup and complain about extra\n> > > files, missing files, size differences, or checksum mismatches.\n> >\n> > Verification is the best part of the manifest. Plus, you can do\n> > verification pretty cheaply on restore. We also restore pg_control last\n> > so clusters that have a restore error won't start.\n>\n> There's no \"restore\" operation here, really. A backup taken by\n> pg_basebackup can be \"restored\" by copying the whole thing, but it can\n> also be used just where it is. If we were going to build something\n> into some in-core tool to copy backups around, this would be a smart\n> way to implement said tool, but I'm not planning on that myself.\n\nScratch that: incremental backups need a restore tool, so we can use\nthis technique there. And it can work for full backups too, because\nwhy not?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 19 Sep 2019 11:00:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi Robert,\n\nOn 9/19/19 9:51 AM, Robert Haas wrote:\n> On Wed, Sep 18, 2019 at 9:11 PM David Steele <david@pgmasters.net> wrote:\n>> Also consider adding the timestamp.\n> \n> Sounds reasonable, even if only for the benefit of humans who might\n> look at the file. We can decide later whether to use it for anything\n> else (and third-party tools could make different decisions from core).\n> I assume we're talking about file mtime here, not file ctime or file\n> atime or the time the manifest was generated, but let me know if I'm\n> wrong.\n\nIn my experience only mtime is useful.\n\n>> Based on my original calculations (which sadly I don't have anymore),\n>> the combination of SHA1, size, and file name is *extremely* unlikely to\n>> generate a collision. As in, unlikely to happen before the end of the\n>> universe kind of unlikely. Though, I guess it depends on your\n>> expectations for the lifetime of the universe.\n\n> What I'd say is: if\n> the probability of getting a collision is demonstrably many orders of\n> magnitude less than the probability of the disk writing the block\n> incorrectly, then I think we're probably reasonably OK. Somebody might\n> differ, which is perhaps a mild point in favor of LSN-based\n> approaches, but as a practical matter, if a bad block is a billion\n> times more likely to be the result of a disk error than a checksum\n> mismatch, then it's a negligible risk.\n\nAgreed.\n\n>> We include the version/sysid of the cluster to avoid mixups. It's a\n>> great extra check on top of references to be sure everything is kosher.\n> \n> I don't think it's a good idea to duplicate the information that's\n> already in the backup_label. Storing two copies of the same\n> information is just an invitation to having to worry about what\n> happens if they don't agree.\n\nOK, but now we have backup_label, tablespace_map, \nXXXXXXXXXXXXXXXXXXXXXXXX.XXXXXXXX.backup (in the WAL) and now perhaps a \nbackup.manifest file. 
I feel like we may be drowning in backup info files.\n\n>> I'd\n>> recommend JSON for the format since it is so ubiquitous and easily\n>> handles escaping which can be gotchas in a home-grown format. We\n>> currently have a format that is a combination of Windows INI and JSON\n>> (for human-readability in theory) and we have become painfully aware of\n>> escaping issues. Really, why would you drop files with '=' in their\n>> name in PGDATA? And yet it happens.\n> \n> I am not crazy about JSON because it requires that I get a json parser\n> into src/common, which I could do, but given the possibly-imminent end\n> of the universe, I'm not sure it's the greatest use of time. You're\n> right that if we pick an ad-hoc format, we've got to worry about\n> escaping, which isn't lovely.\n\nMy experience is that JSON is simple to implement and has already dealt \nwith escaping and data structure considerations. A home-grown solution \nwill be at least as complex but have the disadvantage of being non-standard.\n\n>>> One thing I'm not quite sure about is where to store the backup\n>>> manifest. If you take a base backup in tar format, you get base.tar,\n>>> pg_wal.tar (unless -Xnone), and an additional tar file per tablespace.\n>>> Does the backup manifest go into base.tar? Get written into a separate\n>>> file outside of any tar archive? Something else? And what about a\n>>> plain-format backup? I suppose then we should just write the manifest\n>>> into the top level of the main data directory, but perhaps someone has\n>>> another idea.\n>>\n>> We do:\n>>\n>> [backup_label]/\n>> backup.manifest\n>> pg_data/\n>> pg_tblspc/\n>>\n>> In general, having the manifest easily accessible is ideal.\n> \n> That's a fine choice for a tool, but a I'm talking about something\n> that is part of the actual backup format supported by PostgreSQL, not\n> what a tool might wrap around it. 
The choice is whether, for a\n> tar-format backup, the manifest goes inside a tar file or as a\n> separate file. To put that another way, a patch adding backup\n> manifests does not get to redesign where pg_basebackup puts anything\n> else; it only gets to decide where to put the manifest.\n\nFair enough. The point is to make the manifest easily accessible.\n\nI'd keep it in the data directory for file-based backups and as a \nseparate file for tar-based backups. The advantage here is that we can \npick a file name that becomes reserved which a tool can't do.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 19 Sep 2019 23:06:04 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 9/19/19 11:00 AM, Robert Haas wrote:\n\n> On Thu, Sep 19, 2019 at 9:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> I intend that we should be able to support incremental backups based\n>> either on a previous full backup or based on a previous incremental\n>> backup. I am not aware of a technical reason why we need to identify\n>> the specific backup that must be used. If incremental backup B is\n>> taken based on a pre-existing backup A, then I think that B can be\n>> restored using either A or *any other backup taken after A and before\n>> B*. In the normal case, there probably wouldn't be any such backup,\n>> but AFAICS the start-LSNs are a sufficient cross-check that the chosen\n>> base backup is legal.\n> \n> Scratch that: there can be overlapping backups, so you have to\n> cross-check both start and stop LSNs.\n\nOverall we have found it's much simpler to label each backup and \ncross-check that against the pg version and system id. Start LSN is \npretty unique, but backup labels work really well and are more widely \nunderstood.\n\n>>>> (3) Cross-check a manifest against a backup and complain about extra\n>>>> files, missing files, size differences, or checksum mismatches.\n>>>\n>>> Verification is the best part of the manifest. Plus, you can do\n>>> verification pretty cheaply on restore. We also restore pg_control last\n>>> so clusters that have a restore error won't start.\n>>\n>> There's no \"restore\" operation here, really. A backup taken by\n>> pg_basebackup can be \"restored\" by copying the whole thing, but it can\n>> also be used just where it is. If we were going to build something\n>> into some in-core tool to copy backups around, this would be a smart\n>> way to implement said tool, but I'm not planning on that myself.\n> \n> Scratch that: incremental backups need a restore tool, so we can use\n> this technique there. 
And it can work for full backups too, because\n> why not?\n\nAgreed, once we have a restore tool, use it for everything.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 19 Sep 2019 23:10:46 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 11:10:46PM -0400, David Steele wrote:\n> On 9/19/19 11:00 AM, Robert Haas wrote:\n>> On Thu, Sep 19, 2019 at 9:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> > I intend that we should be able to support incremental backups based\n>> > either on a previous full backup or based on a previous incremental\n>> > backup. I am not aware of a technical reason why we need to identify\n>> > the specific backup that must be used. If incremental backup B is\n>> > taken based on a pre-existing backup A, then I think that B can be\n>> > restored using either A or *any other backup taken after A and before\n>> > B*. In the normal case, there probably wouldn't be any such backup,\n>> > but AFAICS the start-LSNs are a sufficient cross-check that the chosen\n>> > base backup is legal.\n>> \n>> Scratch that: there can be overlapping backups, so you have to\n>> cross-check both start and stop LSNs.\n> \n> Overall we have found it's much simpler to label each backup and cross-check\n> that against the pg version and system id. Start LSN is pretty unique, but\n> backup labels work really well and are more widely understood.\n\nWarning. The start LSN could be the same for multiple backups when\ntaken from a standby.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2019 16:15:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 11:10 PM David Steele <david@pgmasters.net> wrote:\n> Overall we have found it's much simpler to label each backup and\n> cross-check that against the pg version and system id. Start LSN is\n> pretty unique, but backup labels work really well and are more widely\n> understood.\n\nI see your point, but part of my point is that uniqueness is not a\ntechnical requirement. However, it may be a requirement for user\ncomprehension.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Sep 2019 08:58:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 11:06 PM David Steele <david@pgmasters.net> wrote:\n> > I am not crazy about JSON because it requires that I get a json parser\n> > into src/common, which I could do, but given the possibly-imminent end\n> > of the universe, I'm not sure it's the greatest use of time. You're\n> > right that if we pick an ad-hoc format, we've got to worry about\n> > escaping, which isn't lovely.\n>\n> My experience is that JSON is simple to implement and has already dealt\n> with escaping and data structure considerations. A home-grown solution\n> will be at least as complex but have the disadvantage of being non-standard.\n\nI think that's fair and just spent a little while investigating how\ndifficult it would be to disentangle the JSON parser from the backend.\nIt has dependencies on the following bits of backend-only\nfunctionality:\n\n- check_stack_depth(). No problem, I think. Just skip it for frontend code.\n\n- pg_mblen() / GetDatabaseEncoding(). Not sure what to do about this.\nSome of our infrastructure for dealing with encoding is available in\nthe frontend and backend, but this part is backend-only.\n\n- elog() / ereport(). Kind of a pain. We could just kill the program\nif an error occurs, but that seems a bit ham-fisted. Refactoring the\ncode so that the error is returned rather than thrown might be the way\nto go, but it's not simple, because you're not just passing a string.\n\n ereport(ERROR,\n\n(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n errmsg(\"invalid input syntax\nfor type %s\", \"json\"),\n errdetail(\"Character with\nvalue 0x%02x must be escaped.\",\n (unsigned char) *s),\n report_json_context(lex)));\n\n- appendStringInfo et. al. 
I don't think it would be that hard to move\nthis to src/common, but I'm also not sure it really solves the\nproblem, because StringInfo has a 1GB limit, and there's no rule at\nall that a backup manifest has got to be less than 1GB.\n\nhttps://www.pgcon.org/2013/schedule/events/595.en.html\n\nThis gets at another problem that I just started to think about. If\nthe file is just a series of lines, you can parse it one line at a\ntime and do something with that line, then move on. If it's a JSON\nblob, you have to parse the whole file and get a potentially giant\ndata structure back, and then operate on that data structure. At\nleast, I think you do. There's probably some way to create a callback\nstructure that lets you presuppose that the toplevel data structure is\nan array (or object) and get back each element of that array (or\nkey/value pair) as it's parsed, but that sounds pretty annoying to get\nworking. Or we could just decide that you have to have enough memory\nto hold the parsed version of the entire manifest file in memory all\nat once, and if you don't, maybe you should drop some tables or buy\nmore RAM. That still leaves you with bypassing the 1GB size limit on\nStringInfo, maybe by having a \"huge\" option, or perhaps by\nmemory-mapping the file and then making the StringInfo point directly\ninto the mapped region. Perhaps I'm overthinking this and maybe you\nhave a simpler idea in mind about how it can be made to work, but I\nfind all this complexity pretty unappealing.\n\nHere's a competing proposal: let's decide that lines consist of\ntab-separated fields. If a field contains a \\t, \\r, or \\n, put a \" at\nthe beginning, a \" at the end, and double any \" that appears in the\nmiddle. This is easy to generate and easy to parse. It lets us\ncompletely ignore encoding considerations. Incremental parsing is\nstraightforward. 
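As a rough sketch of that quoting rule (in Python for illustration; it adds one assumption beyond the rule as stated -- a field that itself begins with a double quote is also quoted, so the parser can tell it apart from a quoted field):\n\n```python\ndef quote_field(field: str) -> str:\n    # Quote fields containing tab, CR, or LF, doubling any embedded\n    # double quote. Quoting fields that *start* with '"' is an added\n    # assumption, so the parser can distinguish the two cases.\n    if any(c in field for c in "\\t\\r\\n") or field.startswith('"'):\n        return '"' + field.replace('"', '""') + '"'\n    return field\n\ndef manifest_line(*fields: str) -> str:\n    # One manifest entry: tab-separated, individually quoted fields.\n    return "\\t".join(quote_field(f) for f in fields)\n\ndef parse_line(line: str) -> list:\n    # Single-pass parse of one manifest line back into its fields.\n    fields, i, n = [], 0, len(line)\n    while i <= n:\n        if i < n and line[i] == '"':\n            buf, i = [], i + 1\n            while True:\n                j = line.index('"', i)           # doubled or closing quote\n                if j + 1 < n and line[j + 1] == '"':\n                    buf.append(line[i:j] + '"')  # undouble embedded quote\n                    i = j + 2\n                else:\n                    buf.append(line[i:j])\n                    i = j + 1\n                    break\n            fields.append("".join(buf))\n            i += 1                               # skip the following tab\n        else:\n            j = line.find("\\t", i)\n            if j == -1:\n                fields.append(line[i:])\n                i = n + 1\n            else:\n                fields.append(line[i:j])\n                i = j + 1\n    return fields\n```\n\nparse_line consumes one line at a time, so a reader never needs the whole manifest in memory.\n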
Quoting will rarely be needed because there's very\nlittle reason to create a file inside a PostgreSQL data directory that\ncontains a tab or a newline, but if you do it'll still work. The lack\nof quoting is nice for humans reading the manifest, and nice in terms\nof keeping the manifest succinct; in contrast, note that using JSON\ndoubles every backslash.\n\nI hear you saying that this is going to end up being just as complex\nin the end, but I don't think I believe it. It sounds to me like the\ndifference between spending a couple of hours figuring this out and\nspending a couple of months trying to figure it out and maybe not\nactually getting anywhere.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Sep 2019 09:46:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 11:06 PM David Steele <david@pgmasters.net> wrote:\n> > I don't think it's a good idea to duplicate the information that's\n> > already in the backup_label. Storing two copies of the same\n> > information is just an invitation to having to worry about what\n> > happens if they don't agree.\n>\n> OK, but now we have backup_label, tablespace_map,\n> XXXXXXXXXXXXXXXXXXXXXXXX.XXXXXXXX.backup (in the WAL) and now perhaps a\n> backup.manifest file. I feel like we may be drowning in backup info files.\n\nI agree!\n\nI'm not sure what to do about it, though. The information that is\npresent in the tablespace_map file could have been stored in the\nbackup_label file, I think, and that would have made sense, because\nboth files are serving a very similar purpose: they tell the server\nthat it needs to do some non-standard stuff when it starts up, and\nthey give it instructions for what those things are. And, as a\nsecondary purpose, humans or third-party tools can read them and use\nthat information for whatever purpose they wish.\n\nThe proposed backup_manifest file is a little different. I don't think\nthat anyone is proposing that the server should read that file: it is\nthere solely for the purpose of helping our own tools or third-party\ntools or human beings who are, uh, acting like tools.[1] We're also\nproposing to put it in a different place: the backup_label goes into\none of the tar files, but the backup_manifest would sit outside of any\ntar file.\n\nIf we were designing this from scratch, maybe we'd roll all of this\ninto one file that serves as backup manifest, tablespace map, backup\nlabel, and backup history file, but then again, maybe separating the\ninstructions-to-the-server part from the backup-integrity-checking\npart makes sense. 
At any rate, even if we knew for sure that's the\ndirection we wanted to go, getting there from here looks a bit rough.\nIf we just add a backup manifest, people who don't care can mostly\nignore it and then should be mostly fine. If we start trying to create\nthe one backup information system to rule them all, we're going to\nbreak people's tools. Maybe that's worth doing someday, but the paint\nisn't even dry on removing recovery.conf yet.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n[1] There are a surprising number of installations where, in effect,\nthe DBA is the backup-and-restore tool, performing all the steps by\nhand and hoping not to mess any of them up. The fact that nearly every\nPostgreSQL company offers tools to make this easier does not seem to\nhave done a whole lot to diminish the number of people using ad-hoc\nsolutions.\n\n\n",
"msg_date": "Fri, 20 Sep 2019 10:40:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 9:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> - appendStringInfo et. al. I don't think it would be that hard to move\n> this to src/common, but I'm also not sure it really solves the\n> problem, because StringInfo has a 1GB limit, and there's no rule at\n> all that a backup manifest has got to be less than 1GB.\n\nHmm. That's actually going to be a problem on the server side, no\nmatter what we do on the client side. We have to send the manifest\nafter we send everything else, so that we know what we sent. But if we\nsent a lot of files, the manifest might be really huge. I had been\nthinking that we would generate the manifest on the server and send it\nto the client after everything else, but maybe this is an argument for\ngenerating the manifest on the client side and writing it\nincrementally. That would require the client to peek at the contents\nof every tar file it receives all the time, which it currently doesn't\nneed to do, but it does peek inside them a little bit, so maybe it's\nOK.\n\nAnother alternative would be to have the server spill the manifest in\nprogress to a temp file and then stream it from there to the client.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Sep 2019 10:59:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 9/20/19 9:46 AM, Robert Haas wrote:\n> On Thu, Sep 19, 2019 at 11:06 PM David Steele <david@pgmasters.net> wrote:\n>\n>> My experience is that JSON is simple to implement and has already dealt\n>> with escaping and data structure considerations. A home-grown solution\n>> will be at least as complex but have the disadvantage of being non-standard.\n>\n> I think that's fair and just spent a little while investigating how\n> difficult it would be to disentangle the JSON parser from the backend.\n> It has dependencies on the following bits of backend-only\n> functionality:\n\n> - elog() / ereport(). Kind of a pain. We could just kill the program\n> if an error occurs, but that seems a bit ham-fisted. Refactoring the\n> code so that the error is returned rather than thrown might be the way\n> to go, but it's not simple, because you're not just passing a string.\n\nSeems to me we are overdue for elog()/ereport() compatible\nerror-handling in the front end. Plus mem contexts.\n\nIt sucks to make that a prereq for this project but the longer we kick\nthat can down the road...\n\n> https://www.pgcon.org/2013/schedule/events/595.en.html\n\nThis talk was good fun. The largest number of tables we've seen is a\nfew hundred thousand, but that still adds up to more than a million\nfiles to backup.\n\n> This gets at another problem that I just started to think about. If\n> the file is just a series of lines, you can parse it one line and a\n> time and do something with that line, then move on. If it's a JSON\n> blob, you have to parse the whole file and get a potentially giant\n> data structure back, and then operate on that data structure. At\n> least, I think you do. 
\n\nJSON can definitely be parsed incrementally, but for practical reasons\ncertain structures work better than others.\n\n> There's probably some way to create a callback\n> structure that lets you presuppose that the toplevel data structure is\n> an array (or object) and get back each element of that array (or\n> key/value pair) as it's parsed, but that sounds pretty annoying to get\n> working. \n\nAnd that's how we do it. It's annoying and yeah it's complicated but it\nis very fast and memory-efficient.\n\n> Or we could just decide that you have to have enough memory\n> to hold the parsed version of the entire manifest file in memory all\n> at once, and if you don't, maybe you should drop some tables or buy\n> more RAM. \n\nI assume you meant \"un-parsed\" here?\n\n> That still leaves you with bypassing the 1GB size limit on\n> StringInfo, maybe by having a \"huge\" option, or perhaps by\n> memory-mapping the file and then making the StringInfo point directly\n> into the mapped region. Perhaps I'm overthinking this and maybe you\n> have a simpler idea in mind about how it can be made to work, but I\n> find all this complexity pretty unappealing.\n\nOur String object has the same 1GB limit. Partly because it works and\nsaves a bit of memory per object, but also because if we find ourselves\nexceeding that limit we know we've probably made a design error.\n\nParsing in stream means that you only need to store the final in-memory\nrepresentation of the manifest which can be much more compact. Yeah,\nit's complicated, but the memory and time savings are worth it.\n\nNote that our Perl implementation took the naive approach and has worked\npretty well for six years, but can choke on really large manifests with\nout of memory errors. Overall, I'd say getting the format right is more\nimportant than having the perfect initial implementation.\n\n> Here's a competing proposal: let's decide that lines consist of\n> tab-separated fields. 
If a field contains a \\t, \\r, or \\n, put a \" at\n> the beginning, a \" at the end, and double any \" that appears in the\n> middle. This is easy to generate and easy to parse. It lets us\n> completely ignore encoding considerations. Incremental parsing is\n> straightforward. Quoting will rarely be needed because there's very\n> little reason to create a file inside a PostgreSQL data directory that\n> contains a tab or a newline, but if you do it'll still work. The lack\n> of quoting is nice for humans reading the manifest, and nice in terms\n> of keeping the manifest succinct; in contrast, note that using JSON\n> doubles every backslash.\n\nThere's other information you'll want to store that is not strictly file\ninfo so you need a way to denote that. It gets complicated quickly.\n\n> I hear you saying that this is going to end up being just as complex\n> in the end, but I don't think I believe it. It sounds to me like the\n> difference between spending a couple of hours figuring this out and\n> spending a couple of months trying to figure it out and maybe not\n> actually getting anywhere.\n\nMaybe the initial implementation will be easier but I am confident we'll\npay for it down the road. Also, don't we want users to be able to read\nthis file? Do we really want them to need to cook up a custom parser in\nPerl, Go, Python, etc.?\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 20 Sep 2019 11:09:34 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 9/20/19 10:59 AM, Robert Haas wrote:\n> On Fri, Sep 20, 2019 at 9:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> - appendStringInfo et. al. I don't think it would be that hard to move\n>> this to src/common, but I'm also not sure it really solves the\n>> problem, because StringInfo has a 1GB limit, and there's no rule at\n>> all that a backup manifest has got to be less than 1GB.\n> \n> Hmm. That's actually going to be a problem on the server side, no\n> matter what we do on the client side. We have to send the manifest\n> after we send everything else, so that we know what we sent. But if we\n> sent a lot of files, the manifest might be really huge. I had been\n> thinking that we would generate the manifest on the server and send it\n> to the client after everything else, but maybe this is an argument for\n> generating the manifest on the client side and writing it\n> incrementally. That would require the client to peek at the contents\n> of every tar file it receives all the time, which it currently doesn't\n> need to do, but it does peek inside them a little bit, so maybe it's\n> OK.\n> \n> Another alternative would be to have the server spill the manifest in\n> progress to a temp file and then stream it from there to the client.\n\nThis seems reasonable to me.\n\nWe keep an in-memory representation which is just an array of structs\nand is fairly compact -- 1 million files uses ~150MB of memory. We just\nformat and stream this to storage when saving. Saving is easier than\nloading, of course.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 20 Sep 2019 11:21:24 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 9/20/19 9:46 AM, Robert Haas wrote:\n\n> least, I think you do. There's probably some way to create a callback\n> structure that lets you presuppose that the toplevel data structure is\n> an array (or object) and get back each element of that array (or\n> key/value pair) as it's parsed,\n\nIf a JSON parser does find its way into src/common, it probably wants\nto have such an incremental mode available, similar to [2] offered\nin the \"Jackson\" library for Java.\n\nThe Jackson developer has propounded a thesis[1] that such a parsing\nlibrary ought to offer \"Three -- and Only Three\" different styles of\nAPI corresponding to three ways of organizing the code using the\nlibrary ([2], [3], [4], which also resemble the different APIs\nsupplied in Java for XML processing).\n\nRegards,\n-Chap\n\n\n[1] http://www.cowtowncoder.com/blog/archives/2009/01/entry_132.html\n[2] http://www.cowtowncoder.com/blog/archives/2009/01/entry_137.html\n[3] http://www.cowtowncoder.com/blog/archives/2009/01/entry_153.html\n[4] http://www.cowtowncoder.com/blog/archives/2009/01/entry_152.html\n\n\n",
"msg_date": "Fri, 20 Sep 2019 11:24:39 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 11:09 AM David Steele <david@pgmasters.net> wrote:\n> Seems to me we are overdue for elog()/ereport() compatible\n> error-handling in the front end. Plus mem contexts.\n>\n> It sucks to make that a prereq for this project but the longer we kick\n> that can down the road...\n\nThere are no doubt many patches that would benefit from having more\nbackend infrastructure exposed in frontend contexts, and I think we're\nslowly moving in that direction, but I generally do not believe in\nburdening feature patches with major infrastructure improvements.\nSometimes it's necessary, as in the case of parallel query, which\nrequired upgrading a whole lot of backend infrastructure in order to\nhave any chance of doing something useful. In most cases, however,\nthere's a way of getting the patch done that dodges the problem.\n\nFor example, I think there's a pretty good argument that Heikki's\ndesign for relation forks was a bad one. It's proven to scale poorly\nand create performance problems and extra complexity in quite a few\nplaces. It would likely have been better, from a strictly theoretical\npoint of view, to insist on a design where the FSM and VM pages got\nstored inside the relation itself, and the heap was responsible for\nfiguring out how various pages were being used. When BRIN came along,\nwe insisted on precisely that design, because it was clear that\nfurther straining the relation fork system was not a good plan.\nHowever, if we'd insisted on that when Heikki did the original work,\nit might have delayed the arrival of the free space map for one or\nmore releases, and we got big benefits out of having that done sooner.\nThere's nothing stopping someone from writing a patch to get rid of\nrelation forks and allow a heap AM to have multiple relfilenodes (with\nthe extra ones used for the FSM and VM) or with multiplexing all the\ndata inside of a single file. 
Nobody has, though, because it's hard,\nand the problems with the status quo are not so bad as to justify the\namount of development effort that would be required to fix it. At some\npoint, that problem is probably going to work its way to the top of\nsomebody's priority list, but it's already been about 10 years since\nthat all happened and everyone has so far dodged dealing with the\nproblem, which in turn has enabled them to work on other things that\nare perhaps more important.\n\nI think the same principle applies here. It's reasonable to ask the\nauthor of a feature patch to fix issues that are closely related to\nthe feature in question, or even problems that are not new but would\nbe greatly exacerbated by the addition of the feature. It's not\nreasonable to stack up a list of infrastructure upgrades that somebody\nhas to do as a condition of having a feature patch accepted that does\nnot necessarily require those upgrades. I am not convinced that JSON\nis actually a better format for a backup manifest (more on that\nbelow), but even if I were, I believe that getting a backup manifest\nfunctionality into PostgreSQL 13, and perhaps incremental backup on\ntop of that, is valuable enough to justify making some compromises to\nmake that happen. And I don't mean \"compromises\" as in \"let's commit\nsomething that doesn't work very well;\" rather, I mean making design\nchoices that are aimed at making the project something that is\nfeasible and can be completed in reasonable time, rather than not.\n\nAnd saying, well, the backup manifest format *has* to be JSON because\neverything else suxxor is not that. We don't have a single other\nexample of a file that we read and write in JSON format. Extension\ncontrol files use a custom format. Backup labels and backup history\nfiles and timeline history files and tablespace map files use custom\nformats. postgresql.conf, pg_hba.conf, and pg_ident.conf use custom\nformats. 
postmaster.opts and postmaster.pid use custom formats. If\nJSON is better and easier, at least one of the various people who\ncoded those things up would have chosen to use it, but none of them\ndid, and nobody's made a serious attempt to convert them to use it.\nThat might be because we lack the infrastructure for dealing with JSON\nand building it is more work than anybody's willing to do, or it might\nbe because JSON is not actually better for these kinds of use cases,\nbut either way, it's hard to see why this particular patch should be\nburdened with a requirement that none of the previous ones had to\nsatisfy.\n\nPersonally, I'd be intensely unhappy if a motion to convert\npostgresql.conf or pg_hba.conf to JSON format gathered enough steam to\nbe adopted. It would be darn useful, because you could specify\ncomplex values for options instead of being limited to scalars, but it\nwould also make the configuration files a lot harder for human beings\nto read and grep and the quality of error reporting would probably\ndecline significantly. Also, appending a setting to the file,\nsomething which is currently quite simple, would get a lot harder.\nAd-hoc file formats can be problematic, but they can also have real\nadvantages in terms of readability, brevity, and fitness for purpose.\n\n> This talk was good fun. 
The largest number of tables we've seen is a\n> few hundred thousand, but that still adds up to more than a million\n> files to backup.\n\nA quick survey of some of my colleagues turned up a few examples of\npeople with 2-4 million files to backup, so similar kind of ballpark.\nProbably not big enough for the manifest to hit the 1GB mark, but\ngetting close.\n\n> > Or we could just decide that you have to have enough memory\n> > to hold the parsed version of the entire manifest file in memory all\n> > at once, and if you don't, maybe you should drop some tables or buy\n> > more RAM.\n>\n> I assume you meant \"un-parsed\" here?\n\nI don't think I meant that, although it seems like you might need to\nstore either all the parsed data or all the unparsed data or even\nboth, depending on exactly what you are trying to do.\n\n> > I hear you saying that this is going to end up being just as complex\n> > in the end, but I don't think I believe it. It sounds to me like the\n> > difference between spending a couple of hours figuring this out and\n> > spending a couple of months trying to figure it out and maybe not\n> > actually getting anywhere.\n>\n> Maybe the initial implementation will be easier but I am confident we'll\n> pay for it down the road. Also, don't we want users to be able to read\n> this file? Do we really want them to need to cook up a custom parser in\n> Perl, Go, Python, etc.?\n\nWell, I haven't heard anybody complain that they can't read a\nbackup_label file because it's too hard to cook up a parser. And I\nthink the reason is pretty clear: such files are not hard to parse.\nSimilarly for a pg_hba.conf file. This case is a little more\ncomplicated than those, but AFAICS, not enormously so. 
Actually, it\nseems like a combination of those two cases: it has some fixed\nmetadata fields that can be represented with one line per field, like\na backup_label, and then a bunch of entries for files that are\nsomewhat like entries in a pg_hba.conf file, in that they can be\nrepresented by a line per record with a certain number of fields on\neach line.\n\nI attach here a couple of patches. The first one does some\nrefactoring of relevant code in pg_basebackup, and the second one adds\nchecksum manifests using a format that I pulled out of my ear. It\nprobably needs some adjustment but I don't think it's crazy. Each\nfile gets a line that looks like this:\n\nFile $FILENAME $FILESIZE $FILEMTIME $FILECHECKSUM\n\nRight now, the file checksums are computed using SHA-256 but it could\nbe changed to anything else for which we've got code. On my system,\nshasum -a256 $FILE produces the same answer that shows up here. At\nthe bottom of the manifest there's a checksum of the manifest itself,\nwhich looks like this:\n\nManifest-Checksum\n385fe156a8c6306db40937d59f46027cc079350ecf5221027d71367675c5f781\n\nThat's a SHA-256 checksum of the file contents excluding the final\nline. It can be verified by feeding all the file contents except the\nlast line to shasum -a256. I can't help but observe that if the file\nwere defined to be a JSONB blob, it's not very clear how you would\ninclude a checksum of the blob contents in the blob itself, but with a\nformat based on a bunch of lines of data, it's super-easy to generate\nand super-easy to write tools that verify it.\n\nThis is just a prototype so I haven't written a verification tool, and\nthere's a bunch of testing and documentation and so forth that would\nneed to be done aside from whatever we've got to hammer out in terms\nof design issues and file formats. 
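As an illustration of how little code a checker needs, here's a rough Python sketch (hypothetical, not part of the attached patch; it assumes the final line has the one-line form \"Manifest-Checksum <hex>\", and any manifest text fed to it is invented for the example):

```python
# Sketch of the Manifest-Checksum scheme described above: the digest is
# SHA-256 over the entire manifest excluding its final line. The exact
# single-line 'Manifest-Checksum <hex>' layout is an assumption here.
import hashlib

def add_manifest_checksum(body):
    # body is the manifest text (newline-terminated), without the
    # trailing checksum line
    digest = hashlib.sha256(body.encode('utf-8')).hexdigest()
    return body + 'Manifest-Checksum ' + digest + '\n'

def verify_manifest(text):
    # Split off the final line; everything before it is what was hashed.
    head, _, last = text.rstrip('\n').rpartition('\n')
    head += '\n'
    label, _, claimed = last.partition(' ')
    if label != 'Manifest-Checksum':
        return False
    return hashlib.sha256(head.encode('utf-8')).hexdigest() == claimed
```

The nice property, as noted, is that the same check works with standard tools: feed everything but the last line to shasum -a256 and compare.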
But I think it's cool, and perhaps\nsome discussion of how it could be evolved will get us closer to a\nresolution everybody can at least live with.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 20 Sep 2019 14:55:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 9/20/19 2:55 PM, Robert Haas wrote:\n> On Fri, Sep 20, 2019 at 11:09 AM David Steele <david@pgmasters.net> wrote:\n>>\n>> It sucks to make that a prereq for this project but the longer we kick\n>> that can down the road...\n> \n> There are no doubt many patches that would benefit from having more\n> backend infrastructure exposed in frontend contexts, and I think we're\n> slowly moving in that direction, but I generally do not believe in\n> burdening feature patches with major infrastructure improvements.\n\nThe hardest part about technical debt is knowing when to incur it. It\nis never a cut-and-dried choice.\n\n>> This talk was good fun. The largest number of tables we've seen is a\n>> few hundred thousand, but that still adds up to more than a million\n>> files to backup.\n> \n> A quick survey of some of my colleagues turned up a few examples of\n> people with 2-4 million files to backup, so similar kind of ballpark.\n> Probably not big enough for the manifest to hit the 1GB mark, but\n> getting close.\n\nI have so many doubts about clusters with this many tables, but we do\nsupport it, so...\n\n>>> I hear you saying that this is going to end up being just as complex\n>>> in the end, but I don't think I believe it. It sounds to me like the\n>>> difference between spending a couple of hours figuring this out and\n>>> spending a couple of months trying to figure it out and maybe not\n>>> actually getting anywhere.\n>>\n>> Maybe the initial implementation will be easier but I am confident we'll\n>> pay for it down the road. Also, don't we want users to be able to read\n>> this file? Do we really want them to need to cook up a custom parser in\n>> Perl, Go, Python, etc.?\n> \n> Well, I haven't heard anybody complain that they can't read a\n> backup_label file because it's too hard to cook up a parser. And I\n> think the reason is pretty clear: such files are not hard to parse.\n> Similarly for a pg_hba.conf file. 
This case is a little more\n> complicated than those, but AFAICS, not enormously so. Actually, it\n> seems like a combination of those two cases: it has some fixed\n> metadata fields that can be represented with one line per field, like\n> a backup_label, and then a bunch of entries for files that are\n> somewhat like entries in a pg_hba.conf file, in that they can be\n> represented by a line per record with a certain number of fields on\n> each line.\n\nYeah, they are not hard to parse, but *everyone* has to cook up code for\nit. A bit of a bummer, that.\n\n> I attach here a couple of patches. The first one does some\n> refactoring of relevant code in pg_basebackup, and the second one adds\n> checksum manifests using a format that I pulled out of my ear. It\n> probably needs some adjustment but I don't think it's crazy. Each\n> file gets a line that looks like this:\n> \n> File $FILENAME $FILESIZE $FILEMTIME $FILECHECKSUM\n\nWe also include page checksum validation failures in the file record.\nNot critical for the first pass, perhaps, but something to keep in mind.\n\n> Right now, the file checksums are computed using SHA-256 but it could\n> be changed to anything else for which we've got code. On my system,\n> shasum -a256 $FILE produces the same answer that shows up here. At\n> the bottom of the manifest there's a checksum of the manifest itself,\n> which looks like this:\n> \n> Manifest-Checksum\n> 385fe156a8c6306db40937d59f46027cc079350ecf5221027d71367675c5f781\n> \n> That's a SHA-256 checksum of the file contents excluding the final\n> line. It can be verified by feeding all the file contents except the\n> last line to shasum -a256. 
I can't help but observe that if the file\n> were defined to be a JSONB blob, it's not very clear how you would\n> include a checksum of the blob contents in the blob itself, but with a\n> format based on a bunch of lines of data, it's super-easy to generate\n> and super-easy to write tools that verify it.\n\nYou can do this in JSON pretty easily by handling the terminating\nbrace/bracket:\n\n{\n<some json contents>*,\n\"checksum\":<sha256>\n}\n\nBut of course a linefeed-delimited file is even easier.\n\n> This is just a prototype so I haven't written a verification tool, and\n> there's a bunch of testing and documentation and so forth that would\n> need to be done aside from whatever we've got to hammer out in terms\n> of design issues and file formats. But I think it's cool, and perhaps\n> some discussion of how it could be evolved will get us closer to a\n> resolution everybody can at least live with.\n\nI had a quick look and it seems pretty reasonable. I'll need to\ngenerate a manifest to see if I can spot any obvious gotchas.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 20 Sep 2019 19:11:47 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Sat, Sep 21, 2019 at 12:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\nSome comments:\n\nManifest file will be in plain text format even if compression is\nspecified, should we compress it?\nMay be this is intended, just raised the point to make sure that it is intended.\n+static void\n+ReceiveBackupManifestChunk(size_t r, char *copybuf, void *callback_data)\n+{\n+ WriteManifestState *state = callback_data;\n+\n+ if (fwrite(copybuf, r, 1, state->file) != 1)\n+ {\n+ pg_log_error(\"could not write to file \\\"%s\\\": %m\", state->filename);\n+ exit(1);\n+ }\n+}\n\nWALfile.done file gets added but wal file information is not included\nin the manifest file, should we include WAL file also?\n@@ -599,16 +618,20 @@ perform_base_backup(basebackup_options *opt)\n (errcode_for_file_access(),\n errmsg(\"could not stat file \\\"%s\\\": %m\", pathbuf)));\n\n- sendFile(pathbuf, pathbuf, &statbuf, false, InvalidOid);\n+ sendFile(pathbuf, pathbuf, &statbuf, false, InvalidOid, manifest,\n+ NULL);\n\n /* unconditionally mark file as archived */\n StatusFilePath(pathbuf, fname, \".done\");\n- sendFileWithContent(pathbuf, \"\");\n+ sendFileWithContent(pathbuf, \"\", manifest);\n\nShould we add an option to make manifest generation configurable to\nreduce overhead during backup?\n\nManifest file does not include directory information, should we include it?\n\nThere is one warning:\nIn file included from ../../../src/include/fe_utils/string_utils.h:20:0,\n from pg_basebackup.c:34:\npg_basebackup.c: In function ‘ReceiveTarFile’:\n../../../src/interfaces/libpq/pqexpbuffer.h:60:9: warning: the\ncomparison will always evaluate as ‘false’ for the address of ‘buf’\nwill never be NULL [-Waddress]\n ((str) == NULL || (str)->maxlen == 0)\n ^\npg_basebackup.c:1203:7: note: in expansion of macro ‘PQExpBufferBroken’\n if (PQExpBufferBroken(&buf))\n\npg_gmtime can fail in case of malloc failure:\n+ /*\n+ * Convert time to a string. 
Since it's not clear what time zone to use\n+ * and since time zone definitions can change, possibly causing confusion,\n+ * use GMT always.\n+ */\n+ pg_strftime(timebuf, sizeof(timebuf), \"%Y-%m-%d %H:%M:%S %Z\",\n+ pg_gmtime(&mtime));\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Sep 2019 18:16:54 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Entry for directory is not added in manifest. So it might be difficult\nat client to get to know about the directories. Will it be good to add\nan entry for each directory too? May be like:\nDir <dirname> <mtime>\n\nAlso, on latest HEAD patches does not apply.\n\nOn Wed, Sep 25, 2019 at 6:17 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Sat, Sep 21, 2019 at 12:25 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> >\n> Some comments:\n>\n> Manifest file will be in plain text format even if compression is\n> specified, should we compress it?\n> May be this is intended, just raised the point to make sure that it is\n> intended.\n> +static void\n> +ReceiveBackupManifestChunk(size_t r, char *copybuf, void *callback_data)\n> +{\n> + WriteManifestState *state = callback_data;\n> +\n> + if (fwrite(copybuf, r, 1, state->file) != 1)\n> + {\n> + pg_log_error(\"could not write to file \\\"%s\\\": %m\", state->filename);\n> + exit(1);\n> + }\n> +}\n>\n> WALfile.done file gets added but wal file information is not included\n> in the manifest file, should we include WAL file also?\n> @@ -599,16 +618,20 @@ perform_base_backup(basebackup_options *opt)\n> (errcode_for_file_access(),\n> errmsg(\"could not stat file \\\"%s\\\": %m\", pathbuf)));\n>\n> - sendFile(pathbuf, pathbuf, &statbuf, false, InvalidOid);\n> + sendFile(pathbuf, pathbuf, &statbuf, false, InvalidOid, manifest,\n> + NULL);\n>\n> /* unconditionally mark file as archived */\n> StatusFilePath(pathbuf, fname, \".done\");\n> - sendFileWithContent(pathbuf, \"\");\n> + sendFileWithContent(pathbuf, \"\", manifest);\n>\n> Should we add an option to make manifest generation configurable to\n> reduce overhead during backup?\n>\n> Manifest file does not include directory information, should we include it?\n>\n> There is one warning:\n> In file included from ../../../src/include/fe_utils/string_utils.h:20:0,\n> from pg_basebackup.c:34:\n> pg_basebackup.c: In function ‘ReceiveTarFile’:\n> 
../../../src/interfaces/libpq/pqexpbuffer.h:60:9: warning: the\n> comparison will always evaluate as ‘false’ for the address of ‘buf’\n> will never be NULL [-Waddress]\n> ((str) == NULL || (str)->maxlen == 0)\n> ^\n> pg_basebackup.c:1203:7: note: in expansion of macro ‘PQExpBufferBroken’\n> if (PQExpBufferBroken(&buf))\n>\n>\nYes, I too observed this warning.\n\n\n> pg_gmtime can fail in case of malloc failure:\n> + /*\n> + * Convert time to a string. Since it's not clear what time zone to use\n> + * and since time zone definitions can change, possibly causing confusion,\n> + * use GMT always.\n> + */\n> + pg_strftime(timebuf, sizeof(timebuf), \"%Y-%m-%d %H:%M:%S %Z\",\n> + pg_gmtime(&mtime));\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n\n-- \nJeevan Chalke\nAssociate Database Architect & Team Lead, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 30 Sep 2019 15:01:25 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 6:17 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Sat, Sep 21, 2019 at 12:25 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> >\n> Some comments:\n>\n> Manifest file will be in plain text format even if compression is\n> specified, should we compress it?\n> May be this is intended, just raised the point to make sure that it is\n> intended.\n> +static void\n> +ReceiveBackupManifestChunk(size_t r, char *copybuf, void *callback_data)\n> +{\n> + WriteManifestState *state = callback_data;\n> +\n> + if (fwrite(copybuf, r, 1, state->file) != 1)\n> + {\n> + pg_log_error(\"could not write to file \\\"%s\\\": %m\", state->filename);\n> + exit(1);\n> + }\n> +}\n>\n> WALfile.done file gets added but wal file information is not included\n> in the manifest file, should we include WAL file also?\n> @@ -599,16 +618,20 @@ perform_base_backup(basebackup_options *opt)\n> (errcode_for_file_access(),\n> errmsg(\"could not stat file \\\"%s\\\": %m\", pathbuf)));\n>\n> - sendFile(pathbuf, pathbuf, &statbuf, false, InvalidOid);\n> + sendFile(pathbuf, pathbuf, &statbuf, false, InvalidOid, manifest,\n> + NULL);\n>\n> /* unconditionally mark file as archived */\n> StatusFilePath(pathbuf, fname, \".done\");\n> - sendFileWithContent(pathbuf, \"\");\n> + sendFileWithContent(pathbuf, \"\", manifest);\n>\n> Should we add an option to make manifest generation configurable to\n> reduce overhead during backup?\n>\n> Manifest file does not include directory information, should we include it?\n>\n> There is one warning:\n> In file included from ../../../src/include/fe_utils/string_utils.h:20:0,\n> from pg_basebackup.c:34:\n> pg_basebackup.c: In function ‘ReceiveTarFile’:\n> ../../../src/interfaces/libpq/pqexpbuffer.h:60:9: warning: the\n> comparison will always evaluate as ‘false’ for the address of ‘buf’\n> will never be NULL [-Waddress]\n> ((str) == NULL || (str)->maxlen == 0)\n> ^\n> pg_basebackup.c:1203:7: note: in expansion of macro 
‘PQExpBufferBroken’\n> if (PQExpBufferBroken(&buf))\n>\n>\nI also observed this warning. PFA to fix the same.\n\npg_gmtime can fail in case of malloc failure:\n> + /*\n> + * Convert time to a string. Since it's not clear what time zone to use\n> + * and since time zone definitions can change, possibly causing confusion,\n> + * use GMT always.\n> + */\n> + pg_strftime(timebuf, sizeof(timebuf), \"%Y-%m-%d %H:%M:%S %Z\",\n> + pg_gmtime(&mtime));\n>\n>\nFixed that into attached patch.\n\n\n\n\nRegards.\nRushabh Lathia\nwww.EnterpriseDB.com",
"msg_date": "Mon, 30 Sep 2019 15:37:09 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 5:31 AM Jeevan Chalke\n<jeevan.chalke@enterprisedb.com> wrote:\n> Entry for directory is not added in manifest. So it might be difficult\n> at client to get to know about the directories. Will it be good to add\n> an entry for each directory too? May be like:\n> Dir <dirname> <mtime>\n\nWell, what kind of corruption would this allow us to detect that we\ncan't detect as things stand? I think the only case is an empty\ndirectory. If it's not empty, we'd have some entries for the files in\nthat directory, and those files won't be able to exist unless the\ndirectory does. But, how would we end up backing up an empty\ndirectory, anyway?\n\nI don't really *mind* adding directories into the manifest, but I'm\nnot sure how much it helps.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 1 Oct 2019 08:13:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "My colleague Suraj did testing and noticed the performance impact\nwith the checksums. On further testing, he found that specifically with\nsha it's more of a performance impact.\n\nPlease find below statistics:\n\n10 tables (100 MB in each table)\n  without checksum: real 0m10.957s, user 0m0.367s, sys 0m2.275s\n  SHA-256 checksum: real 0m16.816s, user 0m0.210s, sys 0m2.067s (53% overhead)\n  md5 checksum:     real 0m11.895s, user 0m0.174s, sys 0m1.725s (8% overhead)\n  CRC checksum:     real 0m11.136s, user 0m0.365s, sys 0m2.298s (2% overhead)\n\n20 tables (100 MB in each table)\n  without checksum: real 0m20.610s, user 0m0.484s, sys 0m3.198s\n  SHA-256 checksum: real 0m31.745s, user 0m0.569s, sys 0m4.089s (54% overhead)\n  md5 checksum:     real 0m22.717s, user 0m0.638s, sys 0m4.026s (10% overhead)\n  CRC checksum:     real 0m21.075s, user 0m0.538s, sys 0m3.417s (2% overhead)\n\n50 tables (100 MB in each table)\n  without checksum: real 0m49.143s, user 0m1.646s, sys 0m8.499s\n  SHA-256 checksum: real 1m13.683s, user 0m1.305s, sys 0m10.541s (50% overhead)\n  md5 checksum:     real 0m51.856s, user 0m0.932s, sys 0m7.702s (6% overhead)\n  CRC checksum:     real 0m49.689s, user 0m1.028s, sys 0m6.921s (1% overhead)\n\n100 tables (100 MB in each table)\n  without checksum: real 1m34.308s, user 0m2.265s, sys 0m14.717s\n  SHA-256 checksum: real 2m22.403s, user 0m2.613s, sys 0m20.776s (51% overhead)\n  md5 checksum:     real 1m41.524s, user 0m2.158s, sys 0m15.949s (8% overhead)\n  CRC checksum:     real 1m35.045s, user 0m2.061s, sys 0m16.308s (1% overhead)\n\n100 tables (1 GB in each table)\n  without checksum: real 17m18.336s, user 0m20.222s, sys 3m12.960s\n  SHA-256 checksum: real 24m45.942s, user 0m26.911s, sys 3m33.501s (43% overhead)\n  md5 checksum:     real 17m41.670s, user 0m26.506s, sys 3m18.402s (2% overhead)\n  CRC checksum:     real 17m22.296s, user 0m26.811s, sys 3m56.653s (approx. 0.5% overhead;\n  sometimes this test completes within the same time as without checksum)\n\nConsidering the above results, I modified the earlier Robert's patch and added a\n\"manifest_with_checksums\" option to pg_basebackup. With the new patch,\nby default, checksums will be disabled and will be only enabled when the\n\"manifest_with_checksums\" option is provided. 
Also re-based all patch set.\n\n\n\nRegards,\n\n-- \nRushabh Lathia\nwww.EnterpriseDB.com\n\nOn Tue, Oct 1, 2019 at 5:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Sep 30, 2019 at 5:31 AM Jeevan Chalke\n> <jeevan.chalke@enterprisedb.com> wrote:\n> > Entry for directory is not added in manifest. So it might be difficult\n> > at client to get to know about the directories. Will it be good to add\n> > an entry for each directory too? May be like:\n> > Dir <dirname> <mtime>\n>\n> Well, what kind of corruption would this allow us to detect that we\n> can't detect as things stand? I think the only case is an empty\n> directory. If it's not empty, we'd have some entries for the files in\n> that directory, and those files won't be able to exist unless the\n> directory does. But, how would we end up backing up an empty\n> directory, anyway?\n>\n> I don't really *mind* adding directories into the manifest, but I'm\n> not sure how much it helps.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\n-- \nRushabh Lathia",
"msg_date": "Tue, 19 Nov 2019 15:30:17 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "\nOn 11/19/19 5:00 AM, Rushabh Lathia wrote:\n>\n>\n> My colleague Suraj did testing and noticed the performance impact\n> with the checksums. On further testing, he found that specifically with\n> sha its more of performance impact. \n>\n>\n\nI admit I haven't been following along closely, but why do we need a\ncryptographic checksum here instead of, say, a CRC? Do we think that\nsomehow the checksum might be forged? Use of cryptographic hashes as\ngeneral purpose checksums has become far too common IMNSHO.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 19 Nov 2019 08:49:24 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 11/19/19 5:00 AM, Rushabh Lathia wrote:\n> \n> My colleague Suraj did testing and noticed the performance impact\n> with the checksums. On further testing, he found that specifically with\n> sha its more of performance impact. \n\nWe have found that SHA1 adds about 3% overhead when the backup is also\ncompressed (gzip -6), which is what most people want to do. This\npercentage goes down even more if the backup is being transferred over a\nnetwork or to an object store such as S3.\n\nWe judged that the lower collision rate of SHA1 justified the additional\nexpense.\n\nThat said, making SHA256 optional seems reasonable. We decided not to\nmake our SHA1 checksums optional to reduce the test matrix and because\nparallelism largely addressed performance concerns.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 19 Nov 2019 16:34:16 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 7:19 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 11/19/19 5:00 AM, Rushabh Lathia wrote:\n> >\n> >\n> > My colleague Suraj did testing and noticed the performance impact\n> > with the checksums. On further testing, he found that specifically with\n> > sha its more of performance impact.\n> >\n> >\n>\n> I admit I haven't been following along closely, but why do we need a\n> cryptographic checksum here instead of, say, a CRC? Do we think that\n> somehow the checksum might be forged? Use of cryptographic hashes as\n> general purpose checksums has become far too common IMNSHO.\n>\n\nYeah, maybe. I was thinking to give the user an option to choose checksum\nalgorithms (SHA256, CRC, MD5, etc.), so that they are free to choose what\nsuits their environment.\n\nIf we decide to do that, then we need to store the checksum algorithm\ninformation in the manifest file.\n\nThoughts?\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan https://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\n-- \nRushabh Lathia",
"msg_date": "Wed, 20 Nov 2019 10:58:18 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nSince now we are generating the backup manifest file with each backup, it\nprovides us an option to validate the given backup.\nLet's say, we have taken a backup and after a few days, we want to check\nwhether that backup is validated or corruption-free without restarting the\nserver.\n\nPlease find attached POC patch for same which will be based on the latest\nbackup manifest patch from Rushabh. With this functionality, we add new\noption to pg_basebackup, something like --verify-backup.\nSo, the syntax would be:\n./bin/pg_basebackup --verify-backup -D <backup_directory_path>\n\nBasically, we read the backup_manifest file line by line from the given\ndirectory path and build the hash table, then scan the directory and\ncompare each file with the hash entry.\n\nThoughts/suggestions?\n\nOn Tue, Nov 19, 2019 at 3:30 PM Rushabh Lathia <rushabh.lathia@gmail.com>\nwrote:\n\n>\n>\n> My colleague Suraj did testing and noticed the performance impact\n> with the checksums. 
On further testing, he found that specifically with\n> sha its more of performance impact.\n>\n> Please find below statistics:\n>\n> no of tables without checksum SHA256\n> checksum % performnce\n> overhead\n> with\n> SHA-256 md5 checksum % performnce\n> overhead with md5 CRC checksum % performnce\n> overhead with\n> CRC\n> 10 (100 MB\n> in each table) real 0m10.957s\n> user 0m0.367s\n> sys 0m2.275s real 0m16.816s\n> user 0m0.210s\n> sys 0m2.067s 53% real 0m11.895s\n> user 0m0.174s\n> sys 0m1.725s 8% real 0m11.136s\n> user 0m0.365s\n> sys 0m2.298s 2%\n> 20 (100 MB\n> in each table) real 0m20.610s\n> user 0m0.484s\n> sys 0m3.198s real 0m31.745s\n> user 0m0.569s\n> sys 0m4.089s\n> 54% real 0m22.717s\n> user 0m0.638s\n> sys 0m4.026s 10% real 0m21.075s\n> user 0m0.538s\n> sys 0m3.417s 2%\n> 50 (100 MB\n> in each table) real 0m49.143s\n> user 0m1.646s\n> sys 0m8.499s real 1m13.683s\n> user 0m1.305s\n> sys 0m10.541s 50% real 0m51.856s\n> user 0m0.932s\n> sys 0m7.702s 6% real 0m49.689s\n> user 0m1.028s\n> sys 0m6.921s 1%\n> 100 (100 MB\n> in each table) real 1m34.308s\n> user 0m2.265s\n> sys 0m14.717s real 2m22.403s\n> user 0m2.613s\n> sys 0m20.776s 51% real 1m41.524s\n> user 0m2.158s\n> sys 0m15.949s\n> 8% real 1m35.045s\n> user 0m2.061s\n> sys 0m16.308s 1%\n> 100 (1 GB\n> in each table) real 17m18.336s\n> user 0m20.222s\n> sys 3m12.960s real 24m45.942s\n> user 0m26.911s\n> sys 3m33.501s 43% real 17m41.670s\n> user 0m26.506s\n> sys 3m18.402s 2% real 17m22.296s\n> user 0m26.811s\n> sys 3m56.653s\n>\n> sometimes, this test\n> completes within the\n> same time as without\n> checksum. approx. 0.5%\n>\n>\n> Considering the above results, I modified the earlier Robert's patch and\n> added\n> \"manifest_with_checksums\" option to pg_basebackup. With a new patch.\n> by default, checksums will be disabled and will be only enabled when\n> \"manifest_with_checksums\" option is provided. 
Also re-based all patch set.\n>\n>\n>\n> Regards,\n>\n> --\n> Rushabh Lathia\n> www.EnterpriseDB.com\n>\n> On Tue, Oct 1, 2019 at 5:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Mon, Sep 30, 2019 at 5:31 AM Jeevan Chalke\n>> <jeevan.chalke@enterprisedb.com> wrote:\n>> > Entry for directory is not added in manifest. So it might be difficult\n>> > at client to get to know about the directories. Will it be good to add\n>> > an entry for each directory too? May be like:\n>> > Dir <dirname> <mtime>\n>>\n>> Well, what kind of corruption would this allow us to detect that we\n>> can't detect as things stand? I think the only case is an empty\n>> directory. If it's not empty, we'd have some entries for the files in\n>> that directory, and those files won't be able to exist unless the\n>> directory does. But, how would we end up backing up an empty\n>> directory, anyway?\n>>\n>> I don't really *mind* adding directories into the manifest, but I'm\n>> not sure how much it helps.\n>>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>>\n>>\n>\n> --\n> Rushabh Lathia\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Wed, 20 Nov 2019 11:05:11 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 3:30 PM Rushabh Lathia <rushabh.lathia@gmail.com>\nwrote:\n\n>\n>\n> My colleague Suraj did testing and noticed the performance impact\n> with the checksums. On further testing, he found that specifically with\n> sha its more of performance impact.\n>\n> Please find below statistics:\n>\n> no of tables without checksum SHA256\n> checksum % performnce\n> overhead\n> with\n> SHA-256 md5 checksum % performnce\n> overhead with md5 CRC checksum % performnce\n> overhead with\n> CRC\n> 10 (100 MB\n> in each table) real 0m10.957s\n> user 0m0.367s\n> sys 0m2.275s real 0m16.816s\n> user 0m0.210s\n> sys 0m2.067s 53% real 0m11.895s\n> user 0m0.174s\n> sys 0m1.725s 8% real 0m11.136s\n> user 0m0.365s\n> sys 0m2.298s 2%\n> 20 (100 MB\n> in each table) real 0m20.610s\n> user 0m0.484s\n> sys 0m3.198s real 0m31.745s\n> user 0m0.569s\n> sys 0m4.089s\n> 54% real 0m22.717s\n> user 0m0.638s\n> sys 0m4.026s 10% real 0m21.075s\n> user 0m0.538s\n> sys 0m3.417s 2%\n> 50 (100 MB\n> in each table) real 0m49.143s\n> user 0m1.646s\n> sys 0m8.499s real 1m13.683s\n> user 0m1.305s\n> sys 0m10.541s 50% real 0m51.856s\n> user 0m0.932s\n> sys 0m7.702s 6% real 0m49.689s\n> user 0m1.028s\n> sys 0m6.921s 1%\n> 100 (100 MB\n> in each table) real 1m34.308s\n> user 0m2.265s\n> sys 0m14.717s real 2m22.403s\n> user 0m2.613s\n> sys 0m20.776s 51% real 1m41.524s\n> user 0m2.158s\n> sys 0m15.949s\n> 8% real 1m35.045s\n> user 0m2.061s\n> sys 0m16.308s 1%\n> 100 (1 GB\n> in each table) real 17m18.336s\n> user 0m20.222s\n> sys 3m12.960s real 24m45.942s\n> user 0m26.911s\n> sys 3m33.501s 43% real 17m41.670s\n> user 0m26.506s\n> sys 3m18.402s 2% real 17m22.296s\n> user 0m26.811s\n> sys 3m56.653s\n>\n> sometimes, this test\n> completes within the\n> same time as without\n> checksum. approx. 0.5%\n>\n>\n> Considering the above results, I modified the earlier Robert's patch and\n> added\n> \"manifest_with_checksums\" option to pg_basebackup. 
With a new patch.\n> by default, checksums will be disabled and will be only enabled when\n> \"manifest_with_checksums\" option is provided. Also re-based all patch set.\n>\n\nReview comments on 0004:\n\n1.\nI don't think we need o_manifest_with_checksums variable,\nmanifest_with_checksums can be used instead.\n\n2.\nWe need to document this new option for pg_basebackup and basebackup.\n\n3.\nAlso, instead of keeping manifest_with_checksums as a global variable, we\nshould pass that to the required function. Patch 0002 already modified the\nsignature of all relevant functions anyways. So just need to add one more\nbool\nvariable there.\n\n4.\nWhy we need a \"File\" at the start of each entry as we are adding files only?\nI wonder if we also need to provide a tablespace name and directory marker\nso\nthat we have \"Tablespace\" and \"Dir\" at the start.\n\n5.\nIf I don't provide manifest-with-checksums option then too I see that\nchecksum\nis calculated for backup_manifest file itself. Is that intentional or\nmissed?\nI think we should omit that too if this option is not provided.\n\n6.\nIs it possible to get only a backup manifest from the server? A client like\npg_basebackup can then use that to fetch files reading that.\n\nThanks\n\n\n>\n>\n>\n> Regards,\n>\n> --\n> Rushabh Lathia\n> www.EnterpriseDB.com\n>\n> On Tue, Oct 1, 2019 at 5:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Mon, Sep 30, 2019 at 5:31 AM Jeevan Chalke\n>> <jeevan.chalke@enterprisedb.com> wrote:\n>> > Entry for directory is not added in manifest. So it might be difficult\n>> > at client to get to know about the directories. Will it be good to add\n>> > an entry for each directory too? May be like:\n>> > Dir <dirname> <mtime>\n>>\n>> Well, what kind of corruption would this allow us to detect that we\n>> can't detect as things stand? I think the only case is an empty\n>> directory. 
If it's not empty, we'd have some entries for the files in\n>> that directory, and those files won't be able to exist unless the\n>> directory does. But, how would we end up backing up an empty\n>> directory, anyway?\n>>\n>> I don't really *mind* adding directories into the manifest, but I'm\n>> not sure how much it helps.\n>>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>>\n>>\n>\n> --\n> Rushabh Lathia\n>\n\n\n-- \nJeevan Chalke\nAssociate Database Architect & Team Lead, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 21 Nov 2019 14:33:05 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 11:05 AM Suraj Kharage <\nsuraj.kharage@enterprisedb.com> wrote:\n\n> Hi,\n>\n> Since now we are generating the backup manifest file with each backup, it\n> provides us an option to validate the given backup.\n> Let's say, we have taken a backup and after a few days, we want to check\n> whether that backup is validated or corruption-free without restarting the\n> server.\n>\n> Please find attached POC patch for same which will be based on the latest\n> backup manifest patch from Rushabh. With this functionality, we add new\n> option to pg_basebackup, something like --verify-backup.\n> So, the syntax would be:\n> ./bin/pg_basebackup --verify-backup -D <backup_directory_path>\n>\n> Basically, we read the backup_manifest file line by line from the given\n> directory path and build the hash table, then scan the directory and\n> compare each file with the hash entry.\n>\n> Thoughts/suggestions?\n>\n\n\nI like the idea of verifying the backup once we have backup_manifest with\nus.\nPeriodically verifying the already taken backup with this simple tool\nbecomes\neasy now.\n\nI have reviewed this patch and here are my comments:\n\n1.\n@@ -30,7 +30,9 @@\n #include \"common/file_perm.h\"\n #include \"common/file_utils.h\"\n #include \"common/logging.h\"\n+#include \"common/sha2.h\"\n #include \"common/string.h\"\n+#include \"fe_utils/simple_list.h\"\n #include \"fe_utils/recovery_gen.h\"\n #include \"fe_utils/string_utils.h\"\n #include \"getopt_long.h\"\n@@ -38,12 +40,19 @@\n #include \"pgtar.h\"\n #include \"pgtime.h\"\n #include \"pqexpbuffer.h\"\n+#include \"pgrhash.h\"\n #include \"receivelog.h\"\n #include \"replication/basebackup.h\"\n #include \"streamutil.h\"\n\nPlease add new files in order.\n\n2.\nCan hash related file names be renamed to backuphash.c and backuphash.h?\n\n3.\nNeed indentation adjustments at various places.\n\n4.\n+ char buf[1000000]; // 1MB chunk\n\nIt will be good if we have multiple of block /page size (or 
at-least power\nof 2\nnumber).\n\n5.\n+typedef struct pgrhash_entry\n+{\n+ struct pgrhash_entry *next; /* link to next entry in same bucket */\n+ DataDirectoryFileInfo *record;\n+} pgrhash_entry;\n+\n+struct pgrhash\n+{\n+ unsigned nbuckets; /* number of buckets */\n+ pgrhash_entry **bucket; /* pointer to hash entries */\n+};\n+\n+typedef struct pgrhash pgrhash;\n\nThese two can be moved to .h file instead of redefining over there.\n\n6.\n+/*\n+ * TODO: this function is not necessary, can be removed.\n+ * Test whether the given row number is match for the supplied keys.\n+ */\n+static bool\n+pgrhash_compare(char *bt_filename, char *filename)\n\nYeah, it can be removed by doing strcmp() at the required places rather than\ndoing it in a separate function.\n\n7.\nmdate is not compared anywhere. I understand that it can't be compared with\nthe file in the backup directory and its entry in the manifest as manifest\nentry gives mtime from server file whereas the same file in the backup will\nhave different mtime. But adding a few comments there will be good.\n\n8.\n+ char mdate[24];\n\nshould be mtime instead?\n\n\nThanks\n\n-- \nJeevan Chalke\nAssociate Database Architect & Team Lead, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 21 Nov 2019 14:51:01 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Thank you Jeevan for reviewing the patch.\n\nOn Thu, Nov 21, 2019 at 2:33 PM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n>\n>\n> On Tue, Nov 19, 2019 at 3:30 PM Rushabh Lathia <rushabh.lathia@gmail.com>\n> wrote:\n>\n>>\n>>\n>> My colleague Suraj did testing and noticed the performance impact\n>> with the checksums. On further testing, he found that specifically with\n>> sha its more of performance impact.\n>>\n>> Please find below statistics:\n>>\n>> 10 tables (100 MB each):\n>> without checksum: real 0m10.957s user 0m0.367s sys 0m2.275s\n>> SHA-256: real 0m16.816s user 0m0.210s sys 0m2.067s (53% overhead)\n>> MD5: real 0m11.895s user 0m0.174s sys 0m1.725s (8% overhead)\n>> CRC: real 0m11.136s user 0m0.365s sys 0m2.298s (2% overhead)\n>>\n>> 20 tables (100 MB each):\n>> without checksum: real 0m20.610s user 0m0.484s sys 0m3.198s\n>> SHA-256: real 0m31.745s user 0m0.569s sys 0m4.089s (54% overhead)\n>> MD5: real 0m22.717s user 0m0.638s sys 0m4.026s (10% overhead)\n>> CRC: real 0m21.075s user 0m0.538s sys 0m3.417s (2% overhead)\n>>\n>> 50 tables (100 MB each):\n>> without checksum: real 0m49.143s user 0m1.646s sys 0m8.499s\n>> SHA-256: real 1m13.683s user 0m1.305s sys 0m10.541s (50% overhead)\n>> MD5: real 0m51.856s user 0m0.932s sys 0m7.702s (6% overhead)\n>> CRC: real 0m49.689s user 0m1.028s sys 0m6.921s (1% overhead)\n>>\n>> 100 tables (100 MB each):\n>> without checksum: real 1m34.308s user 0m2.265s sys 0m14.717s\n>> SHA-256: real 2m22.403s user 0m2.613s sys 0m20.776s (51% overhead)\n>> MD5: real 1m41.524s user 0m2.158s sys 0m15.949s (8% overhead)\n>> CRC: real 1m35.045s user 0m2.061s sys 0m16.308s (1% overhead)\n>>\n>> 100 tables (1 GB each):\n>> without checksum: real 17m18.336s user 0m20.222s sys 3m12.960s\n>> SHA-256: real 24m45.942s user 0m26.911s sys 3m33.501s (43% overhead)\n>> MD5: real 17m41.670s user 0m26.506s sys 3m18.402s (2% overhead)\n>> CRC: real 17m22.296s user 0m26.811s sys 3m56.653s (approx. 0.5% overhead;\n>> sometimes this test completes within the same time as without checksum)\n>>\n>>\n>> Considering the above results, I modified the earlier Robert's patch and\n>> added\n>> \"manifest_with_checksums\" option to pg_basebackup. With a new patch.\n>> by default, checksums will be disabled and will be only enabled when\n>> \"manifest_with_checksums\" option is provided. Also re-based all patch\n>> set.\n>>\n>\n> Review comments on 0004:\n>\n> 1.\n> I don't think we need o_manifest_with_checksums variable,\n> manifest_with_checksums can be used instead.\n>\n\nYes, done in the latest version of patch.\n\n\n> 2.\n> We need to document this new option for pg_basebackup and basebackup.\n>\n>\nDone, attaching documentation patch with the mail.\n\n3.\n> Also, instead of keeping manifest_with_checksums as a global variable, we\n> should pass that to the required function. Patch 0002 already modified the\n> signature of all relevant functions anyways. So just need to add one more\n> bool\n> variable there.\n>\n>\nYes, earlier I did that implementation but later found that we already\nhave a checksum-related global variable i.e. noverify_checksums, so\nthat it will be a cleaner implementation - rather than modifying the function\ndefinition to pass the variable (which is actually global for the operation).\n\n4.\n> Why we need a \"File\" at the start of each entry as we are adding files\n> only?\n> I wonder if we also need to provide a tablespace name and directory marker\n> so\n> that we have \"Tablespace\" and \"Dir\" at the start.\n>\n\nSorry, I am not quite sure about this; maybe Robert is the right person\nto answer this.\n\n\n> 5.\n> If I don't provide manifest-with-checksums option then too I see that\n> checksum\n> is calculated for backup_manifest file itself. Is that intentional or\n> missed?\n> I think we should omit that too if this option is not provided.\n>\n>\nOops yeah, corrected this in the latest version of patch.\n\n6.\n> Is it possible to get only a backup manifest from the server? 
A client like\n> pg_basebackup can then use that to fetch files reading that.\n>\n>\nCurrently we don't have any option to just get the manifest file from the\nserver. I am not sure why we need this at this point of time.\n\n\n\nRegards,\n\nRushabh Lathia\nwww.EnterpriseDB.com",
"msg_date": "Fri, 22 Nov 2019 15:27:53 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 8:49 AM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> I admit I haven't been following along closely, but why do we need a\n> cryptographic checksum here instead of, say, a CRC? Do we think that\n> somehow the checksum might be forged? Use of cryptographic hashes as\n> general purpose checksums has become far too common IMNSHO.\n\nI tend to agree with you. I suspect if we just use CRC, some people\nare going to complain that they want something \"stronger\" because that\nwill make them feel better about error detection rates or obscure\nthreat models or whatever other things a SHA-based approach might be\nable to catch that CRC would not catch. However, I suspect that for\nnormal use cases, CRC would be totally adequate, and the fact that the\nperformance overhead is almost none vs. a whole lot - at least in this\ntest setup, other results might vary depending on what you test -\nmakes it look pretty appealing.\n\nMy gut reaction is to make CRC the default, but have an option that\nyou can use to either turn it off entirely (if even 1-2% is too much\nfor you) or opt in to SHA-something if you want it. I don't think we\nshould offer an option for MD5, because MD5 is a dirty word these days\nand will cause problems for users who have to worry about FIPS 140-2\ncompliance. Phrased more positively, if you want a cryptographic hash\nat all, you should probably use one that isn't widely viewed as too\nweak.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 22 Nov 2019 10:58:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 11/22/19 10:58 AM, Robert Haas wrote:\n> On Tue, Nov 19, 2019 at 8:49 AM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> I admit I haven't been following along closely, but why do we need a\n>> cryptographic checksum here instead of, say, a CRC? Do we think that\n>> somehow the checksum might be forged? Use of cryptographic hashes as\n>> general purpose checksums has become far too common IMNSHO.\n> \n> I tend to agree with you. I suspect if we just use CRC, some people\n> are going to complain that they want something \"stronger\" because that\n> will make them feel better about error detection rates or obscure\n> threat models or whatever other things a SHA-based approach might be\n> able to catch that CRC would not catch. \n\nWell, the maximum amount of data that can be protected with a 32-bit CRC\nis 512MB according to all the sources I found (NIST, Wikipedia, etc). I\npresume that's what we are talking about since I can't find any 64-bit\nCRC code in core or this patch.\n\nSo, that's half of what we need with the default relation segment size\n(I've seen larger in the field).\n\n> I don't think we\n> should offer an option for MD5, because MD5 is a dirty word these days\n> and will cause problems for users who have to worry about FIPS 140-2\n> compliance. \n\n+1.\n\n> Phrased more positively, if you want a cryptographic hash\n> at all, you should probably use one that isn't widely viewed as too\n> weak.\n\nSure. There's another advantage to picking an algorithm with lower\ncollision rates, though.\n\nCRCs are fine for catching transmission errors (as caveated above) but\nnot as great for comparing two files for equality. 
With strong hashes\nyou can confidently compare local files against the path, size, and hash\nstored in the manifest and save yourself a round-trip to the remote\nstorage to grab the file if it has not changed locally.\n\nThis is the basic premise of what we call delta restore which can speed\nup restores by orders of magnitude.\n\nDelta restore is the main advantage that made us decide to require SHA1\nchecksums. In most cases, restore speed is more important than backup\nspeed.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 22 Nov 2019 13:10:06 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Nov 19, 2019 at 4:34 PM David Steele <david@pgmasters.net> wrote:\n> On 11/19/19 5:00 AM, Rushabh Lathia wrote:\n> > My colleague Suraj did testing and noticed the performance impact\n> > with the checksums. On further testing, he found that specifically with\n> > sha its more of performance impact.\n>\n> We have found that SHA1 adds about 3% overhead when the backup is also\n> compressed (gzip -6), which is what most people want to do. This\n> percentage goes down even more if the backup is being transferred over a\n> network or to an object store such as S3.\n\nI don't really understand why your tests and Suraj's tests are showing\nsuch different results, or how compression plays into it. I tried\nrunning shasum -a$N lineitem-big.csv on my laptop, where that file\ncontains ~70MB of random-looking data whose source I no longer\nremember. Here are the results by algorithm: SHA1, ~25 seconds; SHA224\nor SHA256, ~52 seconds; SHA384 and SHA512, ~39 seconds. Aside from the\ninteresting discovery that the algorithms with more bits actually run\nfaster on this machine, this seems to show that there's only about a\n~2x difference between the SHA1 that you used and that I (pretty much\narbitrarily) used. But Rushabh and Suraj are reporting 43-54%\noverhead, and even if you divide that by two it's a lot more than 3%.\n\nOne possible explanation is that the compression is really slow, and\nso it makes the checksum overhead a smaller percentage of the total.\nLike, if you've already slowed down the backup by 8x, then 24%\noverhead turns into 3% overhead! But I assume that's not the real\nexplanation here. Another explanation is that your tests were\nI/O-bound rather than CPU-bound, maybe because you tested with a much\nlarger database or a much smaller amount of I/O bandwidth. If you had\nCPU cycles to burn, then neither compression nor checksums will cost\nmuch in terms of overall runtime. 
But that's a little hard to swallow,\ntoo, because I don't think the testing mentioned above was done using\nany sort of exotic test configuration, so why would yours be so\ndifferent? Another possibility is that Suraj and Rushabh messed up the\ntests, or alternatively that you did. Or, it could be that your\nchecksum implementation is way faster than the one PG uses, and so the\nimpact was much less. I don't know, but I'm having a hard time\nunderstanding the divergent results. Any ideas?\n\n> We judged that the lower collision rate of SHA1 justified the additional\n> expense.\n>\n> That said, making SHA256 optional seems reasonable. We decided not to\n> make our SHA1 checksums optional to reduce the test matrix and because\n> parallelism largely addressed performance concerns.\n\nJust to be clear, I really don't have any objection to using SHA1\ninstead of SHA256, or anything else for that matter. I picked the one\nto use out of a hat for the purpose of having a POC quickly; I didn't\nhave any intention to insist on that as the final selection. It seems\nlikely that anything we pick here will eventually be considered\nobsolete, so I think we need to allow for configurability, but I don't\nhave a horse in the game as far as an initial selection goes.\n\nExcept - and this gets back to the previous point - I don't want to\nslow down backups by 40% by default. I wouldn't mind slowing them down\n3% by default, but 40% is too much overhead. I think we've gotta\neither get the overhead of using SHA way down or not use SHA by default.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 22 Nov 2019 13:24:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 1:10 PM David Steele <david@pgmasters.net> wrote:\n> Well, the maximum amount of data that can be protected with a 32-bit CRC\n> is 512MB according to all the sources I found (NIST, Wikipedia, etc). I\n> presume that's what we are talking about since I can't find any 64-bit\n> CRC code in core or this patch.\n\nCould you give a more precise citation for this? I can't find a\nreference to that in the Wikipedia article off-hand and I don't know\nwhere to look in NIST. I apologize if I'm being dense here, but I\ndon't see why there should be any limit on the amount of data that can\nbe protected. The important thing is that if the original file F is\naltered to F', we hope that CHECKSUM(F) != CHECKSUM(F'). The\nprobability of that, assuming that the alteration is random rather\nthan malicious and that the checksum function is equally likely to\nproduce every possible output, is just 1-2^-${CHECKSUM_BITS},\nregardless of the length of the message (except that there might be\nsome special cases for very short messages, which don't matter here).\n\nThis analysis by me seems to match\nhttps://en.wikipedia.org/wiki/Cyclic_redundancy_check, which says:\n\n\"Typically an n-bit CRC applied to a data block of arbitrary length\nwill detect any single error burst not longer than n bits, and the\nfraction of all longer error bursts that it will detect is (1 −\n2^−n).\"\n\nNotice the phrase \"a data block of arbitrary length\" and the formula \"1 - 2^-n\".\n\n> > Phrased more positively, if you want a cryptographic hash\n> > at all, you should probably use one that isn't widely viewed as too\n> > weak.\n>\n> Sure. There's another advantage to picking an algorithm with lower\n> collision rates, though.\n>\n> CRCs are fine for catching transmission errors (as caveated above) but\n> not as great for comparing two files for equality. 
With strong hashes\n> you can confidently compare local files against the path, size, and hash\n> stored in the manifest and save yourself a round-trip to the remote\n> storage to grab the file if it has not changed locally.\n\nI agree in part. I think there are two reasons why a cryptographically\nstrong hash is desirable for delta restore. First, since the checksums\nare longer, the probability of a false match happening randomly is\nlower, which is important. Even if the above analysis is correct and\nthe chance of a false match is just 2^-32 with a 32-bit CRC, if you\nback up ten million files every day, you'll likely get a false match\nwithin a few years or less, and once is too often. Second, unlike what\nI supposed above, the contents of a PostgreSQL data file are not\nchosen at random, unlike transmission errors, which probably are more\nor less random. It seems somewhat possible that there is an adversary\nwho is trying to choose the data that gets stored in some particular\nrecord so as to create a false checksum match. 
A few percentage points is probably not a big deal, but\na user who has an 8-hour window to get the backup done overnight will\nnot be happy if it's taking 6 hours now and we tack 40%-50% on to\nthat. So I think that we either have to disable backup checksums by\ndefault, or figure out a way to get the overhead down to something a\nlot smaller than what current tests are showing -- which we could\npossibly do without changing the algorithm if we can somehow make it a\nlot cheaper, but otherwise I think the choice is between disabling the\nfunctionality altogether by default and adopting a less-expensive\nalgorithm. Maybe someday when delta restore is in core and widely used\nand CPUs are faster, it'll make sense to revise the default, and\nthat's cool, but I can't see imposing a big overhead by default to\nenable a feature core doesn't have yet...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 22 Nov 2019 14:01:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 11/22/19 1:24 PM, Robert Haas wrote:\n> On Tue, Nov 19, 2019 at 4:34 PM David Steele <david@pgmasters.net> wrote:\n>> On 11/19/19 5:00 AM, Rushabh Lathia wrote:\n>>> My colleague Suraj did testing and noticed the performance impact\n>>> with the checksums. On further testing, he found that specifically with\n>>> sha its more of performance impact.\n>>\n>> We have found that SHA1 adds about 3% overhead when the backup is also\n>> compressed (gzip -6), which is what most people want to do. This\n>> percentage goes down even more if the backup is being transferred over a\n>> network or to an object store such as S3.\n> \n> I don't really understand why your tests and Suraj's tests are showing\n> such different results, or how compression plays into it. I tried\n> running shasum -a$N lineitem-big.csv on my laptop, where that file\n> contains ~70MB of random-looking data whose source I no longer\n> remember. Here are the results by algorithm: SHA1, ~25 seconds; SHA224\n> or SHA256, ~52 seconds; SHA384 and SHA512, ~39 seconds. Aside from the\n> interesting discovery that the algorithms with more bits actually run\n> faster on this machine, this seems to show that there's only about a\n> ~2x difference between the SHA1 that you used and that I (pretty much\n> arbitrarily) used. But Rushabh and Suraj are reporting 43-54%\n> overhead, and even if you divide that by two it's a lot more than 3%.\n> \n> One possible explanation is that the compression is really slow, and\n> so it makes the checksum overhead a smaller percentage of the total.\n> Like, if you've already slowed down the backup by 8x, then 24%\n> overhead turns into 3% overhead! But I assume that's not the real\n> explanation here. \n\nThat's the real explanation here. Hash calculations run at the same\nspeed, they just become a smaller portion of the *total* time once\ncompression (gzip -6) is added. 
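To make that concrete, one can time the two steps side by side. A rough, self-contained sketch (plain Python for illustration - nothing from the patch, and the absolute numbers will differ per machine):

```python
# Time SHA-1 hashing vs. gzip-6-style compression of the same 16 MB of
# incompressible (random) data. The point is the relative cost: the hash
# is a small slice of the combined hash+compress work.
import hashlib
import os
import time
import zlib

data = os.urandom(16 * 1024 * 1024)

t0 = time.perf_counter()
digest = hashlib.sha1(data).hexdigest()
t_hash = time.perf_counter() - t0

t0 = time.perf_counter()
compressed = zlib.compress(data, 6)  # roughly what gzip -6 does
t_compress = time.perf_counter() - t0

share = t_hash / (t_hash + t_compress)
print(f"sha1: {t_hash:.3f}s  deflate-6: {t_compress:.3f}s  hash share: {share:.1%}")
```

In runs like this the deflate step tends to dominate, which is why the hash overhead largely disappears once compression (or a network hop) is in the picture.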
With something like lz4 hashing will\nobviously be a big percentage of the total.\n\nAlso consider how much extra latency you get from copying over a\nnetwork. My 3% did not include that but realistically most backups are\nrunning over a network (hopefully).\n\n>> That said, making SHA256 optional seems reasonable. We decided not to\n>> make our SHA1 checksums optional to reduce the test matrix and because\n>> parallelism largely addressed performance concerns.\n> \n> Just to be clear, I really don't have any objection to using SHA1\n> instead of SHA256, or anything else for that matter. I picked the one\n> to use out of a hat for the purpose of having a POC quickly; I didn't\n> have any intention to insist on that as the final selection. It seems\n> likely that anything we pick here will eventually be considered\n> obsolete, so I think we need to allow for configurability, but I don't\n> have a horse in the game as far as an initial selection goes.\n\nWe decided that SHA1 was good enough and there was no need to go up to\nSHA256. What we were interested in was collision rates and what the\nchance of getting a false positive were based on the combination of\npath, size, and hash. With SHA1 the chance of a collision was literally\nastronomically low (as in the universe would probably end before it\nhappened, depending on whether you are an expand forever or contract\nproponent).\n\n> Except - and this gets back to the previous point - I don't want to\n> slow down backups by 40% by default. I wouldn't mind slowing them down\n> 3% by default, but 40% is too much overhead. I think we've gotta\n> either the overhead of using SHA way down or not use SHA by default.\n\nMaybe -- my take is that the measurements, an uncompressed backup to the\nlocal filesystem, are not a very realistic use case.\n\nHowever, I'm still fine with leaving the user the option of checksums or\nno. 
I just wanted to point out that CRCs have their limits so maybe\nthat's not a great option unless it is properly caveated and perhaps not\nthe default.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 22 Nov 2019 14:02:12 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 11/22/19 2:01 PM, Robert Haas wrote:\n> On Fri, Nov 22, 2019 at 1:10 PM David Steele <david@pgmasters.net> wrote:\n>> Well, the maximum amount of data that can be protected with a 32-bit CRC\n>> is 512MB according to all the sources I found (NIST, Wikipedia, etc). I\n>> presume that's what we are talking about since I can't find any 64-bit\n>> CRC code in core or this patch.\n> \n> Could you give a more precise citation for this? \n\nSee:\nhttps://www.nist.gov/system/files/documents/2017/04/26/lrdc_systems_part2_032713.pdf\nSearch for \"The maximum block size\"\n\nhttps://en.wikipedia.org/wiki/Cyclic_redundancy_check\n\"The design of the CRC polynomial depends on the maximum total length of\nthe block to be protected (data + CRC bits)\", which I took to mean there\nare limits.\n\nHere's another interesting bit from:\nhttps://en.wikipedia.org/wiki/Mathematics_of_cyclic_redundancy_checks\n\"Because a CRC is based on division, no polynomial can detect errors\nconsisting of a string of zeroes prepended to the data, or of missing\nleading zeroes\" -- but it appears to matter what CRC you are using.\nThere's a variation that works in this case and hopefully we are using\nthat one.\n\nThis paper talks about appropriate block lengths vs CRC length:\nhttp://users.ece.cmu.edu/~koopman/roses/dsn04/koopman04_crc_poly_embedded.pdf\nbut it is concerned with network transmission and small block lengths.\n\n> \"Typically an n-bit CRC applied to a data block of arbitrary length\n> will detect any single error burst not longer than n bits, and the\n> fraction of all longer error bursts that it will detect is (1 −\n> 2^−n).\"\n\nI'm not sure how encouraging I find this -- a four-byte error burst is\nnot a lot, and 2^32 is only 4 billion. 
We have individual users who have backed up\nmore than 4 billion files over the last few years.\n\n>> This is the basic premise of what we call delta restore which can speed\n>> up restores by orders of magnitude.\n>>\n>> Delta restore is the main advantage that made us decide to require SHA1\n>> checksums. In most cases, restore speed is more important than backup\n>> speed.\n> \n> I see your point, but it's not the whole story. We've encountered a\n> bunch of cases where the time it took to complete a backup exceeded\n> the user's desired backup interval, which is obviously very bad, or\n> even more commonly where it exceeded the length of the user's\n> \"low-usage\" period when they could tolerate the extra overhead imposed\n> by the backup. A few percentage points is probably not a big deal, but\n> a user who has an 8-hour window to get the backup done overnight will\n> not be happy if it's taking 6 hours now and we tack 40%-50% on to\n> that. So I think that we either have to disable backup checksums by\n> default, or figure out a way to get the overhead down to something a\n> lot smaller than what current tests are showing -- which we could\n> possibly do without changing the algorithm if we can somehow make it a\n> lot cheaper, but otherwise I think the choice is between disabling the\n> functionality altogether by default and adopting a less-expensive\n> algorithm. Maybe someday when delta restore is in core and widely used\n> and CPUs are faster, it'll make sense to revise the default, and\n> that's cool, but I can't see imposing a big overhead by default to\n> enable a feature core doesn't have yet...\n\nOK, I'll buy that. But I *don't* think CRCs should be allowed for\ndeltas (when we have them) and I *do* think we should caveat their\neffectiveness (assuming we can agree on them).\n\nIn general the answer to faster backups should be more cores/faster\nnetwork/faster disk, not compromising backup integrity. 
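For concreteness, the comparison delta restore makes is conceptually just this (a simplified Python sketch - the manifest entry layout here is invented for illustration, it is not the actual pgBackRest or patch format):

```python
# Decide whether a local file must be re-fetched: only when its size or
# SHA-1 no longer matches the manifest entry recorded for that path.
# The entry dict {'size': ..., 'sha1': ...} is a made-up illustration format.
import hashlib
import os
import tempfile

def needs_restore(local_path, entry):
    if not os.path.exists(local_path):
        return True
    if os.path.getsize(local_path) != entry["size"]:
        return True  # cheap check first, no hashing needed
    h = hashlib.sha1()
    with open(local_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() != entry["sha1"]

# Tiny demo: an unchanged file is skipped entirely.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello backup")
    path = f.name
entry = {"size": 12, "sha1": hashlib.sha1(b"hello backup").hexdigest()}
print(needs_restore(path, entry))  # -> False: local copy matches, skip the fetch
os.unlink(path)
```

The size check costs nothing, and the hash check reads only local data - which is why unchanged files never touch the remote storage at all.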
I understand\nwe'll need to wait until we have parallelism in pg_basebackup to justify\nthat answer.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 22 Nov 2019 14:29:17 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Moin Robert,\n\nOn 2019-11-22 20:01, Robert Haas wrote:\n> On Fri, Nov 22, 2019 at 1:10 PM David Steele <david@pgmasters.net> \n> wrote:\n>> Well, the maximum amount of data that can be protected with a 32-bit \n>> CRC\n>> is 512MB according to all the sources I found (NIST, Wikipedia, etc). \n>> I\n>> presume that's what we are talking about since I can't find any 64-bit\n>> CRC code in core or this patch.\n> \n> Could you give a more precise citation for this? I can't find a\n> reference to that in the Wikipedia article off-hand and I don't know\n> where to look in NIST. I apologize if I'm being dense here, but I\n> don't see why there should be any limit on the amount of data that can\n> be protected. The important thing is that if the original file F is\n> altered to F', we hope that CHECKSUM(F) != CHECKSUM(F'). The\n> probability of that, assuming that the alteration is random rather\n> than malicious and that the checksum function is equally likely to\n> produce every possible output, is just 1-2^-${CHECKSUM_BITS},\n> regardless of the length of the message (except that there might be\n> some special cases for very short messages, which don't matter here).\n> \n> This analysis by me seems to match\n> https://en.wikipedia.org/wiki/Cyclic_redundancy_check, which says:\n> \n> \"Typically an n-bit CRC applied to a data block of arbitrary length\n> will detect any single error burst not longer than n bits, and the\n> fraction of all longer error bursts that it will detect is (1 −\n> 2^−n).\"\n> \n> Notice the phrase \"a data block of arbitrary length\" and the formula \"1 \n> - 2^-n\".\n\nIt is related to the number of states, and the birthday problem factors \nin it, too:\n\n https://en.wikipedia.org/wiki/Birthday_problem\n\nIf you have a 32 bit checksum or hash, it can represent only 2**32 \nstates at most (or less, if the\nalgorithm isn't really good).\n\nEach byte is 8 bit, so 2 ** 32 / 8 is 512 Mbyte. 
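One can put rough numbers on that birthday effect with the usual approximation (a quick sketch, independent of the patch; it assumes checksum values are uniformly distributed):

```python
# Approximate probability of at least one collision among n random values
# drawn from 2^bits possible checksums: 1 - exp(-n*(n-1) / (2 * 2^bits)).
import math

def collision_probability(n, bits=32):
    states = 2 ** bits
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * states))

for n in (10_000, 100_000, 10_000_000):
    print(f"{n:>10} 32-bit sums: p(collision) ~ {collision_probability(n):.4f}")
```

Already at a hundred thousand values a 32-bit sum is more likely than not to have collided somewhere, while the same formula for a 160-bit hash stays vanishingly small.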
If you process your \ndata bit by bit, each\nnew bit would add a new state (consider: missing bit == 0, added bit == \n1). If each new state\nis represented by a different checksum, all possible 2 ** 32 values are \nexhausted after\nprocessing 512 Mbyte, after that you get one of the former states again \n- aka a collision.\n\nThere is no way around it with so few bits, no matter what algorithm \nyou choose.\n\n>> > Phrased more positively, if you want a cryptographic hash\n>> > at all, you should probably use one that isn't widely viewed as too\n>> > weak.\n>> \n>> Sure. There's another advantage to picking an algorithm with lower\n>> collision rates, though.\n>> \n>> CRCs are fine for catching transmission errors (as caveated above) but\n>> not as great for comparing two files for equality. With strong hashes\n>> you can confidently compare local files against the path, size, and \n>> hash\n>> stored in the manifest and save yourself a round-trip to the remote\n>> storage to grab the file if it has not changed locally.\n> \n> I agree in part. I think there are two reasons why a cryptographically\n> strong hash is desirable for delta restore. First, since the checksums\n> are longer, the probability of a false match happening randomly is\n> lower, which is important. Even if the above analysis is correct and\n> the chance of a false match is just 2^-32 with a 32-bit CRC, if you\n> back up ten million files every day, you'll likely get a false match\n> within a few years or less, and once is too often. Second, unlike what\n> I supposed above, the contents of a PostgreSQL data file are not\n> chosen at random, unlike transmission errors, which probably are more\n> or less random. It seems somewhat possible that there is an adversary\n> who is trying to choose the data that gets stored in some particular\n> record so as to create a false checksum match. 
A CRC is a lot easier\n> to fool than a cryptographic hash, so I think that using a CRC of *any*\n> length for this kind of use case would be extremely dangerous no\n> matter the probability of an accidental match.\n\nAgreed. See above.\n\nHowever, if you choose a hash, please do not go below SHA-256. Both MD5\nand SHA-1 already had collision attacks, and these are only bound\nto get worse.\n\n https://www.mscs.dal.ca/~selinger/md5collision/\n https://shattered.io/\n\nIt might even be a wise idea to encode the used hash algorithm into the\nmanifest file, so it can be changed later. The hash length alone might not be\nenough to decide which algorithm is the one used.\n\n>> This is the basic premise of what we call delta restore which can \n>> speed\n>> up restores by orders of magnitude.\n>> \n>> Delta restore is the main advantage that made us decide to require \n>> SHA1\n>> checksums. In most cases, restore speed is more important than backup\n>> speed.\n\n> I see your point, but it's not the whole story. We've encountered a\n> bunch of cases where the time it took to complete a backup exceeded\n> the user's desired backup interval, which is obviously very bad, or\n> even more commonly where it exceeded the length of the user's\n> \"low-usage\" period when they could tolerate the extra overhead imposed\n> by the backup. A few percentage points is probably not a big deal, but\n> a user who has an 8-hour window to get the backup done overnight will\n> not be happy if it's taking 6 hours now and we tack 40%-50% on to\n> that. So I think that we either have to disable backup checksums by\n> default, or figure out a way to get the overhead down to something a\n> lot smaller than what current tests are showing -- which we could\n> possibly do without changing the algorithm if we can somehow make it a\n> lot cheaper, but otherwise I think the choice is between disabling the\n> functionality altogether by default and adopting a less-expensive\n> algorithm. 
Maybe someday when delta restore is in core and widely used\n> and CPUs are faster, it'll make sense to revise the default, and\n> that's cool, but I can't see imposing a big overhead by default to\n> enable a feature core doesn't have yet...\n\nModern algorithms are amazingly fast on modern hardware, some even\nare implemented in hardware nowadays:\n\n https://software.intel.com/en-us/articles/intel-sha-extensions\n\nQuote from:\n\n \nhttps://neosmart.net/blog/2017/will-amds-ryzen-finally-bring-sha-extensions-to-intels-cpus/\n\n \"Despite the extremely limited availability of SHA extension support\n in modern desktop and mobile processors, crypto libraries have already\n upstreamed support to great effect. Botan’s SHA extension patches show a\n significant 3x to 5x performance boost when taking advantage of the\n hardware extensions, and the Linux kernel itself shipped with hardware\n SHA support with version 4.4, bringing a very respectable 3.6x performance\n upgrade over the already hardware-assisted SSE3-enabled code.\"\n\nIf you need to load the data from disk and shove it over a network, the\nhashing will certainly be very little overhead, it might even be completely\ninvisible, since it can run in parallel to all the other things. Sure, there\nis the thing called zero-copy networking, but if you have to compress the\ndata before sending it to the network, you have to put it through the CPU\nanyway. 
And if you have more than one core, the second one can do the\nhashing in parallel to the first one doing the compression.\n\nTo get a feeling one can use:\n\n openssl speed md5 sha1 sha256 sha512\n\nOn my really-not-fast desktop CPU (i5-4690T CPU @ 2.50GHz) it says:\n\n The 'numbers' are in 1000s of bytes per second processed.\n type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 \nbytes 16384 bytes\n md5 122638.55k 277023.96k 487725.57k 630806.19k \n683892.74k 688553.98k\n sha1 127226.45k 313891.52k 632510.55k 865753.43k \n960995.33k 977215.19k\n sha256 77611.02k 173368.15k 325460.99k 412633.43k \n447022.92k 448020.48k\n sha512 51164.77k 205189.87k 361345.79k 543883.26k \n638372.52k 645933.74k\n\nOr in other words, it can hash nearly 931 MByte /s with SHA-1 and about\n427 MByte / s with SHA-256 (if I haven't miscalculated something). You'd \nneed a\npretty fast disk (aka M.2 SSD) and network (aka > 1 Gbit) to top these \nspeeds\nand then you'd use a real CPU for your server, not some poor Intel \npowersaving\nsurfing thingy-majingy :)\n\nBest regards,\n\nTels\n\n\n",
"msg_date": "Fri, 22 Nov 2019 23:15:29 +0100",
"msg_from": "Tels <nospam-pg-abuse@bloodgate.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
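Tels's openssl figures above are easy to sanity-check in any environment. A minimal sketch using Python's standard library — `hashlib` and `zlib.crc32` stand in for the server-side implementations, so the absolute numbers are machine-dependent and are not the figures quoted in the mail (note also that zlib's CRC-32 is not the CRC-32C variant PostgreSQL uses, though the cost profile is similar):

```python
import hashlib
import time
import zlib

def mb_per_sec(fn, data, rounds=3):
    """Best-of-N throughput, in MB/s, for one pass of fn over data."""
    best = float("inf")
    for _ in range(rounds):
        start = time.perf_counter()
        fn(data)
        best = min(best, time.perf_counter() - start)
    return len(data) / best / 1e6

# 16 MiB zero buffer, a stand-in for relation file data.
data = bytes(16 * 1024 * 1024)

results = {
    "crc32": mb_per_sec(zlib.crc32, data),
    "sha1": mb_per_sec(lambda d: hashlib.sha1(d).digest(), data),
    "sha256": mb_per_sec(lambda d: hashlib.sha256(d).digest(), data),
}
for name, mbs in results.items():
    print(f"{name:7s} {mbs:10.1f} MB/s")
```

On typical hardware this reproduces the ordering Tels describes: CRC-32 well ahead, with SHA-1 between it and SHA-256.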
{
"msg_contents": "On 11/22/19 5:15 PM, Tels wrote:\n> On 2019-11-22 20:01, Robert Haas wrote:\n>> On Fri, Nov 22, 2019 at 1:10 PM David Steele <david@pgmasters.net> wrote:\n> \n>>> > Phrased more positively, if you want a cryptographic hash\n>>> > at all, you should probably use one that isn't widely viewed as too\n>>> > weak.\n>>>\n>>> Sure. There's another advantage to picking an algorithm with lower\n>>> collision rates, though.\n>>>\n>>> CRCs are fine for catching transmission errors (as caveated above) but\n>>> not as great for comparing two files for equality. With strong hashes\n>>> you can confidently compare local files against the path, size, and hash\n>>> stored in the manifest and save yourself a round-trip to the remote\n>>> storage to grab the file if it has not changed locally.\n>>\n>> I agree in part. I think there are two reasons why a cryptographically\n>> strong hash is desirable for delta restore. First, since the checksums\n>> are longer, the probability of a false match happening randomly is\n>> lower, which is important. Even if the above analysis is correct and\n>> the chance of a false match is just 2^-32 with a 32-bit CRC, if you\n>> back up ten million files every day, you'll likely get a false match\n>> within a few years or less, and once is too often. Second, unlike what\n>> I supposed above, the contents of a PostgreSQL data file are not\n>> chosen at random, unlike transmission errors, which probably are more\n>> or less random. It seems somewhat possible that there is an adversary\n>> who is trying to choose the data that gets stored in some particular\n>> record so as to create a false checksum match. A CRC is a lot easier\n>> to fool than a crytographic hash, so I think that using a CRC of *any*\n>> length for this kind of use case would be extremely dangerous no\n>> matter the probability of an accidental match.\n> \n> Agreed. See above.\n> \n> However, if you choose a hash, please do not go below SHA-256. 
Both MD5\n> and SHA-1 already had collision attacks, and these only got to be bound\n> to be worse.\n\nI don't think collision attacks are a big consideration in the general\ncase. The manifest is generally stored with the backup files so if a\nfile is modified it is then trivial to modify the manifest as well.\n\nOf course, you could store the manifest separately or even just know the\nhash of the manifest and store that separately. In that case SHA-256\nmight be useful and it would be good to have the option, which I believe\nis the plan.\n\nI do wonder if you could construct a successful collision attack (even\nin MD5) that would also result in a valid relation file. Probably, at\nleast eventually.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 22 Nov 2019 17:30:18 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Moin,\n\nOn 2019-11-22 23:30, David Steele wrote:\n> On 11/22/19 5:15 PM, Tels wrote:\n>> On 2019-11-22 20:01, Robert Haas wrote:\n>>> On Fri, Nov 22, 2019 at 1:10 PM David Steele <david@pgmasters.net> \n>>> wrote:\n>> \n>>>> > Phrased more positively, if you want a cryptographic hash\n>>>> > at all, you should probably use one that isn't widely viewed as too\n>>>> > weak.\n>>>> \n>>>> Sure. There's another advantage to picking an algorithm with lower\n>>>> collision rates, though.\n>>>> \n>>>> CRCs are fine for catching transmission errors (as caveated above) \n>>>> but\n>>>> not as great for comparing two files for equality. With strong \n>>>> hashes\n>>>> you can confidently compare local files against the path, size, and \n>>>> hash\n>>>> stored in the manifest and save yourself a round-trip to the remote\n>>>> storage to grab the file if it has not changed locally.\n>>> \n>>> I agree in part. I think there are two reasons why a \n>>> cryptographically\n>>> strong hash is desirable for delta restore. First, since the \n>>> checksums\n>>> are longer, the probability of a false match happening randomly is\n>>> lower, which is important. Even if the above analysis is correct and\n>>> the chance of a false match is just 2^-32 with a 32-bit CRC, if you\n>>> back up ten million files every day, you'll likely get a false match\n>>> within a few years or less, and once is too often. Second, unlike \n>>> what\n>>> I supposed above, the contents of a PostgreSQL data file are not\n>>> chosen at random, unlike transmission errors, which probably are more\n>>> or less random. It seems somewhat possible that there is an adversary\n>>> who is trying to choose the data that gets stored in some particular\n>>> record so as to create a false checksum match. 
A CRC is a lot easier\n>>> to fool than a crytographic hash, so I think that using a CRC of \n>>> *any*\n>>> length for this kind of use case would be extremely dangerous no\n>>> matter the probability of an accidental match.\n>> \n>> Agreed. See above.\n>> \n>> However, if you choose a hash, please do not go below SHA-256. Both \n>> MD5\n>> and SHA-1 already had collision attacks, and these only got to be \n>> bound\n>> to be worse.\n> \n> I don't think collision attacks are a big consideration in the general\n> case. The manifest is generally stored with the backup files so if a\n> file is modified it is then trivial to modify the manifest as well.\n\nThat is true. However, a simple way around this is to sign the manifest\nwith a public key (GPG or similar). And if the manifest contains\nstrong, hard-to-forge hashes, we get a much more secure backup, where\n(almost) nobody else can alter the manifest, nor can he mount easy\ncollision attacks against the single files.\n\nWithout the strong hashes it would be pointless to sign the manifest.\n\n> Of course, you could store the manifest separately or even just know \n> the\n> hash of the manifest and store that separately. In that case SHA-256\n> might be useful and it would be good to have the option, which I \n> believe\n> is the plan.\n> \n> I do wonder if you could construct a successful collision attack (even\n> in MD5) that would also result in a valid relation file. Probably, at\n> least eventually.\n\nWith MD5, certainly. One way is to have two different blocks of 512 bits\nthat hash to the same MD5. It is trivial to re-use one already existing\nfrom the known examples.\n\nHere is one, where the researchers constructed 12 PDFs that all\nhave the same MD5 hash:\n\n https://www.win.tue.nl/hashclash/Nostradamus/\n\nIf you insert one of these blocks into a relation and dump it, you could\nswap it (probably?) out on disk for the other block. I'm not sure this\nis of practical usage as an attack, tho. 
It would, however, cast doubt\non the integrity of the backup and prove that MD5 is useless.\n\nOTOH, finding a full collision with MD5 should also be in reach with\ntoday's hardware. It is hard to find exact numbers but this:\n\n https://www.win.tue.nl/hashclash/SingleBlock/\n\ngives the following numbers for 2008/2009:\n\n \"Finding the birthday bits took 47 hours (expected was 3 days) on the\n cluster of 215 Playstation 3 game consoles at LACAL, EPFL. This is\n roughly equivalent to 400,000 hours on a single PC core. The single\n near-collision block construction took 18 hours and 20 minutes on a\n single PC core.\"\n\nToday one can probably compute it on a single GPU in mere hours. And you\ncan rent massive amounts of them in the cloud for real cheap.\n\nHere are a few, now a bit dated, references:\n\n https://blog.codinghorror.com/speed-hashing/\n http://codahale.com/how-to-safely-store-a-password/\n\nBest regards,\n\nTels\n\n\n",
"msg_date": "Sat, 23 Nov 2019 09:13:21 +0100",
"msg_from": "Tels <nospam-pg-abuse@bloodgate.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "\nOn 11/23/19 3:13 AM, Tels wrote:\n>\n> Without the strong hashes it would be pointless to sign the manifest.\n>\n>\n\nI guess I must have missed where we are planning to add a cryptographic\nsignature.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sat, 23 Nov 2019 16:34:05 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 11/23/19 4:34 PM, Andrew Dunstan wrote:\n> \n> On 11/23/19 3:13 AM, Tels wrote:\n>>\n>> Without the strong hashes it would be pointless to sign the manifest.\n>>\n> \n> I guess I must have missed where we are planning to add a cryptographic\n> signature.\n\nI don't think we were planning to, but the user could do so if they wished.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Sun, 24 Nov 2019 09:38:09 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi Jeevan,\n\nI have incorporated all the comments in the attached patch. Please review\nand let me know your thoughts.\n\nOn Thu, Nov 21, 2019 at 2:51 PM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n>\n>\n> On Wed, Nov 20, 2019 at 11:05 AM Suraj Kharage <\n> suraj.kharage@enterprisedb.com> wrote:\n>\n>> Hi,\n>>\n>> Since now we are generating the backup manifest file with each backup, it\n>> provides us an option to validate the given backup.\n>> Let's say, we have taken a backup and after a few days, we want to check\n>> whether that backup is validated or corruption-free without restarting the\n>> server.\n>>\n>> Please find attached POC patch for same which will be based on the latest\n>> backup manifest patch from Rushabh. With this functionality, we add new\n>> option to pg_basebackup, something like --verify-backup.\n>> So, the syntax would be:\n>> ./bin/pg_basebackup --verify-backup -D <backup_directory_path>\n>>\n>> Basically, we read the backup_manifest file line by line from the given\n>> directory path and build the hash table, then scan the directory and\n>> compare each file with the hash entry.\n>>\n>> Thoughts/suggestions?\n>>\n>\n>\n> I like the idea of verifying the backup once we have backup_manifest with\n> us.\n> Periodically verifying the already taken backup with this simple tool\n> becomes\n> easy now.\n>\n> I have reviewed this patch and here are my comments:\n>\n> 1.\n> @@ -30,7 +30,9 @@\n> #include \"common/file_perm.h\"\n> #include \"common/file_utils.h\"\n> #include \"common/logging.h\"\n> +#include \"common/sha2.h\"\n> #include \"common/string.h\"\n> +#include \"fe_utils/simple_list.h\"\n> #include \"fe_utils/recovery_gen.h\"\n> #include \"fe_utils/string_utils.h\"\n> #include \"getopt_long.h\"\n> @@ -38,12 +40,19 @@\n> #include \"pgtar.h\"\n> #include \"pgtime.h\"\n> #include \"pqexpbuffer.h\"\n> +#include \"pgrhash.h\"\n> #include \"receivelog.h\"\n> #include \"replication/basebackup.h\"\n> #include 
\"streamutil.h\"\n>\n> Please add new files in order.\n>\n> 2.\n> Can hash related file names be renamed to backuphash.c and backuphash.h?\n>\n> 3.\n> Need indentation adjustments at various places.\n>\n> 4.\n> + char buf[1000000]; // 1MB chunk\n>\n> It will be good if we have multiple of block /page size (or at-least power\n> of 2\n> number).\n>\n> 5.\n> +typedef struct pgrhash_entry\n> +{\n> + struct pgrhash_entry *next; /* link to next entry in same bucket */\n> + DataDirectoryFileInfo *record;\n> +} pgrhash_entry;\n> +\n> +struct pgrhash\n> +{\n> + unsigned nbuckets; /* number of buckets */\n> + pgrhash_entry **bucket; /* pointer to hash entries */\n> +};\n> +\n> +typedef struct pgrhash pgrhash;\n>\n> These two can be moved to .h file instead of redefining over there.\n>\n> 6.\n> +/*\n> + * TODO: this function is not necessary, can be removed.\n> + * Test whether the given row number is match for the supplied keys.\n> + */\n> +static bool\n> +pgrhash_compare(char *bt_filename, char *filename)\n>\n> Yeah, it can be removed by doing strcmp() at the required places rather\n> than\n> doing it in a separate function.\n>\n> 7.\n> mdate is not compared anywhere. I understand that it can't be compared with\n> the file in the backup directory and its entry in the manifest as manifest\n> entry gives mtime from server file whereas the same file in the backup will\n> have different mtime. But adding a few comments there will be good.\n>\n> 8.\n> + char mdate[24];\n>\n> should be mtime instead?\n>\n>\n> Thanks\n>\n> --\n> Jeevan Chalke\n> Associate Database Architect & Team Lead, Product Development\n> EnterpriseDB Corporation\n> The Enterprise PostgreSQL Company\n>\n>\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Mon, 25 Nov 2019 15:05:22 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 2019-11-24 15:38, David Steele wrote:\n> On 11/23/19 4:34 PM, Andrew Dunstan wrote:\n>> \n>> On 11/23/19 3:13 AM, Tels wrote:\n>>> \n>>> Without the strong hashes it would be pointless to sign the manifest.\n>>> \n>> \n>> I guess I must have missed where we are planning to add a \n>> cryptographic\n>> signature.\n> \n> I don't think we were planning to, but the user could do so if they \n> wished.\n\nThat was what I meant.\n\nBest regards,\n\nTels\n\n\n",
"msg_date": "Mon, 25 Nov 2019 17:24:48 +0100",
"msg_from": "Tels <nospam-pg-abuse@bloodgate.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 2:29 PM David Steele <david@pgmasters.net> wrote:\n> See:\n> https://www.nist.gov/system/files/documents/2017/04/26/lrdc_systems_part2_032713.pdf\n> Search for \"The maximum block size\"\n\nHmm, so it says: \"The maximum block size that can be protected by a\n32-bit CRC is 512MB.\" My problem is that (1) it doesn't back this up\nwith a citation or any kind of logical explanation and (2) it's not\nvery clear what \"protected\" means. Tels replies downthread to explain\nthat the internal state of the 32-bit CRC calculation is also limited\nto 32 bits, and changes once per bit, so that after processing 512MB =\n2^29 bytes = 2^32 bits of data, you're guaranteed to start repeating\ninternal states. Perhaps this is also what the NIST folks had in mind,\nthough it's hard to know.\n\nThis link provides some more details:\n\nhttps://community.arm.com/developer/tools-software/tools/f/keil-forum/17467/crc-for-256-byte-data\n\nNot everyone on the thread agrees with everybody else, but it seems\nlike there are size limits below which a CRC-n is guaranteed to detect\nall 1-bit and 2-bit errors, and above which this is no longer\nguaranteed. They put the limit *lower* than what NIST supposes, namely\n2^(n-1)-1 bits, which would be 256MB, not 512MB, if I'm doing math\ncorrectly. However, they also say that above that value, you are still\nlikely to detect most errors. Absent an intelligent adversary, the\nchance of a random collision when corruption is present is still about\n1 in 4 billion (2^-32).\n\nTo me, guaranteed detection of 1-bit and 2-bit errors (and the other\nkinds of specific things CRC is designed to catch) doesn't seem like a\nprinciple design consideration. 
It's nice if we can get it and I'm not\nagainst it, but these are algorithms that are designed to be used when\ndata undergoes a digital-to-analog-to-digital conversion, where for\nexample it's possible that the conversion back to digital loses\nsync and reads 9 bits or 7 bits rather than 8 bits. And that's not\nreally what we're doing here: we all know that bits get flipped\nsometimes, but nobody uses scp to copy a 1GB file and ends up with a\nfile that is 1GB +/- a few bits. Some lower-level part of the\ncommunication stack is handling that part of the work; you're going to\nget exactly 1GB. So it seems to me that here, as with XLOG, we're not\nrelying on the specific CRC properties that were intended to be used\nto catch and in some cases repair bit flips caused by wrinkles in an\nA-to-D conversion, but just on its general tendency to probably not\nmatch if any bits got flipped. And those properties hold regardless of\ninput length.\n\nThat being said, having done some reading on this, I am a little\nconcerned that we're getting further and further from the design\ncenter of the CRC algorithm. Like relation segment files, XLOG records\nare not packets subject to bit insertions, but at least they're small,\nand relation files are not. Using a 40-year-old algorithm that was\nintended to be used for things like making sure the modem hadn't lost\nframing in the last second to verify 1GB files feels, in some nebulous\nway, like we might be stretching. That being said, I'm not sure what\nwe think the reasonable alternatives are. Users aren't going to be\nbetter off if we say that, because CRC-32C might not do a great job\ndetecting errors, we're not going to check for errors at all. 
If we go\nthe other way and say we're going to use some variant of SHA, they\nwill be better off, but at the price of what looks like a\n*significant* hit in terms of backup time.\n\n> > \"Typically an n-bit CRC applied to a data block of arbitrary length\n> > will detect any single error burst not longer than n bits, and the\n> > fraction of all longer error bursts that it will detect is (1 −\n> > 2^−n).\"\n>\n> I'm not sure how encouraging I find this -- a four-byte error not a lot\n> and 2^32 is only 4 billion. We have individual users who have backed up\n> more than 4 billion files over the last few years.\n\nI agree that people have a lot more than 4 billion files backed up,\nbut I'm not sure it matters very much given the use case I'm trying to\nenable. There's a lot of difference between delta restore and backup\nintegrity checking. For backup integrity checking, my goal is that, on\nthose occasions when a file gets corrupted, we have a good chance of\nnoticing that it has been corrupted. For that purpose, a 32-bit checksum is\nprobably sufficient. If a file gets corrupted, we have about a\n1-in-4-billion chance of being unable to detect it. If 4 billion files\nget corrupted, we'll miss, on average, one of those corruption events.\nThat's sad, but so is the fact that you had *4 billion corrupted\nfiles*. This is not the total number of files backed up; this is the\nnumber of those that got corrupted. I don't really know how common it\nis to copy a file and end up with a corrupt copy, but if you say it's\none-in-a-million, which I suspect is far too high, then you'd have to\nback up something like 4 quadrillion files before you missed a\ncorruption event, and that's a *very* big number.\n\nNow delta restore is a whole different kettle of fish. The birthday\nproblem is huge here. 
If you've got a 32-bit checksum for file A, and\nyou go and look it up in a database of checksums, and that database\nhas even 1 billion things in it, you've got a pretty decent shot of\nlatching onto a file that is not actually the same as file A. The\nproblem goes away almost entirely if you only compare against previous\nversions of that file from that database cluster. You've probably only\ngot tens or maybe at the very outside hundreds or thousands of backups\nof that particular file, and a collision is unlikely even with only a\n32-bit checksum -- though even there maybe you'd like to use something\nlarger just to be on the safe side. But if you're going to compare to\nother files from the same cluster, or even worse any file from any\ncluster, 32 bits is *woefully* inadequate. TBH even using SHA for such\nuse cases feels a little scary to me. It's probably good enough --\n2^160 for SHA-1 is a *lot* bigger than 2^32, and 2^512 for SHA-512 is\nenormous. But I'd want to spend time thinking very carefully about the\nmath before designing such a system.\n\n> OK, I'll buy that. But I *don't* think CRCs should be allowed for\n> deltas (when we have them) and I *do* think we should caveat their\n> effectiveness (assuming we can agree on them).\n\nSounds good.\n\n> In general the answer to faster backups should be more cores/faster\n> network/faster disk, not compromising backup integrity. I understand\n> we'll need to wait until we have parallelism in pg_basebackup to justify\n> that answer.\n\nI would like to dispute that characterization of what we're talking\nabout here. 
If we added a 1-bit checksum (parity bit) it would be\n*strictly better* than what we're doing right now, which is nothing.\nThat's not a serious proposal because it's obvious we can do a lot\nbetter for trivial additional cost, but deciding that we're going to\nuse a weaker kind of checksum to avoid adding too much overhead is not\nwimping out, because it's still going to be strong enough to catch the\noverwhelming majority of problems that go undetected today. Even an\n*8-bit* checksum would give us a >99% chance of catching a corrupted\nfile, which would be noticeably better than the 0% chance we have\ntoday. Even a manifest with no checksums at all that just checked the\npresence and size of files would catch tons of operator error, e.g.\n\n- wait, that database had tablespaces?\n- were those logs in pg_clog anything important?\n- oh, i wasn't supposed to start postgres on the copy of the database\nstored in the backup directory?\n\nSo I don't think we're talking about whether to compromise backup\nintegrity. I think we're talking about - if we're going to make backup\nintegrity better than it is today, how much better should we try to\nmake it, and what are the trade-offs there? The straw man here is that\nwe could make the database infinitely secure if we put it in a\nconcrete bunker and sunk it to the bottom of the ocean, with the small\nprice that we'd no longer be able to access it either. Somewhere\nbetween that extreme and the other extreme of setting the\nauthentication method to 0.0.0.0/0 trust there's a happy medium where\nsecurity is tolerably good but ease of access isn't crippled, and the\nsame thing applies here. We could (probably) be the first database on\nthe planet to store a 1024-bit encrypted checksum of every 8kB block,\nbut that seems like it's going too far in the \"concrete bunker\"\ndirection. 
IMHO, at least, we should be aiming for something that has\na high probability of catching real problems and a low probability of\nbeing super-annoying.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 25 Nov 2019 12:11:53 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
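The two regimes Robert distinguishes — a per-file false-match chance of 2^-32 for integrity checking versus birthday-style collisions for delta restore — can be put into numbers with the standard birthday approximation. A sketch (the billion-file lookup figure is the one from the mail):

```python
import math

def p_false_match(bits):
    """Chance that one corrupted file still matches its stored checksum."""
    return 2.0 ** -bits

def p_any_collision(n, bits):
    """Birthday-bound approximation: chance that any two of n random
    checksums of the given width collide; expm1 keeps tiny values exact."""
    return -math.expm1(-n * (n - 1) / 2.0 ** (bits + 1))

# Integrity checking: roughly one miss per ~4 billion *corrupted* files.
print(1 / p_false_match(32))

# Delta restore against a billion stored checksums: 32 bits is hopeless
# (probability ~1), while a 160-bit digest is still comfortable (~3e-31).
print(p_any_collision(10**9, 32))
print(p_any_collision(10**9, 160))
```

This matches the argument above: the 32-bit checksum is adequate for "did this file get corrupted?" but collapses the moment it is used as a lookup key across a large population of files.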
{
"msg_contents": "On Fri, Nov 22, 2019 at 2:02 PM David Steele <david@pgmasters.net> wrote:\n> > Except - and this gets back to the previous point - I don't want to\n> > slow down backups by 40% by default. I wouldn't mind slowing them down\n> > 3% by default, but 40% is too much overhead. I think we've gotta\n> > either the overhead of using SHA way down or not use SHA by default.\n>\n> Maybe -- my take is that the measurements, an uncompressed backup to the\n> local filesystem, are not a very realistic use case.\n\nWell, compression is a feature we don't have yet, in core. So for\npeople who are only using core tools, an uncompressed backup is a very\nrealistic use case, because it's the only kind they can get. Granted\nthe situation is different if you are using pgbackrest.\n\nI don't have enough experience to know how often people back up to\nlocal filesystems vs. remote filesystems mounted locally vs. overtly\nover-the-network. I sometimes get the impression that users choose\ntheir backup tools and procedures with, as Tom would say, the aid of a\ndart board, but that's probably the cynic in me talking. Or maybe a\nreflection of the fact that I usually end up talking to the users for\nwhom things have gone really, really badly wrong, rather than the ones\nfor whom things went as planned.\n\n> However, I'm still fine with leaving the user the option of checksums or\n> no. I just wanted to point out that CRCs have their limits so maybe\n> that's not a great option unless it is properly caveated and perhaps not\n> the default.\n\nI think the default is the sticking point here. To me, it looks like\nCRC is a better default than nothing at all because it should still\ncatch a high percentage of issues that would otherwise be missed, and\na better default than SHA because it's so cheap to compute. However,\nI'm certainly willing to consider other theories.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 25 Nov 2019 12:21:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 5:15 PM Tels <nospam-pg-abuse@bloodgate.com> wrote:\n> It is related to the number of states...\n\nThanks for this explanation. See my reply to David where I also\ndiscuss this point.\n\n> However, if you choose a hash, please do not go below SHA-256. Both MD5\n> and SHA-1 already had collision attacks, and these only got to be bound\n> to be worse.\n>\n> https://www.mscs.dal.ca/~selinger/md5collision/\n> https://shattered.io/\n\nYikes, that second link, about SHA-1, is depressing. Now, it's not\nlikely that an attacker has access to your backup repository and can\nspend 6500 years of CPU time to engineer a Trojan file there (maybe\nmore, because the files are probably bigger than the PDFs they used in\nthat case) and then induce you to restore and rely upon that backup.\nHowever, it's entirely likely that somebody is going to eventually ban\nSHA-1 as the attacks get better, which is going to be a problem for us\nwhether the underlying exposures are problems or not.\n\n> It might even be a wise idea to encode the used Hash-Algorithm into the\n> manifest file, so it can be changed later. The hash length might be not\n> enough to decide which algorithm is the one used.\n\nI agree. Let's write\nSHA256:bc1c3a57369acd0d2183a927fb2e07acbbb1c97f317bbc3b39d93ec65b754af5\nor similar rather than just the hash. That way even if the entire SHA\nfamily gets cracked, we can easily substitute in something else that\nhasn't been cracked yet.\n\n(It is unclear to me why anyone supposes that *any* popular hash\nfunction won't eventually be cracked. For a K-bit hash function, there\nare 2^K possible outputs, where K is probably in the hundreds. But\nthere are 2^{2^33} possible 1GB files. So for every possible output\nvalue, there are 2^{2^33-K} inputs that produce that value, which is a\nvery very big number. 
The probability that any given input produces a\ncertain output is very low, but the number of possible inputs that\nproduce a given output is very high; so assuming that nobody's ever\ngoing to figure out how to construct them seems optimistic.)\n\n> To get a feeling one can use:\n>\n> openssl speed md5 sha1 sha256 sha512\n>\n> On my really-not-fast desktop CPU (i5-4690T CPU @ 2.50GHz) it says:\n>\n> The 'numbers' are in 1000s of bytes per second processed.\n> type 16 bytes 64 bytes 256 bytes 1024 bytes 8192\n> bytes 16384 bytes\n> md5 122638.55k 277023.96k 487725.57k 630806.19k\n> 683892.74k 688553.98k\n> sha1 127226.45k 313891.52k 632510.55k 865753.43k\n> 960995.33k 977215.19k\n> sha256 77611.02k 173368.15k 325460.99k 412633.43k\n> 447022.92k 448020.48k\n> sha512 51164.77k 205189.87k 361345.79k 543883.26k\n> 638372.52k 645933.74k\n>\n> Or in other words, it can hash nearly 931 MByte /s with SHA-1 and about\n> 427 MByte / s with SHA-256 (if I haven't miscalculated something). You'd\n> need a\n> pretty fast disk (aka M.2 SSD) and network (aka > 1 Gbit) to top these\n> speeds\n> and then you'd use a real CPU for your server, not some poor Intel\n> powersaving\n> surfing thingy-majingy :)\n\nI mean, how fast is in theory doesn't matter nearly as much as what\nhappens when you benchmark the proposed implementation, and the\nresults we have so far don't support the theory that this is so cheap\nas to be negligible.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 25 Nov 2019 12:43:18 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "As per the discussion on the thread, here is the patch which\n\na) Make checksum for manifest file optional.\nb) Allow user to choose a particular algorithm.\n\nCurrently with the WIP patch SHA256 and CRC checksum algorithm\nsupported. Patch also changed the manifest file format to append\nthe used algorithm name before the checksum, this way it will be\neasy to validator to know which algorithm to used.\n\nEx:\n./db/bin/pg_basebackup -D bksha/ --manifest-with-checksums=SHA256\n\n$ cat bksha/backup_manifest | more\nPostgreSQL-Backup-Manifest-Version 1\nFile backup_label 226 2019-12-04 17:46:46 GMT\nSHA256:7cf53d1b9facca908678ab70d93a9e7460cd35cedf7891de948dcf858f8a281a\nFile pg_xact/0000 8192 2019-12-04 17:46:46 GMT\nSHA256:8d2b6cb1dc1a6e8cee763b52d75e73571fddce06eb573861d44082c7d8c03c26\n\n./db/bin/pg_basebackup -D bkcrc/ --manifest-with-checksums=CRC\nPostgreSQL-Backup-Manifest-Version 1\nFile backup_label 226 2019-12-04 17:58:40 GMT CRC:343138313931333134\nFile pg_xact/0000 8192 2019-12-04 17:46:46 GMT CRC:363538343433333133\n\nPending TODOs:\n- Documentation update\n- Code cleanup\n- Testing.\n\nI will further continue to work on the patch and meanwhile feel free to\nprovide\nthoughts/inputs.\n\nThanks,\n\n\nOn Mon, Nov 25, 2019 at 11:13 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Nov 22, 2019 at 5:15 PM Tels <nospam-pg-abuse@bloodgate.com>\n> wrote:\n> > It is related to the number of states...\n>\n> Thanks for this explanation. See my reply to David where I also\n> discuss this point.\n>\n> > However, if you choose a hash, please do not go below SHA-256. Both MD5\n> > and SHA-1 already had collision attacks, and these only got to be bound\n> > to be worse.\n> >\n> > https://www.mscs.dal.ca/~selinger/md5collision/\n> > https://shattered.io/\n>\n> Yikes, that second link, about SHA-1, is depressing. 
Now, it's not\n> likely that an attacker has access to your backup repository and can\n> spend 6500 years of CPU time to engineer a Trojan file there (maybe\n> more, because the files are probably bigger than the PDFs they used in\n> that case) and then induce you to restore and rely upon that backup.\n> However, it's entirely likely that somebody is going to eventually ban\n> SHA-1 as the attacks get better, which is going to be a problem for us\n> whether the underlying exposures are problems or not.\n>\n> > It might even be a wise idea to encode the used Hash-Algorithm into the\n> > manifest file, so it can be changed later. The hash length might be not\n> > enough to decide which algorithm is the one used.\n>\n> I agree. Let's write\n> SHA256:bc1c3a57369acd0d2183a927fb2e07acbbb1c97f317bbc3b39d93ec65b754af5\n> or similar rather than just the hash. That way even if the entire SHA\n> family gets cracked, we can easily substitute in something else that\n> hasn't been cracked yet.\n>\n> (It is unclear to me why anyone supposes that *any* popular hash\n> function won't eventually be cracked. For a K-bit hash function, there\n> are 2^K possible outputs, where K is probably in the hundreds. But\n> there are 2^{2^33} possible 1GB files. So for every possible output\n> value, there are 2^{2^33-K} inputs that produce that value, which is a\n> very very big number. 
The probability that any given input produces a\n> certain output is very low, but the number of possible inputs that\n> produce a given output is very high; so assuming that nobody's ever\n> going to figure out how to construct them seems optimistic.)\n>\n> > To get a feeling one can use:\n> >\n> > openssl speed md5 sha1 sha256 sha512\n> >\n> > On my really-not-fast desktop CPU (i5-4690T CPU @ 2.50GHz) it says:\n> >\n> > The 'numbers' are in 1000s of bytes per second processed.\n> > type 16 bytes 64 bytes 256 bytes 1024 bytes 8192\n> > bytes 16384 bytes\n> > md5 122638.55k 277023.96k 487725.57k 630806.19k\n> > 683892.74k 688553.98k\n> > sha1 127226.45k 313891.52k 632510.55k 865753.43k\n> > 960995.33k 977215.19k\n> > sha256 77611.02k 173368.15k 325460.99k 412633.43k\n> > 447022.92k 448020.48k\n> > sha512 51164.77k 205189.87k 361345.79k 543883.26k\n> > 638372.52k 645933.74k\n> >\n> > Or in other words, it can hash nearly 931 MByte /s with SHA-1 and about\n> > 427 MByte / s with SHA-256 (if I haven't miscalculated something). You'd\n> > need a\n> > pretty fast disk (aka M.2 SSD) and network (aka > 1 Gbit) to top these\n> > speeds\n> > and then you'd use a real CPU for your server, not some poor Intel\n> > powersaving\n> > surfing thingy-majingy :)\n>\n> I mean, how fast is in theory doesn't matter nearly as much as what\n> happens when you benchmark the proposed implementation, and the\n> results we have so far don't support the theory that this is so cheap\n> as to be negligible.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \nRushabh Lathia",
"msg_date": "Wed, 4 Dec 2019 23:31:37 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Dec 4, 2019 at 1:01 PM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n> As per the discussion on the thread, here is the patch which\n>\n> a) Make checksum for manifest file optional.\n> b) Allow user to choose a particular algorithm.\n>\n> Currently with the WIP patch SHA256 and CRC checksum algorithm\n> supported. Patch also changed the manifest file format to append\n> the used algorithm name before the checksum, this way it will be\n> easy to validator to know which algorithm to used.\n>\n> Ex:\n> ./db/bin/pg_basebackup -D bksha/ --manifest-with-checksums=SHA256\n>\n> $ cat bksha/backup_manifest | more\n> PostgreSQL-Backup-Manifest-Version 1\n> File backup_label 226 2019-12-04 17:46:46 GMT SHA256:7cf53d1b9facca908678ab70d93a9e7460cd35cedf7891de948dcf858f8a281a\n> File pg_xact/0000 8192 2019-12-04 17:46:46 GMT SHA256:8d2b6cb1dc1a6e8cee763b52d75e73571fddce06eb573861d44082c7d8c03c26\n>\n> ./db/bin/pg_basebackup -D bkcrc/ --manifest-with-checksums=CRC\n> PostgreSQL-Backup-Manifest-Version 1\n> File backup_label 226 2019-12-04 17:58:40 GMT CRC:343138313931333134\n> File pg_xact/0000 8192 2019-12-04 17:46:46 GMT CRC:363538343433333133\n>\n> Pending TODOs:\n> - Documentation update\n> - Code cleanup\n> - Testing.\n>\n> I will further continue to work on the patch and meanwhile feel free to provide\n> thoughts/inputs.\n\n+ initilize_manifest_checksum(&cCtx);\n\nSpelling.\n\n-\n\nSpurious.\n\n+ case MC_CRC:\n+ INIT_CRC32C(cCtx->crc_ctx);\n\nSuggest that we do CRC -> CRC32C throughout the patch. Someone might\nconceivably want some other CRC variant, mostly likely 64-bit, in the\nfuture.\n\n+final_manifest_checksum(ChecksumCtx *cCtx, char *checksumbuf)\n\nfinalize\n\n printf(_(\" --manifest-with-checksums\\n\"\n- \" do calculate checksums for manifest files\\n\"));\n+ \" calculate checksums for manifest files\nusing provided algorithm\\n\"));\n\nSwitch name is wrong. 
Suggest --manifest-checksums.\nHelp usually shows that an argument is expected, e.g.\n--manifest-checksums=ALGORITHM or\n--manifest-checksums=sha256|crc32c|none\n\nThis seems to apply over some earlier version of the patch. A\nconsolidated patch, or the whole stack, would be better.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 4 Dec 2019 13:47:07 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Dec 5, 2019 at 12:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Dec 4, 2019 at 1:01 PM Rushabh Lathia <rushabh.lathia@gmail.com>\n> wrote:\n> > As per the discussion on the thread, here is the patch which\n> >\n> > a) Make checksum for manifest file optional.\n> > b) Allow user to choose a particular algorithm.\n> >\n> > Currently with the WIP patch SHA256 and CRC checksum algorithm\n> > supported. Patch also changed the manifest file format to append\n> > the used algorithm name before the checksum, this way it will be\n> > easy to validator to know which algorithm to used.\n> >\n> > Ex:\n> > ./db/bin/pg_basebackup -D bksha/ --manifest-with-checksums=SHA256\n> >\n> > $ cat bksha/backup_manifest | more\n> > PostgreSQL-Backup-Manifest-Version 1\n> > File backup_label 226 2019-12-04 17:46:46 GMT\n> SHA256:7cf53d1b9facca908678ab70d93a9e7460cd35cedf7891de948dcf858f8a281a\n> > File pg_xact/0000 8192 2019-12-04 17:46:46 GMT\n> SHA256:8d2b6cb1dc1a6e8cee763b52d75e73571fddce06eb573861d44082c7d8c03c26\n> >\n> > ./db/bin/pg_basebackup -D bkcrc/ --manifest-with-checksums=CRC\n> > PostgreSQL-Backup-Manifest-Version 1\n> > File backup_label 226 2019-12-04 17:58:40 GMT CRC:343138313931333134\n> > File pg_xact/0000 8192 2019-12-04 17:46:46 GMT CRC:363538343433333133\n> >\n> > Pending TODOs:\n> > - Documentation update\n> > - Code cleanup\n> > - Testing.\n> >\n> > I will further continue to work on the patch and meanwhile feel free to\n> provide\n> > thoughts/inputs.\n>\n> + initilize_manifest_checksum(&cCtx);\n>\n> Spelling.\n>\n>\nFixed.\n\n-\n>\n> Spurious.\n>\n> + case MC_CRC:\n> + INIT_CRC32C(cCtx->crc_ctx);\n>\n> Suggest that we do CRC -> CRC32C throughout the patch. 
Someone might\n> conceivably want some other CRC variant, mostly likely 64-bit, in the\n> future.\n>\n>\nMake sense, done.\n\n+final_manifest_checksum(ChecksumCtx *cCtx, char *checksumbuf)\n>\n> finalize\n>\n>\nDone.\n\n printf(_(\" --manifest-with-checksums\\n\"\n> - \" do calculate checksums for manifest files\\n\"));\n> + \" calculate checksums for manifest files\n> using provided algorithm\\n\"));\n>\n> Switch name is wrong. Suggest --manifest-checksums.\n> Help usually shows that an argument is expected, e.g.\n> --manifest-checksums=ALGORITHM or\n> --manifest-checksums=sha256|crc32c|none\n>\n>\nFixed.\n\nThis seems to apply over some earlier version of the patch. A\n> consolidated patch, or the whole stack, would be better.\n>\n\nHere is the whole stack of patches.\n\n\nThanks,\nRushabh Lathia\nwww.EnterpriseDB.com",
"msg_date": "Thu, 5 Dec 2019 21:52:21 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Dec 5, 2019 at 11:22 AM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n> Here is the whole stack of patches.\n\nPlease include proper attribution and, where somebody's written them,\ncommit messages in each patch in the stack. For example, I see that\nyour 0001 is mostly the same as my 0001 from upthread, but now it\nsays:\n\n From a3e075d5edb5031ea358e049f8cb07031fc480a3 Mon Sep 17 00:00:00 2001\nFrom: Rushabh Lathia <rushabh.lathia@enterprisedb.com>\nDate: Wed, 13 Nov 2019 15:19:22 +0530\nSubject: [PATCH 1/5] Reduce code duplication and eliminate weird macro tricks.\n\n...with no indication of who the original author was.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Dec 2019 13:46:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Dec 5, 2019 at 11:22 AM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n> Here is the whole stack of patches.\n\nI committed 0001, as that's just refactoring and I think (hope) it's\nuncontroversial. I think 0002-0005 need to be squashed together\n(crediting all authors properly and in the appropriate order) as it's\nquite hard to understand right now, and that Suraj's patch to validate\nthe backup should be included in the patch stack. It needs\ndocumentation. Also, we need, either in that patch or a separate, TAP\ntests that exercise this feature. Things we should try to check:\n\n- Plain format backups can be verified against the manifest.\n- Tar format backups can be verified against the manifest after\nuntarring (this might be a problem; not sure there's any guarantee\nthat we have a working \"tar\" command available).\n- Verification succeeds for all available checksums algorithms and\nalso for no checksum algorithm (should still check which files are\npresent, and sizes).\n- If we tamper with a backup by removing a file, adding a file, or\nchanging the size of a file, the modification is detected even without\nchecksums.\n- If we tamper with a backup by changing the contents of a file but\nnot the size, the modification is detected if checksums are used.\n- Everything above still works if there is user-defined tablespace\nthat contains a table.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Dec 2019 15:14:34 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Dec 6, 2019 at 1:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Dec 5, 2019 at 11:22 AM Rushabh Lathia <rushabh.lathia@gmail.com>\n> wrote:\n> > Here is the whole stack of patches.\n>\n> I committed 0001, as that's just refactoring and I think (hope) it's\n> uncontroversial. I think 0002-0005 need to be squashed together\n> (crediting all authors properly and in the appropriate order) as it's\n> quite hard to understand right now,\n\n\nPlease find attached single patch and I tried to add the credit to all\nthe authors.\n\nThere is one review comment from Jeevan Chalke, which still pending\nto address is:\n\n4.\n> Why we need a \"File\" at the start of each entry as we are adding files\n> only?\n> I wonder if we also need to provide a tablespace name and directory marker\n> so\n> that we have \"Tablespace\" and \"Dir\" at the start.\n>\n\nSorry, I am not quite sure about this, may be Robert is right person\nto answer this.\n\nand that Suraj's patch to validate\n> the backup should be included in the patch stack. It needs\n> documentation. Also, we need, either in that patch or a separate, TAP\n> tests that exercise this feature. 
Things we should try to check:\n>\n> - Plain format backups can be verified against the manifest.\n> - Tar format backups can be verified against the manifest after\n> untarring (this might be a problem; not sure there's any guarantee\n> that we have a working \"tar\" command available).\n> - Verification succeeds for all available checksums algorithms and\n> also for no checksum algorithm (should still check which files are\n> present, and sizes).\n> - If we tamper with a backup by removing a file, adding a file, or\n> changing the size of a file, the modification is detected even without\n> checksums.\n> - If we tamper with a backup by changing the contents of a file but\n> not the size, the modification is detected if checksums are used.\n> - Everything above still works if there is user-defined tablespace\n> that contains a table.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\nThanks.\nRushabh Lathia\nwww.EnterpriseDB.com",
"msg_date": "Fri, 6 Dec 2019 12:05:19 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Dec 6, 2019 at 1:35 AM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n> There is one review comment from Jeevan Chalke, which still pending\n> to address is:\n>\n>> 4.\n>> Why we need a \"File\" at the start of each entry as we are adding files only?\n>> I wonder if we also need to provide a tablespace name and directory marker so\n>> that we have \"Tablespace\" and \"Dir\" at the start.\n>\n> Sorry, I am not quite sure about this, may be Robert is right person\n> to answer this.\n\nI did it that way for extensibility. Notice that the first and last\nline of the manifest begin with other words, so someone parsing the\nmanifest can identify the line type by looking just at the first word.\nSomeone might in the future find some need to add other kinds of lines\nthat don't exist today.\n\n\"Tablespace\" and \"Dir\" are, in fact, pretty good examples of things\nthat someone might want to add in the future. I don't really see a\nclear need for either one today, although maybe somebody else will,\nbut I think we should leave ourselves room to add such things in the\nfuture.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 6 Dec 2019 08:33:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Dec 6, 2019 at 12:05 PM Rushabh Lathia <rushabh.lathia@gmail.com>\nwrote:\n\n>\n>\n> On Fri, Dec 6, 2019 at 1:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Thu, Dec 5, 2019 at 11:22 AM Rushabh Lathia <rushabh.lathia@gmail.com>\n>> wrote:\n>> > Here is the whole stack of patches.\n>>\n>> I committed 0001, as that's just refactoring and I think (hope) it's\n>> uncontroversial. I think 0002-0005 need to be squashed together\n>> (crediting all authors properly and in the appropriate order) as it's\n>> quite hard to understand right now,\n>\n>\n> Please find attached single patch and I tried to add the credit to all\n> the authors.\n>\n\nI had a look over the patch and here are my few review comments:\n\n1.\n+ if (pg_strcasecmp(manifest_checksum_algo, \"SHA256\") == 0)\n+ manifest_checksums = MC_SHA256;\n+ else if (pg_strcasecmp(manifest_checksum_algo, \"CRC32C\") == 0)\n+ manifest_checksums = MC_CRC32C;\n+ else if (pg_strcasecmp(manifest_checksum_algo, \"NONE\") == 0)\n+ manifest_checksums = MC_NONE;\n+ else\n+ ereport(ERROR,\n\nIs NONE is a valid input? I think the default is \"NONE\" only and thus no\nneed\nof this as an input. It will be better if we simply error out if input is\nneither \"SHA256\" nor \"CRC32C\".\n\nI believe you have done this way as from pg_basebackup you are always\npassing\nMANIFEST_CHECKSUMS '%s' string which will have \"NONE\" if no user input is\ngiven. But I think passing that conditional will be better like we have\nmaxrate_clause for example.\n\nWell, this is what I think, feel free to ignore as I don't see any\ncorrectness\nissue over here.\n\n\n2.\n+ if (manifest_checksums != MC_NONE)\n+ {\n+ checksumbuflen = finalize_manifest_checksum(cCtx, checksumbuf);\n+ switch (manifest_checksums)\n+ {\n+ case MC_NONE:\n+ break;\n+ }\n\nSince switch case is within \"if (manifest_checksums != MC_NONE)\" condition,\nI don't think we need a case for MC_NONE here. 
Rather we can use a default\ncase to error out.\n\n\n3.\n+ if (manifest_checksums != MC_NONE)\n+ {\n+ initialize_manifest_checksum(&cCtx);\n+ update_manifest_checksum(&cCtx, content, len);\n+ }\n\n@@ -1384,6 +1641,9 @@ sendFile(const char *readfilename, const char\n*tarfilename, struct stat *statbuf\n int segmentno = 0;\n char *segmentpath;\n bool verify_checksum = false;\n+ ChecksumCtx cCtx;\n+\n+ initialize_manifest_checksum(&cCtx);\n\n\nI see that in a few cases you are calling\ninitialize/update_manifest_checksum()\nconditional and at some other places call is unconditional. It seems like\ncalling unconditional will not have any issues as switch cases inside them\nreturn doing nothing when manifest_checksums is MC_NONE.\n\n\n4.\ninitialize/update/finalize_manifest_checksum() functions may be needed by\nthe\nvalidation patch as well. And thus I think these functions should not depend\non a global variable as such. Also, it will be good if we keep them in a\nfile\nthat is accessible to frontend-only code. Well, you can ignore these\ncomments\nwith the argument saying that this refactoring can be done by the patch\nadding\nvalidation support. I have no issues. Since both the patches are dependent\nand\nposted on the same email chain, thought of putting that observation.\n\n\n5.\n+ switch (manifest_checksums)\n+ {\n+ case MC_SHA256:\n+ checksumlabel = \"SHA256:\";\n+ break;\n+ case MC_CRC32C:\n+ checksumlabel = \"CRC32C:\";\n+ break;\n+ case MC_NONE:\n+ break;\n+ }\n\nThis code in AddFileToManifest() is executed for every file for which we are\nadding an entry. However, the checksumlabel will be going to remain the same\nthroughout. Can it be set just once and then used as is?\n\n\n6.\nCan we avoid manifest_checksums from declaring it as a global variable?\nI think for that, we need to pass that to every function and thus need to\nchange the function signature of various functions. 
Currently, we pass\n\"StringInfo manifest\" to all the required function, will it better to pass\nthe struct variable instead? A struct may have members like,\n\"StringInfo manifest\" in it, checksum type (manifest_checksums),\nchecksum label, etc.\n\n\nThanks\n-- \nJeevan Chalke\nAssociate Database Architect & Team Lead, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\n\nOn Fri, Dec 6, 2019 at 12:05 PM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:On Fri, Dec 6, 2019 at 1:44 AM Robert Haas <robertmhaas@gmail.com> wrote:On Thu, Dec 5, 2019 at 11:22 AM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n> Here is the whole stack of patches.\n\nI committed 0001, as that's just refactoring and I think (hope) it's\nuncontroversial. I think 0002-0005 need to be squashed together\n(crediting all authors properly and in the appropriate order) as it's\nquite hard to understand right now, Please find attached single patch and I tried to add the credit to allthe authors.I had a look over the patch and here are my few review comments:1.+ if (pg_strcasecmp(manifest_checksum_algo, \"SHA256\") == 0)+ manifest_checksums = MC_SHA256;+ else if (pg_strcasecmp(manifest_checksum_algo, \"CRC32C\") == 0)+ manifest_checksums = MC_CRC32C;+ else if (pg_strcasecmp(manifest_checksum_algo, \"NONE\") == 0)+ manifest_checksums = MC_NONE;+ else+ ereport(ERROR,Is NONE is a valid input? I think the default is \"NONE\" only and thus no needof this as an input. It will be better if we simply error out if input isneither \"SHA256\" nor \"CRC32C\".I believe you have done this way as from pg_basebackup you are always passingMANIFEST_CHECKSUMS '%s' string which will have \"NONE\" if no user input isgiven. 
But I think passing that conditional will be better like we havemaxrate_clause for example.Well, this is what I think, feel free to ignore as I don't see any correctnessissue over here.2.+ if (manifest_checksums != MC_NONE)+ {+ checksumbuflen = finalize_manifest_checksum(cCtx, checksumbuf);+ switch (manifest_checksums)+ {+ case MC_NONE:+ break;+ }Since switch case is within \"if (manifest_checksums != MC_NONE)\" condition,I don't think we need a case for MC_NONE here. Rather we can use a defaultcase to error out.3.+ if (manifest_checksums != MC_NONE)+ {+ initialize_manifest_checksum(&cCtx);+ update_manifest_checksum(&cCtx, content, len);+ }@@ -1384,6 +1641,9 @@ sendFile(const char *readfilename, const char *tarfilename, struct stat *statbuf int segmentno = 0; char *segmentpath; bool verify_checksum = false;+ ChecksumCtx cCtx;++ initialize_manifest_checksum(&cCtx);I see that in a few cases you are calling initialize/update_manifest_checksum()conditional and at some other places call is unconditional. It seems likecalling unconditional will not have any issues as switch cases inside themreturn doing nothing when manifest_checksums is MC_NONE.4.initialize/update/finalize_manifest_checksum() functions may be needed by thevalidation patch as well. And thus I think these functions should not dependon a global variable as such. Also, it will be good if we keep them in a filethat is accessible to frontend-only code. Well, you can ignore these commentswith the argument saying that this refactoring can be done by the patch addingvalidation support. I have no issues. Since both the patches are dependent andposted on the same email chain, thought of putting that observation.5.+ switch (manifest_checksums)+ {+ case MC_SHA256:+ checksumlabel = \"SHA256:\";+ break;+ case MC_CRC32C:+ checksumlabel = \"CRC32C:\";+ break;+ case MC_NONE:+ break;+ }This code in AddFileToManifest() is executed for every file for which we areadding an entry. 
However, the checksumlabel will be going to remain the samethroughout. Can it be set just once and then used as is?6.Can we avoid manifest_checksums from declaring it as a global variable?I think for that, we need to pass that to every function and thus need tochange the function signature of various functions. Currently, we pass\"StringInfo manifest\" to all the required function, will it better to passthe struct variable instead? A struct may have members like,\"StringInfo manifest\" in it, checksum type (manifest_checksums),checksum label, etc.Thanks-- Jeevan ChalkeAssociate Database Architect & Team Lead, Product DevelopmentEnterpriseDB CorporationThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Dec 2019 11:15:23 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Thanks Jeevan for reviewing the patch and offline discussion.\n\nOn Mon, Dec 9, 2019 at 11:15 AM Jeevan Chalke <\njeevan.chalke@enterprisedb.com> wrote:\n\n>\n>\n> On Fri, Dec 6, 2019 at 12:05 PM Rushabh Lathia <rushabh.lathia@gmail.com>\n> wrote:\n>\n>>\n>>\n>> On Fri, Dec 6, 2019 at 1:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>>> On Thu, Dec 5, 2019 at 11:22 AM Rushabh Lathia <rushabh.lathia@gmail.com>\n>>> wrote:\n>>> > Here is the whole stack of patches.\n>>>\n>>> I committed 0001, as that's just refactoring and I think (hope) it's\n>>> uncontroversial. I think 0002-0005 need to be squashed together\n>>> (crediting all authors properly and in the appropriate order) as it's\n>>> quite hard to understand right now,\n>>\n>>\n>> Please find attached single patch and I tried to add the credit to all\n>> the authors.\n>>\n>\n> I had a look over the patch and here are my few review comments:\n>\n> 1.\n> + if (pg_strcasecmp(manifest_checksum_algo, \"SHA256\") == 0)\n> + manifest_checksums = MC_SHA256;\n> + else if (pg_strcasecmp(manifest_checksum_algo, \"CRC32C\") == 0)\n> + manifest_checksums = MC_CRC32C;\n> + else if (pg_strcasecmp(manifest_checksum_algo, \"NONE\") == 0)\n> + manifest_checksums = MC_NONE;\n> + else\n> + ereport(ERROR,\n>\n> Is NONE is a valid input? I think the default is \"NONE\" only and thus no\n> need\n> of this as an input. It will be better if we simply error out if input is\n> neither \"SHA256\" nor \"CRC32C\".\n>\n> I believe you have done this way as from pg_basebackup you are always\n> passing\n> MANIFEST_CHECKSUMS '%s' string which will have \"NONE\" if no user input is\n> given. 
But I think passing that conditional will be better like we have\n> maxrate_clause for example.\n>\n> Well, this is what I think, feel free to ignore as I don't see any\n> correctness\n> issue over here.\n>\n>\nI would still keep this NONE as it's look more cleaner in the say of\ngiven options to the checksums.\n\n\n> 2.\n> + if (manifest_checksums != MC_NONE)\n> + {\n> + checksumbuflen = finalize_manifest_checksum(cCtx, checksumbuf);\n> + switch (manifest_checksums)\n> + {\n> + case MC_NONE:\n> + break;\n> + }\n>\n> Since switch case is within \"if (manifest_checksums != MC_NONE)\" condition,\n> I don't think we need a case for MC_NONE here. Rather we can use a default\n> case to error out.\n>\n>\nYeah, with the new patch we don't have this part of code.\n\n\n> 3.\n> + if (manifest_checksums != MC_NONE)\n> + {\n> + initialize_manifest_checksum(&cCtx);\n> + update_manifest_checksum(&cCtx, content, len);\n> + }\n>\n> @@ -1384,6 +1641,9 @@ sendFile(const char *readfilename, const char\n> *tarfilename, struct stat *statbuf\n> int segmentno = 0;\n> char *segmentpath;\n> bool verify_checksum = false;\n> + ChecksumCtx cCtx;\n> +\n> + initialize_manifest_checksum(&cCtx);\n>\n>\n> I see that in a few cases you are calling\n> initialize/update_manifest_checksum()\n> conditional and at some other places call is unconditional. It seems like\n> calling unconditional will not have any issues as switch cases inside them\n> return doing nothing when manifest_checksums is MC_NONE.\n>\n>\nFixed.\n\n\n> 4.\n> initialize/update/finalize_manifest_checksum() functions may be needed by\n> the\n> validation patch as well. And thus I think these functions should not\n> depend\n> on a global variable as such. Also, it will be good if we keep them in a\n> file\n> that is accessible to frontend-only code. Well, you can ignore these\n> comments\n> with the argument saying that this refactoring can be done by the patch\n> adding\n> validation support. I have no issues. 
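[Editor's sketch, not part of the original mail.] The shape point 4 above asks for, checksum state threaded through an explicit context instead of read from a global, might look like this; the names mirror the patch but the code is illustrative Python, not the actual C:

```python
import hashlib

# Sketch of initialize/update/finalize helpers that carry their own state
# in a context object, so the same code could be reused by frontend
# validation. Simplified stand-in: only SHA256 and the no-checksum case.
class ChecksumCtx:
    def __init__(self, algo):
        self.algo = algo
        self.hash = hashlib.sha256() if algo == "SHA256" else None

def initialize_manifest_checksum(algo):
    return ChecksumCtx(algo)

def update_manifest_checksum(ctx, data):
    if ctx.hash is not None:
        ctx.hash.update(data)

def finalize_manifest_checksum(ctx):
    return "" if ctx.hash is None else ctx.hash.hexdigest()

ctx = initialize_manifest_checksum("SHA256")
update_manifest_checksum(ctx, b"backup_label ")
update_manifest_checksum(ctx, b"contents")
print(finalize_manifest_checksum(ctx))
```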
Since both the patches are dependent\n> and\n> posted on the same email chain, thought of putting that observation.\n>\n>\nMake sense, I just changed those API to that it doesn't have to\naccess the global.\n\n\n> 5.\n> + switch (manifest_checksums)\n> + {\n> + case MC_SHA256:\n> + checksumlabel = \"SHA256:\";\n> + break;\n> + case MC_CRC32C:\n> + checksumlabel = \"CRC32C:\";\n> + break;\n> + case MC_NONE:\n> + break;\n> + }\n>\n> This code in AddFileToManifest() is executed for every file for which we\n> are\n> adding an entry. However, the checksumlabel will be going to remain the\n> same\n> throughout. Can it be set just once and then used as is?\n>\n>\nYeah, with the attached patch we no more have this part of code.\n\n\n> 6.\n> Can we avoid manifest_checksums from declaring it as a global variable?\n> I think for that, we need to pass that to every function and thus need to\n> change the function signature of various functions. Currently, we pass\n> \"StringInfo manifest\" to all the required function, will it better to pass\n> the struct variable instead? A struct may have members like,\n> \"StringInfo manifest\" in it, checksum type (manifest_checksums),\n> checksum label, etc.\n>\n>\nI agree. Earlier I was not sure about this because that require data\nstructure\nto expose. But in the given attached patch that's what I tried, introduced\nnew\ndata structure and defined in basebackup.h and passed the same through the\nfunction so that doesn't require to pass an individual members. Also\nremoved\nglobal manifest_checksum and added the same in the newly introduced\nstructure.\n\nAttaching the patch, which need to apply on the top of earlier 0001 patch.\n\nThanks,\n\n-- \nRushabh Lathia\nwww.EnterpriseDB.com",
"msg_date": "Mon, 9 Dec 2019 14:52:34 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Dec 9, 2019 at 2:52 PM Rushabh Lathia <rushabh.lathia@gmail.com>\nwrote:\n\n>\n> Thanks Jeevan for reviewing the patch and offline discussion.\n>\n> On Mon, Dec 9, 2019 at 11:15 AM Jeevan Chalke <\n> jeevan.chalke@enterprisedb.com> wrote:\n>\n>>\n>>\n>> On Fri, Dec 6, 2019 at 12:05 PM Rushabh Lathia <rushabh.lathia@gmail.com>\n>> wrote:\n>>\n>>>\n>>>\n>>> On Fri, Dec 6, 2019 at 1:44 AM Robert Haas <robertmhaas@gmail.com>\n>>> wrote:\n>>>\n>>>> On Thu, Dec 5, 2019 at 11:22 AM Rushabh Lathia <\n>>>> rushabh.lathia@gmail.com> wrote:\n>>>> > Here is the whole stack of patches.\n>>>>\n>>>> I committed 0001, as that's just refactoring and I think (hope) it's\n>>>> uncontroversial. I think 0002-0005 need to be squashed together\n>>>> (crediting all authors properly and in the appropriate order) as it's\n>>>> quite hard to understand right now,\n>>>\n>>>\n>>> Please find attached single patch and I tried to add the credit to all\n>>> the authors.\n>>>\n>>\n>> I had a look over the patch and here are my few review comments:\n>>\n>> 1.\n>> + if (pg_strcasecmp(manifest_checksum_algo, \"SHA256\") == 0)\n>> + manifest_checksums = MC_SHA256;\n>> + else if (pg_strcasecmp(manifest_checksum_algo, \"CRC32C\") ==\n>> 0)\n>> + manifest_checksums = MC_CRC32C;\n>> + else if (pg_strcasecmp(manifest_checksum_algo, \"NONE\") == 0)\n>> + manifest_checksums = MC_NONE;\n>> + else\n>> + ereport(ERROR,\n>>\n>> Is NONE is a valid input? I think the default is \"NONE\" only and thus no\n>> need\n>> of this as an input. It will be better if we simply error out if input is\n>> neither \"SHA256\" nor \"CRC32C\".\n>>\n>> I believe you have done this way as from pg_basebackup you are always\n>> passing\n>> MANIFEST_CHECKSUMS '%s' string which will have \"NONE\" if no user input is\n>> given. 
But I think passing that conditional will be better like we have\n>> maxrate_clause for example.\n>>\n>> Well, this is what I think, feel free to ignore as I don't see any\n>> correctness\n>> issue over here.\n>>\n>>\n> I would still keep this NONE as it's look more cleaner in the say of\n> given options to the checksums.\n>\n>\n>> 2.\n>> + if (manifest_checksums != MC_NONE)\n>> + {\n>> + checksumbuflen = finalize_manifest_checksum(cCtx, checksumbuf);\n>> + switch (manifest_checksums)\n>> + {\n>> + case MC_NONE:\n>> + break;\n>> + }\n>>\n>> Since switch case is within \"if (manifest_checksums != MC_NONE)\"\n>> condition,\n>> I don't think we need a case for MC_NONE here. Rather we can use a default\n>> case to error out.\n>>\n>>\n> Yeah, with the new patch we don't have this part of code.\n>\n>\n>> 3.\n>> + if (manifest_checksums != MC_NONE)\n>> + {\n>> + initialize_manifest_checksum(&cCtx);\n>> + update_manifest_checksum(&cCtx, content, len);\n>> + }\n>>\n>> @@ -1384,6 +1641,9 @@ sendFile(const char *readfilename, const char\n>> *tarfilename, struct stat *statbuf\n>> int segmentno = 0;\n>> char *segmentpath;\n>> bool verify_checksum = false;\n>> + ChecksumCtx cCtx;\n>> +\n>> + initialize_manifest_checksum(&cCtx);\n>>\n>>\n>> I see that in a few cases you are calling\n>> initialize/update_manifest_checksum()\n>> conditional and at some other places call is unconditional. It seems like\n>> calling unconditional will not have any issues as switch cases inside them\n>> return doing nothing when manifest_checksums is MC_NONE.\n>>\n>>\n> Fixed.\n>\n>\n>> 4.\n>> initialize/update/finalize_manifest_checksum() functions may be needed by\n>> the\n>> validation patch as well. And thus I think these functions should not\n>> depend\n>> on a global variable as such. Also, it will be good if we keep them in a\n>> file\n>> that is accessible to frontend-only code. 
Well, you can ignore these\n>> comments\n>> with the argument saying that this refactoring can be done by the patch\n>> adding\n>> validation support. I have no issues. Since both the patches are\n>> dependent and\n>> posted on the same email chain, thought of putting that observation.\n>>\n>>\n> Make sense, I just changed those API to that it doesn't have to\n> access the global.\n>\n>\n>> 5.\n>> + switch (manifest_checksums)\n>> + {\n>> + case MC_SHA256:\n>> + checksumlabel = \"SHA256:\";\n>> + break;\n>> + case MC_CRC32C:\n>> + checksumlabel = \"CRC32C:\";\n>> + break;\n>> + case MC_NONE:\n>> + break;\n>> + }\n>>\n>> This code in AddFileToManifest() is executed for every file for which we\n>> are\n>> adding an entry. However, the checksumlabel will be going to remain the\n>> same\n>> throughout. Can it be set just once and then used as is?\n>>\n>>\n> Yeah, with the attached patch we no more have this part of code.\n>\n>\n>> 6.\n>> Can we avoid manifest_checksums from declaring it as a global variable?\n>> I think for that, we need to pass that to every function and thus need to\n>> change the function signature of various functions. Currently, we pass\n>> \"StringInfo manifest\" to all the required function, will it better to pass\n>> the struct variable instead? A struct may have members like,\n>> \"StringInfo manifest\" in it, checksum type (manifest_checksums),\n>> checksum label, etc.\n>>\n>>\n> I agree. Earlier I was not sure about this because that require data\n> structure\n> to expose. But in the given attached patch that's what I tried,\n> introduced new\n> data structure and defined in basebackup.h and passed the same through the\n> function so that doesn't require to pass an individual members. 
Also\n> removed the\n> global manifest_checksum and added the same in the newly introduced\n> structure.\n>\n> Attaching the patch, which needs to be applied on top of the earlier 0001 patch.\n>\n\nAttaching another version of the 0002 patch, as my colleague Jeevan Chalke\npointed out\na few indentation problems in the 0002 patch which I sent earlier. Fixed the same\nin\nthe latest patch.\n\n\n\n\n> Thanks,\n>\n> --\n> Rushabh Lathia\n> www.EnterpriseDB.com\n>\n\n\n-- \nRushabh Lathia",
"msg_date": "Tue, 10 Dec 2019 15:29:43 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 3:29 PM Rushabh Lathia <rushabh.lathia@gmail.com>\nwrote:\n\n>\n> Attaching another version of the 0002 patch, as my colleague Jeevan Chalke\n> pointed out\n> a few indentation problems in the 0002 patch which I sent earlier. Fixed the\n> same in\n> the latest patch.\n>\n\nI had a look over the new patch and see no issues. Looks good to me.\nThanks for quickly fixing the review comments posted earlier.\n\nHowever, here are the minor comments:\n\n1.\n@@ -122,6 +133,7 @@ static long long int total_checksum_failures;\n /* Do not verify checksums. */\n static bool noverify_checksums = false;\n\n+\n /*\n * The contents of these directories are removed or recreated during server\n * start so they are not included in backups. The directories themselves\nare\n\n\nPlease remove this unnecessary change.\n\nThe patch also needs a pgindent run.\n\nThanks\n-- \nJeevan Chalke\nAssociate Database Architect & Team Lead, Product Development\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 10 Dec 2019 16:25:50 +0530",
"msg_from": "Jeevan Chalke <jeevan.chalke@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nPlease find attached the patch for the backup validator implementation (0004\npatch). This patch is based\non Rushabh's latest patch for the backup manifest.\n\nThere are some functions required on the client side as well, so I have moved\nthose functions\nand some data structures to a common place so that they are accessible to\nboth. (0003 patch).\n\nMy colleague Rajkumar Raghuwanshi has prepared the WIP patch (0005) for TAP\ntest cases, which\nis also attached. As of now, test cases related to the tablespace and tar\nbackup format are missing;\nI will continue working on the same and submit the complete patch.\n\nWith this mail, I have attached the complete patch stack for the backup\nmanifest and backup\nvalidation implementation.\n\nPlease let me know your thoughts on the same.\n\nOn Fri, Dec 6, 2019 at 1:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Dec 5, 2019 at 11:22 AM Rushabh Lathia <rushabh.lathia@gmail.com>\n> wrote:\n> > Here is the whole stack of patches.\n>\n> I committed 0001, as that's just refactoring and I think (hope) it's\n> uncontroversial. I think 0002-0005 need to be squashed together\n> (crediting all authors properly and in the appropriate order) as it's\n> quite hard to understand right now, and that Suraj's patch to validate\n> the backup should be included in the patch stack. It needs\n> documentation. Also, we need, either in that patch or a separate one, TAP\n> tests that exercise this feature. 
Things we should try to check:\n>\n> - Plain format backups can be verified against the manifest.\n> - Tar format backups can be verified against the manifest after\n> untarring (this might be a problem; not sure there's any guarantee\n> that we have a working \"tar\" command available).\n> - Verification succeeds for all available checksums algorithms and\n> also for no checksum algorithm (should still check which files are\n> present, and sizes).\n> - If we tamper with a backup by removing a file, adding a file, or\n> changing the size of a file, the modification is detected even without\n> checksums.\n> - If we tamper with a backup by changing the contents of a file but\n> not the size, the modification is detected if checksums are used.\n> - Everything above still works if there is user-defined tablespace\n> that contains a table.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Tue, 10 Dec 2019 17:10:35 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Dec 10, 2019 at 6:40 AM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n> Please find attached patch for backup validator implementation (0004 patch). This patch is based\n> on Rushabh's latest patch for backup manifest.\n>\n> There are some functions required at client side as well, so I have moved those functions\n> and some data structure at common place so that they can be accessible for both. (0003 patch).\n>\n> My colleague Rajkumar Raghuwanshi has prepared the WIP patch (0005) for tap test cases which\n> is also attached. As of now, test cases related to the tablespace and tar backup format are missing,\n> will continue work on same and submit the complete patch.\n>\n> With this mail, I have attached the complete patch stack for backup manifest and backup\n> validate implementation.\n>\n> Please let me know your thoughts on the same.\n\nWell, for the second time on this thread, please don't take a bunch of\nsomebody else's code and post it in a patch that doesn't attribute\nthat person as one of the authors. For the second time on this thread,\nthe person is me, but don't borrow *anyone's* code without proper\nattribution. It's really important!\n\nOn a related note, it's a very good idea to use git format-patch and\ngit rebase -i to maintain patch stacks like this. Rushabh seems to\nhave done that, but the files you're posting look like raw 'git diff'\noutput. Notice that this gives him a way to include authorship\ninformation and a tentative commit message in each patch, but you\ndon't have any of that.\n\nAlso on a related note, part of the process of adapting existing code\nto a new purpose is adapting the comments. 
You haven't done that:\n\n+ * Search a result-set hash table for a row matching a given filename.\n...\n+ * Insert a row into a result-set hash table, provided no such row is already\n...\n+ * Most of the values\n+ * that we're hashing are short integers formatted as text, so there\n+ * shouldn't be much room for pathological input.\n\nI think that what we should actually do here is try to use simplehash.\nRight now, it won't work for frontend code, but I posted some patches\nto try to address that issue:\n\nhttps://www.postgresql.org/message-id/CA%2BTgmob8oyh02NrZW%3DxCScB%2B5GyJ-jVowE3%2BTWTUmPF%3DFsGWTA%40mail.gmail.com\n\nThat would have a few advantages. One, we wouldn't need to know the\nnumber of elements in advance, because simplehash can grow\ndynamically. Two, we could use the iteration interfaces to walk the\nhash table. Your solution to that is pgrhash_seq_search, but that's\nactually not well-designed, because it's not a generic iterator\nfunction but something that knows specifically about the 'touch' flag.\nI incidentally suggest renaming 'touch' to 'matched'; 'touch' is not\nbad, but I think 'matched' will be a little more recognizable.\n\nPlease run pgindent. If needed, first add locally defined types to\ntypedefs.list, so that things indent properly.\n\nIt's not a crazy idea to try to share some data structures and code\nbetween the frontend and the backend here, but I think\nsrc/common/backup.c and src/include/common/backup.h are far too\ngeneric names given what the code is actually doing. It's mostly about\nchecksums, not backup, and I think it should be named accordingly. I\nsuggest removing \"manifestinfo\" and renaming the rest to just talk\nabout checksums rather than manifests. That would make it logical to\nreuse this for any other future code that needs a configurable\nchecksum type. 
Also, how about adding a function like:\n\nextern bool parse_checksum_algorithm(char *name, ChecksumAlgorithm *algo);\n\n...which would return true and set *algo if name is recognized, and\nreturn false otherwise. That code could be used on both the client and\nserver sides of this patch, and by any future patches that want to\nreuse this scaffolding.\n\nThe file header for backup.h has the wrong filename (string.h). The\nheader format looks somewhat atypical compared to what we normally do,\ntoo.\n\nIt's arguable, but I tend to think that it would be better to\nhex-encode the CRC rather than printing it as an integer. Maybe\nhex_encode() is another thing that could be moved into the new\nsrc/common file.\n\nAs I said before about Rushabh's patch set, it's very confusing that\nwe have so many patches here stacked up. Like, you have 0002 moving\nstuff, and then 0003 moving it again. That's super-confusing. Please\ntry to structure the patch set so as to make it as easy to review as\npossible.\n\nRegarding the test case patch, error checks are important! Don't do\nthings like this:\n\n+open my $modify_file_sha256, '>>', \"$tempdir/backup_verify/postgresql.conf\";\n+print $modify_file_sha256 \"port = 5555\\n\";\n+close $modify_file_sha256;\n\nIf the open fails, then it and the print and the close are going to\nsilently do nothing. That's bad. I don't know exactly what the\ncustomary error-checking is for things like this in TAP tests, but I\nhope it's not like this, because this has a pretty fair chance of\nlooking like it's testing something that it isn't. Let's figure out\nwhat the best practice in this area is and adhere to it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 10 Dec 2019 14:39:47 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Thanks, Robert for the review.\n\nOn Wed, Dec 11, 2019 at 1:10 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Dec 10, 2019 at 6:40 AM Suraj Kharage\n> <suraj.kharage@enterprisedb.com> wrote:\n> > Please find attached patch for backup validator implementation (0004\n> patch). This patch is based\n> > on Rushabh's latest patch for backup manifest.\n> >\n> > There are some functions required at client side as well, so I have\n> moved those functions\n> > and some data structure at common place so that they can be accessible\n> for both. (0003 patch).\n> >\n> > My colleague Rajkumar Raghuwanshi has prepared the WIP patch (0005) for\n> tap test cases which\n> > is also attached. As of now, test cases related to the tablespace and\n> tar backup format are missing,\n> > will continue work on same and submit the complete patch.\n> >\n> > With this mail, I have attached the complete patch stack for backup\n> manifest and backup\n> > validate implementation.\n> >\n> > Please let me know your thoughts on the same.\n>\n> Well, for the second time on this thread, please don't take a bunch of\n> somebody else's code and post it in a patch that doesn't attribute\n> that person as one of the authors. For the second time on this thread,\n> the person is me, but don't borrow *anyone's* code without proper\n> attribution. It's really important!\n>\n> On a related note, it's a very good idea to use git format-patch and\n> git rebase -i to maintain patch stacks like this. Rushabh seems to\n> have done that, but the files you're posting look like raw 'git diff'\n> output. Notice that this gives him a way to include authorship\n> information and a tentative commit message in each patch, but you\n> don't have any of that.\n>\n\nSorry, I have corrected this in the attached v2 patch set.\n\n\n> Also on a related note, part of the process of adapting existing code\n> to a new purpose is adapting the comments. 
You haven't done that:\n>\n> + * Search a result-set hash table for a row matching a given filename.\n> ...\n> + * Insert a row into a result-set hash table, provided no such row is\n> already\n> ...\n> + * Most of the values\n> + * that we're hashing are short integers formatted as text, so there\n> + * shouldn't be much room for pathological input.\n>\nCorrected in v2 patch.\n\n\n> I think that what we should actually do here is try to use simplehash.\n> Right now, it won't work for frontend code, but I posted some patches\n> to try to address that issue:\n>\n>\n> https://www.postgresql.org/message-id/CA%2BTgmob8oyh02NrZW%3DxCScB%2B5GyJ-jVowE3%2BTWTUmPF%3DFsGWTA%40mail.gmail.com\n>\n> That would have a few advantages. One, we wouldn't need to know the\n> number of elements in advance, because simplehash can grow\n> dynamically. Two, we could use the iteration interfaces to walk the\n> hash table. Your solution to that is pgrhash_seq_search, but that's\n> actually not well-designed, because it's not a generic iterator\n> function but something that knows specifically about the 'touch' flag.\n> I incidentally suggest renaming 'touch' to 'matched;' 'touch' is not\n> bad, but I think 'matched' will be a little more recognizable.\n>\n\nThanks for the suggestion. Will try to implement the same and update\naccordingly.\nI am assuming that I need to build the patch based on the changes that you\nproposed on the mentioned thread.\n\n\n> Please run pgindent. If needed, first add locally defined types to\n> typedefs.list, so that things indent properly.\n>\n> It's not a crazy idea to try to share some data structures and code\n> between the frontend and the backend here, but I think\n> src/common/backup.c and src/include/common/backup.h is a far too\n> generic name given what the code is actually doing. It's mostly about\n> checksums, not backup, and I think it should be named accordingly. 
I\n> suggest removing \"manifestinfo\" and renaming the rest to just talk\n> about checksums rather than manifests. That would make it logical to\n> reuse this for any other future code that needs a configurable\n> checksum type. Also, how about adding a function like:\n>\n> extern bool parse_checksum_algorithm(char *name, ChecksumAlgorithm *algo);\n>\n> ...which would return true and set *algo if name is recognized, and\n> return false otherwise. That code could be used on both the client and\n> server sides of this patch, and by any future patches that want to\n> reuse this scaffolding.\n>\n\nCorrected the filename and implemented the function as suggested.\n\n\n> The file header for backup.h has the wrong filename (string.h). The\n> header format looks somewhat atypical compared to what we normally do,\n> too.\n\n\nMy bad, corrected the header format as well.\n\n\n>\n>\n> It's arguable, but I tend to think that it would be better to\n> hex-encode the CRC rather than printing it as an integer. Maybe\n> hex_encode() is another thing that could be moved into the new\n> src/common file.\n\n\nWe are already encoding the CRC checksum as well. Please let me know if I\nmisunderstood anything.\nMoved hex_encode into src/common.\n\n\n> As I said before about Rushabh's patch set, it's very confusing that\n> we have so many patches here stacked up. Like, you have 0002 moving\n> stuff, and then 0003 moving it again. That's super-confusing. Please\n> try to structure the patch set so as to make it as easy to review as\n> possible.\n>\n\nSorry for the confusion. I have squashed the 0001 to 0003 patches into one patch.\n\n\n> Regarding the test case patch, error checks are important! Don't do\n> things like this:\n>\n> +open my $modify_file_sha256, '>>',\n> \"$tempdir/backup_verify/postgresql.conf\";\n> +print $modify_file_sha256 \"port = 5555\\n\";\n> +close $modify_file_sha256;\n>\n> If the open fails, then it and the print and the close are going to\n> silently do nothing. 
That's bad. I don't know exactly what the\n> customary error-checking is for things like this in TAP tests, but I\n> hope it's not like this, because this has a pretty fair chance of\n> looking like it's testing something that it isn't. Let's figure out\n> what the best practice in this area is and adhere to it.\n>\n\nRajkumar has fixed this; please find attached the 0003 patch for the same.\n\nPlease find attached the v2 patch set.\n\nTODO: will implement the simplehash as suggested.\n\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Thu, 12 Dec 2019 18:02:49 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\n\n> I think that what we should actually do here is try to use simplehash.\n>> Right now, it won't work for frontend code, but I posted some patches\n>> to try to address that issue:\n>>\n>>\n>> https://www.postgresql.org/message-id/CA%2BTgmob8oyh02NrZW%3DxCScB%2B5GyJ-jVowE3%2BTWTUmPF%3DFsGWTA%40mail.gmail.com\n>>\n>> That would have a few advantages. One, we wouldn't need to know the\n>> number of elements in advance, because simplehash can grow\n>> dynamically. Two, we could use the iteration interfaces to walk the\n>> hash table. Your solution to that is pgrhash_seq_search, but that's\n>> actually not well-designed, because it's not a generic iterator\n>> function but something that knows specifically about the 'touch' flag.\n>> I incidentally suggest renaming 'touch' to 'matched'; 'touch' is not\n>> bad, but I think 'matched' will be a little more recognizable.\n>>\n>\n> Thanks for the suggestion. Will try to implement the same and update\n> accordingly.\n> I am assuming that I need to build the patch based on the changes that you\n> proposed on the mentioned thread.\n>\n>\n\nI have implemented simplehash in the backup validator patch as Robert\nsuggested. Please find attached the 0002 patch for the same.\n\nKindly review and let me know your thoughts.\n\nAlso attached are the remaining patches. 0001 and 0003 are the same as in v2; only the\npatch version is bumped.\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Tue, 17 Dec 2019 11:24:46 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 12:54 AM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n> I have implemented the simplehash in backup validator patch as Robert suggested. Please find attached 0002 patch for the same.\n>\n> kindly review and let me know your thoughts.\n\n+#define CHECKSUM_LENGTH 256\n\nThis seems wrong. Not all checksums are the same length, and none of\nthe ones we're using are 256 bytes in length, and if we've got to have\na constant someplace for the maximum checksum length, it should\nprobably be in the new header file, not here. But I don't think we\nshould need this in the first place; see comments below about how to\nrevise the parsing of the manifest file.\n\n+ char filetype[10];\n\nA mysterious 10-byte field with no comments explaining what it\nmeans... and the same magic number 10 appears in at least one other\nplace in the patch.\n\n+typedef struct manifesthash_hash *hashtab;\n\nThis declares a new *type* called hashtab, not a variable called\nhashtab. The new type is not used anywhere, but later, you have\nseveral variables of the same type that have this name. Just remove\nthis: it's wrong and unused.\n\n+static enum ChecksumAlgorithm checksum_type = MC_NONE;\n\nRemove \"enum\". Not needed, because you have a typedef for it in the\nheader, and not per style.\n\n+static manifesthash_hash *create_manifest_hash(char manifest_path[MAXPGPATH]);\n\nWhitespace is wrong. The whole patch needs a visit from pgindent with\na properly-updated typedefs.list.\n\nAlso, you will struggle to find anywhere else in the code base where we\npass a character array as a function argument. I don't know why this\nisn't just char *.\n\n+ if(verify_backup)\n\nWhitespace wrong here, too.\n\n+ * Read the backup_manifest file and generate the hash table, then scan data\n+ * directroy and verify each file. 
Finally, iterate on hash table to find\n+ * out missing files.\n\nYou've got a word spelled wrong here, but the bigger problem is that\nthis comment doesn't actually describe what this function is trying to\ndo. Instead, it describes how it does it. If it's necessary to explain\nwhat steps the function takes in order to accomplish some goal, you\nshould comment individual bits of code in the function. The header\ncomment is a high-level overview, not a description of the algorithm.\n\nIt's also pretty unhelpful, here and elsewhere, to refer to \"the hash\ntable\" as if there were only one, and as if the reader were supposed\nto know something about it when you haven't told them anything about\nit.\n\n+ if (!entry->matched)\n+ {\n+ pg_log_info(\"missing file: %s\", entry->filename);\n+ }\n+\n\nThe braces here are not project style. We usually omit braces when\nonly a single line of code is present.\n\nI think some work needs to be done to standardize and improve the\nmessages that get produced here. You have:\n\n1. missing file: %s\n2. duplicate file present: %s\n3. size changed for file: %s, original size: %d, current size: %zu\n4. checksum difference for file: %s\n5. extra file found: %s\n\nI suggest:\n\n1. file \\\"%s\\\" is present in manifest but missing from the backup\n2. file \\\"%s\\\" has multiple manifest entries\n(this one should probably be pg_log_error(), not pg_log_info(), as it\nrepresents a corrupt-manifest problem)\n3. file \\\"%s\\\" has size %lu in manifest but size %lu in backup\n4. file \\\"%s\\\" has checksum %s in manifest but checksum %s in backup\n5. file \\\"%s\\\" is present in backup but not in manifest\n\nYour patch actually doesn't compile on my system, because for the\nthird message above, it uses %zu to print the size. But %zu is for\nsize_t, not off_t. 
I went looking for other places in the code where\nwe print off_t; based on that, I think the right thing to do is to\nprint it using %lu and write (unsigned long) st.st_size.\n\n+ char file_checksum[256];\n+ char header[1024];\n\nMore arbitrary constants.\n\n+ if (!file)\n+ {\n+ pg_log_error(\"could not open backup_manifest\");\n\nThat's bad error reporting. See e.g. readfile() in initdb.c.\n\n+ if (fscanf(file, \"%1023[^\\n]\\n\", header) != 1)\n+ {\n+ pg_log_error(\"error while reading the header from backup_manifest\");\n\nThat's also bad error reporting. It is only a slight step up from\n\"ERROR: error\".\n\nAnd we have another magic number (1023).\n\n+ appendPQExpBufferStr(manifest, header);\n+ appendPQExpBufferStr(manifest, \"\\n\");\n...\n+ appendPQExpBuffer(manifest, \"File\\t%s\\t%d\\t%s\\t%s\\n\", filename,\n+ filesize, mtime, checksum_with_type);\n\nThis whole thing seems completely crazy to me. Basically, you're\ntrying to use fscanf() to parse the file. But then, because fscanf()\ndoesn't give you the original bytes back, you're trying to reassemble\nthe data that you parsed to recover the original line, so that you can\nstuff it in the buffer and eventually checksum it. However, that's\nhighly error-prone. You're basically duplicating server code, and thus\nrisking getting out of sync in the server code, to work around a\nproblem that is entirely self-inflicted, namely, deciding to use\nfscanf().\n\nWhat I would recommend is:\n\n1. Use open(), read(), close() rather than the fopen() family of\nfunctions. As we have discovered elsewhere, fread() doesn't promise to\nset errno, so we can't necessarily get reliable error-reporting out of\nit.\n\n2. Before you start reading the file, create a buffer that's large\nenough to hold the whole thing, by using fstat() to figure out how big\nthe file is. Read the whole file into that buffer. If you're not able\nto read the whole file -- i.e. open() or read() or close() fail --\nthen just error out and exit.\n\n3. 
Now advance through the file line by line. Write a function that\nknows how to search forward for the next \\r or \\n but with checks to\nmake sure it can't run off the end of the buffer, and use that to\nlocate the end of each line so that you can walk forward. As you walk\nforward line by line, add the line you just processed to the checksum.\nThat way, you only need a single pass over the data. Also, you can\nmodify it in place. More on that below.\n\n4. As you examine each line, start by examining the first word. You'll\nneed a function that finds the first word by searching forward for a\ntab character, but not beyond the end of the line. The first word of\nthe first line should be PostgreSQL-Backup-Manifest-Version and the\nsecond word should be 1. Then on each subsequent line check whether\nthe first word is File or Manifest-Checksum or something else,\nerroring out in the last case. If it's Manifest-Checksum, verify that\nthis is the last line of the file and that the checksum matches. If\nit's File, break the line into fields so you can add it to the hash\ntable. You'll want a pointer to the filename and a pointer to the\nchecksum, and you'll want to parse the size as an integer. Instead of\nallocating new memory for those fields, just overwrite the character\nthat follows the field with a \\0. There must be one - either \\t or \\n\n- so you shouldn't run off the end of the buffer.\n\nIf you do this, a bunch of the fixed-size buffers you have right now\ngo away. You don't need the variable filetype[10] any more, or\nchecksum_with_type[CHECKSUM_LENGTH], or checksum[CHECKSUM_LENGTH], or\nthe character arrays inside DataDirectoryFileInfo. Instead you can\njust have pointers into the buffer that contains the file. 
And you\ndon't need this code to back up using fseek() and reread the lines,\neither.\n\nAlso read this article:\n\nhttps://stackoverflow.com/questions/2430303/disadvantages-of-scanf\n\nNote that the very first point in the article talks about the problem\nof overrunning the buffer, which you certainly have in the current\ncode right here:\n\n+ if (fscanf(file, \"%s\\t%s\\t%d\\t%23[^\\t] %s\\n\", filetype, filename,\n\nfiletype is declared as char[10], but %s could read arbitrarily much data.\n\n+ filename = (char*) pg_malloc(MAXPGPATH);\n\npg_malloc returns void *, so no cast is required.\n\n+ if (strcmp(checksum_with_type, \"-\") == 0)\n+ {\n+ checksum_type = MC_NONE;\n+ }\n+ else\n+ {\n+ if (strncmp(checksum_with_type, \"SHA256\", 6) == 0)\n\nUse parse_checksum_algorithm. Right now you've invented a \"common\"\nfunction with 1 caller, but I explicitly suggested previously that you\nput it in common so that you could reuse it.\n\n+ if (strcmp(de->d_name, \".\") == 0 || strcmp(de->d_name, \"..\") == 0 ||\n+ strcmp(de->d_name, \"pg_wal\") == 0)\n+ continue;\n\nIgnoring pg_wal at the top level might be OK, but this will ignore a\npg_wal entry anywhere in the directory tree.\n\n+ /* Skip backup manifest file. */\n+ if (strcmp(de->d_name, \"backup_manifest\") == 0)\n+ return;\n\nSame problem.\n\n+ filename = createPQExpBuffer();\n+ if (!filename)\n+ {\n+ pg_log_error(\"out of memory\");\n+ exit(1);\n+ }\n+\n+ appendPQExpBuffer(filename, \"%s%s\", relative_path, de->d_name);\n\nJust use char filename[MAXPGPATH] and snprintf here, as you do\nelsewhere. It will be simpler and save memory.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Dec 2019 16:24:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Thank you for the review comments.\n\nOn Thu, Dec 19, 2019 at 2:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Dec 17, 2019 at 12:54 AM Suraj Kharage\n> <suraj.kharage@enterprisedb.com> wrote:\n> > I have implemented the simplehash in backup validator patch as Robert\n> suggested. Please find attached 0002 patch for the same.\n> >\n> > kindly review and let me know your thoughts.\n>\n> +#define CHECKSUM_LENGTH 256\n>\n> This seems wrong. Not all checksums are the same length, and none of\n> the ones we're using are 256 bytes in length, and if we've got to have\n> a constant someplace for the maximum checksum length, it should\n> probably be in the new header file, not here. But I don't think we\n> should need this in the first place; see comments below about how to\n> revise the parsing of the manifest file.\n>\n\nI agree. Removed.\n\n> + char filetype[10];\n>\n> A mysterious 10-byte field with no comments explaining what it\n> means... and the same magic number 10 appears in at least one other\n> place in the patch.\n>\n\nWith the current logic, we don't need this anymore.\nI have removed the filetype from the structure as we are not doing any\ncomparison anywhere.\n\n\n>\n> +typedef struct manifesthash_hash *hashtab;\n>\n> This declares a new *type* called hashtab, not a variable called\n> hashtab. The new type is not used anywhere, but later, you have\n> several variables of the same type that have this name. Just remove\n> this: it's wrong and unused.\n>\n>\nCorrected.\n\n\n> +static enum ChecksumAlgorithm checksum_type = MC_NONE;\n>\n> Remove \"enum\". Not needed, because you have a typedef for it in the\n> header, and not per style.\n>\n> Corrected.\n\n\n> +static manifesthash_hash *create_manifest_hash(char\n> manifest_path[MAXPGPATH]);\n>\n> Whitespace is wrong. 
The whole patch needs a visit from pgindent with\n> a properly-updated typedefs.list.\n>\n> Also, you will struggle to find anywhere else in the code base where we\n> pass a character array as a function argument. I don't know why this\n> isn't just char *.\n>\n\nCorrected.\n\n\n>\n> + if(verify_backup)\n>\n> Whitespace wrong here, too.\n>\n>\nFixed.\n\n\n>\n> It's also pretty unhelpful, here and elsewhere, to refer to \"the hash\n> table\" as if there were only one, and as if the reader were supposed\n> to know something about it when you haven't told them anything about\n> it.\n>\n> + if (!entry->matched)\n> + {\n> + pg_log_info(\"missing file: %s\", entry->filename);\n> + }\n> +\n>\n> The braces here are not project style. We usually omit braces when\n> only a single line of code is present.\n>\n\nFixed.\n\n\n>\n> I think some work needs to be done to standardize and improve the\n> messages that get produced here. You have:\n>\n> 1. missing file: %s\n> 2. duplicate file present: %s\n> 3. size changed for file: %s, original size: %d, current size: %zu\n> 4. checksum difference for file: %s\n> 5. extra file found: %s\n>\n> I suggest:\n>\n> 1. file \\\"%s\\\" is present in manifest but missing from the backup\n> 2. file \\\"%s\\\" has multiple manifest entries\n> (this one should probably be pg_log_error(), not pg_log_info(), as it\n> represents a corrupt-manifest problem)\n> 3. file \\\"%s\\\" has size %lu in manifest but size %lu in backup\n> 4. file \\\"%s\\\" has checksum %s in manifest but checksum %s in backup\n> 5. file \\\"%s\\\" is present in backup but not in manifest\n>\n\nCorrected.\n\n\n>\n> Your patch actually doesn't compile on my system, because for the\n> third message above, it uses %zu to print the size. But %zu is for\n> size_t, not off_t. 
I went looking for other places in the code where\n> we print off_t; based on that, I think the right thing to do is to\n> print it using %lu and write (unsigned long) st.st_size.\n>\n\nCorrected.\n\n+ char file_checksum[256];\n> + char header[1024];\n>\n> More arbitrary constants.\n\n\n\n>\n> + if (!file)\n> + {\n> + pg_log_error(\"could not open backup_manifest\");\n>\n> That's bad error reporting. See e.g. readfile() in initdb.c.\n>\n\nCorrected.\n\n\n>\n> + if (fscanf(file, \"%1023[^\\n]\\n\", header) != 1)\n> + {\n> + pg_log_error(\"error while reading the header from\n> backup_manifest\");\n>\n> That's also bad error reporting. It is only a slight step up from\n> \"ERROR: error\".\n>\n> And we have another magic number (1023).\n>\n\nWith current logic, we don't need this anymore.\n\n\n>\n> + appendPQExpBufferStr(manifest, header);\n> + appendPQExpBufferStr(manifest, \"\\n\");\n> ...\n> + appendPQExpBuffer(manifest, \"File\\t%s\\t%d\\t%s\\t%s\\n\", filename,\n> + filesize, mtime, checksum_with_type);\n>\n> This whole thing seems completely crazy to me. Basically, you're\n> trying to use fscanf() to parse the file. But then, because fscanf()\n> doesn't give you the original bytes back, you're trying to reassemble\n> the data that you parsed to recover the original line, so that you can\n> stuff it in the buffer and eventually checksum it. However, that's\n> highly error-prone. You're basically duplicating server code, and thus\n> risking getting out of sync in the server code, to work around a\n> problem that is entirely self-inflicted, namely, deciding to use\n> fscanf().\n>\n> What I would recommend is:\n>\n> 1. Use open(), read(), close() rather than the fopen() family of\n> functions. As we have discovered elsewhere, fread() doesn't promise to\n> set errno, so we can't necessarily get reliable error-reporting out of\n> it.\n>\n> 2. 
Before you start reading the file, create a buffer that's large\n> enough to hold the whole thing, by using fstat() to figure out how big\n> the file is. Read the whole file into that buffer. If you're not able\n> to read the whole file -- i.e. open() or read() or close() fail --\n> then just error out and exit.\n>\n> 3. Now advance through the file line by line. Write a function that\n> knows how to search forward for the next \\r or \\n but with checks to\n> make sure it can't run off the end of the buffer, and use that to\n> locate the end of each line so that you can walk forward. As you walk\n> forward line by line, add the line you just processed to the checksum.\n> That way, you only need a single pass over the data. Also, you can\n> modify it in place. More on that below.\n>\n> 4. As you examine each line, start by examining the first word. You'll\n> need a function that finds the first word by searching forward for a\n> tab character, but not beyond the end of the line. The first word of\n> the first line should be PostgreSQL-Backup-Manifest-Version and the\n> second word should be 1. Then on each subsequent line check whether\n> the first word is File or Manifest-Checksum or something else,\n> erroring out in the last case. If it's Manifest-Checksum, verify that\n> this is the last line of the file and that the checksum matches. If\n> it's File, break the line into fields so you can add it to the hash\n> table. You'll want a pointer to the filename and a pointer to the\n> checksum, and you'll want to parse the size as an integer. Instead of\n> allocating new memory for those fields, just overwrite the character\n> that follows the field with a \\0. There must be one - either \\t or \\n\n> - so you shouldn't run off the end of the buffer.\n>\n> If you do this, a bunch of the fixed-size buffers you have right now\n> go away. 
You don't need the variable filetype[10] any more, or\n> checksum_with_type[CHECKSUM_LENGTH], or checksum[CHECKSUM_LENGTH], or\n> the character arrays inside DataDirectoryFileInfo. Instead you can\n> just have pointers into the buffer that contains the file. And you\n> don't need this code to back up using fseek() and reread the lines,\n> either.\n>\n>\nThanks for the suggestion. I tried to mimic your approach in the attached\nv4-0002 patch.\nPlease let me know your thoughts on the same.\n\nAlso read this article:\n>\n> https://stackoverflow.com/questions/2430303/disadvantages-of-scanf\n>\n> Note that the very first point in the article talks about the problem\n> of overrunning the buffer, which you certainly have in the current\n> code right here:\n>\n> + if (fscanf(file, \"%s\\t%s\\t%d\\t%23[^\\t] %s\\n\", filetype, filename,\n>\n> filetype is declared as char[10], but %s could read arbitrarily much data.\n>\n\nnow with this revised logic, we don't use this anymore.\n\n\n>\n> + filename = (char*) pg_malloc(MAXPGPATH);\n>\n> pg_malloc returns void *, so no cast is required.\n>\n>\nfixed.\n\n\n> + if (strcmp(checksum_with_type, \"-\") == 0)\n> + {\n> + checksum_type = MC_NONE;\n> + }\n> + else\n> + {\n> + if (strncmp(checksum_with_type, \"SHA256\", 6) == 0)\n>\n> Use parse_checksum_algorithm. Right now you've invented a \"common\"\n> function with 1 caller, but I explicitly suggested previously that you\n> put it in common so that you could reuse it.\n>\n\nwhile parsing the record, we get <checktype>:<checksum> as a string for\nchecksum.\nparse_checksum_algorithm uses pg_strcasecmp() so we need to pass exact\nstring to that function.\nwith current logic, we can't add '\\0' in between the line unless we parse\nit completely.\nSo we may need to allocate another small buffer and copy only checksum type\nin that and pass that to\n parse_checksum_algorithm. I don't think of any other solution apart from\nthis. 
I might be missing something\nhere, please correct me if I am wrong.\n\n\n> + if (strcmp(de->d_name, \".\") == 0 || strcmp(de->d_name, \"..\") == 0\n> ||\n> + strcmp(de->d_name, \"pg_wal\") == 0)\n> + continue;\n>\n> Ignoring pg_wal at the top level might be OK, but this will ignore a\n> pg_wal entry anywhere in the directory tree.\n>\n> + /* Skip backup manifest file. */\n> + if (strcmp(de->d_name, \"backup_manifest\") == 0)\n> + return;\n>\n> Same problem.\n>\n\nYou are right. Added extra check for this.\n\n\n>\n> + filename = createPQExpBuffer();\n> + if (!filename)\n> + {\n> + pg_log_error(\"out of memory\");\n> + exit(1);\n> + }\n> +\n> + appendPQExpBuffer(filename, \"%s%s\", relative_path, de->d_name);\n>\n> Just use char filename[MAXPGPATH] and snprintf here, as you do\n> elsewhere. It will be simpler and save memory.\n>\nFixed.\n\nTAP test case patch needs some modification, Will do that and submit.\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Fri, 20 Dec 2019 18:54:20 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Fixed some typos in attached v5-0002 patch. Please consider this patch for\nreview.\n\nOn Fri, Dec 20, 2019 at 6:54 PM Suraj Kharage <\nsuraj.kharage@enterprisedb.com> wrote:\n\n> Thank you for review comments.\n>\n> On Thu, Dec 19, 2019 at 2:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Tue, Dec 17, 2019 at 12:54 AM Suraj Kharage\n>> <suraj.kharage@enterprisedb.com> wrote:\n>> > I have implemented the simplehash in backup validator patch as Robert\n>> suggested. Please find attached 0002 patch for the same.\n>> >\n>> > kindly review and let me know your thoughts.\n>>\n>> +#define CHECKSUM_LENGTH 256\n>>\n>> This seems wrong. Not all checksums are the same length, and none of\n>> the ones we're using are 256 bytes in length, and if we've got to have\n>> a constant someplace for the maximum checksum length, it should\n>> probably be in the new header file, not here. But I don't think we\n>> should need this in the first place; see comments below about how to\n>> revise the parsing of the manifest file.\n>>\n>\n> I agree. Removed.\n>\n> + char filetype[10];\n>>\n>> A mysterious 10-byte field with no comments explaining what it\n>> means... and the same magic number 10 appears in at least one other\n>> place in the patch.\n>>\n>\n> with current logic, we don't need this anymore.\n> I have removed the filetype from the structure as we are not doing any\n> comparison anywhere.\n>\n>\n>>\n>> +typedef struct manifesthash_hash *hashtab;\n>>\n>> This declares a new *type* called hashtab, not a variable called\n>> hashtab. The new type is not used anywhere, but later, you have\n>> several variables of the same type that have this name. Just remove\n>> this: it's wrong and unused.\n>>\n>>\n> corrected.\n>\n>\n>> +static enum ChecksumAlgorithm checksum_type = MC_NONE;\n>>\n>> Remove \"enum\". 
Not needed, because you have a typedef for it in the\n>> header, and not per style.\n>>\n>> corrected.\n>\n>\n>> +static manifesthash_hash *create_manifest_hash(char\n>> manifest_path[MAXPGPATH]);\n>>\n>> Whitespace is wrong. The whole patch needs a visit from pgindent with\n>> a properly-updated typedefs.list.\n>>\n>> Also, you will struggle to find anywhere else in the code base where\n>> pass a character array as a function argument. I don't know why this\n>> isn't just char *.\n>>\n>\n> Corrected.\n>\n>\n>>\n>> + if(verify_backup)\n>>\n>> Whitespace wrong here, too.\n>>\n>>\n> Fixed\n>\n>\n>>\n>> It's also pretty unhelpful, here and elsewhere, to refer to \"the hash\n>> table\" as if there were only one, and as if the reader were supposed\n>> to know something about it when you haven't told them anything about\n>> it.\n>>\n>> + if (!entry->matched)\n>> + {\n>> + pg_log_info(\"missing file: %s\", entry->filename);\n>> + }\n>> +\n>>\n>> The braces here are not project style. We usually omit braces when\n>> only a single line of code is present.\n>>\n>\n> fixed\n>\n>\n>>\n>> I think some work needs to be done to standardize and improve the\n>> messages that get produced here. You have:\n>>\n>> 1. missing file: %s\n>> 2. duplicate file present: %s\n>> 3. size changed for file: %s, original size: %d, current size: %zu\n>> 4. checksum difference for file: %s\n>> 5. extra file found: %s\n>>\n>> I suggest:\n>>\n>> 1. file \\\"%s\\\" is present in manifest but missing from the backup\n>> 2. file \\\"%s\\\" has multiple manifest entries\n>> (this one should probably be pg_log_error(), not pg_log_info(), as it\n>> represents a corrupt-manifest problem)\n>> 3. file \\\"%s\" has size %lu in manifest but size %lu in backup\n>> 4. file \\\"%s\" has checksum %s in manifest but checksum %s in backup\n>> 5. 
file \\\"%s\" is present in backup but not in manifest\n>>\n>\n> Corrected.\n>\n>\n>>\n>> Your patch actually doesn't compile on my system, because for the\n>> third message above, it uses %zu to print the size. But %zu is for\n>> size_t, not off_t. I went looking for other places in the code where\n>> we print off_t; based on that, I think the right thing to do is to\n>> print it using %lu and write (unsigned long) st.st_size.\n>>\n>\n> Corrected.\n>\n> + char file_checksum[256];\n>> + char header[1024];\n>>\n>> More arbitrary constants.\n>\n>\n>\n>>\n>> + if (!file)\n>> + {\n>> + pg_log_error(\"could not open backup_manifest\");\n>>\n>> That's bad error reporting. See e.g. readfile() in initdb.c.\n>>\n>\n> Corrected.\n>\n>\n>>\n>> + if (fscanf(file, \"%1023[^\\n]\\n\", header) != 1)\n>> + {\n>> + pg_log_error(\"error while reading the header from\n>> backup_manifest\");\n>>\n>> That's also bad error reporting. It is only a slight step up from\n>> \"ERROR: error\".\n>>\n>> And we have another magic number (1023).\n>>\n>\n> With current logic, we don't need this anymore.\n>\n>\n>>\n>> + appendPQExpBufferStr(manifest, header);\n>> + appendPQExpBufferStr(manifest, \"\\n\");\n>> ...\n>> + appendPQExpBuffer(manifest, \"File\\t%s\\t%d\\t%s\\t%s\\n\", filename,\n>> + filesize, mtime, checksum_with_type);\n>>\n>> This whole thing seems completely crazy to me. Basically, you're\n>> trying to use fscanf() to parse the file. But then, because fscanf()\n>> doesn't give you the original bytes back, you're trying to reassemble\n>> the data that you parsed to recover the original line, so that you can\n>> stuff it in the buffer and eventually checksum it. However, that's\n>> highly error-prone. You're basically duplicating server code, and thus\n>> risking getting out of sync in the server code, to work around a\n>> problem that is entirely self-inflicted, namely, deciding to use\n>> fscanf().\n>>\n>> What I would recommend is:\n>>\n>> 1. 
Use open(), read(), close() rather than the fopen() family of\n>> functions. As we have discovered elsewhere, fread() doesn't promise to\n>> set errno, so we can't necessarily get reliable error-reporting out of\n>> it.\n>>\n>> 2. Before you start reading the file, create a buffer that's large\n>> enough to hold the whole thing, by using fstat() to figure out how big\n>> the file is. Read the whole file into that buffer. If you're not able\n>> to read the whole file -- i.e. open() or read() or close() fail --\n>> then just error out and exit.\n>>\n>> 3. Now advance through the file line by line. Write a function that\n>> knows how to search forward for the next \\r or \\n but with checks to\n>> make sure it can't run off the end of the buffer, and use that to\n>> locate the end of each line so that you can walk forward. As you walk\n>> forward line by line, add the line you just processed to the checksum.\n>> That way, you only need a single pass over the data. Also, you can\n>> modify it in place. More on that below.\n>>\n>> 4. As you examine each line, start by examining the first word. You'll\n>> need a function that finds the first word by searching forward for a\n>> tab character, but not beyond the end of the line. The first word of\n>> the first line should be PostgreSQL-Backup-Manifest-Version and the\n>> second word should be 1. Then on each subsequent line check whether\n>> the first word is File or Manifest-Checksum or something else,\n>> erroring out in the last case. If it's Manifest-Checksum, verify that\n>> this is the last line of the file and that the checksum matches. If\n>> it's File, break the line into fields so you can add it to the hash\n>> table. You'll want a pointer to the filename and a pointer to the\n>> checksum, and you'll want to parse the size as an integer. Instead of\n>> allocating new memory for those fields, just overwrite the character\n>> that follows the field with a \\0. 
There must be one - either \\t or \\n\n>> - so you shouldn't run off the end of the buffer.\n>>\n>> If you do this, a bunch of the fixed-size buffers you have right now\n>> go away. You don't need the variable filetype[10] any more, or\n>> checksum_with_type[CHECKSUM_LENGTH], or checksum[CHECKSUM_LENGTH], or\n>> the character arrays inside DataDirectoryFileInfo. Instead you can\n>> just have pointers into the buffer that contains the file. And you\n>> don't need this code to back up using fseek() and reread the lines,\n>> either.\n>>\n>>\n> Thanks for the suggestion. I tried to mimic your approach in the attached\n> v4-0002 patch.\n> Please let me know your thoughts on the same.\n>\n> Also read this article:\n>>\n>> https://stackoverflow.com/questions/2430303/disadvantages-of-scanf\n>>\n>> Note that the very first point in the article talks about the problem\n>> of overrunning the buffer, which you certainly have in the current\n>> code right here:\n>>\n>> + if (fscanf(file, \"%s\\t%s\\t%d\\t%23[^\\t] %s\\n\", filetype, filename,\n>>\n>> filetype is declared as char[10], but %s could read arbitrarily much data.\n>>\n>\n> now with this revised logic, we don't use this anymore.\n>\n>\n>>\n>> + filename = (char*) pg_malloc(MAXPGPATH);\n>>\n>> pg_malloc returns void *, so no cast is required.\n>>\n>>\n> fixed.\n>\n>\n>> + if (strcmp(checksum_with_type, \"-\") == 0)\n>> + {\n>> + checksum_type = MC_NONE;\n>> + }\n>> + else\n>> + {\n>> + if (strncmp(checksum_with_type, \"SHA256\", 6) == 0)\n>>\n>> Use parse_checksum_algorithm. 
Right now you've invented a \"common\"\n>> function with 1 caller, but I explicitly suggested previously that you\n>> put it in common so that you could reuse it.\n>>\n>\n> while parsing the record, we get <checktype>:<checksum> as a string for\n> checksum.\n> parse_checksum_algorithm uses pg_strcasecmp() so we need to pass exact\n> string to that function.\n> with current logic, we can't add '\\0' in between the line unless we parse\n> it completely.\n> So we may need to allocate another small buffer and copy only checksum\n> type in that and pass that to\n> parse_checksum_algorithm. I don't think of any other solution apart from\n> this. I might be missing something\n> here, please correct me if I am wrong.\n>\n>\n>> + if (strcmp(de->d_name, \".\") == 0 || strcmp(de->d_name, \"..\") ==\n>> 0 ||\n>> + strcmp(de->d_name, \"pg_wal\") == 0)\n>> + continue;\n>>\n>> Ignoring pg_wal at the top level might be OK, but this will ignore a\n>> pg_wal entry anywhere in the directory tree.\n>>\n>> + /* Skip backup manifest file. */\n>> + if (strcmp(de->d_name, \"backup_manifest\") == 0)\n>> + return;\n>>\n>> Same problem.\n>>\n>\n> You are right. Added extra check for this.\n>\n>\n>>\n>> + filename = createPQExpBuffer();\n>> + if (!filename)\n>> + {\n>> + pg_log_error(\"out of memory\");\n>> + exit(1);\n>> + }\n>> +\n>> + appendPQExpBuffer(filename, \"%s%s\", relative_path, de->d_name);\n>>\n>> Just use char filename[MAXPGPATH] and snprintf here, as you do\n>> elsewhere. It will be simpler and save memory.\n>>\n> Fixed.\n>\n> TAP test case patch needs some modification, Will do that and submit.\n>\n> --\n> --\n>\n> Thanks & Regards,\n> Suraj kharage,\n> EnterpriseDB Corporation,\n> The Postgres Database Company.\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Fri, 20 Dec 2019 20:40:57 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 8:24 AM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n> Thank you for review comments.\n\nThanks for the new version.\n\n+ <term><option>--verify-backup </option></term>\n\nWhitespace.\n\n+struct manifesthash_hash *hashtab;\n\nUh, I had it in mind that you would nuke this line completely, not\njust remove \"typedef\" from it. You shouldn't need a global variable\nhere.\n\n+ if (buf == NULL)\n\npg_malloc seems to have an internal check such that it never returns\nNULL. I don't see anything like this test in other callers.\n\nThe order of operations in create_manifest_hash() seems unusual:\n\n+ fd = open(manifest_path, O_RDONLY, 0);\n+ if (fstat(fd, &stat))\n+ buf = pg_malloc(stat.st_size);\n+ hashtab = manifesthash_create(1024, NULL);\n...\n+ entry = manifesthash_insert(hashtab, filename, &found);\n...\n+ close(fd);\n\nI would have expected open-fstat-read-close to be consecutive, and the\nmanifesthash stuff all done afterwards. In fact, it seems like reading\nthe file could be a separate function.\n\n+ if (strncmp(checksum, \"SHA256\", 6) == 0)\n\nThis isn't really right; it would give a false match if we had a\nchecksum algorithm with a name like SHA2560 or SHA256C or\nSHA256ExceptWayBetter. The right thing to do is find the colon first,\nand then probably overwrite it with '\\0' so that you have a string\nthat you can pass to parse_checksum_algorithm().\n\n+ /*\n+ * we don't have checksum type in the header, so need to\n+ * read through the first file enttry to find the checksum\n+ * type for the manifest file and initilize the checksum\n+ * for the manifest file itself.\n+ */\n\nThis seems to be proceeding on the assumption that the checksum type\nfor the manifest itself will always be the same as the checksum type\nfor the first file in the manifest. I don't think that's the right\napproach. 
I think the manifest should always have a SHA256 checksum,\nregardless of what type of checksum is used for the individual files\nwithin the manifest. Since the volume of data in the manifest is\npresumably very small compared to the size of the database cluster\nitself, I don't think there should be any performance problem there.\n\n+ filesize = atol(size);\n\nUsing strtol() would allow for some error checking.\n\n+ * Increase the checksum by its lable length so that we can\n+ checksum = checksum + checksum_lable_length;\n\nSpelling.\n\n+ pg_log_error(\"invalid record found in \\\"%s\\\"\", manifest_path);\n\nError message needs work.\n\n+VerifyBackup(void)\n+create_manifest_hash(char *manifest_path)\n+nextLine(char *buf)\n\nYour function names should be consistent with the surrounding style,\nand with each other, as far as possible. Three different conventions\nwithin the same patch and source file seems over the top.\n\nAlso keep in mind that you're not writing code in a vacuum. There's a\nwhole file of code here, and around that, a whole project.\nscan_data_directory() is a good example of a function whose name is\nclearly too generic. It's not a general-purpose function for scanning\nthe data directory; it's specifically a support function for verifying\na backup. Yet, the name gives no hint of this.\n\n+verify_file(struct dirent *de, char fn[MAXPGPATH], struct stat st,\n+ char relative_path[MAXPGPATH], manifesthash_hash *hashtab)\n\nI think I commented on the use of char[] parameters in my previous review.\n\n+ /* Skip backup manifest file. */\n+ if (strcmp(de->d_name, \"backup_manifest\") == 0)\n+ return;\n\nStill looks like this will be skipped at any level of the directory\nhierarchy, not just the top. And why are we skipping backup_manifest\nhere but pg_wal in scan_data_directory? That's a rhetorical question,\nbecause I know the answer: verify_file() is only getting called for\nfiles, so you can't use it to skip directories. 
But that's not a good\nexcuse for putting closely-related checks in different parts of the\ncode. It's just going to result in the checks being inconsistent and\neach one having its own bugs that have to be fixed separately from the\nother one, as here. Please try to reorganize this code so that it can\nbe done in a consistent way.\n\nI think this is related to the way you're traversing the directory\ntree, which somehow looks a bit awkward to me. At the top of\nscan_data_directory(), you've got code that uses basedir and\nsubdirpath to construct path and relative_path. I was initially\nsurprised to see that this was the job of this function, rather than\nthe caller, but then I thought: well, as long as it makes life easy\nfor the caller, it's probably fine. However, I notice that the only\nnon-trivial caller is the scan_data_directory() itself, and it has to\ngo and construct newsubdirpath from subdirpath and the directory name.\n\nIt seems to me that this would get easier if you defined\nscan_data_directory() -- or whatever we end up calling it -- to take\ntwo pathname-related arguments:\n\n- basepath, which would be $PGDATA and would never change as we\nrecurse down, so same as what you're now calling basedir\n- pathsuffix, which would be an empty string at the top level and at\neach recursive level we'd add a slash and then de->d_name.\n\nSo at the top of the function we wouldn't need an if statement,\nbecause you could just do:\n\nsnprintf(path, MAXPGPATH, \"%s%s\", basedir, pathsuffix);\n\nAnd when you recurse you wouldn't need an if statement either, because\nyou could just do:\n\nsnprintf(newpathsuffix, MAXPGPATH, \"%s/%s\", pathsuffix, de->d_name);\n\nWhat I'd suggest is constructing newpathsuffix right after rejecting\n\".\" and \"..\" entries, and then you can reject both pg_wal and\nbackup_manifest, at the top-level only, using symmetric and elegant\ncode:\n\nif (strcmp(newpathsuffix, \"/pg_wal\") == 0 || strcmp(newpathsuffix,\n\"/backup_manifest\") == 
0)\n continue;\n\n+ record = manifesthash_lookup(hashtab, filename);;\n+ if (record)\n+ {\n...long block...\n+ }\n+ else\n+ pg_log_info(\"file \\\"%s\\\" is present in backup but not in manifest\",\n+ filename);\n\nTry to structure the code in such a way that you minimize unnecessary\nindentation. For example, in this case, you could instead write:\n\nif (record == NULL)\n{\n pg_log_info(...)\n return;\n}\n\nand the result would be that everything inside that long if-block is\nnow at the top level of the function and indented one level less. And\nI think if you look at this function you'll see a way that you can\nsave a *second* level of indentation for much of that code. Please\ncheck the rest of the patch for similar cases, too.\n\n+static char *\n+nextLine(char *buf)\n+{\n+ while (*buf != '\\0' && *buf != '\\n')\n+ buf = buf + 1;\n+\n+ return buf + 1;\n+}\n\nI'm pretty sure that my previous review mentioned the importance of\nprotecting against buffer overruns here.\n\n+static char *\n+nextWord(char *line)\n+{\n+ while (*line != '\\0' && *line != '\\t' && *line != '\\n')\n+ line = line + 1;\n+\n+ return line + 1;\n+}\n\nSame problem here.\n\nIn both cases, ++ is more idiomatic.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Dec 2019 10:43:57 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 9:14 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Dec 20, 2019 at 8:24 AM Suraj Kharage\n> <suraj.kharage@enterprisedb.com> wrote:\n> > Thank you for review comments.\n>\n> Thanks for the new version.\n>\n> + <term><option>--verify-backup </option></term>\n>\n> Whitespace.\n>\n> +struct manifesthash_hash *hashtab;\n>\n> Uh, I had it in mind that you would nuke this line completely, not\n> just remove \"typedef\" from it. You shouldn't need a global variable\n> here.\n>\n> + if (buf == NULL)\n>\n> pg_malloc seems to have an internal check such that it never returns\n> NULL. I don't see anything like this test in other callers.\n>\n> The order of operations in create_manifest_hash() seems unusual:\n>\n> + fd = open(manifest_path, O_RDONLY, 0);\n> + if (fstat(fd, &stat))\n> + buf = pg_malloc(stat.st_size);\n> + hashtab = manifesthash_create(1024, NULL);\n> ...\n> + entry = manifesthash_insert(hashtab, filename, &found);\n> ...\n> + close(fd);\n>\n> I would have expected open-fstat-read-close to be consecutive, and the\n> manifesthash stuff all done afterwards. In fact, it seems like reading\n> the file could be a separate function.\n>\n> + if (strncmp(checksum, \"SHA256\", 6) == 0)\n>\n> This isn't really right; it would give a false match if we had a\n> checksum algorithm with a name like SHA2560 or SHA256C or\n> SHA256ExceptWayBetter. 
The right thing to do is find the colon first,\n> and then probably overwrite it with '\\0' so that you have a string\n> that you can pass to parse_checksum_algorithm().\n>\n> + /*\n> + * we don't have checksum type in the header, so need to\n> + * read through the first file enttry to find the checksum\n> + * type for the manifest file and initilize the checksum\n> + * for the manifest file itself.\n> + */\n>\n> This seems to be proceeding on the assumption that the checksum type\n> for the manifest itself will always be the same as the checksum type\n> for the first file in the manifest. I don't think that's the right\n> approach. I think the manifest should always have a SHA256 checksum,\n> regardless of what type of checksum is used for the individual files\n> within the manifest. Since the volume of data in the manifest is\n> presumably very small compared to the size of the database cluster\n> itself, I don't think there should be any performance problem there.\n>\n\nAgreed, performance won't be a problem, but it will be a bit confusing\nto the user: at the start the user provides the manifest checksum\nalgorithm (assume the user chose CRC32C), yet at the end the user will\nfind a SHA256 checksum string in the backup_manifest file.\n\nDoes this also mean that, irrespective of whether the user provided a\nchecksum option or not, we will always generate a checksum for the\nbackup_manifest file?\n\n\n> + filesize = atol(size);\n>\n> Using strtol() would allow for some error checking.\n>\n> + * Increase the checksum by its lable length so that we can\n> + checksum = checksum + checksum_lable_length;\n>\n> Spelling.\n>\n> + pg_log_error(\"invalid record found in \\\"%s\\\"\", manifest_path);\n>\n> Error message needs work.\n>\n> +VerifyBackup(void)\n> +create_manifest_hash(char *manifest_path)\n> +nextLine(char *buf)\n>\n> Your function names should be consistent with the surrounding style,\n> and with each other, as far as possible. 
Three different conventions\n> within the same patch and source file seems over the top.\n>\n> Also keep in mind that you're not writing code in a vacuum. There's a\n> whole file of code here, and around that, a whole project.\n> scan_data_directory() is a good example of a function whose name is\n> clearly too generic. It's not a general-purpose function for scanning\n> the data directory; it's specifically a support function for verifying\n> a backup. Yet, the name gives no hint of this.\n>\n> +verify_file(struct dirent *de, char fn[MAXPGPATH], struct stat st,\n> + char relative_path[MAXPGPATH], manifesthash_hash *hashtab)\n>\n> I think I commented on the use of char[] parameters in my previous review.\n>\n> + /* Skip backup manifest file. */\n> + if (strcmp(de->d_name, \"backup_manifest\") == 0)\n> + return;\n>\n> Still looks like this will be skipped at any level of the directory\n> hierarchy, not just the top. And why are we skipping backup_manifest\n> here bug pg_wal in scan_data_directory? That's a rhetorical question,\n> because I know the answer: verify_file() is only getting called for\n> files, so you can't use it to skip directories. But that's not a good\n> excuse for putting closely-related checks in different parts of the\n> code. It's just going to result in the checks being inconsistent and\n> each one having its own bugs that have to be fixed separately from the\n> other one, as here. Please try to reorganize this code so that it can\n> be done in a consistent way.\n>\n> I think this is related to the way you're traversing the directory\n> tree, which somehow looks a bit awkward to me. At the top of\n> scan_data_directory(), you've got code that uses basedir and\n> subdirpath to construct path and relative_path. I was initially\n> surprised to see that this was the job of this function, rather than\n> the caller, but then I thought: well, as long as it makes life easy\n> for the caller, it's probably fine. 
However, I notice that the only\n> non-trivial caller is the scan_data_directory() itself, and it has to\n> go and construct newsubdirpath from subdirpath and the directory name.\n>\n> It seems to me that this would get easier if you defined\n> scan_data_directory() -- or whatever we end up calling it -- to take\n> two pathname-related arguments:\n>\n> - basepath, which would be $PGDATA and would never change as we\n> recurse down, so same as what you're now calling basedir\n> - pathsuffix, which would be an empty string at the top level and at\n> each recursive level we'd add a slash and then de->d_name.\n>\n> So at the top of the function we wouldn't need an if statement,\n> because you could just do:\n>\n> snprintf(path, MAXPGPATH, \"%s%s\", basedir, pathsuffix);\n>\n> And when you recurse you wouldn't need an if statement either, because\n> you could just do:\n>\n> snprintf(newpathsuffix, MAXPGPATH, \"%s/%s\", pathsuffix, de->d_name);\n>\n> What I'd suggest is constructing newpathsuffix right after rejecting\n> \".\" and \"..\" entries, and then you can reject both pg_wal and\n> backup_manifest, at the top-level only, using symmetric and elegant\n> code:\n>\n> if (strcmp(newpathsuffix, \"/pg_wal\") == 0 || strcmp(newpathsuffix,\n> \"/backup_manifest\") == 0)\n> continue;\n>\n> + record = manifesthash_lookup(hashtab, filename);;\n> + if (record)\n> + {\n> ...long block...\n> + }\n> + else\n> + pg_log_info(\"file \\\"%s\\\" is present in backup but not in manifest\",\n> + filename);\n>\n> Try to structure the code in such a way that you minimize unnecessary\n> indentation. For example, in this case, you could instead write:\n>\n> if (record == NULL)\n> {\n> pg_log_info(...)\n> return;\n> }\n>\n> and the result would be that everything inside that long if-block is\n> now at the top level of the function and indented one level less. 
And\n> I think if you look at this function you'll see a way that you can\n> save a *second* level of indentation for much of that code. Please\n> check the rest of the patch for similar cases, too.\n>\n> +static char *\n> +nextLine(char *buf)\n> +{\n> + while (*buf != '\\0' && *buf != '\\n')\n> + buf = buf + 1;\n> +\n> + return buf + 1;\n> +}\n>\n> I'm pretty sure that my previous review mentioned the importance of\n> protecting against buffer overruns here.\n>\n> +static char *\n> +nextWord(char *line)\n> +{\n> + while (*line != '\\0' && *line != '\\t' && *line != '\\n')\n> + line = line + 1;\n> +\n> + return line + 1;\n> +}\n>\n> Same problem here.\n>\n> In both cases, ++ is more idiomatic.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \nRushabh Lathia\n\nOn Fri, Dec 20, 2019 at 9:14 PM Robert Haas <robertmhaas@gmail.com> wrote:On Fri, Dec 20, 2019 at 8:24 AM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n> Thank you for review comments.\n\nThanks for the new version.\n\n+ <term><option>--verify-backup </option></term>\n\nWhitespace.\n\n+struct manifesthash_hash *hashtab;\n\nUh, I had it in mind that you would nuke this line completely, not\njust remove \"typedef\" from it. You shouldn't need a global variable\nhere.\n\n+ if (buf == NULL)\n\npg_malloc seems to have an internal check such that it never returns\nNULL. I don't see anything like this test in other callers.\n\nThe order of operations in create_manifest_hash() seems unusual:\n\n+ fd = open(manifest_path, O_RDONLY, 0);\n+ if (fstat(fd, &stat))\n+ buf = pg_malloc(stat.st_size);\n+ hashtab = manifesthash_create(1024, NULL);\n...\n+ entry = manifesthash_insert(hashtab, filename, &found);\n...\n+ close(fd);\n\nI would have expected open-fstat-read-close to be consecutive, and the\nmanifesthash stuff all done afterwards. 
In fact, it seems like reading\nthe file could be a separate function.\n\n+ if (strncmp(checksum, \"SHA256\", 6) == 0)\n\nThis isn't really right; it would give a false match if we had a\nchecksum algorithm with a name like SHA2560 or SHA256C or\nSHA256ExceptWayBetter. The right thing to do is find the colon first,\nand then probably overwrite it with '\\0' so that you have a string\nthat you can pass to parse_checksum_algorithm().\n\n+ /*\n+ * we don't have checksum type in the header, so need to\n+ * read through the first file enttry to find the checksum\n+ * type for the manifest file and initilize the checksum\n+ * for the manifest file itself.\n+ */\n\nThis seems to be proceeding on the assumption that the checksum type\nfor the manifest itself will always be the same as the checksum type\nfor the first file in the manifest. I don't think that's the right\napproach. I think the manifest should always have a SHA256 checksum,\nregardless of what type of checksum is used for the individual files\nwithin the manifest. Since the volume of data in the manifest is\npresumably very small compared to the size of the database cluster\nitself, I don't think there should be any performance problem there.Agree, that performance won't be a problem, but that will be bit confusingto the user. As at the start user providing the manifest-checksum (assumethat user-provided CRC32C) and at the end, user will find the SHA256checksum string in the backup_manifest file. 
Does this also means that irrespective of whether user provided a checksumoption or not, we will be always generating the checksum for the backup_manifest file?\n\n+ filesize = atol(size);\n\nUsing strtol() would allow for some error checking.\n\n+ * Increase the checksum by its lable length so that we can\n+ checksum = checksum + checksum_lable_length;\n\nSpelling.\n\n+ pg_log_error(\"invalid record found in \\\"%s\\\"\", manifest_path);\n\nError message needs work.\n\n+VerifyBackup(void)\n+create_manifest_hash(char *manifest_path)\n+nextLine(char *buf)\n\nYour function names should be consistent with the surrounding style,\nand with each other, as far as possible. Three different conventions\nwithin the same patch and source file seems over the top.\n\nAlso keep in mind that you're not writing code in a vacuum. There's a\nwhole file of code here, and around that, a whole project.\nscan_data_directory() is a good example of a function whose name is\nclearly too generic. It's not a general-purpose function for scanning\nthe data directory; it's specifically a support function for verifying\na backup. Yet, the name gives no hint of this.\n\n+verify_file(struct dirent *de, char fn[MAXPGPATH], struct stat st,\n+ char relative_path[MAXPGPATH], manifesthash_hash *hashtab)\n\nI think I commented on the use of char[] parameters in my previous review.\n\n+ /* Skip backup manifest file. */\n+ if (strcmp(de->d_name, \"backup_manifest\") == 0)\n+ return;\n\nStill looks like this will be skipped at any level of the directory\nhierarchy, not just the top. And why are we skipping backup_manifest\nhere bug pg_wal in scan_data_directory? That's a rhetorical question,\nbecause I know the answer: verify_file() is only getting called for\nfiles, so you can't use it to skip directories. But that's not a good\nexcuse for putting closely-related checks in different parts of the\ncode. 
It's just going to result in the checks being inconsistent and\neach one having its own bugs that have to be fixed separately from the\nother one, as here. Please try to reorganize this code so that it can\nbe done in a consistent way.\n\nI think this is related to the way you're traversing the directory\ntree, which somehow looks a bit awkward to me. At the top of\nscan_data_directory(), you've got code that uses basedir and\nsubdirpath to construct path and relative_path. I was initially\nsurprised to see that this was the job of this function, rather than\nthe caller, but then I thought: well, as long as it makes life easy\nfor the caller, it's probably fine. However, I notice that the only\nnon-trivial caller is the scan_data_directory() itself, and it has to\ngo and construct newsubdirpath from subdirpath and the directory name.\n\nIt seems to me that this would get easier if you defined\nscan_data_directory() -- or whatever we end up calling it -- to take\ntwo pathname-related arguments:\n\n- basepath, which would be $PGDATA and would never change as we\nrecurse down, so same as what you're now calling basedir\n- pathsuffix, which would be an empty string at the top level and at\neach recursive level we'd add a slash and then de->d_name.\n\nSo at the top of the function we wouldn't need an if statement,\nbecause you could just do:\n\nsnprintf(path, MAXPGPATH, \"%s%s\", basedir, pathsuffix);\n\nAnd when you recurse you wouldn't need an if statement either, because\nyou could just do:\n\nsnprintf(newpathsuffix, MAXPGPATH, \"%s/%s\", pathsuffix, de->d_name);\n\nWhat I'd suggest is constructing newpathsuffix right after rejecting\n\".\" and \"..\" entries, and then you can reject both pg_wal and\nbackup_manifest, at the top-level only, using symmetric and elegant\ncode:\n\nif (strcmp(newpathsuffix, \"/pg_wal\") == 0 || strcmp(newpathsuffix,\n\"/backup_manifest\") == 0)\n continue;\n\n+ record = manifesthash_lookup(hashtab, filename);;\n+ if (record)\n+ {\n...long 
block...\n+ }\n+ else\n+ pg_log_info(\"file \\\"%s\\\" is present in backup but not in manifest\",\n+ filename);\n\nTry to structure the code in such a way that you minimize unnecessary\nindentation. For example, in this case, you could instead write:\n\nif (record == NULL)\n{\n pg_log_info(...)\n return;\n}\n\nand the result would be that everything inside that long if-block is\nnow at the top level of the function and indented one level less. And\nI think if you look at this function you'll see a way that you can\nsave a *second* level of indentation for much of that code. Please\ncheck the rest of the patch for similar cases, too.\n\n+static char *\n+nextLine(char *buf)\n+{\n+ while (*buf != '\\0' && *buf != '\\n')\n+ buf = buf + 1;\n+\n+ return buf + 1;\n+}\n\nI'm pretty sure that my previous review mentioned the importance of\nprotecting against buffer overruns here.\n\n+static char *\n+nextWord(char *line)\n+{\n+ while (*line != '\\0' && *line != '\\t' && *line != '\\n')\n+ line = line + 1;\n+\n+ return line + 1;\n+}\n\nSame problem here.\n\nIn both cases, ++ is more idiomatic.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n-- Rushabh Lathia",
"msg_date": "Mon, 23 Dec 2019 10:02:28 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
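The basepath/pathsuffix scheme Robert describes above can be sketched as below. This is only an illustration of the two `snprintf` calls and the top-level rejection test from his review; the helper names (`current_path`, `entry_suffix`, `is_ignored_entry`) are invented here and do not appear in the patch.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define MAXPGPATH 1024

/*
 * Sketch of the traversal scheme suggested in the review: basepath
 * ($PGDATA) never changes as we recurse, while pathsuffix starts empty
 * and grows by "/" plus the directory entry name at each level.  That
 * makes top-level-only entries rejectable with a plain full-string
 * comparison instead of prefix tests scattered through the code.
 */

/* Path of the directory currently being scanned. */
static void
current_path(const char *basepath, const char *pathsuffix, char *path)
{
	snprintf(path, MAXPGPATH, "%s%s", basepath, pathsuffix);
}

/* Suffix for one entry within the directory being scanned. */
static void
entry_suffix(const char *pathsuffix, const char *entry_name,
			 char *newpathsuffix)
{
	snprintf(newpathsuffix, MAXPGPATH, "%s/%s", pathsuffix, entry_name);
}

/* Ignore pg_wal and backup_manifest, at the top level only. */
static int
is_ignored_entry(const char *newpathsuffix)
{
	return strcmp(newpathsuffix, "/pg_wal") == 0 ||
		strcmp(newpathsuffix, "/backup_manifest") == 0;
}
```

Because the suffix always carries the full relative path, `/pg_wal` only matches at the top level; a nested `base/pg_wal` would produce `/base/pg_wal` and fall through.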
{
"msg_contents": "On Sun, Dec 22, 2019 at 8:32 PM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n> Agree, that performance won't be a problem, but that will be bit confusing\n> to the user. As at the start user providing the manifest-checksum (assume\n> that user-provided CRC32C) and at the end, user will find the SHA256\n> checksum string in the backup_manifest file.\n\nI don't think that's particularly confusing. The documentation should\nsay that this is the algorithm to be used for checksumming the files\nwhich are backed up. The algorithm to be used for the manifest itself\nis another matter. To me, it seems far MORE confusing if the algorithm\nused for the manifest itself is magically inferred from the algorithm\nused for one of the File lines therein.\n\n> Does this also means that irrespective of whether user provided a checksum\n> option or not, we will be always generating the checksum for the backup_manifest file?\n\nYes, that is what I am proposing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 23 Dec 2019 20:50:54 -0800",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
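The colon-splitting approach from the earlier review (find the colon first, overwrite it with '\0', then compare the whole algorithm name) might look like the sketch below. `is_sha256_entry` is a stand-in for the patch's call into parse_checksum_algorithm(), not actual patch code.

```c
#include <assert.h>
#include <string.h>

/*
 * Split a "ALGORITHM:checksum" field at the first colon so the
 * algorithm name can be compared as a whole token.  This avoids the
 * false prefix matches the review warns about, e.g. a hypothetical
 * "SHA2560" passing a strncmp(..., "SHA256", 6) test.  The real code
 * would hand the terminated name to parse_checksum_algorithm();
 * a plain strcmp stands in for that here.
 */
static int
is_sha256_entry(char *field)
{
	char	   *colon = strchr(field, ':');

	if (colon == NULL)
		return 0;				/* malformed entry: no algorithm separator */
	*colon = '\0';				/* terminate the algorithm name in place */
	return strcmp(field, "SHA256") == 0;
}
```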
{
"msg_contents": "Thank you for review comments.\n\nOn Fri, Dec 20, 2019 at 9:14 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Dec 20, 2019 at 8:24 AM Suraj Kharage\n> <suraj.kharage@enterprisedb.com> wrote:\n> > Thank you for review comments.\n>\n> Thanks for the new version.\n>\n> + <term><option>--verify-backup </option></term>\n>\n> Whitespace.\n>\nCorrected.\n\n\n>\n> +struct manifesthash_hash *hashtab;\n>\n> Uh, I had it in mind that you would nuke this line completely, not\n> just remove \"typedef\" from it. You shouldn't need a global variable\n> here.\n>\n\nRemoved.\n\n\n> + if (buf == NULL)\n>\n> pg_malloc seems to have an internal check such that it never returns\n> NULL. I don't see anything like this test in other callers.\n>\n\nYeah, removed this check\n\n\n>\n> The order of operations in create_manifest_hash() seems unusual:\n>\n> + fd = open(manifest_path, O_RDONLY, 0);\n> + if (fstat(fd, &stat))\n> + buf = pg_malloc(stat.st_size);\n> + hashtab = manifesthash_create(1024, NULL);\n> ...\n> + entry = manifesthash_insert(hashtab, filename, &found);\n> ...\n> + close(fd);\n>\n> I would have expected open-fstat-read-close to be consecutive, and the\n> manifesthash stuff all done afterwards. In fact, it seems like reading\n> the file could be a separate function.\n>\n\nYes, created new function which will read the file and return the buffer.\n\n\n>\n> + if (strncmp(checksum, \"SHA256\", 6) == 0)\n>\n> This isn't really right; it would give a false match if we had a\n> checksum algorithm with a name like SHA2560 or SHA256C or\n> SHA256ExceptWayBetter. The right thing to do is find the colon first,\n> and then probably overwrite it with '\\0' so that you have a string\n> that you can pass to parse_checksum_algorithm().\n>\n\nCorrected this check. 
Below suggestion, allow us to put '\\0' in between the\nline.\nsince SHA256 is used to generate for backup manifest, so that we can feed\nthat\nline early to the checksum machinery.\n\n\n>\n> + /*\n> + * we don't have checksum type in the header, so need to\n> + * read through the first file enttry to find the checksum\n> + * type for the manifest file and initilize the checksum\n> + * for the manifest file itself.\n> + */\n>\n> This seems to be proceeding on the assumption that the checksum type\n> for the manifest itself will always be the same as the checksum type\n> for the first file in the manifest. I don't think that's the right\n> approach. I think the manifest should always have a SHA256 checksum,\n> regardless of what type of checksum is used for the individual files\n> within the manifest. Since the volume of data in the manifest is\n> presumably very small compared to the size of the database cluster\n> itself, I don't think there should be any performance problem there.\n>\nMade the change in backup manifest as well in backup validatort patch.\nThanks to Rushabh Lathia for the offline discussion and help.\n\nTo examine the first word of each line, I am using below check:\nif (strncmp(line, \"File\", 4) == 0)\n{\n..\n}\nelse if (strncmp(line, \"Manifest-Checksum\", 17) == 0)\n{\n..\n}\nelse\n error\n\nstrncmp might be not right here, but we can not put '\\0' in between the\nline (to find out first word)\nbefore we recognize the line type.\nAll the lines expect line last one (where we have manifest checksum) are\nfeed to the checksum machinary to calculate manifest checksum.\nso update_checksum() should be called after recognizing the type, i.e: if\nit is a File type record. 
Do you see any issues with this?\n\n+ filesize = atol(size);\n>\n> Using strtol() would allow for some error checking.\n>\ncorrected.\n\n\n>\n> + * Increase the checksum by its lable length so that we can\n> + checksum = checksum + checksum_lable_length;\n>\n> Spelling.\n>\ncorrected.\n\n\n>\n> + pg_log_error(\"invalid record found in \\\"%s\\\"\", manifest_path);\n>\n> Error message needs work.\n>\n> +VerifyBackup(void)\n> +create_manifest_hash(char *manifest_path)\n> +nextLine(char *buf)\n>\n> Your function names should be consistent with the surrounding style,\n> and with each other, as far as possible. Three different conventions\n> within the same patch and source file seems over the top.\n>\n> Also keep in mind that you're not writing code in a vacuum. There's a\n> whole file of code here, and around that, a whole project.\n> scan_data_directory() is a good example of a function whose name is\n> clearly too generic. It's not a general-purpose function for scanning\n> the data directory; it's specifically a support function for verifying\n> a backup. Yet, the name gives no hint of this.\n>\n> +verify_file(struct dirent *de, char fn[MAXPGPATH], struct stat st,\n> + char relative_path[MAXPGPATH], manifesthash_hash *hashtab)\n>\n> I think I commented on the use of char[] parameters in my previous review.\n>\n> + /* Skip backup manifest file. */\n> + if (strcmp(de->d_name, \"backup_manifest\") == 0)\n> + return;\n>\n> Still looks like this will be skipped at any level of the directory\n> hierarchy, not just the top. And why are we skipping backup_manifest\n> here bug pg_wal in scan_data_directory? That's a rhetorical question,\n> because I know the answer: verify_file() is only getting called for\n> files, so you can't use it to skip directories. But that's not a good\n> excuse for putting closely-related checks in different parts of the\n> code. 
It's just going to result in the checks being inconsistent and\n> each one having its own bugs that have to be fixed separately from the\n> other one, as here. Please try to reorganize this code so that it can\n> be done in a consistent way.\n>\n> I think this is related to the way you're traversing the directory\n> tree, which somehow looks a bit awkward to me. At the top of\n> scan_data_directory(), you've got code that uses basedir and\n> subdirpath to construct path and relative_path. I was initially\n> surprised to see that this was the job of this function, rather than\n> the caller, but then I thought: well, as long as it makes life easy\n> for the caller, it's probably fine. However, I notice that the only\n> non-trivial caller is the scan_data_directory() itself, and it has to\n> go and construct newsubdirpath from subdirpath and the directory name.\n>\n> It seems to me that this would get easier if you defined\n> scan_data_directory() -- or whatever we end up calling it -- to take\n> two pathname-related arguments:\n>\n> - basepath, which would be $PGDATA and would never change as we\n> recurse down, so same as what you're now calling basedir\n> - pathsuffix, which would be an empty string at the top level and at\n> each recursive level we'd add a slash and then de->d_name.\n>\n> So at the top of the function we wouldn't need an if statement,\n> because you could just do:\n>\n> snprintf(path, MAXPGPATH, \"%s%s\", basedir, pathsuffix);\n>\n> And when you recurse you wouldn't need an if statement either, because\n> you could just do:\n>\n> snprintf(newpathsuffix, MAXPGPATH, \"%s/%s\", pathsuffix, de->d_name);\n>\n> What I'd suggest is constructing newpathsuffix right after rejecting\n> \".\" and \"..\" entries, and then you can reject both pg_wal and\n> backup_manifest, at the top-level only, using symmetric and elegant\n> code:\n>\n> if (strcmp(newpathsuffix, \"/pg_wal\") == 0 || strcmp(newpathsuffix,\n> \"/backup_manifest\") == 0)\n> 
continue;\n>\n\nThanks for the suggestion. Corrected as per the above inputs.\n\n\n> + record = manifesthash_lookup(hashtab, filename);;\n> + if (record)\n> + {\n> ...long block...\n> + }\n> + else\n> + pg_log_info(\"file \\\"%s\\\" is present in backup but not in manifest\",\n> + filename);\n>\n> Try to structure the code in such a way that you minimize unnecessary\n> indentation. For example, in this case, you could instead write:\n>\n> if (record == NULL)\n> {\n> pg_log_info(...)\n> return;\n> }\n>\n> and the result would be that everything inside that long if-block is\n> now at the top level of the function and indented one level less. And\n> I think if you look at this function you'll see a way that you can\n> save a *second* level of indentation for much of that code. Please\n> check the rest of the patch for similar cases, too.\n>\n\nMake sense. corrected.\n\n\n>\n> +static char *\n> +nextLine(char *buf)\n> +{\n> + while (*buf != '\\0' && *buf != '\\n')\n> + buf = buf + 1;\n> +\n> + return buf + 1;\n> +}\n>\n> I'm pretty sure that my previous review mentioned the importance of\n> protecting against buffer overruns here.\n>\n> +static char *\n> +nextWord(char *line)\n> +{\n> + while (*line != '\\0' && *line != '\\t' && *line != '\\n')\n> + line = line + 1;\n> +\n> + return line + 1;\n> +}\n>\n> Same problem here.\n>\n> In both cases, ++ is more idiomatic.\n>\nI have added a check for EOF, but not sure whether that woule be right here.\nDo we need to check the length of buffer as well?\n\nRajkaumar has changed the tap test case patch as per revised error\nmessages.\nPlease find attached patch stack incorporated the above comments.\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Tue, 24 Dec 2019 16:11:50 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
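The strtol() suggestion from the review can be sketched as follows; `parse_filesize` is a hypothetical helper for illustration, not code from the patch.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/*
 * Unlike atol(), strtol() reports range errors through errno and
 * leaves a pointer to the first unparsed character, so garbage such
 * as "8192x" or an empty field can be rejected instead of silently
 * becoming some number.  Returns 1 on success, 0 on a parse error.
 */
static int
parse_filesize(const char *str, long *result)
{
	char	   *endptr;

	errno = 0;
	*result = strtol(str, &endptr, 10);
	if (errno != 0 ||			/* overflow/underflow */
		endptr == str ||		/* no digits consumed */
		*endptr != '\0' ||		/* trailing junk */
		*result < 0)			/* a size cannot be negative */
		return 0;
	return 1;
}
```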
{
"msg_contents": "On Tue, Dec 24, 2019 at 5:42 AM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n> Made the change in backup manifest as well in backup validatort patch. Thanks to Rushabh Lathia for the offline discussion and help.\n>\n> To examine the first word of each line, I am using below check:\n> if (strncmp(line, \"File\", 4) == 0)\n> {\n> ..\n> }\n> else if (strncmp(line, \"Manifest-Checksum\", 17) == 0)\n> {\n> ..\n> }\n> else\n> error\n>\n> strncmp might be not right here, but we can not put '\\0' in between the line (to find out first word)\n> before we recognize the line type.\n> All the lines expect line last one (where we have manifest checksum) are feed to the checksum machinary to calculate manifest checksum.\n> so update_checksum() should be called after recognizing the type, i.e: if it is a File type record. Do you see any issues with this?\n\nI see the problem, but I don't think your solution is right, because\nthe first test would pass if the line said FiletMignon rather than\njust File, which we certainly don't want. You've got to write the test\nso that you're checking against the whole first word, not just some\nprefix of it. There are several possible ways to accomplish that, but\nthis isn't one of them.\n\n>> + pg_log_error(\"invalid record found in \\\"%s\\\"\", manifest_path);\n>>\n>> Error message needs work.\n\nLooks better now, but you have a messages that say \"invalid checksums\ntype \\\"%s\\\" found in \\\"%s\\\"\". This is wrong because checksums would\nneed to be singular in this context (checksum). Also, I think it could\nbe better phrased as \"manifest file \\\"%s\\\" specifies unknown checksum\nalgorithm \\\"%s\\\" at line %d\".\n\n>> Your function names should be consistent with the surrounding style,\n>> and with each other, as far as possible. 
Three different conventions\n>> within the same patch and source file seems over the top.\n\nThis appears to be fixed.\n\n>> Also keep in mind that you're not writing code in a vacuum. There's a\n>> whole file of code here, and around that, a whole project.\n>> scan_data_directory() is a good example of a function whose name is\n>> clearly too generic. It's not a general-purpose function for scanning\n>> the data directory; it's specifically a support function for verifying\n>> a backup. Yet, the name gives no hint of this.\n\nBut this appears not to be fixed.\n\n>> if (strcmp(newpathsuffix, \"/pg_wal\") == 0 || strcmp(newpathsuffix,\n>> \"/backup_manifest\") == 0)\n>> continue;\n>\n> Thanks for the suggestion. Corrected as per the above inputs.\n\nYou need a comment here, like \"Ignore the possible presence of a\nbackup_manifest file and/or a pg_wal directory in the backup being\nverified.\" and then maybe another sentence explaining why that's the\nright thing to do.\n\n+ * The forth parameter to VerifyFile() will pass the relative path\n+ * of file to match exactly with the filename present in manifest.\n\nI don't know what this comment is trying to tell me, which might be\nsomething you want to try to fix. However, I'm pretty sure it's\nsupposed to say \"fourth\" not \"forth\".\n\n>> and the result would be that everything inside that long if-block is\n>> now at the top level of the function and indented one level less. And\n>> I think if you look at this function you'll see a way that you can\n>> save a *second* level of indentation for much of that code. Please\n>> check the rest of the patch for similar cases, too.\n>\n> Make sense. corrected.\n\nI don't agree. A large chunk of VerifyFile() is still subject to a\nquite unnecessary level of indentation.\n\n> I have added a check for EOF, but not sure whether that woule be right here.\n> Do we need to check the length of buffer as well?\n\nThat's really, really not right. 
EOF is not a character that can\nappear in the buffer. It's chosen on purpose to be a value that never\nmatches any actual character when both the character and the EOF value\nare regarded as values of type 'int'. That guarantee doesn't apply\nhere though because you're dealing with values of type 'char'. So what\nthis code is doing is searching for an impossible value using\nincorrect logic, which has very little to do with the actual need\nhere, which is to avoid running off the end of the buffer. To see what\nthe problem is, try creating a file with no terminating newline, like\nthis:\n\necho -n this file has no terminating newline >> some-file\n\nI doubt it will be very hard to make this patch crash horribly. Even\nif you can't, it seems pretty clear that the logic isn't right.\n\nI don't really know what the \\0 tests in NextLine() and NextWord()\nthink they're doing either. If there's a \\0 in the buffer before you\nadd one, it was in the original input data, and pretending like that\nmarks a word or line boundary seems like a fairly arbitrary choice.\n\nWhat I suggest is:\n\n(1) Allocate one byte more than the file size for the buffer that's\ngoing to hold the file, so that if you write a \\0 just after the last\nbyte of the file, you don't overrun the allocated buffer.\n\n(2) Compute char *endptr = buf + len.\n\n(3) Pass endptr to NextLine and NextWord and write the loop condition\nsomething like while (*buf != '\\n' && buf < endptr).\n\nOther notes:\n\n- The error handling in ReadFileIntoBuffer() does not seem to consider\nthe case of a short read. If you look through the source tree, you can\nfind examples of how we normally handle that.\n\n- Putting string_hash_sdbm() into encode.c seems like a surprising\nchoice. What does this have to do with encoding anything? 
And why is\nit going into src/common at all if it's only intended for frontend\nuse?\n\n- It seems like whether or not any problems were found while verifying\nthe manifest ought to affect the exit status of pg_basebackup. I'm not\nexactly sure what exit codes ought to be used, but you could look for\nsimilar precedents. Document this, too.\n\n- As much as possible let's have errors in the manifest file report\nthe line number, and let's also try to make them more specific, e.g.\ninstead of \"invalid manifest record found in \\\"%s\\\"\", perhaps\n\"manifest file \\\"%s\\\" contains invalid keyword \\\"%s\\\" at line %d\".\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 30 Dec 2019 13:22:59 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
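Robert's three-step recipe (allocate one byte more than the file size, compute `endptr = buf + len`, and bound the loops with `buf < endptr`) together with whole-keyword matching might be sketched like this. The helpers are illustrative reworkings of the patch's NextLine()/NextWord() and keyword tests, not the patch's actual code.

```c
#include <assert.h>
#include <string.h>

/*
 * Bounds-checked line/word scanning: the caller computes
 * endptr = buf + len, so a file with no terminating newline cannot
 * make these helpers run off the end of the buffer.  Note there is
 * no test against EOF here -- EOF is an int sentinel from stdio and
 * can never appear as a buffer byte.
 */
static char *
next_line(char *buf, char *endptr)
{
	while (buf < endptr && *buf != '\n')
		buf++;
	return (buf < endptr) ? buf + 1 : endptr;
}

static char *
next_word(char *line, char *endptr)
{
	while (line < endptr && *line != '\t' && *line != '\n')
		line++;
	return (line < endptr) ? line + 1 : endptr;
}

/*
 * Match a whole keyword, not just a prefix, so "FiletMignon" does not
 * pass as "File".  In a tab-separated manifest line the keyword must
 * be followed by a tab (this delimiter choice is an assumption of the
 * sketch, matching the format under discussion).
 */
static int
keyword_is(const char *line, const char *endptr, const char *kw)
{
	size_t		len = strlen(kw);

	return (size_t) (endptr - line) > len &&
		strncmp(line, kw, len) == 0 &&
		line[len] == '\t';
}
```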
{
    "msg_contents": "Moin,\n\nSorry for the very late reply. There was a discussion about the specific \nformat of the backup manifests, and maybe that was already discussed and \nI just overlooked it:\n\n1) Why invent your own format, and not just use a machine-readable \nformat that already exists? It doesn't have to be full-blown XML, or \neven JSON; something as simple as YAML would already be better. That way \nnot everyone has to write their own parser. Or maybe it is already YAML \nand just the different keywords were under discussion?\n\n2) It would be very wise to add a version number to the format. That \nwill make an extension later much easier and avoids the \"we need to \nadd X, but that breaks compatibility with all software out there\" \nsituations that often arise a few years down the line.\n\nBest regards,\n\nand a happy New Year 2020\n\nTels\n\n\n",
"msg_date": "Tue, 31 Dec 2019 13:30:01 +0100",
"msg_from": "Tels <nospam-pg-abuse@bloodgate.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
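As a purely hypothetical illustration of Tels' versioning point: a tab-separated manifest could carry a version line up front so that later format extensions remain detectable. Apart from the `File` and `Manifest-Checksum` keywords discussed in this thread, every field name and value below is invented for illustration (fields are tab-separated; hashes elided):

```text
PostgreSQL-Backup-Manifest-Version	1
File	base/1/16384	8192	SHA256:...
Manifest-Checksum	SHA256:...
```

A reader that sees a version it does not know can then fail cleanly instead of misparsing lines, which is exactly the "breaks compatibility a few years down the line" situation Tels warns about.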
{
"msg_contents": "On Tue, Dec 31, 2019 at 01:30:01PM +0100, Tels wrote:\n> Moin,\n> \n> sorry for the very late reply. There was a discussion about the specific\n> format of the backup manifests, and maybe that was already discussed and I\n> just overlooked it:\n> \n> 1) Why invent your own format, and not just use a machine-readable format\n> that already exists? It doesn't have to be full blown XML, or even JSON,\n> something simple as YAML would already be better. That way not everyone has\n> to write their own parser. Or maybe it is already YAML and just the\n> different keywords where under discussion?\n\nYAML is extremely fragile and error-prone. It's also a superset of\nJSON, so I don't understand what you mean by \"as simple as.\"\n\n-1 from me on YAML\n\nThat said, I agree that there's no reason to come up with a bespoke\nformat and parser when JSON is already available in every PostgreSQL\ninstallation. Imposing a structure atop that includes a version\nnumber, as you suggest, seems pretty straightforward, and should be\ndone.\n\nWould it make sense to include some kind of capability description in\nthe format along with the version number?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 31 Dec 2019 18:43:26 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 12/31/19 10:43 AM, David Fetter wrote:\n> On Tue, Dec 31, 2019 at 01:30:01PM +0100, Tels wrote:\n>> Moin,\n>>\n>> sorry for the very late reply. There was a discussion about the specific\n>> format of the backup manifests, and maybe that was already discussed and I\n>> just overlooked it:\n>>\n>> 1) Why invent your own format, and not just use a machine-readable format\n>> that already exists? It doesn't have to be full blown XML, or even JSON,\n>> something simple as YAML would already be better. That way not everyone has\n>> to write their own parser. Or maybe it is already YAML and just the\n>> different keywords where under discussion?\n> \n> YAML is extremely fragile and error-prone. It's also a superset of\n> JSON, so I don't understand what you mean by \"as simple as.\"\n> \n> -1 from me on YAML\n\n-1 from me as well. YAML is easy to write but definitely non-trivial to \nread.\n\n> That said, I agree that there's no reason to come up with a bespoke\n> format and parser when JSON is already available in every PostgreSQL\n> installation. Imposing a structure atop that includes a version\n> number, as you suggest, seems pretty straightforward, and should be\n> done.\n\n+1. I continue to support a format that would be easily readable \nwithout writing a lot of code.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 31 Dec 2019 19:16:53 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 9:16 PM David Steele <david@pgmasters.net> wrote:\n> > That said, I agree that there's no reason to come up with a bespoke\n> > format and parser when JSON is already available in every PostgreSQL\n> > installation. Imposing a structure atop that includes a version\n> > number, as you suggest, seems pretty straightforward, and should be\n> > done.\n>\n> +1. I continue to support a format that would be easily readable\n> without writing a lot of code.\n\nSo, if someone can suggest to me how I could read JSON from a tool in\nsrc/bin without writing a lot of code, I'm all ears. So far that's\nbeen asserted but not been demonstrated to be possible. Getting the\nJSON parser that we have in the backend to work from frontend doesn't\nlook all that straightforward, for reasons that I talked about in\nhttp://postgr.es/m/CA+TgmobZrNYR-ATtfZiZ_k-W7tSPgvmYZmyiqumQig4R4fkzHw@mail.gmail.com\n\nAs to the suggestion that a version number be included, that's been\nthere in every version of the patch I've posted.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 1 Jan 2020 13:43:40 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Jan 01, 2020 at 01:43:40PM -0500, Robert Haas wrote:\n> On Tue, Dec 31, 2019 at 9:16 PM David Steele <david@pgmasters.net> wrote:\n> > > That said, I agree that there's no reason to come up with a bespoke\n> > > format and parser when JSON is already available in every PostgreSQL\n> > > installation. Imposing a structure atop that includes a version\n> > > number, as you suggest, seems pretty straightforward, and should be\n> > > done.\n> >\n> > +1. I continue to support a format that would be easily readable\n> > without writing a lot of code.\n> \n> So, if someone can suggest to me how I could read JSON from a tool in\n> src/bin without writing a lot of code, I'm all ears. So far that's\n> been asserted but not been demonstrated to be possible. Getting the\n> JSON parser that we have in the backend to work from frontend doesn't\n> look all that straightforward, for reasons that I talked about in\n> http://postgr.es/m/CA+TgmobZrNYR-ATtfZiZ_k-W7tSPgvmYZmyiqumQig4R4fkzHw@mail.gmail.com\n\nMaybe I'm missing something obvious, but wouldn't combining\npg_read_file() with a cast to JSONB fix this, as below?\n\nshackle@[local]:5413/postgres(13devel)(892328) # SELECT jsonb_pretty(j::jsonb) FROM pg_read_file('/home/shackle/advanced_comparison.json') AS t(j);\n jsonb_pretty \n════════════════════════════════════\n [ ↵\n { ↵\n \"message\": \"hello world!\",↵\n \"severity\": \"[DEBUG]\" ↵\n }, ↵\n { ↵\n \"message\": \"boz\", ↵\n \"severity\": \"[INFO]\" ↵\n }, ↵\n { ↵\n \"message\": \"foo\", ↵\n \"severity\": \"[DEBUG]\" ↵\n }, ↵\n { ↵\n \"message\": \"null\", ↵\n \"severity\": \"null\" ↵\n } ↵\n ]\n(1 row)\n\nTime: 3.050 ms\n\n> As to the suggestion that a version number be included, that's been\n> there in every version of the patch I've posted.\n\nand thanks for that!\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: 
http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 1 Jan 2020 20:09:18 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> On Wed, Jan 01, 2020 at 01:43:40PM -0500, Robert Haas wrote:\n>> So, if someone can suggest to me how I could read JSON from a tool in\n>> src/bin without writing a lot of code, I'm all ears.\n\n> Maybe I'm missing something obvious, but wouldn't combining\n> pg_read_file() with a cast to JSONB fix this, as below?\n\nOnly if you're prepared to restrict the use of the tool to superusers\n(or at least people with whatever privilege that function requires).\n\nAdmittedly, you can probably feed the data to the backend without\nuse of an intermediate file; but it still requires a working backend\nconnection, which might be a bit of a leap for backup-related tools.\nI'm sure Robert was envisioning doing this processing inside the tool.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jan 2020 19:46:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Jan 1, 2020 at 7:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David Fetter <david@fetter.org> writes:\n> > On Wed, Jan 01, 2020 at 01:43:40PM -0500, Robert Haas wrote:\n> >> So, if someone can suggest to me how I could read JSON from a tool in\n> >> src/bin without writing a lot of code, I'm all ears.\n>\n> > Maybe I'm missing something obvious, but wouldn't combining\n> > pg_read_file() with a cast to JSONB fix this, as below?\n>\n> Only if you're prepared to restrict the use of the tool to superusers\n> (or at least people with whatever privilege that function requires).\n>\n> Admittedly, you can probably feed the data to the backend without\n> use of an intermediate file; but it still requires a working backend\n> connection, which might be a bit of a leap for backup-related tools.\n> I'm sure Robert was envisioning doing this processing inside the tool.\n\nYeah, exactly. I don't think verifying a backup should require a\nrunning server, let alone a running server on the same machine where\nthe backup is stored and for which you have superuser privileges.\nAFAICS, the only options to make that work with JSON are (1) introduce\na new hand-coded JSON parser designed for frontend operation, (2) add\na dependency on an external JSON parser that we can use from frontend\ncode, or (3) adapt the existing JSON parser used in the backend so\nthat it can also be used in the frontend.\n\nI'd be willing to do (1) -- it wouldn't be the first time I've written a\nJSON parser for PostgreSQL -- but I think it will take an order of\nmagnitude more code than using a file with tab-separated columns as\nI've proposed, and I assume that there will be complaints about having\ntwo JSON parsers in core.
I'd also be willing to do (2) if that's the\nconsensus, but I'd vote against such an approach if somebody else\nproposed it because (a) I'm not aware of a widely-available library\nupon which we could depend and (b) introducing such a dependency for a\nminor feature like this seems fairly unpalatable to me, and it'd\nprobably still be more code than just using a tab-separated file. I'd\nbe willing to do (3) if somebody could explain to me how to solve the\nproblems with porting that code to work on the frontend side, but the\nonly suggestion so far as to how to do that is to port memory\ncontexts, elog/ereport, and presumably encoding handling to work on the\nfrontend side. That seems to me to be an unreasonably large lift,\nespecially given that we have lots of other files that use ad-hoc\nformats already, and if somebody ever gets around to converting all of\nthose to JSON, they can certainly convert this one at the same time.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 1 Jan 2020 20:57:11 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> AFAICS, the only options to make that work with JSON are (1) introduce\n> a new hand-coded JSON parser designed for frontend operation, (2) add\n> a dependency on an external JSON parser that we can use from frontend\n> code, or (3) adapt the existing JSON parser used in the backend so\n> that it can also be used in the frontend.\n> ... I'd\n> be willing to do (3) if somebody could explain to me how to solve the\n> problems with porting that code to work on the frontend side, but the\n> only suggestion so far as to how to do that is to port memory\n> contexts, elog/report, and presumably encoding handling to work on the\n> frontend side. That seems to me to be an unreasonably large lift,\n\nYeah, agreed. The only consideration that'd make that a remotely\nsane idea is that if somebody did the work, there would be other\nuses for it. (One that comes to mind immediately is cleaning up\necpg's miserably-maintained fork of the backend datetime code.)\n\nBut there's no denying that it would be a large amount of work\n(if it's even feasible), and nobody has stepped up to volunteer.\nIt's not reasonable to hold up this particular feature waiting\nfor that to happen.\n\nIf a tab-delimited file can handle this requirement, that seems\nlike a sane choice to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jan 2020 21:20:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Jan 01, 2020 at 08:57:11PM -0500, Robert Haas wrote:\n> On Wed, Jan 1, 2020 at 7:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > David Fetter <david@fetter.org> writes:\n> > > On Wed, Jan 01, 2020 at 01:43:40PM -0500, Robert Haas wrote:\n> > >> So, if someone can suggest to me how I could read JSON from a tool in\n> > >> src/bin without writing a lot of code, I'm all ears.\n> >\n> > > Maybe I'm missing something obvious, but wouldn't combining\n> > > pg_read_file() with a cast to JSONB fix this, as below?\n> >\n> > Only if you're prepared to restrict the use of the tool to superusers\n> > (or at least people with whatever privilege that function requires).\n> >\n> > Admittedly, you can probably feed the data to the backend without\n> > use of an intermediate file; but it still requires a working backend\n> > connection, which might be a bit of a leap for backup-related tools.\n> > I'm sure Robert was envisioning doing this processing inside the tool.\n> \n> Yeah, exactly. I don't think verifying a backup should require a\n> running server, let alone a running server on the same machine where\n> the backup is stored and for which you have superuser privileges.\n\nThanks for clarifying the context.\n\n> AFAICS, the only options to make that work with JSON are (1) introduce\n> a new hand-coded JSON parser designed for frontend operation, (2) add\n> a dependency on an external JSON parser that we can use from frontend\n> code, or (3) adapt the existing JSON parser used in the backend so\n> that it can also be used in the frontend.\n> \n> I'd be willing to do (1) -- it wouldn't be the first time I've written\n> JSON parser for PostgreSQL -- but I think it will take an order of\n> magnitude more code than using a file with tab-separated columns as\n> I've proposed, and I assume that there will be complaints about having\n> two JSON parsers in core. 
I'd also be willing to do (2) if that's the\n> consensus, but I'd vote against such an approach if somebody else\n> proposed it because (a) I'm not aware of a widely-available library\n> upon which we could depend and\n\nI believe jq has an excellent one that's available under a suitable\nlicense.\n\nMaking jq a dependency seems like a separate discussion, though. At\nthe moment, we don't use git tools like submodule/subtree, and deciding\nwhich (or whether) seems like a gigantic discussion all on its own.\n\n> (b) introducing such a dependency for a minor feature like this\n> seems fairly unpalatable to me, and it'd probably still be more code\n> than just using a tab-separated file. I'd be willing to do (3) if\n> somebody could explain to me how to solve the problems with porting\n> that code to work on the frontend side, but the only suggestion so\n> far as to how to do that is to port memory contexts, elog/ereport,\n> and presumably encoding handling to work on the frontend side.\n\nThis port has come up several times recently in different contexts.\nHow big a chunk of work would it be? Just so we're clear, I'm not\nsuggesting that this port should gate this feature.\n\n> That seems to me to be an unreasonably large lift, especially given\n> that we have lots of other files that use ad-hoc formats already,\n> and if somebody ever gets around to converting all of those to JSON,\n> they can certainly convert this one at the same time.\n\nWould that require some kind of file converter program, or just a\nreally loud notice in the release notes?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Thu, 2 Jan 2020 19:03:23 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 1:03 PM David Fetter <david@fetter.org> wrote:\n> I believe jq has an excellent one that's available under a suitable\n> license.\n>\n> Making jq a dependency seems like a separate discussion, though. At\n> the moment, we don't use git tools like submodel/subtree, and deciding\n> which (or whether) seems like a gigantic discussion all on its own.\n\nYep. And it doesn't seem worth it for a relatively small feature like\nthis. If we already had it, it might be worth using for a relatively\nsmall feature like this, but that's a different issue.\n\n> > (b) introducing such a dependency for a minor feature like this\n> > seems fairly unpalatable to me, and it'd probably still be more code\n> > than just using a tab-separated file. I'd be willing to do (3) if\n> > somebody could explain to me how to solve the problems with porting\n> > that code to work on the frontend side, but the only suggestion so\n> > far as to how to do that is to port memory contexts, elog/report,\n> > and presumably encoding handling to work on the frontend side.\n>\n> This port has come up several times recently in different contexts.\n> How big a chunk of work would it be? Just so we're clear, I'm not\n> suggesting that this port should gate this feature.\n\nI don't really know. It's more of a research project than a coding\nproject, at least initially, I think. For instance, psql has its own\nnon-local-transfer-of-control mechanism using sigsetjmp(). If you\nwanted to introduce elog/ereport on the frontend, would you make psql\nuse it? Or just let psql continue to do what it does now and introduce\nthe new mechanism as an option for code going forward? Or try to make\nthe two mechanisms work together somehow? Will you start using the\nsame error codes that we use in the backend on the frontend side, and\nif so, what will they do, given that what the backend does is just\nembed them in a protocol message that any particular client may or may\nnot display? 
Similarly, should frontend errors support reporting a\nhint, detail, statement, or query? Will it be confusing if backend and\nfrontend errors are too similar? If you make memory contexts available\nin the frontend, what if any code will you adapt to use them? There's\na lot of stuff in src/bin. If you want the encoding machinery on the\nfront end, what will you use in place of the backend's idea of the\n\"database encoding\"? What will you do about dependencies on Datum in\nfrontend code? Somebody would need to study all this stuff, come up\nwith a tentative set of decisions, write patches, get it all working,\nand then quite possibly have the choices they made get second-guessed\nby other people who have different ideas. If you come up with a really\ngood, clean proposal that doesn't provoke any major disagreements, you\nmight be able to get this done in a couple of months. If you can't\ncome up with something people think is good, or if you're the only one who\nthinks what you come up with is good, it might take years.\n\nIt seems to me that in a perfect world a lot of the code we have in\nthe backend that is usefully reusable in other contexts would be\nstructured so that it doesn't have random dependencies on backend-only\nmachinery like memory contexts and elog/ereport. For example, if you\nwrite a function that returns an error message rather than throwing an\nerror, then you can arrange to call that from either frontend or\nbackend code and the caller can do whatever it wishes with that error\ntext. However, once you've written your code so that an error gets\nthrown six layers down in the call stack, it's really hard to\nrearrange that so that the error is returned, and if you are\npopulating not only the primary error message but error code, detail,\nhint, etc. it's almost impractical to think that you can rearrange\nthings that way anyway. And generally you want to be populating those\nthings, as a best practice for backend code.
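To sketch that return-the-error style concretely (this is a hypothetical helper, not code from the tree; the name parse_uint_field is made up):

```c
#include <stdlib.h>

/*
 * Hypothetical sketch: a parsing helper that reports failure by
 * returning a message instead of calling ereport(), so the same
 * function is usable from frontend and backend code alike.
 * Returns NULL on success, or a constant error string on failure.
 */
static const char *
parse_uint_field(const char *s, unsigned long *result)
{
	char	   *end;

	if (*s == '\0')
		return "field is empty";
	*result = strtoul(s, &end, 10);
	if (*end != '\0')
		return "field contains non-digit characters";
	return NULL;
}
```

A frontend caller could hand the returned string to something like pg_log_error(), while a backend caller wraps it in ereport(); the parser itself commits to neither.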
So while in theory I kind\nof like the idea of adapting the JSON parser we've already got to just\nnot depend so heavily on a backend environment, it's not really very\nclear how to actually make that happen. At least not to me.\n\n> > That seems to me to be an unreasonably large lift, especially given\n> > that we have lots of other files that use ad-hoc formats already,\n> > and if somebody ever gets around to converting all of those to JSON,\n> > they can certainly convert this one at the same time.\n>\n> Would that require some kind of file converter program, or just a\n> really loud notice in the release notes?\n\nMaybe neither. I don't see why it wouldn't be possible to be\nbackward-compatible just by keeping the old code around and having it\nparse as far as the version number. Then it could decide to continue\non with the old code or call the new code, depending.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Jan 2020 13:34:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Thank you for review comments.\n\nOn Mon, Dec 30, 2019 at 11:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Dec 24, 2019 at 5:42 AM Suraj Kharage\n> <suraj.kharage@enterprisedb.com> wrote:\n> > To examine the first word of each line, I am using below check:\n> > if (strncmp(line, \"File\", 4) == 0)\n> > {\n> > ..\n> > }\n> > else if (strncmp(line, \"Manifest-Checksum\", 17) == 0)\n> > {\n> > ..\n> > }\n> > else\n> > error\n> >\n> > strncmp might be not right here, but we can not put '\\0' in between the\n> line (to find out first word)\n> > before we recognize the line type.\n> > All the lines expect line last one (where we have manifest checksum) are\n> feed to the checksum machinary to calculate manifest checksum.\n> > so update_checksum() should be called after recognizing the type, i.e:\n> if it is a File type record. Do you see any issues with this?\n>\n> I see the problem, but I don't think your solution is right, because\n> the first test would pass if the line said FiletMignon rather than\n> just File, which we certainly don't want. You've got to write the test\n> so that you're checking against the whole first word, not just some\n> prefix of it. There are several possible ways to accomplish that, but\n> this isn't one of them.\n>\n\nYeah. Fixed in the attached patch.\n\n\n>\n> >> + pg_log_error(\"invalid record found in \\\"%s\\\"\", manifest_path);\n> >>\n> >> Error message needs work.\n>\n> Looks better now, but you have a messages that say \"invalid checksums\n> type \\\"%s\\\" found in \\\"%s\\\"\". This is wrong because checksums would\n> need to be singular in this context (checksum). Also, I think it could\n> be better phrased as \"manifest file \\\"%s\\\" specifies unknown checksum\n> algorithm \\\"%s\\\" at line %d\".\n>\n\nCorrected.\n\n\n>\n> >> Your function names should be consistent with the surrounding style,\n> >> and with each other, as far as possible. 
Three different conventions\n> >> within the same patch and source file seems over the top.\n>\n> This appears to be fixed.\n>\n> >> Also keep in mind that you're not writing code in a vacuum. There's a\n> >> whole file of code here, and around that, a whole project.\n> >> scan_data_directory() is a good example of a function whose name is\n> >> clearly too generic. It's not a general-purpose function for scanning\n> >> the data directory; it's specifically a support function for verifying\n> >> a backup. Yet, the name gives no hint of this.\n>\n> But this appears not to be fixed.\n>\n\nI have changed this function name to \"VerifyDir\" likewise, we have sendDir\nand sendFile in basebackup.c\n\n\n>\n> >> if (strcmp(newpathsuffix, \"/pg_wal\") == 0 || strcmp(newpathsuffix,\n> >> \"/backup_manifest\") == 0)\n> >> continue;\n> >\n> > Thanks for the suggestion. Corrected as per the above inputs.\n>\n> You need a comment here, like \"Ignore the possible presence of a\n> backup_manifest file and/or a pg_wal directory in the backup being\n> verified.\" and then maybe another sentence explaining why that's the\n> right thing to do.\n>\n\nCorrected.\n\n\n>\n> + * The forth parameter to VerifyFile() will pass the relative\n> path\n> + * of file to match exactly with the filename present in\n> manifest.\n>\n> I don't know what this comment is trying to tell me, which might be\n> something you want to try to fix. However, I'm pretty sure it's\n> supposed to say \"fourth\" not \"forth\".\n>\n\nI have changed the fourth parameter of VerifyFile(), so my comment over\nthere is no more valid.\n\n\n>\n> >> and the result would be that everything inside that long if-block is\n> >> now at the top level of the function and indented one level less. And\n> >> I think if you look at this function you'll see a way that you can\n> >> save a *second* level of indentation for much of that code. Please\n> >> check the rest of the patch for similar cases, too.\n> >\n> > Make sense. 
corrected.\n>\n> I don't agree. A large chunk of VerifyFile() is still subject to a\n> quite unnecessary level of indentation.\n>\n\nYeah, corrected.\n\n\n>\n> > I have added a check for EOF, but not sure whether that woule be right\n> here.\n> > Do we need to check the length of buffer as well?\n>\n> That's really, really not right. EOF is not a character that can\n> appear in the buffer. It's chosen on purpose to be a value that never\n> matches any actual character when both the character and the EOF value\n> are regarded as values of type 'int'. That guarantee doesn't apply\n> here though because you're dealing with values of type 'char'. So what\n> this code is doing is searching for an impossible value using\n> incorrect logic, which has very little to do with the actual need\n> here, which is to avoid running off the end of the buffer. To see what\n> the problem is, try creating a file with no terminating newline, like\n> this:\n>\n> echo -n this file has no terminating newline >> some-file\n>\n> I doubt it will be very hard to make this patch crash horribly. Even\n> if you can't, it seems pretty clear that the logic isn't right.\n>\n> I don't really know what the \\0 tests in NextLine() and NextWord()\n> think they're doing either. If there's a \\0 in the buffer before you\n> add one, it was in the original input data, and pretending like that\n> marks a word or line boundary seems like a fairly arbitrary choice.\n>\n> What I suggest is:\n>\n> (1) Allocate one byte more than the file size for the buffer that's\n> going to hold the file, so that if you write a \\0 just after the last\n> byte of the file, you don't overrun the allocated buffer.\n>\n> (2) Compute char *endptr = buf + len.\n>\n> (3) Pass endptr to NextLine and NextWord and write the loop condition\n> something like while (*buf != '\\n' && buf < endptr).\n>\n\nThanks for the suggestion. 
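In sketch form, that bounded scan (together with an exact keyword match of the kind needed for the \"File\" vs. \"FiletMignon\" problem above) could look like the following -- illustrative names only, not the actual patch code:

```c
#include <stdbool.h>
#include <string.h>

/*
 * Illustrative sketch only, not the patch itself.  next_line()
 * terminates the current line in place and returns a pointer to the
 * next one, never scanning past endptr.  Writing the final '\0' at
 * endptr relies on allocating one byte more than the file size, as in
 * point (1) above.
 */
static char *
next_line(char *buf, char *endptr)
{
	while (buf < endptr && *buf != '\n')
		buf++;
	if (buf < endptr)
		*buf++ = '\0';			/* terminate this line, step past '\n' */
	else
		*buf = '\0';			/* file had no trailing newline */
	return buf;
}

/*
 * True only if line begins with the whole keyword followed by a field
 * separator, so that "FiletMignon" does not match the keyword "File".
 */
static bool
line_starts_with_word(const char *line, const char *word)
{
	size_t		len = strlen(word);

	return strncmp(line, word, len) == 0 &&
		(line[len] == ' ' || line[len] == '\t');
}
```

A caller would loop with buf = next_line(buf, endptr) until buf reaches endptr, testing each line with line_starts_with_word(line, \"File\") and so on.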
Corrected as per above suggestion.\n\n\n>\n> Other notes:\n>\n> - The error handling in ReadFileIntoBuffer() does not seem to consider\n> the case of a short read. If you look through the source tree, you can\n> find examples of how we normally handle that.\n>\n\nYeah, corrected.\n\n\n>\n> - Putting string_hash_sdbm() into encode.c seems like a surprising\n> choice. What does this have to do with encoding anything? And why is\n> it going into src/common at all if it's only intended for frontend\n> use?\n>\nI thought this function could be used in the backend as well; likewise, we\nuse it in simplehash, so I kept it in src/common.\nAfter your comment, I have moved this to pg_basebackup.c.\nI think this could be kept in a common place, but not in \"src/common/encode.c\".\nThoughts?\n\n\n>\n> - It seems like whether or not any problems were found while verifying\n> the manifest ought to affect the exit status of pg_basebackup. I'm not\n> exactly sure what exit codes ought to be used, but you could look for\n> similar precedents. Document this, too.\n>\nI might not be getting this completely correct, but as per my observation,\nif any error occurs, pg_basebackup terminates with exit(1),\nwhereas in the normal case (without an error), the main function returns 0. The\n\"help\" and \"version\" options terminate normally with exit(0).\nSo in our case, exit(0) would be appropriate.
Please correct me if I\nmisunderstood anything.\n\n\n>\n> - As much as possible let's have errors in the manifest file report\n> the line number, and let's also try to make them more specific, e.g.\n> instead of \"invalid manifest record found in \\\"%s\\\"\", perhaps\n> \"manifest file \\\"%s\\\" contains invalid keyword \\\"%s\\\" at line %d\".\n>\nYeah, added line numbers at possible places.\n\nI have also fixed a few comments given by Jeevan Chalke offlist.\n\nPlease find attached v7 patches and let me know your comments.\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Fri, 3 Jan 2020 18:11:45 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > AFAICS, the only options to make that work with JSON are (1) introduce\n> > a new hand-coded JSON parser designed for frontend operation, (2) add\n> > a dependency on an external JSON parser that we can use from frontend\n> > code, or (3) adapt the existing JSON parser used in the backend so\n> > that it can also be used in the frontend.\n> > ... I'd\n> > be willing to do (3) if somebody could explain to me how to solve the\n> > problems with porting that code to work on the frontend side, but the\n> > only suggestion so far as to how to do that is to port memory\n> > contexts, elog/report, and presumably encoding handling to work on the\n> > frontend side. That seems to me to be an unreasonably large lift,\n> \n> Yeah, agreed. The only consideration that'd make that a remotely\n> sane idea is that if somebody did the work, there would be other\n> uses for it. (One that comes to mind immediately is cleaning up\n> ecpg's miserably-maintained fork of the backend datetime code.)\n> \n> But there's no denying that it would be a large amount of work\n> (if it's even feasible), and nobody has stepped up to volunteer.\n> It's not reasonable to hold up this particular feature waiting\n> for that to happen.\n\nSure, it'd be work, and for \"adding a simple backup manifest\", maybe too\nmuch to be worth considering ... but that's not what is going on here,\nis it? Are we really *just* going to add a backup manifest to\npg_basebackup and call it done? 
That's not what I understood the goal\nhere to be but rather to start doing a lot of other things with\npg_basebackup beyond just having a manifest and if you think just a bit\nfarther down the path, I think you start to realize that you're going to\nneed this base set of capabilities to get to a point where pg_basebackup\n(or whatever it ends up being called) is able to have the kind of\ncapabilities that exist in other PG backup software already.\n\nI'm sure I don't need to say where to find it, but I can point you to a\npretty good example of a similar effort, and we didn't start with \"build\na manifest into a custom format\" as the first thing implemented, but\nrather a great deal of work was first put into building out things like\nlogging, memory management/contexts, error handling/try-catch, having a\nstring type, a variant type, etc.\n\nIn some ways, it's kind of impressive what we've got in our front-ends\ntools even though we don't have these things, really, and certainly not\nall in one nice library that they all use... but at the same time, I\nthink that lack has also held those tools back, pg_basebackup among\nthem.\n\nAnyway, off my high horse, I'll just say I agree w/ David and David wrt\nusing JSON for this over hacking together yet another format. We didn't\ndo that as thoroughly as we should have (we've got a JSON parser and all\nthat, and use JSON quite a bit, but the actual manifest format is a mix\nof ini-style and JSON, because it's got more in it than just a list of\nfiles, something that I suspect will also end up being true of this down\nthe road and for good reasons, and we started with the ini format and\ndiscovered it sucked and then started embedding JSON in it...), and\nwe've come to realize that was a bad idea, and intend to fix it in our\nnext manifest major version bump. Would be unfortunate to see PG making\nthat same mistake. \n\nThanks,\n\nStephen",
"msg_date": "Fri, 3 Jan 2020 11:44:24 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 11:44 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Sure, it'd be work, and for \"adding a simple backup manifest\", maybe too\n> much to be worth considering ... but that's not what is going on here,\n> is it? Are we really *just* going to add a backup manifest to\n> pg_basebackup and call it done? That's not what I understood the goal\n> here to be but rather to start doing a lot of other things with\n> pg_basebackup beyond just having a manifest and if you think just a bit\n> farther down the path, I think you start to realize that you're going to\n> need this base set of capabilities to get to a point where pg_basebackup\n> (or whatever it ends up being called) is able to have the kind of\n> capabilities that exist in other PG backup software already.\n\nI have no development plans for pg_basebackup that require extending\nthe format of the manifest file in any significant way, and am not\naware that anyone else has such plans either. If you are aware of\nsomething I'm not, or if anyone else is, it would be helpful to know\nabout it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 11:51:06 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Jan 3, 2020 at 11:44 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Sure, it'd be work, and for \"adding a simple backup manifest\", maybe too\n> > much to be worth considering ... but that's not what is going on here,\n> > is it? Are we really *just* going to add a backup manifest to\n> > pg_basebackup and call it done? That's not what I understood the goal\n> > here to be but rather to start doing a lot of other things with\n> > pg_basebackup beyond just having a manifest and if you think just a bit\n> > farther down the path, I think you start to realize that you're going to\n> > need this base set of capabilities to get to a point where pg_basebackup\n> > (or whatever it ends up being called) is able to have the kind of\n> > capabilities that exist in other PG backup software already.\n> \n> I have no development plans for pg_basebackup that require extending\n> the format of the manifest file in any significant way, and am not\n> aware that anyone else has such plans either. If you are aware of\n> something I'm not, or if anyone else is, it would be helpful to know\n> about it.\n\nYou're certainly intending to do *something* with the manifest, and\nwhile I appreciate that you feel you've come up with a complete use-case\nthat this simple manifest will be sufficient for, I frankly doubt\nthat'll actually be the case. 
Not long ago it wasn't completely clear\nthat a manifest at *all* was even going to be necessary for the specific\nuse-case you had in mind (I'll admit I wasn't 100% sure myself at the\ntime either), but now that we're down the road of having one, I can't\nagree with the blanket assumption that we're never going to want to\nextend it, or even that it won't be necessary to add to it before this\nparticular use-case is fully addressed.\n\nAnd the same goes for the other things that were discussed up-thread\nregarding memory context and error handling and such.\n\nI'm happy to outline the other things that one *might* want to include\nin a manifest, if that would be helpful, but I'll also say that I'm not\nplanning to hack on adding that to pg_basebackup in the next month or\ntwo. Once we've actually got a manifest, if it's in an extendable\nformat, I could certainly see people wanting to do more with it though.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 3 Jan 2020 12:01:23 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 12:01 PM Stephen Frost <sfrost@snowman.net> wrote:\n> You're certainly intending to do *something* with the manifest, and\n> while I appreciate that you feel you've come up with a complete use-case\n> that this simple manifest will be sufficient for, I frankly doubt\n> that'll actually be the case. Not long ago it wasn't completely clear\n> that a manifest at *all* was even going to be necessary for the specific\n> use-case you had in mind (I'll admit I wasn't 100% sure myself at the\n> time either), but now that we're down the road of having one, I can't\n> agree with the blanket assumption that we're never going to want to\n> extend it, or even that it won't be necessary to add to it before this\n> particular use-case is fully addressed.\n>\n> And the same goes for the other things that were discussed up-thread\n> regarding memory context and error handling and such.\n\nWell, I don't know how to make you happy here. It looks to me like\ninsisting on a JSON-format manifest will likely mean that this doesn't\nget into PG13 or PG14 or probably PG15, because a port of all that\nmachinery to work in frontend code will be neither simple nor quick.\nIf you want this to happen for this release, you've got to be willing\nto settle for something that can be implemented in the time we have.\n\nI'm not sure whether what you and David are arguing boils down to\nthinking that I'm wrong when I say that doing that is hard, or whether\nyou know it's hard but you just don't care because you'd rather see\nthe feature go nowhere than use a format other than JSON. I don't see\nmuch difference between the latter position and a desire to block the\nfeature permanently. 
And if it's the former then you have yet to make\nany suggestions for how to get it done with reasonable effort.\n\n> I'm happy to outline the other things that one *might* want to include\n> in a manifest, if that would be helpful, but I'll also say that I'm not\n> planning to hack on adding that to pg_basebackup in the next month or\n> two. Once we've actually got a manifest, if it's in an extendable\n> format, I could certainly see people wanting to do more with it though.\n\nWell, as I say, it's got a version number, so somebody can always come\nalong with something better. I really think this is a red herring,\nthough. If somebody wants to track additional data about a backup,\nthere's no rule that they have to include it in the backup manifest. A\nbackup management solution might want to track things like who\ninitiated the backup, or for what purpose it was taken, or the IP\naddress of the machine where it was taken, or the backup system's own\nidentifier, but any of that stuff could (and probably should) be\nstored in a file managed by that tool rather than in the server's own\nmanifest. As to the per-file information, I believe that David and I\ndiscussed that and the list of fields that I had seemed relatively OK,\nand I believe I added at least one (mtime) per his suggestion. Of\ncourse, it's a tab-separated file; more fields could easily be added\nat the end, separated by tabs. Or, you could modify the file so that\nafter each \"File\" line you had another line with supplementary\ninformation about that file, beginning with some other word. Or, you\ncould convert the whole file to JSON for v2 of the manifest, if,\ncontrary to my belief, that's a fairly simple thing to do. There are\nprobably other approaches as well. 
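To make that concrete, here is a rough sketch -- not the actual pg_basebackup code -- of a reader that splits one manifest line on tabs and simply ignores extra trailing fields that a later manifest version might append:

```c
#include <string.h>

/*
 * Rough sketch, not the real parser: split one tab-separated manifest
 * line in place, keeping at most max_fields fields.  Anything beyond
 * max_fields is left untouched and ignored, which is what lets an old
 * reader skip columns appended by a newer manifest version.
 */
static int
split_tabs(char *line, char **fields, int max_fields)
{
	int			n = 0;

	while (n < max_fields && line != NULL)
	{
		char	   *tab = strchr(line, '\t');

		fields[n++] = line;
		if (tab != NULL)
			*tab++ = '\0';
		line = tab;
	}
	return n;					/* number of fields actually found */
}
```

An old reader asking for three fields gets exactly the three it knows about, even if a hypothetical v2 manifest appends a fourth column to each line.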
This file format has already had\nconsiderably more thought about forward-compatibility than\npg_hba.conf, which has been retrofitted multiple times without\nbreaking the world.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 12:37:47 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Jan 3, 2020 at 12:01 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > You're certainly intending to do *something* with the manifest, and\n> > while I appreciate that you feel you've come up with a complete use-case\n> > that this simple manifest will be sufficient for, I frankly doubt\n> > that'll actually be the case. Not long ago it wasn't completely clear\n> > that a manifest at *all* was even going to be necessary for the specific\n> > use-case you had in mind (I'll admit I wasn't 100% sure myself at the\n> > time either), but now that we're down the road of having one, I can't\n> > agree with the blanket assumption that we're never going to want to\n> > extend it, or even that it won't be necessary to add to it before this\n> > particular use-case is fully addressed.\n> >\n> > And the same goes for the other things that were discussed up-thread\n> > regarding memory context and error handling and such.\n> \n> Well, I don't know how to make you happy here.\n\nI suppose I should admit that, first off, I don't feel you're required\nto make me happy, and I don't think it's necessary to make me happy to\nget this feature into PG.\n\nSince you expressed that interest though, I'll go out on a limb and say\nthat what would make me *really* happy would be to think about where the\nproject should be taking pg_basebackup, what we should be working on\n*today* to address the concerns we hear about from our users, and to\nconsider the best way to implement solutions to what they're actively\nasking for a core backup solution to be providing. 
I get that maybe\nthat isn't how the world works and that sometimes we have people who\nwrite our paychecks wanting us to work on something else, and yes, I'm\nsure there are some users who are asking for this specific thing but I\ncertainly don't think it's a common ask of pg_basebackup or what users\nfeel is missing from the backup options we offer in core; we had users\non this list specifically saying they *wouldn't* use this feature\n(referring to the differential backup stuff, of course), in fact,\nbecause of the things which are missing, which is pretty darn rare.\n\nThat's what would make *me* happy. Even some comments about how to\n*get* there while also working towards these features would be likely\nto make me happy. Instead, I feel like we're being told that we need\nthis feature badly in v13 and we're going to cut bait and do whatever\nis necessary to get us there.\n\n> It looks to me like\n> insisting on a JSON-format manifest will likely mean that this doesn't\n> get into PG13 or PG14 or probably PG15, because a port of all that\n> machinery to work in frontend code will be neither simple nor quick.\n\nI certainly understand that these things take time, sometimes quite a\nbit of it as the past 2 years have shown in this other little side\nproject, and that was hacking without having to go through the much\nlarger effort involved in getting things into PG core. That doesn't\nmean that kind of effort isn't worthwhile or that, because something is\na bunch of work, we shouldn't spend the time on it. 
I do feel what\nyou're after here is a multi-year project, and I've said before that I\ndon't agree that this is a feature (the differential backup with\npg_basebackup thing) that makes any sense going into PG at this time,\nbut I'm also not trying to block this feature, just to share the\nexperience that we've gotten from working in this area for quite a\nwhile and hopefully help guide the effort in PG away from pitfalls and\nin a good direction long-term.\n\n> If you want this to happen for this release, you've got to be willing\n> to settle for something that can be implemented in the time we have.\n\nI'm not sure what you're expecting here, but for my part, at least, I'm\nnot going to be terribly upset if this feature doesn't make this release\nbecause there's an agreement and understanding that the current\ndirection isn't a good long-term solution. Nor am I going to be\nterribly upset about the time that's been spent on this particular\napproach given that there's been no shortage of people commenting that\nthey'd rather see an extensible format, like JSON, and has been for\nquite some time.\n\nAll that said- one thing we've done is to consider that *we* are the\nones who are writing the JSON, while also being the ones to read it- we\ndon't need the parsing side to understand and deal with *any* JSON that\nmight exist out there, just whatever it is the server creates/created.\nIt may be possible to use that to simplify the parser, or perhaps at\nleast to accept that if it ends up being given something else that it\nmight not perform as well with it. I'm not sure how helpful that will\nbe to you, but I recall David finding it a helpful thought.\n\n> I'm not sure whether what you and David are arguing boils down to\n> thinking that I'm wrong when I say that doing that is hard, or whether\n> you know it's hard but you just don't care because you'd rather see\n> the feature go nowhere than use a format other than JSON. 
I don't see\n> much difference between the latter position and a desire to block the\n> feature permanently. And if it's the former then you have yet to make\n> any suggestions for how to get it done with reasonable effort.\n\nThere seems to be a great deal of daylight between the two positions\nyou're proposing I might have (as I don't speak for David..).\n\nI *do* think there's a lot of work that would need to be done here to\nmake this a good solution. I'm *not* completely against other formats\nbesides JSON. Even more so though, I am *not* arguing that this\nfeature should go 'nowhere', whether it uses JSON or not.\n\nWhat I don't care for is having a hand-hacked inflexible format that's\ngoing to require everyone down the road to implement their own parser\nfor it and bespoke code for every version of the custom format that\nthere ends up being, *including* PG core, to be clear. Whatever utility\nis going to be utilizing this manifest, it's going to need to support\nolder versions, just like pg_dump deals with older versions of custom\nformat dumps (though we still get people complaining about not being\nable to use older tools with newer dumps- it'd be awful nice if we\ncould use JSON, or something, and then just *add* things that wouldn't\nbreak older tools, except for the rare case where we don't have a\nchoice..). Not to mention the debugging grief and such, since we can't\njust use a tool like jq to check out what's going on. 
\n\nAs to the reference to pg_hba.conf- I don't think the packagers would\nnecessarily agree that there's been little grief around that, but even\nso, a given pg_hba.conf is only going to be used with a given major\nversion and, sure, it might have to be updated to that newer major\nversion's format if we change the format and someone copies the old\nversion to the new version, but that's during a major version upgrade of\nthe server, and at least newer tools don't have to deal with the older\npg_hba.conf version.\n\nAlso, pg_hba.conf doesn't seem like a terribly good example in any case-\nthe last time the actual structure of that file was changed in a\nbreaking way was in 2002 when the 'user' column was added, and the\nexample pg_hba.conf from that commit works just fine with PG12, it\nseems, based on some quick tests. There have been other\nbackwards-incompatible changes, of course, the last being 6 years ago, I\nthink, when 'krb5' was removed. I suppose there is some chance that you\nmight have a PG12-configured pg_hba.conf and you try copying that back\nto a PG11 or PG10 server and it doesn't work, but that strikes me as far\nless of an issue than trying to read a PG12 backup with a PG11 tool,\nwhich we know people do because they complain on the lists about it with\npg_dump/pg_restore.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 3 Jan 2020 14:35:59 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 2:35 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Well, I don't know how to make you happy here.\n>\n> I suppose I should admit that, first off, I don't feel you're required\n> to make me happy, and I don't think it's necessary to make me happy to\n> get this feature into PG.\n\nFair enough. That is gracious of you, but I would like to try to make\nyou happy if it is possible to do so.\n\n> Since you expressed that interest though, I'll go out on a limb and say\n> that what would make me *really* happy would be to think about where the\n> project should be taking pg_basebackup, what we should be working on\n> *today* to address the concerns we hear about from our users, and to\n> consider the best way to implement solutions to what they're actively\n> asking for a core backup solution to be providing. I get that maybe\n> that isn't how the world works and that sometimes we have people who\n> write our paychecks wanting us to work on something else, and yes, I'm\n> sure there are some users who are asking for this specific thing but I\n> certainly don't think it's a common ask of pg_basebackup or what users\n> feel is missing from the backup options we offer in core; we had users\n> on this list specifically saying they *wouldn't* use this feature\n> (referring to the differential backup stuff, of course), in fact,\n> because of the things which are missing, which is pretty darn rare.\n\nWell, I mean, what you seem to be suggesting here is that somebody is\ndriving me with a stick to do something that I don't really like but\nhave to do because otherwise I won't be able to make rent, but that's\nactually not the case. I genuinely believe that this is a good design,\nand it's driven by me, not some shadowy conglomerate of EnterpriseDB\nexecutives who are out to make PostgreSQL suck. If I'm wrong and the\ndesign sucks, that's again not the fault of shadowy EnterpriseDB\nexecutives; it's my fault. 
Incidentally, my boss is not very shadowy\nanyhow; he's a super-nice guy, and a major reason why I work here. :-)\n\nI don't think the issue here is that I haven't thought about what\nusers want, but that not everybody wants the same thing, and it\nseems like the people with whom I interact want somewhat different\nthings than those with whom you interact. EnterpriseDB has an existing\ntool that does parallel and block-level incremental backup, and I\nstarted out with the goal of providing those same capabilities in\ncore. They are quite popular with EnterpriseDB customers, and I'd like\nto make them more widely available and, as far as I can, improve on\nthem. From our previous discussion and from a (brief) look at\npgbackrest, I gather that the interests of your customers are somewhat\ndifferent. Apparently, block-level incremental backup isn't quite as\nimportant to your customers, perhaps because you've already got\nfile-level incremental backup, but various other things like\nencryption and backup verification are extremely important, and you've\ngot a set of ideas about what would be valuable in the future which\nI'm sure is based on real input from your customers. I hope you pursue\nthose ideas, and I hope you do it in core rather than in a separate\npiece of software, but that's up to you. Meanwhile, I think that if I\nhave somewhat different ideas about what I'd like to pursue, that\nought to be just fine. And I don't think it is unreasonable to hope\nthat you'll acknowledge my goals as legitimate even if you have\ndifferent ones.\n\nI want to point out that my idea about how to do all of this has\nshifted by a considerable amount based on the input that you and David\nhave provided. My original design didn't involve a backup manifest,\nbut now it does. That turned out to be necessary, but it was also\nsomething you suggested, and something where I asked and took advice\non what ought to go into it. 
Likewise, you suggested that the process\nof taking the backup should involve giving the client more control\nrather than trying to do everything on the server side, and that is\nnow the design which I plan to pursue. You suggested that because it\nwould be more advantageous for out-of-core backup tools, such as\npgbackrest, and I acknowledge that as a benefit and I think we're\nheaded in that direction. I am not doing a single thing which, to my\nknowledge, blocks anything that you might want to do with\npg_basebackup in the future. I have accepted as much of your input as\nI believe that I can without killing the project off completely. To go\nfurther, I'd have to either accept years of delay or abandon my\npriorities entirely and pursue yours.\n\n> That's what would make *me* happy. Even some comments about how to\n> *get* there while also working towards these features would be likely\n> to make me happy. Instead, I feel like we're being told that we need\n> this feature badly in v13 and we're going to cut bait and do whatever\n> is necessary to get us there.\n\nThis seems like a really unfair accusation given how much work I've\nput into trying to satisfy you and David. If this patch, the parallel\nfull backup patch, and the incremental backup patch were all to get\ncommitted to v13, an outcome which seems pretty unlikely to me at this\npoint, then you would have a very significant number of things that\nyou have requested in the course of the various discussions, and\nAFAICS the only thing you'd have that you don't want is the need to\nparse the manifest file using while (<>) { @a = split /\\t/, $_ } rather\nthan $a = parse_json(join '', <>). You would, for example, have the\nability to request an individual file from the server rather than a\ncomplete tarball. Maybe the command that requests a file would lack an\nencryption option, something which IIUC you would like to have, but\nthat certainly does not leave you worse off. 
It is easier to add an\nencryption option to a command which you already have than it is to\ninvent a whole new command -- or really several whole new commands,\nsince such a command is not really usable unless you also have\nfacilities to start and stop a backup through the replication\nprotocol.\n\nAll that being said, I continue to maintain that insisting on JSON is\nnot a reasonable request. It is not easy to parse JSON, or a subset of\nJSON. The amount of code required to write even a stripped-down JSON\nparser is far more than the amount required to split a file on tabs,\nand the existing code we have for the backend cannot be easily (or\neven with moderate effort) adapted to work in the frontend. On the\nother hand, the code that pgbackrest would need to parse the manifest\nfile format I've proposed could have easily been written in less time\nthan you've spent arguing about it. Heck, if it helps, I'll offer to\nwrite that patch myself (I could be a pgbackrest contributor!). I\ndon't want this effort to suck because something gets rushed through\ntoo quickly, but I also don't want it to get derailed because of what\nI view as a relatively minor detail. It is not always right to take\nthe easier road, but it is also not always wrong. I have no illusions\nthat what is being proposed here is perfect, but lots of features\nstarted out imperfect and get better over time -- RLS and parallel\nquery come to mind, among others -- and we often learn from the\nexperience of shipping something which parts of the feature are most\nin need of improvement.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 7 Jan 2020 13:05:33 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Jan 3, 2020 at 2:35 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Well, I don't know how to make you happy here.\n> >\n> > I suppose I should admit that, first off, I don't feel you're required\n> > to make me happy, and I don't think it's necessary to make me happy to\n> > get this feature into PG.\n> \n> Fair enough. That is gracious of you, but I would like to try to make\n> you happy if it is possible to do so.\n\nI certainly appreciate that, but I don't know that it is possible to do\nso while approaching this in the order that you are, which I tried to\npoint out previously.\n\n> > Since you expressed that interest though, I'll go out on a limb and say\n> > that what would make me *really* happy would be to think about where the\n> > project should be taking pg_basebackup, what we should be working on\n> > *today* to address the concerns we hear about from our users, and to\n> > consider the best way to implement solutions to what they're actively\n> > asking for a core backup solution to be providing. I get that maybe\n> > that isn't how the world works and that sometimes we have people who\n> > write our paychecks wanting us to work on something else, and yes, I'm\n> > sure there are some users who are asking for this specific thing but I\n> > certainly don't think it's a common ask of pg_basebackup or what users\n> > feel is missing from the backup options we offer in core; we had users\n> > on this list specifically saying they *wouldn't* use this feature\n> > (referring to the differential backup stuff, of course), in fact,\n> > because of the things which are missing, which is pretty darn rare.\n> \n> Well, I mean, what you seem to be suggesting here is that somebody is\n> driving me with a stick to do something that I don't really like but\n> have to do because otherwise I won't be able to make rent, but that's\n> actually not the case. 
I genuinely believe that this is a good design,\n> and it's driven by me, not some shadowy conglomerate of EnterpriseDB\n> executives who are out to make PostgreSQL sucks. If I'm wrong and the\n> design sucks, that's again not the fault of shadowy EnterpriseDB\n> executives; it's my fault. Incidentally, my boss is not very shadowy\n> anyhow; he's a super-nice guy, and a major reason why I work here. :-)\n\nThen I just have to disagree, really vehemently, that having a\nblock-level incremental backup solution without solid dependency\nhandling between incremental and full backups, solid WAL management and\narchiving, expiration handling for incremental/full backups and WAL, and\nthe manifest that this thread has been about, is a good design.\n\nUltimately, what this calls for is some kind of 'repository' which\nyou've stressed you don't think is a good idea for pg_basebackup to ever\ndeal with and I just can't disagree more with that. I could perhaps\nagree that it isn't appropriate for the specific tool \"pg_basebackup\" to\nwork with a repo because of the goal of that particular tool, but in\nthat case, I don't think pg_basebackup should be the tool to provide a\nblock-level incremental backup solution, it should continue to be a tool\nto provide a simple and easy way to take a one-time, complete, snapshot\nof a running PG system over the replication protocol- and adding support\nfor parallel backups, or encrypted backups, or similar things would be\ncompletely in-line and appropriate for such a tool, and I'm not against\nthose features being added to pg_basebackup even in advance of anything\nlike support for a repo or dependency handling.\n\n> I don't think the issue here is that I haven't thought about what\n> users want, but that not everybody wants the same thing, and it's\n> seems like the people with whom I interact want somewhat different\n> things than those with whom you interact. 
EnterpriseDB has an existing\n> tool that does parallel and block-level incremental backup, and I\n> started out with the goal of providing those same capabilities in\n> core. They are quite popular with EnterpriseDB customers, and I'd like\n> to make them more widely available and, as far as I can, improve on\n> them. From our previous discussion and from a (brief) look at\n> pgbackrest, I gather that the interests of your customers are somewhat\n> different. Apparently, block-level incremental backup isn't quite as\n> important to your customers, perhaps because you've already got\n> file-level incremental backup, but various other things like\n> encryption and backup verification are extremely important, and you've\n> got a set of ideas about what would be valuable in the future which\n> I'm sure is based on real input from your customers. I hope you pursue\n> those ideas, and I hope you do it in core rather than in a separate\n> piece of software, but that's up to you. Meanwhile, I think that if I\n> have somewhat different ideas about what I'd like to pursue, that\n> ought to be just fine. And I don't think it is unreasonable to hope\n> that you'll acknowledge my goals as legitimate even if you have\n> different ones.\n\nI'm all for block-level incremental backup, in general (though I've got\nconcerns about it from a correctness standpoint.. I certainly think\nit's going to be difficult to get right and probably finicky, but\nhopefully your experience with BART has let you identify where the\ndragons lie and it'll be interesting to see what that code looks like\nand if the approach used can be leveraged in other tools), but I am\nconcerned about how we're getting there.\n\n> I want to point out that my idea about how to do all of this has\n> shifted by a considerable amount based on the input that you and David\n> have provided. My original design didn't involve a backup manifest,\n> but now it does. 
That turned out to be necessary, but it was also\n> something you suggested, and something where I asked and took advice\n> on what ought to go into it. Likewise, you suggested that the process\n> of taking the backup should involve giving the client more control\n> rather than trying to do everything on the server side, and that is\n> now the design which I plan to pursue. You suggested that because it\n> would be more advantageous for out-of-core backup tools, such as\n> pgbackrest, and I acknowledge that as a benefit and I think we're\n> headed in that direction. I am not doing a single thing which, to my\n> knowledge, blocks anything that you might want to do with\n> pg_basebackup in the future. I have accepted as much of your input as\n> I believe that I can without killing the project off completely. To go\n> further, I'd have to either accept years of delay or abandon my\n> priorities entirely and pursue yours.\n\nWhile I'm hopeful that the parallel backup pieces will be useful to\nout-of-core backup tools, I've been increasingly less confident that\nit'll end up being very useful to pgbackrest, as much as I would like it\nto be. Perhaps after it's in place we might be able to work on it to\nmake it useful, but we'd need to push all the features like encryption\nand options for compression and such into the backend, in a way that\nworks for pgbackrest, to be able to leverage it, and I'm not sure that\nwould get much support or that it could be done in a way that doesn't\nend up causing problems for pg_basebackup, which clearly wouldn't be\nacceptable. 
Further, if we can't leverage the PG backup protocol that\nyou're building here, it seems pretty darn unlikely we'd have much use\nfor the manifest that's built as part of that.\n\nI'm probably going to lose what credibility I have in criticizing what\nyou're doing with pg_basebackup here, but I started off saying you don't\nhave to make me happy and this is part of why- I really don't think\nthere's much that you're doing with pg_basebackup that is ultimately\ngoing to impact what plans I have for the future, for pretty much\nanything. I haven't got any real specific plans around pg_basebackup,\nthough, point-in-fact, if you put in a bunch of code that shows how to\nget PG and pg_basebackup to do block-level incremental backups in a safe\nand trusted way, that would actually be *really* useful to the\npgbackrest project because we could then lift that logic out of\npg_basebackup and leverage it. If I wanted to be entirely selfish, I'd\nbe pushing you to get block-level incremental backup into pg_basebackup\nas quickly as possible so that we could have such an example of \"how to\ndo it in a way that, if it breaks, the PG community will figure out what\nwent wrong and fix it\". If you look at other things we've done, such as\nnot backing up unlogged tables, that's exactly the approach we've used:\nintroduce the feature into pg_basebackup *first*, make sure the\ncommunity agrees that it's a valid approach and will deal with any\nissues with it (and will take pains to avoid *breaking* it in future\nversions..), and only *then* introduce it into pgbackrest by using the\nsame approach. 
Those other features were well in-line with what makes\nsense for pg_basebackup too though.\n\nWe haven't done that though, and I haven't been pushing in that\ndirection, not because I think it's a bad feature or that I want to\nblock something going into pg_basebackup or whatever, but because I\nthink it's actually going to cause more problems for users than it\nsolves because some users will want to use it (though not all, as we've\nseen on this list, as there's at least some users out there who are as\nscared of the idea of having *just* this in pg_basebackup without the\nother things I talk about above as I am) and then they're going to try\nand hack together all those other things they need around WAL management\nand archiving and expiration and they're likely to get it wrong- perhaps\nin obvious ways, perhaps in relatively subtle ways, but either way,\nthey'll end up with backups that aren't valid that they only discover\nwhen they're in an emergency. Again, perhaps selfish me would say \"oh\ngood, then they'll call me and pay me lots to fix it for them\", but it\ncertainly wouldn't look good for the community- even if all of the\ndocumentation and everything we put out there says that the way they\nwere doing it had this subtle issue or whatever (considering our docs\nstill promote a really bad, imv anyway, archive command kinda makes this\nlikely, if you ask me anyway..), and it wouldn't be good for the user.\n\n> > That's what would make *me* happy. Even some comments about how to\n> > *get* there while also working towards these features would be likely\n> > to make me happy. Instead, I feel like we're being told that we need\n> > this feature badly in v13 and we're going to cut bait and do whatever\n> > is necessary to get us there.\n> \n> This seems like a really unfair accusation given how much work I've\n> put into trying to satisfy you and David. 
If this patch, the parallel\n> full backup patch, and the incremental backup patch were all to get\n> committed to v13, an outcome which seems pretty unlikely to me at this\n> point, then you would have a very significant number of things that\n> you have requested in the course of the various discussions, and\n> AFAICS the only thing you'd have that you don't want is the need to\n> parse the manifest file use while (<>) { @a = split /\\t/, $_ } rather\n> than $a = parse_json(join '', <>). You would, for example, have the\n> ability to request an individual file from the server rather than a\n> complete tarball. Maybe the command that requests a file would lack an\n> encryption option, something which IIUC you would like to have, but\n> that certainly does not leave you worse off. It is easier to add an\n> encryption option to a command which you already have than it is to\n> invent a whole new command -- or really several whole new commands,\n> since such a command is not really usable unless you also have\n> facilities to start and stop a backup through the replication\n> protocol.\n\nNo, the manifest format is definitely not the only issue that I have\nwith this- but as it relates to the thread about building a manifest, my\ncomplaint really is isolated to the format and just forward thinking\nabout how the format you're advocating for will mean custom code for who\nknows how many different tools. While I appreciate the offer to write\nall the bespoke code for every version of the manifest for pgbackrest,\nI'm really not thrilled about the idea of having to have that extra code\nand having to then maintain it. 
Yes, when you compare the single format\nof the manifest and the code required for it against a JSON parser, if\nwe only ever have this one format then it'd win in terms of code, but I\ndon't believe it'll end up being one format, instead we're going to end\nup with multiple formats, each of which will have some additional code\nfor dealing with parsing it, and that's going to add up. That's also\ngoing to, as I said before, make it almost certain that we can't use\nolder tools with newer backups. These are issues that we've thought\nabout and worried about over the years of pgbackrest and with that\nexperience we've come down on the side that a JSON-based format would be\nan altogether better design. That's why we're advocating for it, not\nbecause it requires more code or so that it delays the efforts here, but\nbecause we've been there, we've used other formats, we've dealt with\nuser complaints when we do break things, this is all history for us\nthat's helped us learn- for PG, it looks like the future with a static\nformat, and I get that the future is hard to predict and pg_basebackup\nisn't pgbackrest and yeah, I could be completely wrong because I don't\nactually have a crystal ball, but this starting point sure looks really\nfamiliar.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 7 Jan 2020 20:33:48 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi Robert,\n\nOn 1/7/20 6:33 PM, Stephen Frost wrote:\n\n > These are issues that we've thought\n > about and worried about over the years of pgbackrest and with that\n > experience we've come down on the side that a JSON-based format would be\n > an altogether better design. That's why we're advocating for it, not\n > because it requires more code or so that it delays the efforts here, but\n > because we've been there, we've used other formats, we've dealt with\n > user complaints when we do break things, this is all history for us\n > that's helped us learn- for PG, it looks like the future with a static\n > format, and I get that the future is hard to predict and pg_basebackup\n > isn't pgbackrest and yeah, I could be completely wrong because I don't\n > actually have a crystal ball, but this starting point sure looks really\n > familiar.\n\nFor example, have you considered what will happen if you have a file in \nthe cluster with a tab in the name? This is perfectly valid in Posix \nfilesystems, at least. You may already be escaping tabs but the simple \ncode snippet you provided earlier isn't going to work so well either \nway. It gets complicated quickly.\n\nI know users should not be creating weird files in PGDATA, but it's \namazing how often this sort of thing pops up. We currently have an open \nissue because = in file names breaks our file format. Tab is surely \nless common but it's amazing what users will do.\n\nAnother fun one is 03849840 which fixes the handling of \\ characters in \nthe code which checksums the manifest. The file is not fully JSON but \nthe checksums are and that was initially missed in the C migration. The \nbug never got released but it easily could have been.\n\nIn short, using a quick-and-dirty homegrown format seemed great at first \nbut has caused many headaches. 
Because we don't change the repo format \nacross releases we are kind of stuck with past sins until we create a \nnew repo format and write update/compatibility code. Users are \nunderstandably concerned if new versions of the software won't work with \ntheir repo, some of which contain years of backups (really).\n\nThis doesn't even get into the work everyone else will need to do to \nread a custom format. I do appreciate your offer of contributing parser \ncode to pgBackRest, but honestly I'd rather it were not necessary. \nThough of course I'd still love to see a contribution of some sort from you!\n\nHard experience tells me that using a standard format where all these \nissues have been worked out is the way to go.\n\nThere are a few MIT-licensed JSON projects that are implemented in a \nsingle file. cJSON is very capable while JSMN is very minimal. Is it \npossible that one of those (or something like it) would be acceptable? \nIt looks like the one requirement we have is that the JSON can be \nstreamed rather than just building up one big blob? Even with that \nrequirement there are a few tricks that can be used. JSON nests rather \nnicely after all so the individual file records can be transmitted \nindependently of the overall file format.\n\nYour first question may be why didn't pgBackRest use one of those \nparsers? The answer is that JSON parsing/rendering is pretty trivial. \nMemory management and a (datum-like) type system are the hard parts and \npgBackRest already had those.\n\nWould it be acceptable to bring in JSON code with a compatible license \nto use in libcommon? If so I'm willing to help adapt that code for use \nin Postgres. It's possible that the pgBackRest code could be adapted \nsimilarly, but it might make more sense to start from one of these \ngeneral purpose parsers.\n\nThoughts?\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 9 Jan 2020 18:19:00 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 8:19 PM David Steele <david@pgmasters.net> wrote:\n> For example, have you considered what will happen if you have a file in\n> the cluster with a tab in the name? This is perfectly valid in Posix\n> filesystems, at least.\n\nYeah, there's code for that in the patch I posted. I don't think the\nvalidator patch deals with it, but that's fixable.\n\n> You may already be escaping tabs but the simple\n> code snippet you provided earlier isn't going to work so well either\n> way. It gets complicated quickly.\n\nSure, but obviously neither of those code snippets were intended to be\nused straight out of the box. Even after you parse the manifest as\nJSON, you would still - if you really want to validate it - check that\nyou have the keys and values you expect, that the individual field\nvalues are sensible, etc. I still stand by my earlier contention that,\nas things stand today, you can parse an ad-hoc format in less code\nthan a JSON format. If we had a JSON parser available on the front\nend, I think it'd be roughly comparable, but maybe the JSON format\nwould come out a bit ahead. Not sure.\n\n> There are a few MIT-licensed JSON projects that are implemented in a\n> single file. cJSON is very capable while JSMN is very minimal. Is is\n> possible that one of those (or something like it) would be acceptable?\n> It looks like the one requirement we have is that the JSON can be\n> streamed rather than just building up one big blob? Even with that\n> requirement there are a few tricks that can be used. JSON nests rather\n> nicely after all so the individual file records can be transmitted\n> independently of the overall file format.\n\nI haven't really looked at these. I would have expected that including\na second JSON parser in core would provoke significant opposition.\nGenerally, people dislike having more than one piece of code to do the\nsame thing. 
I would also expect that depending on an external package\nwould provoke significant opposition. If we suck the code into core,\nthen we have to keep it up to date with the upstream, which is a\nsignificant maintenance burden - look at all the time Tom has spent on\nsnowball, regex, and time zone code over the years. If we don't suck\nthe code into core but depend on it, then every developer needs to\nhave that package installed on their operating system, and every\npackager has to make sure that it is being built for their OS so that\nPostgreSQL can depend on it. Perhaps JSON is so popular today that\nimposing such a requirement would provoke only a groundswell of\nsupport, but based on past precedent I would assume that if I\ncommitted a patch of this sort the chances that I'd have to revert it\nwould be about 99.9%. Optional dependencies for optional features are\nusually pretty well-tolerated when they're clearly necessary: e.g. you\ncan't really do JIT without depending on something like LLVM, but the\nbar for a mandatory dependency has historically been quite high.\n\n> Would it be acceptable to bring in JSON code with a compatible license\n> to use in libcommon? If so I'm willing to help adapt that code for use\n> in Postgres. It's possible that the pgBackRest code could be adapted\n> similarly, but it might make more sense to start from one of these\n> general purpose parsers.\n\nFor the reasons above, I expect this approach would be rejected, by\nTom and by others.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 14 Jan 2020 11:54:30 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... I would also expect that depending on an external package\n> would provoke significant opposition. If we suck the code into core,\n> then we have to keep it up to date with the upstream, which is a\n> significant maintenance burden - look at all the time Tom has spent on\n> snowball, regex, and time zone code over the years.\n\nAlso worth noting is that we have a seriously bad track record about\nchoosing external packages to depend on. The regex code has no upstream\nmaintainer anymore (well, the Tcl guys seem to think that *we* are\nupstream for that now), and snowball is next door to moribund.\nWith C not being a particularly hip language to develop in anymore,\nit wouldn't surprise me in the least for any C-code JSON parser\nwe might pick to go dead pretty soon.\n\nBetween that problem and the likelihood that we'd need to make\nsignificant code changes anyway to meet our own coding style etc\nexpectations, I think really we'd have to assume that we're going\nto fork and maintain our own copy of any code we pick.\n\nNow, if it's a small enough chunk of code (and really, how complex\nis JSON parsing anyway) maybe that doesn't matter. But I tend to\nagree with Robert's position that it's a big ask for this patch\nto introduce a frontend JSON parser.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 12:53:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 12:53:04PM -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > ... I would also expect that depending on an external package\n> > would provoke significant opposition. If we suck the code into core,\n> > then we have to keep it up to date with the upstream, which is a\n> > significant maintenance burden - look at all the time Tom has spent on\n> > snowball, regex, and time zone code over the years.\n> \n> Also worth noting is that we have a seriously bad track record about\n> choosing external packages to depend on. The regex code has no upstream\n> maintainer anymore (well, the Tcl guys seem to think that *we* are\n> upstream for that now), and snowball is next door to moribund.\n> With C not being a particularly hip language to develop in anymore,\n> it wouldn't surprise me in the least for any C-code JSON parser\n> we might pick to go dead pretty soon.\n\nGiven jq's extreme popularity and compatible license, I'd nominate that.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 14 Jan 2020 19:33:12 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* David Fetter (david@fetter.org) wrote:\n> On Tue, Jan 14, 2020 at 12:53:04PM -0500, Tom Lane wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > ... I would also expect that depending on an external package\n> > > would provoke significant opposition. If we suck the code into core,\n> > > then we have to keep it up to date with the upstream, which is a\n> > > significant maintenance burden - look at all the time Tom has spent on\n> > > snowball, regex, and time zone code over the years.\n> > \n> > Also worth noting is that we have a seriously bad track record about\n> > choosing external packages to depend on. The regex code has no upstream\n> > maintainer anymore (well, the Tcl guys seem to think that *we* are\n> > upstream for that now), and snowball is next door to moribund.\n> > With C not being a particularly hip language to develop in anymore,\n> > it wouldn't surprise me in the least for any C-code JSON parser\n> > we might pick to go dead pretty soon.\n> \n> Given jq's extreme popularity and compatible license, I'd nominate that.\n\nI don't think that really changes Tom's concerns here about having an\n\"upstream\" for this.\n\nFor my part, I don't really agree with the whole \"we don't want two\ndifferent JSON parsers\" when we've got two of a bunch of stuff between\nthe frontend and the backend, particularly since I don't really think\nit'll end up being *that* much code.\n\nMy thought, which I had expressed to David (though he obviously didn't\nentirely agree with me since he suggested the other options), was to\nadapt the pgBackRest JSON parser, which isn't really all that much code.\n\nFrustratingly, that code has got some internal pgBackRest dependency on\nthings like the memory context system (which looks, unsurprisingly, an\nawful lot like what is in PG backend), the error handling and logging\nsystems (which are different from PG because they're quite intentionally\nsegregated from each other- something 
PG would benefit from, imv..), and\nVariadics (known in the PG backend as Datums, and quite similar to\nthem..).\n\nEven so, David's offered to adjust the code to use the frontend's memory\nmanagement (*cough* malloc()..), and error handling/logging, and he had\nsome idea for Variadics (or maybe just pulling the backend's Datum\nsystem in..? He could answer better), and basically write a frontend\nJSON parser for PG without too much code, no external dependencies, and\nto make sure it answers this requirement, and I've agreed that he can\nspend some time on that instead of pgBackRest to get us through this, if\neveryone else is agreeable to the idea. Obviously this isn't intended\nto box anyone in- if there turns out even after the code's been written\nto be some fatal issue with using it, so be it, but we're offering to\nhelp.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 14 Jan 2020 15:35:40 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 03:35:40PM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * David Fetter (david@fetter.org) wrote:\n> > On Tue, Jan 14, 2020 at 12:53:04PM -0500, Tom Lane wrote:\n> > > Robert Haas <robertmhaas@gmail.com> writes:\n> > > > ... I would also expect that depending on an external package\n> > > > would provoke significant opposition. If we suck the code into core,\n> > > > then we have to keep it up to date with the upstream, which is a\n> > > > significant maintenance burden - look at all the time Tom has spent on\n> > > > snowball, regex, and time zone code over the years.\n> > > \n> > > Also worth noting is that we have a seriously bad track record about\n> > > choosing external packages to depend on. The regex code has no upstream\n> > > maintainer anymore (well, the Tcl guys seem to think that *we* are\n> > > upstream for that now), and snowball is next door to moribund.\n> > > With C not being a particularly hip language to develop in anymore,\n> > > it wouldn't surprise me in the least for any C-code JSON parser\n> > > we might pick to go dead pretty soon.\n> > \n> > Given jq's extreme popularity and compatible license, I'd nominate that.\n> \n> I don't think that really changes Tom's concerns here about having an\n> \"upstream\" for this.\n> \n> For my part, I don't really agree with the whole \"we don't want two\n> different JSON parsers\" when we've got two of a bunch of stuff between\n> the frontend and the backend, particularly since I don't really think\n> it'll end up being *that* much code.\n> \n> My thought, which I had expressed to David (though he obviously didn't\n> entirely agree with me since he suggested the other options), was to\n> adapt the pgBackRest JSON parser, which isn't really all that much code.\n> \n> Frustratingly, that code has got some internal pgBackRest dependency on\n> things like the memory context system (which looks, unsurprisingly, an\n> awful lot like what is in PG backend), the error 
handling and logging\n> systems (which are different from PG because they're quite intentionally\n> segregated from each other- something PG would benefit from, imv..), and\n> Variadics (known in the PG backend as Datums, and quite similar to\n> them..).\n\nIt might be more fun to put in that infrastructure and have it gate\nthe manifest feature than to have two vastly different parsers to\ncontend with. I get that putting off the backup manifests isn't an\nawesome prospect, but neither is rushing them in and getting them\nwrong in ways we'll still be regretting a decade hence.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 14 Jan 2020 23:14:49 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi Stephen,\n\nOn 1/14/20 1:35 PM, Stephen Frost wrote:\n> \n> My thought, which I had expressed to David (though he obviously didn't\n> entirely agree with me since he suggested the other options), was to\n> adapt the pgBackRest JSON parser, which isn't really all that much code.\n\nIt's not that I didn't agree, it's just that the pgBackRest code does \nuse mem contexts, the type system, etc. After looking at some other \nsolutions with similar amounts of code I thought they might be more \nacceptable. At least it seemed like a good idea to throw it out there.\n\n> Even so, David's offered to adjust the code to use the frontend's memory\n> management (*cough* malloc()..), and error handling/logging, and he had\n> some idea for Variadics (or maybe just pulling the backend's Datum\n> system in..? He could answer better), and basically write a frontend\n> JSON parser for PG without too much code, no external dependencies, and\n> to make sure it answers this requirement, and I've agreed that he can\n> spend some time on that instead of pgBackRest to get us through this, if\n> everyone else is agreeable to the idea. \n\nTo keep it simple I think we are left with callbacks or a somewhat \nstatic \"what's the next datum\" kind of approach. I think the latter \ncould get us through a release or two while we make improvements.\n\n> Obviously this isn't intended\n> to box anyone in- if there turns out even after the code's been written\n> to be some fatal issue with using it, so be it, but we're offering to\n> help.\n\nI'm happy to work up a prototype unless the consensus is that we \nabsolutely don't want a second JSON parser in core.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 14 Jan 2020 21:36:30 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "David Steele <david@pgmasters.net> writes:\n> I'm happy to work up a prototype unless the consensus is that we \n> absolutely don't want a second JSON parser in core.\n\nHow much code are we talking about? If the answer is \"a few hundred\nlines\", it's a lot easier to swallow than if it's \"a few thousand\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 23:47:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi Tom,\n\nOn 1/14/20 9:47 PM, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> I'm happy to work up a prototype unless the consensus is that we\n>> absolutely don't want a second JSON parser in core.\n> \n> How much code are we talking about? If the answer is \"a few hundred\n> lines\", it's a lot easier to swallow than if it's \"a few thousand\".\n\nIt's currently about a thousand lines but we have a lot of functions to \nconvert to/from specific types. I imagine the line count would be \nsimilar using one of the approaches I discussed above.\n\nCurrent source attached for reference.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net",
"msg_date": "Tue, 14 Jan 2020 22:21:00 -0700",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 12:53:04PM -0500, Tom Lane wrote:\n> Also worth noting is that we have a seriously bad track record about\n> choosing external packages to depend on. The regex code has no upstream\n> maintainer anymore (well, the Tcl guys seem to think that *we* are\n> upstream for that now), and snowball is next door to moribund.\n> With C not being a particularly hip language to develop in anymore,\n> it wouldn't surprise me in the least for any C-code JSON parser\n> we might pick to go dead pretty soon.\n> \n> Between that problem and the likelihood that we'd need to make\n> significant code changes anyway to meet our own coding style etc\n> expectations, I think really we'd have to assume that we're going\n> to fork and maintain our own copy of any code we pick.\n> \n> Now, if it's a small enough chunk of code (and really, how complex\n> is JSON parsing anyway) maybe that doesn't matter. But I tend to\n> agree with Robert's position that it's a big ask for this patch\n> to introduce a frontend JSON parser.\n\nI know we have talked about our experience in maintaining external code:\n\n* TCL regex\n* Snowball\n* Timezone handling\n\nHowever, the regex code is complex, and the Snowball and timezone code\nis improved as they add new languages and time zones. I don't see JSON\nparsing as complex or likely to change much, so it might be acceptable\nto include it in our frontend code.\n\nAs far as using tab-delimited data, I know this usage was compared to\npostgresql.conf and pg_hba.conf, which don't change much. However,\nthose files are not usually written, and do not contain user data, while\nthe backup file might contain user-specified paths if they are not just\nrelative to the PGDATA directory, and that would make escaping a\nrequirement.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 18 Jan 2020 09:36:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 6:11 PM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n> Thank you for review comments.\n\nHere's a new patch set for this feature.\n\n0001 adds checksum helper functions, similar to what Suraj had\nincorporated into my original patch but separated out into a separate\npatch and with some different aesthetic decisions. I also decided to\nsupport all of the SHA variants that PG knows about as options and\nadded a function to parse a checksum algorithm name, along the lines I\nsuggested previously.\n\n0002 teaches the server to generate a backup manifest using the format\nI originally proposed. This is similar to the patch I posted\npreviously, but it spools the manifest to disk as it's being\ngenerated, so that we don't run the server out of memory or fail when\nhitting the 1GB allocation limit.\n\n0003 adds a new utility, pg_validatebackup, to validate a backup\nagainst a manifest. Suraj tried to incorporate this into\npg_basebackup, which I initially thought might be OK but eventually\ndecided wasn't good, partly because this really wants to take some\ncommand-line options entirely unrelated to the options accepted by\npg_basebackup. I tried to improve the error checking and the order in\nwhich various things are done, too. This is a basically a complete\nrewrite as compared with Suraj's version.\n\n0004 modifies the server to generate a backup manifest in JSON format\nrather than my originally proposed format. This allows for some\ncomparison of the code doing it one way vs. the other. Assuming we\nstick with JSON, I will squash this with 0002 at some point.\n\n0005 is a very much work-in-progress and proof-of-concept to modify\nthe backup validator to understand the JSON format. It doesn't\nvalidate the manifest checksum at this point; it just prints it out.\nThe error handling needs work. 
It has other problems, and bugs.\nAlthough I'm still not very happy about the idea of using JSON here,\nI'm pretty happy with the basic approach this patch takes. It\ndemonstrates that the JSON parser can be used for non-trivial things\nin frontend code, and I'd say the code even looks reasonably clean -\nwith the exception of small details like being buggy and\nunder-commented.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 27 Feb 2020 21:22:25 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 2/27/20 9:22 PM, Robert Haas wrote:\n> Here's a new patch set for this feature.\n\nThanks Robert. After applying all the 5 patches (v8-00*) against PG v13 \n(commit id -afb5465e0cfce7637066eaaaeecab30b0f23fbe3) ,\n\nThere are few issues/observations\n\n1)Getting segmentation fault error if we try pg_validatebackup against \na valid backup_manifest file but data directory path is WRONG\n\n[centos@tushar-ldap-docker bin]$ ./pg_basebackup -D bk \n--manifest-checksums=sha224\n\n[centos@tushar-ldap-docker bin]$ cp bk/backup_manifest /tmp/.\n\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup -m \n/tmp/backup_manifest random_directory/\npg_validatebackup: * manifest_checksum = \nf0460cd6aa13cf0c5e35426a41af940a9231e6425cd65115a19778b7abfdaef9\npg_validatebackup: error: could not open directory \"random_directory\": \nNo such file or directory\nSegmentation fault\n\n2) when used '-R' option at the time of create base backup\n\n[centos@tushar-ldap-docker bin]$ ./pg_basebackup -D bar -R\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup bar\npg_validatebackup: * manifest_checksum = \na195d3a3a82a41200c9ac92c12d764d23c810e7e91b31c44a7d04f67ce012edc\npg_validatebackup: error: \"standby.signal\" is present on disk but not in \nthe manifest\npg_validatebackup: error: \"postgresql.auto.conf\" has size 286 on disk \nbut size 88 in the manifest\n[centos@tushar-ldap-docker bin]$\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Tue, 3 Mar 2020 16:04:07 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/3/20 4:04 PM, tushar wrote:\n> Thanks Robert. After applying all the 5 patches (v8-00*) against PG \n> v13 (commit id -afb5465e0cfce7637066eaaaeecab30b0f23fbe3) , \n\nThere is a scenario where pg_validatebackup is not throwing an error if \nsome file deleted from pg_wal/ folder and but later at the time of \nrestoring - we are getting an error\n\n[centos@tushar-ldap-docker bin]$ ./pg_basebackup -D test1\n\n[centos@tushar-ldap-docker bin]$ ls test1/pg_wal/\n000000010000000000000010 archive_status\n\n[centos@tushar-ldap-docker bin]$ rm -rf test1/pg_wal/*\n\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup test1\npg_validatebackup: * manifest_checksum = \n88f1ed995c83e86252466a2c88b3e660a69cfc76c169991134b101c4f16c9df7\npg_validatebackup: backup successfully verified\n\n[centos@tushar-ldap-docker bin]$ ./pg_ctl -D test1 start -o '-p 3333'\nwaiting for server to start....2020-03-02 20:05:22.732 IST [21441] LOG: \nstarting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc \n(GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n2020-03-02 20:05:22.733 IST [21441] LOG: listening on IPv6 address \n\"::1\", port 3333\n2020-03-02 20:05:22.733 IST [21441] LOG: listening on IPv4 address \n\"127.0.0.1\", port 3333\n2020-03-02 20:05:22.736 IST [21441] LOG: listening on Unix socket \n\"/tmp/.s.PGSQL.3333\"\n2020-03-02 20:05:22.739 IST [21442] LOG: database system was \ninterrupted; last known up at 2020-03-02 20:04:35 IST\n2020-03-02 20:05:22.739 IST [21442] LOG: creating missing WAL directory \n\"pg_wal/archive_status\"\n2020-03-02 20:05:22.886 IST [21442] LOG: invalid checkpoint record\n2020-03-02 20:05:22.886 IST [21442] FATAL: could not locate required \ncheckpoint record\n2020-03-02 20:05:22.886 IST [21442] HINT: If you are restoring from a \nbackup, touch \n\"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/recovery.signal\" and \nadd required recovery options.\n If you are not restoring from a backup, try removing the file 
\n\"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/backup_label\".\n Be careful: removing \n\"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/backup_label\" will \nresult in a corrupt cluster if restoring from a backup.\n2020-03-02 20:05:22.886 IST [21441] LOG: startup process (PID 21442) \nexited with exit code 1\n2020-03-02 20:05:22.886 IST [21441] LOG: aborting startup due to \nstartup process failure\n2020-03-02 20:05:22.889 IST [21441] LOG: database system is shut down\n stopped waiting\npg_ctl: could not start server\nExamine the log output.\n[centos@tushar-ldap-docker bin]$\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Tue, 3 Mar 2020 20:19:42 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\nAnother observation , if i change the ownership of a file which is under \nglobal/ directory\ni.e\n\n[root@tushar-ldap-docker global]# chown enterprisedb 2396\n\nand run the pg_validatebackup command, i am getting this message -\n\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup gggg\npg_validatebackup: * manifest_checksum = \ne8cb007bcc9c0deab6eff51cd8d9d9af6af35b86e02f3055e60e70e56737e877\npg_validatebackup: error: could not open file \"global/2396\": Permission \ndenied\n*** Error in `./pg_validatebackup': double free or corruption (!prev): \n0x0000000001850ba0 ***\n======= Backtrace: =========\n/lib64/libc.so.6(+0x81679)[0x7fa2248e3679]\n./pg_validatebackup[0x401f4c]\n/lib64/libc.so.6(__libc_start_main+0xf5)[0x7fa224884505]\n./pg_validatebackup[0x402049]\n======= Memory map: ========\n00400000-00415000 r-xp 00000000 fd:03 4044545 \n/home/centos/pg13_bk_mani/edb/edbpsql/bin/pg_validatebackup\n00614000-00615000 r--p 00014000 fd:03 4044545 \n/home/centos/pg13_bk_mani/edb/edbpsql/bin/pg_validatebackup\n00615000-00616000 rw-p 00015000 fd:03 4044545 \n/home/centos/pg13_bk_mani/edb/edbpsql/bin/pg_validatebackup\n017f3000-01878000 rw-p 00000000 00:00 0 \n[heap]\n7fa218000000-7fa218021000 rw-p 00000000 00:00 0\n7fa218021000-7fa21c000000 ---p 00000000 00:00 0\n7fa21e122000-7fa21e137000 r-xp 00000000 fd:03 141697 \n/usr/lib64/libgcc_s-4.8.5-20150702.so.1\n7fa21e137000-7fa21e336000 ---p 00015000 fd:03 141697 \n/usr/lib64/libgcc_s-4.8.5-20150702.so.1\n7fa21e336000-7fa21e337000 r--p 00014000 fd:03 141697 \n/usr/lib64/libgcc_s-4.8.5-20150702.so.1\n7fa21e337000-7fa21e338000 rw-p 00015000 fd:03 141697 \n/usr/lib64/libgcc_s-4.8.5-20150702.so.1\n7fa21e338000-7fa224862000 r--p 00000000 fd:03 266442 \n/usr/lib/locale/locale-archive\n7fa224862000-7fa224a25000 r-xp 00000000 fd:03 134456 \n/usr/lib64/libc-2.17.so\n7fa224a25000-7fa224c25000 ---p 001c3000 fd:03 134456 \n/usr/lib64/libc-2.17.so\n7fa224c25000-7fa224c29000 r--p 001c3000 fd:03 134456 
\n/usr/lib64/libc-2.17.so\n7fa224c29000-7fa224c2b000 rw-p 001c7000 fd:03 134456 \n/usr/lib64/libc-2.17.so\n7fa224c2b000-7fa224c30000 rw-p 00000000 00:00 0\n7fa224c30000-7fa224c47000 r-xp 00000000 fd:03 134485 \n/usr/lib64/libpthread-2.17.so\n7fa224c47000-7fa224e46000 ---p 00017000 fd:03 134485 \n/usr/lib64/libpthread-2.17.so\n7fa224e46000-7fa224e47000 r--p 00016000 fd:03 134485 \n/usr/lib64/libpthread-2.17.so\n7fa224e47000-7fa224e48000 rw-p 00017000 fd:03 134485 \n/usr/lib64/libpthread-2.17.so\n7fa224e48000-7fa224e4c000 rw-p 00000000 00:00 0\n7fa224e4c000-7fa224e90000 r-xp 00000000 fd:03 4044478 \n/home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n7fa224e90000-7fa225090000 ---p 00044000 fd:03 4044478 \n/home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n7fa225090000-7fa225093000 r--p 00044000 fd:03 4044478 \n/home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n7fa225093000-7fa225094000 rw-p 00047000 fd:03 4044478 \n/home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n7fa225094000-7fa2250b6000 r-xp 00000000 fd:03 130333 \n/usr/lib64/ld-2.17.so\n7fa22527d000-7fa2252a2000 rw-p 00000000 00:00 0\n7fa2252b3000-7fa2252b5000 rw-p 00000000 00:00 0\n7fa2252b5000-7fa2252b6000 r--p 00021000 fd:03 130333 \n/usr/lib64/ld-2.17.so\n7fa2252b6000-7fa2252b7000 rw-p 00022000 fd:03 130333 \n/usr/lib64/ld-2.17.so\n7fa2252b7000-7fa2252b8000 rw-p 00000000 00:00 0\n7ffdf354f000-7ffdf3570000 rw-p 00000000 00:00 0 \n[stack]\n7ffdf3572000-7ffdf3574000 r-xp 00000000 00:00 0 \n[vdso]\nffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 \n[vsyscall]\nAborted\n[centos@tushar-ldap-docker bin]$\n\n\nI am getting the error message but along with \"*** Error in \n`./pg_validatebackup': double free or corruption (!prev): \n0x0000000001850ba0 ***\" messages\n\nIs this expected ?\n\nregards,\n\nOn 3/3/20 8:19 PM, tushar wrote:\n> On 3/3/20 4:04 PM, tushar wrote:\n>> Thanks Robert. 
After applying all the 5 patches (v8-00*) against PG \n>> v13 (commit id -afb5465e0cfce7637066eaaaeecab30b0f23fbe3) , \n>\n> There is a scenario where pg_validatebackup is not throwing an error \n> if some file deleted from pg_wal/ folder and but later at the time of \n> restoring - we are getting an error\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_basebackup -D test1\n>\n> [centos@tushar-ldap-docker bin]$ ls test1/pg_wal/\n> 000000010000000000000010 archive_status\n>\n> [centos@tushar-ldap-docker bin]$ rm -rf test1/pg_wal/*\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup test1\n> pg_validatebackup: * manifest_checksum = \n> 88f1ed995c83e86252466a2c88b3e660a69cfc76c169991134b101c4f16c9df7\n> pg_validatebackup: backup successfully verified\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_ctl -D test1 start -o '-p 3333'\n> waiting for server to start....2020-03-02 20:05:22.732 IST [21441] \n> LOG: starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by \n> gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n> 2020-03-02 20:05:22.733 IST [21441] LOG: listening on IPv6 address \n> \"::1\", port 3333\n> 2020-03-02 20:05:22.733 IST [21441] LOG: listening on IPv4 address \n> \"127.0.0.1\", port 3333\n> 2020-03-02 20:05:22.736 IST [21441] LOG: listening on Unix socket \n> \"/tmp/.s.PGSQL.3333\"\n> 2020-03-02 20:05:22.739 IST [21442] LOG: database system was \n> interrupted; last known up at 2020-03-02 20:04:35 IST\n> 2020-03-02 20:05:22.739 IST [21442] LOG: creating missing WAL \n> directory \"pg_wal/archive_status\"\n> 2020-03-02 20:05:22.886 IST [21442] LOG: invalid checkpoint record\n> 2020-03-02 20:05:22.886 IST [21442] FATAL: could not locate required \n> checkpoint record\n> 2020-03-02 20:05:22.886 IST [21442] HINT: If you are restoring from a \n> backup, touch \n> \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/recovery.signal\" and \n> add required recovery options.\n> If you are not restoring from a backup, try removing the file \n> 
\"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/backup_label\".\n> Be careful: removing \n> \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/backup_label\" will \n> result in a corrupt cluster if restoring from a backup.\n> 2020-03-02 20:05:22.886 IST [21441] LOG: startup process (PID 21442) \n> exited with exit code 1\n> 2020-03-02 20:05:22.886 IST [21441] LOG: aborting startup due to \n> startup process failure\n> 2020-03-02 20:05:22.889 IST [21441] LOG: database system is shut down\n> stopped waiting\n> pg_ctl: could not start server\n> Examine the log output.\n> [centos@tushar-ldap-docker bin]$\n>\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Wed, 4 Mar 2020 15:26:16 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Another scenario, in which if we modify Manifest-Checksum\" value from \nbackup_manifest file , we are not getting an error\n\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup data/\npg_validatebackup: * manifest_checksum = \n28d082921650d0ae881de8ceb122c8d2af5f449f51ecfb446827f7f49f91f65d\npg_validatebackup: backup successfully verified\n\nopen backup_manifest file and replace\n\n\"Manifest-Checksum\": \n\"8d082921650d0ae881de8ceb122c8d2af5f449f51ecfb446827f7f49f91f65d\"}\nwith\n\"Manifest-Checksum\": \"Hello World\"}\n\nrerun the pg_validatebackup\n\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup data/\npg_validatebackup: * manifest_checksum = Hello World\npg_validatebackup: backup successfully verified\n\nregards,\n\nOn 3/4/20 3:26 PM, tushar wrote:\n> Hi,\n> Another observation , if i change the ownership of a file which is \n> under global/ directory\n> i.e\n>\n> [root@tushar-ldap-docker global]# chown enterprisedb 2396\n>\n> and run the pg_validatebackup command, i am getting this message -\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup gggg\n> pg_validatebackup: * manifest_checksum = \n> e8cb007bcc9c0deab6eff51cd8d9d9af6af35b86e02f3055e60e70e56737e877\n> pg_validatebackup: error: could not open file \"global/2396\": \n> Permission denied\n> *** Error in `./pg_validatebackup': double free or corruption (!prev): \n> 0x0000000001850ba0 ***\n> ======= Backtrace: =========\n> /lib64/libc.so.6(+0x81679)[0x7fa2248e3679]\n> ./pg_validatebackup[0x401f4c]\n> /lib64/libc.so.6(__libc_start_main+0xf5)[0x7fa224884505]\n> ./pg_validatebackup[0x402049]\n> ======= Memory map: ========\n> 00400000-00415000 r-xp 00000000 fd:03 4044545 \n> /home/centos/pg13_bk_mani/edb/edbpsql/bin/pg_validatebackup\n> 00614000-00615000 r--p 00014000 fd:03 4044545 \n> /home/centos/pg13_bk_mani/edb/edbpsql/bin/pg_validatebackup\n> 00615000-00616000 rw-p 00015000 fd:03 4044545 \n> /home/centos/pg13_bk_mani/edb/edbpsql/bin/pg_validatebackup\n> 
017f3000-01878000 rw-p 00000000 00:00 \n> 0 [heap]\n> 7fa218000000-7fa218021000 rw-p 00000000 00:00 0\n> 7fa218021000-7fa21c000000 ---p 00000000 00:00 0\n> 7fa21e122000-7fa21e137000 r-xp 00000000 fd:03 \n> 141697 /usr/lib64/libgcc_s-4.8.5-20150702.so.1\n> 7fa21e137000-7fa21e336000 ---p 00015000 fd:03 \n> 141697 /usr/lib64/libgcc_s-4.8.5-20150702.so.1\n> 7fa21e336000-7fa21e337000 r--p 00014000 fd:03 \n> 141697 /usr/lib64/libgcc_s-4.8.5-20150702.so.1\n> 7fa21e337000-7fa21e338000 rw-p 00015000 fd:03 \n> 141697 /usr/lib64/libgcc_s-4.8.5-20150702.so.1\n> 7fa21e338000-7fa224862000 r--p 00000000 fd:03 \n> 266442 /usr/lib/locale/locale-archive\n> 7fa224862000-7fa224a25000 r-xp 00000000 fd:03 \n> 134456 /usr/lib64/libc-2.17.so\n> 7fa224a25000-7fa224c25000 ---p 001c3000 fd:03 \n> 134456 /usr/lib64/libc-2.17.so\n> 7fa224c25000-7fa224c29000 r--p 001c3000 fd:03 \n> 134456 /usr/lib64/libc-2.17.so\n> 7fa224c29000-7fa224c2b000 rw-p 001c7000 fd:03 \n> 134456 /usr/lib64/libc-2.17.so\n> 7fa224c2b000-7fa224c30000 rw-p 00000000 00:00 0\n> 7fa224c30000-7fa224c47000 r-xp 00000000 fd:03 \n> 134485 /usr/lib64/libpthread-2.17.so\n> 7fa224c47000-7fa224e46000 ---p 00017000 fd:03 \n> 134485 /usr/lib64/libpthread-2.17.so\n> 7fa224e46000-7fa224e47000 r--p 00016000 fd:03 \n> 134485 /usr/lib64/libpthread-2.17.so\n> 7fa224e47000-7fa224e48000 rw-p 00017000 fd:03 \n> 134485 /usr/lib64/libpthread-2.17.so\n> 7fa224e48000-7fa224e4c000 rw-p 00000000 00:00 0\n> 7fa224e4c000-7fa224e90000 r-xp 00000000 fd:03 4044478 \n> /home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n> 7fa224e90000-7fa225090000 ---p 00044000 fd:03 4044478 \n> /home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n> 7fa225090000-7fa225093000 r--p 00044000 fd:03 4044478 \n> /home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n> 7fa225093000-7fa225094000 rw-p 00047000 fd:03 4044478 \n> /home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n> 7fa225094000-7fa2250b6000 r-xp 00000000 fd:03 \n> 130333 /usr/lib64/ld-2.17.so\n> 
7fa22527d000-7fa2252a2000 rw-p 00000000 00:00 0\n> 7fa2252b3000-7fa2252b5000 rw-p 00000000 00:00 0\n> 7fa2252b5000-7fa2252b6000 r--p 00021000 fd:03 \n> 130333 /usr/lib64/ld-2.17.so\n> 7fa2252b6000-7fa2252b7000 rw-p 00022000 fd:03 \n> 130333 /usr/lib64/ld-2.17.so\n> 7fa2252b7000-7fa2252b8000 rw-p 00000000 00:00 0\n> 7ffdf354f000-7ffdf3570000 rw-p 00000000 00:00 \n> 0 [stack]\n> 7ffdf3572000-7ffdf3574000 r-xp 00000000 00:00 \n> 0 [vdso]\n> ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 \n> 0 [vsyscall]\n> Aborted\n> [centos@tushar-ldap-docker bin]$\n>\n>\n> I am getting the error message but along with \"*** Error in \n> `./pg_validatebackup': double free or corruption (!prev): \n> 0x0000000001850ba0 ***\" messages\n>\n> Is this expected ?\n>\n> regards,\n>\n> On 3/3/20 8:19 PM, tushar wrote:\n>> On 3/3/20 4:04 PM, tushar wrote:\n>>> Thanks Robert. After applying all the 5 patches (v8-00*) against PG \n>>> v13 (commit id -afb5465e0cfce7637066eaaaeecab30b0f23fbe3) , \n>>\n>> There is a scenario where pg_validatebackup is not throwing an error \n>> if some file deleted from pg_wal/ folder and but later at the time \n>> of restoring - we are getting an error\n>>\n>> [centos@tushar-ldap-docker bin]$ ./pg_basebackup -D test1\n>>\n>> [centos@tushar-ldap-docker bin]$ ls test1/pg_wal/\n>> 000000010000000000000010 archive_status\n>>\n>> [centos@tushar-ldap-docker bin]$ rm -rf test1/pg_wal/*\n>>\n>> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup test1\n>> pg_validatebackup: * manifest_checksum = \n>> 88f1ed995c83e86252466a2c88b3e660a69cfc76c169991134b101c4f16c9df7\n>> pg_validatebackup: backup successfully verified\n>>\n>> [centos@tushar-ldap-docker bin]$ ./pg_ctl -D test1 start -o '-p 3333'\n>> waiting for server to start....2020-03-02 20:05:22.732 IST [21441] \n>> LOG: starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by \n>> gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n>> 2020-03-02 20:05:22.733 IST [21441] LOG: listening on IPv6 address \n>> 
\"::1\", port 3333\n>> 2020-03-02 20:05:22.733 IST [21441] LOG: listening on IPv4 address \n>> \"127.0.0.1\", port 3333\n>> 2020-03-02 20:05:22.736 IST [21441] LOG: listening on Unix socket \n>> \"/tmp/.s.PGSQL.3333\"\n>> 2020-03-02 20:05:22.739 IST [21442] LOG: database system was \n>> interrupted; last known up at 2020-03-02 20:04:35 IST\n>> 2020-03-02 20:05:22.739 IST [21442] LOG: creating missing WAL \n>> directory \"pg_wal/archive_status\"\n>> 2020-03-02 20:05:22.886 IST [21442] LOG: invalid checkpoint record\n>> 2020-03-02 20:05:22.886 IST [21442] FATAL: could not locate required \n>> checkpoint record\n>> 2020-03-02 20:05:22.886 IST [21442] HINT: If you are restoring from \n>> a backup, touch \n>> \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/recovery.signal\" and \n>> add required recovery options.\n>> If you are not restoring from a backup, try removing the file \n>> \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/backup_label\".\n>> Be careful: removing \n>> \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/backup_label\" will \n>> result in a corrupt cluster if restoring from a backup.\n>> 2020-03-02 20:05:22.886 IST [21441] LOG: startup process (PID 21442) \n>> exited with exit code 1\n>> 2020-03-02 20:05:22.886 IST [21441] LOG: aborting startup due to \n>> startup process failure\n>> 2020-03-02 20:05:22.889 IST [21441] LOG: database system is shut down\n>> stopped waiting\n>> pg_ctl: could not start server\n>> Examine the log output.\n>> [centos@tushar-ldap-docker bin]$\n>>\n>\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Wed, 4 Mar 2020 15:51:32 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nThere is a scenario in which i add something inside the pg_tablespace \ndirectory , i am getting an error like-\n\npg_validatebackup: * manifest_checksum = \n77ddacb4e7e02e2b880792a19a3adf09266dd88553dd15cfd0c22caee7d9cc04\npg_validatebackup: error: \"pg_tblspc/16385/*PG_13_202002271*/test\" is \npresent on disk but not in the manifest\n\nbut if i remove 'PG_13_202002271 ' directory then there is no error\n\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup data\npg_validatebackup: * manifest_checksum = \n77ddacb4e7e02e2b880792a19a3adf09266dd88553dd15cfd0c22caee7d9cc04\npg_validatebackup: backup successfully verified\n\nSteps to reproduce -\n--connect to psql terminal , create a tablespace\npostgres=# \\! mkdir /tmp/my_tblspc\npostgres=# create tablespace tbs location '/tmp/my_tblspc';\nCREATE TABLESPACE\npostgres=# \\q\n\n--run pg_basebackup\n[centos@tushar-ldap-docker bin]$ ./pg_basebackup -D data_dir -T \n/tmp/my_tblspc/=/tmp/new_my_tblspc\n[centos@tushar-ldap-docker bin]$\n[centos@tushar-ldap-docker bin]$ ls /tmp/new_my_tblspc/\nPG_13_202002271\n\n--create a new file under PG_13_* folder\n[centos@tushar-ldap-docker bin]$ touch \n/tmp/new_my_tblspc/PG_13_202002271/test\n[centos@tushar-ldap-docker bin]$\n\n--run pg_validatebackup ,Getting an error which looks expected\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup data_dir/\npg_validatebackup: * manifest_checksum = \n3951308eab576906ebdb002ff00ca313b2c1862592168c1f5f7ecf051ac07907\npg_validatebackup: error: \"pg_tblspc/16386/PG_13_202002271/test\" is \npresent on disk but not in the manifest\n[centos@tushar-ldap-docker bin]$\n\n--remove the added file\n[centos@tushar-ldap-docker bin]$ rm -rf \n/tmp/new_my_tblspc/PG_13_202002271/test\n\n--run pg_validatebackup , working fine\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup data_dir/\npg_validatebackup: * manifest_checksum = \n3951308eab576906ebdb002ff00ca313b2c1862592168c1f5f7ecf051ac07907\npg_validatebackup: backup 
successfully verified\n[centos@tushar-ldap-docker bin]$\n\n--remove the folder PG_13*\n[centos@tushar-ldap-docker bin]$ rm -rf \n/tmp/new_my_tblspc/PG_13_202002271/\n[centos@tushar-ldap-docker bin]$\n[centos@tushar-ldap-docker bin]$ ls /tmp/new_my_tblspc/\n\n--run pg_validatebackup , No error reported ?\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup data_dir/\npg_validatebackup: * manifest_checksum = \n3951308eab576906ebdb002ff00ca313b2c1862592168c1f5f7ecf051ac07907\npg_validatebackup: backup successfully verified\n[centos@tushar-ldap-docker bin]$\n\nStart the server -\n\n[centos@tushar-ldap-docker bin]$ ./pg_ctl -D data_dir/ start -o '-p 9033'\nwaiting for server to start....2020-03-04 19:18:54.839 IST [13097] LOG: \nstarting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc \n(GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n2020-03-04 19:18:54.840 IST [13097] LOG: listening on IPv6 address \n\"::1\", port 9033\n2020-03-04 19:18:54.840 IST [13097] LOG: listening on IPv4 address \n\"127.0.0.1\", port 9033\n2020-03-04 19:18:54.842 IST [13097] LOG: listening on Unix socket \n\"/tmp/.s.PGSQL.9033\"\n2020-03-04 19:18:54.843 IST [13097] LOG: could not open directory \n\"pg_tblspc/16386/PG_13_202002271\": No such file or directory\n2020-03-04 19:18:54.845 IST [13098] LOG: database system was \ninterrupted; last known up at 2020-03-04 19:14:50 IST\n2020-03-04 19:18:54.937 IST [13098] LOG: could not open directory \n\"pg_tblspc/16386/PG_13_202002271\": No such file or directory\n2020-03-04 19:18:54.939 IST [13098] LOG: could not open directory \n\"pg_tblspc/16386/PG_13_202002271\": No such file or directory\n2020-03-04 19:18:54.939 IST [13098] LOG: redo starts at 0/18000028\n2020-03-04 19:18:54.939 IST [13098] LOG: consistent recovery state \nreached at 0/18000100\n2020-03-04 19:18:54.939 IST [13098] LOG: redo done at 0/18000100\n2020-03-04 19:18:54.941 IST [13098] LOG: could not open directory \n\"pg_tblspc/16386/PG_13_202002271\": No such file or 
directory\n2020-03-04 19:18:54.984 IST [13097] LOG: database system is ready to \naccept connections\n done\nserver started\n[centos@tushar-ldap-docker bin]$\n\nregards,\n\nOn 3/4/20 3:51 PM, tushar wrote:\n> Another scenario, in which if we modify Manifest-Checksum\" value from \n> backup_manifest file , we are not getting an error\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup data/\n> pg_validatebackup: * manifest_checksum = \n> 28d082921650d0ae881de8ceb122c8d2af5f449f51ecfb446827f7f49f91f65d\n> pg_validatebackup: backup successfully verified\n>\n> open backup_manifest file and replace\n>\n> \"Manifest-Checksum\": \n> \"8d082921650d0ae881de8ceb122c8d2af5f449f51ecfb446827f7f49f91f65d\"}\n> with\n> \"Manifest-Checksum\": \"Hello World\"}\n>\n> rerun the pg_validatebackup\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup data/\n> pg_validatebackup: * manifest_checksum = Hello World\n> pg_validatebackup: backup successfully verified\n>\n> regards,\n>\n> On 3/4/20 3:26 PM, tushar wrote:\n>> Hi,\n>> Another observation , if i change the ownership of a file which is \n>> under global/ directory\n>> i.e\n>>\n>> [root@tushar-ldap-docker global]# chown enterprisedb 2396\n>>\n>> and run the pg_validatebackup command, i am getting this message -\n>>\n>> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup gggg\n>> pg_validatebackup: * manifest_checksum = \n>> e8cb007bcc9c0deab6eff51cd8d9d9af6af35b86e02f3055e60e70e56737e877\n>> pg_validatebackup: error: could not open file \"global/2396\": \n>> Permission denied\n>> *** Error in `./pg_validatebackup': double free or corruption \n>> (!prev): 0x0000000001850ba0 ***\n>> ======= Backtrace: =========\n>> /lib64/libc.so.6(+0x81679)[0x7fa2248e3679]\n>> ./pg_validatebackup[0x401f4c]\n>> /lib64/libc.so.6(__libc_start_main+0xf5)[0x7fa224884505]\n>> ./pg_validatebackup[0x402049]\n>> ======= Memory map: ========\n>> 00400000-00415000 r-xp 00000000 fd:03 4044545 \n>> 
/home/centos/pg13_bk_mani/edb/edbpsql/bin/pg_validatebackup\n>> 00614000-00615000 r--p 00014000 fd:03 4044545 \n>> /home/centos/pg13_bk_mani/edb/edbpsql/bin/pg_validatebackup\n>> 00615000-00616000 rw-p 00015000 fd:03 4044545 \n>> /home/centos/pg13_bk_mani/edb/edbpsql/bin/pg_validatebackup\n>> 017f3000-01878000 rw-p 00000000 00:00 \n>> 0 [heap]\n>> 7fa218000000-7fa218021000 rw-p 00000000 00:00 0\n>> 7fa218021000-7fa21c000000 ---p 00000000 00:00 0\n>> 7fa21e122000-7fa21e137000 r-xp 00000000 fd:03 141697 \n>> /usr/lib64/libgcc_s-4.8.5-20150702.so.1\n>> 7fa21e137000-7fa21e336000 ---p 00015000 fd:03 141697 \n>> /usr/lib64/libgcc_s-4.8.5-20150702.so.1\n>> 7fa21e336000-7fa21e337000 r--p 00014000 fd:03 141697 \n>> /usr/lib64/libgcc_s-4.8.5-20150702.so.1\n>> 7fa21e337000-7fa21e338000 rw-p 00015000 fd:03 141697 \n>> /usr/lib64/libgcc_s-4.8.5-20150702.so.1\n>> 7fa21e338000-7fa224862000 r--p 00000000 fd:03 \n>> 266442 /usr/lib/locale/locale-archive\n>> 7fa224862000-7fa224a25000 r-xp 00000000 fd:03 \n>> 134456 /usr/lib64/libc-2.17.so\n>> 7fa224a25000-7fa224c25000 ---p 001c3000 fd:03 \n>> 134456 /usr/lib64/libc-2.17.so\n>> 7fa224c25000-7fa224c29000 r--p 001c3000 fd:03 \n>> 134456 /usr/lib64/libc-2.17.so\n>> 7fa224c29000-7fa224c2b000 rw-p 001c7000 fd:03 \n>> 134456 /usr/lib64/libc-2.17.so\n>> 7fa224c2b000-7fa224c30000 rw-p 00000000 00:00 0\n>> 7fa224c30000-7fa224c47000 r-xp 00000000 fd:03 \n>> 134485 /usr/lib64/libpthread-2.17.so\n>> 7fa224c47000-7fa224e46000 ---p 00017000 fd:03 \n>> 134485 /usr/lib64/libpthread-2.17.so\n>> 7fa224e46000-7fa224e47000 r--p 00016000 fd:03 \n>> 134485 /usr/lib64/libpthread-2.17.so\n>> 7fa224e47000-7fa224e48000 rw-p 00017000 fd:03 \n>> 134485 /usr/lib64/libpthread-2.17.so\n>> 7fa224e48000-7fa224e4c000 rw-p 00000000 00:00 0\n>> 7fa224e4c000-7fa224e90000 r-xp 00000000 fd:03 4044478 \n>> /home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n>> 7fa224e90000-7fa225090000 ---p 00044000 fd:03 4044478 \n>> 
/home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n>> 7fa225090000-7fa225093000 r--p 00044000 fd:03 4044478 \n>> /home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n>> 7fa225093000-7fa225094000 rw-p 00047000 fd:03 4044478 \n>> /home/centos/pg13_bk_mani/edb/edbpsql/lib/libpq.so.5.13\n>> 7fa225094000-7fa2250b6000 r-xp 00000000 fd:03 \n>> 130333 /usr/lib64/ld-2.17.so\n>> 7fa22527d000-7fa2252a2000 rw-p 00000000 00:00 0\n>> 7fa2252b3000-7fa2252b5000 rw-p 00000000 00:00 0\n>> 7fa2252b5000-7fa2252b6000 r--p 00021000 fd:03 \n>> 130333 /usr/lib64/ld-2.17.so\n>> 7fa2252b6000-7fa2252b7000 rw-p 00022000 fd:03 \n>> 130333 /usr/lib64/ld-2.17.so\n>> 7fa2252b7000-7fa2252b8000 rw-p 00000000 00:00 0\n>> 7ffdf354f000-7ffdf3570000 rw-p 00000000 00:00 \n>> 0 [stack]\n>> 7ffdf3572000-7ffdf3574000 r-xp 00000000 00:00 \n>> 0 [vdso]\n>> ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 \n>> 0 [vsyscall]\n>> Aborted\n>> [centos@tushar-ldap-docker bin]$\n>>\n>>\n>> I am getting the error message but along with \"*** Error in \n>> `./pg_validatebackup': double free or corruption (!prev): \n>> 0x0000000001850ba0 ***\" messages\n>>\n>> Is this expected ?\n>>\n>> regards,\n>>\n>> On 3/3/20 8:19 PM, tushar wrote:\n>>> On 3/3/20 4:04 PM, tushar wrote:\n>>>> Thanks Robert. 
After applying all the 5 patches (v8-00*) against \n>>>> PG v13 (commit id -afb5465e0cfce7637066eaaaeecab30b0f23fbe3) , \n>>>\n>>> There is a scenario where pg_validatebackup is not throwing an error \n>>> if some file deleted from pg_wal/ folder and but later at the time \n>>> of restoring - we are getting an error\n>>>\n>>> [centos@tushar-ldap-docker bin]$ ./pg_basebackup -D test1\n>>>\n>>> [centos@tushar-ldap-docker bin]$ ls test1/pg_wal/\n>>> 000000010000000000000010 archive_status\n>>>\n>>> [centos@tushar-ldap-docker bin]$ rm -rf test1/pg_wal/*\n>>>\n>>> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup test1\n>>> pg_validatebackup: * manifest_checksum = \n>>> 88f1ed995c83e86252466a2c88b3e660a69cfc76c169991134b101c4f16c9df7\n>>> pg_validatebackup: backup successfully verified\n>>>\n>>> [centos@tushar-ldap-docker bin]$ ./pg_ctl -D test1 start -o '-p 3333'\n>>> waiting for server to start....2020-03-02 20:05:22.732 IST [21441] \n>>> LOG: starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled \n>>> by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n>>> 2020-03-02 20:05:22.733 IST [21441] LOG: listening on IPv6 address \n>>> \"::1\", port 3333\n>>> 2020-03-02 20:05:22.733 IST [21441] LOG: listening on IPv4 address \n>>> \"127.0.0.1\", port 3333\n>>> 2020-03-02 20:05:22.736 IST [21441] LOG: listening on Unix socket \n>>> \"/tmp/.s.PGSQL.3333\"\n>>> 2020-03-02 20:05:22.739 IST [21442] LOG: database system was \n>>> interrupted; last known up at 2020-03-02 20:04:35 IST\n>>> 2020-03-02 20:05:22.739 IST [21442] LOG: creating missing WAL \n>>> directory \"pg_wal/archive_status\"\n>>> 2020-03-02 20:05:22.886 IST [21442] LOG: invalid checkpoint record\n>>> 2020-03-02 20:05:22.886 IST [21442] FATAL: could not locate \n>>> required checkpoint record\n>>> 2020-03-02 20:05:22.886 IST [21442] HINT: If you are restoring from \n>>> a backup, touch \n>>> \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/recovery.signal\" \n>>> and add required recovery options.\n>>> 
If you are not restoring from a backup, try removing the file \n>>> \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/backup_label\".\n>>> Be careful: removing \n>>> \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/test1/backup_label\" will \n>>> result in a corrupt cluster if restoring from a backup.\n>>> 2020-03-02 20:05:22.886 IST [21441] LOG: startup process (PID \n>>> 21442) exited with exit code 1\n>>> 2020-03-02 20:05:22.886 IST [21441] LOG: aborting startup due to \n>>> startup process failure\n>>> 2020-03-02 20:05:22.889 IST [21441] LOG: database system is shut down\n>>> stopped waiting\n>>> pg_ctl: could not start server\n>>> Examine the log output.\n>>> [centos@tushar-ldap-docker bin]$\n>>>\n>>\n>\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 4 Mar 2020 19:21:03 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
    "msg_contents": "On Wed, Mar 4, 2020 at 3:51 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n\n> Another scenario, in which if we modify the \"Manifest-Checksum\" value in the\n> backup_manifest file, we are not getting an error\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup data/\n> pg_validatebackup: * manifest_checksum =\n> 28d082921650d0ae881de8ceb122c8d2af5f449f51ecfb446827f7f49f91f65d\n> pg_validatebackup: backup successfully verified\n>\n> open backup_manifest file and replace\n>\n> \"Manifest-Checksum\":\n> \"8d082921650d0ae881de8ceb122c8d2af5f449f51ecfb446827f7f49f91f65d\"}\n> with\n> \"Manifest-Checksum\": \"Hello World\"}\n>\n> rerun the pg_validatebackup\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup data/\n> pg_validatebackup: * manifest_checksum = Hello World\n> pg_validatebackup: backup successfully verified\n>\n> regards,\n>\n\nYeah, this handling is missing in the provided WIP patch. I believe Robert\nwill consider fixing this in an upcoming version of the validator patch.\n\n-- \nThanks & Regards,\nSuraj Kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Thu, 5 Mar 2020 09:20:19 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
    "msg_contents": "On Wed, Mar 4, 2020 at 7:21 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n\n> Hi,\n>\n> There is a scenario in which i add something inside the pg_tablespace\n> directory, i am getting an error like-\n>\n> pg_validatebackup: * manifest_checksum =\n> 77ddacb4e7e02e2b880792a19a3adf09266dd88553dd15cfd0c22caee7d9cc04\n> pg_validatebackup: error: \"pg_tblspc/16385/PG_13_202002271/test\" is\n> present on disk but not in the manifest\n>\n> but if i remove the 'PG_13_202002271' directory then there is no error\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup data\n> pg_validatebackup: * manifest_checksum =\n> 77ddacb4e7e02e2b880792a19a3adf09266dd88553dd15cfd0c22caee7d9cc04\n> pg_validatebackup: backup successfully verified\n>\n\nThis seems expected considering the current design, since we don't log\ndirectory entries in the backup_manifest. In your case, you have a\ntablespace with no objects (an empty tablespace), so the backup_manifest\ndoes not have any entry for it; hence, when you remove the tablespace\ndirectory, the validator cannot detect it.\n\nWe can either document it or add entries for directories in the manifest.\nRobert may have a better idea on this.\n\n-- \nThanks & Regards,\nSuraj Kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Thu, 5 Mar 2020 09:37:13 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
    "msg_contents": "Hi,\n\nIn a negative test scenario, if I change a file's size to -1 in\nbackup_manifest, pg_validatebackup gives an error with a bogus size number\n(the -1 is reported as 18446744073709551615, i.e. interpreted as an\nunsigned 64-bit integer).\n\n[edb@localhost bin]$ ./pg_basebackup -p 5551 -D /tmp/bold\n--manifest-checksum 'SHA256'\n[edb@localhost bin]$ ./pg_validatebackup /tmp/bold\npg_validatebackup: backup successfully verified\n\n--change a file size to -1 and generate new checksum.\n[edb@localhost bin]$ vi /tmp/bold/backup_manifest\n[edb@localhost bin]$ shasum -a256 /tmp/bold/backup_manifest\nc3d7838cbbf991c6108f9c1ab78f673c20d8073114500f14da6ed07ede2dc44a  /tmp/bold/backup_manifest\n[edb@localhost bin]$ vi /tmp/bold/backup_manifest\n\n[edb@localhost bin]$ ./pg_validatebackup /tmp/bold\npg_validatebackup: error: \"global/4183\" has size 0 on disk but size\n18446744073709551615 in the manifest\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\n\nOn Thu, Mar 5, 2020 at 9:37 AM Suraj Kharage <suraj.kharage@enterprisedb.com> wrote:\n\n> On Wed, Mar 4, 2020 at 7:21 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>\n>> Hi,\n>>\n>> There is a scenario in which i add something inside the pg_tablespace\n>> directory, i am getting an error like-\n>>\n>> pg_validatebackup: * manifest_checksum =\n>> 77ddacb4e7e02e2b880792a19a3adf09266dd88553dd15cfd0c22caee7d9cc04\n>> pg_validatebackup: error: \"pg_tblspc/16385/PG_13_202002271/test\" is\n>> present on disk but not in the manifest\n>>\n>> but if i remove the 'PG_13_202002271' directory then there is no error\n>>\n>> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup data\n>> pg_validatebackup: * manifest_checksum =\n>> 77ddacb4e7e02e2b880792a19a3adf09266dd88553dd15cfd0c22caee7d9cc04\n>> pg_validatebackup: backup successfully verified\n>>\n> This seems expected considering current design as we don't log the\n> directory entries in backup_manifest. In your case, you have tablespace\n> with no objects (empty tablespace) then backup_manifest does not have any\n> entry for this hence when you remove this tablespace directory, validator\n> could not detect it.\n>\n> We can either document it or add the entry for directories in the\n> manifest. Robert may have a better idea on this.\n>\n> --\n> Thanks & Regards,\n> Suraj kharage,\n> EnterpriseDB Corporation,\n> The Postgres Database Company.\n>",
"msg_date": "Thu, 5 Mar 2020 13:09:02 +0530",
"msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
    "msg_contents": "Hi,\n\nThere is one scenario where I was somehow able to run pg_validatebackup\nsuccessfully, but when I tried to start the server, it failed.\n\nSteps to reproduce -\n--create 2 base backup directories\n[centos@tushar-ldap-docker bin]$ ./pg_basebackup -D db1\n[centos@tushar-ldap-docker bin]$ ./pg_basebackup -D db2\n\n--run pg_validatebackup, using the backup_manifest of the db1 directory against db2/. Will get an error\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup -m db1/backup_manifest db2/\npg_validatebackup: * manifest_checksum = 5b131aff4a4f86e2a53efd84b003a67b9f615decb0039f19033eefa6f43c1ede\npg_validatebackup: error: checksum mismatch for file \"backup_label\"\n\n--copy the backup_label of db1 to the db2 folder\n[centos@tushar-ldap-docker bin]$ cp db1/backup_label db2/.\n\n--run pg_validatebackup .. working fine\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup -m db1/backup_manifest db2/\npg_validatebackup: * manifest_checksum = 5b131aff4a4f86e2a53efd84b003a67b9f615decb0039f19033eefa6f43c1ede\npg_validatebackup: backup successfully verified\n[centos@tushar-ldap-docker bin]$\n\n--try to start the server\n[centos@tushar-ldap-docker bin]$ ./pg_ctl -D db2 start -o '-p 7777'\nwaiting for server to start....2020-03-05 15:33:53.471 IST [24049] LOG:  starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n2020-03-05 15:33:53.471 IST [24049] LOG:  listening on IPv6 address \"::1\", port 7777\n2020-03-05 15:33:53.471 IST [24049] LOG:  listening on IPv4 address \"127.0.0.1\", port 7777\n2020-03-05 15:33:53.473 IST [24049] LOG:  listening on Unix socket \"/tmp/.s.PGSQL.7777\"\n2020-03-05 15:33:53.476 IST [24050] LOG:  database system was interrupted; last known up at 2020-03-05 15:32:51 IST\n2020-03-05 15:33:53.573 IST [24050] LOG:  invalid checkpoint record\n2020-03-05 15:33:53.573 IST [24050] FATAL:  could not locate required checkpoint record\n2020-03-05 15:33:53.573 IST [24050] HINT:  If you are restoring from a backup, touch \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/db2/recovery.signal\" and add required recovery options.\n    If you are not restoring from a backup, try removing the file \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/db2/backup_label\".\n    Be careful: removing \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/db2/backup_label\" will result in a corrupt cluster if restoring from a backup.\n2020-03-05 15:33:53.574 IST [24049] LOG:  startup process (PID 24050) exited with exit code 1\n2020-03-05 15:33:53.574 IST [24049] LOG:  aborting startup due to startup process failure\n2020-03-05 15:33:53.575 IST [24049] LOG:  database system is shut down\n stopped waiting\npg_ctl: could not start server\nExamine the log output.\n[centos@tushar-ldap-docker bin]$\n\nregards,\n\n\nOn 3/5/20 1:09 PM, Rajkumar Raghuwanshi wrote:\n> Hi,\n>\n> In a negative test scenario, if I changed size to -1 in\n> backup_manifest, pg_validatebackup giving\n> error with a random size number.\n>\n> [edb@localhost bin]$ ./pg_basebackup -p 5551 -D /tmp/bold\n> --manifest-checksum 'SHA256'\n> [edb@localhost bin]$ ./pg_validatebackup /tmp/bold\n> pg_validatebackup: backup successfully verified\n>\n> --change a file size to -1 and generate new checksum.\n> [edb@localhost bin]$ vi /tmp/bold/backup_manifest\n> [edb@localhost bin]$ shasum -a256 /tmp/bold/backup_manifest\n> c3d7838cbbf991c6108f9c1ab78f673c20d8073114500f14da6ed07ede2dc44a /tmp/bold/backup_manifest\n> [edb@localhost bin]$ vi /tmp/bold/backup_manifest\n>\n> [edb@localhost bin]$ ./pg_validatebackup /tmp/bold\n> pg_validatebackup: error: \"global/4183\" has size 0 on disk but size\n> 18446744073709551615 in the manifest\n>\n> Thanks & Regards,\n> Rajkumar Raghuwanshi\n>\n>\n> On Thu, Mar 5, 2020 at 9:37 AM Suraj Kharage <suraj.kharage@enterprisedb.com> wrote:\n>\n>> On Wed, Mar 4, 2020 at 7:21 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>>\n>>> Hi,\n>>>\n>>> There is a scenario in which i add something inside the\n>>> pg_tablespace directory, i am getting an error like-\n>>>\n>>> pg_validatebackup: * manifest_checksum =\n>>> 77ddacb4e7e02e2b880792a19a3adf09266dd88553dd15cfd0c22caee7d9cc04\n>>> pg_validatebackup: error: \"pg_tblspc/16385/PG_13_202002271/test\" is\n>>> present on disk but not in the manifest\n>>>\n>>> but if i remove the 'PG_13_202002271' directory then there is no error\n>>>\n>>> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup data\n>>> pg_validatebackup: * manifest_checksum =\n>>> 77ddacb4e7e02e2b880792a19a3adf09266dd88553dd15cfd0c22caee7d9cc04\n>>> pg_validatebackup: backup successfully verified\n>>>\n>> This seems expected considering current design as we don't log the\n>> directory entries in backup_manifest. In your case, you have\n>> tablespace with no objects (empty tablespace) then backup_manifest\n>> does not have any entry for this hence when you remove this\n>> tablespace directory, validator could not detect it.\n>>\n>> We can either document it or add the entry for directories in the\n>> manifest. Robert may have a better idea on this.\n>>\n>> --\n>> Thanks & Regards,\n>> Suraj kharage,\n>> EnterpriseDB Corporation,\n>> The Postgres Database Company.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 5 Mar 2020 15:40:46 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
    "msg_contents": "There is one small observation: if we use a slash (/) with option -i, then we do not get the desired result.\n\nSteps to reproduce -\n==============\n\n[centos@tushar-ldap-docker bin]$ ./pg_basebackup -D test\n\n[centos@tushar-ldap-docker bin]$ touch test/pg_notify/dummy_file\n\n--working\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup --ignore=pg_notify test\npg_validatebackup: * manifest_checksum = be9b72e1320c6c34c131533de19371a10dd5011940181724e43277f786026c7b\npg_validatebackup: backup successfully verified\n\n--not working\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup --ignore=pg_notify/ test\npg_validatebackup: * manifest_checksum = be9b72e1320c6c34c131533de19371a10dd5011940181724e43277f786026c7b\npg_validatebackup: error: \"pg_notify/dummy_file\" is present on disk but not in the manifest\n\nregards,\n\nOn 3/5/20 3:40 PM, tushar wrote:\n> Hi,\n>\n> There is one scenario where i somehow able to run pg_validatebackup\n> successfully but when i tried to start the server, it is failing\n>\n> Steps to reproduce -\n> --create 2 base backup directory\n> [centos@tushar-ldap-docker bin]$ ./pg_basebackup -D db1\n> [centos@tushar-ldap-docker bin]$ ./pg_basebackup -D db2\n>\n> --run pg_validatebackup, use backup_manifest of db1 directory\n> against db2/. Will get an error\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup -m db1/backup_manifest db2/\n> pg_validatebackup: * manifest_checksum = 5b131aff4a4f86e2a53efd84b003a67b9f615decb0039f19033eefa6f43c1ede\n> pg_validatebackup: error: checksum mismatch for file \"backup_label\"\n> --copy the backup_label of db1 to db2 folder\n> [centos@tushar-ldap-docker bin]$ cp db1/backup_label db2/.\n>\n> --run pg_validatebackup .. working fine\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup -m db1/backup_manifest db2/\n> pg_validatebackup: * manifest_checksum = 5b131aff4a4f86e2a53efd84b003a67b9f615decb0039f19033eefa6f43c1ede\n> pg_validatebackup: backup successfully verified\n> [centos@tushar-ldap-docker bin]$\n>\n> --try to start the server\n> [centos@tushar-ldap-docker bin]$ ./pg_ctl -D db2 start -o '-p 7777'\n> waiting for server to start....2020-03-05 15:33:53.471 IST [24049] LOG:  starting PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n> 2020-03-05 15:33:53.471 IST [24049] LOG:  listening on IPv6 address \"::1\", port 7777\n> 2020-03-05 15:33:53.471 IST [24049] LOG:  listening on IPv4 address \"127.0.0.1\", port 7777\n> 2020-03-05 15:33:53.473 IST [24049] LOG:  listening on Unix socket \"/tmp/.s.PGSQL.7777\"\n> 2020-03-05 15:33:53.476 IST [24050] LOG:  database system was interrupted; last known up at 2020-03-05 15:32:51 IST\n> 2020-03-05 15:33:53.573 IST [24050] LOG:  invalid checkpoint record\n> 2020-03-05 15:33:53.573 IST [24050] FATAL:  could not locate required checkpoint record\n> 2020-03-05 15:33:53.573 IST [24050] HINT:  If you are restoring from a backup, touch \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/db2/recovery.signal\" and add required recovery options.\n>     If you are not restoring from a backup, try removing the file \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/db2/backup_label\".\n>     Be careful: removing \"/home/centos/pg13_bk_mani/edb/edbpsql/bin/db2/backup_label\" will result in a corrupt cluster if restoring from a backup.\n> 2020-03-05 15:33:53.574 IST [24049] LOG:  startup process (PID 24050) exited with exit code 1\n> 2020-03-05 15:33:53.574 IST [24049] LOG:  aborting startup due to startup process failure\n> 2020-03-05 15:33:53.575 IST [24049] LOG:  database system is shut down\n>  stopped waiting\n> pg_ctl: could not start server\n> Examine the log output.\n> [centos@tushar-ldap-docker bin]$\n>\n> regards,\n>\n>\n> On 3/5/20 1:09 PM, Rajkumar Raghuwanshi wrote:\n>> Hi,\n>>\n>> In a negative test scenario, if I changed size to -1 in\n>> backup_manifest, pg_validatebackup giving\n>> error with a random size number.\n>>\n>> [edb@localhost bin]$ ./pg_basebackup -p 5551 -D /tmp/bold\n>> --manifest-checksum 'SHA256'\n>> [edb@localhost bin]$ ./pg_validatebackup /tmp/bold\n>> pg_validatebackup: backup successfully verified\n>>\n>> --change a file size to -1 and generate new checksum.\n>> [edb@localhost bin]$ vi /tmp/bold/backup_manifest\n>> [edb@localhost bin]$ shasum -a256 /tmp/bold/backup_manifest\n>> c3d7838cbbf991c6108f9c1ab78f673c20d8073114500f14da6ed07ede2dc44a /tmp/bold/backup_manifest\n>> [edb@localhost bin]$ vi /tmp/bold/backup_manifest\n>>\n>> [edb@localhost bin]$ ./pg_validatebackup /tmp/bold\n>> pg_validatebackup: error: \"global/4183\" has size 0 on disk but size\n>> 18446744073709551615 in the manifest\n>>\n>> Thanks & Regards,\n>> Rajkumar Raghuwanshi\n>>\n>>\n>> On Thu, Mar 5, 2020 at 9:37 AM Suraj Kharage <suraj.kharage@enterprisedb.com> wrote:\n>>\n>>> On Wed, Mar 4, 2020 at 7:21 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>>>\n>>>> Hi,\n>>>>\n>>>> There is a scenario in which i add something inside the pg_tablespace\n>>>> directory, i am getting an error like-\n>>>>\n>>>> pg_validatebackup: * manifest_checksum =\n>>>> 77ddacb4e7e02e2b880792a19a3adf09266dd88553dd15cfd0c22caee7d9cc04\n>>>> pg_validatebackup: error: \"pg_tblspc/16385/PG_13_202002271/test\" is\n>>>> present on disk but not in the manifest\n>>>>\n>>>> but if i remove the 'PG_13_202002271' directory then there is no error\n>>>>\n>>>> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup data\n>>>> pg_validatebackup: * manifest_checksum =\n>>>> 77ddacb4e7e02e2b880792a19a3adf09266dd88553dd15cfd0c22caee7d9cc04\n>>>> pg_validatebackup: backup successfully verified\n>>>>\n>>> This seems expected considering current design as we don't log the\n>>> directory entries in backup_manifest. In your case, you have tablespace\n>>> with no objects (empty tablespace) then backup_manifest does not have any\n>>> entry for this hence when you remove this tablespace directory, validator\n>>> could not detect it.\n>>>\n>>> We can either document it or add the entry for directories in the\n>>> manifest. Robert may have a better idea on this.\n>>>\n>>> --\n>>> Thanks & Regards,\n>>> Suraj kharage,\n>>> EnterpriseDB Corporation,\n>>> The Postgres Database Company.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 5 Mar 2020 17:35:28 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
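The backup_label scenario above works because validation recomputes each file's checksum and compares it against the manifest entry; swapping in db1's backup_label makes the checksums agree even though the backup is no longer self-consistent. A minimal sketch of that comparison, assuming a simplified manifest represented as a path-to-digest dict (not the actual pg_validatebackup code, which also checks sizes, extra files, and the manifest's own checksum):

```python
import hashlib
import os


def verify_files(backup_dir, manifest_entries):
    """Compare on-disk SHA-256 digests against manifest entries.

    manifest_entries: dict mapping relative path -> expected hex digest.
    Returns a list of error strings; an empty list means the files verify.
    Illustrative sketch only -- not the real tool's logic.
    """
    errors = []
    for relpath, expected in manifest_entries.items():
        fullpath = os.path.join(backup_dir, relpath)
        if not os.path.exists(fullpath):
            errors.append(f'"{relpath}" is present in the manifest but not on disk')
            continue
        with open(fullpath, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        if actual != expected:
            errors.append(f'checksum mismatch for file "{relpath}"')
    return errors
```

Copying a file from another backup that happens to hash to the manifest's digest (as with backup_label above) therefore passes this check, even though the server later fails to find its checkpoint record.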
{
"msg_contents": "On Thu, Mar 5, 2020 at 7:05 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> There is one small observation if we use slash (/) with option -i then not getting the desired result\n\nHere's an updated patch set responding to many of the comments\nreceived thus far. Since there are quite a few emails, let me\nconsolidate my comments and responses here.\n\nReport: Segmentation fault if -m is used to point to a valid manifest,\nbut actual backup directory is nonexistent.\nResponse: Fixed; thanks for the report.\n\nReport: pg_validatebackup doesn't complain about problems within the\npg_wal directory.\nResponse: That's out of scope. The WAL files are fetched separately\nand are therefore not part of the manifest.\n\nReport: Inaccessible file in data directory being validated leads to a\ndouble free.\nResponse: Fixed; thanks for the report.\n\nReport: Patch 0005 doesn't validate the manifest checksum.\nResponse: I know. I mentioned that when posting the previous patch\nset. Fixed in this version, though.\n\nReport: Removing an empty directory doesn't make backup validation\nfail, even though it might cause problems for the server.\nResponse: That's a little unfortunate, but I'm not sure it's really\nworth complicating the patch to deal with it. It's something of a\ncorner case.\n\nReport: Negative file sizes in the backup manifest are interpreted as\nlarge integers.\nResponse: That's also a little unfortunate, but I doubt it's worth\nadding code to catch it, since any such manifest is corrupt. Also,\nit's not like we're ignoring it; the error just isn't ideal.\n\nReport: If I take the backup label from backup #1 and stick it into\notherwise-identical backup #2, validation succeeds but the server\nwon't start.\nResponse: That's because we can't validate the pg_wal directory. 
As\nnoted above, that's out of scope.\n\nReport: Using --ignore with a slash-terminated pathname doesn't work\nas expected.\nResponse: Fixed, thanks for the report.\n\nOff-List Report: You forgot a PG_BINARY flag.\nResponse: Fixed. I thought I'd done this before but there were two\nplaces and I'd only fixed one of them.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 5 Mar 2020 11:55:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
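The "random size number" in the negative-size report is consistent with the size field landing in an unsigned 64-bit variable: -1 wraps around to 2^64 - 1 = 18446744073709551615, exactly the figure in the reported error message. A hedged illustration of that wraparound (not the server's actual parsing code):

```python
def as_uint64(n: int) -> int:
    """Reinterpret an integer as an unsigned 64-bit value -- the way a
    negative manifest size ends up looking if stored in a uint64.
    Illustrative only; the real manifest code is C."""
    return n & 0xFFFFFFFFFFFFFFFF


# -1 wraps to 18446744073709551615, matching the reported
# "size 18446744073709551615 in the manifest".
```

This is why the manifest is "not ignoring" the corrupt value: the comparison still fails, just with a surprising number in the message.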
{
"msg_contents": "Thanks, Robert.\n\n1: Getting below error while compiling 0002 patch.\n\nedb@localhost:postgres$ mi > mi.log\nbasebackup.c: In function ‘AddFileToManifest’:\nbasebackup.c:1052:6: error: ‘pathname’ undeclared (first use in this\nfunction)\n pathname);\n ^\nbasebackup.c:1052:6: note: each undeclared identifier is reported only once\nfor each function it appears in\nmake[3]: *** [basebackup.o] Error 1\nmake[2]: *** [replication-recursive] Error 2\nmake[1]: *** [install-backend-recurse] Error 2\nmake: *** [install-src-recurse] Error 2\n\n\nI can see you have renamed the filename argument of AddFileToManifest() to\npathname, but those changes are part of 0003 (validator patch).\nI think the changes related to src/backend/replication/basebackup.c should\nnot be there in the validator patch (0003). We can move these changes to\nbackup manifest patch, either in 0002 or 0004 for better readability of\npatch set.\n\n2:\n\n#define KW_MANIFEST_VERSION \"PostgreSQL-Backup-Manifest-Version\"\n#define KW_MANIFEST_FILE \"File\"\n#define KW_MANIFEST_CHECKSUM \"Manifest-Checksum\"\n#define KWL_MANIFEST_VERSION (sizeof(KW_MANIFEST_VERSION)-1)\n#define KWL_MANIFEST_FILE (sizeof(KW_MANIFEST_FILE)-1)\n#define KWL_MANIFEST_CHECKSUM (sizeof(KW_MANIFEST_CHECKSUM)-1)\n\n#define FIELDS_PER_FILE_LINE 4\n\nFew macros defined in 0003 patch not used anywhere in 0005 patch. Either we\ncan replace these with hard-coded values or remove them.\n\n\nOn Thu, Mar 5, 2020 at 10:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Mar 5, 2020 at 7:05 AM tushar <tushar.ahuja@enterprisedb.com>\n> wrote:\n> > There is one small observation if we use slash (/) with option -i then\n> not getting the desired result\n>\n> Here's an updated patch set responding to many of the comments\n> received thus far. 
Since there are quite a few emails, let me\n> consolidate my comments and responses here.\n>\n> Report: Segmentation fault if -m is used to point to a valid manifest,\n> but actual backup directory is nonexistent.\n> Response: Fixed; thanks for the report.\n>\n> Report: pg_validatebackup doesn't complain about problems within the\n> pg_wal directory.\n> Response: That's out of scope. The WAL files are fetched separately\n> and are therefore not part of the manifest.\n>\n> Report: Inaccessible file in data directory being validated leads to a\n> double free.\n> Response: Fixed; thanks for the report.\n>\n> Report: Patch 0005 doesn't validate the manifest checksum.\n> Response: I know. I mentioned that when posting the previous patch\n> set. Fixed in this version, though.\n>\n> Report: Removing an empty directory doesn't make backup validation\n> fail, even though it might cause problems for the server.\n> Response: That's a little unfortunate, but I'm not sure it's really\n> worth complicating the patch to deal with it. It's something of a\n> corner case.\n>\n> Report: Negative file sizes in the backup manifest are interpreted as\n> large integers.\n> Response: That's also a little unfortunate, but I doubt it's worth\n> adding code to catch it, since any such manifest is corrupt. Also,\n> it's not like we're ignoring it; the error just isn't ideal.\n>\n> Report: If I take the backup label from backup #1 and stick it into\n> otherwise-identical backup #2, validation succeeds but the server\n> won't start.\n> Response: That's because we can't validate the pg_wal directory. As\n> noted above, that's out of scope.\n>\n> Report: Using --ignore with a slash-terminated pathname doesn't work\n> as expected.\n> Response: Fixed, thanks for the report.\n>\n> Off-List Report: You forgot a PG_BINARY flag.\n> Response: Fixed. 
I thought I'd done this before but there were two\n> places and I'd only fixed one of them.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Fri, 6 Mar 2020 14:28:24 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/5/20 10:25 PM, Robert Haas wrote:\n> Here's an updated patch set responding to many of the comments\n> received thus far.\nThanks Robert. There is a scenario: if the user provides the port of a v11 server when creating a base backup with pg_basebackup (v13 + your patch applied) and the option --manifest-checksums, it leads to the error below:\n\n[centos@tushar-ldap-docker bin]$ ./pg_basebackup -R -p 9045 --manifest-checksums=SHA224 -D dc1\npg_basebackup: error: could not initiate base backup: ERROR: syntax error\npg_basebackup: removing data directory \"dc1\"\n[centos@tushar-ldap-docker bin]$\n\nSteps to reproduce -\nPG v11 is running\nrun pg_basebackup against that with option --manifest-checksums\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Mon, 9 Mar 2020 21:52:17 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 9, 2020 at 12:22 PM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> On 3/5/20 10:25 PM, Robert Haas wrote:\n> > Here's an updated patch set responding to many of the comments\n> > received thus far.\n> Thanks Robert. There is a scenario - if user provide port of v11 server\n> at the time of creating 'base backup' against pg_basebackup(v13+ your\n> patch applied)\n> with option --manifest-checksums,will lead to this below error\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_basebackup -R -p 9045\n> --manifest-checksums=SHA224 -D dc1\n> pg_basebackup: error: could not initiate base backup: ERROR: syntax error\n> pg_basebackup: removing data directory \"dc1\"\n> [centos@tushar-ldap-docker bin]$\n>\n> Steps to reproduce -\n> PG v11 is running\n> run pg_basebackup against that with option --manifest-checksums\n\nSeems like expected behavior to me. We could consider providing a more\ndescriptive error message, but there's no way for it to work.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Mar 2020 13:16:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
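The "syntax error" comes from the v11 server's replication grammar, which does not recognize the manifest-related option that the patched pg_basebackup appends to BASE_BACKUP. A more descriptive message could come from a client-side version guard; a sketch under the assumption that manifests require server version 13 (version number 130000), using a hypothetical helper name rather than pg_basebackup's actual code:

```python
def check_manifest_support(server_version_num: int) -> None:
    """Raise a descriptive error instead of letting an old server report a
    bare 'syntax error' for an unrecognized BASE_BACKUP option.

    server_version_num follows PostgreSQL's numbering scheme, e.g. 110005
    for 11.5 and 130000 for 13.0. Hypothetical helper for illustration.
    """
    if server_version_num < 130000:
        raise RuntimeError(
            "backup manifests are not supported by server version "
            f"{server_version_num // 10000}; PostgreSQL 13 or newer is required")
```

With such a guard, the v11 scenario above would fail before the command is even sent, with a message naming the actual problem.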
{
"msg_contents": "On Fri, Mar 6, 2020 at 3:58 AM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n> 1: Getting below error while compiling 0002 patch.\n> 2:\n>\n> Few macros defined in 0003 patch not used anywhere in 0005 patch. Either we can replace these with hard-coded values or remove them.\n\nThanks. I hope that I have straightened those things out in the new\nversion which is attached. This version also includes some other\nchanges. The non-JSON code is now completely gone. Also, I've\nrefactored the code that parses the JSON manifest to make it\ncleaner, and I've moved it out into a separate file. This might be\nuseful if anyone ends up wanting to reuse that code for some other\npurpose, and I think it makes it easier to understand, too, since the\nmanifest parsing is now much better separated from the task of\nactually validating the given directory against the manifest. I've\nalso added some tests, which are based in part on testing ideas from\nRajkumar Raghuwanshi and Mark Dilger, but this test code was written\nby me. So now it's like this:\n\n0001 - checksum helper functions. same as before.\n0002 - patch the server to generate and send a manifest, and\npg_basebackup to receive it\n0003 - add pg_validatebackup\n0004 - TAP tests\n\nComments?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 11 Mar 2020 16:08:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
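With the non-JSON format dropped, the manifest is a JSON document carrying a files list plus a trailing manifest checksum, and parsing is now separate from directory validation. A minimal parse sketch of that shape; the key names ("Files", "Path", "Size", "Checksum-Algorithm", "Checksum") are assumptions for illustration, not necessarily the field names the committed patch emits:

```python
import json


def parse_manifest(text: str):
    """Parse a (simplified) JSON backup manifest into a dict keyed by path.

    Assumes each entry of doc["Files"] has "Path" and "Size", plus
    optional "Checksum-Algorithm"/"Checksum" fields. Key names are
    illustrative assumptions, not the committed manifest format.
    """
    doc = json.loads(text)
    files = {}
    for entry in doc["Files"]:
        files[entry["Path"]] = {
            "size": int(entry["Size"]),
            "checksum_algorithm": entry.get("Checksum-Algorithm"),
            "checksum": entry.get("Checksum"),
        }
    return files
```

Keeping parsing in its own module, as the patch does with parse_manifest.c, lets a validator (or any other consumer) work from this structured form without knowing anything about the on-disk directory.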
{
"msg_contents": "On 3/9/20 10:46 PM, Robert Haas wrote:\n> Seems like expected behavior to me. We could consider providing a more\n> descriptive error message, but there's now way for it to work.\n\nRight, the error message needs to be more user-friendly.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Thu, 12 Mar 2020 20:16:57 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/12/20 8:16 PM, tushar wrote:\n>> Seems like expected behavior to me. We could consider providing a more\n>> descriptive error message, but there's now way for it to work.\n>\n> Right , Error message need to be more user friendly . \n\nOne scenario which i feel - should error out even if -s option is \nspecified.\n\ncreate base backup directory ( ./pg_basebackup data1)\nConnect to root user and take out the permission from pg_hba.conf file \n( chmod 004 pg_hba.conf)\n\nrun pg_validatebackup -\n\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup data1\npg_validatebackup: error: could not open file \"pg_hba.conf\": Permission \ndenied\n\nrun pg_validatebackup with switch -s\n\n[centos@tushar-ldap-docker bin]$ ./pg_validatebackup data1 -s\npg_validatebackup: backup successfully verified\n\nhere file is not accessible so i think - it should throw you an error ( \nthe same above one) instead of blindly skipping it.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 13 Mar 2020 19:23:03 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Mar 13, 2020 at 9:53 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> run pg_validatebackup -\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup data1\n> pg_validatebackup: error: could not open file \"pg_hba.conf\": Permission denied\n>\n> run pg_validatebackup with switch -s\n>\n> [centos@tushar-ldap-docker bin]$ ./pg_validatebackup data1 -s\n> pg_validatebackup: backup successfully verified\n>\n> here file is not accessible so i think - it should throw you an error ( the same above one) instead of blindly skipping it.\n\nI don't really want to do that. That would require it to open every\nfile even if it doesn't need to read the data in the files. I think in\nmost cases that would just slow it down for no real benefit. If you've\nspecified -s, you have to be OK with getting a less complete check for\nproblems.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 13 Mar 2020 12:54:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
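Robert's rationale for -s is that skipping checksums means the validator never has to open the files at all: it only needs their metadata, so an unreadable-but-present file still verifies. A sketch of the two modes, assuming size comparison via stat() is all that remains when checksums are skipped (illustrative only, not the tool's actual code):

```python
import hashlib
import os


def validate_file(fullpath, expected_size, expected_sha256=None,
                  skip_checksums=False):
    """Return None on success, or an error string.

    With skip_checksums=True only os.stat() is consulted, so a file
    whose contents are unreadable (e.g. mode 000) still passes --
    mirroring the -s behavior described above. Illustrative sketch.
    """
    try:
        actual_size = os.stat(fullpath).st_size
    except OSError as e:
        return f'could not stat "{fullpath}": {e.strerror}'
    if actual_size != expected_size:
        return (f'"{fullpath}" has size {actual_size} on disk '
                f'but size {expected_size} in the manifest')
    if skip_checksums or expected_sha256 is None:
        return None
    try:
        with open(fullpath, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
    except OSError as e:
        return f'could not open "{fullpath}": {e.strerror}'
    if digest != expected_sha256:
        return f'checksum mismatch for file "{fullpath}"'
    return None
```

The trade-off is visible in the branch order: opening every file just to confirm readability would forfeit most of the speedup that -s exists to provide.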
{
"msg_contents": "On Thu, Mar 12, 2020 at 10:47 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> On 3/9/20 10:46 PM, Robert Haas wrote:\n> > Seems like expected behavior to me. We could consider providing a more\n> > descriptive error message, but there's now way for it to work.\n>\n> Right , Error message need to be more user friendly .\n\nOK. Done in the attached version, which also includes a few other changes:\n\n- I expanded the regression tests. They now cover every line of code\nin parse_manifest.c except for a few that I believe to be unreachable\n(though I might be mistaken). Coverage for pg_validatebackup.c is also\nimproved, but it's not 100%; there are some cases that I don't know\nhow to hit outside of a kernel malfunction, and others that I only\nknow how to hit on non-Windows systems. For instance, it's easy to use\nperl to make a file inaccessible on Linux with chmod(0, $filename),\nbut I gather that doesn't work on Windows. I'm going to spend a bit\nmore time looking at this, but I think it's already reasonably good.\n\n- I fixed a couple of very minor bugs which I discovered by writing those tests.\n\n- I added documentation, in part based on a draft Mark Dilger shared\nwith me off-list.\n\nI don't think this is committable just yet, but I think it's getting\nfairly close, so if anyone has major objections please speak up soon.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 13 Mar 2020 16:34:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Thank you, Robert.\n\nGetting below warning while compiling the\nv11-0003-pg_validatebackup-Validate-a-backup-against-the-.patch.\n\n\n\n*pg_validatebackup.c: In function\n‘report_manifest_error’:pg_validatebackup.c:356:2: warning: function might\nbe possible candidate for ‘gnu_printf’ format attribute\n[-Wsuggest-attribute=format] pg_log_generic_v(PG_LOG_FATAL, fmt, ap);*\n\n\nTo resolve this, can we use \"pg_attribute_printf(2, 3)\" in function\ndeclaration something like below?\ne.g:\n\ndiff --git a/src/bin/pg_validatebackup/parse_manifest.h\nb/src/bin/pg_validatebackup/parse_manifest.h\nindex b0b18a5..25d140f 100644\n--- a/src/bin/pg_validatebackup/parse_manifest.h\n+++ b/src/bin/pg_validatebackup/parse_manifest.h\n@@ -25,7 +25,7 @@ typedef void\n(*json_manifest_perfile_callback)(JsonManifestParseContext *,\n size_t\nsize, pg_checksum_type checksum_type,\n int\nchecksum_length, uint8 *checksum_payload);\n typedef void (*json_manifest_error_callback)(JsonManifestParseContext *,\n- char *fmt,\n...);\n+ char\n*fmt,...) pg_attribute_printf(2, 3);\n\n struct JsonManifestParseContext\n {\ndiff --git a/src/bin/pg_validatebackup/pg_validatebackup.c\nb/src/bin/pg_validatebackup/pg_validatebackup.c\nindex 0e7299b..6ccbe59 100644\n--- a/src/bin/pg_validatebackup/pg_validatebackup.c\n+++ b/src/bin/pg_validatebackup/pg_validatebackup.c\n@@ -95,7 +95,7 @@ static void\nrecord_manifest_details_for_file(JsonManifestParseContext *context,\n\n int checksum_length,\n\n uint8 *checksum_payload);\n static void report_manifest_error(JsonManifestParseContext *context,\n- char\n*fmt, ...);\n+ char\n*fmt,...) 
pg_attribute_printf(2, 3);\n\n static void validate_backup_directory(validator_context *context,\n\nchar *relpath, char *fullpath);\n\n\nTypos:\n\n0004 patch\nunexpctedly => unexpectedly\n\n0005 patch\nbacup => backup\n\nOn Sat, Mar 14, 2020 at 2:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Mar 12, 2020 at 10:47 AM tushar <tushar.ahuja@enterprisedb.com>\n> wrote:\n> > On 3/9/20 10:46 PM, Robert Haas wrote:\n> > > Seems like expected behavior to me. We could consider providing a more\n> > > descriptive error message, but there's now way for it to work.\n> >\n> > Right , Error message need to be more user friendly .\n>\n> OK. Done in the attached version, which also includes a few other changes:\n>\n> - I expanded the regression tests. They now cover every line of code\n> in parse_manifest.c except for a few that I believe to be unreachable\n> (though I might be mistaken). Coverage for pg_validatebackup.c is also\n> improved, but it's not 100%; there are some cases that I don't know\n> how to hit outside of a kernel malfunction, and others that I only\n> know how to hit on non-Windows systems. For instance, it's easy to use\n> perl to make a file inaccessible on Linux with chmod(0, $filename),\n> but I gather that doesn't work on Windows. 
I'm going to spend a bit\n> more time looking at this, but I think it's already reasonably good.\n>\n> - I fixed a couple of very minor bugs which I discovered by writing those\n> tests.\n>\n> - I added documentation, in part based on a draft Mark Dilger shared\n> with me off-list.\n>\n> I don't think this is committable just yet, but I think it's getting\n> fairly close, so if anyone has major objections please speak up soon.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Mon, 16 Mar 2020 10:37:35 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "One more suggestion, recent commit (1933ae62) has added the PostgreSQL home\npage to --help output.\n\ne.g:\n*PostgreSQL home page: <https://www.postgresql.org/\n<https://www.postgresql.org/>>*\n\nWe might need to consider this change for pg_validatebackup binary.\n\nOn Mon, Mar 16, 2020 at 10:37 AM Suraj Kharage <\nsuraj.kharage@enterprisedb.com> wrote:\n\n> Thank you, Robert.\n>\n> Getting below warning while compiling the\n> v11-0003-pg_validatebackup-Validate-a-backup-against-the-.patch.\n>\n>\n>\n> *pg_validatebackup.c: In function\n> ‘report_manifest_error’:pg_validatebackup.c:356:2: warning: function might\n> be possible candidate for ‘gnu_printf’ format attribute\n> [-Wsuggest-attribute=format] pg_log_generic_v(PG_LOG_FATAL, fmt, ap);*\n>\n>\n> To resolve this, can we use \"pg_attribute_printf(2, 3)\" in function\n> declaration something like below?\n> e.g:\n>\n> diff --git a/src/bin/pg_validatebackup/parse_manifest.h\n> b/src/bin/pg_validatebackup/parse_manifest.h\n> index b0b18a5..25d140f 100644\n> --- a/src/bin/pg_validatebackup/parse_manifest.h\n> +++ b/src/bin/pg_validatebackup/parse_manifest.h\n> @@ -25,7 +25,7 @@ typedef void\n> (*json_manifest_perfile_callback)(JsonManifestParseContext *,\n> size_t\n> size, pg_checksum_type checksum_type,\n> int\n> checksum_length, uint8 *checksum_payload);\n> typedef void (*json_manifest_error_callback)(JsonManifestParseContext *,\n> - char\n> *fmt, ...);\n> + char\n> *fmt,...) 
pg_attribute_printf(2, 3);\n>\n> struct JsonManifestParseContext\n> {\n> diff --git a/src/bin/pg_validatebackup/pg_validatebackup.c\n> b/src/bin/pg_validatebackup/pg_validatebackup.c\n> index 0e7299b..6ccbe59 100644\n> --- a/src/bin/pg_validatebackup/pg_validatebackup.c\n> +++ b/src/bin/pg_validatebackup/pg_validatebackup.c\n> @@ -95,7 +95,7 @@ static void\n> record_manifest_details_for_file(JsonManifestParseContext *context,\n>\n> int checksum_length,\n>\n> uint8 *checksum_payload);\n> static void report_manifest_error(JsonManifestParseContext *context,\n> - char\n> *fmt, ...);\n> + char\n> *fmt,...) pg_attribute_printf(2, 3);\n>\n> static void validate_backup_directory(validator_context *context,\n>\n> char *relpath, char *fullpath);\n>\n>\n> Typos:\n>\n> 0004 patch\n> unexpctedly => unexpectedly\n>\n> 0005 patch\n> bacup => backup\n>\n> On Sat, Mar 14, 2020 at 2:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Thu, Mar 12, 2020 at 10:47 AM tushar <tushar.ahuja@enterprisedb.com>\n>> wrote:\n>> > On 3/9/20 10:46 PM, Robert Haas wrote:\n>> > > Seems like expected behavior to me. We could consider providing a more\n>> > > descriptive error message, but there's now way for it to work.\n>> >\n>> > Right , Error message need to be more user friendly .\n>>\n>> OK. Done in the attached version, which also includes a few other changes:\n>>\n>> - I expanded the regression tests. They now cover every line of code\n>> in parse_manifest.c except for a few that I believe to be unreachable\n>> (though I might be mistaken). Coverage for pg_validatebackup.c is also\n>> improved, but it's not 100%; there are some cases that I don't know\n>> how to hit outside of a kernel malfunction, and others that I only\n>> know how to hit on non-Windows systems. For instance, it's easy to use\n>> perl to make a file inaccessible on Linux with chmod(0, $filename),\n>> but I gather that doesn't work on Windows. 
I'm going to spend a bit\n>> more time looking at this, but I think it's already reasonably good.\n>>\n>> - I fixed a couple of very minor bugs which I discovered by writing those\n>> tests.\n>>\n>> - I added documentation, in part based on a draft Mark Dilger shared\n>> with me off-list.\n>>\n>> I don't think this is committable just yet, but I think it's getting\n>> fairly close, so if anyone has major objections please speak up soon.\n>>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>\n>\n> --\n> --\n>\n> Thanks & Regards,\n> Suraj kharage,\n> EnterpriseDB Corporation,\n> The Postgres Database Company.\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Mon, 16 Mar 2020 11:33:23 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/14/20 2:04 AM, Robert Haas wrote:\n> OK. Done in the attached version\n\nThanks. Verified.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Mon, 16 Mar 2020 15:52:04 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 2:03 AM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n> One more suggestion, recent commit (1933ae62) has added the PostgreSQL home page to --help output.\n\nGood catch. Fixed. I also attempted to address the compiler warning\nyou mentioned in your other email.\n\nAlso, I realized that the previous patch versions didn't handle the\nhex-encoded path format that we need to use for non-UTF8 filenames,\nand that there was no easy way to test that format. So, in this\nversion I added an option to force all pathnames to be encoded in that\nformat. I also made that option capable of suppressing the backup\nmanifest altogether. Other than that, this version is pretty much the\nsame as the last version, except for a few additional test cases which\nI added to get the code coverage up even a little more. It would be\nnice if someone could test whether the tests pass on Windows.\n\nI have squashed the series down to just 2 commits, since that seems\nlike the way that this should probably be committed. Barring strong\nobjections and/or the end of the world, I plan to do that next week.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 20 Mar 2020 18:29:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Sat, Mar 21, 2020 at 4:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Mar 16, 2020 at 2:03 AM Suraj Kharage\n> <suraj.kharage@enterprisedb.com> wrote:\n> > One more suggestion, recent commit (1933ae62) has added the PostgreSQL home page to --help output.\n>\n> Good catch. Fixed. I also attempted to address the compiler warning\n> you mentioned in your other email.\n>\n> Also, I realized that the previous patch versions didn't handle the\n> hex-encoded path format that we need to use for non-UTF8 filenames,\n> and that there was no easy way to test that format. So, in this\n> version I added an option to force all pathnames to be encoded in that\n> format. I also made that option capable of suppressing the backup\n> manifest altogether. Other than that, this version is pretty much the\n> same as the last version, except for a few additional test cases which\n> I added to get the code coverage up even a little more. It would be\n> nice if someone could test whether the tests pass on Windows.\n>\n\nOn my CentOS, the patch gives below compilation failure:\npg_validatebackup.c: In function ‘parse_manifest_file’:\npg_validatebackup.c:335:19: error: assignment left-hand side might be\na candidate for a format attribute [-Werror=suggest-attribute=format]\n context.error_cb = report_manifest_error;\n\nI have tested it on Windows and found there are multiple failures.\nThe failures are as below:\nTest Summary Report\n---------------------------------------\nt/002_algorithm.pl (Wstat: 512 Tests: 5 Failed: 4)\n Failed tests: 2-5\n Non-zero exit status: 2\n Parse errors: Bad plan. You planned 19 tests but ran 5.\nt/003_corruption.pl (Wstat: 256 Tests: 14 Failed: 7)\n Failed tests: 2, 4, 6, 8, 10, 12, 14\n Non-zero exit status: 1\n Parse errors: Bad plan. 
You planned 44 tests but ran 14.\nt/004_options.pl (Wstat: 4352 Tests: 25 Failed: 17)\n Failed tests: 2, 4, 6-12, 14-17, 19-20, 22, 25\n Non-zero exit status: 17\nt/005_bad_manifest.pl (Wstat: 1792 Tests: 44 Failed: 7)\n Failed tests: 18, 24, 26, 30, 32, 34, 36\n Non-zero exit status: 7\nFiles=6, Tests=109, 72 wallclock secs ( 0.05 usr + 0.01 sys = 0.06 CPU)\nResult: FAIL\n\nFailure Report\n------------------------\nt/002_algorithm.pl ..... 1/19\n# Failed test 'backup ok with algorithm \"none\"'\n# at t/002_algorithm.pl line 33.\n\n# Failed test 'backup manifest exists'\n# at t/002_algorithm.pl line 39.\n\nt/002_algorithm.pl ..... 4/19 # Failed test 'validate backup with\nalgorithm \"none\"'\n# at t/002_algorithm.pl line 53.\n\n# Failed test 'backup ok with algorithm \"crc32c\"'\n# at t/002_algorithm.pl line 33.\n# Looks like you planned 19 tests but ran 5.\n# Looks like you failed 4 tests of 5 run.\n# Looks like your test exited with 2 just after 5.\nt/002_algorithm.pl ..... Dubious, test returned 2 (wstat 512, 0x200)\nFailed 18/19 subtests\nt/003_corruption.pl .... 1/44\n# Failed test 'intact backup validated'\n# at t/003_corruption.pl line 110.\n\n# Failed test 'corrupt backup fails validation: extra_file: matches'\n# at t/003_corruption.pl line 117.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:extra_file.*present on disk but not in the manifest)'\nt/003_corruption.pl .... 5/44\n# Failed test 'intact backup validated'\n# at t/003_corruption.pl line 110.\nt/003_corruption.pl .... 7/44\n# Failed test 'corrupt backup fails validation:\nextra_tablespace_file: matches'\n# at t/003_corruption.pl line 117.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:extra_ts_file.*present on disk but not in the\nmanifest)'\nt/003_corruption.pl .... 
9/44\n# Failed test 'intact backup validated'\n# at t/003_corruption.pl line 110.\n\n# Failed test 'corrupt backup fails validation: missing_file: matches'\n# at t/003_corruption.pl line 117.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:pg_xact/0000.*present in the manifest but not on disk)'\nt/003_corruption.pl .... 13/44\n# Failed test 'intact backup validated'\n# at t/003_corruption.pl line 110.\n# Looks like you planned 44 tests but ran 14.\n# Looks like you failed 7 tests of 14 run.\n# Looks like your test exited with 1 just after 14.\nt/003_corruption.pl .... Dubious, test returned 1 (wstat 256, 0x100)\nFailed 37/44 subtests\nt/004_options.pl ....... 1/25\n# Failed test '-q succeeds: exit code 0'\n# at t/004_options.pl line 25.\n\n# Failed test '-q succeeds: no stderr'\n# at t/004_options.pl line 27.\n# got: 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# expected: ''\n\n# Failed test '-q checksum mismatch: matches'\n# at t/004_options.pl line 37.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:checksum mismatch for file \\\"PG_VERSION\\\")'\nt/004_options.pl ....... 
7/25\n# Failed test '-s skips checksumming: exit code 0'\n# at t/004_options.pl line 43.\n\n# Failed test '-s skips checksumming: no stderr'\n# at t/004_options.pl line 43.\n# got: 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# expected: ''\n\n# Failed test '-s skips checksumming: matches'\n# at t/004_options.pl line 43.\n# ''\n# doesn't match '(?^:backup successfully verified)'\n\n# Failed test '-i ignores problem file: exit code 0'\n# at t/004_options.pl line 48.\n\n# Failed test '-i ignores problem file: no stderr'\n# at t/004_options.pl line 48.\n# got: 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# expected: ''\n\n# Failed test '-i ignores problem file: matches'\n# at t/004_options.pl line 48.\n# ''\n# doesn't match '(?^:backup successfully verified)'\n\n# Failed test '-i does not ignore all problems: matches'\n# at t/004_options.pl line 57.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:pg_xact.*is present in the manifest but not on disk)'\n\n# Failed test 'multiple -i options work: exit code 0'\n# at t/004_options.pl line 62.\n\n# Failed test 'multiple -i options work: no stderr'\n# at t/004_options.pl line 62.\n# got: 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# expected: ''\n\n# Failed test 'multiple -i options work: matches'\n# at t/004_options.pl line 62.\n# ''\n# doesn't match '(?^:backup successfully verified)'\n\n# Failed test 'multiple problems: missing files reported'\n# at t/004_options.pl line 71.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:pg_xact.*is present in the manifest but not on disk)'\n\n# Failed test 'multiple problems: checksum mismatch reported'\n# at t/004_options.pl line 73.\n# 'pg_validatebackup: fatal: 
could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:checksum mismatch for file \\\"PG_VERSION\\\")'\n\n# Failed test '-e reports 1 error: missing files reported'\n# at t/004_options.pl line 80.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:pg_xact.*is present in the manifest but not on disk)'\n\n# Failed test 'nonexistent backup directory: matches'\n# at t/004_options.pl line 86.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:could not open directory)'\n# Looks like you failed 17 tests of 25.\nt/004_options.pl ....... Dubious, test returned 17 (wstat 4352, 0x1100)\nFailed 17/25 subtests\nt/005_bad_manifest.pl .. 1/44\n# Failed test 'missing pathname: matches'\n# at t/005_bad_manifest.pl line 156.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: missing size\n# '\n# doesn't match '(?^:could not parse backup manifest: missing pathname)'\n\n# Failed test 'missing size: matches'\n# at t/005_bad_manifest.pl line 156.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:could not parse backup manifest: missing size)'\n\n# Failed test 'file size is not an integer: matches'\n# at t/005_bad_manifest.pl line 156.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:could not parse backup manifest: file size is\nnot an integer)'\n\n# Failed test 'duplicate pathname in backup manifest: matches'\n# at t/005_bad_manifest.pl line 156.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:fatal: duplicate pathname in backup manifest)'\nt/005_bad_manifest.pl .. 
31/44\n# Failed test 'checksum without algorithm: matches'\n# at t/005_bad_manifest.pl line 156.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:could not parse backup manifest: checksum\nwithout algorithm)'\n\n# Failed test 'unrecognized checksum algorithm: matches'\n# at t/005_bad_manifest.pl line 156.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:fatal: unrecognized checksum algorithm)'\n\n# Failed test 'invalid checksum for file: matches'\n# at t/005_bad_manifest.pl line 156.\n# 'pg_validatebackup: fatal: could not parse backup\nmanifest: both pathname and encoded pathname\n# '\n# doesn't match '(?^:fatal: invalid checksum for file)'\n# Looks like you failed 7 tests of 44.\nt/005_bad_manifest.pl .. Dubious, test returned 7 (wstat 1792, 0x700)\nFailed 7/44 subtests\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 21 Mar 2020 17:56:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Sat, Mar 21, 2020 at 5:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> On my CentOS, the patch gives below compilation failure:\n> pg_validatebackup.c: In function ‘parse_manifest_file’:\n> pg_validatebackup.c:335:19: error: assignment left-hand side might be\n> a candidate for a format attribute [-Werror=suggest-attribute=format]\n> context.error_cb = report_manifest_error;\n>\n> I have tested it on Windows and found there are multiple failures.\n> The failures are as below:\n>\n\nI have started to investigate the failures.\n\n>\n> Failure Report\n> ------------------------\n> t/002_algorithm.pl ..... 1/19\n> # Failed test 'backup ok with algorithm \"none\"'\n> # at t/002_algorithm.pl line 33.\n>\n\nI checked the log and it was giving error:\n\n/src/bin/pg_validatebackup/tmp_check/t_002_algorithm_master_data/backup/none\n--manifest-checksum none --no-sync\n\\tmp_install\\bin\\pg_basebackup.EXE: illegal option -- manifest-checksum\n\nIt seems the option to be used should be --manifest-checksums. The\nattached patch fixes this problem for me.\n\n> t/002_algorithm.pl ..... 4/19 # Failed test 'validate backup with\n> algorithm \"none\"'\n> # at t/002_algorithm.pl line 53.\n>\n\nThe error message for the above failure is:\npg_validatebackup: fatal: could not parse backup manifest: both\npathname and encoded pathname\n\nI don't know at this stage what could cause this? Any pointers?\n\nAttached are logs of failed runs (regression.tar.gz).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 23 Mar 2020 16:34:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 7:04 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> /src/bin/pg_validatebackup/tmp_check/t_002_algorithm_master_data/backup/none\n> --manifest-checksum none --no-sync\n> \\tmp_install\\bin\\pg_basebackup.EXE: illegal option -- manifest-checksum\n>\n> It seems the option to be used should be --manifest-checksums. The\n> attached patch fixes this problem for me.\n\nOK, incorporated that.\n\n> > t/002_algorithm.pl ..... 4/19 # Failed test 'validate backup with\n> > algorithm \"none\"'\n> > # at t/002_algorithm.pl line 53.\n> >\n>\n> The error message for the above failure is:\n> pg_validatebackup: fatal: could not parse backup manifest: both\n> pathname and encoded pathname\n>\n> I don't know at this stage what could cause this? Any pointers?\n\nI think I forgot an initializer. Try this version.\n\nI also incorporated a fix previously proposed by Suraj for the\ncompiler warning you mentioned in the other email.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 23 Mar 2020 12:15:54 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> I think I forgot an initializer. Try this version.\n\nJust took a quick look through this. I'm pretty sure David wants to\nlook at it too. Anyway, some comments below.\n\n> diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml\n> index f139ba0231..d1ff53e8e8 100644\n> --- a/doc/src/sgml/protocol.sgml\n> +++ b/doc/src/sgml/protocol.sgml\n> @@ -2466,7 +2466,7 @@ The commands accepted in replication mode are:\n> </varlistentry>\n> \n> <varlistentry id=\"protocol-replication-base-backup\" xreflabel=\"BASE_BACKUP\">\n> - <term><literal>BASE_BACKUP</literal> [ <literal>LABEL</literal> <replaceable>'label'</replaceable> ] [ <literal>PROGRESS</literal> ] [ <literal>FAST</literal> ] [ <literal>WAL</literal> ] [ <literal>NOWAIT</literal> ] [ <literal>MAX_RATE</literal> <replaceable>rate</replaceable> ] [ <literal>TABLESPACE_MAP</literal> ] [ <literal>NOVERIFY_CHECKSUMS</literal> ]\n> + <term><literal>BASE_BACKUP</literal> [ <literal>LABEL</literal> <replaceable>'label'</replaceable> ] [ <literal>PROGRESS</literal> ] [ <literal>FAST</literal> ] [ <literal>WAL</literal> ] [ <literal>NOWAIT</literal> ] [ <literal>MAX_RATE</literal> <replaceable>rate</replaceable> ] [ <literal>TABLESPACE_MAP</literal> ] [ <literal>NOVERIFY_CHECKSUMS</literal> ] [ <literal>MANIFEST</literal> <replaceable>manifest_option</replaceable> ] [ <literal>MANIFEST_CHECKSUMS</literal> <replaceable>checksum_algorithm</replaceable> ]\n> <indexterm><primary>BASE_BACKUP</primary></indexterm>\n> </term>\n> <listitem>\n> @@ -2576,6 +2576,37 @@ The commands accepted in replication mode are:\n> </para>\n> </listitem>\n> </varlistentry>\n> +\n> + <varlistentry>\n> + <term><literal>MANIFEST</literal></term>\n> + <listitem>\n> + <para>\n> + When this option is specified with a value of <literal>ye'</literal>\n> + or <literal>force-escape</literal>, a backup manifest is created\n> + and sent along with the backup. 
The latter value forces all filenames\n> + to be hex-encoded; otherwise, this type of encoding is performed only\n> + for files whose names are non-UTF8 octet sequences.\n> + <literal>force-escape</literal> is intended primarily for testing\n> + purposes, to be sure that clients which read the backup manifest\n> + can handle this case. For compatibility with previous releases,\n> + the default is <literal>MANIFEST 'no'</literal>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><literal>MANIFEST_CHECKSUMS</literal></term>\n> + <listitem>\n> + <para>\n> + Specifies the algorithm that should be used to checksum each file\n> + for purposes of the backup manifest. Currently, the available\n> + algorithms are <literal>NONE</literal>, <literal>CRC32C</literal>,\n> + <literal>SHA224</literal>, <literal>SHA256</literal>,\n> + <literal>SHA384</literal>, and <literal>SHA512</literal>.\n> + The default is <literal>CRC32C</literal>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> </variablelist>\n> </para>\n> <para>\n\nWhile I get the desire to have a default here that includes checksums,\nthe way the command is structured, it strikes me as odd that the lack of\nMANIFEST_CHECKSUMS in the command actually results in checksums being\nincluded. I would think that we'd either:\n\n- have the lack of MANIFEST_CHECKSUMS mean 'No checksums'\n\nor\n\n- Require MANIFEST_CHECKSUMS to be specified and not have it be optional\n\nWe aren't expecting people to actually be typing these commands out and\nso I don't think it's a *huge* deal to have it the way you've written\nit, but it still strikes me as odd. 
I don't think I have a real\npreference between the two options that I suggest above, maybe very\nslightly in favor of the first.\n\n> diff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml\n> index 90638aad0e..bf6963a595 100644\n> --- a/doc/src/sgml/ref/pg_basebackup.sgml\n> +++ b/doc/src/sgml/ref/pg_basebackup.sgml\n> @@ -561,6 +561,69 @@ PostgreSQL documentation\n> </para>\n> </listitem>\n> </varlistentry>\n> +\n> + <varlistentry>\n> + <term><option>--no-manifest</option></term>\n> + <listitem>\n> + <para>\n> + Disables generation of a backup manifest. If this option is not\n> + specified, the server will and send generate a backup manifest\n> + which can be verified using <xref linkend=\"app-pgvalidatebackup\" />.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nHow about \"If this option is not specified, the server will generate and\nsend a backup manifest which can be verified using ...\"\n\n> + <varlistentry>\n> + <term><option>--manifest-checksums=<replaceable class=\"parameter\">algorithm</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Specifies the algorithm that should be used to checksum each file\n> + for purposes of the backup manifest. Currently, the available\n> + algorithms are <literal>NONE</literal>, <literal>CRC32C</literal>,\n> + <literal>SHA224</literal>, <literal>SHA256</literal>,\n> + <literal>SHA384</literal>, and <literal>SHA512</literal>.\n> + The default is <literal>CRC32C</literal>.\n> + </para>\n\nAs I recall, there was an invitation to argue about the defaults at one\npoint, and so I'm going to say here that I would advocate for a\ndifferent default than 'crc32c'. Specifically, I would think sha256 or\n512 would be better. 
I don't recall seeing a debate about this that\nconclusively found crc32c to be better, but I'm happy to go back and\nreread anything someone wants to point me at.\n\n> + <para>\n> + If <literal>NONE</literal> is selected, the backup manifest will\n> + not contain any checksums. Otherwise, it will contain a checksum\n> + of each file in the backup using the specified algorithm. In addition,\n> + the manifest itself will always contain a <literal>SHA256</literal>\n> + checksum of its own contents. The <literal>SHA</literal> algorithms\n> + are significantly more CPU-intensive than <literal>CRC32C</literal>,\n> + so selecting one of them may increase the time required to complete\n> + the backup.\n> + </para>\n\nIt also seems a bit silly to me that using the defaults means having to\ndeal with two different algorithms- crc32c and sha256. Considering how\nfast these algorithms are, compared to everything else involved in a\nbackup (particularly one that's likely going across a network...), I\nwonder if we should say \"may slightly increase\" above.\n\n> + <para>\n> + On the other hand, <literal>CRC32C</literal> is not a cryptographic\n> + hash function, so it is only suitable for protecting against\n> + inadvertent or random modifications to a backup. An adversary\n> + who can modify the backup could easily do so in such a way that\n> + the CRC does not change, whereas a SHA collision will be hard\n> + to manufacture. (However, note that if the attacker also has access\n> + to modify the backup manifest itself, no checksum algorithm will\n> + provide any protection.) An additional advantage of the\n> + <literal>SHA</literal> family of functions is that they output\n> + a much larger number of bits.\n> + </para>\n\nI'm not really sure that this paragraph is sensible to include.. We\ncertainly don't talk about adversaries and cryptographic hash functions\nwhen we talk about our page-level checksums, for example. 
I'm not\ncompletely against including it, but I don't want to give the impression\nthat this is something we routinely consider or that lack of discussion\nelsewhere implies we have protections against a determined attacker.\n\n> diff --git a/doc/src/sgml/ref/pg_validatebackup.sgml b/doc/src/sgml/ref/pg_validatebackup.sgml\n> new file mode 100644\n> index 0000000000..1c171f6970\n> --- /dev/null\n> +++ b/doc/src/sgml/ref/pg_validatebackup.sgml\n> @@ -0,0 +1,232 @@\n> +<!--\n> +doc/src/sgml/ref/pg_validatebackup.sgml\n> +PostgreSQL documentation\n> +-->\n> +\n> +<refentry id=\"app-pgvalidatebackup\">\n> + <indexterm zone=\"app-pgvalidatebackup\">\n> + <primary>pg_validatebackup</primary>\n> + </indexterm>\n> +\n> + <refmeta>\n> + <refentrytitle>pg_validatebackup</refentrytitle>\n> + <manvolnum>1</manvolnum>\n> + <refmiscinfo>Application</refmiscinfo>\n> + </refmeta>\n> +\n> + <refnamediv>\n> + <refname>pg_validatebackup</refname>\n> + <refpurpose>verify the integrity of a base backup of a\n> + <productname>PostgreSQL</productname> cluster</refpurpose>\n> + </refnamediv>\n\n\"verify the integrity of a backup taken using pg_basebackup\"\n\n> + <refsect1>\n> + <title>\n> + Description\n> + </title>\n> + <para>\n> + <application>pg_validatebackup</application> is used to check the integrity\n> + of a database cluster backup. The backup being checked should have been\n> + created by <command>pg_basebackup</command> or some other tool that includes\n> + a <literal>backup_manifest</literal> file with the backup. The backup\n> + must be stored in the \"plain\" format; a \"tar\" format backup can be checked\n> + after extracting it. 
Backup manifests are created by the server beginning\n> + with <productname>PostgreSQL</productname> version 13, so older backups\n> + cannot be validated using this tool.\n> + </para>\n\nThis seems to invite the idea that pg_validatebackup should be able to\nwork with external backup solutions- but I'm a bit concerned by that\nidea because it seems like it would then mean we'd have to be\nparticularly careful when changing things in this area, and I'm not\nthrilled by that. I'd like to make sure that new versions of\npg_validatebackup work with older backups, and, ideally, older versions\nof pg_validatebackup would work even with newer backups, all of which I\nthink the json structure of the manifest helps us with, but that's when\nwe're building the manifest and know what it's going to look like.\n\nMaybe to put it another way- would a patch be accepted to make\npg_validatebackup work with other manifests..? If not, then I'd keep\nthis to the more specific \"this tool is used to validate backups taken\nusing pg_basebackup\".\n\n> + <para>\n> + <application>pg_validatebackup</application> reads the manifest file of a\n> + backup, verifies the manifest against its own internal checksum, and then\n> + verifies that the same files are present in the target directory as in the\n> + manifest itself. It then verifies that each file has the expected checksum,\n> + unless the backup was taken the checksum algorithm set to\n\n\"was taken with the checksum algorithm\"...\n\n> + <literal>none</literal>, in which case checksum verification is not\n> + performed. The presence or absence of directories is not checked, except\n> + indirectly: if a directory is missing, any files it should have contained\n> + will necessarily also be missing. 
Certain files and directories are\n> + excluded from verification:\n> + </para>\n> +\n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + <literal>backup_manifest</literal> is ignored because the backup\n> + manifest is logically not part of the backup and does not include\n> + any entry for itself.\n> + </para>\n> + </listitem>\n\nThis seems a bit confusing, doesn't it? The backup_manifest must exist,\nand its checksum is internal, and is checked, isn't it? Why say that\nit's excluded..?\n\n> + <listitem>\n> + <para>\n> + <literal>pg_wal</literal> is ignored because WAL files are sent\n> + separately from the backup, and are therefore not described by the\n> + backup manifest.\n> + </para>\n> + </listitem>\n\nI don't agree with the choice to exclude the WAL files, considering\nthey're an integral part of a backup, to exclude them means that if\nthey've been corrupted at all then the entire backup is invalid. You\ndon't want to be discovering that when you're trying to do a restore of\na backup that you took with pg_basebackup and which pg_validatebackup\nsays is valid. After all, the tool being used here, pg_basebackup,\n*does* also stream the WAL files- there's no reason why we can't\ncalculate a checksum on them and store that checksum somewhere and use\nit to validate the WAL files. This, in my opinion, is actually a\nshow-stopper for this feature. 
Claiming it's a valid backup when we\ndon't check the absolutely necessary-for-restore WAL is making a false\nclaim, no matter how well it's documented.\n\nI do understand that it's possible to run pg_basebackup without the WAL\nfiles being grabbed as part of that run- in such a case, we should be\nable to detect that was the case for the backup and when running\npg_validatebackup we should issue a WARNING that the WAL files weren't\nable to be verified (we could have an option to suppress that warning if\npeople feel that's needed).\n\n> + <listitem>\n> + <para>\n> + <literal>postgesql.auto.conf</literal>,\n> + <literal>standby.signal</literal>,\n> + and <literal>recovery.signal</literal> are ignored because they may\n> + sometimes be created or modified by the backup client itself.\n> + (For example, <literal>pg_basebackup -R</literal> will modify\n> + <literal>postgresql.auto.conf</literal> and create\n> + <literal>standby.signal</literal>.)\n> + </para>\n> + </listitem>\n> + </itemizedlist>\n> + </refsect1>\n\nNot really thrilled with this (pg_basebackup certainly could figure out\nthe checksum for those files...), but I also don't think it's a huge\nissue as they can be recreated by a user (unlike a WAL file..).\n\nI got through most of the pg_basebackup changes, and they looked pretty\ngood in general. Will try to review more tomorrow.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 23 Mar 2020 18:42:17 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 9:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Mar 23, 2020 at 7:04 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > /src/bin/pg_validatebackup/tmp_check/t_002_algorithm_master_data/backup/none\n> > --manifest-checksum none --no-sync\n> > \\tmp_install\\bin\\pg_basebackup.EXE: illegal option -- manifest-checksum\n> >\n> > It seems the option to be used should be --manifest-checksums. The\n> > attached patch fixes this problem for me.\n>\n> OK, incorporated that.\n>\n> > > t/002_algorithm.pl ..... 4/19 # Failed test 'validate backup with\n> > > algorithm \"none\"'\n> > > # at t/002_algorithm.pl line 53.\n> > >\n> >\n> > The error message for the above failure is:\n> > pg_validatebackup: fatal: could not parse backup manifest: both\n> > pathname and encoded pathname\n> >\n> > I don't know at this stage what could cause this? Any pointers?\n>\n> I think I forgot an initializer. Try this version.\n>\n\nAll others except one are passing now. See the summary of the failed\ntest below and attached are failed run logs.\n\nTest Summary Report\n-------------------\nt/003_corruption.pl (Wstat: 65280 Tests: 14 Failed: 0)\n Non-zero exit status: 255\n Parse errors: Bad plan. You planned 44 tests but ran 14.\nFiles=6, Tests=123, 164 wallclock secs ( 0.06 usr + 0.02 sys = 0.08 CPU)\nResult: FAIL\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 24 Mar 2020 09:13:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 11:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> All others except one are passing now. See the summary of the failed\n> test below and attached are failed run logs.\n>\n> Test Summary Report\n> -------------------\n> t/003_corruption.pl (Wstat: 65280 Tests: 14 Failed: 0)\n> Non-zero exit status: 255\n> Parse errors: Bad plan. You planned 44 tests but ran 14.\n> Files=6, Tests=123, 164 wallclock secs ( 0.06 usr + 0.02 sys = 0.08 CPU)\n> Result: FAIL\n\nHmm. It looks like it's trying to remove the symlink that points to\nthe tablespace directory, and failing with no error message. I could\nset that permutation to be skipped on Windows, or maybe there's an\nalternate method you can suggest that would work?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 24 Mar 2020 13:00:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 6:42 PM Stephen Frost <sfrost@snowman.net> wrote:\n> While I get the desire to have a default here that includes checksums,\n> the way the command is structured, it strikes me as odd that the lack of\n> MANIFEST_CHECKSUMS in the command actually results in checksums being\n> included.\n\nI don't think that's quite accurate, because the default for the\nMANIFEST option is 'no', so the actual default if you say nothing\nabout manifests at all, you don't get one. However, it is true that if\nyou ask for a manifest and you don't specify the type of checksums,\nyou get CRC-32C. We could change it so that if you ask for a manifest\nyou must also specify the type of checksum, but I don't see any\nadvantage in that approach. Nothing prevents the client from\nspecifying the value if it cares, but making the default \"I don't\ncare, you pick\" seems pretty sensible. It could be really helpful if,\nfor example, we decide to remove the initial default in a future\nrelease for some reason. Then the client just keeps working without\nneeding to change anything, but anyone who explicitly specified the\nold default gets an error.\n\n> > + Disables generation of a backup manifest. If this option is not\n> > + specified, the server will and send generate a backup manifest\n> > + which can be verified using <xref linkend=\"app-pgvalidatebackup\" />.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> How about \"If this option is not specified, the server will generate and\n> send a backup manifest which can be verified using ...\"\n\nGood suggestion. :-)\n\n> > + <varlistentry>\n> > + <term><option>--manifest-checksums=<replaceable class=\"parameter\">algorithm</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Specifies the algorithm that should be used to checksum each file\n> > + for purposes of the backup manifest. 
Currently, the available\n> > + algorithms are <literal>NONE</literal>, <literal>CRC32C</literal>,\n> > + <literal>SHA224</literal>, <literal>SHA256</literal>,\n> > + <literal>SHA384</literal>, and <literal>SHA512</literal>.\n> > + The default is <literal>CRC32C</literal>.\n> > + </para>\n>\n> As I recall, there was an invitation to argue about the defaults at one\n> point, and so I'm going to say here that I would advocate for a\n> different default than 'crc32c'. Specifically, I would think sha256 or\n> 512 would be better. I don't recall seeing a debate about this that\n> conclusively found crc32c to be better, but I'm happy to go back and\n> reread anything someone wants to point me at.\n\nIt was discussed upthread. Andrew Dunstan argued that there was no\nreason to use a cryptographic checksum here and that we shouldn't do\nso gratuitously. Suraj Kharage found that CRC-32C has very little\nperformance impact but that any of the SHA functions slow down backups\nconsiderably. David Steele pointed out that you'd need a better\nchecksum if you wanted to use it for purposes such as delta restore,\nwith which I agree, but that's not the design center for this feature.\nI concluded that different people wanted different things, so that we\nought to make this configurable, but that CRC-32C is a good default.\nIt has approximately a 99.9999999767169% chance of detecting a random\nerror, which is pretty good, and it doesn't drastically slow down\nbackups, which is also good.\n\n> It also seems a bit silly to me that using the defaults means having to\n> deal with two different algorithms- crc32c and sha256. 
Considering how\n> fast these algorithms are, compared to everything else involved in a\n> backup (particularly one that's likely going across a network...), I\n> wonder if we should say \"may slightly increase\" above.\n\nActually, Suraj's results upthread show that it's a pretty big hit.\n\n> > + <para>\n> > + On the other hand, <literal>CRC32C</literal> is not a cryptographic\n> > + hash function, so it is only suitable for protecting against\n> > + inadvertent or random modifications to a backup. An adversary\n> > + who can modify the backup could easily do so in such a way that\n> > + the CRC does not change, whereas a SHA collision will be hard\n> > + to manufacture. (However, note that if the attacker also has access\n> > + to modify the backup manifest itself, no checksum algorithm will\n> > + provide any protection.) An additional advantage of the\n> > + <literal>SHA</literal> family of functions is that they output\n> > + a much larger number of bits.\n> > + </para>\n>\n> I'm not really sure that this paragraph is sensible to include.. We\n> certainly don't talk about adversaries and cryptographic hash functions\n> when we talk about our page-level checksums, for example. I'm not\n> completely against including it, but I don't want to give the impression\n> that this is something we routinely consider or that lack of discussion\n> elsewhere implies we have protections against a determined attacker.\n\nGiven the skepticism from some quarters about CRC-32C on this thread,\nI didn't want to oversell it. Also, I do think that these things are\npossibly things that we should consider more widely. I agree with\nAndrew's complaint that it's far too easy to just throw SHA<lots> at\nproblems that don't really require it without any actually good\nreason. Spelling out our reasons for choosing certain algorithms for\ncertain purposes seems like a good habit to get into, and if we\nhaven't done it in other places, maybe we should. 
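Robert's nine-nines figure is simply the chance that a random corruption avoids a 32-bit collision, which is easy to sanity-check (a sketch; Python's zlib.crc32 implements plain CRC-32 rather than the CRC-32C variant PostgreSQL uses, but the collision arithmetic is identical for any 32-bit checksum):

```python
import zlib

# A random corruption slips past a 32-bit checksum only if the new value
# happens to collide, i.e. with probability 2**-32.
p_detect = 1 - 2.0 ** -32
print(f'{p_detect * 100:.13f}%')  # 99.9999999767169%

# A single flipped bit always changes a CRC, regardless of input length:
data = bytearray(b'backup manifest contents')
before = zlib.crc32(data)
data[3] ^= 0x01  # flip one bit
assert zlib.crc32(data) != before
```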
On the other hand,\nwhile I'm inclined to keep this paragraph, I won't lose much sleep if\nwe decide to remove it.\n\n> > + <refnamediv>\n> > + <refname>pg_validatebackup</refname>\n> > + <refpurpose>verify the integrity of a base backup of a\n> > + <productname>PostgreSQL</productname> cluster</refpurpose>\n> > + </refnamediv>\n>\n> \"verify the integrity of a backup taken using pg_basebackup\"\n\nOK.\n\n> This seems to invite the idea that pg_validatebackup should be able to\n> work with external backup solutions- but I'm a bit concerned by that\n> idea because it seems like it would then mean we'd have to be\n> particularly careful when changing things in this area, and I'm not\n> thrilled by that. I'd like to make sure that new versions of\n> pg_validatebackup work with older backups, and, ideally, older versions\n> of pg_validatebackup would work even with newer backups, all of which I\n> think the json structure of the manifest helps us with, but that's when\n> we're building the manifest and know what it's going to look like.\n\nBoth you and David made forceful arguments that this needed to be JSON\nrather than an ad-hoc text format precisely so that other tools could\nparse it more easily, and I just spent *a lot* of time making the JSON\nparsing stuff work precisely so that you could have that. This project\nwould've been done a month ago if not for that. I don't care all that\nmuch whether we remove the mention here, but the idea that using JSON\nwas so that pg_validatebackup could manage compatibility issues is\njust not correct. The version number on line 1 of the file was more\nthan sufficient for that purpose.\n\n> > + <para>\n> > + <application>pg_validatebackup</application> reads the manifest file of a\n> > + backup, verifies the manifest against its own internal checksum, and then\n> > + verifies that the same files are present in the target directory as in the\n> > + manifest itself. 
It then verifies that each file has the expected checksum,\n> > + unless the backup was taken the checksum algorithm set to\n>\n> \"was taken with the checksum algorithm\"...\n\nOops. Will fix.\n\n> > + <itemizedlist>\n> > + <listitem>\n> > + <para>\n> > + <literal>backup_manifest</literal> is ignored because the backup\n> > + manifest is logically not part of the backup and does not include\n> > + any entry for itself.\n> > + </para>\n> > + </listitem>\n>\n> This seems a bit confusing, doesn't it? The backup_manifest must exist,\n> and its checksum is internal, and is checked, isn't it? Why say that\n> it's excluded..?\n\nWell, there's no entry in the backup manifest for backup_manifest\nitself. Normally, the presence of a file not mentioned in\nbackup_manifest would cause a complaint about an extra file, but\nbecause backup_manifest is in the ignore list, it doesn't.\n\n> > + <listitem>\n> > + <para>\n> > + <literal>pg_wal</literal> is ignored because WAL files are sent\n> > + separately from the backup, and are therefore not described by the\n> > + backup manifest.\n> > + </para>\n> > + </listitem>\n>\n> I don't agree with the choice to exclude the WAL files, considering\n> they're an integral part of a backup, to exclude them means that if\n> they've been corrupted at all then the entire backup is invalid. You\n> don't want to be discovering that when you're trying to do a restore of\n> a backup that you took with pg_basebackup and which pg_validatebackup\n> says is valid. After all, the tool being used here, pg_basebackup,\n> *does* also stream the WAL files- there's no reason why we can't\n> calculate a checksum on them and store that checksum somewhere and use\n> it to validate the WAL files. This, in my opinion, is actually a\n> show-stopper for this feature. 
Claiming it's a valid backup when we\n> don't check the absolutely necessary-for-restore WAL is making a false\n> claim, no matter how well it's documented.\n\nThe default for pg_basebackup is -Xstream, which means that the WAL\nfiles are being sent over a separate connection that has no connection\nto the original session. The server, when generating the backup\nmanifest, has no idea what WAL files are being sent over that separate\nconnection, and thus cannot include them in the manifest. This problem\ncould be \"solved\" by having the client generate the manifest rather\nthan the server, but I think that cure would be worse than the\ndisease. As it stands, the manifest provides some protection against\ntransmission errors, which would be lost with that design. As you\npoint out, this clearly can't be done with -Xnone. I think it would be\npossible to support this with -Xfetch, but we'd have to have the\nmanifest itself specify whether or not it included files in pg_wal,\nwhich would require complicating the format a bit. I don't think that\nmakes sense. I assume -Xstream is the most commonly-used mode, because\nthe default used to be -Xfetch and we changed it, which I think we\nwould not have done unless people liked -Xstream significantly better.\nAdding complexity to cater to a non-default case which I suspect is\nnot widely used doesn't really make sense to me.\n\nIn the future, we might want to consider improvements which could make\nvalidation of pg_wal feasible in common cases. Specifically, suppose\nthat pg_basebackup could receive the manifest from the server, keep\nall the entries for the existing files just as they are, but add\nentries for WAL files and anything else it may have added to the\nbackup, recompute the manifest checksum, and store the resulting\nrevised manifest with the backup. That, I think, would be fairly cool,\nbut it's a significant body of additional development work, and this\nis already quite a large patch. 
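The client-side revision step just described could be sketched roughly as follows (hypothetical code: the field names and the way the manifest checksum is recomputed here are assumptions for illustration, not the actual manifest format):

```python
import hashlib
import json

def revise_manifest(manifest_text, wal_files):
    '''Keep the server's file entries as-is, append entries for the WAL
    files the client streamed itself, then recompute the whole-manifest
    checksum.  Field names are illustrative assumptions.'''
    manifest = json.loads(manifest_text)
    for name, contents in sorted(wal_files.items()):
        manifest.setdefault('Files', []).append({
            'Path': 'pg_wal/' + name,
            'Size': len(contents),
            'Checksum-Algorithm': 'SHA256',
            'Checksum': hashlib.sha256(contents).hexdigest(),
        })
    # Recompute over everything except the checksum field itself.
    body = {k: v for k, v in manifest.items() if k != 'Manifest-Checksum'}
    encoded = json.dumps(body, sort_keys=True).encode()
    manifest['Manifest-Checksum'] = hashlib.sha256(encoded).hexdigest()
    return json.dumps(manifest, sort_keys=True)

server_manifest = json.dumps({'PostgreSQL-Backup-Manifest-Version': 1,
                              'Files': []})
revised = json.loads(revise_manifest(
    server_manifest, {'000000010000000000000001': b'fake wal contents'}))
```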
The patch itself has grown to about\n3000 lines, and has already 10 preparatory commits doing another ~1500\nlines of refactoring to prepare for it.\n\n> Not really thrilled with this (pg_basebackup certainly could figure out\n> the checksum for those files...), but I also don't think it's a huge\n> issue as they can be recreated by a user (unlike a WAL file..).\n\nYeah, same issues, though. Here again, there are several possible\nfixes: (1) make the server modify those files rather than letting\npg_basebackup do it; (2) make the client compute the manifest rather\nthan the server; (3) have the client revise the manifest. (3) makes\nmost sense to me, but I think that it would be better to return to\nthat topic at a later date. This is certainly not a perfect feature as\nthings stand but I believe it is good enough to provide significant\nbenefits.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 24 Mar 2020 14:04:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 10:30 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Mar 23, 2020 at 11:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > All others except one are passing now. See the summary of the failed\n> > test below and attached are failed run logs.\n> >\n> > Test Summary Report\n> > -------------------\n> > t/003_corruption.pl (Wstat: 65280 Tests: 14 Failed: 0)\n> > Non-zero exit status: 255\n> > Parse errors: Bad plan. You planned 44 tests but ran 14.\n> > Files=6, Tests=123, 164 wallclock secs ( 0.06 usr + 0.02 sys = 0.08 CPU)\n> > Result: FAIL\n>\n> Hmm. It looks like it's trying to remove the symlink that points to\n> the tablespace directory, and failing with no error message. I could\n> set that permutation to be skipped on Windows, or maybe there's an\n> alternate method you can suggest that would work?\n>\n\nWe can use rmdir() for Windows. The attached patch fixes the failure\nfor me. I have tried the test on CentOS as well after the fix and it\npasses there as well.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 25 Mar 2020 12:23:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Mar 23, 2020 at 6:42 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > While I get the desire to have a default here that includes checksums,\n> > the way the command is structured, it strikes me as odd that the lack of\n> > MANIFEST_CHECKSUMS in the command actually results in checksums being\n> > included.\n> \n> I don't think that's quite accurate, because the default for the\n> MANIFEST option is 'no', so the actual default if you say nothing\n> about manifests at all, you don't get one. However, it is true that if\n> you ask for a manifest and you don't specify the type of checksums,\n> you get CRC-32C. We could change it so that if you ask for a manifest\n> you must also specify the type of checksum, but I don't see any\n> advantage in that approach. Nothing prevents the client from\n> specifying the value if it cares, but making the default \"I don't\n> care, you pick\" seems pretty sensible. It could be really helpful if,\n> for example, we decide to remove the initial default in a future\n> release for some reason. Then the client just keeps working without\n> needing to change anything, but anyone who explicitly specified the\n> old default gets an error.\n\nI get that the default for manifest is 'no', but I don't really see how\nthat means that the lack of saying anything about checksums should mean\n\"give me crc32c checksums\". It's really rather common that if we don't\nspecify something, it means don't do that thing- like an 'ORDER BY'\nclause. 
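The semantics being debated can be modeled in a few lines (a sketch of the command behavior under discussion, not server code):

```python
def manifest_checksum_choice(manifest='no', manifest_checksums=None):
    '''Model of the proposed defaults: no manifest unless one is
    requested, and CRC32C when a manifest is requested without naming an
    algorithm -- i.e. omitting MANIFEST_CHECKSUMS means the server
    picks, rather than meaning no checksums.'''
    if manifest == 'no':
        return None           # no manifest at all, so nothing to checksum
    if manifest_checksums is None:
        return 'CRC32C'       # the server-chosen default being debated
    return manifest_checksums

assert manifest_checksum_choice() is None
assert manifest_checksum_choice(manifest='yes') == 'CRC32C'
assert manifest_checksum_choice('yes', 'NONE') == 'NONE'
```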
We aren't designing SQL here, so I'm not going to get terribly\nupset if you push forward with \"if you don't want checksums, you have to\nexplicitly say MANIFEST_CHECKSUMS no\", but I don't agree with the\nreasoning here.\n\n> > > + <varlistentry>\n> > > + <term><option>--manifest-checksums=<replaceable class=\"parameter\">algorithm</replaceable></option></term>\n> > > + <listitem>\n> > > + <para>\n> > > + Specifies the algorithm that should be used to checksum each file\n> > > + for purposes of the backup manifest. Currently, the available\n> > > + algorithms are <literal>NONE</literal>, <literal>CRC32C</literal>,\n> > > + <literal>SHA224</literal>, <literal>SHA256</literal>,\n> > > + <literal>SHA384</literal>, and <literal>SHA512</literal>.\n> > > + The default is <literal>CRC32C</literal>.\n> > > + </para>\n> >\n> > As I recall, there was an invitation to argue about the defaults at one\n> > point, and so I'm going to say here that I would advocate for a\n> > different default than 'crc32c'. Specifically, I would think sha256 or\n> > 512 would be better. I don't recall seeing a debate about this that\n> > conclusively found crc32c to be better, but I'm happy to go back and\n> > reread anything someone wants to point me at.\n> \n> It was discussed upthread. Andrew Dunstan argued that there was no\n> reason to use a cryptographic checksum here and that we shouldn't do\n> so gratuitously. Suraj Kharage found that CRC-32C has very little\n> performance impact but that any of the SHA functions slow down backups\n> considerably. 
David Steele pointed out that you'd need a better\n> checksum if you wanted to use it for purposes such as delta restore,\n> with which I agree, but that's not the design center for this feature.\n> I concluded that different people wanted different things, so that we\n> ought to make this configurable, but that CRC-32C is a good default.\n> It has approximately a 99.9999999767169% chance of detecting a random\n> error, which is pretty good, and it doesn't drastically slow down\n> backups, which is also good.\n\nThere were also comments made up-thread about how it might not be great\nfor larger (eg: 1GB files, like we tend to have quite a few of...), and\nsomething about it being a 40 year old algorithm.. Having re-read some\nof the discussion, I'm actually more inclined to say we should be using\nsha256 instead of crc32c.\n\n> > It also seems a bit silly to me that using the defaults means having to\n> > deal with two different algorithms- crc32c and sha256. Considering how\n> > fast these algorithms are, compared to everything else involved in a\n> > backup (particularly one that's likely going across a network...), I\n> > wonder if we should say \"may slightly increase\" above.\n> \n> Actually, Suraj's results upthread show that it's a pretty big hit.\n\nSo, I went back and re-read part of the thread and looked at the\n(seemingly, only one..?) post regarding timing and didn't understand\nwhat, exactly, was being timed there, because I didn't see the actual\ncommands/script/whatever that was used to get those results included.\n\nI'm sure that sha256 takes a lot more time than crc32c, I'm certainly\nnot trying to dispute that, but what's relevent here is how much it\nimpacts the time required to run the overall backup (including sync'ing\nit to disk, and possibly network transmission time.. 
if we're just\ncomparing the time to run it through memory then, sure, the sha256\ncomputation time might end up being quite a bit of the time, but that's\nnot really that interesting of a test..).\n\n> > > + <para>\n> > > + On the other hand, <literal>CRC32C</literal> is not a cryptographic\n> > > + hash function, so it is only suitable for protecting against\n> > > + inadvertent or random modifications to a backup. An adversary\n> > > + who can modify the backup could easily do so in such a way that\n> > > + the CRC does not change, whereas a SHA collision will be hard\n> > > + to manufacture. (However, note that if the attacker also has access\n> > > + to modify the backup manifest itself, no checksum algorithm will\n> > > + provide any protection.) An additional advantage of the\n> > > + <literal>SHA</literal> family of functions is that they output\n> > > + a much larger number of bits.\n> > > + </para>\n> >\n> > I'm not really sure that this paragraph is sensible to include.. We\n> > certainly don't talk about adversaries and cryptographic hash functions\n> > when we talk about our page-level checksums, for example. I'm not\n> > completely against including it, but I don't want to give the impression\n> > that this is something we routinely consider or that lack of discussion\n> > elsewhere implies we have protections against a determined attacker.\n> \n> Given the skepticism from some quarters about CRC-32C on this thread,\n> I didn't want to oversell it. Also, I do think that these things are\n> possibly things that we should consider more widely. I agree with\n> Andrew's complaint that it's far too easy to just throw SHA<lots> at\n> problems that don't really require it without any actually good\n> reason. Spelling out our reasons for choosing certain algorithms for\n> certain purposes seems like a good habit to get into, and if we\n> haven't done it in other places, maybe we should. 
On the other hand,\n> while I'm inclined to keep this paragraph, I won't lose much sleep if\n> we decide to remove it.\n\nI don't mind spelling out reasoning for certain algorithms over others,\nin general, this just seems a bit much. I'm not sure we need to be\ngoing into what being a cryptographic hash function means every time we\ntalk about any hash or checksum. Those who actually care about\ncryptographic hash function usage really don't need someone to explain\nto them that crc32c isn't cryptographically secure. The last sentence\nalso seems kind of odd (why is a much larger number of bits, alone, an\nadvantage..?).\n\nI tried to figure out a way to rewrite this and I feel like I keep\nending up coming back to something like \"CRC32C is a CRC, not a hash\"\nand that kind of truism just doesn't feel terribly useful to include in\nour documentation.\n\nMaybe:\n\n\"Using a SHA hash function provides a cryptographically secure digest\nof each file for users who wish to verify that the backup has not been\ntampered with, while the CRC32C algorithm provides a checksum which is\nmuch faster to calculate and good at catching errors due to accidental\nchanges but is not resistent to targeted modifications. Note that, to\nbe useful against an adversary who has access to the backup, the backup\nmanifest would need to be stored securely elsewhere or otherwise\nverified to have not been modified since the backup was taken.\"\n\nThis at least talks about things in a positive direction (SHA hash\nfunctions do this, CRC32C does that) rather than in a negative tone.\n\n> > This seems to invite the idea that pg_validatebackup should be able to\n> > work with external backup solutions- but I'm a bit concerned by that\n> > idea because it seems like it would then mean we'd have to be\n> > particularly careful when changing things in this area, and I'm not\n> > thrilled by that. 
I'd like to make sure that new versions of\n> > pg_validatebackup work with older backups, and, ideally, older versions\n> > of pg_validatebackup would work even with newer backups, all of which I\n> > think the json structure of the manifest helps us with, but that's when\n> > we're building the manifest and know what it's going to look like.\n> \n> Both you and David made forceful arguments that this needed to be JSON\n> rather than an ad-hoc text format precisely so that other tools could\n> parse it more easily, and I just spent *a lot* of time making the JSON\n> parsing stuff work precisely so that you could have that. This project\n> would've been done a month ago if not for that. I don't care all that\n> much whether we remove the mention here, but the idea that using JSON\n> was so that pg_validatebackup could manage compatibility issues is\n> just not correct. The version number on line 1 of the file was more\n> than sufficient for that purpose.\n\nI stand by the decision that the manifest should be in JSON, but that's\nwhat is produced by the backend server as part of a base backup, which\nis quite likely going to be used by some external tools, and isn't at\nall the same as the external pg_validatebackup command that the\ndiscussion here is about. I also did make the argument up-thread,\nthough I'll admit that it seemed to be mostly ignored, but I make it\nstill, that a simple version number sucks and using JSON does avoid some\nof the downsides from it. Particularly, I'd love to see a v13\npg_validatebackup able to work with a v14 pg_basebackup, even if that\nv14 pg_basebackup added some extra stuff to the manifest. That's\npossible to do with a generic structure like JSON and not something that\na simple version number would allow. 
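That kind of forward compatibility falls out naturally from a reader that ignores unrecognized keys, e.g. (a sketch; the key names here are assumptions for illustration):

```python
import json

def load_manifest(text):
    '''Tolerant reader: insist only on the fields this version needs and
    ignore anything extra a newer server may have added.  Key names are
    assumptions for illustration.'''
    manifest = json.loads(text)
    version = manifest.get('PostgreSQL-Backup-Manifest-Version')
    if version is None:
        raise ValueError('not a backup manifest')
    # Unknown top-level keys (say, from a newer release) are ignored.
    return {'version': version, 'files': manifest.get('Files', [])}

newer = json.dumps({'PostgreSQL-Backup-Manifest-Version': 1,
                    'Files': [],
                    'Some-Future-Field': 'ignored by an older reader'})
assert load_manifest(newer) == {'version': 1, 'files': []}
```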
Yes, I admit that we might change\nthe structure or the contents in a way where that wouldn't be possible\nand I'm not going to raise a fuss if we do so, but this approach gives\nus more options.\n\nAnyway, my point here was really just that *pg_validatebackup* is about\nvalidating backups taken with pg_basebackup. While it's possible that\nit could be used for backups taken with other tools, I don't think\nthat's really part of its actual mandate or that we're going to actively\nwork to add such support in the future.\n\n> > > + <itemizedlist>\n> > > + <listitem>\n> > > + <para>\n> > > + <literal>backup_manifest</literal> is ignored because the backup\n> > > + manifest is logically not part of the backup and does not include\n> > > + any entry for itself.\n> > > + </para>\n> > > + </listitem>\n> >\n> > This seems a bit confusing, doesn't it? The backup_manifest must exist,\n> > and its checksum is internal, and is checked, isn't it? Why say that\n> > it's excluded..?\n> \n> Well, there's no entry in the backup manifest for backup_manifest\n> itself. 
Normally, the presence of a file not mentioned in\n> backup_manifest would cause a complaint about an extra file, but\n> because backup_manifest is in the ignore list, it doesn't.\n\nYes, I get why it's excluded from the manifest and why we have code to\navoid complaining about it being an extra file, but this is\ndocumentation and, in this part of the docs, we seem to be saying that\nwe're not checking/validating the manifest, and that's certainly not\nactually true.\n\nIn particular, the sentence right above this list is:\n\n\"Certain files and directories are excluded from verification:\"\n\nbut we actually do verify the manifest, that's all I'm saying here.\n\nMaybe rewording that a bit is what would help, say:\n\n\"Certain files and directories are not included in the manifest:\"\n\nthen have the entry for backup_manifest be something like:\n\"backup_manifest is not included as it is the manifest itself and is not\nlogically part of the backup; backup_manifest is checked using its own\ninternal validation digest\" or something along those lines.\n\n> > > + <listitem>\n> > > + <para>\n> > > + <literal>pg_wal</literal> is ignored because WAL files are sent\n> > > + separately from the backup, and are therefore not described by the\n> > > + backup manifest.\n> > > + </para>\n> > > + </listitem>\n> >\n> > I don't agree with the choice to exclude the WAL files, considering\n> > they're an integral part of a backup, to exclude them means that if\n> > they've been corrupted at all then the entire backup is invalid. You\n> > don't want to be discovering that when you're trying to do a restore of\n> > a backup that you took with pg_basebackup and which pg_validatebackup\n> > says is valid. After all, the tool being used here, pg_basebackup,\n> > *does* also stream the WAL files- there's no reason why we can't\n> > calculate a checksum on them and store that checksum somewhere and use\n> > it to validate the WAL files. 
This, in my opinion, is actually a\n> > show-stopper for this feature. Claiming it's a valid backup when we\n> > don't check the absolutely necessary-for-restore WAL is making a false\n> > claim, no matter how well it's documented.\n> \n> The default for pg_basebackup is -Xstream, which means that the WAL\n> files are being sent over a separate connection that has no connection\n> to the original session. The server, when generating the backup\n> manifest, has no idea what WAL files are being sent over that separate\n> connection, and thus cannot include them in the manifest. This problem\n> could be \"solved\" by having the client generate the manifest rather\n> than the server, but I think that cure would be worse than the\n> disease. As it stands, the manifest provides some protection against\n> transmission errors, which would be lost with that design. As you\n> point out, this clearly can't be done with -Xnone. I think it would be\n> possible to support this with -Xfetch, but we'd have to have the\n> manifest itself specify whether or not it included files in pg_wal,\n> which would require complicating the format a bit. I don't think that\n> makes sense. 
I assume -Xstream is the most commonly-used mode, because\n> the default used to be -Xfetch and we changed it, which I think we\n> would not have done unless people liked -Xstream significantly better.\n> Adding complexity to cater to a non-default case which I suspect is\n> not widely used doesn't really make sense to me.\n\nYeah, I get that it's not easy to figure out how to validate the WAL,\nbut I stand by my opinion that it's simply not acceptable to exclude the\nnecessary WAL from verification so and to claim that a backup is valid\nwhen we haven't checked the WAL.\n\nI agree that -Xfetch isn't commonly used and only supporting validation\nof WAL when that's used isn't a good answer.\n\n> In the future, we might want to consider improvements which could make\n> validation of pg_wal feasible in common cases. Specifically, suppose\n> that pg_basebackup could receive the manifest from the server, keep\n> all the entries for the existing files just as they are, but add\n> entries for WAL files and anything else it may have added to the\n> backup, recompute the manifest checksum, and store the resulting\n> revised manifest with the backup. That, I think, would be fairly cool,\n> but it's a significant body of additional development work, and this\n> is already quite a large patch. The patch itself has grown to about\n> 3000 lines, and has already 10 preparatory commits doing another ~1500\n> lines of refactoring to prepare for it.\n\nHaving the client calculate the checksums for the WAL and add them to\nthe manifest is one approach and could work, but there's others-\n\n- Have the WAL checksums be calculated during the base backup and kept\n somewhere, and then included in the manifest sent by the server- the\n backup_manifest is the last thing we send anyway, isn't it? 
And\n surely at the end of the backup we actually do know all of the WAL\n that's needed for the backup to be valid, because we pass that\n information to pg_basebackup to construct the necessary backup_label\n file.\n\n- Validate the WAL using its own internal checksums instead of having\n the manifest involved at all. That's not ideal since we wouldn't have\n cryptographically secure digests for the WAL, but at least we will\n have validated it and raised the chances that the backup will be able\n to actually be restored using PG a whole bunch.\n\n- With the 'checksum none' option, we aren't really validating contents\n of anything, so in that case it'd actually be alright to simply scan\n the WAL and make sure that we've at least got all of the WAL files\n needed to go from the start of the backup to the end. I don't think\n just checking that the WAL files exist is a proper solution when it\n comes to a backup where the user has asked for checksums to be\n included though. I will say that I'm really very surprised that\n pg_validatebackup wasn't already checking that we at least had the WAL\n that is needed, but I don't see any code for that.\n\n> > Not really thrilled with this (pg_basebackup certainly could figure out\n> > the checksum for those files...), but I also don't think it's a huge\n> > issue as they can be recreated by a user (unlike a WAL file..).\n> \n> Yeah, same issues, though. Here again, there are several possible\n> fixes: (1) make the server modify those files rather than letting\n> pg_basebackup do it; (2) make the client compute the manifest rather\n> than the server; (3) have the client revise the manifest. (3) makes\n> most sense to me, but I think that it would be better to return to\n> that topic at a later date. 
This is certainly not a perfect feature as\n> things stand but I believe it is good enough to provide significant\n> benefits.\n\nAs I said, I don't consider these files to be as much of an issue and\ntherefore excluding them and documenting that we do would be alright. I\ndon't feel that's an acceptable option for the WAL though.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 25 Mar 2020 09:31:06 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 9:31 AM Stephen Frost <sfrost@snowman.net> wrote:\n> I get that the default for manifest is 'no', but I don't really see how\n> that means that the lack of saying anything about checksums should mean\n> \"give me crc32c checksums\". It's really rather common that if we don't\n> specify something, it means don't do that thing- like an 'ORDER BY'\n> clause.\n\nThat's a fair argument, but I think the other relevant principle is\nthat we try to give people useful defaults for things. I think that\nchecksums are a sufficiently useful thing that having the default be\nnot to do it doesn't make sense. I had the impression that you and\nDavid were in agreement on that point, actually.\n\n> There were also comments made up-thread about how it might not be great\n> for larger (eg: 1GB files, like we tend to have quite a few of...), and\n> something about it being a 40 year old algorithm..\n\nWell, the 512MB \"limit\" for CRC-32C means only that for certain very\nspecific types of errors, detection is not guaranteed above that file\nsize. So if you have a single flipped bit, for example, and the file\nsize is greater than 512MB, then CRC-32C has only a 99.9999999767169%\nchance of detecting the error, whereas if the file size is less than\n512MB, it is 100% certain, because of the design of the algorithm. But\nnine nines is plenty, and neither SHA nor our page-level checksums\nprovide guaranteed error detection properties anyway.\n\nI'm not sure why the fact that it's a 40-year-old algorithm is\nrelevant. There are many 40-year-old algorithms that are very good.\nGenerally, if we discover that we're using bad 40-year-old algorithms,\nlike Knuth's tape sorting stuff, we eventually figure out how to\nreplace them with something else that's better. But there's no reason\nto retire an algorithm simply because it's old. I have not heard\nanyone say, for example, that we should stop using CRC-32C for XLOG\nchecksums. 
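To make that concrete, CRC-32C can be sketched bit by bit in a few lines (a deliberately slow illustration of the reflected Castagnoli polynomial; real implementations use lookup tables or the SSE4.2 instruction, and this sketch is not the server's actual code):

```python
def crc32c(data: bytes) -> int:
    # CRC-32C: reflected Castagnoli polynomial 0x82F63B78,
    # initial value and final XOR of all ones.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for this parameterization.
assert crc32c(b"123456789") == 0xE3069283

# A single flipped bit changes the checksum: guaranteed below the size
# limit discussed here, and with probability ~1 - 2^-32 above it.
clean = bytes(4096)
flipped = bytearray(clean)
flipped[1000] ^= 0x40
assert crc32c(clean) != crc32c(bytes(flipped))
```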
We continue to use it for that purpose because it (1) is\nhighly likely to detect any errors and (2) is very fast. Those are the\nsame reasons why I think it's a good fit for this case.\n\nMy guess is that if this patch is adopted as currently proposed, we\nwill eventually need to replace the cryptographic hash functions due\nto the march of time. As I'm sure you realize, the problem with hash\nfunctions that are designed to foil an adversary is that adversaries\nkeep getting smarter. So, eventually someone will probably figure out\nhow to do something nefarious with SHA-512. Some other technique that\nnobody's cracked yet will need to be adopted, and then people will\nbegin trying to crack that, and the whole thing will repeat. But I\nsuspect that we can keep using the same non-cryptographic hash\nfunction essentially forever. It does not matter that people know how\nthe algorithm works because it makes no pretensions of trying to foil\nan opponent. It is just trying to mix up the bits in such a way that a\nchange to the file is likely to cause a change in the checksum. The\nbit-mixing properties of the algorithm do not degrade with the passage\nof time.\n\n> I'm sure that sha256 takes a lot more time than crc32c, I'm certainly\n> not trying to dispute that, but what's relevant here is how much it\n> impacts the time required to run the overall backup (including sync'ing\n> it to disk, and possibly network transmission time.. if we're just\n> comparing the time to run it through memory then, sure, the sha256\n> computation time might end up being quite a bit of the time, but that's\n> not really that interesting of a test..).\n\nI think that http://postgr.es/m/38e29a1c-0d20-fc73-badd-ca05f7f07ffa@pgmasters.net\nis one of the more interesting emails on this topic. My conclusion\nfrom that email, and the ones that led up to it, was that there is a\n40-50% overhead from doing a SHA checksum, but in pgbackrest, users\ndon't see it because backups are compressed. 
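That kind of ratio is easy to reproduce. A rough stand-in sketch (zlib's plain CRC-32 here, since the Python standard library has no CRC-32C; the absolute numbers vary by machine):

```python
import hashlib
import time
import zlib

payload = b"\xa5" * (32 * 1024 * 1024)  # stand-in for a 32 MiB relation file

t0 = time.perf_counter()
crc = zlib.crc32(payload)
t1 = time.perf_counter()
digest = hashlib.sha256(payload).hexdigest()
t2 = time.perf_counter()

# The point is the ratio, not the absolute times: the CRC pass is
# typically several times faster than the SHA-256 pass.
print(f"crc32  {crc:#010x}  {t1 - t0:.4f}s")
print(f"sha256 {digest[:12]}...  {t2 - t1:.4f}s")
```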
Because the compression\nuses so much CPU time, the additional overhead from the SHA checksum\nis only a few percent more. But I don't think that it would be smart\nto slow down uncompressed backups by 40-50%. That's going to cause a\nproblem for somebody, almost for sure.\n\n> Maybe:\n>\n> \"Using a SHA hash function provides a cryptographically secure digest\n> of each file for users who wish to verify that the backup has not been\n> tampered with, while the CRC32C algorithm provides a checksum which is\n> much faster to calculate and good at catching errors due to accidental\n> changes but is not resistant to targeted modifications. Note that, to\n> be useful against an adversary who has access to the backup, the backup\n> manifest would need to be stored securely elsewhere or otherwise\n> verified to have not been modified since the backup was taken.\"\n>\n> This at least talks about things in a positive direction (SHA hash\n> functions do this, CRC32C does that) rather than in a negative tone.\n\nCool. I like it.\n\n> Anyway, my point here was really just that *pg_validatebackup* is about\n> validating backups taken with pg_basebackup. While it's possible that\n> it could be used for backups taken with other tools, I don't think\n> that's really part of its actual mandate or that we're going to actively\n> work to add such support in the future.\n\nI think you're kind of just nitpicking here, because the statement that\npg_validatebackup can validate not only a backup taken by\npg_basebackup but also a backup taken using some compatible method\nis just a tautology. But I'll remove the reference.\n\n> In particular, the sentence right above this list is:\n>\n> \"Certain files and directories are excluded from verification:\"\n>\n> but we actually do verify the manifest, that's all I'm saying here.\n>\n> Maybe rewording that a bit is what would help, say:\n>\n> \"Certain files and directories are not included in the manifest:\"\n\nWell, that'd be wrong, though. 
It's true that backup_manifest won't\nhave an entry in the manifest, and neither will WAL files, but\npostgresql.auto.conf will. We'll just skip complaining about it if the\nchecksum doesn't match or whatever. The server generates manifest\nentries for everything, and the client decides not to pay attention to\nsome of them because it knows that pg_basebackup may have made certain\nchanges that were not known to the server.\n\n> Yeah, I get that it's not easy to figure out how to validate the WAL,\n> but I stand by my opinion that it's simply not acceptable to exclude the\n> necessary WAL from verification and to claim that a backup is valid\n> when we haven't checked the WAL.\n\nI hear that, but I don't agree that having nothing is better than\nhaving this much committed. I would be fine with renaming the tool\n(pg_validatebackupmanifest? pg_validatemanifest?), or with updating\nthe documentation to be more clear about what is and is not checked,\nbut I'm not going to extend the tool to do totally new things for\nwhich we don't even have an agreed design yet. I believe in trying to\ncreate patches that do one thing and do it well, and this patch does\nthat. The fact that it doesn't do some other thing that is\nconceptually related yet different is a good thing, not a bad one.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 25 Mar 2020 12:50:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Mar 25, 2020 at 9:31 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > I get that the default for manifest is 'no', but I don't really see how\n> > that means that the lack of saying anything about checksums should mean\n> > \"give me crc32c checksums\". It's really rather common that if we don't\n> > specify something, it means don't do that thing- like an 'ORDER BY'\n> > clause.\n> \n> That's a fair argument, but I think the other relevant principle is\n> that we try to give people useful defaults for things. I think that\n> checksums are a sufficiently useful thing that having the default be\n> not to do it doesn't make sense. I had the impression that you and\n> David were in agreement on that point, actually.\n\nI agree with wanting to have useful defaults and that checksums should\nbe included by default, and I'm alright even with letting people pick\nwhat algorithms they'd like to have too. The construct here is made odd\nbecause we've got this idea that \"no checksum\" is an option, which is\nactually something that I don't particularly like, but that's what's\nmaking this particular syntax weird. I don't suppose you'd be open to\nthe idea of just dropping that though..? There wouldn't be any issue\nwith this syntax if we just always had checksums included when a\nmanifest is requested. :)\n\nSomehow, I don't think I'm going to win that argument.\n\n> > There were also comments made up-thread about how it might not be great\n> > for larger (eg: 1GB files, like we tend to have quite a few of...), and\n> > something about it being a 40 year old algorithm..\n> \n> Well, the 512MB \"limit\" for CRC-32C means only that for certain very\n> specific types of errors, detection is not guaranteed above that file\n> size. 
So if you have a single flipped bit, for example, and the file\n> size is greater than 512MB, then CRC-32C has only a 99.9999999767169%\n> chance of detecting the error, whereas if the file size is less than\n> 512MB, it is 100% certain, because of the design of the algorithm. But\n> nine nines is plenty, and neither SHA nor our page-level checksums\n> provide guaranteed error detection properties anyway.\n\nRight, so we know that CRC-32C has an upper-bound of 512MB to be useful\nfor exactly what it's designed to be useful for, but we also know that\nwe're going to have larger files- at least 1GB ones, and quite possibly\nlarger, so why are we choosing this?\n\nAt the least, wouldn't it make sense to consider a larger CRC, one whose\nlimit is above the size of commonly expected files, if we're going to\nuse a CRC?\n\n> I'm not sure why the fact that it's a 40-year-old algorithm is\n> relevant. There are many 40-year-old algorithms that are very good.\n\nSure there are, but there probably wasn't a lot of thought about\nGB-sized files, and this doesn't really seem to be the direction people\nare going in for larger objects. s3, as an example, uses sha256.\nGoogle, it seems, suggests folks use \"HighwayHash\" (from their crc32c\ngithub repo- https://github.com/google/crc32c). Most CRC uses seem to\nbe for much smaller data sets.\n\n> My guess is that if this patch is adopted as currently proposed, we\n> will eventually need to replace the cryptographic hash functions due\n> to the march of time. As I'm sure you realize, the problem with hash\n> functions that are designed to foil an adversary is that adversaries\n> keep getting smarter. So, eventually someone will probably figure out\n> how to do something nefarious with SHA-512. Some other technique that\n> nobody's cracked yet will need to be adopted, and then people will\n> begin trying to crack that, and the whole thing will repeat. 
But I\n> suspect that we can keep using the same non-cryptographic hash\n> function essentially forever. It does not matter that people know how\n> the algorithm works because it makes no pretensions of trying to foil\n> an opponent. It is just trying to mix up the bits in such a way that a\n> change to the file is likely to cause a change in the checksum. The\n> bit-mixing properties of the algorithm do not degrade with the passage\n> of time.\n\nSure, there's a good chance we'll need newer algorithms in the future, I\ndon't doubt that. On the other hand, if crc32c, or CRC whatever, was\nthe perfect answer and no one will ever need something better, then\nwhat's with folks like Google suggesting something else..?\n\n> > I'm sure that sha256 takes a lot more time than crc32c, I'm certainly\n> > not trying to dispute that, but what's relevant here is how much it\n> > impacts the time required to run the overall backup (including sync'ing\n> > it to disk, and possibly network transmission time.. if we're just\n> > comparing the time to run it through memory then, sure, the sha256\n> > computation time might end up being quite a bit of the time, but that's\n> > not really that interesting of a test..).\n> \n> I think that http://postgr.es/m/38e29a1c-0d20-fc73-badd-ca05f7f07ffa@pgmasters.net\n> is one of the more interesting emails on this topic. My conclusion\n> from that email, and the ones that led up to it, was that there is a\n> 40-50% overhead from doing a SHA checksum, but in pgbackrest, users\n> don't see it because backups are compressed. Because the compression\n> uses so much CPU time, the additional overhead from the SHA checksum\n> is only a few percent more. But I don't think that it would be smart\n> to slow down uncompressed backups by 40-50%. That's going to cause a\n> problem for somebody, almost for sure.\n\nI like that email on the topic also, as it points out again (as I tried\nto do earlier also..) 
that it depends on what we're actually including\nin the test- and it seems, again, that those tests didn't consider the\ntime to actually write the data somewhere, either network or disk.\n\nAs for folks who are that close to the edge on their backup timing that\nthey can't have it slow down- chances are pretty darn good that they're\nnot far from ending up needing to find a better solution than\npg_basebackup anyway. Or they don't need to generate a manifest (or, I\nsuppose, they could have one but not have checksums..).\n\n> > In particular, the sentence right above this list is:\n> >\n> > \"Certain files and directories are excluded from verification:\"\n> >\n> > but we actually do verify the manifest, that's all I'm saying here.\n> >\n> > Maybe rewording that a bit is what would help, say:\n> >\n> > \"Certain files and directories are not included in the manifest:\"\n> \n> Well, that'd be wrong, though. It's true that backup_manifest won't\n> have an entry in the manifest, and neither will WAL files, but\n> postgresql.auto.conf will. We'll just skip complaining about it if the\n> checksum doesn't match or whatever. The server generates manifest\n> entries for everything, and the client decides not to pay attention to\n> some of them because it knows that pg_basebackup may have made certain\n> changes that were not known to the server.\n\nOk, but it's also wrong to say that the backup_label is excluded from\nverification.\n\n> > Yeah, I get that it's not easy to figure out how to validate the WAL,\n> > but I stand by my opinion that it's simply not acceptable to exclude the\n> > necessary WAL from verification so and to claim that a backup is valid\n> > when we haven't checked the WAL.\n> \n> I hear that, but I don't agree that having nothing is better than\n> having this much committed. I would be fine with renaming the tool\n> (pg_validatebackupmanifest? 
pg_validatemanifest?), or with updating\n> the documentation to be more clear about what is and is not checked,\n> but I'm not going to extend the tool to do totally new things for\n> which we don't even have an agreed design yet. I believe in trying to\n> create patches that do one thing and do it well, and this patch does\n> that. The fact that it doesn't do some other thing that is\n> conceptually related yet different is a good thing, not a bad one.\n\nI fail to see the usefulness of a tool that doesn't actually verify that\nthe backup is able to be restored from.\n\nEven pg_basebackup (in both fetch and stream modes...) checks that we at\nleast got all the WAL that's needed for the backup from the server\nbefore considering the backup to be valid and telling the user that\nthere was a successful backup. With what you're proposing here, we\ncould have someone do a pg_basebackup, get back an ERROR saying the\nbackup wasn't valid, and then run pg_validatebackup and be told that the\nbackup is valid. I don't get how that's sensible.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 25 Mar 2020 16:54:33 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 4:54 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > That's a fair argument, but I think the other relevant principle is\n> > that we try to give people useful defaults for things. I think that\n> > checksums are a sufficiently useful thing that having the default be\n> > not to do it doesn't make sense. I had the impression that you and\n> > David were in agreement on that point, actually.\n>\n> I agree with wanting to have useful defaults and that checksums should\n> be included by default, and I'm alright even with letting people pick\n> what algorithms they'd like to have too. The construct here is made odd\n> because we've got this idea that \"no checksum\" is an option, which is\n> actually something that I don't particularly like, but that's what's\n> making this particular syntax weird. I don't suppose you'd be open to\n> the idea of just dropping that though..? There wouldn't be any issue\n> with this syntax if we just always had checksums included when a\n> manifest is requested. :)\n>\n> Somehow, I don't think I'm going to win that argument.\n\nWell, it's not a crazy idea. So, at some point, I had the idea that\nyou were always going to get a manifest, and therefore you should at\nleast have the option of not checksumming to avoid the\noverhead. But, as things stand now, you can suppress the manifest\naltogether, so that you can still take a backup even if you've got no\ndisk space to spool the manifest on the master. So, if you really want\nno overhead from manifests, just don't have a manifest. And if you are\nOK with some overhead, why not at least have a CRC-32C checksum, which\nis, after all, pretty cheap?\n\nNow, on the other hand, I don't have any strong evidence that the\nmanifest-without-checksums mode is useless. You can still use it to\nverify that you have the correct files and that those files have the\nexpected sizes. 
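A sketch of that name-and-size check (a hypothetical helper for illustration, not the actual pg_validatebackup code; note that it never opens a file):

```python
import os
import tempfile

def verify_sizes(manifest: dict, backup_dir: str) -> list:
    """Report manifest entries that are missing or the wrong size on disk."""
    problems = []
    for name, expected in manifest.items():
        path = os.path.join(backup_dir, name)
        try:
            actual = os.stat(path).st_size  # metadata only, nothing is read
        except FileNotFoundError:
            problems.append(f"{name}: missing")
            continue
        if actual != expected:
            problems.append(f"{name}: size {actual}, expected {expected}")
    return problems

# Tiny demonstration against a throwaway directory.
backup_dir = tempfile.mkdtemp()
with open(os.path.join(backup_dir, "base_file"), "wb") as f:
    f.write(b"x" * 100)

assert verify_sizes({"base_file": 100}, backup_dir) == []
assert verify_sizes({"base_file": 99}, backup_dir) == ["base_file: size 100, expected 99"]
assert verify_sizes({"missing_file": 1}, backup_dir) == ["missing_file: missing"]
```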
And, verifying those things is very cheap, because you\nonly need to stat() each file, not open and read them all. True, you\ncan do those things by using pg_validatebackup -s. But, you'd still\nincur the (admittedly fairly low) overhead of computing checksums that\nyou don't intend to use.\n\nThis is where I feel like I'm trying to make decisions in a vacuum. If\nwe had a few more people weighing in on the thread on this point, I'd\nbe happy to go with whatever the consensus was. If most people think\nhaving both --no-manifest (suppressing the manifest completely) and\n--manifest-checksums=none (suppressing only the checksums) is useless\nand confusing, then sure, let's rip the latter one out. If most people\nlike the flexibility, let's keep it: it's already implemented and\ntested. But I hate to base the decision on what one or two people\nthink.\n\n> > Well, the 512MB \"limit\" for CRC-32C means only that for certain very\n> > specific types of errors, detection is not guaranteed above that file\n> > size. So if you have a single flipped bit, for example, and the file\n> > size is greater than 512MB, then CRC-32C has only a 99.9999999767169%\n> > chance of detecting the error, whereas if the file size is less than\n> > 512MB, it is 100% certain, because of the design of the algorithm. But\n> > nine nines is plenty, and neither SHA nor our page-level checksums\n> > provide guaranteed error detection properties anyway.\n>\n> Right, so we know that CRC-32C has an upper-bound of 512MB to be useful\n> for exactly what it's designed to be useful for, but we also know that\n> we're going to have larger files- at least 1GB ones, and quite possibly\n> larger, so why are we choosing this?\n>\n> At the least, wouldn't it make sense to consider a larger CRC, one whose\n> limit is above the size of commonly expected files, if we're going to\n> use a CRC?\n\nI mean, you're just repeating the same argument here, and it's just\nnot valid. 
Regardless of the file size, the chances of a false\nchecksum match are literally less than one in a billion. There is\nevery reason to believe that users will be happy with a low-overhead\nmethod that has a 99.9999999+% chance of detecting corrupt files. I do\nagree that a 64-bit CRC would probably be not much more expensive and\nimprove the probability of detecting errors even further, but I wanted\nto restrict this patch to using infrastructure we already have. The\nchoices there are the various SHA functions (so I supported those),\nMD5 (which I deliberately omitted, for reasons I hope you'll be the\nfirst to agree with), CRC-32C (which is fast), a couple of other\nCRC-32 variants (which I omitted because they seemed redundant and one\nof them only ever existed in PostgreSQL because of a coding mistake),\nand the hacked-up version of FNV that we use for page-level checksums\n(which is only 16 bits and seems to have no advantages for this\npurpose).\n\n> > I'm not sure why the fact that it's a 40-year-old algorithm is\n> > relevant. There are many 40-year-old algorithms that are very good.\n>\n> Sure there are, but there probably wasn't a lot of thought about\n> GB-sized files, and this doesn't really seem to be the direction people\n> are going in for larger objects. s3, as an example, uses sha256.\n> Google, it seems, suggests folks use \"HighwayHash\" (from their crc32c\n> github repo- https://github.com/google/crc32c). 
Most CRC uses seem to\n> be for much smaller data sets.\n\nAgain, I really want to stick with infrastructure we already have.\nTrying to find a hash function that will please everybody is a hole\nwith no bottom, or more to the point, a bikeshed in need of painting.\nThere are TONS of great hash functions out there on the Internet, and\nas previous discussions of pgsql-hackers will attest, as soon as you\ngo down that road, somebody will say \"well, what about xxhash\" or\nwhatever, and then you spend the rest of your life trying to figure\nout what hash function we could try to commit that is fast and secure\nand doesn't have copyright or patent problems. There have been\nmultiple efforts to introduce such hash functions in the past, and I\nthink basically all of those have crashed into a brick wall.\n\nI don't think that's because introducing new hash functions is a bad\nidea. I think that there are various reasons why it might be a good\nidea. For instance, highwayhash purports to be a cryptographic hash\nfunction that is fast enough to replace non-cryptographic hash\nfunctions. It's easy to see why someone might want that, here. For\nexample, it would be entirely reasonable to copy the backup manifest\nonto a USB key and store it in a vault. Later, if you get the USB key\nback out of the vault and validate it against the backup, you pretty\nmuch know that none of the data files have been tampered with,\nprovided that you used a cryptographic hash. So, SHA is a good option\nfor people who have a USB key and a vault, and a faster cryptographic\nhash might be even better. I don't have any desire to block such proposals,\nand I would be thrilled if this work inspires other people to add such\noptions. However, I also don't want this patch to get blocked by an\ninterminable argument about which hash functions we ought to use. 
The\nones we have in core now are good enough for a start, and more can be\nadded later.\n\n> Sure, there's a good chance we'll need newer algorithms in the future, I\n> don't doubt that. On the other hand, if crc32c, or CRC whatever, was\n> the perfect answer and no one will ever need something better, then\n> what's with folks like Google suggesting something else..?\n\nI have never said that CRC was the perfect answer, and the reason why\nGoogle is suggesting something different is because they wanted a fast\nhash (not SHA) that still has cryptographic properties. What I have\nsaid is that using CRC-32C by default means that there is very little\ndownside as compared with current releases. Backups will not get\nslower, and error detection will get better. If you pick any other\ndefault from the menu of options currently available, then either\nbackups get noticeably slower, or we get less error detection\ncapability than that option gives us.\n\n> As for folks who are that close to the edge on their backup timing that\n> they can't have it slow down- chances are pretty darn good that they're\n> not far from ending up needing to find a better solution than\n> pg_basebackup anyway. Or they don't need to generate a manifest (or, I\n> suppose, they could have one but not have checksums..).\n\n40-50% is a lot more than \"if you were on the edge.\"\n\n> > Well, that'd be wrong, though. It's true that backup_manifest won't\n> > have an entry in the manifest, and neither will WAL files, but\n> > postgresql.auto.conf will. We'll just skip complaining about it if the\n> > checksum doesn't match or whatever. 
The server generates manifest\n> > entries for everything, and the client decides not to pay attention to\n> > some of them because it knows that pg_basebackup may have made certain\n> > changes that were not known to the server.\n>\n> Ok, but it's also wrong to say that the backup_label is excluded from\n> verification.\n\nThe docs don't say that backup_label is excluded from verification.\nThey do say that backup_manifest is excluded from verification\n*against the manifest*, because it is. I'm not sure if you're honestly\nconfused here or if we're just devolving into arguing for the sake of\nargument, but right now the code looks like this:\n\n simple_string_list_append(&context.ignore_list, \"backup_manifest\");\n simple_string_list_append(&context.ignore_list, \"pg_wal\");\n simple_string_list_append(&context.ignore_list, \"postgresql.auto.conf\");\n simple_string_list_append(&context.ignore_list, \"recovery.signal\");\n simple_string_list_append(&context.ignore_list, \"standby.signal\");\n\nNotice that this is the same list of files mentioned in the\ndocumentation. Now let's suppose we remove the first of those lines of\ncode, so that backup_manifest is not in the exclude list by default.\nNow let's try to validate a backup:\n\n[rhaas pgsql]$ src/bin/pg_validatebackup/pg_validatebackup ~/pgslave\npg_validatebackup: error: \"backup_manifest\" is present on disk but not\nin the manifest\n\nOops. If you read that error carefully, you can see that the complaint\nis 100% valid. backup_manifest is indeed present on disk, but not in\nthe manifest. However, because this situation is expected and known\nnot to be a problem, the right thing to do is suppress the error. That\nis why it is in the ignore_list by default. The documentation is\nattempting to explain this. If it's unclear, we should try to make it\nbetter, but it is absolutely NOT saying that there is no internal\nvalidation of the backup_manifest. 
In fact, the previous paragraph\ntries to explain that:\n\n+ <application>pg_validatebackup</application> reads the manifest file of a\n+ backup, verifies the manifest against its own internal checksum, and then\n\nIt is, however, saying, and *entirely correctly*, that\npg_validatebackup will not check the backup_manifest file against the\nbackup_manifest. If it did, it would find that it's not there. It\nwould then emit an error message like the one above even though\nthere's no problem with the backup.\n\n> I fail to see the usefulness of a tool that doesn't actually verify that\n> the backup is able to be restored from.\n>\n> Even pg_basebackup (in both fetch and stream modes...) checks that we at\n> least got all the WAL that's needed for the backup from the server\n> before considering the backup to be valid and telling the user that\n> there was a successful backup. With what you're proposing here, we\n> could have someone do a pg_basebackup, get back an ERROR saying the\n> backup wasn't valid, and then run pg_validatebackup and be told that the\n> backup is valid. I don't get how that's sensible.\n\nI'm sorry that you can't see how that's sensible, but it doesn't mean\nthat it isn't sensible. It is totally unrealistic to expect that any\nbackup verification tool can verify that you won't get an error when\ntrying to use the backup. That would require that the\nvalidation tool do everything that PostgreSQL will try to do\nwhen the backup is used, including running recovery and updating the\ndata files. Anything less than that creates a real possibility that\nthe backup will verify good but fail when used. This tool has a much\nnarrower purpose, which is to try to verify that we (still) have the\nfiles the server sent as part of the backup and that, to the best of\nour ability to detect such things, they have not been modified. As you\nknow, or should know, the WAL files are not sent as part of the\nbackup, and so are not verified. 
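The "verifies the manifest against its own internal checksum" behavior quoted above can be illustrated with a toy scheme, in which the last line of the manifest is a digest of every byte before it and the validator recomputes that digest before trusting any entry (the field name and the choice of SHA-256 here are illustrative, not the actual manifest format):

```python
import hashlib

def seal(body: str) -> str:
    """Append a checksum line covering every byte that precedes it."""
    digest = hashlib.sha256(body.encode()).hexdigest()
    return body + "Manifest-Checksum: " + digest + "\n"

def check(manifest: str) -> bool:
    """Recompute the trailing checksum before trusting any entry."""
    body, sep, stored = manifest.rpartition("Manifest-Checksum: ")
    return sep != "" and hashlib.sha256(body.encode()).hexdigest() == stored.strip()

sealed = seal("backup_label 226\npostgresql.auto.conf 88\n")
assert check(sealed)
assert not check(sealed.replace("226", "227"))  # tampering with the body is caught
```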
Other things that would also be\nuseful to check are also not verified. It would be fantastic to have\nmore verification tools in the future, but it is difficult to see why\nanyone would bother trying if an attempt to get the first one\ncommitted gets blocked because it does not yet do everything. Very few\npatches try to do everything, and those that do usually get blocked\nbecause, by trying to do too much, they get some of it badly wrong.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 26 Mar 2020 11:37:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Mar 25, 2020 at 4:54 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > That's a fair argument, but I think the other relevant principle is\n> > > that we try to give people useful defaults for things. I think that\n> > > checksums are a sufficiently useful thing that having the default be\n> > > not to do it doesn't make sense. I had the impression that you and\n> > > David were in agreement on that point, actually.\n> >\n> > I agree with wanting to have useful defaults and that checksums should\n> > be included by default, and I'm alright even with letting people pick\n> > what algorithms they'd like to have too. The construct here is made odd\n> > because we've got this idea that \"no checksum\" is an option, which is\n> > actually something that I don't particularly like, but that's what's\n> > making this particular syntax weird. I don't suppose you'd be open to\n> > the idea of just dropping that though..? There wouldn't be any issue\n> > with this syntax if we just always had checksums included when a\n> > manifest is requested. :)\n> >\n> > Somehow, I don't think I'm going to win that argument.\n> \n> Well, it's not a crazy idea. So, at some point, I had the idea that\n> you were always going to get a manifest, and therefore you should at\n> least have the option of not checksumming to avoid the\n> overhead. But, as things stand now, you can suppress the manifest\n> altogether, so that you can still take a backup even if you've got no\n> disk space to spool the manifest on the master. So, if you really want\n> no overhead from manifests, just don't have a manifest. And if you are\n> OK with some overhead, why not at least have a CRC-32C checksum, which\n> is, after all, pretty cheap?\n> \n> Now, on the other hand, I don't have any strong evidence that the\n> manifest-without-checksums mode is useless. 
You can still use it to\n> verify that you have the correct files and that those files have the\n> expected sizes. And, verifying those things is very cheap, because you\n> only need to stat() each file, not open and read them all. True, you\n> can do those things by using pg_validatebackup -s. But, you'd still\n> incur the (admittedly fairly low) overhead of computing checksums that\n> you don't intend to use.\n> \n> This is where I feel like I'm trying to make decisions in a vacuum. If\n> we had a few more people weighing in on the thread on this point, I'd\n> be happy to go with whatever the consensus was. If most people think\n> having both --no-manifest (suppressing the manifest completely) and\n> --manifest-checksums=none (suppressing only the checksums) is useless\n> and confusing, then sure, let's rip the latter one out. If most people\n> like the flexibility, let's keep it: it's already implemented and\n> tested. But I hate to base the decision on what one or two people\n> think.\n\nI'm frustrated at the lack of involvement from others also.\n\nJust to be clear- I'm not completely against having a 'manifest but no\nchecksum' option, but if that's what we're going to have then it seems\nlike the syntax should be such that if you don't specify checksums then\nyou don't get checksums and \"MANIFEST_CHECKSUM none\" shouldn't be a\nthing.\n\nAll that said, as I said up-thread, I appreciate that we aren't\ndesigning SQL here and that this is pretty special syntax to begin with,\nso if you ended up committing it the way you have it now, so be it, I\nwouldn't be asking for it to be reverted over this. 
It's a bit awkward\nand kind of a thorn, but it's not entirely unreasonable, and we'd\nprobably end up there anyway if we started out without a 'none' option\nand someone did come up with a good argument and a patch to add such an\noption in the future.\n\n> > > Well, the 512MB \"limit\" for CRC-32C means only that for certain very\n> > > specific types of errors, detection is not guaranteed above that file\n> > > size. So if you have a single flipped bit, for example, and the file\n> > > size is greater than 512MB, then CRC-32C has only a 99.9999999767169%\n> > > chance of detecting the error, whereas if the file size is less than\n> > > 512MB, it is 100% certain, because of the design of the algorithm. But\n> > > nine nines is plenty, and neither SHA nor our page-level checksums\n> > > provide guaranteed error detection properties anyway.\n> >\n> > Right, so we know that CRC-32C has an upper-bound of 512MB to be useful\n> > for exactly what it's designed to be useful for, but we also know that\n> > we're going to have larger files- at least 1GB ones, and quite possibly\n> > larger, so why are we choosing this?\n> >\n> > At the least, wouldn't it make sense to consider a larger CRC, one whose\n> > limit is above the size of commonly expected files, if we're going to\n> > use a CRC?\n> \n> I mean, you're just repeating the same argument here, and it's just\n> not valid. Regardless of the file size, the chances of a false\n> checksum match are literally less than one in a billion. There is\n> every reason to believe that users will be happy with a low-overhead\n> method that has a 99.9999999+% chance of detecting corrupt files. I do\n> agree that a 64-bit CRC would probably be not much more expensive and\n> improve the probability of detecting errors even further, but I wanted\n> to restrict this patch to using infrastructure we already have. 
The\n> choices there are the various SHA functions (so I supported those),\n> MD5 (which I deliberately omitted, for reasons I hope you'll be the\n> first to agree with), CRC-32C (which is fast), a couple of other\n> CRC-32 variants (which I omitted because they seemed redundant and one\n> of them only ever existed in PostgreSQL because of a coding mistake),\n> and the hacked-up version of FNV that we use for page-level checksums\n> (which is only 16 bits and seems to have no advantages for this\n> purpose).\n\nThe argument that \"well, we happened to already have it, even though we\nused it for much smaller data sets, which are well within the\n100%-single-bit-error detection limit\" certainly doesn't make me any\nmore supportive of this. Choosing the right algorithm to use maybe\nshouldn't be based on the age of that algorithm, but it also certainly\nshouldn't be \"just because we already have it\" when we're using it for a\nvery different use-case.\n\nI'm guessing folks have already seen it, but I thought this was an\ninteresting run-down of actual collisions based on various checksum\nlengths using one data set (though it's not clear exactly how big it is,\nfrom what I can see)-\n\nhttp://www.backplane.com/matt/crc64.html\n\nI do agree with excluding things like md5 and others that aren't good\noptions. I wasn't saying we should necessarily exclude crc32c either..\nbut rather saying that it shouldn't be the default.\n\nHere's another way to look at it- where do we use crc32c today, and how\nmuch data might we possibly be covering with that crc? Why was crc32c\npicked for that purpose? If the individual who decided to pick crc32c\nfor that case was contemplating a checksum for up-to-1GB files, would\nthey have picked crc32c? Seems unlikely to me.\n\n> > > I'm not sure why the fact that it's a 40-year-old algorithm is\n> > > relevant. 
There are many 40-year-old algorithms that are very good.\n> >\n> > Sure there are, but there probably wasn't a lot of thought about\n> > GB-sized files, and this doesn't really seem to be the direction people\n> > are going in for larger objects. s3, as an example, uses sha256.\n> > Google, it seems, suggests folks use \"HighwayHash\" (from their crc32c\n> > github repo- https://github.com/google/crc32c). Most CRC uses seem to\n> > be for much smaller data sets.\n> \n> Again, I really want to stick with infrastructure we already have.\n\nI don't agree with that as a sensible justification for picking it for\nthis case, because it's clearly not the same use-case.\n\n> Trying to find a hash function that will please everybody is a hole\n> with no bottom, or more to the point, a bikeshed in need of painting.\n> There are TONS of great hash functions out there on the Internet, and\n> as previous discussions of pgsql-hackers will attest, as soon as you\n> go down that road, somebody will say \"well, what about xxhash\" or\n> whatever, and then you spend the rest of your life trying to figure\n> out what hash function we could try to commit that is fast and secure\n> and doesn't have copyright or patent problems. There have been\n> multiple efforts to introduce such hash functions in the past, and I\n> think basically all of those have crashed into a brick wall.\n> \n> I don't think that's because introducing new hash functions is a bad\n> idea. I think that there are various reasons why it might be a good\n> idea. For instance, highwayhash purports to be a cryptographic hash\n> function that is fast enough to replace non-cryptographic hash\n> functions. It's easy to see why someone might want that, here. For\n> example, it would be entirely reasonable to copy the backup manifest\n> onto a USB key and store it in a vault. 
Later, if you get the USB key\n> back out of the vault and validate it against the backup, you pretty\n> much know that none of the data files have been tampered with,\n> provided that you used a cryptographic hash. So, SHA is a good option\n> for people who have a USB key and a vault, and a faster cryptographic hash\n> might be even better. I don't have any desire to block such proposals,\n> and I would be thrilled if this work inspires other people to add such\n> options. However, I also don't want this patch to get blocked by an\n> interminable argument about which hash functions we ought to use. The\n> ones we have in core now are good enough for a start, and more can be\n> added later.\n\nI'm not actually arguing about which hash functions we should support,\nbut rather what the default is and if crc32c, specifically, is actually\na reasonable choice. Just because it's fast and we already had an\nimplementation of it doesn't justify its use as the default. Given that\nit doesn't actually provide the check that is generally expected of\nCRC checksums (100% detection of single-bit errors) when the file size\ngets over 512MB makes me wonder if we should have it at all, yes, but it\ndefinitely makes me think it shouldn't be our default.\n\nFolks look to PG as being pretty good at figuring things out and doing\nthe thing that makes sense to minimize risk of data loss or corruption.\nI can understand and agree with the desire to have a faster alternative\nto sha256 for those who don't need a cryptographically safe hash, but if\nwe're going to provide that option, it should be the right answer and\nit's pretty clear, at least to me, that crc32c isn't a good choice for\ngigabyte-size files.\n\n> > Sure, there's a good chance we'll need newer algorithms in the future, I\n> > don't doubt that. 
On the other hand, if crc32c, or CRC whatever, was\n> > the perfect answer and no one will ever need something better, then\n> > what's with folks like Google suggesting something else..?\n> \n> I have never said that CRC was the perfect answer, and the reason why\n> Google is suggesting something different is because they wanted a fast\n> hash (not SHA) that still has cryptographic properties. What I have\n> said is that using CRC-32C by default means that there is very little\n> downside as compared with current releases. Backups will not get\n> slower, and error detection will get better. If you pick any other\n> default from the menu of options currently available, then either\n> backups get noticeably slower, or we get less error detection\n> capability than that option gives us.\n\nI don't agree with limiting our view to only those algorithms that we've\nalready got implemented in PG.\n\n> > As for folks who are that close to the edge on their backup timing that\n> > they can't have it slow down- chances are pretty darn good that they're\n> > not far from ending up needing to find a better solution than\n> > pg_basebackup anyway. Or they don't need to generate a manifest (or, I\n> > suppose, they could have one but not have checksums..).\n> \n> 40-50% is a lot more than \"if you were on the edge.\"\n\nWe can agree to disagree on this, it's not particularly relevant in the\nend.\n\n> > > Well, that'd be wrong, though. It's true that backup_manifest won't\n> > > have an entry in the manifest, and neither will WAL files, but\n> > > postgresql.auto.conf will. We'll just skip complaining about it if the\n> > > checksum doesn't match or whatever. 
The server generates manifest\n> > > entries for everything, and the client decides not to pay attention to\n> > > some of them because it knows that pg_basebackup may have made certain\n> > > changes that were not known to the server.\n> >\n> > Ok, but it's also wrong to say that the backup_label is excluded from\n> > verification.\n> \n> The docs don't say that backup_label is excluded from verification.\n> They do say that backup_manifest is excluded from verification\n> *against the manifest*, because it is. I'm not sure if you're honestly\n> confused here or if we're just devolving into arguing for the sake of\n> argument, but right now the code looks like this:\n\nThat you're bringing up code here is really just not sensible- we're\ntalking about the documentation, not about the code here. I do\nunderstand what the code is doing and I don't have any complaint about\nthe code.\n\n> Oops. If you read that error carefully, you can see that the complaint\n> is 100% valid. backup_manifest is indeed present on disk, but not in\n> the manifest. However, because this situation is expected and known\n> not to be a problem, the right thing to do is suppress the error. That\n> is why it is in the ignore_list by default. The documentation is\n> attempting to explain this. If it's unclear, we should try to make it\n> better, but it is absolutely NOT saying that there is no internal\n> validation of the backup_manifest. In fact, the previous paragraph\n> tries to explain that:\n\nYes, I think the documentation is unclear, as I said before, because it\npurports to list things that aren't being validated and then includes\nbackup_manifest in that list, which doesn't make sense. 
The sentence in\nquestion does *not* say \"Certain files and directories are excluded from\nthe manifest\" (which is wording that I actually proposed up-thread, to\ntry to address this...), it says, from the patch:\n\n\"Certain files and directories are excluded from verification:\"\n\nExcluded from verification. Then lists backup_manifest. Even though,\nearlier in that same paragraph it says that the manifest is verified\nagainst its own checksum.\n\n> + <application>pg_validatebackup</application> reads the manifest file of a\n> + backup, verifies the manifest against its own internal checksum, and then\n> \n> It is, however, saying, and *entirely correctly*, that\n> pg_validatebackup will not check the backup_manifest file against the\n> backup_manifest. If it did, it would find that it's not there. It\n> would then emit an error message like the one above even though\n> there's no problem with the backup.\n\nIt's saying, removing the listing aspect, exactly that \"backup_label is\nexcluded from verification\". That's what I am taking issue with. I've\nmade multiple attempts to suggest other language to avoid saying that\nbecause it's clearly wrong- the manifest is verified.\n\n> > I fail to see the usefulness of a tool that doesn't actually verify that\n> > the backup is able to be restored from.\n> >\n> > Even pg_basebackup (in both fetch and stream modes...) checks that we at\n> > least got all the WAL that's needed for the backup from the server\n> > before considering the backup to be valid and telling the user that\n> > there was a successful backup. With what you're proposing here, we\n> > could have someone do a pg_basebackup, get back an ERROR saying the\n> > backup wasn't valid, and then run pg_validatebackup and be told that the\n> > backup is valid. I don't get how that's sensible.\n> \n> I'm sorry that you can't see how that's sensible, but it doesn't mean\n> that it isn't sensible. 
It is totally unrealistic to expect that any\n> backup verification tool can verify that you won't get an error when\n> trying to use the backup. That would require that the\n> validation tool try to do everything that PostgreSQL will try to do\n> when the backup is used, including running recovery and updating the\n> data files. Anything less than that creates a real possibility that\n> the backup will verify good but fail when used. This tool has a much\n> narrower purpose, which is to try to verify that we (still) have the\n> files the server sent as part of the backup and that, to the best of\n> our ability to detect such things, they have not been modified. As you\n> know, or should know, the WAL files are not sent as part of the\n> backup, and so are not verified. Other things that would also be\n> useful to check are also not verified. It would be fantastic to have\n> more verification tools in the future, but it is difficult to see why\n> anyone would bother trying if an attempt to get the first one\n> committed gets blocked because it does not yet do everything. Very few\n> patches try to do everything, and those that do usually get blocked\n> because, by trying to do too much, they get some of it badly wrong.\n\nI'm not talking about making sure that no error ever happens when doing\na restore of a particular backup. You're arguing against something that\nI have not advocated for and which I don't advocate for.\n\nI'm saying that the existing tool that takes the backup has a *really*\n*important* verification check that this proposed \"validate backup\" tool\ndoesn't have, and that isn't sensible. It leads to situations where the\nbackup tool itself, pg_basebackup, can fail or be killed before it's\nactually completed, and the \"validate backup\" tool would say that the\nbackup is perfectly fine. 
That is not sensible.\n\nThat there might be other reasons why a backup can't be restored isn't\nrelevant and I'm not asking for a tool that is perfect and does some\nkind of proof that the backup is able to be restored.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 26 Mar 2020 12:34:52 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "\n\n> On Mar 26, 2020, at 9:34 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I'm not actually argueing about which hash functions we should support,\n> but rather what the default is and if crc32c, specifically, is actually\n> a reasonable choice. Just because it's fast and we already had an\n> implementation of it doesn't justify its use as the default. Given that\n> it doesn't actually provide the check that is generally expected of\n> CRC checksums (100% detection of single-bit errors) when the file size\n> gets over 512MB makes me wonder if we should have it at all, yes, but it\n> definitely makes me think it shouldn't be our default.\n\nI don't understand your focus on the single-bit error issue. If you are sending your backup across the wire, single bit errors during transmission should already be detected as part of the networking protocol. The real issue has to be detection of the kinds of errors or modifications that are most likely to happen in practice. Which are those? People manually mucking with the files? Bugs in backup scripts? Corruption on the storage device? Truncated files? The more bits in the checksum (assuming a well designed checksum algorithm), the more likely we are to detect accidental modification, so it is no surprise if a 64-bit crc does better than 32-bit crc. But that logic can be taken arbitrarily far. I don't see the connection between, on the one hand, an analysis of single-bit error detection against file size, and on the other hand, the verification of backups.\n\nFrom a support perspective, I think the much more important issue is making certain that checksums are turned on. A one in a billion chance of missing an error seems pretty acceptable compared to the, let's say, one in two chance that your customer didn't use checksums. Why are we even allowing this to be turned off? 
Is there a usage case compelling that option?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 26 Mar 2020 10:40:55 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 12:34 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I do agree with excluding things like md5 and others that aren't good\n> options. I wasn't saying we should necessarily exclude crc32c either..\n> but rather saying that it shouldn't be the default.\n>\n> Here's another way to look at it- where do we use crc32c today, and how\n> much data might we possibly be covering with that crc?\n\nWAL record size is a 32-bit unsigned integer, so in theory, up to 4GB\nminus 1 byte. In practice, most of them are not more than a few\nhundred bytes, the amount we might possibly be covering is a lot more.\n\n> Why was crc32c\n> picked for that purpose?\n\nBecause it was discovered that 64-bit CRC was too slow, per commit\n21fda22ec46deb7734f793ef4d7fa6c226b4c78e.\n\n> If the individual who decided to pick crc32c\n> for that case was contemplating a checksum for up-to-1GB files, would\n> they have picked crc32c? Seems unlikely to me.\n\nIt's hard to be sure what someone who isn't us would have done in some\nsituation that they didn't face, but we do have the discussion thread:\n\nhttps://www.postgresql.org/message-id/flat/9291.1117593389%40sss.pgh.pa.us#c4e413bbf3d7fbeced7786da1c3aca9c\n\nThe question of how much data is protected by the CRC was discussed,\nmostly in the first few messages, in general terms, but it doesn't\nseem to have covered the question very thoroughly. I'm sure we could\neach draw things from that discussion that support our view of the\nsituation, but I'm not sure it would be very productive.\n\nWhat confuses to me is that you seem to have a view of the upsides and\ndownsides of these various algorithms that seems to me to be highly\nskewed. Like, suppose we change the default from CRC-32C to\nSHA-something. On the upside, the error detection rate will increase\nfrom 99.9999999+% to something much closer to 100%. On the downside,\nbackups will get as much as 40-50% slower for some users. 
I hope we\ncan agree that both detecting errors and taking backups quickly are\nimportant. However, it is hard for me to imagine that the typical user\nwould want to pay even a 5-10% performance penalty when taking a\nbackup in order to improve an error detection feature which they may\nnot even use and which already has less than a one-in-a-billion chance\nof going wrong. We routinely reject features for causing, say, a 2%\nregression on general workloads. Base backup speed is probably less\nimportant than how many SELECT or INSERT queries you can pump through\nthe system in a second, but it's still a pain point for lots of\npeople. I think if you said to some users \"hey, would you like to have\nerror detection for your backups? it'll cost 10%\" many people would\nsay \"yes, please.\" But I think if you went to the same users and said\n\"hey, would you like to make the error detection for your backups\nbetter? it currently has a less than 1-in-a-billion chance of failing\nto detect random corruption, and you can reduce that by many orders of\nmagnitude for an extra 10% on your backup time,\" I think the results\nwould be much more mixed. Some people would like it, but certainly\nnot everybody.\n\n> I'm not actually arguing about which hash functions we should support,\n> but rather what the default is and if crc32c, specifically, is actually\n> a reasonable choice. Just because it's fast and we already had an\n> implementation of it doesn't justify its use as the default. 
Given that\n> it doesn't actually provide the check that is generally expected of\n> CRC checksums (100% detection of single-bit errors) when the file size\n> gets over 512MB makes me wonder if we should have it at all, yes, but it\n> definitely makes me think it shouldn't be our default.\n\nI mean, the property that I care about is the one where it detects\nbetter than 999,999,999 errors out of every 1,000,000,000, regardless\nof input length.\n\n> I don't agree with limiting our view to only those algorithms that we've\n> already got implemented in PG.\n\nI mean, opening that giant can of worms ~2 weeks before feature freeze\nis not very nice. This patch has been around for months, and the\nalgorithms were openly discussed a long time ago. I checked and found\nout that the CRC-64 code was nuked in commit\n404bc51cde9dce1c674abe4695635612f08fe27e, so in theory we could revert\nthat, but how much confidence do we have that the code in question\nactually did the right thing, or that it's actually fast? An awful lot\nof work has been done on the CRC-32C code over the years, including\nseveral rounds of speeding it up\n(f044d71e331d77a0039cec0a11859b5a3c72bc95,\n3dc2d62d0486325bf263655c2d9a96aee0b02abe) and one round of fixing it\nbecause it was producing completely wrong answers\n(5028f22f6eb0579890689655285a4778b4ffc460), so I don't have a lot of\nconfidence about that CRC-64 code being totally without problems.\n\nThe commit message for that last commit,\n5028f22f6eb0579890689655285a4778b4ffc460, seems pretty relevant in\nthis context, too. It observes that, because it \"does not correspond\nto any bit-wise CRC calculation\" it is \"difficult to reason about its\nproperties.\" In other words, the algorithm that we used for WAL\nrecords for many years likely did not have the guaranteed\nerror-detection properties with which you are so concerned (nor do\nmost hash functions we might choose; CRC-64 is probably the only\nchoice that would). 
Despite that, the commit message also observed\nthat \"it has worked well in practice.\" I realize I'm not convincing\nyou of anything here, but the guaranteed error-detection properties of\nCRC are almost totally uninteresting in this context. I'm not\nconcerned that CRC-32C doesn't have those properties. I'm not\nconcerned that SHA-n wouldn't have those properties. I'm not concerned\nthat xxhash or HighwayHash don't have that property either. I doubt\nthe fact that CRC-64 would have that property would give us much\nbenefit. I think the only things that matter here are (1) how many\nbits you get (more bits = better chance of finding errors, but even\n*sixteen* bits would give you a pretty fair chance of noticing if\nthings are broken) and (2) whether you want a cryptographic hash\nfunction so that you can keep the backup manifest in a vault.\n\n> It's saying, removing the listing aspect, exactly that \"backup_label is\n> excluded from verification\". That's what I am taking issue with. I've\n> made multiple attempts to suggest other language to avoid saying that\n> because it's clearly wrong- the manifest is verified.\n\nWell, it's talking about the particular kind of verification that has\njust been discussed, not any form of verification. As one idea,\nperhaps instead of:\n\n+ Certain files and directories are\n+ excluded from verification:\n\n...I could maybe insert a paragraph break there and then continue with\nsomething like this:\n\nWhen pg_validatebackup compares the files and directories in the manifest\nto those which are present on disk, it will ignore the presence of, or\nchanges to, certain files:\n\nbackup_manifest will not be present in the manifest itself, and is\ntherefore ignored. Note that the manifest is still verified\ninternally, as described above, but no error will be issued about the\npresence of a backup_manifest file in the backup directory even though\nit is not listed in the manifest.\n\nWould that be more clear? 
Do you want to suggest something else?\n\n> I'm not talking about making sure that no error ever happens when doing\n> a restore of a particular backup. You're arguing against something that\n> I have not advocated for and which I don't advocate for.\n>\n> I'm saying that the existing tool that takes the backup has a *really*\n> *important* verification check that this proposed \"validate backup\" tool\n> doesn't have, and that isn't sensible. It leads to situations where the\n> backup tool itself, pg_basebackup, can fail or be killed before it's\n> actually completed, and the \"validate backup\" tool would say that the\n> backup is perfectly fine. That is not sensible.\n\nIf someone's procedure for taking and restoring backups involves not\nknowing whether or not pg_basebackup completed without error and then\ntrying to use the backup anyway, they are doing something which is\nvery foolish, and it's questionable whether any technological solution\nhas much hope of getting them out of trouble. But on the plus side,\nthis patch would have a good chance of detecting the problem, which is\na noticeable improvement over what we have now, which has no chance of\ndetecting the problem, because we have nothing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 26 Mar 2020 14:02:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Mar 26, 2020, at 9:34 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > I'm not actually argueing about which hash functions we should support,\n> > but rather what the default is and if crc32c, specifically, is actually\n> > a reasonable choice. Just because it's fast and we already had an\n> > implementation of it doesn't justify its use as the default. Given that\n> > it doesn't actually provide the check that is generally expected of\n> > CRC checksums (100% detection of single-bit errors) when the file size\n> > gets over 512MB makes me wonder if we should have it at all, yes, but it\n> > definitely makes me think it shouldn't be our default.\n> \n> I don't understand your focus on the single-bit error issue. \n\nMaybe I'm wrong, but my understanding was that detecting single-bit\nerrors was one of the primary design goals of CRC and why people talk\nabout CRCs of certain sizes having 'limits'- that's the size at which\nsingle-bit errors will no longer, necessarily, be picked up and\ntherefore that's where the CRC of that size starts falling down on that\ngoal.\n\n> If you are sending your backup across the wire, single bit errors during transmission should already be detected as part of the networking protocol. The real issue has to be detection of the kinds of errors or modifications that are most likely to happen in practice. Which are those? People manually mucking with the files? Bugs in backup scripts? Corruption on the storage device? Truncated files? The more bits in the checksum (assuming a well designed checksum algorithm), the more likely we are to detect accidental modification, so it is no surprise if a 64-bit crc does better than 32-bit crc. But that logic can be taken arbitrarily far. 
I don't see the connection between, on the one hand, an analysis of single-bit error detection against file size, and on the other hand, the verification of backups.\n\nWe'd like something that does a good job at detecting any differences\nbetween when the file was copied off of the server and when the command\nis run- potentially weeks or months later. I would expect most issues\nto end up being storage-level corruption over time where the backup is\nstored, which could be single bit flips or whole pages getting zeroed or\nvarious other things. Files changing size probably is one of the less\ncommon things, but, sure, that too.\n\nThat we could take this \"arbitrarily far\" is actually entirely fine-\nthat's a good reason to have alternatives, which this patch does have,\nbut that doesn't mean we should have a default that's not suitable for\nthe files that we know we're going to be storing.\n\nConsider that we could have used a 16-bit CRC instead, but does that\nactually make sense? Ok, sure, maybe someone really wants something\nsuper fast- but should that be our default? If not, then what criteria\nshould we use for the default?\n\n> From a support perspective, I think the much more important issue is making certain that checksums are turned on. A one in a billion chance of missing an error seems pretty acceptable compared to the, let's say, one in two chance that your customer didn't use checksums. Why are we even allowing this to be turned off? Is there a usage case compelling that option?\n\nThe argument is that adding checksums takes more time. I can understand\nthat argument, though I don't really agree with it. Certainly a few\npercent really shouldn't be that big of an issue, and in many cases even\na sha256 hash isn't going to have that dramatic of an impact on the\nactual overall time.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 26 Mar 2020 15:37:11 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/26/20 11:37 AM, Robert Haas wrote:\n>> On Wed, Mar 25, 2020 at 4:54 PM Stephen Frost <sfrost@snowman.net> wrot >\n> This is where I feel like I'm trying to make decisions in a vacuum. If\n> we had a few more people weighing in on the thread on this point, I'd\n> be happy to go with whatever the consensus was. If most people think\n> having both --no-manifest (suppressing the manifest completely) and\n> --manifest-checksums=none (suppressing only the checksums) is useless\n> and confusing, then sure, let's rip the latter one out. If most people\n> like the flexibility, let's keep it: it's already implemented and\n> tested. But I hate to base the decision on what one or two people\n> think.\n\nI'm not sure I see a lot of value to being able to build manifest with \nno checksums, especially if overhead for the default checksum algorithm \nis negligible.\n\nHowever, I'd still prefer that the default be something more robust and \nallow users to tune it down rather than the other way around. But I've \nmade that pretty clear up-thread and I consider that argument lost at \nthis point.\n\n>> As for folks who are that close to the edge on their backup timing that\n>> they can't have it slow down- chances are pretty darn good that they're\n>> not far from ending up needing to find a better solution than\n>> pg_basebackup anyway. Or they don't need to generate a manifest (or, I\n>> suppose, they could have one but not have checksums..).\n> \n> 40-50% is a lot more than \"if you were on the edge.\"\n\nFor the record I think this is a very misleading number. Sure, if you \nare doing your backup to a local SSD on a powerful development laptop it \nmakes sense.\n\nBut backups are generally placed on slower storage, remotely, with \ncompression. 
Even without compression the first two are going to bring \nthis percentage down by a lot.\n\nWhen you get to page-level incremental backups, which is where this all \nstarted, I'd still recommend using a stronger checksum algorithm to \nverify that the file was reconstructed correctly on restore. That much \nI believe we have agreed on.\n\n>> Even pg_basebackup (in both fetch and stream modes...) checks that we at\n>> least got all the WAL that's needed for the backup from the server\n>> before considering the backup to be valid and telling the user that\n>> there was a successful backup. With what you're proposing here, we\n>> could have someone do a pg_basebackup, get back an ERROR saying the\n>> backup wasn't valid, and then run pg_validatebackup and be told that the\n>> backup is valid. I don't get how that's sensible.\n> \n> I'm sorry that you can't see how that's sensible, but it doesn't mean\n> that it isn't sensible. It is totally unrealistic to expect that any\n> backup verification tool can verify that you won't get an error when\n> trying to use the backup. That would require that everything that the\n> validation tool try to do everything that PostgreSQL will try to do\n> when the backup is used, including running recovery and updating the\n> data files. Anything less than that creates a real possibility that\n> the backup will verify good but fail when used. This tool has a much\n> narrower purpose, which is to try to verify that we (still) have the\n> files the server sent as part of the backup and that, to the best of\n> our ability to detect such things, they have not been modified. As you\n> know, or should know, the WAL files are not sent as part of the\n> backup, and so are not verified. Other things that would also be\n> useful to check are also not verified. 
It would be fantastic to have\n> more verification tools in the future, but it is difficult to see why\n> anyone would bother trying if an attempt to get the first one\n> committed gets blocked because it does not yet do everything. Very few\n> patches try to do everything, and those that do usually get blocked\n> because, by trying to do too much, they get some of it badly wrong.\n\nI agree with Stephen that this should be done, but I agree with you that \nit can wait for a future commit. However, I do think:\n\n1) It should be called out rather plainly in the documentation.\n2) If there are files in pg_wal then pg_validatebackup should inform the \nuser that those files have not been validated.\n\nI know you and Stephen have agreed on a number of doc changes, would it \nbe possible to get a new patch with those included? I finally have time \nto do a review of this tomorrow. I saw some mistakes in the docs in the \ncurrent patch but I know those patches are not current.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 26 Mar 2020 16:37:47 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "\n\n> On Mar 26, 2020, at 12:37 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Greetings,\n> \n> * Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n>>> On Mar 26, 2020, at 9:34 AM, Stephen Frost <sfrost@snowman.net> wrote:\n>>> I'm not actually argueing about which hash functions we should support,\n>>> but rather what the default is and if crc32c, specifically, is actually\n>>> a reasonable choice. Just because it's fast and we already had an\n>>> implementation of it doesn't justify its use as the default. Given that\n>>> it doesn't actually provide the check that is generally expected of\n>>> CRC checksums (100% detection of single-bit errors) when the file size\n>>> gets over 512MB makes me wonder if we should have it at all, yes, but it\n>>> definitely makes me think it shouldn't be our default.\n>> \n>> I don't understand your focus on the single-bit error issue. \n> \n> Maybe I'm wrong, but my understanding was that detecting single-bit\n> errors was one of the primary design goals of CRC and why people talk\n> about CRCs of certain sizes having 'limits'- that's the size at which\n> single-bit errors will no longer, necessarily, be picked up and\n> therefore that's where the CRC of that size starts falling down on that\n> goal.\n\nI think I agree with all that. I'm not sure it is relevant. When people use CRCs to detect things *other than* transmission errors, they are in some sense using a hammer to drive a screw. At that point, the analysis of how good the hammer is, and how big a nail it can drive, is no longer relevant. The relevant discussion here is how appropriate a CRC is for our purpose. I don't know the answer to that, but it doesn't seem the single-bit error analysis is the right analysis.\n\n>> If you are sending your backup across the wire, single bit errors during transmission should already be detected as part of the networking protocol. 
The real issue has to be detection of the kinds of errors or modifications that are most likely to happen in practice. Which are those? People manually mucking with the files? Bugs in backup scripts? Corruption on the storage device? Truncated files? The more bits in the checksum (assuming a well designed checksum algorithm), the more likely we are to detect accidental modification, so it is no surprise if a 64-bit crc does better than 32-bit crc. But that logic can be taken arbitrarily far. I don't see the connection between, on the one hand, an analysis of single-bit error detection against file size, and on the other hand, the verification of backups.\n> \n> We'd like something that does a good job at detecting any differences\n> between when the file was copied off of the server and when the command\n> is run- potentially weeks or months later. I would expect most issues\n> to end up being storage-level corruption over time where the backup is\n> stored, which could be single bit flips or whole pages getting zeroed or\n> various other things. Files changing size probably is one of the less\n> common things, but, sure, that too.\n> \n> That we could take this \"arbitrarily far\" is actually entirely fine-\n> that's a good reason to have alternatives, which this patch does have,\n> but that doesn't mean we should have a default that's not suitable for\n> the files that we know we're going to be storing.\n> \n> Consider that we could have used a 16-bit CRC instead, but does that\n> actually make sense? Ok, sure, maybe someone really wants something\n> super fast- but should that be our default? If not, then what criteria\n> should we use for the default?\n\nI'll answer this below....\n\n>> From a support perspective, I think the much more important issue is making certain that checksums are turned on. A one in a billion chance of missing an error seems pretty acceptable compared to the, let's say, one in two chance that your customer didn't use checksums. 
Why are we even allowing this to be turned off? Is there a usage case compelling that option?\n> \n> The argument is that adding checksums takes more time. I can understand\n> that argument, though I don't really agree with it. Certainly a few\n> percent really shouldn't be that big of an issue, and in many cases even\n> a sha256 hash isn't going to have that dramatic of an impact on the\n> actual overall time.\n\nI see two dangers here:\n\n(1) The user enables checksums of some type, and due to checksums not being perfect, corruption happens but goes undetected, leaving her in a bad place.\n\n(2) The user makes no checksum selection at all, gets checksums of the *default* type, determines it is too slow for her purposes, and instead of adjusting the checksum algorithm to something faster, simply turns checksums off; corruption happens and of course is undetected, leaving her in a bad place.\n\nI think the risk of (2) is far worse, which makes me tend towards a default that is fast enough not to encourage anybody to disable checksums altogether. I have no opinion about which algorithm is best suited to that purpose, because I haven't benchmarked any. I'm pretty much going off what Robert said, in terms of how big an impact using a heavier algorithm would be. Perhaps you'd like to run benchmarks and make a concrete proposal for another algorithm, with numbers showing the runtime changes? You mentioned up-thread that prior timings which showed a 40-50% slowdown were not including all the relevant stuff, so perhaps you could fix that in your benchmark and let us know what is included in the timings?\n\nI don't think we should be contemplating for v13 any checksum algorithms for the default except the ones already in the options list. Doing that just derails the patch. If you want highwayhash or similar to be the default, can't we hold off until v14 and think about changing the default? 
Maybe I'm missing something, but I don't see any reason why it would be hard to change this after the first version has already been released.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 26 Mar 2020 13:38:13 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
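Mark's list of likely failure modes — zeroed blocks, truncated files, stray bit flips — can be exercised directly. A toy demonstration (again with zlib's CRC-32 standing in for CRC-32C): gross corruption of these kinds changes even a 32-bit checksum with near certainty, and any CRC detects a single flipped bit at any length by construction.

```python
import zlib

page = bytes(range(256)) * 32                  # an 8 kB "page" of sample data
good = zlib.crc32(page)

zeroed = b"\x00" * len(page)                   # whole-block zeroing
truncated = page[:4096]                        # file cut short
flipped = bytes([page[0] ^ 0x01]) + page[1:]   # single bit flip

for label, bad in [("zeroed", zeroed), ("truncated", truncated), ("bit flip", flipped)]:
    print(f"{label:9s}: detected = {zlib.crc32(bad) != good}")
```

What a checksum of any width cannot tell you is *which* of these happened, or whether the backup procedure itself completed — the separate point being argued elsewhere in the thread.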
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Mar 26, 2020 at 12:34 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I do agree with excluding things like md5 and others that aren't good\n> > options. I wasn't saying we should necessarily exclude crc32c either..\n> > but rather saying that it shouldn't be the default.\n> >\n> > Here's another way to look at it- where do we use crc32c today, and how\n> > much data might we possibly be covering with that crc?\n> \n> WAL record size is a 32-bit unsigned integer, so in theory, up to 4GB\n> minus 1 byte. In practice, most of them are not more than a few\n> hundred bytes, the amount we might possibly be covering is a lot more.\n\nIs it actually possible, today, in PG, to have a 4GB WAL record?\nJudging this based on the WAL record size doesn't seem quite right.\n\n> > Why was crc32c\n> > picked for that purpose?\n> \n> Because it was discovered that 64-bit CRC was too slow, per commit\n> 21fda22ec46deb7734f793ef4d7fa6c226b4c78e.\n\n... 15 years ago. I actually find it pretty interesting that we started\nout with a 64bit CRC there, I didn't know that was the case. Also\ninteresting is that we had 64bit CRC code already.\n\n> > If the individual who decided to pick crc32c\n> > for that case was contemplating a checksum for up-to-1GB files, would\n> > they have picked crc32c? Seems unlikely to me.\n> \n> It's hard to be sure what someone who isn't us would have done in some\n> situation that they didn't face, but we do have the discussion thread:\n> \n> https://www.postgresql.org/message-id/flat/9291.1117593389%40sss.pgh.pa.us#c4e413bbf3d7fbeced7786da1c3aca9c\n> \n> The question of how much data is protected by the CRC was discussed,\n> mostly in the first few messages, in general terms, but it doesn't\n> seem to have covered the question very thoroughly. 
I'm sure we could\n> each draw things from that discussion that support our view of the\n> situation, but I'm not sure it would be very productive.\n\nInteresting.\n\n> What confuses to me is that you seem to have a view of the upsides and\n> downsides of these various algorithms that seems to me to be highly\n> skewed. Like, suppose we change the default from CRC-32C to\n> SHA-something. On the upside, the error detection rate will increase\n> from 99.9999999+% to something much closer to 100%. On the downside,\n> backups will get as much as 40-50% slower for some users. I hope we\n> can agree that both detecting errors and taking backups quickly are\n> important. However, it is hard for me to imagine that the typical user\n> would want to pay even a 5-10% performance penalty when taking a\n> backup in order to improve an error detection feature which they may\n> not even use and which already has less than a one-in-a-billion chance\n> of going wrong. We routinely reject features for causing, say, a 2%\n> regression on general workloads. Base backup speed is probably less\n> important than how many SELECT or INSERT queries you can pump through\n> the system in a second, but it's still a pain point for lots of\n> people. I think if you said to some users \"hey, would you like to have\n> error detection for your backups? it'll cost 10%\" many people would\n> say \"yes, please.\" But I think if you went to the same users and said\n> \"hey, would you like to make the error detection for your backups\n> better? it currently has a less than 1-in-a-billion chance of failing\n> to detect random corruption, and you can reduce that by many orders of\n> magnitude for an extra 10% on your backup time,\" I think the results\n> would be much more mixed. 
Some people would like it, but it certainly\n> not everybody.\n\nI think you're right that base backup speed is much less of an issue to\nslow down than SELECT or INSERT workloads, but I do also understand\nthat it isn't completely unimportant, which is why having options isn't\na bad idea here. That said, the options presented for users should all\nbe reasonable options, and for the default we should pick something\nsensible, erring on the \"be safer\" side, if anything.\n\nThere's lots of options for speeding up base backups, with this patch,\neven if the default is to have a manifest with sha256 hashes- it could\nbe changed to some form of CRC, or changed to not have checksums, or\nchanged to not have a manifest. Users will have options.\n\nAgain, I'm not against having a checksum algorithm as an option. I'm not\nsaying that it must be SHA512 as the default.\n\n> > I'm not actually argueing about which hash functions we should support,\n> > but rather what the default is and if crc32c, specifically, is actually\n> > a reasonable choice. Just because it's fast and we already had an\n> > implementation of it doesn't justify its use as the default. Given that\n> > it doesn't actually provide the check that is generally expected of\n> > CRC checksums (100% detection of single-bit errors) when the file size\n> > gets over 512MB makes me wonder if we should have it at all, yes, but it\n> > definitely makes me think it shouldn't be our default.\n> \n> I mean, the property that I care about is the one where it detects\n> better than 999,999,999 errors out of every 1,000,000,000, regardless\n> of input length.\n\nThrowing these kinds of things around I really don't think is useful.\n\n> > I don't agree with limiting our view to only those algorithms that we've\n> > already got implemented in PG.\n> \n> I mean, opening that giant can of worms ~2 weeks before feature freeze\n> is not very nice. 
This patch has been around for months, and the\n> algorithms were openly discussed a long time ago. \n\nYes, they were discussed before, and these issues were brought up before\nand there was specifically concern brought up about exactly the same\nissues that I'm repeating here. Those concerns seem to have been\nlargely ignored, apparently because \"we don't have that in PG today\" as\nat least one of the considerations- even though we used to. I don't\nthink that was the right response and, yeah, I saw that you were\nplanning to commit and that prompted me to look into it right now. I\ndon't think that's entirely uncommon around here. I also had hoped that\nDavid's concerns that were raised before had been heeded, as I knew he\nwas involved in the discussion previously, but that turns out to not\nhave been the case.\n\n> > It's saying, removing the listing aspect, exactly that \"backup_label is\n> > excluded from verification\". That's what I am taking issue with. I've\n> > made multiple attempts to suggest other language to avoid saying that\n> > because it's clearly wrong- the manifest is verified.\n> \n> Well, it's talking about the particular kind of verification that has\n> just been discussed, not any form of verification. As one idea,\n> perhaps instead of:\n> \n> + Certain files and directories are\n> + excluded from verification:\n> \n> ...I could maybe insert a paragraph break there and then continue with\n> something like this:\n> \n> When pg_basebackup compares the files and directories in the manifest\n> to those which are present on disk, it will ignore the presence of, or\n> changes to, certain files:\n> \n> backup_manifest will not be present in the manifest itself, and is\n> therefore ignored. Note that the manifest is still verified\n> internally, as described above, but no error will be issued about the\n> presence of a backup_manifest file in the backup directory even though\n> it is not listed in the manifest.\n> \n> Would that be more clear? 
Do you want to suggest something else?\n\nYes, that looks fine. Feels slightly redundant to include the \"as\ndescribed above ...\" bit, and I think that could be dropped, but up to\nyou.\n\n> > I'm not talking about making sure that no error ever happens when doing\n> > I'm saying that the existing tool that takes the backup has a *really*\n> > *important* verification check that this proposed \"validate backup\" tool\n> > doesn't have, and that isn't sensible. It leads to situations where the\n> > backup tool itself, pg_basebackup, can fail or be killed before it's\n> > actually completed, and the \"validate backup\" tool would say that the\n> > backup is perfectly fine. That is not sensible.\n> \n> If someone's procedure for taking and restoring backups involves not\n> knowing whether or not pg_basebackup completed without error and then\n> trying to use the backup anyway, they are doing something which is\n> very foolish, and it's questionable whether any technological solution\n> has much hope of getting them out of trouble. But on the plus side,\n> this patch would have a good chance of detecting the problem, which is\n> a noticeable improvement over what we have now, which has no chance of\n> detecting the problem, because we have nothing.\n\nThis doesn't address my concern at all. Even if it seems ridiculous and\nfoolish to think that a backup was successful when the system was\nrebooted and pg_basebackup was killed before all of the WAL had made it\ninto pg_wal, there is absolutely zero doubt in my mind that it's going\nto happen and users are going to, entirely reasonably, think that\npg_validatebackup at least includes all the checks that pg_basebackup\ndoes about making sure that the backup is valid.\n\nI really don't understand how we can have a backup validation tool that\ndoesn't do the absolute basics, like making sure that we have all of the\nWAL for the backup. 
I've routinely, almost jokingly, said to folks that\nany backup tool that doesn't check that isn't really a backup tool, and\nI was glad that pg_basebackup had that check, so, yeah, I'm going to\ncontinue to object to committing a backup validation tool that doesn't\nhave that absolutely basic and necessary check.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 26 Mar 2020 16:44:14 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
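The WAL-completeness check Stephen describes amounts to knowing which segment files the backup's start and stop LSNs imply, then confirming each one exists. A simplified sketch of that segment-name arithmetic — assuming the default 16 MB segment size and ignoring timeline switches and the exact boundary handling pg_basebackup uses:

```python
WAL_SEG_SIZE = 16 * 1024 * 1024                 # default wal_segment_size
SEGS_PER_XLOGID = 0x100000000 // WAL_SEG_SIZE   # 256 segments per 4 GB "xlogid"

def parse_lsn(lsn: str) -> int:
    """Turn an LSN like '0/16000028' into a 64-bit byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def segments_needed(start_lsn: str, end_lsn: str, tli: int = 1):
    """Yield the WAL segment file names covering start_lsn..end_lsn."""
    first = parse_lsn(start_lsn) // WAL_SEG_SIZE
    last = parse_lsn(end_lsn) // WAL_SEG_SIZE
    for seg in range(first, last + 1):
        # Segment names are timeline + high/low halves of the segment number,
        # each as 8 hex digits.
        yield f"{tli:08X}{seg // SEGS_PER_XLOGID:08X}{seg % SEGS_PER_XLOGID:08X}"
```

A verifier would then check that every yielded name is present (and readable) in pg_wal or the archive before declaring the backup restorable.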
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Mar 26, 2020, at 12:37 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> > * Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> >>> On Mar 26, 2020, at 9:34 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> >>> I'm not actually argueing about which hash functions we should support,\n> >>> but rather what the default is and if crc32c, specifically, is actually\n> >>> a reasonable choice. Just because it's fast and we already had an\n> >>> implementation of it doesn't justify its use as the default. Given that\n> >>> it doesn't actually provide the check that is generally expected of\n> >>> CRC checksums (100% detection of single-bit errors) when the file size\n> >>> gets over 512MB makes me wonder if we should have it at all, yes, but it\n> >>> definitely makes me think it shouldn't be our default.\n> >> \n> >> I don't understand your focus on the single-bit error issue. \n> > \n> > Maybe I'm wrong, but my understanding was that detecting single-bit\n> > errors was one of the primary design goals of CRC and why people talk\n> > about CRCs of certain sizes having 'limits'- that's the size at which\n> > single-bit errors will no longer, necessarily, be picked up and\n> > therefore that's where the CRC of that size starts falling down on that\n> > goal.\n> \n> I think I agree with all that. I'm not sure it is relevant. When people use CRCs to detect things *other than* transmission errors, they are in some sense using a hammer to drive a screw. At that point, the analysis of how good the hammer is, and how big a nail it can drive, is no longer relevant. The relevant discussion here is how appropriate a CRC is for our purpose. 
I don't know the answer to that, but it doesn't seem the single-bit error analysis is the right analysis.\n\nI disagree that it's not relevant- it's, in fact, the one really clear\nthing we can get a pretty straight-forward answer on, and that seems\nreally useful to me.\n\n> >> If you are sending your backup across the wire, single bit errors during transmission should already be detected as part of the networking protocol. The real issue has to be detection of the kinds of errors or modifications that are most likely to happen in practice. Which are those? People manually mucking with the files? Bugs in backup scripts? Corruption on the storage device? Truncated files? The more bits in the checksum (assuming a well designed checksum algorithm), the more likely we are to detect accidental modification, so it is no surprise if a 64-bit crc does better than 32-bit crc. But that logic can be taken arbitrarily far. I don't see the connection between, on the one hand, an analysis of single-bit error detection against file size, and on the other hand, the verification of backups.\n> > \n> > We'd like something that does a good job at detecting any differences\n> > between when the file was copied off of the server and when the command\n> > is run- potentially weeks or months later. I would expect most issues\n> > to end up being storage-level corruption over time where the backup is\n> > stored, which could be single bit flips or whole pages getting zeroed or\n> > various other things. Files changing size probably is one of the less\n> > common things, but, sure, that too.\n> > \n> > That we could take this \"arbitrarily far\" is actually entirely fine-\n> > that's a good reason to have alternatives, which this patch does have,\n> > but that doesn't mean we should have a default that's not suitable for\n> > the files that we know we're going to be storing.\n> > \n> > Consider that we could have used a 16-bit CRC instead, but does that\n> > actually make sense? 
Ok, sure, maybe someone really wants something\n> > super fast- but should that be our default? If not, then what criteria\n> > should we use for the default?\n> \n> I'll answer this below....\n> \n> >> From a support perspective, I think the much more important issue is making certain that checksums are turned on. A one in a billion chance of missing an error seems pretty acceptable compared to the, let's say, one in two chance that your customer didn't use checksums. Why are we even allowing this to be turned off? Is there a usage case compelling that option?\n> > \n> > The argument is that adding checksums takes more time. I can understand\n> > that argument, though I don't really agree with it. Certainly a few\n> > percent really shouldn't be that big of an issue, and in many cases even\n> > a sha256 hash isn't going to have that dramatic of an impact on the\n> > actual overall time.\n> \n> I see two dangers here:\n> \n> (1) The user enables checksums of some type, and due to checksums not being perfect, corruption happens but goes undetected, leaving her in a bad place.\n> \n> (2) The user makes no checksum selection at all, gets checksums of the *default* type, determines it is too slow for her purposes, and instead of adjusting the checksum algorithm to something faster, simply turns checksums off; corruption happens and of course is undetected, leaving her in a bad place.\n\nAlright, I have tried to avoid referring back to pgbackrest, but I can't\nhelp it here.\n\nWe have never, ever, had a user come to us and complain that pgbackrest\nis too slow because we're using a SHA hash. We have also had them by\ndefault since absolutely day number one, and we even removed the option\nto disable them in 1.0. We've never even been asked if we should\nimplement some other hash or checksum which is faster.\n\n> I think the risk of (2) is far worse, which makes me tend towards a default that is fast enough not to encourage anybody to disable checksums altogether. 
I have no opinion about which algorithm is best suited to that purpose, because I haven't benchmarked any. I'm pretty much going off what Robert said, in terms of how big an impact using a heavier algorithm would be. Perhaps you'd like to run benchmarks and make a concrete proposal for another algorithm, with numbers showing the runtime changes? You mentioned up-thread that prior timings which showed a 40-50% slowdown were not including all the relevant stuff, so perhaps you could fix that in your benchmark and let us know what is included in the timings?\n\nI don't even know what the 40-50% slowdown numbers included. Also, the\ngeneral expectation in this community is that whoever is pushing a\ngiven patch forward should be providing the benchmarks to justify their\nchoice.\n\n> I don't think we should be contemplating for v13 any checksum algorithms for the default except the ones already in the options list. Doing that just derails the patch. If you want highwayhash or similar to be the default, can't we hold off until v14 and think about changing the default? Maybe I'm missing something, but I don't see any reason why it would be hard to change this after the first version has already been released.\n\nI'd rather we default to something that we are all confident and happy\nwith, erring on the side of it being overkill rather than something\nthat we know isn't really appropriate for the data volume.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 26 Mar 2020 17:00:00 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
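A minimal version of the benchmark being requested might look like the following (zlib's CRC-32 and hashlib's OpenSSL-backed SHA-256 stand in for PostgreSQL's CRC-32C and SHA-256 implementations, so absolute numbers will differ from pg_basebackup's, but the relative gap is the point of contention):

```python
import hashlib
import time
import zlib

def throughput(fn, data: bytes, repeat: int = 20) -> float:
    """MB/s for one pass of fn over data, best of `repeat` runs."""
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        fn(data)
        best = min(best, time.perf_counter() - t0)
    return len(data) / best / 1e6

data = b"\xa5" * (8 * 1024 * 1024)  # 8 MB of repeated bytes
print(f"crc32 : {throughput(zlib.crc32, data):8.0f} MB/s")
print(f"sha256: {throughput(lambda d: hashlib.sha256(d).digest(), data):8.0f} MB/s")
```

Whether the resulting percentage matters depends, as David notes up-thread, on whether the checksum or the storage/network/compression path is the bottleneck for a given installation.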
{
"msg_contents": "Hi,\n\nOn 2020-03-26 11:37:48 -0400, Robert Haas wrote:\n> I mean, you're just repeating the same argument here, and it's just\n> not valid. Regardless of the file size, the chances of a false\n> checksum match are literally less than one in a billion. There is\n> every reason to believe that users will be happy with a low-overhead\n> method that has a 99.9999999+% chance of detecting corrupt files. I do\n> agree that a 64-bit CRC would probably be not much more expensive and\n> improve the probability of detecting errors even further\n\nI *seriously* doubt that it's true that 64bit CRCs wouldn't be\nslower. The only reason CRC32C is semi-fast is that we're accelerating\nit using hardware instructions (on x86-64 and ARM at least). Before that\nit was very regularly the bottleneck for processing WAL - and it still\nsometimes is. Most CRCs aren't actually very fast to compute, because\nthey don't lend themselves to benefit from ILP or SIMD. We spent a fair\nbit of time optimizing our crc implementation before the hardware\nsupport was widespread.\n\n\n> but I wanted to restrict this patch to using infrastructure we already\n> have. The choices there are the various SHA functions (so I supported\n> those), MD5 (which I deliberately omitted, for reasons I hope you'll\n> be the first to agree with), CRC-32C (which is fast), a couple of\n> other CRC-32 variants (which I omitted because they seemed redundant\n> and one of them only ever existed in PostgreSQL because of a coding\n> mistake), and the hacked-up version of FNV that we use for page-level\n> checksums (which is only 16 bits and seems to have no advantages for\n> this purpose).\n\nFWIW, FNV is only 16bit because we reduce its size to 16 bit. See the\ntail of pg_checksum_page.\n\n\nI'm not sure the error detection guarantees of various CRC algorithms\nare that relevant here, btw. 
IMO, for something like checksums in a\nbackup, just having a single one-bit error isn't as common as having\nlarger errors (e.g. entire blocks being zeroed). And at detecting those,\n32bit checksums aren't that good.\n\n\n> > As for folks who are that close to the edge on their backup timing that\n> > they can't have it slow down- chances are pretty darn good that they're\n> > not far from ending up needing to find a better solution than\n> > pg_basebackup anyway. Or they don't need to generate a manifest (or, I\n> > suppose, they could have one but not have checksums..).\n> \n> 40-50% is a lot more than \"if you were on the edge.\"\n\nsha256 does approximately 400MB/s per core on modern intel CPUs. That's\nway below commonly accessible storage / network capabilities (and even\nif you're only doing 200MB/s, you're still going to spend roughly half\nof the CPU time just doing hashing). It's unlikely that you're going to\nsee much speedup for sha256 just by upgrading a CPU. While there are\nhardware instructions available, they don't result in all that large\nimprovements. Of course, we could also start using the GPU (err, really\nno).\n\nDefaulting to that makes very little sense to me. You're not just going\nto spend that time while backing up, but also when validating backups\n(i.e. network limits suddenly aren't a relevant bottleneck anymore).\n\n\n> > I fail to see the usefulness of a tool that doesn't actually verify that\n> > the backup is able to be restored from.\n> >\n> > Even pg_basebackup (in both fetch and stream modes...) checks that we at\n> > least got all the WAL that's needed for the backup from the server\n> > before considering the backup to be valid and telling the user that\n> > there was a successful backup. With what you're proposing here, we\n> > could have someone do a pg_basebackup, get back an ERROR saying the\n> > backup wasn't valid, and then run pg_validatebackup and be told that the\n> > backup is valid. 
I don't get how that's sensible.\n> \n> I'm sorry that you can't see how that's sensible, but it doesn't mean\n> that it isn't sensible. It is totally unrealistic to expect that any\n> backup verification tool can verify that you won't get an error when\n> trying to use the backup. That would require that everything that the\n> validation tool try to do everything that PostgreSQL will try to do\n> when the backup is used, including running recovery and updating the\n> data files. Anything less than that creates a real possibility that\n> the backup will verify good but fail when used. This tool has a much\n> narrower purpose, which is to try to verify that we (still) have the\n> files the server sent as part of the backup and that, to the best of\n> our ability to detect such things, they have not been modified. As you\n> know, or should know, the WAL files are not sent as part of the\n> backup, and so are not verified. Other things that would also be\n> useful to check are also not verified. It would be fantastic to have\n> more verification tools in the future, but it is difficult to see why\n> anyone would bother trying if an attempt to get the first one\n> committed gets blocked because it does not yet do everything. Very few\n> patches try to do everything, and those that do usually get blocked\n> because, by trying to do too much, they get some of it badly wrong.\n\nIt sounds to me that if there are to be manifests for the WAL, it should\nbe a separate (set of) manifests. Trying to somehow tie together the\nmanifest for the base backup, and the one for the WAL, makes little\nsense to me. They're commonly not computed in one place, often not even\nstored in the same place. For PITR relevant WAL doesn't even exist yet\nat the time the manifest is created (and thus obviously cannot be\nincluded in the base backup manifest). 
And fairly obviously one would\nwant to be able to verify the correctness of WAL between two\nbasebackups.\n\nI don't see much point in complicating the design to somehow capture WAL\nin the manifest, when it's only going to solve a small set of cases.\n\nSeems better to (later?) add support for generating manifests for WAL\nfiles, and then have a tool that can verify all the manifests required\nto restore a base backup.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Mar 2020 21:30:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
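The throughput gap underlying the argument above is easy to reproduce. A minimal Python sketch, using `zlib`'s CRC-32 (the Ethernet polynomial, standing in for the CRC-32C PostgreSQL actually uses, which isn't in the standard library — the buffer size and pass count are arbitrary choices, not anything from the patch):

```python
import hashlib
import time
import zlib

def throughput_mb_s(update, data, passes=20):
    """Rough MB/s for repeatedly running a checksum function over `data`."""
    start = time.perf_counter()
    for _ in range(passes):
        update(data)
    elapsed = time.perf_counter() - start
    return len(data) * passes / elapsed / 1e6

# 8 MB buffer, roughly the shape of a streaming read during a base backup.
data = b"\x5a" * (8 * 1024 * 1024)

sha_rate = throughput_mb_s(lambda d: hashlib.sha256(d).digest(), data)
crc_rate = throughput_mb_s(zlib.crc32, data)

print(f"sha256: {sha_rate:8.0f} MB/s")
print(f"crc32:  {crc_rate:8.0f} MB/s")
```

On typical hardware the CRC comes out many times faster than SHA-256, which is the gap the 40-50% backup-slowdown figures in this thread trace back to.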
{
"msg_contents": "Hi,\n\nOn 2020-03-26 14:02:29 -0400, Robert Haas wrote:\n> On Thu, Mar 26, 2020 at 12:34 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Why was crc32c\n> > picked for that purpose?\n> \n> Because it was discovered that 64-bit CRC was too slow, per commit\n> 21fda22ec46deb7734f793ef4d7fa6c226b4c78e.\n\nWell, a 32bit crc, not crc32c. IIRC it was the ethernet polynomial (+\nbug). We switched to crc32c at some point because there are hardware\nimplementations:\n\ncommit 5028f22f6eb0579890689655285a4778b4ffc460\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate: 2014-11-04 11:35:15 +0200\n\n Switch to CRC-32C in WAL and other places.\n\n\n> Like, suppose we change the default from CRC-32C to SHA-something. On\n> the upside, the error detection rate will increase from 99.9999999+%\n> to something much closer to 100%.\n\nFWIW, I don't buy the relevancy of 99.9999999+% at all. That's assuming\na single bit error (at relevant lengths, before that it's single burst\nerrors of a greater length), which isn't that relevant for our purposes.\n\nThat's not to say that I don't think a CRC check can provide value. It\ndoes provide a high likelihood of detecting enough errors, including\ncoding errors in how data is restored (not unimportant), that you're\nlikely to find out about a problem soon.\n\n\n> On the downside,\n> backups will get as much as 40-50% slower for some users. I hope we\n> can agree that both detecting errors and taking backups quickly are\n> important. However, it is hard for me to imagine that the typical user\n> would want to pay even a 5-10% performance penalty when taking a\n> backup in order to improve an error detection feature which they may\n> not even use and which already has less than a one-in-a-billion chance\n> of going wrong.\n\nFWIW, that seems far too large a slowdown to default to for me. 
Most\npeople aren't going to be able to figure out that it's the checksum\nparameter that causes this slowdown, they're just going to feel the pain\nof the backup being much slower than their hardware can deliver.\n\nA few hundred megabytes per second of streaming reads/writes really\ndoesn't take a beefy server these days. Medium sized VMs + a bit larger\nnetwork block devices at all the common cloud providers have\nconsiderably higher bandwidth. Even a raid5x of 4 spinning disks can\ndeliver > 500MB/s.\n\nAnd plenty of even the smaller instances at many providers have >\n5gbit/s network. At the upper end it's way more than that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Mar 2020 22:06:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-26 15:37:11 -0400, Stephen Frost wrote:\n> The argument is that adding checksums takes more time. I can understand\n> that argument, though I don't really agree with it. Certainly a few\n> percent really shouldn't be that big of an issue, and in many cases even\n> a sha256 hash isn't going to have that dramatic of an impact on the\n> actual overall time.\n\nI don't understand how you can come to that conclusion? It doesn't take\nvery long to measure openssl's sha256 performance (which is pretty well\noptimized). Note that we do use openssl's sha256, when compiled with\nopenssl support.\n\nOn my workstation, with a pretty new (but not fastest single core perf\nmodel) intel Xeon Gold 5215, I get:\n\n$ openssl speed sha256\n...\ntype 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes 16384 bytes\nsha256 76711.75k 172036.78k 321566.89k 399008.09k 431423.49k 433689.94k\n\nIOW, ~430MB/s.\n\n\nOn my laptop, with pretty fast cores:\ntype 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes 16384 bytes\nsha256 97054.91k 217188.63k 394864.13k 493441.02k 532100.44k 533441.19k\n\nIOW, 530MB/s\n\n\n530 MB/s is well within the realm of medium sized VMs.\n\nAnd, as mentioned before, even if you do only half of that, you're still\ngoing to be spending roughly half of the CPU time of sending a base\nbackup on hashing.\n\nWhat makes you think that a few hundred MB/s is out of reach for a large\nfraction of PG installations that actually keep backups?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Mar 2020 22:31:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
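The "roughly half of the CPU time" arithmetic above follows from a simple model: if hashing sustains H MB/s on one core and the backup streams at S MB/s, checksumming keeps S/H of a core busy. A sketch (the rates plugged in are the figures quoted in the message, used as assumed inputs; the model ignores any overlap with I/O):

```python
def hashing_core_fraction(backup_rate_mb_s, hash_rate_mb_s):
    """Fraction of one core kept busy by checksumming while a backup
    streams at backup_rate_mb_s and the hash sustains hash_rate_mb_s."""
    return backup_rate_mb_s / hash_rate_mb_s

# Plugging in the numbers from the openssl speed runs above:
print(f"{hashing_core_fraction(200, 430):.2f}")  # 0.47 of a core at 200MB/s
print(f"{hashing_core_fraction(500, 530):.2f}")  # 0.94 of a core at 500MB/s
```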
{
"msg_contents": "Hi,\n\nOn 2020-03-23 12:15:54 -0400, Robert Haas wrote:\n> + <varlistentry>\n> + <term><literal>MANIFEST</literal></term>\n> + <listitem>\n> + <para>\n> + When this option is specified with a value of <literal>ye'</literal>\n\ns/ye'/yes/\n\n> + or <literal>force-escape</literal>, a backup manifest is created\n> + and sent along with the backup. The latter value forces all filenames\n> + to be hex-encoded; otherwise, this type of encoding is performed only\n> + for files whose names are non-UTF8 octet sequences.\n> + <literal>force-escape</literal> is intended primarily for testing\n> + purposes, to be sure that clients which read the backup manifest\n> + can handle this case. For compatibility with previous releases,\n> + the default is <literal>MANIFEST 'no'</literal>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nAre you planning to include a specification of the manifest file format\nanywhere? I looked through the patches and didn't find anything.\n\nI think it'd also be good to include more information about what the\npoint of manifest files actually is.\n\n\n> + <para>\n> + <application>pg_validatebackup</application> reads the manifest file of a\n> + backup, verifies the manifest against its own internal checksum, and then\n> + verifies that the same files are present in the target directory as in the\n> + manifest itself. It then verifies that each file has the expected checksum,\n> + unless the backup was taken the checksum algorithm set to\n> + <literal>none</literal>, in which case checksum verification is not\n> + performed. The presence or absence of directories is not checked, except\n> + indirectly: if a directory is missing, any files it should have contained\n> + will necessarily also be missing. Certain files and directories are\n> + excluded from verification:\n> + </para>\n\nDepending on what you want to use the manifest for, we'd also need to\ncheck that there are no additional files. 
That seems to actually be\nimplemented, which imo should be mentioned here.\n\n\n\n\n> +/*\n> + * Finalize the backup manifest, and send it to the client.\n> + */\n> +static void\n> +SendBackupManifest(manifest_info *manifest)\n> +{\n> +\tStringInfoData protobuf;\n> +\tuint8\t\tchecksumbuf[PG_SHA256_DIGEST_LENGTH];\n> +\tchar\t\tchecksumstringbuf[PG_SHA256_DIGEST_STRING_LENGTH];\n> +\tsize_t\t\tmanifest_bytes_done = 0;\n> +\n> +\t/*\n> +\t * If there is no buffile, then the user doesn't want a manifest, so\n> +\t * don't waste any time generating one.\n> +\t */\n> +\tif (manifest->buffile == NULL)\n> +\t\treturn;\n> +\n> +\t/* Terminate the list of files. */\n> +\tAppendStringToManifest(manifest, \"],\\n\");\n> +\n> +\t/*\n> +\t * Append manifest checksum, so that the problems with the manifest itself\n> +\t * can be detected.\n> +\t *\n> +\t * We always use SHA-256 for this, regardless of what algorithm is chosen\n> +\t * for checksumming the files. If we ever want to make the checksum\n> +\t * algorithm used for the manifest file variable, the client will need a\n> +\t * way to figure out which algorithm to use as close to the beginning of\n> +\t * the manifest file as possible, to avoid having to read the whole thing\n> +\t * twice.\n> +\t */\n> +\tmanifest->still_checksumming = false;\n> +\tpg_sha256_final(&manifest->manifest_ctx, checksumbuf);\n> +\tAppendStringToManifest(manifest, \"\\\"Manifest-Checksum\\\": \\\"\");\n> +\thex_encode((char *) checksumbuf, sizeof checksumbuf, checksumstringbuf);\n> +\tchecksumstringbuf[PG_SHA256_DIGEST_STRING_LENGTH - 1] = '\\0';\n> +\tAppendStringToManifest(manifest, checksumstringbuf);\n> +\tAppendStringToManifest(manifest, \"\\\"}\\n\");\n\nHm. Is it a great choice to include the checksum for the manifest inside\nthe manifest itself? With a cryptographic checksum it seems like it\ncould make a ton of sense to store the checksum somewhere \"safe\", but\nkeep the manifest itself alongside the base backup itself. 
While not\nhuge, they won't be tiny either.\n\n\n\n> diff --git a/src/bin/pg_validatebackup/parse_manifest.c b/src/bin/pg_validatebackup/parse_manifest.c\n> new file mode 100644\n> index 0000000000..e6b42adfda\n> --- /dev/null\n> +++ b/src/bin/pg_validatebackup/parse_manifest.c\n> @@ -0,0 +1,576 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * parse_manifest.c\n> + *\t Parse a backup manifest in JSON format.\n> + *\n> + * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> + * Portions Copyright (c) 1994, Regents of the University of California\n> + *\n> + * src/bin/pg_validatebackup/parse_manifest.c\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n\nDoesn't have to be in the first version, but could it be useful to move\nthis to common/ or such?\n\n\n\n> +/*\n> + * Validate one directory.\n> + *\n> + * 'relpath' is NULL if we are to validate the top-level backup directory,\n> + * and otherwise the relative path to the directory that is to be validated.\n> + *\n> + * 'fullpath' is the backup directory with 'relpath' appended; i.e. the actual\n> + * filesystem path at which it can be found.\n> + */\n> +static void\n> +validate_backup_directory(validator_context *context, char *relpath,\n> +\t\t\t\t\t\t char *fullpath)\n> +{\n\nHm. Should this warn if the directory's permissions are set too openly\n(world writable?)?\n\n\n> +/*\n> + * Validate the checksum of a single file.\n> + */\n> +static void\n> +validate_file_checksum(validator_context *context, manifestfile *tabent,\n> +\t\t\t\t\t char *fullpath)\n> +{\n> +\tpg_checksum_context checksum_ctx;\n> +\tchar\t *relpath = tabent->pathname;\n> +\tint\t\t\tfd;\n> +\tint\t\t\trc;\n> +\tuint8\t\tbuffer[READ_CHUNK_SIZE];\n> +\tuint8\t\tchecksumbuf[PG_CHECKSUM_MAX_LENGTH];\n> +\tint\t\t\tchecksumlen;\n> +\n> +\t/* Open the target file. 
*/\n> +\tif ((fd = open(fullpath, O_RDONLY | PG_BINARY, 0)) < 0)\n> +\t{\n> +\t\treport_backup_error(context, \"could not open file \\\"%s\\\": %m\",\n> +\t\t\t\t\t\t relpath);\n> +\t\treturn;\n> +\t}\n> +\n> +\t/* Initialize checksum context. */\n> +\tpg_checksum_init(&checksum_ctx, tabent->checksum_type);\n> +\n> +\t/* Read the file chunk by chunk, updating the checksum as we go. */\n> +\twhile ((rc = read(fd, buffer, READ_CHUNK_SIZE)) > 0)\n> +\t\tpg_checksum_update(&checksum_ctx, buffer, rc);\n> +\tif (rc < 0)\n> +\t\treport_backup_error(context, \"could not read file \\\"%s\\\": %m\",\n> +\t\t\t\t\t\t relpath);\n> +\n\nHm. I think it'd be good to verify that the checksummed size is the same\nas the size of the file in the manifest.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Mar 2020 23:29:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
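The chunked checksum loop reviewed above, together with the suggested cross-check of the checksummed byte count against the manifest's file size, can be sketched in Python (`READ_CHUNK_SIZE` and the SHA-256 choice here are illustrative, not the tool's actual values):

```python
import hashlib
import os
import tempfile

READ_CHUNK_SIZE = 128 * 1024  # illustrative chunk size

def checksum_file(path, expected_size):
    """Checksum a file chunk by chunk, also verifying that the number of
    bytes read matches the size recorded in the manifest (the extra
    check suggested in the review above)."""
    ctx = hashlib.sha256()
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(READ_CHUNK_SIZE)
            if not chunk:
                break
            ctx.update(chunk)
            total += len(chunk)
    if total != expected_size:
        raise ValueError(f"expected {expected_size} bytes, read {total}")
    return ctx.hexdigest()

# Demonstration against a scratch file spanning more than one chunk.
payload = b"backup payload " * 20000  # 300 kB
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name

digest = checksum_file(path, os.path.getsize(path))
try:
    checksum_file(path, len(payload) + 1)  # simulate a truncated/grown file
    size_mismatch_caught = False
except ValueError:
    size_mismatch_caught = True
os.unlink(path)
```

The size check closes the window where a file changes length between the earlier size pass and the checksum pass, which is exactly the race discussed later in the thread.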
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2020-03-26 11:37:48 -0400, Robert Haas wrote:\n> > I'm sorry that you can't see how that's sensible, but it doesn't mean\n> > that it isn't sensible. It is totally unrealistic to expect that any\n> > backup verification tool can verify that you won't get an error when\n> > trying to use the backup. That would require that the validation\n> > tool try to do everything that PostgreSQL will try to do\n> > when the backup is used, including running recovery and updating the\n> > data files. Anything less than that creates a real possibility that\n> > the backup will verify good but fail when used. This tool has a much\n> > narrower purpose, which is to try to verify that we (still) have the\n> > files the server sent as part of the backup and that, to the best of\n> > our ability to detect such things, they have not been modified. As you\n> > know, or should know, the WAL files are not sent as part of the\n> > backup, and so are not verified. Other things that would also be\n> > useful to check are also not verified. It would be fantastic to have\n> > more verification tools in the future, but it is difficult to see why\n> > anyone would bother trying if an attempt to get the first one\n> > committed gets blocked because it does not yet do everything. Very few\n> > patches try to do everything, and those that do usually get blocked\n> > because, by trying to do too much, they get some of it badly wrong.\n> \n> It sounds to me that if there are to be manifests for the WAL, it should\n> be a separate (set of) manifests. Trying to somehow tie together the\n> manifest for the base backup, and the one for the WAL, makes little\n> sense to me. They're commonly not computed in one place, often not even\n> stored in the same place. 
For PITR relevant WAL doesn't even exist yet\n> at the time the manifest is created (and thus obviously cannot be\n> included in the base backup manifest). And fairly obviously one would\n> want to be able to verify the correctness of WAL between two\n> basebackups.\n\nWe aren't talking about generic PITR or about tools other than\npg_basebackup, which has specific options for grabbing the WAL, and\nmaking sure that it is all there for the backup that was taken.\n\n> I don't see much point in complicating the design to somehow capture WAL\n> in the manifest, when it's only going to solve a small set of cases.\n\nAs it relates to this, I tend to think that it solves the exact case\nthat pg_basebackup is built for and used for. I said up-thread that if\nsomeone does decide to use -X none then we could just throw a warning\n(and perhaps have a way to override that if there's desire for it).\n\n> Seems better to (later?) add support for generating manifests for WAL\n> files, and then have a tool that can verify all the manifests required\n> to restore a base backup.\n\nI'm not trying to expand on the feature set here or move the goalposts\nway down the road, which is what seems to be what's being suggested\nhere. To be clear, I don't have any objection to adding a generic tool\nfor validating WAL as you're talking about here, but I also don't think\nthat's required for pg_validatebackup. What I do think we need is a\ncheck of the WAL that's fetched when people use pg_basebackup -Xstream\nor -Xfetch. pg_basebackup itself has that check because it's critical\nto the backup being successful and valid. Not having that basic\nvalidation of a backup really just isn't ok- there's a reason\npg_basebackup has that check.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Mar 2020 11:26:56 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 4:37 PM David Steele <david@pgmasters.net> wrote:\n> I know you and Stephen have agreed on a number of doc changes, would it\n> be possible to get a new patch with those included? I finally have time\n> to do a review of this tomorrow. I saw some mistakes in the docs in the\n> current patch but I know those patches are not current.\n\nHi David,\n\nHere's a new version with some fixes:\n\n- Fixes for doc typos noted by Stephen Frost and Andres Freund.\n- Replace a doc paragraph about the advantages and disadvantages of\nCRC-32C with one by Stephen Frost, with a slight change by me that I\nthought made it sound more grammatical.\n- Change the pg_validatebackup documentation so that it makes no\nmention of compatible tools, per Stephen.\n- Reword the discussion of the exclude list in the pg_validatebackup\ndocumentation, per discussion between Stephen and myself.\n- Try to make the documentation more clear about the fact that we\ncheck for both extra and missing files.\n- Incorporate a fix from Amit Kapila to make 003_corruption.pl pass on Windows.\n\nHTH,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 27 Mar 2020 13:53:54 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 1:06 AM Andres Freund <andres@anarazel.de> wrote:\n> > Like, suppose we change the default from CRC-32C to SHA-something. On\n> > the upside, the error detection rate will increase from 99.9999999+%\n> > to something much closer to 100%.\n>\n> FWIW, I don't buy the relevancy of 99.9999999+% at all. That's assuming\n> a single bit error (at relevant lengths, before that it's single burst\n> errors of a greater length), which isn't that relevant for our purposes.\n>\n> That's not to say that I don't think a CRC check can provide value. It\n> does provide a high likelihood of detecting enough errors, including\n> coding errors in how data is restored (not unimportant), that you're\n> likely to find out about a problem soon.\n\nSo, I'm glad that you think a CRC check gives a sufficiently good\nchance of detecting errors, but I don't understand your objection\nto the percentage. Stephen just objected to it again, too:\n\nOn Thu, Mar 26, 2020 at 4:44 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I mean, the property that I care about is the one where it detects\n> > better than 999,999,999 errors out of every 1,000,000,000, regardless\n> > of input length.\n>\n> Throwing these kinds of things around I really don't think is useful.\n\n...but I don't understand his reasoning, or yours.\n\nMy reasoning for thinking that the number is accurate is that a 32-bit\nchecksum has 2^32 possible results. If all of those results are\nequally probable, then the probability that two files with unequal\ncontents produce the same result is 2^-32. This does assume that the\nhash function is perfect, which no hash function is, so the actual\nprobability of a collision is likely higher. But if the hash function\nis pretty good, it shouldn't be all that much higher. Note that I am\nmaking no assumptions here about how many bits are different, nor am I\nmaking any assumption about the length of a file. 
I am simply saying\nthat an n-bit checksum should detect a difference between two files\nwith a probability of roughly 1-2^{-n}, modulo the imperfections of\nthe hash function. I thought that this was a well-accepted fact that\nwould produce little argument from anybody, and I'm confused that\npeople seem to feel otherwise.\n\nOne explanation that would make sense to me is if somebody said, well,\nthe nature of this particular algorithm means that, although values\nare uniformly distributed in general, the kinds of errors that are\nlikely to occur in practice are likely to cancel out. For instance, if\nyou imagine trivial algorithms such as adding or xor-ing all the\nbytes, adding zero bytes doesn't change the answer, and neither do\ntranspositions. However, CRC is, AIUI, designed to be resistant to\nsuch problems. Your remark about large blocks of zero bytes is\ninteresting to me in this context, but in a quick search I couldn't\nfind anything stating that CRC was weak for such use cases.\n\nThe old thread about switching from 64-bit CRC to 32-bit CRC had a\nlink to a page which has subsequently been moved to here:\n\nhttps://www.ece.unb.ca/tervo/ee4253/crc.shtml\n\nDown towards the bottom, it says:\n\n\"In general, bit errors and bursts up to N-bits long will be detected\nfor a P(x) of degree N. For arbitrary bit errors longer than N-bits,\nthe odds are one in 2^{N} that a totally false bit pattern will\nnonetheless lead to a zero remainder.\"\n\nWhich I think is the same thing I'm saying: the chances of failing to\ndetect an error with a decent n-bit checksum ought to be about\n2^{-N}. If that's not right, I'd really like to understand why.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 27 Mar 2020 14:13:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
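Both halves of the argument above are easy to check numerically: the 1-2^{-32} figure for arbitrary corruption, and the stronger guarantee from the quoted UNB page that a degree-N polynomial detects every error of up to N bits. A small sketch using `zlib`'s CRC-32 (the Ethernet polynomial, same degree as the CRC-32C PostgreSQL uses):

```python
import zlib

# An n-bit checksum with uniformly distributed outputs misses an arbitrary
# corruption with probability about 2**-n.
miss = 2.0 ** -32
print(f"detection rate: {1 - miss:.10f}")

# A CRC is guaranteed to detect any single-bit error, so flipping each bit
# in turn must always change the checksum.
data = bytearray(b"some block of backup data" * 100)
base = zlib.crc32(bytes(data))
undetected = 0
for bit in range(len(data) * 8):
    data[bit // 8] ^= 1 << (bit % 8)  # flip one bit
    if zlib.crc32(bytes(data)) == base:
        undetected += 1
    data[bit // 8] ^= 1 << (bit % 8)  # flip it back
print(f"single-bit errors missed: {undetected}")  # 0
```

The exhaustive loop only exercises the single-bit case; the 2^{-32} figure for longer errors is probabilistic, which is exactly the distinction the two sides of this exchange are arguing over.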
{
"msg_contents": "On Thu, Mar 26, 2020 at 4:37 PM David Steele <david@pgmasters.net> wrote:\n> I agree with Stephen that this should be done, but I agree with you that\n> it can wait for a future commit. However, I do think:\n>\n> 1) It should be called out rather plainly in the documentation.\n> 2) If there are files in pg_wal then pg_validatebackup should inform the\n> user that those files have not been validated.\n\nI agree with you about #1, and I suspect that there's a way to improve\nwhat I've got here now, but I think I might be too close to this to\nfigure out what the best way would be, so suggestions welcome.\n\nI think #2 is an interesting idea and could possibly reduce the danger\nof user confusion on this point considerably - because, let's face it,\nnot everyone is going to read the documentation. However, I'm having a\nhard time figuring out exactly what we'd print. Right now on success,\nunless you specify -q, you get:\n\n[rhaas ~]$ pg_validatebackup ~/pgslave\nbackup successfully verified\n\nBut it feels strange and possibly confusing to me to print something like:\n\n[rhaas ~]$ pg_validatebackup ~/pgslave\nbackup successfully verified (except for pg_wal)\n\n...because there are a few other exceptions too, and also because it\nmight make the user think that we normally check that but for some\nreason decided to skip it in this case. Maybe something more verbose\nlike:\n\n[rhaas ~]$ pg_validatebackup ~/pgslave\nbackup files successfully verified\nyour backup contains a pg_wal directory, but this tool can't validate\nthat, so do it yourself\n\n...but that seems a little obnoxious and a little silly to print out every time.\n\nIdeas?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 27 Mar 2020 14:34:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 4:44 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Is it actually possible, today, in PG, to have a 4GB WAL record?\n> Judging this based on the WAL record size doesn't seem quite right.\n\nI'm not sure. I mean, most records are quite small, but I think if you\nset REPLICA IDENTITY FULL on a table with a bunch of very wide columns\n(and also wal_level=logical) it can get really big. I haven't tested\nto figure out just how big it can get. (If I have a table with lots of\nalmost-1GB-blobs in it, does it work without logical replication and\nfail with logical replication? I don't know, but I doubt a WAL record\n>4GB is possible, because it seems unlikely that the code has a way to\ncope with that struct field overflowing.)\n\n> Again, I'm not against having a checksum algorithm as a option. I'm not\n> saying that it must be SHA512 as the default.\n\nI think that what we have seen so far is that all of the SHA-n\nalgorithms that PostgreSQL supports are about equally slow, so it\ndoesn't really matter which one you pick there from a performance\npoint of view. If you're not saying it has to be SHA-512 but you do\nwant it to be SHA-256, I don't think that really fixes anything. Using\nCRC-32C does fix the performance issue, but I don't think you like\nthat, either. We could default to having no checksums at all, or even\nno manifest at all, but I didn't get the impression that David, at\nleast, wanted to go that way, and I don't like it either. It's not the\nworld's best feature, but I think it's good enough to justify enabling\nit by default. So I'm not sure we have any options here that will\nsatisfy you.\n\n> > > I don't agree with limiting our view to only those algorithms that we've\n> > > already got implemented in PG.\n> >\n> > I mean, opening that giant can of worms ~2 weeks before feature freeze\n> > is not very nice. 
This patch has been around for months, and the\n> > algorithms were openly discussed a long time ago.\n>\n> Yes, they were discussed before, and these issues were brought up before\n> and there was specifically concern brought up about exactly the same\n> issues that I'm repeating here. Those concerns seem to have been\n> largely ignored, apparently because \"we don't have that in PG today\" as\n> at least one of the considerations- even though we used to.\n\nI might have missed something, but I don't remember any suggestion of\nCRC-64 or other algorithms for which PG does not currently have\nsupport prior to this week. The only thing I remember having been\nsuggested previously was SHA, and I responded to that by adding\nsupport for SHA, not by ignoring the suggestion. If there was another\nsuggestion made earlier, I must have missed it.\n\n> I also had hoped that\n> David's concerns that were raised before had been heeded, as I knew he\n> was involved in the discussion previously, but that turns out to not\n> have been the case.\n\nWell, I mean, I am trying pretty hard here, but I realize that I'm not\nsucceeding. I don't know which specific suggestion you're talking\nabout here. I understand that there is a concern about a 32-bit CRC\nsomehow not being valid for more than 512MB, but based on my research,\nI believe that to be incorrect. I've explained the reasons why I\nbelieve it to be incorrect several times now, but I feel like we're\njust going around in circles. If my explanation of why it's incorrect\nis itself incorrect, tell me why, but let's not just keep saying the\nthings we've both already said.\n\n> Yes, that looks fine. Feels slightly redundant to include the \"as\n> described above ...\" bit, and I think that could be dropped, but up to\n> you.\n\nDone in the version I posted a bit ago.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 27 Mar 2020 14:53:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 2:29 AM Andres Freund <andres@anarazel.de> wrote:\n> s/ye'/yes/\n\nUgh, sorry. Fixed in the version posted earlier.\n\n> Are you planning to include a specification of the manifest file format\n> anywhere? I looked through the patches and didn't find anything.\n\nI thought about that. I think it would be good to have. I was sort of\nhoping to leave it for a follow-on patch, but maybe that's cheating\ntoo much.\n\n> I think it'd also be good to include more information about what the\n> point of manifest files actually is.\n\nWhat kind of information do you want to see included there? Basically,\nthe way the documentation is written right now, it essentially says,\nwell, we have this manifest thing so that you can later run\npg_validatebackup, and pg_validatebackup says that it's there to check\nthe integrity of backups using the manifest. This is all a bit\ncircular, though, and maybe needs elaboration.\n\nWhat I've experienced is that:\n\n- Sometimes people take a backup and then wonder later whether the\ndisk has flipped some bits.\n- Sometimes people restore a backup and forget some of the parts, like\nthe user-defined tablespaces.\n- Sometimes anti-virus software, or a poorly-run cron job run amok,\nwanders around inflicting unpredictable damage.\n\nIt would be nice to have a system that would notice these kinds of\nthings on a running system, but here I've got the more modest goal of\nchecking for them in the context of a backup. If the data gets corrupted\nin transit, or if the disk mutilates it, or if the user mutilates it, you\nneed something to check the backup against to find out that bad things\nhave happened; the manifest is that thing. 
But I don't know exactly how\nmuch of all that should go in the docs, or in what way.\n\n> > + <para>\n> > +   <application>pg_validatebackup</application> reads the manifest file of a\n> > +   backup, verifies the manifest against its own internal checksum, and then\n> > +   verifies that the same files are present in the target directory as in the\n> > +   manifest itself. It then verifies that each file has the expected checksum,\n>\n> Depending on what you want to use the manifest for, we'd also need to\n> check that there are no additional files. That seems to actually be\n> implemented, which imo should be mentioned here.\n\nI intended the text to say that, because it says that it checks that\nthe two things are \"the same,\" which is symmetric. In the new version\nI posted a bit ago, I tried to make it more explicit, because\napparently it was not sufficiently clear.\n\n> Hm. Is it a great choice to include the checksum for the manifest inside\n> the manifest itself? With a cryptographic checksum it seems like it\n> could make a ton of sense to store the checksum somewhere \"safe\", but\n> keep the manifest itself alongside the base backup itself. While not\n> huge, they won't be tiny either.\n\nSeems like the user could just copy the manifest checksum and store it\nsomewhere, if they wish. Then they can check it against the manifest\nitself later, if they wish. Or they can take a SHA-512 of the whole\nfile and store that securely. The problem is that we have no idea how\nto write that checksum to more secure storage. We could write\nbackup_manifest and backup_manifest.checksum into separate files, but\nthat seems like it's adding complexity without any real benefit.\n\nTo me, the security-related uses of this patch seem to be fairly\nniche. I think it's nice that they exist, but I don't think that's the\nmain selling point. 
For me, the main selling point is that you can\ncheck that your disk didn't eat your data and that nobody nuked any\nfiles that were supposed to be there.\n\n> Doesn't have to be in the first version, but could it be useful to move\n> this to common/ or such?\n\nYeah. At one point, this code was written in a way that was totally\nspecific to pg_validatebackup, but I then realized that it would be\nbetter to make it more general, so I refactored it into the form\nyou see now, where pg_validatebackup.c depends on parse_manifest.c but\nnot the reverse. I suspect that if someone wants to use this for\nsomething else they might need to change a few more things - not sure\nexactly what - but I don't think it would be too hard. I thought it\nwould be best to leave that task until someone has a concrete use case\nin mind, but I did want it to be relatively easy to do that down\nthe road, and I hope that the way I've organized the code achieves\nthat.\n\n> > +static void\n> > +validate_backup_directory(validator_context *context, char *relpath,\n> > +                          char *fullpath)\n> > +{\n>\n> Hm. Should this warn if the directory's permissions are set too openly\n> (world writable?)?\n\nI don't think so, but it's pretty clear that different people have\ndifferent ideas about what the scope of this tool ought to be, even in\nthis first version.\n\n> Hm. I think it'd be good to verify that the checksummed size is the same\n> as the size of the file in the manifest.\n\nThat's checked in an earlier phase. Are you worried about the file\nbeing modified after the first pass checks the size and before we come\nthrough to do the checksumming?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 27 Mar 2020 15:20:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
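Robert's suggestion above — copy the manifest checksum, store it somewhere safe, and check it against the manifest later — can be done entirely outside the server. A minimal Python sketch, assuming (per the verify_manifest_checksum code quoted later in the thread) that the manifest's final line embeds a hex SHA-256 covering everything up to and including the penultimate newline; the format details here are an assumption, not the patch's specification:

```python
import hashlib

def verify_manifest_bytes(data: bytes) -> bool:
    """Recompute a backup_manifest's embedded self-checksum.

    Assumption: the final line carries a hex SHA-256 of every byte up
    to and including the penultimate newline.
    """
    # Index just past the penultimate newline, i.e. the start of the
    # final (checksum-bearing) line.
    body_end = data.rstrip(b"\n").rfind(b"\n") + 1
    if body_end <= 0:
        raise ValueError("expected at least 2 lines")
    body, last_line = data[:body_end], data[body_end:]
    expected = hashlib.sha256(body).hexdigest().encode()
    return expected in last_line
```

A user who wants the checksum in safer storage can simply copy that final line (or, as Robert notes, a SHA-512 of the whole file) elsewhere and compare against it later.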
{
"msg_contents": "On Fri, Mar 27, 2020 at 11:26 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Seems better to (later?) add support for generating manifests for WAL\n> > files, and then have a tool that can verify all the manifests required\n> > to restore a base backup.\n>\n> I'm not trying to expand on the feature set here or move the goalposts\n> way down the road, which is what seems to be what's being suggested\n> here. To be clear, I don't have any objection to adding a generic tool\n> for validating WAL as you're talking about here, but I also don't think\n> that's required for pg_validatebackup. What I do think we need is a\n> check of the WAL that's fetched when people use pg_basebackup -Xstream\n> or -Xfetch. pg_basebackup itself has that check because it's critical\n> to the backup being successful and valid. Not having that basic\n> validation of a backup really just isn't ok- there's a reason\n> pg_basebackup has that check.\n\nI don't understand how this could be done without significantly\ncomplicating the architecture. As I said before, -Xstream sends WAL\nover a separate connection that is unrelated to the one running\nBASE_BACKUP, so the base-backup connection doesn't know what to\ninclude in the manifest. Now you could do something like: once all of\nthe WAL files have been fetched, the client checksums all of those and\nsends their names and checksums to the server, which turns around and\nputs them into the manifest, which it then sends back to the client.\nBut that is actually quite a bit of additional complexity, and it's\npretty strange, too, because now you have the client checksumming some\nfiles and the server checksumming others. I know you mentioned a few\ndifferent ideas before, but I think they all kinda have some problem\nalong these lines.\n\nI also kinda disagree with the idea that the WAL should be considered\nan integral part of the backup. 
I don't know how pgbackrest does\nthings, but BART stores each backup in a separate directory without any\nassociated WAL, and then keeps all the WAL together in a different\ndirectory. I imagine that people who are using continuous archiving\nalso tend to use -Xnone, or if they do backups by copying the files\nrather than using pg_backrest, they exclude pg_wal. In fact, for\npeople with big, important databases, I'd assume that would be the\nnormal pattern. You presumably wouldn't want to keep one copy of the\nWAL files taken during the backup with the backup itself, and a\nseparate copy in the archive.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 27 Mar 2020 15:29:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Mar 26, 2020 at 4:37 PM David Steele <david@pgmasters.net> wrote:\n> > I agree with Stephen that this should be done, but I agree with you that\n> > it can wait for a future commit. However, I do think:\n> >\n> > 1) It should be called out rather plainly in the documentation.\n> > 2) If there are files in pg_wal then pg_validatebackup should inform the\n> > user that those files have not been validated.\n> \n> I agree with you about #1, and I suspect that there's a way to improve\n> what I've got here now, but I think I might be too close to this to\n> figure out what the best way would be, so suggestions welcome.\n> \n> I think #2 is an interesting idea and could possibly reduce the danger\n> of user confusion on this point considerably - because, let's face it,\n> not everyone is going to read the documentation. However, I'm having a\n> hard time figuring out exactly what we'd print. Right now on success,\n> unless you specify -q, you get:\n> \n> [rhaas ~]$ pg_validatebackup ~/pgslave\n> backup successfully verified\n> \n> But it feels strange and possibly confusing to me to print something like:\n> \n> [rhaas ~]$ pg_validatebackup ~/pgslave\n> backup successfully verified (except for pg_wal)\n> \n> ...because there are a few other exceptions too, and also because it\n\nThe exceptions you're referring to here are things like the various\nsignal files, that the user can recreated pretty easily..? I don't\nthink those really rise to the level of pg_wal.\n\nWhat I would hope to see (... 
well, we know what I *really* would hope\nto see, but if we really go this route) is something like:\n\nWARNING: pg_wal not empty, WAL files are not validated by this tool\ndata files successfully verified\n\nand a non-zero exit code.\n\nBasically, if you're doing WAL yourself, then you'd use pg_receivewal\nand maybe your own manifest-building code for WAL or something and then\nuse -X none with pg_basebackup.\n\nThen again, I'd have -X none throw a warning too. I'd be alright with\nall of these having override switches to say \"ok, I get it, don't\ncomplain about it\".\n\nI disagree with the idea of writing \"backup successfully verified\" when\nwe aren't doing any checking of the WAL that's essential for the backup\n(unlike various signal files and whatnot, which aren't...).\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Mar 2020 15:48:50 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
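Stephen's proposed behavior — warn that WAL was not validated when pg_wal is non-empty, and exit non-zero — is straightforward to prototype. A hypothetical sketch (the messages and exit code mirror his example; pg_validatebackup's actual behavior is still under discussion in this thread):

```python
import os
import sys

def report_wal_not_validated(backup_dir: str) -> int:
    """Warn and return a non-zero status if pg_wal contains files.

    Hypothetical: models Stephen's proposal, not the tool's real output.
    """
    wal_dir = os.path.join(backup_dir, "pg_wal")
    has_wal = os.path.isdir(wal_dir) and any(os.scandir(wal_dir))
    if has_wal:
        print("WARNING: pg_wal not empty, WAL files are not validated by this tool",
              file=sys.stderr)
        print("data files successfully verified")
        return 1  # non-zero exit, per the proposal
    print("backup successfully verified")
    return 0
```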
{
"msg_contents": "On 3/27/20 1:53 PM, Robert Haas wrote:\n> On Thu, Mar 26, 2020 at 4:37 PM David Steele <david@pgmasters.net> wrote:\n>> I know you and Stephen have agreed on a number of doc changes, would it\n>> be possible to get a new patch with those included? I finally have time\n>> to do a review of this tomorrow. I saw some mistakes in the docs in the\n>> current patch but I know those patches are not current.\n> \n> Hi David,\n> \n> Here's a new version with some fixes:\n> \n> - Fixes for doc typos noted by Stephen Frost and Andres Freund.\n> - Replace a doc paragraph about the advantages and disadvantages of\n> CRC-32C with one by Stephen Frost, with a slightly change by me that I\n> thought made it sound more grammatical.\n> - Change the pg_validatebackup documentation so that it makes no\n> mention of compatible tools, per Stephen.\n> - Reword the discussion of the exclude list in the pg_validatebackup\n> documentation, per discussion between Stephen and myself.\n> - Try to make the documentation more clear about the fact that we\n> check for both extra and missing files.\n> - Incorporate a fix from Amit Kapila to make 003_corruption.pl pass on Windows.\n\nThanks!\n\nThere appear to be conflicts with 67e0adfb3f98:\n\n$ git apply -3 \n../download/v14-0002-Generate-backup-manifests-for-base-backups-and-v.patch\n../download/v14-0002-Generate-backup-manifests-for-base-backups-and-v.patch:3396: \ntrailing whitespace.\nsub cleanup_search_directory_fails\nerror: patch failed: src/backend/replication/basebackup.c:258\nFalling back to three-way merge...\nApplied patch to 'src/backend/replication/basebackup.c' with conflicts.\nU src/backend/replication/basebackup.c\nwarning: 1 line adds whitespace errors.\n\n > + Specifies the algorithm that should be used to checksum \neach file\n > + for purposes of the backup manifest. Currently, the available\n\nperhaps \"for inclusion in the backup manifest\"? 
Anyway, I think this \nsentence is awkward.\n\n > + Specifies the algorithm that should be used to checksum each \nfile\n > + for purposes of the backup manifest. Currently, the available\n\nAnd again.\n\n > + because the files themselves do not need to read.\n\nshould be \"need to be read\".\n\n > + the manifest itself will always contain a \n<literal>SHA256</literal>\n\nI think just \"the manifest will always contain\" is fine.\n\n > + manifeste itself, and is therefore ignored. Note that the \nmanifest\n\ntypo \"manifeste\", perhaps remove itself.\n\n > { \"Path\": \"backup_label\", \"Size\": 224, \"Last-Modified\": \"2020-03-27 \n18:33:18 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"b914bec9\" },\n\nStoring the checksum type with each file seems pretty redundant. \nPerhaps that could go in the header? You could always override if a \nspecific file had a different checksum type, though that seems unlikely.\n\nIn general it might be good to go with shorter keys: \"mod\", \"chk\", etc. \nManifests can get pretty big and that's a lot of extra bytes.\n\nI'm also partial to using epoch time in the manifest because it is \ngenerally easier for programs to work with. But, human-readable doesn't \nsuck, either.\n\n > \tif (maxrate > 0)\n > \t\tmaxrate_clause = psprintf(\"MAX_RATE %u\", maxrate);\n > +\tif (manifest)\n\nA linefeed here would be nice.\n\n > +\tmanifestfile *tabent;\n\nThis is an odd name. A holdover from the tab-delimited version?\n\n > +\tprintf(_(\"Usage:\\n %s [OPTION]... BACKUPDIR\\n\\n\"), progname);\n\nWhen I ran pg_validatebackup I expected to use -D to specify the backup \ndir since pg_basebackup does. 
On the other hand -D is weird because I \n*really* expect that to be the pg data dir.\n\nBut, do we want this to be different from pg_basebackup?\n\n > +\t\tchecksum_length = checksum_string_length / 2;\n\nThis check is defeated if a single character is added to the checksum.\n\nNot too big a deal since you still get an error, but still.\n\n > + * Verify that the manifest checksum is correct.\n\nThis is not working the way I would expect -- I could freely modify the \nmanifest without getting a checksum error on the manifest. For example:\n\n$ /home/vagrant/test/pg/bin/pg_validatebackup test/backup3\npg_validatebackup: fatal: invalid checksum for file \"backup_label\": \n\"408901e0814f40f8ceb7796309a59c7248458325a21941e7c55568e381f53831?\"\n\nSo, if I deleted the entry above, I got a manifest checksum error. But \nif I just modified the checksum I get a file checksum error with no \nmanifest checksum error.\n\nI would prefer a manifest checksum error in all cases where it is wrong, \nunless --exit-on-error is specified.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Mar 2020 15:50:56 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
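On David's `checksum_length = checksum_string_length / 2` point: integer division silently truncates an odd-length hex string, defeating the length validation, so the malformed input is only caught later with a less helpful error. A strict decode rejects it up front — an illustrative sketch, not the patch's code:

```python
def parse_checksum(checksum_string: str) -> bytes:
    """Strictly decode a hex checksum string.

    Computing the byte length as len // 2 silently drops a trailing
    odd character; checking the parity first (and letting
    bytes.fromhex reject non-hex digits) fails fast instead.
    """
    if len(checksum_string) % 2 != 0:
        raise ValueError("checksum string has odd length")
    return bytes.fromhex(checksum_string)
```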
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Mar 26, 2020 at 4:44 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Is it actually possible, today, in PG, to have a 4GB WAL record?\n> > Judging this based on the WAL record size doesn't seem quite right.\n> \n> I'm not sure. I mean, most records are quite small, but I think if you\n> set REPLICA IDENTITY FULL on a table with a bunch of very wide columns\n> (and also wal_level=logical) it can get really big. I haven't tested\n> to figure out just how big it can get. (If I have a table with lots of\n> almost-1GB-blobs in it, does it work without logical replication and\n> fail with logical replication? I don't know, but I doubt a WAL record\n> >4GB is possible, because it seems unlikely that the code has a way to\n> cope with that struct field overflowing.)\n\nInteresting.. Well, topic for another thread, but I'd say if we believe\nthat's possible then we might want to consider if the crc32c is a good\ndecision to use still there.\n\n> > Again, I'm not against having a checksum algorithm as a option. I'm not\n> > saying that it must be SHA512 as the default.\n> \n> I think that what we have seen so far is that all of the SHA-n\n> algorithms that PostgreSQL supports are about equally slow, so it\n> doesn't really matter which one you pick there from a performance\n> point of view. If you're not saying it has to be SHA-512 but you do\n> want it to be SHA-256, I don't think that really fixes anything. Using\n> CRC-32C does fix the performance issue, but I don't think you like\n> that, either. We could default to having no checksums at all, or even\n> no manifest at all, but I didn't get the impression that David, at\n> least, wanted to go that way, and I don't like it either. It's not the\n> world's best feature, but I think it's good enough to justify enabling\n> it by default. 
So I'm not sure we have any options here that will\n> satisfy you.\n\nI do like having a manifest by default. At this point it's pretty clear\nthat we've just got a fundamental disagreement that more words aren't\ngoing to fix. I'd rather we play it safe and use a sha256 hash and\naccept that it's going to be slower by default, and then give users an\noption to make it go faster if they want (though I'd much rather that\nalternative be a 64bit CRC than a 32bit one).\n\nAndres seems to agree with you. I'm not sure where David sits on this\nspecific question.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Mar 2020 15:55:12 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-27 14:13:17 -0400, Robert Haas wrote:\n> On Thu, Mar 26, 2020 at 4:44 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > I mean, the property that I care about is the one where it detects\n> > > better than 999,999,999 errors out of every 1,000,000,000, regardless\n> > > of input length.\n> >\n> > Throwing these kinds of things around I really don't think is useful.\n> \n> ...but I don't understand his reasoning, or yours.\n> \n> My reasoning for thinking that the number is accurate is that a 32-bit\n> checksum has 2^32 possible results. If all of those results are\n> equally probable, then the probability that two files with unequal\n> contents produce the same result is 2^-32. This does assume that the\n> hash function is perfect, which no hash function is, so the actual\n> probability of a collision is likely higher. But if the hash function\n> is pretty good, it shouldn't be all that much higher. Note that I am\n> making no assumptions here about how many bits are different, nor am I\n> making any assumption about the length of a file. I am simply saying\n> that an n-bit checksum should detect a difference between two files\n> with a probability of roughly 1-2^{-n}, modulo the imperfections of\n> the hash function. I thought that this was a well-accepted fact that\n> would produce little argument from anybody, and I'm confused that\n> people seem to feel otherwise.\n\nWell: crc32 is a terrible hash, if you're looking for even distribution\nof hashed values. That's not too surprising - its design goals included\nguaranteed error detection for certain lengths, and error correction of\nsingle bit errors. 
My understanding of the underlying math is spotty at\nbest, but from what I understand that does pretty directly imply less\nindependence between source data -> hash value than what we'd want from\na good hash function.\n\nHere's an smhasher result page for crc32 (at least the latter is crc32\nafaict):\nhttps://notabug.org/vaeringjar/smhasher/src/master/doc/crc32\nhttps://notabug.org/vaeringjar/smhasher/src/master/doc/crc32_hw\n\nand then compare that with something like xxhash, or even lookup3 (which\nI think is what our hash is a variant of):\nhttps://notabug.org/vaeringjar/smhasher/src/master/doc/xxHash32\nhttps://notabug.org/vaeringjar/smhasher/src/master/doc/lookup3\n\nThe birthday paradoxon doesn't apply (otherwise 32bit would never be\nenough, at a 50% chance of conflict at around 80k hashes), but still I\ndo wonder if it matters that we're trying to detect errors in not one,\nbut commonly tens of thousands to millions of files. But since we just\nneed to detect one error to call the whole backup corrupt...\n\n\n> One explanation that would make sense to me is if somebody said, well,\n> the nature of this particular algorithm means that, although values\n> are uniformly distributed in general, the kinds of errors that are\n> likely to occur in practice are likely to cancel out. For instance, if\n> you imagine trivial algorithms such as adding or xor-ing all the\n> bytes, adding zero bytes doesn't change the answer, and neither do\n> transpositions. However, CRC is, AIUI, designed to be resistant to\n> such problems. Your remark about large blocks of zero bytes is\n> interesting to me in this context, but in a quick search I couldn't\n> find anything stating that CRC was weak for such use cases.\n\nMy main point was that CRC's error detection guarantees are pretty much\nirrelevant for us. I.e. while the right CRC will guarantee that all\nsingle 2 bit errors will be detected, that's not a helpful property for\nus. 
There rarely are single bit errors, and the bursts are too long to\nto benefit from any >2 bit guarantees. Nor are multiple failures rare\nonce you hit a problem.\n\n\n> The old thread about switching from 64-bit CRC to 32-bit CRC had a\n> link to a page which has subsequently been moved to here:\n> \n> https://www.ece.unb.ca/tervo/ee4253/crc.shtml\n> \n> Down towards the bottom, it says:\n> \n> \"In general, bit errors and bursts up to N-bits long will be detected\n> for a P(x) of degree N. For arbitrary bit errors longer than N-bits,\n> the odds are one in 2^{N} than a totally false bit pattern will\n> nonetheless lead to a zero remainder.\"\n\nThat's still about a single sequence of bit errors though, as far as I\ncan tell. I.e. it doesn't hold for CRCs if you have two errors at\ndifferent places.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Mar 2020 12:56:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
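Andres's point that multiple error bursts defeat CRC's guarantees has a concrete algebraic face: every CRC is an affine function of the message bits over GF(2), so for equal-length inputs corruption patterns combine by XOR, and a second burst can deterministically cancel the first one's effect on the checksum — structure a general-purpose hash like SHA-2 does not have. A small demonstration (zlib's crc32 uses the IEEE polynomial rather than the Castagnoli CRC-32C PostgreSQL uses, but the linearity property is identical):

```python
import zlib

# For any CRC and any three equal-length messages a, b, c:
#     crc(a XOR b XOR c) == crc(a) XOR crc(b) XOR crc(c)
# because the init/xorout constants cancel in the three-way XOR.
a = b"0123456789abcdef"
b = b"fedcba9876543210"
c = b"AAAAAAAAAAAAAAAA"
mixed = bytes(x ^ y ^ z for x, y, z in zip(a, b, c))

assert zlib.crc32(mixed) == zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(c)
```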
{
"msg_contents": "On 3/27/20 3:20 PM, Robert Haas wrote:\n> On Fri, Mar 27, 2020 at 2:29 AM Andres Freund <andres@anarazel.de> wrote:\n> \n>> Hm. Is it a great choice to include the checksum for the manifest inside\n>> the manifest itself? With a cryptographic checksum it seems like it\n>> could make a ton of sense to store the checksum somewhere \"safe\", but\n>> keep the manifest itself alongside the base backup itself. While not\n>> huge, they won't be tiny either.\n> \n> Seems like the user could just copy the manifest checksum and store it\n> somewhere, if they wish. Then they can check it against the manifest\n> itself later, if they wish. Or they can take a SHA-512 of the whole\n> file and store that securely. The problem is that we have no idea how\n> to write that checksum to a more security storage. We could write\n> backup_manifest and backup_manifest.checksum into separate files, but\n> that seems like it's adding complexity without any real benefit.\n\nI agree that this seems like a separate problem. What Robert has done \nhere is detect random mutilation of the manifest.\n\nTo prevent malicious modifications you either need to store the checksum \nin another place, or digitally sign the file and store that alongside it \n(or inside it even). Either way seems pretty far out of scope to me.\n\n>> Hm. I think it'd be good to verify that the checksummed size is the same\n>> as the size of the file in the manifest.\n> \n> That's checked in an earlier phase. Are you worried about the file\n> being modified after the first pass checks the size and before we come\n> through to do the checksumming?\n\nI prefer to validate the size and checksum in the same pass, but I'm not \nsure it's that big a deal. If the backup is being corrupted under the \nvalidate process that would also apply to files that had already been \nvalidated.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Mar 2020 16:02:00 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-27 15:29:02 -0400, Robert Haas wrote:\n> On Fri, Mar 27, 2020 at 11:26 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Seems better to (later?) add support for generating manifests for WAL\n> > > files, and then have a tool that can verify all the manifests required\n> > > to restore a base backup.\n> >\n> > I'm not trying to expand on the feature set here or move the goalposts\n> > way down the road, which is what seems to be what's being suggested\n> > here. To be clear, I don't have any objection to adding a generic tool\n> > for validating WAL as you're talking about here, but I also don't think\n> > that's required for pg_validatebackup. What I do think we need is a\n> > check of the WAL that's fetched when people use pg_basebackup -Xstream\n> > or -Xfetch. pg_basebackup itself has that check because it's critical\n> > to the backup being successful and valid. Not having that basic\n> > validation of a backup really just isn't ok- there's a reason\n> > pg_basebackup has that check.\n> \n> I don't understand how this could be done without significantly\n> complicating the architecture. As I said before, -Xstream sends WAL\n> over a separate connection that is unrelated to the one running\n> BASE_BACKUP, so the base-backup connection doesn't know what to\n> include in the manifest. Now you could do something like: once all of\n> the WAL files have been fetched, the client checksums all of those and\n> sends their names and checksums to the server, which turns around and\n> puts them into the manifest, which it then sends back to the client.\n> But that is actually quite a bit of additional complexity, and it's\n> pretty strange, too, because now you have the client checksumming some\n> files and the server checksumming others. I know you mentioned a few\n> different ideas before, but I think they all kinda have some problem\n> along these lines.\n\nHow about having separate manifests for segments? 
And have them stay\nseparate? And then have an option to verify the manifests for all the\nWAL files that are required for a specific restore? The easiest way\nwould be to just add a separate manifest file for each segment, and name\nthem accordingly. But inventing a naming pattern that specifies both\nstart-end segments wouldn't be hard either, and result in fewer\nmanifests.\n\nBase backups (in the backup sense, not for bringing up replicas etc)\nwithout the ability to apply newer WAL are fairly pointless imo. And if\nnewer WAL is applied, there's not much point in just verifying the WAL\nthat's necessary to restore the base backup. Instead you'd want to be\nable to verify all the WAL since the base backup to the \"current\" point\n(or the next base backup).\n\nFor me having something inside pg_basebackup (or the server, for\n-Xfetch) that somehow includes the WAL files in the manifest doesn't\nreally gain us much - it's obviously not something that'll help us to\nverify all the WAL that needs to be applied (to either get the base\nbackup into a consistent state, or to roll forward to the desired\npoint).\n\n\n\n> I also kinda disagree with the idea that the WAL should be considered\n> an integral part of the backup. I don't know how pgbackrest does\n> things, but BART stores each backup in a separate directly without any\n> associated WAL, and then keeps all the WAL together in a different\n> directory. I imagine that people who are using continuous archiving\n> also tend to use -Xnone, or if they do backups by copying the files\n> rather than using pg_backrest, they exclude pg_wal. In fact, for\n> people with big, important databases, I'd assume that would be the\n> normal pattern. 
You presumably wouldn't want to keep one copy of the\n> WAL files taken during the backup with the backup itself, and a\n> separate copy in the archive.\n\n+1\n\nI also don't see them as being as important, due to the already existing\nchecksums (which are of a much much much higher quality than what we\nhave for database pages, both by being wider, and by being much more\nfrequent in most cases). There's obviously a need to validate the WAL in\na nicer way than scripting pg_waldump - but that seems separate anyway.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Mar 2020 13:08:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-27 14:34:19 -0400, Robert Haas wrote:\n> I think #2 is an interesting idea and could possibly reduce the danger\n> of user confusion on this point considerably - because, let's face it,\n> not everyone is going to read the documentation. However, I'm having a\n> hard time figuring out exactly what we'd print. Right now on success,\n> unless you specify -q, you get:\n> \n> [rhaas ~]$ pg_validatebackup ~/pgslave\n> backup successfully verified\n> \n> But it feels strange and possibly confusing to me to print something like:\n> \n> [rhaas ~]$ pg_validatebackup ~/pgslave\n> backup successfully verified (except for pg_wal)\n\nYou could print something like:\nWAL necessary to restore this base backup can be validated with:\n\npg_waldump -p ~/pgslave -t tl -s backup_start_location -e backup_end_loc > /dev/null && echo true\n\nObviously that specific invocation sucks, but it'd not be hard to add an\noption to waldump to not output anything.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Mar 2020 13:12:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/27/20 3:29 PM, Robert Haas wrote:\n> On Fri, Mar 27, 2020 at 11:26 AM Stephen Frost <sfrost@snowman.net> wrote:\n>>> Seems better to (later?) add support for generating manifests for WAL\n>>> files, and then have a tool that can verify all the manifests required\n>>> to restore a base backup.\n>>\n>> I'm not trying to expand on the feature set here or move the goalposts\n>> way down the road, which is what seems to be what's being suggested\n>> here. To be clear, I don't have any objection to adding a generic tool\n>> for validating WAL as you're talking about here, but I also don't think\n>> that's required for pg_validatebackup. What I do think we need is a\n>> check of the WAL that's fetched when people use pg_basebackup -Xstream\n>> or -Xfetch. pg_basebackup itself has that check because it's critical\n>> to the backup being successful and valid. Not having that basic\n>> validation of a backup really just isn't ok- there's a reason\n>> pg_basebackup has that check.\n> \n> I don't understand how this could be done without significantly\n> complicating the architecture. As I said before, -Xstream sends WAL\n> over a separate connection that is unrelated to the one running\n> BASE_BACKUP, so the base-backup connection doesn't know what to\n> include in the manifest. Now you could do something like: once all of\n> the WAL files have been fetched, the client checksums all of those and\n> sends their names and checksums to the server, which turns around and\n> puts them into the manifest, which it then sends back to the client.\n> But that is actually quite a bit of additional complexity, and it's\n> pretty strange, too, because now you have the client checksumming some\n> files and the server checksumming others. I know you mentioned a few\n> different ideas before, but I think they all kinda have some problem\n> along these lines.\n> \n> I also kinda disagree with the idea that the WAL should be considered\n> an integral part of the backup. 
I don't know how pgbackrest does\n> things, \n\nWe checksum each WAL file while it is read and transmitted to the repo \nby the archive_command. Then at the end of the backup we ensure that \nall the WAL required to make the backup consistent has made it to the repo.\n\n> but BART stores each backup in a separate directly without any\n> associated WAL, and then keeps all the WAL together in a different\n> directory. I imagine that people who are using continuous archiving\n> also tend to use -Xnone, or if they do backups by copying the files\n> rather than using pg_backrest, they exclude pg_wal. In fact, for\n> people with big, important databases, I'd assume that would be the\n> normal pattern. You presumably wouldn't want to keep one copy of the\n> WAL files taken during the backup with the backup itself, and a\n> separate copy in the archive.\n\npgBackRest does provide the option to copy WAL into the backup directory \nfor the super-paranoid, though it is not the default. It is pretty handy \nfor moving individual backups to some other medium like tape, though.\n\nIf -Xnone is specified then it seems like pg_validatebackup is \ncompletely off the hook. But in the case of -Xstream or -Xfetch \ncouldn't we at least verify that the expected WAL segments are present \nand the correct size?\n\nStoring the start/stop lsn in the manifest would be a nice thing to have \nanyway and that would make this feature pretty trivial. Yeah, that's in \nthe backup_label file as well but the manifest is so much easier to read.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Mar 2020 16:16:11 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
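David's observation — with the start/stop LSN in the manifest, checking that the expected WAL segments are present and the right size becomes nearly trivial — can be sketched as follows. This assumes the default 16MB segment size and the standard segment naming convention; the function names are hypothetical:

```python
import os

WAL_SEG_SIZE = 16 * 1024 * 1024  # default wal_segment_size

def wal_segment_name(tli: int, segno: int, seg_size: int = WAL_SEG_SIZE) -> str:
    """Standard WAL file name: 8 hex digits each for the timeline and
    the high/low parts of the segment number."""
    segs_per_xlogid = 0x100000000 // seg_size
    return "%08X%08X%08X" % (tli, segno // segs_per_xlogid, segno % segs_per_xlogid)

def check_wal_present(wal_dir: str, tli: int, start_lsn: int, stop_lsn: int):
    """Return the names of segments covering [start_lsn, stop_lsn] that
    are missing from wal_dir or not exactly one segment in size."""
    missing = []
    for segno in range(start_lsn // WAL_SEG_SIZE, stop_lsn // WAL_SEG_SIZE + 1):
        name = wal_segment_name(tli, segno)
        path = os.path.join(wal_dir, name)
        if not os.path.isfile(path) or os.path.getsize(path) != WAL_SEG_SIZE:
            missing.append(name)
    return missing
```

This only checks presence and size, not WAL content — content validation is what pg_waldump (or a future WAL manifest) would cover.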
{
"msg_contents": "Hi,\n\nOn 2020-03-27 15:20:27 -0400, Robert Haas wrote:\n> On Fri, Mar 27, 2020 at 2:29 AM Andres Freund <andres@anarazel.de> wrote:\n> > Are you planning to include a specification of the manifest file format\n> > anywhere? I looked through the patches and didn't find anything.\n> \n> I thought about that. I think it would be good to have. I was sort of\n> hoping to leave it for a follow-on patch, but maybe that's cheating\n> too much.\n\nI don't like having a file format that's intended to be used by external\ntools too that's undocumented except for code that assembles it in a\npiecemeal fashion. Do you mean in a follow-on patch this release, or\nlater? I don't have a problem with the former.\n\n\n> > I think it'd also be good to include more information about what the\n> > point of manifest files actually is.\n> \n> What kind of information do you want to see included there? Basically,\n> the way the documentation is written right now, it essentially says,\n> well, we have this manifest thing so that you can later run\n> pg_validatebackup, and pg_validatebackup says that it's there to check\n> the integrity of backups using the manifest. This is all a bit\n> circular, though, and maybe needs elaboration.\n\nI do found it to be circular. I think we mostly need a paragraph or two\nsomewhere that explains on a higher level what the point of verifying\nbase backups is and what is verified.\n\n\n> > Hm. Is it a great choice to include the checksum for the manifest inside\n> > the manifest itself? With a cryptographic checksum it seems like it\n> > could make a ton of sense to store the checksum somewhere \"safe\", but\n> > keep the manifest itself alongside the base backup itself. While not\n> > huge, they won't be tiny either.\n> \n> Seems like the user could just copy the manifest checksum and store it\n> somewhere, if they wish. Then they can check it against the manifest\n> itself later, if they wish. 
Or they can take a SHA-512 of the whole\n> file and store that securely. The problem is that we have no idea how\n> to write that checksum to a more security storage. We could write\n> backup_manifest and backup_manifest.checksum into separate files, but\n> that seems like it's adding complexity without any real benefit.\n> \n> To me, the security-related uses of this patch seem to be fairly\n> niche. I think it's nice that they exist, but I don't think that's the\n> main selling point. For me, the main selling point is that you can\n> check that your disk didn't eat your data and that nobody nuked any\n> files that were supposed to be there.\n\nOh, I agree. I wasn't really mentioning the crypto checksum because of\nit being \"security\" stuff, but because of the quality of the guarantee\nit gives. I don't know how large the manifest file will be for a setup\nof with a lot of partitioned tables, but I'd expect it to not be\ntiny. So not having to store it in the 'archiving sytem' is nice.\n\nFWIW, I was thinking of backup_manifest.checksum potentially being\ndesirable for another reason: The need to embed the checksum inside the\ndocument imo adds a fair bit of rigidity to the file format. See\n\n> +static void\n> +verify_manifest_checksum(JsonManifestParseState *parse, char *buffer,\n> +\t\t\t\t\t\t size_t size)\n> +{\n...\n> +\n> +\t/* Find the last two newlines in the file. */\n> +\tfor (i = 0; i < size; ++i)\n> +\t{\n> +\t\tif (buffer[i] == '\\n')\n> +\t\t{\n> +\t\t\t++number_of_newlines;\n> +\t\t\tpenultimate_newline = ultimate_newline;\n> +\t\t\tultimate_newline = i;\n> +\t\t}\n> +\t}\n> +\n> +\t/*\n> +\t * Make sure that the last newline is right at the end, and that there are\n> +\t * at least two lines total. 
We need this to be true in order for the\n> +\t * following code, which computes the manifest checksum, to work properly.\n> +\t */\n> +\tif (number_of_newlines < 2)\n> +\t\tjson_manifest_parse_failure(parse->context,\n> +\t\t\t\t\t\t\t\t\t\"expected at least 2 lines\");\n> +\tif (ultimate_newline != size - 1)\n> +\t\tjson_manifest_parse_failure(parse->context,\n> +\t\t\t\t\t\t\t\t\t\"last line not newline-terminated\");\n> +\n> +\t/* Checksum the rest. */\n> +\tpg_sha256_init(&manifest_ctx);\n> +\tpg_sha256_update(&manifest_ctx, (uint8 *) buffer, penultimate_newline + 1);\n> +\tpg_sha256_final(&manifest_ctx, manifest_checksum_actual);\n\nwhich certainly isn't \"free form json\".\n\n\n> > Doesn't have to be in the first version, but could it be useful to move\n> > this to common/ or such?\n> \n> Yeah. At one point, this code was written in a way that was totally\n> specific to pg_validatebackup, but I then realized that it would be\n> better to make it more general, so I refactored it into in the form\n> you see now, where pg_validatebackup.c depends on parse_manifest.c but\n> not the reverse. I suspect that if someone wants to use this for\n> something else they might need to change a few more things - not sure\n> exactly what - but I don't think it would be too hard. I thought it\n> would be best to leave that task until someone has a concrete use case\n> in mind, but I did want it to to be relatively easy to do that down\n> the road, and I hope that the way I've organized the code achieves\n> that.\n\nCool.\n\n\n> > > +static void\n> > > +validate_backup_directory(validator_context *context, char *relpath,\n> > > + char *fullpath)\n> > > +{\n> >\n> > Hm. Should this warn if the directory's permissions are set too openly\n> > (world writable?)?\n> \n> I don't think so, but it's pretty clear that different people have\n> different ideas about what the scope of this tool ought to be, even in\n> this first version.\n\nYea. 
I don't have a strong opinion on this specific issue. I was mostly\nwondering because I've repeatedly seen people restore backups with world\nreadable properties, and with that it's obviously possible for somebody\nelse to change the contents after the checksum was computed.\n\n\n> > Hm. I think it'd be good to verify that the checksummed size is the same\n> > as the size of the file in the manifest.\n> \n> That's checked in an earlier phase. Are you worried about the file\n> being modified after the first pass checks the size and before we come\n> through to do the checksumming?\n\nNot really, I wondered about it for a bit, and then decided that it's\ntoo remote an issue.\n\nWhat I've seen a couple of times is that actually reading a file can\nresult in the file ending being reported at a different position than\nwhat stat() said. So by crosschecking the size while reading with the\none from stat (which was compared with the source system one) we'd make\nthe error messages much better. It's certainly easier to know where to start\nlooking when validate says \"error: read %llu bytes from file, expected\n%llu\" or something along those lines, than if it just reported a\nchecksum error.\n\nThere's also some crypto hash algorithm weaknesses that are easier to\nexploit when it's possible to append data to a known prefix, but that\ndoesn't seem an obvious threat here.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Mar 2020 13:32:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/27/20 3:55 PM, Stephen Frost wrote:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n>> I think that what we have seen so far is that all of the SHA-n\n>> algorithms that PostgreSQL supports are about equally slow, so it\n>> doesn't really matter which one you pick there from a performance\n>> point of view. If you're not saying it has to be SHA-512 but you do\n>> want it to be SHA-256, I don't think that really fixes anything. Using\n>> CRC-32C does fix the performance issue, but I don't think you like\n>> that, either. We could default to having no checksums at all, or even\n>> no manifest at all, but I didn't get the impression that David, at\n>> least, wanted to go that way, and I don't like it either. It's not the\n>> world's best feature, but I think it's good enough to justify enabling\n>> it by default. So I'm not sure we have any options here that will\n>> satisfy you.\n> \n> I do like having a manifest by default. At this point it's pretty clear\n> that we've just got a fundamental disagreement that more words aren't\n> going to fix. I'd rather we play it safe and use a sha256 hash and\n> accept that it's going to be slower by default, and then give users an\n> option to make it go faster if they want (though I'd much rather that\n> alternative be a 64bit CRC than a 32bit one).\n> \n> Andres seems to agree with you. I'm not sure where David sits on this\n> specific question.\n\nI would prefer a stronger checksum as the default but I would be fine \nwith SHA1, which is a bit faster.\n\nI believe the overhead of checksums is being overblown. In my experience \nthe vast majority of users are using compression and running the backup \nover a network. Once you have done those two things the cost of SHA1 is \npretty negligible. As I posted way up-thread we found that just gzip -6 \npushed the cost of SHA1 below 3% and that did not include network transfer.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Mar 2020 16:39:29 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Mar 27, 2020 at 11:26 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Seems better to (later?) add support for generating manifests for WAL\n> > > files, and then have a tool that can verify all the manifests required\n> > > to restore a base backup.\n> >\n> > I'm not trying to expand on the feature set here or move the goalposts\n> > way down the road, which is what seems to be what's being suggested\n> > here. To be clear, I don't have any objection to adding a generic tool\n> > for validating WAL as you're talking about here, but I also don't think\n> > that's required for pg_validatebackup. What I do think we need is a\n> > check of the WAL that's fetched when people use pg_basebackup -Xstream\n> > or -Xfetch. pg_basebackup itself has that check because it's critical\n> > to the backup being successful and valid. Not having that basic\n> > validation of a backup really just isn't ok- there's a reason\n> > pg_basebackup has that check.\n> \n> I don't understand how this could be done without significantly\n> complicating the architecture. As I said before, -Xstream sends WAL\n> over a separate connection that is unrelated to the one running\n> BASE_BACKUP, so the base-backup connection doesn't know what to\n> include in the manifest. Now you could do something like: once all of\n> the WAL files have been fetched, the client checksums all of those and\n> sends their names and checksums to the server, which turns around and\n> puts them into the manifest, which it then sends back to the client.\n> But that is actually quite a bit of additional complexity, and it's\n> pretty strange, too, because now you have the client checksumming some\n> files and the server checksumming others. 
I know you mentioned a few\n> different ideas before, but I think they all kinda have some problem\n> along these lines.\n\nI've made some suggestions before, also chatted about an idea with David\nthat I'll outline here.\n\nFirst off- I'm a bit mystified why you are saying that the base backup\nconnection doesn't know what to include in the manifest regarding WAL.\nThe base-backup process determines the starting position (and then even\nputs it into the backup_label that's sent to the client), and then it\ndirectly returns the ending position at the end of the BASE_BACKUP\ncommand. Given that we do know that information, then we just need to\nget the checksums/hashes for each of the WAL files, if it's been asked\nfor. How do we know checksums or hashes have been asked for in the\nWAL streaming connection? We can have the pg_basebackup process ask for\nthat when it connects to stream the WAL that's needed.\n\nNow the only part that's a little grotty is dealing with passing the\nchecksums/hashes that the WAL stream connection calculates over to the\nbase backup connection to include in the manifest. Offhand though, it\nseems like we could drop a file in archive_status for that, perhaps\n\"wal_checksums.PID\" or such (the PID would be that of the PG backend\nthat's doing the base backup, which we'd pass to START_REPLICATION). Of\ncourse, the backup process would have to check and make sure that it got\nall the needed WAL file checksums, but since it knows the end, that\nshouldn't be too bad.\n\n> I also kinda disagree with the idea that the WAL should be considered\n> an integral part of the backup. I don't know how pgbackrest does\n> things, but BART stores each backup in a separate directory without any\n> associated WAL, and then keeps all the WAL together in a different\n> directory. I imagine that people who are using continuous archiving\n> also tend to use -Xnone, or if they do backups by copying the files\n> rather than using pg_backrest, they exclude pg_wal. 
In fact, for\n> people with big, important databases, I'd assume that would be the\n> normal pattern. You presumably wouldn't want to keep one copy of the\n> WAL files taken during the backup with the backup itself, and a\n> separate copy in the archive.\n\nI really don't know what to say to this. WAL is absolutely critical to\na backup being valid. pgBackRest doesn't have a way to *just* validate\na backup today, unfortunately, but we're planning to support it in the\nfuture and we will absolutely include in that validation checking all of\nthe WAL that's part of the backup.\n\nI'm fine with forgoing all of this in the -X none case, as I've said\nelsewhere. I think it'd be great for pg_receivewal to have a way to\nvalidate WAL and such, but that's a clearly new feature and it's\nindependent from validating a backup.\n\nAs it relates to how pgBackRest stores WAL, we actually do support both\nof the options you mention, because people with big important databases\nlike to be extra paranoid. WAL can either be stored in just the\narchive, or it can be stored in both the archive and in the backup (with\n'--archive-copy'). Note that this isn't done by just grabbing whatever\nis in pg_wal at the time of the backup, as that wouldn't actually work,\nbut rather by copying the necessary WAL from the archive at the end of\nthe backup.\n\nWe do also check all WAL that's pulled from the archive by the restore\ncommand, though exactly what WAL is needed isn't something we know ahead\nof time (yet, anyway.. 
we are working on WAL parsing code that'll\nchange that by actually scanning the WAL and storing all restore points,\nstarting/ending times and transaction IDs, and anything else that can be\nused as a restore target, so we can figure out exactly all WAL that's\nneeded to get to a particular restore target).\n\nWe actually have someone who implemented an independent tool called\ncheck_pgbackrest which specifically has a \"archives\" check, for checking\nthat the WAL is in the archive. We plan to also provide a way to ask\npgbackrest to confirm that there's no missing WAL, and that all of the\nWAL is valid.\n\nWAL is critical to a backup that's been taken in an online manner, no\nmatter where it's stored. A backup isn't valid without the WAL that's\nneeded to reach consistency.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Mar 2020 16:57:46 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2020-03-27 14:34:19 -0400, Robert Haas wrote:\n> > I think #2 is an interesting idea and could possibly reduce the danger\n> > of user confusion on this point considerably - because, let's face it,\n> > not everyone is going to read the documentation. However, I'm having a\n> > hard time figuring out exactly what we'd print. Right now on success,\n> > unless you specify -q, you get:\n> > \n> > [rhaas ~]$ pg_validatebackup ~/pgslave\n> > backup successfully verified\n> > \n> > But it feels strange and possibly confusing to me to print something like:\n> > \n> > [rhaas ~]$ pg_validatebackup ~/pgslave\n> > backup successfully verified (except for pg_wal)\n> \n> You could print something like:\n> WAL necessary to restore this base backup can be validated with:\n> \n> pg_waldump -p ~/pgslave -t tl -s backup_start_location -e backup_end_loc > /dev/null && echo true\n> \n> Obviously that specific invocation sucks, but it'd not be hard to add an\n> option to waldump to not output anything.\n\nInteresting idea to use pg_waldump.\n\nI had suggested up-thread, and I'm still fine with, having\npg_validatebackup scan the WAL and check the internal checksums. I'd\nprefer an option that uses hashes to check when the user has asked for\nhashes with SHA256 or something, but at least scanning the WAL and\nmaking sure it validates its internal checksum (and is actually all\nthere, which is pretty darn critical) would be enough to say that we're\npretty sure the backup is valid.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Mar 2020 17:07:42 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2020-03-27 15:20:27 -0400, Robert Haas wrote:\n> > On Fri, Mar 27, 2020 at 2:29 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Hm. Should this warn if the directory's permissions are set too openly\n> > > (world writable?)?\n> > \n> > I don't think so, but it's pretty clear that different people have\n> > different ideas about what the scope of this tool ought to be, even in\n> > this first version.\n> \n> Yea. I don't have a strong opinion on this specific issue. I was mostly\n> wondering because I've repeatedly seen people restore backups with world\n> readable properties, and with that it's obviously possible for somebody\n> else to change the contents after the checksum was computed.\n\nFor my 2c, at least, I don't think we need to check the directory\npermissions, but I wouldn't object to including a warning if they're set\nsuch that PG won't start. I suppose +0 for \"warn if they are such that\nPG won't start\".\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Mar 2020 17:44:07 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-27 17:44:07 -0400, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2020-03-27 15:20:27 -0400, Robert Haas wrote:\n> > > On Fri, Mar 27, 2020 at 2:29 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > Hm. Should this warn if the directory's permissions are set too openly\n> > > > (world writable?)?\n> > > \n> > > I don't think so, but it's pretty clear that different people have\n> > > different ideas about what the scope of this tool ought to be, even in\n> > > this first version.\n> > \n> > Yea. I don't have a strong opinion on this specific issue. I was mostly\n> > wondering because I've repeatedly seen people restore backups with world\n> > readable properties, and with that it's obviously possible for somebody\n> > else to change the contents after the checksum was computed.\n> \n> For my 2c, at least, I don't think we need to check the directory\n> permissions, but I wouldn't object to including a warning if they're set\n> such that PG won't start. I suppose +0 for \"warn if they are such that\n> PG won't start\".\n\nI was thinking of that check not being just at the top-level, but in\nsubdirectories too. It's easy to screw up the top and subdirectory\npermissions in different ways, e.g. when manually creating the database\ndir and then restoring a data directory directly into that. IIRC\npostmaster doesn't check that at start.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Mar 2020 14:56:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-27 17:07:42 -0400, Stephen Frost wrote:\n> I had suggested up-thread, and I'm still fine with, having\n> pg_validatebackup scan the WAL and check the internal checksums. I'd\n> prefer an option that uses hashes to check when the user has asked for\n> hashes with SHA256 or something, but at least scanning the WAL and\n> making sure it validates its internal checksum (and is actually all\n> there, which is pretty darn critical) would be enough to say that we're\n> pretty sure the backup is valid.\n\nI'd say that actually parsing the WAL will give you a lot higher\nconfidence than verifying a sha256 for each file. There's plenty of ways\nto screw up the pg_wal on the source server (I've seen several\nrestore_commands doing so, particularly when eagerly fetching). Sure,\nit'll not help against an attacker, but I'm not sure I see the threat\nmodel.\n\nThere's imo a cost argument against doing WAL verification by reading\nit, but that'd mostly be a factor when comparing against a faster\nwhole-file checksum.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Mar 2020 15:00:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-27 16:57:46 -0400, Stephen Frost wrote:\n> I really don't know what to say to this. WAL is absolutely critical to\n> a backup being valid. pgBackRest doesn't have a way to *just* validate\n> a backup today, unfortunately, but we're planning to support it in the\n> future and we will absolutely include in that validation checking all of\n> the WAL that's part of the backup.\n\nCould you please address the fact that just about everybody uses base\nbackups + later WAL to have a short data loss window? Integrating the\nWAL files necessary to make the base backup consistent doesn't achieve\nmuch if we can't verify the WAL files afterwards. And fairly obviously\npg_basebackup can't do much about WAL created after its invocation.\n\nGiven that we need something separate to address that \"verification\nhole\", I don't see why it's useful to have a special case solution (or\nrather multiple ones, for stream and fetch) inside pg_basebackup.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Mar 2020 15:07:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2020-03-27 17:44:07 -0400, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> > > On 2020-03-27 15:20:27 -0400, Robert Haas wrote:\n> > > > On Fri, Mar 27, 2020 at 2:29 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > Hm. Should this warn if the directory's permissions are set too openly\n> > > > > (world writable?)?\n> > > > \n> > > > I don't think so, but it's pretty clear that different people have\n> > > > different ideas about what the scope of this tool ought to be, even in\n> > > > this first version.\n> > > \n> > > Yea. I don't have a strong opinion on this specific issue. I was mostly\n> > > wondering because I've repeatedly seen people restore backups with world\n> > > readable properties, and with that it's obviously possible for somebody\n> > > else to change the contents after the checksum was computed.\n> > \n> > For my 2c, at least, I don't think we need to check the directory\n> > permissions, but I wouldn't object to including a warning if they're set\n> > such that PG won't start. I suppose +0 for \"warn if they are such that\n> > PG won't start\".\n> \n> I was thinking of that check not being just at the top-level, but in\n> subdirectories too. It's easy to screw up the top and subdirectory\n> permissions in different ways, e.g. when manually creating the database\n> dir and then restoring a data directory directly into that. IIRC\n> postmaster doesn't check that at start.\n\nYeah, I'm pretty sure we don't check that at postmaster start.. which\nalso means that we'll start up just fine even if the perms on\nsubdirectories are odd or wrong, unless maybe we end up in a really odd\nstate where a directory is 000'd or something.\n\nOf course.. 
this is all a mess when it comes to pg_basebackup, really,\nas previously discussed elsewhere, because what permissions and such you\nend up with actually depends on what *format* you use with\npg_basebackup- it's different between 'tar' format and 'plain' format.\nThat is, if you use 'tar' format, and then actually use 'tar' to\nextract, you get one set of privs, but if you use 'plain', you get\nsomething different.\n\nI mean.. pgBackRest sets all perms to whatever is in the manifest on\nrestore (or delta), but this patch doesn't include the permissions on\nfiles, or ownership (something pgBackRest also tries to set, if\npossible, on restore), does it...? Doesn't look like it on a quick\nlook. So if we want to compare to pgBackRest then, yes, we should\ninclude the permissions in the manifest and we should check that\neverything in the manifest matches what's on the filesystem.\n\nI don't think we should just compare all permissions or ownership with\nsome arbitrary idea of what we think they should be, even though if you\nuse pg_basebackup in 'plain' format, you actually end up with\ndifferences, today, from what the source system has. In my view, that\nshould actually be fixed, to the extent possible.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Mar 2020 18:09:10 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 2020-Mar-27, Stephen Frost wrote:\n\n> I don't think we should just compare all permissions or ownership with\n> some arbitrary idea of what we think they should be, even though if you\n> use pg_basebackup in 'plain' format, you actually end up with\n> differences, today, from what the source system has. In my view, that\n> should actually be fixed, to the extent possible.\n\nI posted some thoughts about this at\nhttps://www.postgresql.org/message-id/20190904201117.GA12986%40alvherre.pgsql\nI didn't get time to work on that myself.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Mar 2020 19:17:00 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2020-03-27 16:57:46 -0400, Stephen Frost wrote:\n> > I really don't know what to say to this. WAL is absolutely critical to\n> > a backup being valid. pgBackRest doesn't have a way to *just* validate\n> > a backup today, unfortunately, but we're planning to support it in the\n> > future and we will absolutely include in that validation checking all of\n> > the WAL that's part of the backup.\n> \n> Could you please address the fact that just about everybody uses base\n> backups + later WAL to have a short data loss window? Integrating the\n> WAL files necessary to make the base backup consistent doesn't achieve\n> much if we can't verify the WAL files afterwards. And fairly obviously\n> pg_basebackup can't do much about WAL created after its invocation.\n\nI feel like we have very different ideas about what \"just about\neverybody\" does here. In my view, folks use pg_basebackup because it's\neasy and they can create self-contained backups that include all the WAL\nneeded to get the backup up and running again and they don't typically\ncare about PITR all that much. 
Folks who care about PITR use something\nthat manages WAL for them, which pg_basebackup and pg_receivewal really\ndon't do and it's not easy to add scripting around them to figure out\nwhat WAL is needed for what backup, etc.\n\nIf we didn't think that the ability to create a self-contained backup\nwas useful, it sure seems odd that we've done a lot to make that work\n(having both fetch and stream modes for it) and that it's the default.\n\n> Given that we need something separate to address that \"verification\n> hole\", I don't see why it's useful to have a special case solution (or\n> rather multiple ones, for stream and fetch) inside pg_basebackup.\n\nWell, the proposal up-thread would end up with almost zero changes to\npg_basebackup itself, but, yes, there'd be changes to BASE_BACKUP and\ndifferent ones for STREAMING_REPLICATION to support getting the WAL\nchecksums into the manifest.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Mar 2020 18:24:31 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/27/20 6:07 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2020-03-27 16:57:46 -0400, Stephen Frost wrote:\n>> I really don't know what to say to this. WAL is absolutely critical to\n>> a backup being valid. pgBackRest doesn't have a way to *just* validate\n>> a backup today, unfortunately, but we're planning to support it in the\n>> future and we will absolutely include in that validation checking all of\n>> the WAL that's part of the backup.\n> \n> Could you please address the fact that just about everybody uses base\n> backups + later WAL to have a short data loss window? Integrating the\n> WAL files necessary to make the base backup consistent doesn't achieve\n> much if we can't verify the WAL files afterwards. And fairly obviously\n> pg_basebackup can't do much about WAL created after its invocation.\n> \n> Given that we need something separate to address that \"verification\n> hole\", I don't see why it's useful to have a special case solution (or\n> rather multiple ones, for stream and fetch) inside pg_basebackup.\n\nThere's a pretty big difference between not being able to play forward \nto the end of WAL and not being able to get the backup to restore to \nconsistency at all.\n\nThe WAL that is generated during during the backup has special \nimportance. Without it you have no backup at all. It's the difference \nbetween *some* data loss and *total* data loss.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Mar 2020 18:33:51 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Mar 26, 2020 at 12:34:52PM -0400, Stephen Frost wrote:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > This is where I feel like I'm trying to make decisions in a vacuum. If\n> > we had a few more people weighing in on the thread on this point, I'd\n> > be happy to go with whatever the consensus was. If most people think\n> > having both --no-manifest (suppressing the manifest completely) and\n> > --manifest-checksums=none (suppressing only the checksums) is useless\n> > and confusing, then sure, let's rip the latter one out. If most people\n> > like the flexibility, let's keep it: it's already implemented and\n> > tested. But I hate to base the decision on what one or two people\n> > think.\n> \n> I'm frustrated at the lack of involvement from others also.\n\nWell, the topic of backup manifests feels like it has generated a lot of\nbickering emails, and people don't want to spend their time dealing with\nthat.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 27 Mar 2020 18:36:17 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Mar 27, 2020 at 18:36 Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Mar 26, 2020 at 12:34:52PM -0400, Stephen Frost wrote:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> > > This is where I feel like I'm trying to make decisions in a vacuum. If\n> > > we had a few more people weighing in on the thread on this point, I'd\n> > > be happy to go with whatever the consensus was. If most people think\n> > > having both --no-manifest (suppressing the manifest completely) and\n> > > --manifest-checksums=none (suppressing only the checksums) is useless\n> > > and confusing, then sure, let's rip the latter one out. If most people\n> > > like the flexibility, let's keep it: it's already implemented and\n> > > tested. But I hate to base the decision on what one or two people\n> > > think.\n> >\n> > I'm frustrated at the lack of involvement from others also.\n>\n> Well, the topic of backup manifests feels like it has generated a lot of\n> bickering emails, and people don't want to spend their time dealing with\n> that.\n\n\nI’d like to not also. I suppose it’s just an area that I’m particularly\nconcerned with that allows me to overcome that. Backups are important to me.\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Fri, Mar 27, 2020 at 18:36 Bruce Momjian <bruce@momjian.us> wrote:On Thu, Mar 26, 2020 at 12:34:52PM -0400, Stephen Frost wrote:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > This is where I feel like I'm trying to make decisions in a vacuum. If\n> > we had a few more people weighing in on the thread on this point, I'd\n> > be happy to go with whatever the consensus was. If most people think\n> > having both --no-manifest (suppressing the manifest completely) and\n> > --manifest-checksums=none (suppressing only the checksums) is useless\n> > and confusing, then sure, let's rip the latter one out. If most people\n> > like the flexibility, let's keep it: it's already implemented and\n> > tested. 
But I hate to base the decision on what one or two people\n> > think.\n> \n> I'm frustrated at the lack of involvement from others also.\n\nWell, the topic of backup manifests feels like it has generated a lot of\nbickering emails, and people don't want to spend their time dealing with\nthat.I’d like to not also. I suppose it’s just an area that I’m particularly concerned with that allows me to overcome that. Backups are important to me.Thanks,Stephen",
"msg_date": "Fri, 27 Mar 2020 18:38:33 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 06:38:33PM -0400, Stephen Frost wrote:\n> Greetings,\n> \n> On Fri, Mar 27, 2020 at 18:36 Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Thu, Mar 26, 2020 at 12:34:52PM -0400, Stephen Frost wrote:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> > > This is where I feel like I'm trying to make decisions in a vacuum. If\n> > > we had a few more people weighing in on the thread on this point, I'd\n> > > be happy to go with whatever the consensus was. If most people think\n> > > having both --no-manifest (suppressing the manifest completely) and\n> > > --manifest-checksums=none (suppressing only the checksums) is useless\n> > > and confusing, then sure, let's rip the latter one out. If most people\n> > > like the flexibility, let's keep it: it's already implemented and\n> > > tested. But I hate to base the decision on what one or two people\n> > > think.\n> >\n> > I'm frustrated at the lack of involvement from others also.\n> \n> Well, the topic of backup manifests feels like it has generated a lot of\n> bickering emails, and people don't want to spend their time dealing with\n> that.\n> \n> \n> I’d like to not also. I suppose it’s just an area that I’m particularly\n> concerned with that allows me to overcome that. Backups are important to me.\n\nThe big question is whether the discussion _needs_ to be that way.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 27 Mar 2020 18:39:46 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 01:53:54PM -0400, Robert Haas wrote:\n> - Replace a doc paragraph about the advantages and disadvantages of\n> CRC-32C with one by Stephen Frost, with a slightly change by me that I\n> thought made it sound more grammatical.\n\nDefaulting to CRC-32C seems prudent to me:\n\n- As Andres Freund said, SHA-512 is slow relative to storage now available.\n Since gzip is a needlessly-slow choice for backups (or any application that\n copies the compressed data just a few times), comparison to \"gzip -6\" speed\n is immaterial.\n\n- While I'm sure some other fast hash would be a superior default, introducing\n a new algorithm is a bikeshed, as you said. This design makes it easy,\n technically, for someone to introduce a new algorithm later. CRC-32C is not\n catastrophically unfit for 1GiB files.\n\n- Defaulting to SHA-512 would, in the absence of a WAL archive that also uses\n a cryptographic hash function, give a false sense of having achieved some\n coherent cryptographic goal. With the CRC-32C default, WAL and the rest get\n similar protection. I'm discounting the case of using BASE_BACKUP without a\n WAL archive, because I expect little intersection between sites \"worried\n enough to hash everything\" and those \"not worried enough to use an archive\".\n (On the other hand, the program that manages the WAL archive can reasonably\n own hashing base backups; putting ownership in the server isn't achieving\n much extra.)\n\n> + <refnamediv>\n> + <refname>pg_validatebackup</refname>\n> + <refpurpose>verify the integrity of a base backup of a\n> + <productname>PostgreSQL</productname> cluster</refpurpose>\n> + </refnamediv>\n\n> + <listitem>\n> + <para>\n> + <literal>pg_wal</literal> is ignored because WAL files are sent\n> + separately from the backup, and are therefore not described by the\n> + backup manifest.\n> + </para>\n> + </listitem>\n\nStephen Frost mentioned that a backup could pass validation even if\npg_basebackup were killed after writing the base backup and before finishing\nthe writing of pg_wal. One might avoid that by simply writing the manifest to\na temporary name and renaming it to the final name after populating pg_wal.\n\nWhat do you think of having the verification process also call pg_waldump to\nvalidate the WAL CRCs (shown upthread)? That looked helpful and simple.\n\nI think this functionality doesn't belong in its own program. If you suspect\npg_basebackup or pg_restore will eventually gain the ability to merge\nincremental backups into a recovery-ready base backup, I would put the\nfunctionality in that program. Otherwise, I would put it in pg_checksums.\nFor me, part of the friction here is that the program description indicates\ngeneral verification, but the actual functionality merely checks hashes on a\ndirectory tree that happens to represent a PostgreSQL base backup.\n\n> +\t\tparse->pathname = palloc(raw_length + 1);\n\nI don't see this freed anywhere; is it? (It's useful to make peak memory\nconsumption not grow in proportion to the number of files backed up.)\n\n[This message is not a full code review.]\n\n\n",
"msg_date": "Sat, 28 Mar 2020 20:40:10 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't like having a file format that's intended to be used by external\n> tools too that's undocumented except for code that assembles it in a\n> piecemeal fashion. Do you mean in a follow-on patch this release, or\n> later? I don't have a problem with the former.\n\nThis release. I'm happy to work on that as soon as this gets\ncommitted, assuming it gets committed.\n\n> I do found it to be circular. I think we mostly need a paragraph or two\n> somewhere that explains on a higher level what the point of verifying\n> base backups is and what is verified.\n\nFair enough.\n\n> FWIW, I was thinking of backup_manifest.checksum potentially being\n> desirable for another reason: The need to embed the checksum inside the\n> document imo adds a fair bit of rigidity to the file format. See\n\nWell, David Steele suggested this approach. I didn't particularly like\nit, but nobody showed up to agree with me or propose anything\ndifferent, so here we are. I don't think it's the end of the world.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 Mar 2020 20:33:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Sat, Mar 28, 2020 at 11:40 PM Noah Misch <noah@leadboat.com> wrote:\n> Stephen Frost mentioned that a backup could pass validation even if\n> pg_basebackup were killed after writing the base backup and before finishing\n> the writing of pg_wal. One might avoid that by simply writing the manifest to\n> a temporary name and renaming it to the final name after populating pg_wal.\n\nHuh, that's an idea. I'll have a look at the code and see what would\nbe involved.\n\n> What do you think of having the verification process also call pg_waldump to\n> validate the WAL CRCs (shown upthread)? That looked helpful and simple.\n\nI don't love calls to external binaries, but I think the thing that\nreally bothers me is that pg_waldump is practically bound to terminate\nwith an error, because the last WAL segment will end with a partial\nrecord. For the same reason, I think there's really no such thing as\nvalidating a single WAL file. I suppose you'd need to know the exact\nstart and end locations for a minimal WAL replay and check that all\nrecords between those LSNs appear OK, ignoring any apparent problems\nafter the minimum ending point, or at least ignoring any problems due\nto an incomplete record in the last file. We don't have a tool for\nthat currently, and I don't think I can write one this week. Or at\nleast, not a good one.\n\n> I think this functionality doesn't belong in its own program. If you suspect\n> pg_basebackup or pg_restore will eventually gain the ability to merge\n> incremental backups into a recovery-ready base backup, I would put the\n> functionality in that program. Otherwise, I would put it in pg_checksums.\n> For me, part of the friction here is that the program description indicates\n> general verification, but the actual functionality merely checks hashes on a\n> directory tree that happens to represent a PostgreSQL base backup.\n\nSuraj's original patch made this part of pg_basebackup, but I didn't\nreally like that, because I wanted it to have its own set of options.\nI still think all the options I've added are pretty useful ones, and I\ncan think of other things somebody might want to do. It feels very\nuncomfortable to make pg_basebackup, or pg_checksums, take either\noptions from set A and do thing X, or options from set B and do thing\nY. But it feels clear that the name pg_validatebackup is not going\nover very well with anyone. I think I should rename it to\npg_validatemanifest.\n\n> > + parse->pathname = palloc(raw_length + 1);\n>\n> I don't see this freed anywhere; is it? (It's useful to make peak memory\n> consumption not grow in proportion to the number of files backed up.)\n\nWe need the hash table to remain populated for the whole run time of\nthe tool, because we're essentially doing a full join of the actual\ndirectory contents against the manifest contents. That's a bit\nunfortunate but it doesn't seem simple to improve. I think the only\npeople who are really going to suffer are people who have an enormous\npile of empty or nearly-empty relations. People who have large\ndatabases for the normal reason - i.e. a reasonable number of tables\nthat hold a lot of data - will have manifests of very manageable size.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 Mar 2020 20:42:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 4:02 PM David Steele <david@pgmasters.net> wrote:\n> I prefer to validate the size and checksum in the same pass, but I'm not\n> sure it's that big a deal. If the backup is being corrupted under the\n> validate process that would also apply to files that had already been\n> validated.\n\nI did it like this because I thought that in typical scenarios it\nwould be likely to produce useful results more quickly. For instance,\nsuppose that you forget to restore the tablespace directories, and\njust get the main $PGDATA directory. Well, if you do it all in one\npass, you might spend a long time checksumming things before you\nrealize that some files are completely missing. I thought it would be\nuseful to complain about files that are extra or missing or the wrong\nsize FIRST, because that only requires us to stat() each file, and\nonly after that do the comparatively extensive checksumming step that\nrequires us to read the entire contents of each file. Granted, unless\nyou use --exit-on-error, you're going to get all the complaints\neventually anyway, but you might use that option, or you might hit ^C\nwhen you start to see a slough of complaints popping out.\n\nMaybe that was the wrong idea, but I thought people would like the\nidea of running cheaper checks first. I wasn't worried about\nconcurrent modification of the backup because then you're super-hosed\nno matter what.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 Mar 2020 20:47:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/29/20 8:33 PM, Robert Haas wrote:\n> On Fri, Mar 27, 2020 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n> \n>> FWIW, I was thinking of backup_manifest.checksum potentially being\n>> desirable for another reason: The need to embed the checksum inside the\n>> document imo adds a fair bit of rigidity to the file format. See\n> \n> Well, David Steele suggested this approach. I didn't particularly like\n> it, but nobody showed up to agree with me or propose anything\n> different, so here we are. I don't think it's the end of the world.\n\nI prefer the embedded checksum even though it is a pain. It's a lot less \nlikely to go missing.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Sun, 29 Mar 2020 20:48:58 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/29/20 8:42 PM, Robert Haas wrote:\n> On Sat, Mar 28, 2020 at 11:40 PM Noah Misch <noah@leadboat.com> wrote:\n>> I don't see this freed anywhere; is it? (It's useful to make peak memory\n>> consumption not grow in proportion to the number of files backed up.)\n> \n> We need the hash table to remain populated for the whole run time of\n> the tool, because we're essentially doing a full join of the actual\n> directory contents against the manifest contents. That's a bit\n> unfortunate but it doesn't seem simple to improve. I think the only\n> people who are really going to suffer are people who have an enormous\n> pile of empty or nearly-empty relations. People who have large\n> databases for the normal reason - i.e. a reasonable number of tables\n> that hold a lot of data - will have manifests of very manageable size.\n\n+1\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Sun, 29 Mar 2020 20:54:41 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-29 20:47:40 -0400, Robert Haas wrote:\n> Maybe that was the wrong idea, but I thought people would like the\n> idea of running cheaper checks first. I wasn't worried about\n> concurrent modification of the backup because then you're super-hosed\n> no matter what.\n\nI do like that approach.\n\nTo be clear: I'm suggesting the additional crosscheck not because I'm\nnot concerned with concurrent modifications, but because I've seen\nfilesystem per-inode metadata and the actual data / extent-tree\ndiffer. Leading to EOF reported while reading at a different place than\nwhat the size via stat() would indicate.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 29 Mar 2020 17:59:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/29/20 8:47 PM, Robert Haas wrote:\n> On Fri, Mar 27, 2020 at 4:02 PM David Steele <david@pgmasters.net> wrote:\n>> I prefer to validate the size and checksum in the same pass, but I'm not\n>> sure it's that big a deal. If the backup is being corrupted under the\n>> validate process that would also apply to files that had already been\n>> validated.\n> \n> I did it like this because I thought that in typical scenarios it\n> would be likely to produce useful results more quickly. For instance,\n> suppose that you forget to restore the tablespace directories, and\n> just get the main $PGDATA directory. Well, if you do it all in one\n> pass, you might spend a long time checksumming things before you\n> realize that some files are completely missing. I thought it would be\n> useful to complain about files that are extra or missing or the wrong\n> size FIRST, because that only requires us to stat() each file, and\n> only after that do the comparatively extensive checksumming step that\n> requires us to read the entire contents of each file. Granted, unless\n> you use --exit-on-error, you're going to get all the complaints\n> eventually anyway, but you might use that option, or you might hit ^C\n> when you start to see a slough of complaints poppoing out.\n\nYeah, that seems reasonable.\n\nIn our case backups are nearly always compressed and/or encrypted so \neven checking the original size is a bit of work. Getting the checksum \nat the same time seems like an obvious win.\n\nCurrently we don't have a separate validate command outside of restore \nbut when we do we'll consider doing a pass to check for file presence \n(and size when possible) first. Thanks!\n\n> I wasn't worried about\n> concurrent modification of the backup because then you're super-hosed\n> no matter what.\n\nReally, really, super-hosed.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Sun, 29 Mar 2020 21:05:17 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-29 20:42:35 -0400, Robert Haas wrote:\n> > What do you think of having the verification process also call pg_waldump to\n> > validate the WAL CRCs (shown upthread)? That looked helpful and simple.\n> \n> I don't love calls to external binaries, but I think the thing that\n> really bothers me is that pg_waldump is practically bound to terminate\n> with an error, because the last WAL segment will end with a partial\n> record.\n\nI don't think that's the case here. You should know the last required\nrecord, which should allow to specify the precise end for pg_waldump. If\nit errors out reading to that point, we'd be in trouble.\n\n\n> For the same reason, I think there's really no such thing as\n> validating a single WAL file. I suppose you'd need to know the exact\n> start and end locations for a minimal WAL replay and check that all\n> records between those LSNs appear OK, ignoring any apparent problems\n> after the minimum ending point, or at least ignoring any problems due\n> to an incomplete record in the last file. We don't have a tool for\n> that currently, and I don't think I can write one this week. Or at\n> least, not a good one.\n\npg_waldump -s / -e?\n\n\n> > > + parse->pathname = palloc(raw_length + 1);\n> >\n> > I don't see this freed anywhere; is it? (It's useful to make peak memory\n> > consumption not grow in proportion to the number of files backed up.)\n> \n> We need the hash table to remain populated for the whole run time of\n> the tool, because we're essentially doing a full join of the actual\n> directory contents against the manifest contents. That's a bit\n> unfortunate but it doesn't seem simple to improve. I think the only\n> people who are really going to suffer are people who have an enormous\n> pile of empty or nearly-empty relations. People who have large\n> databases for the normal reason - i.e. a reasonable number of tables\n> that hold a lot of data - will have manifests of very manageable size.\n\nGiven that that's a pre-existing issue - at a significantly larger scale\nimo - e.g. for pg_dump (even in the --schema-only case), and that there\nare tons of backend side issues with lots of relations too, I think\nthat's fine.\n\nYou could of course implement something merge-join like, and implement\nthe sorted input via a disk base sort. But that's a lot of work (good\nluck making tuplesort work in the frontend...). So I'd not go there\nunless there's a lot of evidence this is a serious practical issue.\n\nIf we find this use too much memory, I think we'd be better off\ncondensing pathnames into either fewer allocations, or a RelFileNode as\npart of the struct (with a fallback to string for other types of\nfiles). But I'd also not go there for now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 29 Mar 2020 18:07:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/29/20 9:07 PM, Andres Freund wrote:\n> On 2020-03-29 20:42:35 -0400, Robert Haas wrote:\n>>> What do you think of having the verification process also call pg_waldump to\n>>> validate the WAL CRCs (shown upthread)? That looked helpful and simple.\n>>\n>> I don't love calls to external binaries, but I think the thing that\n>> really bothers me is that pg_waldump is practically bound to terminate\n>> with an error, because the last WAL segment will end with a partial\n>> record.\n> \n> I don't think that's the case here. You should know the last required\n> record, which should allow to specify the precise end for pg_waldump. If\n> it errors out reading to that point, we'd be in trouble.\n\nExactly. All WAL generated during the backup should read fine with \npg_waldump or there is a problem.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Sun, 29 Mar 2020 21:23:06 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-29 21:23:06 -0400, David Steele wrote:\n> On 3/29/20 9:07 PM, Andres Freund wrote:\n> > On 2020-03-29 20:42:35 -0400, Robert Haas wrote:\n> > > > What do you think of having the verification process also call pg_waldump to\n> > > > validate the WAL CRCs (shown upthread)? That looked helpful and simple.\n> > > \n> > > I don't love calls to external binaries, but I think the thing that\n> > > really bothers me is that pg_waldump is practically bound to terminate\n> > > with an error, because the last WAL segment will end with a partial\n> > > record.\n> > \n> > I don't think that's the case here. You should know the last required\n> > record, which should allow to specify the precise end for pg_waldump. If\n> > it errors out reading to that point, we'd be in trouble.\n> \n> Exactly. All WAL generated during the backup should read fine with\n> pg_waldump or there is a problem.\n\nSee the attached minimal prototype for what I am thinking of.\n\nThis would not correctly handle the case where the timeline changes\nwhile taking a base backup. But I'm not sure that'd be all that serious\na limitation for now?\n\nI'd personally not want to use a base backup that included a timeline\nswitch...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 29 Mar 2020 19:08:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Sun, Mar 29, 2020 at 08:42:35PM -0400, Robert Haas wrote:\n> On Sat, Mar 28, 2020 at 11:40 PM Noah Misch <noah@leadboat.com> wrote:\n> > I think this functionality doesn't belong in its own program. If you suspect\n> > pg_basebackup or pg_restore will eventually gain the ability to merge\n> > incremental backups into a recovery-ready base backup, I would put the\n> > functionality in that program. Otherwise, I would put it in pg_checksums.\n> > For me, part of the friction here is that the program description indicates\n> > general verification, but the actual functionality merely checks hashes on a\n> > directory tree that happens to represent a PostgreSQL base backup.\n> \n> Suraj's original patch made this part of pg_basebackup, but I didn't\n> really like that, because I wanted it to have its own set of options.\n> I still think all the options I've added are pretty useful ones, and I\n> can think of other things somebody might want to do. It feels very\n> uncomfortable to make pg_basebackup, or pg_checksums, take either\n> options from set A and do thing X, or options from set B and do thing\n> Y.\n\npg_checksums does already have that property, for what it's worth. (More\nspecifically, certain options dictate the mode, and it reports an error if\nanother option is incompatible with the mode.)\n\n> But it feels clear that the name pg_validatebackup is not going\n> over very well with anyone. I think I should rename it to\n> pg_validatemanifest.\n\nBetween those two, I would use \"pg_validatebackup\" if there's a fair chance it\nwill end up doing the pg_waldump check. Otherwise, I would use\n\"pg_validatemanifest\". I still most prefer delivering this as a mode of an\nexisting program.\n\n> > > + parse->pathname = palloc(raw_length + 1);\n> >\n> > I don't see this freed anywhere; is it? (It's useful to make peak memory\n> > consumption not grow in proportion to the number of files backed up.)\n> \n> We need the hash table to remain populated for the whole run time of\n> the tool, because we're essentially doing a full join of the actual\n> directory contents against the manifest contents. That's a bit\n> unfortunate but it doesn't seem simple to improve. I think the only\n> people who are really going to suffer are people who have an enormous\n> pile of empty or nearly-empty relations. People who have large\n> databases for the normal reason - i.e. a reasonable number of tables\n> that hold a lot of data - will have manifests of very manageable size.\n\nOkay.\n\n\n",
"msg_date": "Sun, 29 Mar 2020 22:58:54 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 11:28 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Sun, Mar 29, 2020 at 08:42:35PM -0400, Robert Haas wrote:\n>\n> > But it feels clear that the name pg_validatebackup is not going\n> > over very well with anyone. I think I should rename it to\n> > pg_validatemanifest.\n>\n> Between those two, I would use \"pg_validatebackup\" if there's a fair chance it\n> will end up doing the pg_waldump check. Otherwise, I would use\n> \"pg_validatemanifest\".\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Mar 2020 11:54:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Sun, Mar 29, 2020 at 10:08 PM Andres Freund <andres@anarazel.de> wrote:\n> See the attached minimal prototype for what I am thinking of.\n>\n> This would not correctly handle the case where the timeline changes\n> while taking a base backup. But I'm not sure that'd be all that serious\n> a limitation for now?\n>\n> I'd personally not want to use a base backup that included a timeline\n> switch...\n\nInteresting concept. I've never (or almost never) used the -s and -e\noptions to pg_waldump, so I didn't think about using those. I think\nhaving a --just-parse option to pg_waldump is a good idea, though\nmaybe not with that name e.g. we could call it --quiet.\n\nIt is less obvious to me what to do about all that as it pertains to\nthe current patch. If we want pg_validatebackup to run pg_waldump in\nthat mode or print out a hint about how to run pg_waldump in that\nmode, it would need to obtain the relevant LSNs. I guess that would\nrequire reading the backup_label file. It's not clear to me what we\nwould do if the backup crosses a timeline switch, assuming that's even\na case pg_basebackup allows. If we don't want to do anything in\npg_validatebackup automatically but just want to document this as a\npossible technique, we could finesse that problem with some\nweasel-wording.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 30 Mar 2020 14:35:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-30 14:35:40 -0400, Robert Haas wrote:\n> On Sun, Mar 29, 2020 at 10:08 PM Andres Freund <andres@anarazel.de> wrote:\n> > See the attached minimal prototype for what I am thinking of.\n> >\n> > This would not correctly handle the case where the timeline changes\n> > while taking a base backup. But I'm not sure that'd be all that serious\n> > a limitation for now?\n> >\n> > I'd personally not want to use a base backup that included a timeline\n> > switch...\n>\n> Interesting concept. I've never (or almost never) used the -s and -e\n> options to pg_waldump, so I didn't think about using those.\n\nOh - it's how I use it most of the time when investigating a specific\nproblem. I just about always use -s, and often -e. Besides just reducing\nthe logging output, and avoiding spurious errors, it makes it a lot\neasier to iteratively expand the logging for records that are\nproblematic for the case at hand.\n\n\n> I think\n> having a --just-parse option to pg_waldump is a good idea, though\n> maybe not with that name e.g. we could call it --quiet.\n\nYea, I didn't like the option's name. It's just the first thing that\ncame to mind.\n\n\n> It is less obvious to me what to do about all that as it pertains to\n> the current patch.\n\nFWIW, I personally think we can live with this not validating WAL in the\nfirst release. But I also think it'd be within reach to do better and\nallow for WAL verification.\n\n\n> If we want pg_validatebackup to run pg_waldump in that mode or print\n> out a hint about how to run pg_waldump in that mode, it would need to\n> obtain the relevant LSNs.\n\nWe could just include those in the manifest. Seems like good information\nto have in there to me, as it allows to build the complete list of files\nneeded for a restore.\n\n\n> It's not clear to me what we would do if the backup crosses a timeline\n> switch, assuming that's even a case pg_basebackup allows.\n\nI've not tested it, but it sure looks like it's possible. Both by having\na standby replaying from a node that promotes (multiple timeline\nswitches possible too, I think, if the WAL source follows timelines),\nand by backing up from a standby that's being promoted.\n\n\n> If we don't want to do anything in pg_validatebackup automatically but\n> just want to document this as a possible technique, we could finesse\n> that problem with some weasel-wording.\n\nIt'd probably not be too hard to simply emit multiple commands, one for\neach timeline \"segment\".\n\nI wonder if it'd not be best, independent of whether we build in this\nverification, to include that metadata in the manifest file. That's for\nsure better than having to build a separate tool to parse timeline\nhistory files.\n\nI think it wouldn't be too hard to compute that information while taking\nthe base backup. We know the end timeline (ThisTimeLineID), so we can\njust call readTimeLineHistory(ThisTimeLineID). Which should then allow\nfor something pretty trivial along the lines of\n\ntimelines = readTimeLineHistory(ThisTimeLineID);\nlast_start = InvalidXLogRecPtr;\nforeach(lc, timelines)\n{\n TimeLineHistoryEntry *he = lfirst(lc);\n\n if (he->end < startptr)\n continue;\n\n //\n manifest_emit_wal_range(Min(he->begin, startptr), he->end);\n last_start = he->end;\n}\n\nif (last_start == InvalidXLogRecPtr)\n start = startptr;\nelse\n start = last_start;\n\nmanifest_emit_wal_range(start, endptr);\n\n\nBtw, just in case somebody suggests it: I don't think it's possible to\ncompute the WAL checksums at this point. In stream mode WAL very well\nmight already have been removed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Mar 2020 11:59:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 2:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Between those two, I would use \"pg_validatebackup\" if there's a fair chance it\n> > will end up doing the pg_waldump check. Otherwise, I would use\n> > \"pg_validatemanifest\".\n>\n> +1.\n\nI guess I'd like to be clear here that I have no fundamental\ndisagreement with taking this tool in any direction that people would\nlike it to go. For me it's just a question of timing. Feature freeze\nis now a week or so away, and nothing complicated is going to get done\nin that time. If we can all agree on something simple based on\nAndres's recent proposal, cool, but I'm not yet sure that will be the\ncase, so what's plan B? We could decide that what I have here is just\ntoo little to be a viable facility on its own, but I think Stephen is\nthe only one taking that position. We could release it as\npg_validatemanifest with a plan to rename it if other backup-related\nchecks are added later. We could release it as pg_validatebackup with\nthe idea to avoid having to rename it when more backup-related checks\nare added later, but with a greater possibility of confusion in the\nmeantime and no hard guarantee that anyone will actually develop such\nchecks. We could put it in to pg_checksums, but I think that's really\nbacking ourselves into a corner: if backup validation develops other\nchecks that are not checksum-related, what then? I'd much rather\ngamble on keeping things together by topic (backup) than technology\nused internally (checksum). Putting it into pg_basebackup is another\noption, and would avoid that problem, but it's not my preferred\noption, because as I noted before, I think the command-line options\nwill get confusing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 30 Mar 2020 15:04:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-30 15:04:55 -0400, Robert Haas wrote:\n> I guess I'd like to be clear here that I have no fundamental\n> disagreement with taking this tool in any direction that people would\n> like it to go. For me it's just a question of timing. Feature freeze\n> is now a week or so away, and nothing complicated is going to get done\n> in that time. If we can all agree on something simple based on\n> Andres's recent proposal, cool, but I'm not yet sure that will be the\n> case, so what's plan B? We could decide that what I have here is just\n> too little to be a viable facility on its own, but I think Stephen is\n> the only one taking that position. We could release it as\n> pg_validatemanifest with a plan to rename it if other backup-related\n> checks are added later. We could release it as pg_validatebackup with\n> the idea to avoid having to rename it when more backup-related checks\n> are added later, but with a greater possibility of confusion in the\n> meantime and no hard guarantee that anyone will actually develop such\n> checks. We could put it in to pg_checksums, but I think that's really\n> backing ourselves into a corner: if backup validation develops other\n> checks that are not checksum-related, what then? I'd much rather\n> gamble on keeping things together by topic (backup) than technology\n> used internally (checksum). Putting it into pg_basebackup is another\n> option, and would avoid that problem, but it's not my preferred\n> option, because as I noted before, I think the command-line options\n> will get confusing.\n\nI'm mildly inclined to name it pg_validate, pg_validate_dbdir or\nsuch. And eventually (definitely not this release) subsume pg_checksums\nin it. That way we can add other checkers too.\n\nI don't really see a point in ending up with lots of different commands\nover time. Partially because there's probably plenty checks where the\noverall cost can be drastically reduced by combining IO. Partially\nbecause there's probably plenty shareable infrastructure. And partially\nbecause I think it makes discovery for users a lot easier.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Mar 2020 12:16:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 2:59 PM Andres Freund <andres@anarazel.de> wrote:\n> I wonder if it'd not be best, independent of whether we build in this\n> verification, to include that metadata in the manifest file. That's for\n> sure better than having to build a separate tool to parse timeline\n> history files.\n\nI don't think that's better, or at least not \"for sure better\". The\nbackup_label going to include the START TIMELINE, and if -Xfetch is\nused, we're also going to have all the timeline history files. If the\nbackup manifest includes those same pieces of information, then we've\ngot two sources of truth: one copy in the files the server's actually\ngoing to read, and another copy in the backup_manifest which we're\ngoing to potentially use for validation but ignore at runtime. That\nseems not great.\n\n> Btw, just in case somebody suggests it: I don't think it's possible to\n> compute the WAL checksums at this point. In stream mode WAL very well\n> might already have been removed.\n\nRight.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 30 Mar 2020 15:23:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 3:51 PM David Steele <david@pgmasters.net> wrote:\n> There appear to be conflicts with 67e0adfb3f98:\n\nRebased.\n\n> > + Specifies the algorithm that should be used to checksum\n> each file\n> > + for purposes of the backup manifest. Currently, the available\n>\n> perhaps \"for inclusion in the backup manifest\"? Anyway, I think this\n> sentence is awkward.\n\nI changed it to \"Specifies the checksum algorithm that should be\napplied to each file included in the backup manifest.\" I hope that's\nbetter. I also added, in both of the places where this text occurs, an\nexplanation a little higher up of what a backup manifest actually is.\n\n> > + because the files themselves do not need to read.\n>\n> should be \"need to be read\".\n\nFixed.\n\n> > + the manifest itself will always contain a\n> <literal>SHA256</literal>\n>\n> I think just \"the manifest will always contain\" is fine.\n\nOK.\n\n> > + manifeste itself, and is therefore ignored. Note that the\n> manifest\n>\n> typo \"manifeste\", perhaps remove itself.\n\nOK, fixed.\n\n> > { \"Path\": \"backup_label\", \"Size\": 224, \"Last-Modified\": \"2020-03-27\n> 18:33:18 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"b914bec9\" },\n>\n> Storing the checksum type with each file seems pretty redundant.\n> Perhaps that could go in the header? You could always override if a\n> specific file had a different checksum type, though that seems unlikely.\n>\n> In general it might be good to go with shorter keys: \"mod\", \"chk\", etc.\n> Manifests can get pretty big and that's a lot of extra bytes.\n>\n> I'm also partial to using epoch time in the manifest because it is\n> generally easier for programs to work with. But, human-readable doesn't\n> suck, either.\n\nIt doesn't seem impossible for it to come up; for example, consider a\nfile-level incremental backup facility. You might retain whatever\nchecksums you have for the unchanged files (to avoid rereading them)\nand add checksums for modified or added files.\n\nI am not convinced that minimizing the size of the file here is a\nparticularly important goal, because I don't think it's going to get\nthat big in normal cases. I also think having the keys and values be\neasily understandable by human being is a plus. If we really want a\nminimal format without redundancy, we should've gone with what I\nproposed before (though admittedly that could've been tamped down even\nfurther if we'd cared to squeeze, which I didn't think was important\nthen either).\n\n>\n> > if (maxrate > 0)\n> > maxrate_clause = psprintf(\"MAX_RATE %u\", maxrate);\n> > + if (manifest)\n>\n> A linefeed here would be nice.\n\nAdded.\n\n> > + manifestfile *tabent;\n>\n> This is an odd name. A holdover from the tab-delimited version?\n\nNo, it was meant to stand for table entry. (Now we find out what\nhappens when I break my own rule against using abbreviated words.)\n\n> > + printf(_(\"Usage:\\n %s [OPTION]... BACKUPDIR\\n\\n\"), progname);\n>\n> When I ran pg_validatebackup I expected to use -D to specify the backup\n> dir since pg_basebackup does. On the other hand -D is weird because I\n> *really* expect that to be the pg data dir.\n>\n> But, do we want this to be different from pg_basebackup?\n\nI think it's pretty distinguishable, because pg_basebackup needs an\ninput (server) and an output (directory), whereas pg_validatebackup\nonly needs one. I don't really care if we want to change it, but I was\nthinking of this as being more analogous to, say, pg_resetwal.\nGranted, that's a danger-don't-use-this tool and this isn't, but I\ndon't think we want the -D-is-optional behavior that tools like pg_ctl\nhave, because having a tool that isn't supposed to be used on a\nrunning cluster default to $PGDATA seems inadvisable. And if the\nargument is mandatory then it's not clear to me why we should make\npeople type -D in front of it.\n\n> > + checksum_length = checksum_string_length / 2;\n>\n> This check is defeated if a single character is added the to checksum.\n>\n> Not too big a deal since you still get an error, but still.\n\nI don't see what the problem is here. We speculatively divide by two\nand allocate memory assuming the value that it was even, but then\nbefore doing anything critical we bail out if it was actually odd.\nThat's harmless. We could get around it by saying:\n\nif (checksum_string_length % 2 != 0)\n context->error_cb(...);\nchecksum_length = checksum_string_length / 2;\nchecksum_payload = palloc(checksum_length);\nif (!hexdecode_string(...))\n context->error_cb(...);\n\n...but that would be adding additional code, and error messages, for\nwhat's basically a can't-happen-unless-the-user-is-messing-with-us\ncase.\n\n> > + * Verify that the manifest checksum is correct.\n>\n> This is not working the way I would expect -- I could freely modify the\n> manifest without getting a checksum error on the manifest. For example:\n>\n> $ /home/vagrant/test/pg/bin/pg_validatebackup test/backup3\n> pg_validatebackup: fatal: invalid checksum for file \"backup_label\":\n> \"408901e0814f40f8ceb7796309a59c7248458325a21941e7c55568e381f53831?\"\n>\n> So, if I deleted the entry above, I got a manifest checksum error. But\n> if I just modified the checksum I get a file checksum error with no\n> manifest checksum error.\n>\n> I would prefer a manifest checksum error in all cases where it is wrong,\n> unless --exit-on-error is specified.\n\nI think I would too, but I'm confused as to what you're doing, because\nif I just modified the manifest -- by deleting a file, for example, or\nchanging the checksum of a file, I just get:\n\npg_validatebackup: fatal: manifest checksum mismatch\n\nI'm confused as to why you're not seeing that. What's the exact\nsequence of steps?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 30 Mar 2020 16:16:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Sun, Mar 29, 2020 at 9:05 PM David Steele <david@pgmasters.net> wrote:\n> Yeah, that seems reasonable.\n>\n> In our case backups are nearly always compressed and/or encrypted so\n> even checking the original size is a bit of work. Getting the checksum\n> at the same time seems like an obvious win.\n\nMakes sense. If this even got extended so it could read from tar-files\ninstead of the filesystem directly, we'd surely want to take the\nopposite approach and just make a single pass. I'm not sure whether\nit's worth doing that at some point in the future, but it might be. If\nwe're going to add the capability to compress or encrypt backups to\npg_basebackup, we might want to do that first, and then make this tool\nhandle all of those formats in one go.\n\n(As always, I don't have the ability to control how arbitrary\ndevelopers spend their development time... so this is just a thought.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 30 Mar 2020 16:43:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-30 15:23:08 -0400, Robert Haas wrote:\n> On Mon, Mar 30, 2020 at 2:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > I wonder if it'd not be best, independent of whether we build in this\n> > verification, to include that metadata in the manifest file. That's for\n> > sure better than having to build a separate tool to parse timeline\n> > history files.\n> \n> I don't think that's better, or at least not \"for sure better\". The\n> backup_label going to include the START TIMELINE, and if -Xfetch is\n> used, we're also going to have all the timeline history files. If the\n> backup manifest includes those same pieces of information, then we've\n> got two sources of truth: one copy in the files the server's actually\n> going to read, and another copy in the backup_manifest which we're\n> going to potentially use for validation but ignore at runtime. That\n> seems not great.\n\nThe data in the backup label isn't sufficient though. Without having\nparsed the timeline file there's no way to verify that the correct WAL\nis present. I guess we can also add client side tools to parse\ntimelines, add command the fetch all of the required files, and then\ninterpret that somehow.\n\nBut that seems much more complicated.\n\nImo it makes sense to want to be able verify that WAL looks correct even\ntransporting WAL using another method (say archiving) and thus using\npg_basebackup's -Xnone.\n\nFor the manifest to actually list what's required for the base backup\ndoesn't seem redundant to me. Imo it makes the manifest file make a good\nbit more sense, since afterwards it actually describes the whole base\nbackup.\n\nTaking the redundancy agreement a bit further you can argue that we\ndon't need a list of relation files at all, since they're in the catalog\n:P. Obviously going to that extreme doesn't make all that much\nsense... But I do think it's a second source of truth that's independent\nof what the backends actually are going to read.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Mar 2020 14:08:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/30/20 5:08 PM, Andres Freund wrote:\n> \n> The data in the backup label isn't sufficient though. Without having\n> parsed the timeline file there's no way to verify that the correct WAL\n> is present. I guess we can also add client side tools to parse\n> timelines, add command the fetch all of the required files, and then\n> interpret that somehow.\n> \n> But that seems much more complicated.\n> \n> Imo it makes sense to want to be able verify that WAL looks correct even\n> transporting WAL using another method (say archiving) and thus using\n> pg_basebackup's -Xnone.\n> \n> For the manifest to actually list what's required for the base backup\n> doesn't seem redundant to me. Imo it makes the manifest file make a good\n> bit more sense, since afterwards it actually describes the whole base\n> backup.\n\nFWIW, pgBackRest stores the backup WAL stop/start in the manifest. To \nget this information after the backup is complete requires parsing the \n.backup file which doesn't get stored in the backup directory by \npg_basebackup. As far as I know, this is only accessibly to solutions \nthat implement archive_command. So, pgBackRest could do that but it \nseems far more trouble than it is worth.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 30 Mar 2020 18:56:58 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/30/20 4:16 PM, Robert Haas wrote:\n> On Fri, Mar 27, 2020 at 3:51 PM David Steele <david@pgmasters.net> wrote:\n> \n>> > { \"Path\": \"backup_label\", \"Size\": 224, \"Last-Modified\": \"2020-03-27\n>> 18:33:18 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"b914bec9\" },\n>>\n>> Storing the checksum type with each file seems pretty redundant.\n>> Perhaps that could go in the header? You could always override if a\n>> specific file had a different checksum type, though that seems unlikely.\n>>\n>> In general it might be good to go with shorter keys: \"mod\", \"chk\", etc.\n>> Manifests can get pretty big and that's a lot of extra bytes.\n>>\n>> I'm also partial to using epoch time in the manifest because it is\n>> generally easier for programs to work with. But, human-readable doesn't\n>> suck, either.\n> \n> It doesn't seem impossible for it to come up; for example, consider a\n> file-level incremental backup facility. You might retain whatever\n> checksums you have for the unchanged files (to avoid rereading them)\n> and add checksums for modified or added files.\n\nOK.\n\n> I am not convinced that minimizing the size of the file here is a\n> particularly important goal, because I don't think it's going to get\n> that big in normal cases. I also think having the keys and values be\n> easily understandable by human being is a plus. If we really want a\n> minimal format without redundancy, we should've gone with what I\n> proposed before (though admittedly that could've been tamped down even\n> further if we'd cared to squeeze, which I didn't think was important\n> then either).\n\nWell, normal cases is the key. But fine, in general we have found that \nthe in memory representation is more important in terms of supporting \nclusters with very large numbers of files.\n\n>> When I ran pg_validatebackup I expected to use -D to specify the backup\n>> dir since pg_basebackup does. On the other hand -D is weird because I\n>> *really* expect that to be the pg data dir.\n>>\n>> But, do we want this to be different from pg_basebackup?\n> \n> I think it's pretty distinguishable, because pg_basebackup needs an\n> input (server) and an output (directory), whereas pg_validatebackup\n> only needs one. I don't really care if we want to change it, but I was\n> thinking of this as being more analogous to, say, pg_resetwal.\n> Granted, that's a danger-don't-use-this tool and this isn't, but I\n> don't think we want the -D-is-optional behavior that tools like pg_ctl\n> have, because having a tool that isn't supposed to be used on a\n> running cluster default to $PGDATA seems inadvisable. And if the\n> argument is mandatory then it's not clear to me why we should make\n> people type -D in front of it.\n\nHonestly I think pg_basebackup is the confusing one, because in most \ncases -D points at the running cluster dir. So, OK.\n\n>> > + checksum_length = checksum_string_length / 2;\n>>\n>> This check is defeated if a single character is added the to checksum.\n>>\n>> Not too big a deal since you still get an error, but still.\n> \n> I don't see what the problem is here. We speculatively divide by two\n> and allocate memory assuming the value that it was even, but then\n> before doing anything critical we bail out if it was actually odd.\n> That's harmless. We could get around it by saying:\n> \n> if (checksum_string_length % 2 != 0)\n> context->error_cb(...);\n> checksum_length = checksum_string_length / 2;\n> checksum_payload = palloc(checksum_length);\n> if (!hexdecode_string(...))\n> context->error_cb(...);\n> \n> ...but that would be adding additional code, and error messages, for\n> what's basically a can't-happen-unless-the-user-is-messing-with-us\n> case.\n\nSorry, pasted the wrong code and even then still didn't get it quite \nright.\n\nThe problem:\n\nIf I remove an even characters from a checksum it appears the checksum \npasses but the manifest checksum fails:\n\n$ pg_basebackup -D test/backup5 --manifest-checksums=SHA256\n\n$ vi test/backup5/backup_manifest\n * Remove two characters from the checksum of backup_label\n\n$ pg_validatebackup test/backup5\n\npg_validatebackup: fatal: manifest checksum mismatch\n\nBut if I add any number of characters or remove an odd number of \ncharacters I get:\n\npg_validatebackup: fatal: invalid checksum for file \"backup_label\": \n\"a98e9164fd59d498d14cfdf19c67d1c2208a30e7b939d1b4a09f524c7adfc11fXX\"\n\nand no manifest checksum failure.\n\n>> > + * Verify that the manifest checksum is correct.\n>>\n>> This is not working the way I would expect -- I could freely modify the\n>> manifest without getting a checksum error on the manifest. For example:\n>>\n>> $ /home/vagrant/test/pg/bin/pg_validatebackup test/backup3\n>> pg_validatebackup: fatal: invalid checksum for file \"backup_label\":\n>> \"408901e0814f40f8ceb7796309a59c7248458325a21941e7c55568e381f53831?\"\n>>\n>> So, if I deleted the entry above, I got a manifest checksum error. But\n>> if I just modified the checksum I get a file checksum error with no\n>> manifest checksum error.\n>>\n>> I would prefer a manifest checksum error in all cases where it is wrong,\n>> unless --exit-on-error is specified.\n> \n> I think I would too, but I'm confused as to what you're doing, because\n> if I just modified the manifest -- by deleting a file, for example, or\n> changing the checksum of a file, I just get:\n> \n> pg_validatebackup: fatal: manifest checksum mismatch\n> \n> I'm confused as to why you're not seeing that. What's the exact\n> sequence of steps?\n\n$ pg_basebackup -D test/backup5 --manifest-checksums=SHA256\n\n$ vi test/backup5/backup_manifest\n * Add 'X' to the checksum of backup_label\n\n$ pg_validatebackup test/backup5\npg_validatebackup: fatal: invalid checksum for file \"backup_label\": \n\"a98e9164fd59d498d14cfdf19c67d1c2208a30e7b939d1b4a09f524c7adfc11fX\"\n\nNo mention of the manifest checksum being invalid. But if I remove the \nbackup label file from the manifest:\n\npg_validatebackup: fatal: manifest checksum mismatch\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 30 Mar 2020 19:24:08 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 12:16:31PM -0700, Andres Freund wrote:\n> On 2020-03-30 15:04:55 -0400, Robert Haas wrote:\n> > I guess I'd like to be clear here that I have no fundamental\n> > disagreement with taking this tool in any direction that people would\n> > like it to go. For me it's just a question of timing. Feature freeze\n> > is now a week or so away, and nothing complicated is going to get done\n> > in that time. If we can all agree on something simple based on\n> > Andres's recent proposal, cool, but I'm not yet sure that will be the\n> > case, so what's plan B? We could decide that what I have here is just\n> > too little to be a viable facility on its own, but I think Stephen is\n> > the only one taking that position. We could release it as\n> > pg_validatemanifest with a plan to rename it if other backup-related\n> > checks are added later. We could release it as pg_validatebackup with\n> > the idea to avoid having to rename it when more backup-related checks\n> > are added later, but with a greater possibility of confusion in the\n> > meantime and no hard guarantee that anyone will actually develop such\n> > checks. We could put it in to pg_checksums, but I think that's really\n> > backing ourselves into a corner: if backup validation develops other\n> > checks that are not checksum-related, what then? I'd much rather\n> > gamble on keeping things together by topic (backup) than technology\n> > used internally (checksum). Putting it into pg_basebackup is another\n> > option, and would avoid that problem, but it's not my preferred\n> > option, because as I noted before, I think the command-line options\n> > will get confusing.\n> \n> I'm mildly inclined to name it pg_validate, pg_validate_dbdir or\n> such. And eventually (definitely not this release) subsume pg_checksums\n> in it. That way we can add other checkers too.\n\nWorks for me; of those two, I prefer pg_validate.\n\n\n",
"msg_date": "Mon, 30 Mar 2020 22:40:14 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Mar 31, 2020 at 11:10 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Mon, Mar 30, 2020 at 12:16:31PM -0700, Andres Freund wrote:\n> > On 2020-03-30 15:04:55 -0400, Robert Haas wrote:\n> > > I guess I'd like to be clear here that I have no fundamental\n> > > disagreement with taking this tool in any direction that people would\n> > > like it to go. For me it's just a question of timing. Feature freeze\n> > > is now a week or so away, and nothing complicated is going to get done\n> > > in that time. If we can all agree on something simple based on\n> > > Andres's recent proposal, cool, but I'm not yet sure that will be the\n> > > case, so what's plan B? We could decide that what I have here is just\n> > > too little to be a viable facility on its own, but I think Stephen is\n> > > the only one taking that position. We could release it as\n> > > pg_validatemanifest with a plan to rename it if other backup-related\n> > > checks are added later. We could release it as pg_validatebackup with\n> > > the idea to avoid having to rename it when more backup-related checks\n> > > are added later, but with a greater possibility of confusion in the\n> > > meantime and no hard guarantee that anyone will actually develop such\n> > > checks. We could put it in to pg_checksums, but I think that's really\n> > > backing ourselves into a corner: if backup validation develops other\n> > > checks that are not checksum-related, what then? I'd much rather\n> > > gamble on keeping things together by topic (backup) than technology\n> > > used internally (checksum). Putting it into pg_basebackup is another\n> > > option, and would avoid that problem, but it's not my preferred\n> > > option, because as I noted before, I think the command-line options\n> > > will get confusing.\n> >\n> > I'm mildly inclined to name it pg_validate, pg_validate_dbdir or\n> > such. And eventually (definitely not this release) subsume pg_checksums\n> > in it. That way we can add other checkers too.\n>\n> Works for me; of those two, I prefer pg_validate.\n>\n\npg_validate sounds like a tool with a much bigger purpose. I think\neven things like amcheck could also fall under it.\n\nThis patch has two parts (a) Generate backup manifests for base\nbackups, and (b) Validate backup (manifest). It seems to me that\nthere are not many things pending for (a), can't we commit that first\nor is it the case that (a) depends on (b)? This is *not* a suggestion\nto leave pg_validatebackup from this release rather just to commit if\nsomething is ready and meaningful on its own.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 Mar 2020 14:56:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 7:24 PM David Steele <david@pgmasters.net> wrote:\n> > I'm confused as to why you're not seeing that. What's the exact\n> > sequence of steps?\n>\n> $ pg_basebackup -D test/backup5 --manifest-checksums=SHA256\n>\n> $ vi test/backup5/backup_manifest\n> * Add 'X' to the checksum of backup_label\n>\n> $ pg_validatebackup test/backup5\n> pg_validatebackup: fatal: invalid checksum for file \"backup_label\":\n> \"a98e9164fd59d498d14cfdf19c67d1c2208a30e7b939d1b4a09f524c7adfc11fX\"\n>\n> No mention of the manifest checksum being invalid. But if I remove the\n> backup label file from the manifest:\n>\n> pg_validatebackup: fatal: manifest checksum mismatch\n\nOh, I see what's happening now. If the checksum is not an even-length\nstring of hexademical characters, it's treated as a syntax error, so\nit bails out at that point. Generally, a syntax error in the manifest\nfile is treated as a fatal error, and you just die right there. You'd\nget the same behavior if you had malformed JSON, like a stray { or }\nor [ or ] someplace that it doesn't belong according to the rules of\nJSON. On the other hand, if you corrupt the checksum by adding AA or\nEE or 54 or some other even-length string of hex characters, then you\nhave (in this code's view) a semantic error rather than a syntax\nerror, so it will finish loading all the manifest data and then bail\nbecause the checksum doesn't match.\n\nWe really can't avoid bailing out early sometimes, because if the file\nis totally malformed at the JSON level, there's just no way to\ncontinue. We could cause this particular error to get treated as a\nsemantic error rather than a syntax error, but I don't really see much\nadvantage in so doing. This way was easier to code, and I don't think\nit really matters which error we find first.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 31 Mar 2020 07:57:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Amit Kapila (amit.kapila16@gmail.com) wrote:\n> On Tue, Mar 31, 2020 at 11:10 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Mon, Mar 30, 2020 at 12:16:31PM -0700, Andres Freund wrote:\n> > > On 2020-03-30 15:04:55 -0400, Robert Haas wrote:\n> > > > I guess I'd like to be clear here that I have no fundamental\n> > > > disagreement with taking this tool in any direction that people would\n> > > > like it to go. For me it's just a question of timing. Feature freeze\n> > > > is now a week or so away, and nothing complicated is going to get done\n> > > > in that time. If we can all agree on something simple based on\n> > > > Andres's recent proposal, cool, but I'm not yet sure that will be the\n> > > > case, so what's plan B? We could decide that what I have here is just\n> > > > too little to be a viable facility on its own, but I think Stephen is\n> > > > the only one taking that position. We could release it as\n> > > > pg_validatemanifest with a plan to rename it if other backup-related\n> > > > checks are added later. We could release it as pg_validatebackup with\n> > > > the idea to avoid having to rename it when more backup-related checks\n> > > > are added later, but with a greater possibility of confusion in the\n> > > > meantime and no hard guarantee that anyone will actually develop such\n> > > > checks. We could put it in to pg_checksums, but I think that's really\n> > > > backing ourselves into a corner: if backup validation develops other\n> > > > checks that are not checksum-related, what then? I'd much rather\n> > > > gamble on keeping things together by topic (backup) than technology\n> > > > used internally (checksum). Putting it into pg_basebackup is another\n> > > > option, and would avoid that problem, but it's not my preferred\n> > > > option, because as I noted before, I think the command-line options\n> > > > will get confusing.\n> > >\n> > > I'm mildly inclined to name it pg_validate, pg_validate_dbdir or\n> > > such. And eventually (definitely not this release) subsume pg_checksums\n> > > in it. That way we can add other checkers too.\n> >\n> > Works for me; of those two, I prefer pg_validate.\n> \n> pg_validate sounds like a tool with a much bigger purpose. I think\n> even things like amcheck could also fall under it.\n\nYeah, I tend to agree with this.\n\n> This patch has two parts (a) Generate backup manifests for base\n> backups, and (b) Validate backup (manifest). It seems to me that\n> there are not many things pending for (a), can't we commit that first\n> or is it the case that (a) depends on (b)? This is *not* a suggestion\n> to leave pg_validatebackup from this release rather just to commit if\n> something is ready and meaningful on its own.\n\nI suspect the idea here is that we don't really want to commit something\nthat nothing is actually using, and that's understandable and justified\nhere- consider that even in this recent discussion there was talk that\nmaybe we should have included permissions and ownership in the manifest,\nor starting and ending WAL positions, so that they'd be able to be\nchecked by this tool more easily (and because it's just useful to have\nall that info in one place... I don't really agree with the concerns\nthat it's an issue for static information like that to be duplicated).\n\nIn other words, while the manifest creation code might be something we\ncould commit, without a tool to use it (which does all the things that\nwe think it needs to, to perform some high-level task, such as \"validate\na backup\") we don't know that the manifest that's actually generated is\nreally up to snuff and has what it needs to have to perform that task.\n\nI had been hoping that the discussion Andres was leading regarding\nleveraging pg_waldump (or maybe just code from it..) would get us to a\npoint where pg_validatebackup would check that we have all of the WAL\nneeded for the backup to be consistent and that it would then verify the\ninternal checksums of the WAL. That would certainly be a good solution\nfor this time around, in my view, and is already all existing\nclient-side code. I do think we'd want to have a note about how we\nverify pg_wal differently from the other files which are in the\nmanifest.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 31 Mar 2020 07:58:15 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 2:59 PM Andres Freund <andres@anarazel.de> wrote:\n> I think it wouldn't be too hard to compute that information while taking\n> the base backup. We know the end timeline (ThisTimeLineID), so we can\n> just call readTimeLineHistory(ThisTimeLineID). Which should then allow\n> for something pretty trivial along the lines of\n>\n> timelines = readTimeLineHistory(ThisTimeLineID);\n> last_start = InvalidXLogRecPtr;\n> foreach(lc, timelines)\n> {\n> TimeLineHistoryEntry *he = lfirst(lc);\n>\n> if (he->end < startptr)\n> continue;\n>\n> //\n> manifest_emit_wal_range(Min(he->begin, startptr), he->end);\n> last_start = he->end;\n> }\n>\n> if (last_start == InvalidXlogRecPtr)\n> start = startptr;\n> else\n> start = last_start;\n>\n> manifest_emit_wal_range(start, entptr);\n\nI made an attempt to implement this. In the attached patch set, 0001\nand 0002 are (I think) unmodified from the last version. 0003 is a\nslightly-rejiggered version of your new pg_waldump option. 0004 whacks\n0002 around so that the WAL ranges are included in the manifest and\npg_validatebackup tries to run pg_waldump for each WAL range. It\nappears to work in light testing, but I haven't yet (1) tested it\nextensively, (2) written good regression tests for it above and beyond\nwhat pg_validatebackup had already, or (3) updated the documentation.\nI'm going to work on those things. I would appreciate *very timely*\nfeedback on anything people do or do not like about this, because I\nwant to commit this patch set by the end of the work week and that\nisn't very far away. I would also appreciate if people would bear in\nmind the principle that half a loaf is better than none, and further\nimprovements can be made in future releases.\n\nAs part of my light testing, I tried promoting a standby that was\nrunning pg_basebackup, and found that pg_basebackup failed like this:\n\npg_basebackup: error: could not get COPY data stream: ERROR: the\nstandby was promoted during online backup\nHINT: This means that the backup being taken is corrupt and should\nnot be used. Try taking another online backup.\npg_basebackup: removing data directory \"/Users/rhaas/pgslave2\"\n\nMy first thought was that this error message is hard to reconcile with\nthis comment:\n\n /*\n * Send timeline history files too. Only the latest timeline history\n * file is required for recovery, and even that only if there happens\n * to be a timeline switch in the first WAL segment that contains the\n * checkpoint record, or if we're taking a base backup from a standby\n * server and the target timeline changes while the backup is taken.\n * But they are small and highly useful for debugging purposes, so\n * better include them all, always.\n */\n\nBut then it occurred to me that this might be a cascading standby.\nMaybe the original master died and this machine's master got promoted,\nso it has to follow a timeline switch but doesn't itself get promoted.\nI think I might try to test out that scenario and see what happens,\nbut I haven't done so as of this writing. Regardless, it seems like a\nreally good idea to store a list of WAL ranges rather than a single\nstart/end/timeline, because even if it's impossible today it might\nbecome possible in the future. Still, unless there's an easy way to\nset up a test scenario where multiple WAL ranges need to be verified,\nit may be hard to test that this code actually behaves properly.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 31 Mar 2020 14:10:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
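The timeline walk sketched in Andres's pseudocode at the top of this message can be written out as a small standalone C sketch. The `HistoryEntry` and `WalRange` structs and the `compute_wal_ranges` name are simplifications invented here for illustration; the real backend code works with `readTimeLineHistory()` and PostgreSQL's `List` type rather than plain arrays:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;
#define InvalidXLogRecPtr ((XLogRecPtr) 0)

/* Simplified stand-in for the backend's TimeLineHistoryEntry. */
typedef struct
{
    uint32_t    tli;            /* timeline ID */
    XLogRecPtr  begin;          /* first LSN on this timeline */
    XLogRecPtr  end;            /* switch point, or InvalidXLogRecPtr */
} HistoryEntry;

typedef struct
{
    uint32_t    tli;
    XLogRecPtr  start_lsn;
    XLogRecPtr  end_lsn;
} WalRange;

/*
 * Split the backup's WAL interval [startptr, endptr) into per-timeline
 * ranges, skipping timelines that ended before the backup started.
 * Returns the number of ranges written into 'out'.
 */
static int
compute_wal_ranges(const HistoryEntry *timelines, int ntimelines,
                   XLogRecPtr startptr, XLogRecPtr endptr,
                   WalRange *out)
{
    int         n = 0;

    for (int i = 0; i < ntimelines; i++)
    {
        const HistoryEntry *he = &timelines[i];
        XLogRecPtr  range_start;

        /* This timeline ended before the backup started; nothing needed. */
        if (he->end != InvalidXLogRecPtr && he->end < startptr)
            continue;

        range_start = (he->begin > startptr) ? he->begin : startptr;

        /* The current timeline has no switch point; run to endptr. */
        if (he->end == InvalidXLogRecPtr || he->end > endptr)
        {
            out[n].tli = he->tli;
            out[n].start_lsn = range_start;
            out[n].end_lsn = endptr;
            n++;
            break;
        }

        out[n].tli = he->tli;
        out[n].start_lsn = range_start;
        out[n].end_lsn = he->end;
        n++;
    }
    return n;
}
```

The result is exactly the per-timeline range list the manifest needs: each range names a timeline and the portion of its WAL that replay of this backup would have to cross.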
{
"msg_contents": "Hi,\n\nOn 2020-03-31 14:10:34 -0400, Robert Haas wrote:\n> I made an attempt to implement this.\n\nAwesome!\n\n\n> In the attached patch set, 0001 I'm going to work on those things. I\n> would appreciate *very timely* feedback on anything people do or do\n> not like about this, because I want to commit this patch set by the\n> end of the work week and that isn't very far away. I would also\n> appreciate if people would bear in mind the principle that half a loaf\n> is better than none, and further improvements can be made in future\n> releases.\n> \n> As part of my light testing, I tried promoting a standby that was\n> running pg_basebackup, and found that pg_basebackup failed like this:\n> \n> pg_basebackup: error: could not get COPY data stream: ERROR: the\n> standby was promoted during online backup\n> HINT: This means that the backup being taken is corrupt and should\n> not be used. Try taking another online backup.\n> pg_basebackup: removing data directory \"/Users/rhaas/pgslave2\"\n> \n> My first thought was that this error message is hard to reconcile with\n> this comment:\n> \n> /*\n> * Send timeline history files too. Only the latest timeline history\n> * file is required for recovery, and even that only if there happens\n> * to be a timeline switch in the first WAL segment that contains the\n> * checkpoint record, or if we're taking a base backup from a standby\n> * server and the target timeline changes while the backup is taken.\n> * But they are small and highly useful for debugging purposes, so\n> * better include them all, always.\n> */\n> \n> But then it occurred to me that this might be a cascading standby.\n\nYea. The check just prevents the walsender's database from being\npromoted:\n\n\t\t/*\n\t\t * Check if the postmaster has signaled us to exit, and abort with an\n\t\t * error in that case. The error handler further up will call\n\t\t * do_pg_abort_backup() for us. 
Also check that if the backup was\n\t\t * started while still in recovery, the server wasn't promoted.\n\t\t * do_pg_stop_backup() will check that too, but it's better to stop\n\t\t * the backup early than continue to the end and fail there.\n\t\t */\n\t\tCHECK_FOR_INTERRUPTS();\n\t\tif (RecoveryInProgress() != backup_started_in_recovery)\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n\t\t\t\t\t errmsg(\"the standby was promoted during online backup\"),\n\t\t\t\t\t errhint(\"This means that the backup being taken is corrupt \"\n\t\t\t\t\t\t\t \"and should not be used. \"\n\t\t\t\t\t\t\t \"Try taking another online backup.\")));\nand\n\n\tif (strcmp(backupfrom, \"standby\") == 0 && !backup_started_in_recovery)\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n\t\t\t\t errmsg(\"the standby was promoted during online backup\"),\n\t\t\t\t errhint(\"This means that the backup being taken is corrupt \"\n\t\t\t\t\t\t \"and should not be used. \"\n\t\t\t\t\t\t \"Try taking another online backup.\")));\n\nSo that just prevents promotions of the current node, afaict.\n\n\n\n> Regardless, it seems like a really good idea to store a list of WAL\n> ranges rather than a single start/end/timeline, because even if it's\n> impossible today it might become possible in the future.\n\nIndeed.\n\n\n> Still, unless there's an easy way to set up a test scenario where\n> multiple WAL ranges need to be verified, it may be hard to test that\n> this code actually behaves properly.\n\nI think it'd be possible to test without a fully cascading setup, by\ncreating an initial base backup, then do some work to create a bunch of\nnew timelines, and then start the initial base backup. That'd have to\nfollow all those timelines. 
Not sure that's better than a cascading\nsetup though.\n\n\n> +/*\n> + * Add information about the WAL that will need to be replayed when restoring\n> + * this backup to the manifest.\n> + */\n> +static void\n> +AddWALInfoToManifest(manifest_info *manifest, XLogRecPtr startptr,\n> +\t\t\t\t\t TimeLineID starttli, XLogRecPtr endptr, TimeLineID endtli)\n> +{\n> +\tList *timelines = readTimeLineHistory(endtli);\n\nshould probably happen after the manifest->buffile check.\n\n\n> +\tListCell *lc;\n> +\tbool\tfirst_wal_range = true;\n> +\tbool\tfound_ending_tli = false;\n> +\n> +\t/* If there is no buffile, then the user doesn't want a manifest. */\n> +\tif (manifest->buffile == NULL)\n> +\t\treturn;\n\nNot really about this patch/function specifically: I wonder if this'd\nlook better if you added ManifestEnabled() macro instead of repeating\nthe comment repeatedly.\n\n\n\n> +\t/* Unless --no-parse-wal was specified, we will need pg_waldump. */\n> +\tif (!no_parse_wal)\n> +\t{\n> +\t\tint\t\tret;\n> +\n> +\t\tpg_waldump_path = pg_malloc(MAXPGPATH);\n> +\t\tret = find_other_exec(argv[0], \"pg_waldump\",\n> +\t\t\t\t\t\t\t \"pg_waldump (PostgreSQL) \" PG_VERSION \"\\n\",\n> +\t\t\t\t\t\t\t pg_waldump_path);\n> +\t\tif (ret < 0)\n> +\t\t{\n> +\t\t\tchar\tfull_path[MAXPGPATH];\n> +\n> +\t\t\tif (find_my_exec(argv[0], full_path) < 0)\n> +\t\t\t\tstrlcpy(full_path, progname, sizeof(full_path));\n> +\t\t\tif (ret == -1)\n> +\t\t\t\tpg_log_fatal(\"The program \\\"%s\\\" is needed by %s but was\\n\"\n> +\t\t\t\t\t\t\t \"not found in the same directory as \\\"%s\\\".\\n\"\n> +\t\t\t\t\t\t\t \"Check your installation.\",\n> +\t\t\t\t\t\t\t \"pg_waldump\", \"pg_validatebackup\", full_path);\n> +\t\t\telse\n> +\t\t\t\tpg_log_fatal(\"The program \\\"%s\\\" was found by \\\"%s\\\" but was\\n\"\n> +\t\t\t\t\t\t\t \"not the same version as %s.\\n\"\n> +\t\t\t\t\t\t\t \"Check your installation.\",\n> +\t\t\t\t\t\t\t \"pg_waldump\", full_path, \"pg_validatebackup\");\n> +\t\t}\n> 
+\t}\n\nISTM, and this can definitely wait for another time, that we should have\none wrapper doing all of this, instead of having quite a few copies of\nvery similar logic to the above.\n\n\n> +/*\n> + * Attempt to parse the WAL files required to restore from backup using\n> + * pg_waldump.\n> + */\n> +static void\n> +parse_required_wal(validator_context *context, char *pg_waldump_path,\n> +\t\t\t\t char *wal_directory, manifest_wal_range *first_wal_range)\n> +{\n> +\tmanifest_wal_range *this_wal_range = first_wal_range;\n> +\n> +\twhile (this_wal_range != NULL)\n> +\t{\n> +\t\tchar *pg_waldump_cmd;\n> +\n> +\t\tpg_waldump_cmd = psprintf(\"\\\"%s\\\" --quiet --path=\\\"%s\\\" --timeline=%u --start=%X/%X --end=%X/%X\\n\",\n> +\t\t\t pg_waldump_path, wal_directory, this_wal_range->tli,\n> +\t\t\t (uint32) (this_wal_range->start_lsn >> 32),\n> +\t\t\t (uint32) this_wal_range->start_lsn,\n> +\t\t\t (uint32) (this_wal_range->end_lsn >> 32),\n> +\t\t\t (uint32) this_wal_range->end_lsn);\n> +\t\tif (system(pg_waldump_cmd) != 0)\n> +\t\t\treport_backup_error(context,\n> +\t\t\t\t\t\t\t\t\"WAL parsing failed for timeline %u\",\n> +\t\t\t\t\t\t\t\tthis_wal_range->tli);\n> +\n> +\t\tthis_wal_range = this_wal_range->next;\n> +\t}\n> +}\n\nShould we have a function to properly escape paths in cases like this?\nNot that it's likely or really problematic, but the quoting for path\ncould be \"circumvented\".\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 31 Mar 2020 15:50:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
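The `%X/%X` notation built by the `psprintf()` call quoted above is the conventional way PostgreSQL tools print a 64-bit LSN: the high 32 bits, a slash, then the low 32 bits. A minimal standalone sketch of formatting and parsing that representation (the function names here are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Format a 64-bit LSN in the "%X/%X" style used by the pg_waldump
 * invocation above: high 32 bits, a slash, low 32 bits.
 */
static void
format_lsn(uint64_t lsn, char *buf, size_t buflen)
{
    snprintf(buf, buflen, "%X/%X",
             (unsigned int) (lsn >> 32), (unsigned int) lsn);
}

/* Parse the same representation back into a 64-bit value. */
static int
parse_lsn(const char *str, uint64_t *lsn)
{
    unsigned int hi,
                lo;

    if (sscanf(str, "%X/%X", &hi, &lo) != 2)
        return -1;
    *lsn = ((uint64_t) hi << 32) | lo;
    return 0;
}
```

Note that the cast to a 32-bit type before printing is what makes the low half come out correctly; printing the raw 64-bit value with `%X` would be undefined.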
{
"msg_contents": "On Tue, Mar 31, 2020 at 03:50:34PM -0700, Andres Freund wrote:\n> On 2020-03-31 14:10:34 -0400, Robert Haas wrote:\n> > +/*\n> > + * Attempt to parse the WAL files required to restore from backup using\n> > + * pg_waldump.\n> > + */\n> > +static void\n> > +parse_required_wal(validator_context *context, char *pg_waldump_path,\n> > +\t\t\t\t char *wal_directory, manifest_wal_range *first_wal_range)\n> > +{\n> > +\tmanifest_wal_range *this_wal_range = first_wal_range;\n> > +\n> > +\twhile (this_wal_range != NULL)\n> > +\t{\n> > +\t\tchar *pg_waldump_cmd;\n> > +\n> > +\t\tpg_waldump_cmd = psprintf(\"\\\"%s\\\" --quiet --path=\\\"%s\\\" --timeline=%u --start=%X/%X --end=%X/%X\\n\",\n> > +\t\t\t pg_waldump_path, wal_directory, this_wal_range->tli,\n> > +\t\t\t (uint32) (this_wal_range->start_lsn >> 32),\n> > +\t\t\t (uint32) this_wal_range->start_lsn,\n> > +\t\t\t (uint32) (this_wal_range->end_lsn >> 32),\n> > +\t\t\t (uint32) this_wal_range->end_lsn);\n> > +\t\tif (system(pg_waldump_cmd) != 0)\n> > +\t\t\treport_backup_error(context,\n> > +\t\t\t\t\t\t\t\t\"WAL parsing failed for timeline %u\",\n> > +\t\t\t\t\t\t\t\tthis_wal_range->tli);\n> > +\n> > +\t\tthis_wal_range = this_wal_range->next;\n> > +\t}\n> > +}\n> \n> Should we have a function to properly escape paths in cases like this?\n> Not that it's likely or really problematic, but the quoting for path\n> could be \"circumvented\".\n\nAre you looking for appendShellString(), or something different?\n\n\n",
"msg_date": "Tue, 31 Mar 2020 22:15:04 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Tue, Mar 31, 2020 at 6:50 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-03-31 14:10:34 -0400, Robert Haas wrote:\n> > I made an attempt to implement this.\n>\n> Awesome!\n\nHere's a new patch set. I haven't fixed the things in your latest\nround of review comments yet, but I did rewrite the documentation for\npg_validatebackup, add documentation for the new pg_waldump option,\nand add regression tests for the new WAL-checking facility of\npg_validatebackup.\n\n0001 - add pg_waldump -q\n0002 - add checksum helpers\n0003 - core backup manifest patch, now with WAL verification included\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 1 Apr 2020 16:47:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-31 22:15:04 -0700, Noah Misch wrote:\n> On Tue, Mar 31, 2020 at 03:50:34PM -0700, Andres Freund wrote:\n> > On 2020-03-31 14:10:34 -0400, Robert Haas wrote:\n> > > +/*\n> > > + * Attempt to parse the WAL files required to restore from backup using\n> > > + * pg_waldump.\n> > > + */\n> > > +static void\n> > > +parse_required_wal(validator_context *context, char *pg_waldump_path,\n> > > +\t\t\t\t char *wal_directory, manifest_wal_range *first_wal_range)\n> > > +{\n> > > +\tmanifest_wal_range *this_wal_range = first_wal_range;\n> > > +\n> > > +\twhile (this_wal_range != NULL)\n> > > +\t{\n> > > +\t\tchar *pg_waldump_cmd;\n> > > +\n> > > +\t\tpg_waldump_cmd = psprintf(\"\\\"%s\\\" --quiet --path=\\\"%s\\\" --timeline=%u --start=%X/%X --end=%X/%X\\n\",\n> > > +\t\t\t pg_waldump_path, wal_directory, this_wal_range->tli,\n> > > +\t\t\t (uint32) (this_wal_range->start_lsn >> 32),\n> > > +\t\t\t (uint32) this_wal_range->start_lsn,\n> > > +\t\t\t (uint32) (this_wal_range->end_lsn >> 32),\n> > > +\t\t\t (uint32) this_wal_range->end_lsn);\n> > > +\t\tif (system(pg_waldump_cmd) != 0)\n> > > +\t\t\treport_backup_error(context,\n> > > +\t\t\t\t\t\t\t\t\"WAL parsing failed for timeline %u\",\n> > > +\t\t\t\t\t\t\t\tthis_wal_range->tli);\n> > > +\n> > > +\t\tthis_wal_range = this_wal_range->next;\n> > > +\t}\n> > > +}\n> > \n> > Should we have a function to properly escape paths in cases like this?\n> > Not that it's likely or really problematic, but the quoting for path\n> > could be \"circumvented\".\n> \n> Are you looking for appendShellString(), or something different?\n\nLooks like that'd be it. Thanks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Apr 2020 13:59:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-31 14:56:07 +0530, Amit Kapila wrote:\n> On Tue, Mar 31, 2020 at 11:10 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Mon, Mar 30, 2020 at 12:16:31PM -0700, Andres Freund wrote:\n> > > On 2020-03-30 15:04:55 -0400, Robert Haas wrote:\n> > > I'm mildly inclined to name it pg_validate, pg_validate_dbdir or\n> > > such. And eventually (definitely not this release) subsume pg_checksums\n> > > in it. That way we can add other checkers too.\n> >\n> > Works for me; of those two, I prefer pg_validate.\n> >\n> \n> pg_validate sounds like a tool with a much bigger purpose. I think\n> even things like amcheck could also fall under it.\n\nIntentionally so. We don't serve our users by collecting a lot of\ndifferently named commands to work with data directories. As I wrote\nabove, the point would be to eventually have that tool also perform\nchecksum validation etc. Potentially even in a single pass over the\ndata directory.\n\n\n> This patch has two parts (a) Generate backup manifests for base\n> backups, and (b) Validate backup (manifest). It seems to me that\n> there are not many things pending for (a), can't we commit that first\n> or is it the case that (a) depends on (b)? This is *not* a suggestion\n> to leave pg_validatebackup from this release rather just to commit if\n> something is ready and meaningful on its own.\n\nIDK, it seems easier to be able to modify both at the same time.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Apr 2020 14:01:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 3/31/20 7:57 AM, Robert Haas wrote:\n> On Mon, Mar 30, 2020 at 7:24 PM David Steele <david@pgmasters.net> wrote:\n>>> I'm confused as to why you're not seeing that. What's the exact\n>>> sequence of steps?\n>>\n>> $ pg_basebackup -D test/backup5 --manifest-checksums=SHA256\n>>\n>> $ vi test/backup5/backup_manifest\n>> * Add 'X' to the checksum of backup_label\n>>\n>> $ pg_validatebackup test/backup5\n>> pg_validatebackup: fatal: invalid checksum for file \"backup_label\":\n>> \"a98e9164fd59d498d14cfdf19c67d1c2208a30e7b939d1b4a09f524c7adfc11fX\"\n>>\n>> No mention of the manifest checksum being invalid. But if I remove the\n>> backup label file from the manifest:\n>>\n>> pg_validatebackup: fatal: manifest checksum mismatch\n> \n> Oh, I see what's happening now. If the checksum is not an even-length\n> string of hexadecimal characters, it's treated as a syntax error, so\n> it bails out at that point. Generally, a syntax error in the manifest\n> file is treated as a fatal error, and you just die right there. You'd\n> get the same behavior if you had malformed JSON, like a stray { or }\n> or [ or ] someplace that it doesn't belong according to the rules of\n> JSON. On the other hand, if you corrupt the checksum by adding AA or\n> EE or 54 or some other even-length string of hex characters, then you\n> have (in this code's view) a semantic error rather than a syntax\n> error, so it will finish loading all the manifest data and then bail\n> because the checksum doesn't match.\n> \n> We really can't avoid bailing out early sometimes, because if the file\n> is totally malformed at the JSON level, there's just no way to\n> continue. We could cause this particular error to get treated as a\n> semantic error rather than a syntax error, but I don't really see much\n> advantage in so doing. 
This way was easier to code, and I don't think\n> it really matters which error we find first.\n\nI think it would be good to know that the manifest checksum is bad in \nall cases because that may well inform other errors.\n\nThat said, I know you have a lot on your plate with this patch so I'm \nnot going to make a fuss about such a minor gripe. Perhaps this can be \nconsidered for future improvement.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 1 Apr 2020 17:19:47 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
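The syntax-versus-semantic distinction Robert describes hinges on whether the checksum string even looks like a checksum. A standalone sketch of that shape check (the function name is invented; the real check lives in pg_validatebackup's manifest-parsing code):

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/*
 * Return 1 if 'str' is a non-empty, even-length string of hexadecimal
 * digits -- the shape a manifest checksum must have before decoding is
 * even attempted.  Anything else (such as a trailing 'X') is a syntax
 * error and causes an early bail-out, as described above.
 */
static int
is_valid_checksum_string(const char *str)
{
    size_t      len = strlen(str);

    if (len == 0 || len % 2 != 0)
        return 0;
    for (size_t i = 0; i < len; i++)
    {
        if (!isxdigit((unsigned char) str[i]))
            return 0;
    }
    return 1;
}
```

This is why appending 'X' to a checksum produces a fatal parse error immediately, while swapping in different hex digits of the same length is only detected later as a mismatch.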
{
"msg_contents": "On Wed, Apr 1, 2020 at 4:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Here's a new patch set. I haven't fixed the things in your latest\n> round of review comments yet, but I did rewrite the documentation for\n> pg_validatebackup, add documentation for the new pg_waldump option,\n> and add regression tests for the new WAL-checking facility of\n> pg_validatebackup.\n>\n> 0001 - add pg_waldump -q\n> 0002 - add checksum helpers\n> 0003 - core backup manifest patch, now with WAL verification included\n\nAnd here's another new patch set. After some experimentation, I was\nable to manually test the timeline-switch-during-a-base-backup case\nand found that it had bugs in both pg_validatebackup and the code I\nadded to the backend's basebackup.c. So I fixed those. It would be\nnice to have automated tests, but you need a large database (so that\nbacking it up takes non-trivial time) and a load on the primary (so\nthat WAL is being replayed during the backup) and there's a race\ncondition (because the backup has to not finish before the cascading\nstandby learns that the upstream has been promoted), so I don't at\npresent see a practical way to automate that. I did verify, in manual\ntesting, that a problem with WAL files on either timeline caused a\nvalidation failure. I also verified that the LSNs at which the standby\nbegan replay and reached consistency matched what was stored in the\nmanifest.\n\nI also implemented Noah's suggestion that we should write the backup\nmanifest under a temporary name and then rename it afterward.\nStephen's original complaint that you could end up with a backup that\nvalidates successfully even though we died before we got the WAL is,\nat this point, moot, because pg_validatebackup is now capable of\nnoticing that the WAL is missing. Nevertheless, this seems like a nice\nbelt-and-suspenders check. 
I was able to position the rename *after*\nwe fsync() the backup directory, as well as after we get all of the\nWAL, so unless those steps complete you'll have backup_manifest.tmp\nrather than backup_manifest. It's true that, if we suffered an OS\ncrash before the fsync() completed and lost some files or some file\ndata, pg_validatebackup ought to fail anyway, but this way it is\nabsolutely certain to fail, and to do so immediately. Likewise for a\nfailure while fetching WAL that manages to leave the output directory\nbehind.\n\nThis version has also had a visit from the pgindent police.\n\nI think this responds to pretty much all of the complaints that I know\nabout and upon which we have a reasonable degree of consensus. There\nare still some things that not everybody is happy about. In\nparticular, Stephen and David are unhappy about using CRC-32C as the\ndefault algorithm, but Andres and Noah both think it's a reasonable\nchoice, even if not as robust as everybody will want. As I agree, I'm\ngoing to stick with that choice.\n\nAlso, there is still some debate about what the tool ought to be\ncalled. My previous suggestion to rename this from pg_validatebackup\nto pg_validatemanifest seems wrong now that WAL validation has been\nadded; in fact, given that we now have two independent sanity checks\non a backup, I'm going to argue that it would be reasonable to extend\nthat by adding more kinds of backup validation, perhaps even including\nthe permissions check that Andres suggested before. I don't plan to\npursue that at present, though. There remains the idea of merging this\nwith some other tool, but I still don't like that. On the one hand,\nit's been suggested that it could be merged into pg_checksums, but I\nthink that is less appealing now that it seems to be growing into a\ngeneral-purpose backup validation tool. It may do things that have\nnothing to do with checksums. 
On the other hand, it's been suggested\nthat it ought to be called pg_validate and that pg_checksums ought to\neventually be merged into it, but I don't think we have sufficient\nconsensus here to commit the project to such a plan. Nobody\nresponsible for the pg_checksums work has endorsed it, for example.\nMoreover, pg_checksums does things other than validation, such as\nenabling and disabling checksums. Therefore, I think it's unclear that\nsuch a plan would achieve a sufficient degree of consensus.\n\nFor my part, I think this is a general issue that is not really this\npatch's problem to solve. We have had multiple discussions over the\nyears about reducing the number of binaries that we ship. We could\nhave a general binary called \"pg\" or similar and use subcommands: pg\ncreatedb, pg basebackup, pg validatebackup, etc. I think such an\napproach is worth considering, though it would certainly be an\nadjustment for everyone. Or we might do something else. But I don't\nwant to deal with that in this patch.\n\nA couple of other minor suggestions have been made: (1) rejigger\nthings to avoid message duplication related to launching external\nbinaries, (2) maybe use appendShellString, and (3) change some details\nof error-reporting related to manifest parsing. I don't believe anyone\nviews these as blockers; (1) and (2) are preexisting issues that this\npatch extends to one new case.\n\nConsidering all the foregoing, I would like to go ahead and commit this stuff.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 2 Apr 2020 13:04:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-04-02 13:04:45 -0400, Robert Haas wrote:\n> And here's another new patch set. After some experimentation, I was\n> able to manually test the timeline-switch-during-a-base-backup case\n> and found that it had bugs in both pg_validatebackup and the code I\n> added to the backend's basebackup.c. So I fixed those.\n\nCool.\n\n\n> It would be\n> nice to have automated tests, but you need a large database (so that\n> backing it up takes non-trivial time) and a load on the primary (so\n> that WAL is being replayed during the backup) and there's a race\n> condition (because the backup has to not finish before the cascading\n> standby learns that the upstream has been promoted), so I don't at\n> present see a practical way to automate that. I did verify, in manual\n> testing, that a problem with WAL files on either timeline caused a\n> validation failure. I also verified that the LSNs at which the standby\n> began replay and reached consistency matched what was stored in the\n> manifest.\n\nI suspect it's possible to control the timing by preventing the\ncheckpoint at the end of recovery from completing within a relevant\ntimeframe. I think configuring a large checkpoint_timeout and using a\nnon-fast base backup ought to do the trick. The state can be advanced by\nseparately triggering an immediate checkpoint? Or by changing the\ncheckpoint_timeout?\n\n\n\n> I also implemented Noah's suggestion that we should write the backup\n> manifest under a temporary name and then rename it afterward.\n> Stephen's original complaint that you could end up with a backup that\n> validates successfully even though we died before we got the WAL is,\n> at this point, moot, because pg_validatebackup is now capable of\n> noticing that the WAL is missing. 
Nevertheless, this seems like a nice\n> belt-and-suspenders check.\n\nYea, it's imo generally a good idea.\n\n\n> I think this responds to pretty much all of the complaints that I know\n> about and upon which we have a reasonable degree of consensus. There\n> are still some things that not everybody is happy about. In\n> particular, Stephen and David are unhappy about using CRC-32C as the\n> default algorithm, but Andres and Noah both think it's a reasonable\n> choice, even if not as robust as everybody will want. As I agree, I'm\n> going to stick with that choice.\n\nI think it might be worth looking, in a later release, at something like\nblake3 for a fast cryptographic checksum. By allowing for instruction\nparallelism (by independently checksumming different blocks in data, and\nonly advancing the \"shared\" checksum separately) it achieves\nconsiderably higher throughput rates.\n\nI suspect we should also look at a better non-crypto hash. xxhash or\nwhatever. Not just for these checksums, but also for in-memory.\n\n\n> Also, there is still some debate about what the tool ought to be\n> called. My previous suggestion to rename this from pg_validatebackup\n> to pg_validatemanifest seems wrong now that WAL validation has been\n> added; in fact, given that we now have two independent sanity checks\n> on a backup, I'm going to argue that it would be reasonable to extend\n> that by adding more kinds of backup validation, perhaps even including\n> the permissions check that Andres suggested before.\n\nFWIW, the only check I'd really like to see in this release is the\ncrosscheck with the file's length and the actually read data (to be able\nto diagnose FS issues).\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Apr 2020 10:23:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
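For reference, CRC-32C, the default the thread has settled on, is ordinary CRC-32 computed with the Castagnoli polynomial (0x1EDC6F41, reflected form 0x82F63B78). A bitwise reference implementation, shown only to make the algorithm concrete; real implementations are table-driven or use the SSE4.2 CRC32 instruction:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Bitwise CRC-32C (Castagnoli): initial value and final XOR are both
 * 0xFFFFFFFF, and bits are processed in reflected order, which is why
 * the reflected polynomial constant 0x82F63B78 appears here.
 */
static uint32_t
crc32c(const void *data, size_t len)
{
    const unsigned char *p = data;
    uint32_t    crc = 0xFFFFFFFF;

    while (len--)
    {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78 & (0 - (crc & 1)));
    }
    return crc ^ 0xFFFFFFFF;
}
```

The standard check value, CRC-32C of the ASCII string "123456789", is 0xE3069283, which is a handy sanity test for any implementation.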
{
"msg_contents": "On Thu, Apr 2, 2020 at 1:23 PM Andres Freund <andres@anarazel.de> wrote:\n> I suspect its possible to control the timing by preventing the\n> checkpoint at the end of recovery from completing within a relevant\n> timeframe. I think configuring a large checkpoint_timeout and using a\n> non-fast base backup ought to do the trick. The state can be advanced by\n> separately triggering an immediate checkpoint? Or by changing the\n> checkpoint_timeout?\n\nThat might make the window fairly wide on normal systems, but I'm not\nsure about Raspberry Pi BF members or things running\nCLOBBER_CACHE_ALWAYS/RECURSIVELY. I guess I could try it.\n\n> I think it might be worth looking, in a later release, at something like\n> blake3 for a fast cryptographic checksum. By allowing for instruction\n> parallelism (by independently checksuming different blocks in data, and\n> only advancing the \"shared\" checksum separately) it achieves\n> considerably higher throughput rates.\n>\n> I suspect we should also look at a better non-crypto hash. xxhash or\n> whatever. Not just for these checksums, but also for in-memory.\n\nI have no problem with that. I don't feel that I am well-placed to\nrecommend for or against specific algorithms. Speed is easy to\nmeasure, but there's also code stability, the license under which\nsomething is released, the quality of the hashes it produces, and the\nextent to which it is cryptographically secure. I'm not an expert in\nany of that stuff, but if we get consensus on something it should be\neasy enough to plug it into this framework. Even changing the default\nwould be no big deal.\n\n> FWIW, the only check I'd really like to see in this release is the\n> crosscheck with the files length and the actually read data (to be able\n> to disagnose FS issues).\n\nNot sure I understand this comment. Isn't that a subset of what the\npatch already does? 
Are you asking for something to be changed?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Apr 2020 14:16:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-04-02 14:16:27 -0400, Robert Haas wrote:\n> On Thu, Apr 2, 2020 at 1:23 PM Andres Freund <andres@anarazel.de> wrote:\n> > I suspect its possible to control the timing by preventing the\n> > checkpoint at the end of recovery from completing within a relevant\n> > timeframe. I think configuring a large checkpoint_timeout and using a\n> > non-fast base backup ought to do the trick. The state can be advanced by\n> > separately triggering an immediate checkpoint? Or by changing the\n> > checkpoint_timeout?\n> \n> That might make the window fairly wide on normal systems, but I'm not\n> sure about Raspberry Pi BF members or things running\n> CLOBBER_CACHE_ALWAYS/RECURSIVELY. I guess I could try it.\n\nYou can set checkpoint_timeout to be a day. If that's not enough, well,\nthen I think we have other problems.\n\n\n> > FWIW, the only check I'd really like to see in this release is the\n> > crosscheck with the files length and the actually read data (to be able\n> > to disagnose FS issues).\n> \n> Not sure I understand this comment. Isn't that a subset of what the\n> patch already does? Are you asking for something to be changed?\n\nYes, I am asking for something to be changed: I'd like the code that\nread()s the file when computing the checksum to add up how many bytes\nwere read, and compare that to the size in the manifest. And if there's\na difference, report an error about that, instead of a checksum failure.\n\nI've repeatedly seen filesystem issues lead to earlier EOFs when\nread()ing than what stat() returns. It'll be pretty annoying to have to\ndebug a general \"checksum failure\", rather than just knowing that\nreading stopped after 100MB of 1GB.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Apr 2020 11:23:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
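Andres's request amounts to counting the bytes actually returned by read() and comparing the total against the manifest's recorded size, so a truncated file is reported as a short read rather than surfacing later as a generic checksum mismatch. A standalone sketch under those assumptions (the function name and error codes are invented for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

enum verify_result
{
    VERIFY_OK,
    VERIFY_SIZE_MISMATCH,
    VERIFY_OPEN_FAILED
};

/*
 * Read 'path' to EOF, counting the bytes actually returned by fread(),
 * and compare the total against the size recorded in the manifest.
 * Reporting a size mismatch separately from a checksum failure is the
 * point: it pinpoints where reading stopped instead of leaving only a
 * generic "checksum failure" to debug.
 */
static enum verify_result
verify_file_length(const char *path, long expected_size, long *bytes_read)
{
    FILE       *fp = fopen(path, "rb");
    char        buf[8192];
    size_t      n;
    long        total = 0;

    if (fp == NULL)
        return VERIFY_OPEN_FAILED;
    while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)
        total += (long) n;      /* a checksum would be updated here too */
    fclose(fp);

    *bytes_read = total;
    return (total == expected_size) ? VERIFY_OK : VERIFY_SIZE_MISMATCH;
}
```

In the actual patch series this became a check inside the checksum-computation loop; the sketch above only demonstrates the accounting.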
{
"msg_contents": "On Thu, Apr 2, 2020 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:\n> > That might make the window fairly wide on normal systems, but I'm not\n> > sure about Raspberry Pi BF members or things running\n> > CLOBBER_CACHE_ALWAYS/RECURSIVELY. I guess I could try it.\n>\n> You can set checkpoint_timeout to be a day. If that's not enough, well,\n> then I think we have other problems.\n\nI'm not sure that's the only issue here, but I'll try it.\n\n> Yes, I am asking for something to be changed: I'd like the code that\n> read()s the file when computing the checksum to add up how many bytes\n> were read, and compare that to the size in the manifest. And if there's\n> a difference report an error about that, instead of a checksum failure.\n>\n> I've repeatedly seen filesystem issues lead to to earlier EOFs when\n> read()ing than what stat() returns. It'll be pretty annoying to have to\n> debug a general \"checksum failure\", rather than just knowing that\n> reading stopped after 100MB of 1GB.\n\nIs the attached 0004 like what you have in mind?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 2 Apr 2020 14:55:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 2020-04-02 14:55:19 -0400, Robert Haas wrote:\n> > Yes, I am asking for something to be changed: I'd like the code that\n> > read()s the file when computing the checksum to add up how many bytes\n> > were read, and compare that to the size in the manifest. And if there's\n> > a difference report an error about that, instead of a checksum failure.\n> >\n> > I've repeatedly seen filesystem issues lead to to earlier EOFs when\n> > read()ing than what stat() returns. It'll be pretty annoying to have to\n> > debug a general \"checksum failure\", rather than just knowing that\n> > reading stopped after 100MB of 1GB.\n> \n> Is 0004 attached like what you have in mind?\n\nYes. Thanks!\n\n- Andres\n\n\n",
"msg_date": "Thu, 2 Apr 2020 12:02:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 4/2/20 1:04 PM, Robert Haas wrote:\n >\n> There\n> are still some things that not everybody is happy about. In\n> particular, Stephen and David are unhappy about using CRC-32C as the\n> default algorithm, but Andres and Noah both think it's a reasonable\n> choice, even if not as robust as everybody will want. As I agree, I'm\n> going to stick with that choice.\n\nYeah, I seem to be on the losing side of this argument, at least for \nnow, so I don't think it should block the commit of this patch. It's an \neasy enough tweak if we change our minds.\n\n> For my part, I think this is a general issue that is not really this\n> patch's problem to solve. We have had multiple discussions over the\n> years about reducing the number of binaries that we ship. We could\n> have a general binary called \"pg\" or similar and use subcommands: pg\n> createdb, pg basebackup, pg validatebackup, etc. I think such an\n> approach is worth considering, though it would certainly be an\n> adjustment for everyone. Or we might do something else. But I don't\n> want to deal with that in this patch.\n\nI'm fine with the current name, especially now that WAL is validated.\n\n> A couple of other minor suggestions have been made: (1) rejigger\n> things to avoid message duplication related to launching external\n> binaries, \n\nThat'd be nice to have, but I think we can live without it for now.\n\n> (2) maybe use appendShellString\n\nSeems like this would be good to have but I'm not going to make a fuss \nabout it.\n\n> and (3) change some details\n> of error-reporting related to manifest parsing. I don't believe anyone\n> views these as blockers\n\nI'd view this as later refinement once we see how the tool is being used \nand/or get gripes from the field.\n\nSo, with the addition of the 0004 patch down-thread this looks \ncommittable to me.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 2 Apr 2020 15:26:15 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Apr 2, 2020 at 2:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Apr 2, 2020 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:\n> > > That might make the window fairly wide on normal systems, but I'm not\n> > > sure about Raspberry Pi BF members or things running\n> > > CLOBBER_CACHE_ALWAYS/RECURSIVELY. I guess I could try it.\n> >\n> > You can set checkpoint_timeout to be a day. If that's not enough, well,\n> > then I think we have other problems.\n>\n> I'm not sure that's the only issue here, but I'll try it.\n\nI ran into a few problems here. In trying to set this up manually, I\nalways began with the following steps:\n\n====\n# (1) create cluster\ninitdb\n\n# (2) add to configuration file\nlog_checkpoints=on\ncheckpoint_timeout=1d\ncheckpoint_completion_target=0.99\n\n# (3) fire it up\npostgres\ncreatedb\n====\n\nIf at this point I do \"pg_basebackup -D pgslave -R -c spread\", it\ncompletes within a few seconds anyway, because there's basically\nnothing dirty, and no matter how slowly you write out no data, it's\nstill pretty quick. If I run \"pgbench -i\" first, and then\n\"pg_basebackup -D pgslave -R -c spread\", it hangs, apparently\nessentially forever, because now the checkpoint has something to do,\nand it does it super-slowly, and \"psql -c checkpoint\" makes it finish\nimmediately. However, this experiment isn't testing quite the right\nthing, because what I actually need is a slow backup off of a\ncascading standby, so that I have time to promote the parent standby\nbefore the backup completes. 
I tried continuing like this:\n\n====\n# (4) set up standby\npg_basebackup -D pgslave -R\npostgres -D pgslave -c port=5433\n\n# (5) set up cascading standby\npg_basebackup -D pgslave2 -d port=5433 -R\npostgres -c port=5434 -D pgslave2\n\n# (6) dirty some pages on the master\npgbench -i\n\n# (7) start a backup of the cascading standby\npg_basebackup -D pgslave3 -d port=5434 -R -c spread\n====\n\nHowever, the pg_basebackup in the last step completes after only a few\nseconds. If it were hanging, then I could continue with \"pg_ctl\npromote -D pgslave\" and that might give me what I need, but that's not\nwhat happens.\n\nI suspect I'm not doing quite what you had in mind here... thoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Apr 2020 15:42:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Apr 2, 2020 at 3:26 PM David Steele <david@pgmasters.net> wrote:\n> So, with the addition of the 0004 patch down-thread this looks\n> committable to me.\n\nGlad to hear it. Thank you.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Apr 2020 15:43:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 2020-04-02 15:42:48 -0400, Robert Haas wrote:\n> I suspect I'm not doing quite what you had in mind here... thoughts?\n\nI have some ideas, but I think it's complicated enough that I'd not put\nit in the \"pre commit path\" for now.\n\n\n",
"msg_date": "Thu, 2 Apr 2020 12:47:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 4/2/20 3:47 PM, Andres Freund wrote:\n> On 2020-04-02 15:42:48 -0400, Robert Haas wrote:\n>> I suspect I'm not doing quite what you had in mind here... thoughts?\n> \n> I have some ideas, but I think it's complicated enough that I'd not put\n> it in the \"pre commit path\" for now.\n\n+1. These would be great tests to have and a win for pg_basebackup \noverall but I don't think they should be a prerequisite for this commit.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 2 Apr 2020 16:34:26 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Apr 2, 2020 at 4:34 PM David Steele <david@pgmasters.net> wrote:\n> +1. These would be great tests to have and a win for pg_basebackup\n> overall but I don't think they should be a prerequisite for this commit.\n\nNot to mention the server. I can't say that I have a lot of confidence\nthat all of the server behavior in this area is well-understood and\nsane.\n\nI've pushed all the patches. Hopefully everyone is happy now, or at\nleast not so unhappy that they're going to break quarantine to beat me\nup. I hope I acknowledged all of the relevant people in the commit\nmessage, but it's possible that I missed somebody; if so, my\napologies. As is my usual custom, I added entries in roughly the order\nthat people chimed in on the thread, so the ordering should not be\ntaken as a reflection of magnitude of contribution or, well, anything\nother than the approximate order in which they chimed in.\n\nIt looks like the buildfarm is unhappy though, so I guess I'd better\ngo look at that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Apr 2020 15:22:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Apr 3, 2020 at 3:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> It looks like the buildfarm is unhappy though, so I guess I'd better\n> go look at that.\n\nI fixed two things so far, and there seems to be at least one more\npossible issue that I don't understand.\n\n1. Apparently, we have an automated perlcritic run built in to the\nbuild farm, and apparently, it really hates Perl subroutines that\ndon't end with an explicit return statement. We have that overridden\nto severity 5 in our Perl critic configuration. I guess I should've\nknown this, but didn't. I've pushed a fix adding return statements. I\nbelieve I'm on record as thinking that perlcritic is a tool for\ncomplaining about a lot of things that don't really matter and very\nfew that actually do -- but it's project style, so I'll suck it up!\n\n2. Also, a bunch of machines were super-unhappy with\n003_corruption.pl, failing with this sort of thing:\n\npg_basebackup: error: could not get COPY data stream: ERROR: symbolic\nlink target too long for tar format: file name \"pg_tblspc/16387\",\ntarget \"/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/src/bin/pg_validatebackup/tmp_check/tmp_test_7w0w\"\n\nApparently, this is a known problem and the solution is to use\nTestLib::tempdir_short instead of TestLib::tempdir, so I pushed a fix\nto make it do that.\n\n3. spurfowl has failed its last two runs like this:\n\nsh: 1: ./configure: not found\n\nI am not sure how this patch could've caused that to happen, but the\ntiming of the failures is certainly suspicious.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Apr 2020 15:53:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Apr 3, 2020 at 3:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> 2. Also, a bunch of machines were super-unhappy with\n> 003_corruption.pl, failing with this sort of thing:\n>\n> pg_basebackup: error: could not get COPY data stream: ERROR: symbolic\n> link target too long for tar format: file name \"pg_tblspc/16387\",\n> target \"/home/fabien/pg/build-farm-11/buildroot/HEAD/pgsql.build/src/bin/pg_validatebackup/tmp_check/tmp_test_7w0w\"\n>\n> Apparently, this is a known problem and the solution is to use\n> TestLib::tempdir_short instead of TestLib::tempdir, so I pushed a fix\n> to make it do that.\n\nBy and large, the buildfarm is a lot happier now, but fairywren\n(Windows / Msys Server 2019 / 2 gcc 7.3.0 x86_64) failed like this:\n\n# Postmaster PID for node \"master\" is 198420\nerror running SQL: 'psql:<stdin>:3: ERROR: directory\n\"/tmp/9peoZHrEia\" does not exist'\nwhile running 'psql -XAtq -d port=51493 host=127.0.0.1\ndbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'CREATE TABLE x1\n(a int);\nINSERT INTO x1 VALUES (111);\nCREATE TABLESPACE ts1 LOCATION '/tmp/9peoZHrEia';\nCREATE TABLE x2 (a int) TABLESPACE ts1;\nINSERT INTO x1 VALUES (222);\n' at /home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm\nline 1531.\n### Stopping node \"master\" using mode immediate\n\nI wondered why this should be failing on this machine when none of the\nother places where tempdir_short is used are similarly failing. The\nanswer appears to be that most of the TAP tests that use tempdir_short\njust do this:\n\nmy $tempdir_short = TestLib::tempdir_short;\n\n...and then ignore that variable completely for the rest of the\nscript. That's not ideal, and we should probably remove those calls\nto avoid giving that it's actually used for something. The two TAP\ntests that actually do something with it - apart from the one I just\nadded - are pg_basebackup's 010_pg_basebackup.pl and pg_ctl's\n001_start_stop.pl. 
However, both of those are skipped on Windows.\nAlso, PostgresNode.pm itself uses it, but only when UNIX sockets are\nused, so again not on Windows. So it sorta looks to me like we have no\npreexisting tests that meaningfully exercise TestLib::tempdir_short on\nWindows.\n\nGiven that, I suppose I should consider myself lucky if this ends up\nworking on *any* of the Windows critters, but given the implementation\nI'm kinda surprised we have a problem. That function is just:\n\nsub tempdir_short\n{\n\n return File::Temp::tempdir(CLEANUP => 1);\n}\n\nAnd File::Temp's documentation says that the temporary directory is\npicked using File::Spec's tmpdir(), which says that it knows about\ndifferent operating systems and will DTRT on Unix, Mac, OS2, Win32,\nand VMS. Yet on fairywren it is apparently DTWT. I'm not sure why.\n\nAny ideas?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Apr 2020 16:49:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 2020-Apr-03, Robert Haas wrote:\n\n> sub tempdir_short\n> {\n> \n> return File::Temp::tempdir(CLEANUP => 1);\n> }\n> \n> And File::Temp's documentation says that the temporary directory is\n> picked using File::Spec's tmpdir(), which says that it knows about\n> different operating systems and will DTRT on Unix, Mac, OS2, Win32,\n> and VMS. Yet on fairywren it is apparently DTWT. I'm not sure why.\n\nMaybe it needs perl2host?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Apr 2020 17:54:12 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Apr 3, 2020 at 4:54 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Maybe it needs perl2host?\n\n*jaw drops*\n\nWow, OK, yeah, that looks like the thing. Thanks for the suggestion;\nI didn't know that existed (and I kinda wish I still didn't).\n\nI'lll go see about adding that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Apr 2020 17:07:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Apr 03, 2020 at 03:22:23PM -0400, Robert Haas wrote:\n> I've pushed all the patches.\n\nI didn't manage to look at this in advance but have some doc fixes.\n\nword-diff:\n\ndiff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml\nindex 536de9a698..d84afb7b18 100644\n--- a/doc/src/sgml/protocol.sgml\n+++ b/doc/src/sgml/protocol.sgml\n@@ -2586,7 +2586,7 @@ The commands accepted in replication mode are:\n and sent along with the backup. The manifest is a list of every\n file present in the backup with the exception of any WAL files that\n may be included. It also stores the size, last modification time, and\n [-an optional-]{+optionally a+} checksum for each file.\n A value of <literal>force-escape</literal> forces all filenames\n to be hex-encoded; otherwise, this type of encoding is performed only\n for files whose names are non-UTF8 octet sequences.\n@@ -2602,7 +2602,7 @@ The commands accepted in replication mode are:\n <term><literal>MANIFEST_CHECKSUMS</literal></term>\n <listitem>\n <para>\n Specifies the {+checksum+} algorithm that should be applied to each file included\n in the backup manifest. Currently, the available\n algorithms are <literal>NONE</literal>, <literal>CRC32C</literal>,\n <literal>SHA224</literal>, <literal>SHA256</literal>,\ndiff --git a/doc/src/sgml/ref/pg_basebackup.sgml b/doc/src/sgml/ref/pg_basebackup.sgml\nindex c778e061f3..922688e227 100644\n--- a/doc/src/sgml/ref/pg_basebackup.sgml\n+++ b/doc/src/sgml/ref/pg_basebackup.sgml\n@@ -604,7 +604,7 @@ PostgreSQL documentation\n not contain any checksums. Otherwise, it will contain a checksum\n of each file in the backup using the specified algorithm. 
In addition,\n the manifest will always contain a <literal>SHA256</literal>\n checksum of its own [-contents.-]{+content.+} The <literal>SHA</literal> algorithms\n are significantly more CPU-intensive than <literal>CRC32C</literal>,\n so selecting one of them may increase the time required to complete\n the backup.\n@@ -614,7 +614,7 @@ PostgreSQL documentation\n of each file for users who wish to verify that the backup has not been\n tampered with, while the CRC32C algorithm provides a checksum which is\n much faster to calculate and good at catching errors due to accidental\n changes but is not resistant to [-targeted-]{+malicious+} modifications. Note that, to\n be useful against an adversary who has access to the backup, the backup\n manifest would need to be stored securely elsewhere or otherwise\n verified not to have been modified since the backup was taken.\ndiff --git a/doc/src/sgml/ref/pg_validatebackup.sgml b/doc/src/sgml/ref/pg_validatebackup.sgml\nindex 19888dc196..748ac439a6 100644\n--- a/doc/src/sgml/ref/pg_validatebackup.sgml\n+++ b/doc/src/sgml/ref/pg_validatebackup.sgml\n@@ -41,12 +41,12 @@ PostgreSQL documentation\n </para>\n\n <para>\n It is important to note that[-that-] the validation which is performed by\n <application>pg_validatebackup</application> does not and [-can not-]{+cannot+} include\n every check which will be performed by a running server when attempting\n to make use of the backup. Even if you use this tool, you should still\n perform test restores and verify that the resulting databases work as\n expected and that they[-appear to-] contain the correct data. 
However,\n <application>pg_validatebackup</application> can detect many problems\n that commonly occur due to storage problems or user error.\n </para>\n@@ -73,7 +73,7 @@ PostgreSQL documentation\n a <literal>backup_manifest</literal> file in the target directory or\n about anything inside <literal>pg_wal</literal>, even though these\n files won't be listed in the backup manifest. Only files are checked;\n the presence or absence [-or-]{+of+} directories is not verified, except\n indirectly: if a directory is missing, any files it should have contained\n will necessarily also be missing. \n </para>\n@@ -84,7 +84,7 @@ PostgreSQL documentation\n for any files for which the computed checksum does not match the\n checksum stored in the manifest. This step is not performed for any files\n which produced errors in the previous step, since they are already known\n to have problems. [-Also, files-]{+Files+} which were ignored in the previous step are\n also ignored in this step.\n </para>\n\n@@ -123,7 +123,7 @@ PostgreSQL documentation\n <title>Options</title>\n\n <para>\n The following command-line options control the [-behavior.-]{+behavior of this program.+}\n\n <variablelist>\n <varlistentry>\ndiff --git a/src/backend/replication/basebackup.c b/src/backend/replication/basebackup.c\nindex 3b18e733cd..aa72a6ff10 100644\n--- a/src/backend/replication/basebackup.c\n+++ b/src/backend/replication/basebackup.c\n@@ -1148,7 +1148,7 @@ AddFileToManifest(manifest_info *manifest, const char *spcoid,\n\t}\n\n\t/*\n\t * Each file's entry [-need-]{+needs+} to be separated from any entry that follows by a\n\t * comma, but there's no comma before the first one or after the last one.\n\t * To make that work, adding a file to the manifest starts by terminating\n\t * the most recently added line, with a comma if appropriate, but does not\n\n-- \nJustin",
"msg_date": "Fri, 3 Apr 2020 16:24:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "[ splitting this off into a separate thread ]\n\nOn Fri, Apr 3, 2020 at 5:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'lll go see about adding that.\n\nDone now. Meanwhile, two more machines have reported the mysterious message:\n\nsh: ./configure: not found\n\n...that first appeared on spurfowl a few hours ago. The other two\nmachines are eelpout and elver, both of which list Thomas Munro as a\nmaintainer. spurfowl lists Stephen Frost. Thomas, Stephen, can one of\nyou check and see what's going on? spurfowl has failed this way four\ntimes now, and eelpout and elver have each failed the last two runs,\nbut since there's no helpful information in the logs, it's hard to\nguess what went wrong.\n\nI'm sort of afraid that something in the new TAP tests accidentally\nremoved way too many files during the cleanup phase - e.g. it decided\nthe temporary directory was / and removed every file it could access,\nor something like that. It doesn't do that here, or I, uh, would've\nnoticed by now. But sometimes strange things happen on other people's\nmachines. Hopefully one of those strange things is not that my test\ncode is single-handedly destroying the entire buildfarm, but it's\npossible.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Apr 2020 17:27:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "\nHello Robert,\n\n> Done now. Meanwhile, two more machines have reported the mysterious message:\n>\n> sh: ./configure: not found\n>\n> ...that first appeared on spurfowl a few hours ago. The other two\n> machines are eelpout and elver, both of which list Thomas Munro as a\n> maintainer. spurfowl lists Stephen Frost. Thomas, Stephen, can one of\n> you check and see what's going on? spurfowl has failed this way four\n> times now, and eelpout and elver have each failed the last two runs,\n> but since there's no helpful information in the logs, it's hard to\n> guess what went wrong.\n>\n> I'm sort of afraid that something in the new TAP tests accidentally\n> removed way too many files during the cleanup phase - e.g. it decided\n> the temporary directory was / and removed every file it could access,\n> or something like that. It doesn't do that here, or I, uh, would've\n> noticed by now. But sometimes strange things happen on other people's\n> machines. Hopefully one of those strange things is not that my test\n> code is single-handedly destroying the entire buildfarm, but it's\n> possible.\n\nseawasp just failed the same way. Good news, I can see \"configure\" under \n\"HEAD/pgsql\".\n\nThe only strange thing under buildroot I found is:\n\nHEAD/pgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans/\n\nthis last directory perms are d--------- which seems to break cleanup.\n\nIt may be a left over from a previous run which failed (possibly 21dc488 \n?). I cannot see how this would be related to configure, though. Maybe \nsomething else fails silently and the message is about a consequence of \nthe prior silent failure.\n\nI commented out the cron job and will try to look into it on tomorrow if \nthe status has not changed by then.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 3 Apr 2020 23:58:30 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> The only strange thing under buildroot I found is:\n\n> HEAD/pgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans/\n\n> this last directory perms are d--------- which seems to break cleanup.\n\nLocally, I observe that \"make clean\" in src/bin/pg_validatebackup fails\nto clean up the tmp_check directory left behind by \"make check\".\nSo the new makefile is not fully plugged into its standard\nresponsibilities. I don't see any unreadable subdirectories though.\n\nI wonder if VPATH versus not-VPATH might be a relevant factor ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Apr 2020 18:12:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On 2020-Apr-03, Tom Lane wrote:\n\n> I wonder if VPATH versus not-VPATH might be a relevant factor ...\n\nOh, absolutely. The ones that failed show, in the last successful run,\nthe configure line invoked as \"./configure\", while the animals that are\nstill running are invoking configure from some other directory.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Apr 2020 19:24:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Sat, Apr 4, 2020 at 11:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> > The only strange thing under buildroot I found is:\n>\n> > HEAD/pgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans/\n>\n> > this last directory perms are d--------- which seems to break cleanup.\n\nSame here, on elver. I see pg_subtrans has been chmod(0)'d,\npresumably by the perl subroutine mutilate_open_directory_fails. I\nsee this in my inbox (the build farm wrote it to stderr or stdout\nrather than the log file):\n\ncannot chdir to child for\npgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans:\nPermission denied at ./run_build.pl line 1013.\ncannot remove directory for\npgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails:\nDirectory not empty at ./run_build.pl line 1013.\ncannot remove directory for\npgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup:\nDirectory not empty at ./run_build.pl line 1013.\ncannot remove directory for\npgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data:\nDirectory not empty at ./run_build.pl line 1013.\ncannot remove directory for\npgsql.build/src/bin/pg_validatebackup/tmp_check: Directory not empty\nat ./run_build.pl line 1013.\ncannot remove directory for pgsql.build/src/bin/pg_validatebackup:\nDirectory not empty at ./run_build.pl line 1013.\ncannot remove directory for pgsql.build/src/bin: Directory not empty\nat ./run_build.pl line 1013.\ncannot remove directory for pgsql.build/src: Directory not empty at\n./run_build.pl line 1013.\ncannot remove directory for pgsql.build: Directory not empty at\n./run_build.pl line 1013.\ncannot chdir to child 
for\npgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans:\nPermission denied at ./run_build.pl line 589.\ncannot remove directory for\npgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails:\nDirectory not empty at ./run_build.pl line 589.\ncannot remove directory for\npgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup:\nDirectory not empty at ./run_build.pl line 589.\ncannot remove directory for\npgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data:\nDirectory not empty at ./run_build.pl line 589.\ncannot remove directory for\npgsql.build/src/bin/pg_validatebackup/tmp_check: Directory not empty\nat ./run_build.pl line 589.\ncannot remove directory for pgsql.build/src/bin/pg_validatebackup:\nDirectory not empty at ./run_build.pl line 589.\ncannot remove directory for pgsql.build/src/bin: Directory not empty\nat ./run_build.pl line 589.\ncannot remove directory for pgsql.build/src: Directory not empty at\n./run_build.pl line 589.\ncannot remove directory for pgsql.build: Directory not empty at\n./run_build.pl line 589.\n\n\n",
"msg_date": "Sat, 4 Apr 2020 11:29:50 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> On Sat, Apr 4, 2020 at 11:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> > > The only strange thing under buildroot I found is:\n> >\n> > > HEAD/pgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans/\n> >\n> > > this last directory perms are d--------- which seems to break cleanup.\n> \n> Same here, on elver. I see pg_subtrans has been chmod(0)'d,\n> presumably by the perl subroutine mutilate_open_directory_fails. I\n> see this in my inbox (the build farm wrote it to stderr or stdout\n> rather than the log file):\n\nYup, saw the same here.\n\nchmod'ing it to 755 seemed to result it the next run cleaning it up, at\nleast. Not sure how things will go on the next actual build tho.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 3 Apr 2020 18:39:41 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Same here, on elver. I see pg_subtrans has been chmod(0)'d,\n> presumably by the perl subroutine mutilate_open_directory_fails. I\n> see this in my inbox (the build farm wrote it to stderr or stdout\n> rather than the log file):\n\n> cannot chdir to child for\n> pgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans:\n> Permission denied at ./run_build.pl line 1013.\n> cannot remove directory for\n> pgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails:\n> Directory not empty at ./run_build.pl line 1013.\n\nI'm guessing that we're looking at a platform-specific difference in\nwhether \"rm -rf\" fails outright on an unreadable subdirectory, or\njust tries to carry on by unlinking it anyway.\n\nA partial fix would be to have the test script put back normal\npermissions on that directory before it exits ... but any failure\npartway through the script would leave a time bomb requiring manual\ncleanup.\n\nOn the whole, I'd argue that testing that behavior is not valuable\nenough to take risks of periodically breaking buildfarm members\nin a way that will require manual recovery --- to say nothing of\nannoying developers who trip over it. So my vote is to remove\nthat part of the test and be satisfied with checking the behavior\nfor an unreadable file.\n\nThis doesn't directly explain the failure-at-next-configure behavior\nthat we're seeing in the buildfarm, but it wouldn't be too surprising\nif it ends up being that the buildfarm client script doesn't manage\nto fully recover from the situation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Apr 2020 18:48:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "I wrote:\n> I'm guessing that we're looking at a platform-specific difference in\n> whether \"rm -rf\" fails outright on an unreadable subdirectory, or\n> just tries to carry on by unlinking it anyway.\n\nYeah... on my RHEL6 box, \"make check\" cleans up the working directories\nunder tmp_check, but on a FreeBSD 12.1 box, not so much: I'm left with\n\n$ ls tmp_check/\nlog/ t_003_corruption_master_data/\ntgl@oldmini$ ls -R tmp_check/t_003_corruption_master_data/\nbackup/\n\ntmp_check/t_003_corruption_master_data/backup:\nopen_directory_fails/\n\ntmp_check/t_003_corruption_master_data/backup/open_directory_fails:\npg_subtrans/\n\ntmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans:\nls: tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans: Permission denied\n\nI did not see any complaints printed to the terminal, but in\nregress_log_003_corruption there's\n\n...\nok 40 - corrupt backup fails validation: open_directory_fails: matches\ncannot chdir to child for /usr/home/tgl/pgsql/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans: Permission denied at t/003_corruption.pl line 126.\ncannot remove directory for /usr/home/tgl/pgsql/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails: Directory not empty at t/003_corruption.pl line 126.\n# Running: pg_basebackup -D /usr/home/tgl/pgsql/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/search_directory_fails --no-sync -T /tmp/lxaL_sLcnr=/tmp/_fegwVjoDR\nok 41 - base backup ok\n...\n\nThis may be more of a Perl version issue than a platform issue,\nbut either way it's a problem.\n\nAlso, on the FreeBSD box, \"rm -rf\" isn't happy either:\n\n$ rm -rf tmp_check\nrm: tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans: Permission denied\nrm: tmp_check/t_003_corruption_master_data/backup/open_directory_fails: Directory 
not empty\nrm: tmp_check/t_003_corruption_master_data/backup: Directory not empty\nrm: tmp_check/t_003_corruption_master_data: Directory not empty\nrm: tmp_check: Directory not empty\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Apr 2020 19:02:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Fri, Apr 3, 2020 at 6:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm guessing that we're looking at a platform-specific difference in\n> whether \"rm -rf\" fails outright on an unreadable subdirectory, or\n> just tries to carry on by unlinking it anyway.\n\nMy intention was that it would be cleaned by the TAP framework itself,\nsince the temporary directories it creates are marked for cleanup. But\nit may be that there's a platform dependency in the behavior of Perl's\nFile::Path::rmtree, too.\n\n> A partial fix would be to have the test script put back normal\n> permissions on that directory before it exits ... but any failure\n> partway through the script would leave a time bomb requiring manual\n> cleanup.\n\nYeah. I've pushed that fix for now, but as you say, it may not survive\ncontact with the enemy. That's kind of disappointing, because I put a\nlot of work into trying to make the tests cover every line of code\nthat they possibly could, and there's no reason to suppose that\npg_validatebackup is the only tool that could benefit from having code\ncoverage of those kinds of scenarios. It's probably not even the tool\nthat is most in need of such testing; it must be far worse if, say,\npg_rewind can't cope with it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Apr 2020 19:50:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Fri, Apr 3, 2020 at 5:58 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> seawasp just failed the same way. Good news, I can see \"configure\" under\n> \"HEAD/pgsql\".\n\nAh, good.\n\n> The only strange thing under buildroot I found is:\n>\n> HEAD/pgsql.build/src/bin/pg_validatebackup/tmp_check/t_003_corruption_master_data/backup/open_directory_fails/pg_subtrans/\n\nHuh. I wonder how that got left behind ... it should've been cleaned\nup by the TAP test framework. But I pushed a commit to change the\npermissions back explicitly before exiting. As Tom says, I probably\nneed to remove that entire test, but I'm going to try this first.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Apr 2020 19:55:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Apr 3, 2020 at 6:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm guessing that we're looking at a platform-specific difference in\n>> whether \"rm -rf\" fails outright on an unreadable subdirectory, or\n>> just tries to carry on by unlinking it anyway.\n\n> My intention was that it would be cleaned by the TAP framework itself,\n> since the temporary directories it creates are marked for cleanup. But\n> it may be that there's a platform dependency in the behavior of Perl's\n> File::Path::rmtree, too.\n\nYeah, so it would seem. The buildfarm script uses rmtree to clean out\nthe old build tree. The man page for File::Path suggests (but can't\nquite bring itself to say in so many words) that by default, rmtree\nwill adjust the permissions on target directories to allow the deletion\nto succeed. But that's very clearly not happening on some platforms.\n(Maybe that represents a local patch on the part of some packagers\nwho thought it was too unsafe?)\n\nAnyway, the end state presumably is that the pgsql.build directory\nis still there at the end of the buildfarm run, and the next run's\nattempt to also rmtree it fares no better. Then look what it does\nto set up the new build:\n\n\t\tsystem(\"cp -R -p $target $build_path 2>&1\");\n\nOf course, if $build_path already exists, then cp copies to a subdirectory\nof the target not the target itself. So that explains the symptom\n\"./configure does not exist\" --- it exists all right, but in a\nsubdirectory below the one where the buildfarm expects it to be.\n\nIt looks to me like the same problem would occur with VPATH or no.\nThe lack of failures among the VPATH-using critters probably has\nmore to do with whether their rmtree is willing to deal with this\ncase than with VPATH.\n\nAnyway, it's evident that the buildfarm critters that are busted\nwill need manual cleanup, because the script is not going to be\nable to get out of this by itself. 
I remain of the opinion that\nthe hazard of that happening again in the future (eg, if a buildfarm\nanimal loses power during the test) is sufficient reason to remove\nthis test case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Apr 2020 20:12:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "BTW, some of the buildfarm is showing a simpler portability problem:\nthey think you were too cavalier about the difference between time_t\nand pg_time_t. (On a platform with 32-bit time_t, that's an actual\nbug, probably.) lapwing is actually failing:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2020-04-03%2021%3A41%3A49\n\nccache gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -Werror -I. -I. -I../../../src/include -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include/et -c -o basebackup.o basebackup.c\nbasebackup.c: In function 'AddFileToManifest':\nbasebackup.c:1199:10: error: passing argument 1 of 'pg_gmtime' from incompatible pointer type [-Werror]\nIn file included from ../../../src/include/access/xlog_internal.h:26:0,\n from basebackup.c:20:\n../../../src/include/pgtime.h:49:22: note: expected 'const pg_time_t *' but argument is of type 'time_t *'\ncc1: all warnings being treated as errors\nmake[3]: *** [basebackup.o] Error 1\n\nbut some others are showing it as a warning.\n\nI suppose that judicious s/time_t/pg_time_t/ would fix this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Apr 2020 20:18:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Apr 3, 2020 at 6:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Locally, I observe that \"make clean\" in src/bin/pg_validatebackup fails\n> to clean up the tmp_check directory left behind by \"make check\".\n\nFixed.\n\nI also tried to fix 'lapwing', which was complaining about about a\ncall to pg_gmtime, saying that it \"expected 'const pg_time_t *' but\nargument is of type 'time_t *'\". I was thinking that the problem had\nsomething to do with const, but Thomas pointed out to me that\npg_time_t != time_t, so I pushed a fix which assumes that was the\nissue. (It was certainly *an* issue.)\n\n'prairiedog' is also unhappy, and it looks related:\n\n/bin/sh ../../../../config/install-sh -c -d\n'/Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/modules/commit_ts'/tmp_check\ncd . && TESTDIR='/Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/modules/commit_ts'\nPATH=\"/Users/buildfarm/bf-data/HEAD/pgsql.build/tmp_install/Users/buildfarm/bf-data/HEAD/inst/bin:$PATH\"\nDYLD_LIBRARY_PATH=\"/Users/buildfarm/bf-data/HEAD/pgsql.build/tmp_install/Users/buildfarm/bf-data/HEAD/inst/lib:$DYLD_LIBRARY_PATH\"\n PGPORT='65678'\nPG_REGRESS='/Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/modules/commit_ts/../../../../src/test/regress/pg_regress'\nREGRESS_SHLIB='/Users/buildfarm/bf-data/HEAD/pgsql.build/src/test/regress/regress.so'\n/usr/local/perl5.8.3/bin/prove -I ../../../../src/test/perl/ -I .\nt/*.pl\nt/001_base.........ok\nt/002_standby......FAILED--Further testing stopped: system pg_basebackup failed\nmake: *** [check] Error 25\n\nUnfortunately, that error message is not very informative and for some\nreason the TAP logs don't seem to be included in the buildfarm output\nin this case, so it's hard to tell exactly what went wrong. This\nappears to be another 32-bit critter, which may be related somehow,\nbut I don't know how exactly.\n\n'serinus' is also failing. This is less obviously related:\n\n[02:08:55] t/003_constraints.pl .. 
ok 2048 ms ( 0.01 usr 0.00 sys\n+ 1.28 cusr 0.38 csys = 1.67 CPU)\n# poll_query_until timed out executing this query:\n# SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN\n('r', 's');\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\n\nBut there's also this:\n\n2020-04-04 02:08:57.297 CEST [5e87d019.506c1:1] LOG: connection\nreceived: host=[local]\n2020-04-04 02:08:57.298 CEST [5e87d019.506c1:2] LOG: replication\nconnection authorized: user=bf\napplication_name=tap_sub_16390_sync_16384\n2020-04-04 02:08:57.299 CEST [5e87d019.506c1:3] LOG: statement: BEGIN\nREAD ONLY ISOLATION LEVEL REPEATABLE READ\n2020-04-04 02:08:57.299 CEST [5e87d019.506c1:4] LOG: received\nreplication command: CREATE_REPLICATION_SLOT\n\"tap_sub_16390_sync_16384\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n2020-04-04 02:08:57.299 CEST [5e87d019.506c1:5] ERROR: replication\nslot \"tap_sub_16390_sync_16384\" already exists\nTRAP: FailedAssertion(\"owner->bufferarr.nitems == 0\", File:\n\"/home/bf/build/buildfarm-serinus/HEAD/pgsql.build/../pgsql/src/backend/utils/resowner/resowner.c\",\nLine: 718)\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)(ExceptionalCondition+0x5c)[0x9a13ac]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)(ResourceOwnerDelete+0x295)[0x9db8e5]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)[0x54c61f]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)(AbortOutOfAnyTransaction+0x122)[0x550e32]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)[0x9b3bc9]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)(shmem_exit+0x35)[0x80db45]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)[0x80dc77]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)(proc_exit+0x8)[0x80dd08]\npostgres: publisher: walsender bf [local] idle in 
transaction\n(aborted)(PostgresMain+0x59f)[0x83bd0f]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)[0x7a0264]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)(PostmasterMain+0xbfc)[0x7a2b8c]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)(main+0x6fb)[0x49749b]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb)[0x7fc52d83bbbb]\npostgres: publisher: walsender bf [local] idle in transaction\n(aborted)(_start+0x2a)[0x49753a]\n2020-04-04 02:08:57.302 CEST [5e87d018.5066b:4] LOG: server process\n(PID 329409) was terminated by signal 6: Aborted\n2020-04-04 02:08:57.302 CEST [5e87d018.5066b:5] DETAIL: Failed\nprocess was running: BEGIN READ ONLY ISOLATION LEVEL REPEATABLE READ\n\nThat might well be related. I note in passing that the DETAIL emitted\nby the postmaster shows the previous SQL command rather than the\nmore-recent replication command, which seems like something to fix. (I\nstill really dislike the fact that we have this evil hack allowing one\nconnection to mix and match those sets of commands...)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Apr 2020 20:48:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Fri, Apr 3, 2020 at 8:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, so it would seem. The buildfarm script uses rmtree to clean out\n> the old build tree. The man page for File::Path suggests (but can't\n> quite bring itself to say in so many words) that by default, rmtree\n> will adjust the permissions on target directories to allow the deletion\n> to succeed. But that's very clearly not happening on some platforms.\n> (Maybe that represents a local patch on the part of some packagers\n> who thought it was too unsafe?)\n\nInterestingly, on my machine, rmtree coped with a mode 0 directory\njust fine, but mode 0400 was more than its tiny brain could handle, so\nthe originally committed fix had code to revert 0400 back to 0700, but\nI didn't add similar code to revert from 0 back to 0700 because that\nwas working fine.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Apr 2020 20:54:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> 'prairiedog' is also unhappy, and it looks related:\n\nYeah, gaur also failed in the same place. Both of those are\nalignment-picky 32-bit hardware, so I'm thinking the problem is\npg_gmtime() trying to fetch a 64-bit pg_time_t from an insufficiently\naligned address. I'm trying to confirm that on gaur's host right now,\nbut it's a slow machine ...\n\n> 'serinus' is also failing. This is less obviously related:\n\nDunno about this one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Apr 2020 21:52:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Interestingly, on my machine, rmtree coped with a mode 0 directory\n> just fine, but mode 0400 was more than its tiny brain could handle, so\n> the originally committed fix had code to revert 0400 back to 0700, but\n> I didn't add similar code to revert from 0 back to 0700 because that\n> was working fine.\n\nIt seems really odd that an implementation could cope with mode-0\nbut not mode-400. Not sure I care enough to dig into the Perl\nlibrary code, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Apr 2020 21:53:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
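[Editor's note: the cleanup hazard discussed in the messages above — Perl's File::Path::rmtree and plain "rm -rf" both choking on an intentionally unreadable subdirectory left behind by the corruption tests — can be sketched in Python, whose shutil.rmtree has the same limitation. This is an illustrative stand-in, not the TAP framework's actual Perl code; the directory names mirror the test layout from Tom's listing.]

```python
import os
import shutil


def force_rmtree(path):
    """Remove a tree even when a test left unreadable directories behind
    (e.g. a mode-0 pg_subtrans), by restoring traversal permissions
    top-down before deleting."""
    os.chmod(path, 0o700)
    for dirpath, dirnames, _ in os.walk(path):
        for d in dirnames:
            # os.walk is top-down, so each child directory is made
            # traversable before the walk descends into it; chmod only
            # needs ownership, not read permission on the target
            os.chmod(os.path.join(dirpath, d), 0o700)
    shutil.rmtree(path)
```

A pre-pass like this, rather than relying on rmtree's platform-dependent willingness to fix permissions itself, also avoids the "time bomb" case where a script dies partway through and leaves the unreadable directory for the next run.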
{
"msg_contents": "On Fri, Apr 3, 2020 at 9:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > 'prairiedog' is also unhappy, and it looks related:\n>\n> Yeah, gaur also failed in the same place. Both of those are\n> alignment-picky 32-bit hardware, so I'm thinking the problem is\n> pg_gmtime() trying to fetch a 64-bit pg_time_t from an insufficiently\n> aligned address. I'm trying to confirm that on gaur's host right now,\n> but it's a slow machine ...\n\nYou might just want to wait until tomorrow and see whether it clears\nup in newer runs. I just pushed yet another fix that might be\nrelevant.\n\nI think I've done about as much as I can do for tonight, though. Most\nthings are green now, and the ones that aren't are failing because of\nstuff that is at least plausibly fixed. By morning it should be\nclearer how much broken stuff is left, although that will be somewhat\ncomplicated by at least sidewinder and seawasp needing manual\nintervention to get back on track.\n\nI apologize to everyone who has been or will be inconvenienced by all\nof this. So far I've pushed 4 test case fixes, 2 bug fixes, and 1\nmakefile fix, which I'm pretty sure is over quota for one patch. :-(\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Apr 2020 22:43:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Hi,\n\nPeter, Petr, CCed you because it's probably a bug somewhere around the\ninitial copy code for logical replication.\n\n\nOn 2020-04-03 20:48:09 -0400, Robert Haas wrote:\n> 'serinus' is also failing. This is less obviously related:\n\nHm. Tests passed once since then.\n\n\n> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:4] LOG: received\n> replication command: CREATE_REPLICATION_SLOT\n> \"tap_sub_16390_sync_16384\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:5] ERROR: replication\n> slot \"tap_sub_16390_sync_16384\" already exists\n\nThat already seems suspicious. I checked the following (successful) run\nand I did not see that in the stage's logs.\n\nLooking at the failing log, it fails because for some reason there's\nrounds (once due to a refresh, once due to an intention replication\nfailure) of copying the relation. Each creates its own temporary slot.\n\nfirst time:\n2020-04-04 02:08:57.276 CEST [5e87d019.506bd:1] LOG: connection received: host=[local]\n2020-04-04 02:08:57.278 CEST [5e87d019.506bd:4] LOG: received replication command: CREATE_REPLICATION_SLOT \"tap_sub_16390_sync_16384\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n2020-04-04 02:08:57.282 CEST [5e87d019.506bd:9] LOG: statement: COPY public.tab_rep TO STDOUT\n2020-04-04 02:08:57.284 CEST [5e87d019.506bd:10] LOG: disconnection: session time: 0:00:00.007 user=bf database=postgres host=[local]\n\nsecond time:\n2020-04-04 02:08:57.288 CEST [5e87d019.506bf:1] LOG: connection received: host=[local]\n2020-04-04 02:08:57.289 CEST [5e87d019.506bf:4] LOG: received replication command: CREATE_REPLICATION_SLOT \"tap_sub_16390_sync_16384\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n2020-04-04 02:08:57.293 CEST [5e87d019.506bf:9] LOG: statement: COPY public.tab_rep TO STDOUT\n\nthird time:\n2020-04-04 02:08:57.297 CEST [5e87d019.506c1:1] LOG: connection received: host=[local]\n2020-04-04 02:08:57.299 CEST [5e87d019.506c1:4] LOG: received replication 
command: CREATE_REPLICATION_SLOT \"tap_sub_16390_sync_16384\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n2020-04-04 02:08:57.299 CEST [5e87d019.506c1:5] ERROR: replication slot \"tap_sub_16390_sync_16384\" already exists\n\nNote that the connection from the second attempt has not yet\ndisconnected. Hence the error about the replication slot already\nexisting - it's a temporary replication slot that'd otherwise already\nhave been dropped.\n\n\nSeems the logical rep code needs to do something about this race?\n\n\nAbout the assertion failure:\n\nTRAP: FailedAssertion(\"owner->bufferarr.nitems == 0\", File: \"/home/bf/build/buildfarm-serinus/HEAD/pgsql.build/../pgsql/src/backend/utils/resowner/resowner.c\", Line: 718)\npostgres: publisher: walsender bf [local] idle in transaction (aborted)(ExceptionalCondition+0x5c)[0x9a13ac]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)(ResourceOwnerDelete+0x295)[0x9db8e5]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)[0x54c61f]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)(AbortOutOfAnyTransaction+0x122)[0x550e32]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)[0x9b3bc9]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)(shmem_exit+0x35)[0x80db45]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)[0x80dc77]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)(proc_exit+0x8)[0x80dd08]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)(PostgresMain+0x59f)[0x83bd0f]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)[0x7a0264]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)(PostmasterMain+0xbfc)[0x7a2b8c]\npostgres: publisher: walsender bf [local] idle in transaction (aborted)(main+0x6fb)[0x49749b]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb)[0x7fc52d83bbbb]\npostgres: publisher: walsender bf 
[local] idle in transaction (aborted)(_start+0x2a)[0x49753a]\n2020-04-04 02:08:57.302 CEST [5e87d018.5066b:4] LOG: server process (PID 329409) was terminated by signal 6: Aborted\n\nDue to the log_line_prefix used, I was at first not entirely sure the\nbackend that crashed was the one with the ERROR. But it appears we print\nthe pid as hex for '%c' (why?), so it indeed is the one.\n\n\nI, again, have to say that the amount of stuff that was done as part of\n\ncommit 7c4f52409a8c7d85ed169bbbc1f6092274d03920\nAuthor: Peter Eisentraut <peter_e@gmx.net>\nDate: 2017-03-23 08:36:36 -0400\n\n Logical replication support for initial data copy\n\nis insane. Adding support for running sql over replication connections\nand extending CREATE_REPLICATION_SLOT with new options (without even\nmentioning that in the commit message!) as part of a commit described as\n\"Logical replication support for initial data copy\" shouldn't happen.\n\n\nIt's not obvious to me what buffer pins could be held at this point. I\nwonder if this could be somehow related to\n\ncommit 3cb646264e8ced9f25557ce271284da512d92043\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2018-07-18 12:15:16 -0400\n\n Use a ResourceOwner to track buffer pins in all cases.\n...\n In passing, remove some other ad-hoc resource owner creations that had\n gotten cargo-culted into various other places. As far as I can tell\n that was all unnecessary, and if it had been necessary it was incomplete,\n due to lacking any provision for clearing those resowners later.\n (Also worth noting in this connection is that a process that hasn't called\n InitBufferPoolBackend has no business accessing buffers; so there's more\n to do than just add the resowner if we want to touch buffers in processes\n not covered by this patch.)\n\nwhich removed the resowner previously used in walsender. At the very\nleast we should remove the SavedResourceOwnerDuringExport dance that's\nstill done in snapbuild.c. 
But it can't really be at fault here,\nbecause the crashing backend won't have used that.\n\n\nSo I'm a bit confused here. The best approach is probably to try to\nreproduce this by adding an artifical delay into backend shutdown.\n\n\n> (I still really dislike the fact that we have this evil hack allowing\n> one connection to mix and match those sets of commands...)\n\nFWIW, I think the opposite. We should get rid of the difference as much\nas possible.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Apr 2020 20:06:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On 04/04/2020 05:06, Andres Freund wrote:\n> Hi,\n> \n> Peter, Petr, CCed you because it's probably a bug somewhere around the\n> initial copy code for logical replication.\n> \n> \n> On 2020-04-03 20:48:09 -0400, Robert Haas wrote:\n>> 'serinus' is also failing. This is less obviously related:\n> \n> Hm. Tests passed once since then.\n> \n> \n>> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:4] LOG: received\n>> replication command: CREATE_REPLICATION_SLOT\n>> \"tap_sub_16390_sync_16384\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n>> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:5] ERROR: replication\n>> slot \"tap_sub_16390_sync_16384\" already exists\n> \n> That already seems suspicious. I checked the following (successful) run\n> and I did not see that in the stage's logs.\n> \n> Looking at the failing log, it fails because for some reason there's\n> rounds (once due to a refresh, once due to an intention replication\n> failure) of copying the relation. Each creates its own temporary slot.\n> \n> first time:\n> 2020-04-04 02:08:57.276 CEST [5e87d019.506bd:1] LOG: connection received: host=[local]\n> 2020-04-04 02:08:57.278 CEST [5e87d019.506bd:4] LOG: received replication command: CREATE_REPLICATION_SLOT \"tap_sub_16390_sync_16384\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n> 2020-04-04 02:08:57.282 CEST [5e87d019.506bd:9] LOG: statement: COPY public.tab_rep TO STDOUT\n> 2020-04-04 02:08:57.284 CEST [5e87d019.506bd:10] LOG: disconnection: session time: 0:00:00.007 user=bf database=postgres host=[local]\n> \n> second time:\n> 2020-04-04 02:08:57.288 CEST [5e87d019.506bf:1] LOG: connection received: host=[local]\n> 2020-04-04 02:08:57.289 CEST [5e87d019.506bf:4] LOG: received replication command: CREATE_REPLICATION_SLOT \"tap_sub_16390_sync_16384\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n> 2020-04-04 02:08:57.293 CEST [5e87d019.506bf:9] LOG: statement: COPY public.tab_rep TO STDOUT\n> \n> third time:\n> 2020-04-04 02:08:57.297 CEST [5e87d019.506c1:1] 
LOG: connection received: host=[local]\n> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:4] LOG: received replication command: CREATE_REPLICATION_SLOT \"tap_sub_16390_sync_16384\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:5] ERROR: replication slot \"tap_sub_16390_sync_16384\" already exists\n> \n> Note that the connection from the second attempt has not yet\n> disconnected. Hence the error about the replication slot already\n> existing - it's a temporary replication slot that'd otherwise already\n> have been dropped.\n> \n> \n> Seems the logical rep code needs to do something about this race?\n> \n\nThe downstream:\n\n> 2020-04-04 02:08:57.275 CEST [5e87d019.506bc:1] LOG: logical replication table synchronization worker for subscription \"tap_sub\", table \"tab_rep\" has started\n> 2020-04-04 02:08:57.282 CEST [5e87d019.506bc:2] ERROR: duplicate key value violates unique constraint \"tab_rep_pkey\"\n> 2020-04-04 02:08:57.282 CEST [5e87d019.506bc:3] DETAIL: Key (a)=(1) already exists.\n> 2020-04-04 02:08:57.282 CEST [5e87d019.506bc:4] CONTEXT: COPY tab_rep, line 1\n> 2020-04-04 02:08:57.283 CEST [5e87d018.50689:5] LOG: background worker \"logical replication worker\" (PID 329404) exited with exit code 1\n> 2020-04-04 02:08:57.287 CEST [5e87d019.506be:1] LOG: logical replication table synchronization worker for subscription \"tap_sub\", table \"tab_rep\" has started\n> 2020-04-04 02:08:57.293 CEST [5e87d019.506be:2] ERROR: duplicate key value violates unique constraint \"tab_rep_pkey\"\n> 2020-04-04 02:08:57.293 CEST [5e87d019.506be:3] DETAIL: Key (a)=(1) already exists.\n> 2020-04-04 02:08:57.293 CEST [5e87d019.506be:4] CONTEXT: COPY tab_rep, line 1\n> 2020-04-04 02:08:57.295 CEST [5e87d018.50689:6] LOG: background worker \"logical replication worker\" (PID 329406) exited with exit code 1\n> 2020-04-04 02:08:57.297 CEST [5e87d019.506c0:1] LOG: logical replication table synchronization worker for subscription \"tap_sub\", 
table \"tab_rep\" has started\n> 2020-04-04 02:08:57.299 CEST [5e87d019.506c0:2] ERROR: could not create replication slot \"tap_sub_16390_sync_16384\": ERROR: replication slot \"tap_sub_16390_sync_16384\" already exists\n> 2020-04-04 02:08:57.300 CEST [5e87d018.50689:7] LOG: background worker \"logical replication worker\" (PID 329408) exited with exit code \n\nLooks like we are simply retrying so fast that upstream will not have \nfinished cleanup after second try by the time we already run the third one.\n\nThe last_start_times is supposed to protect against that so I guess \nthere is some issue with how that works.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Sat, 4 Apr 2020 07:01:36 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
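[Editor's note: the per-table throttle Petr refers to (last_start_times) can be sketched as follows. This is a minimal illustration of the idea — relaunch a table-sync worker only after a retry interval has elapsed, so a failed copy cannot be restarted before the previous walsender has exited and dropped its temporary slot — not the actual tablesync.c logic; the interval constant and function name are assumptions.]

```python
import time

# Assumed default, analogous to wal_retrieve_retry_interval (in seconds).
WAL_RETRIEVE_RETRY_INTERVAL = 5.0

# Maps a relation id to the time its sync worker was last launched.
last_start_times = {}


def may_start_sync_worker(relid, now=None):
    """Return True (and record the start) only if enough time has passed
    since this relation's sync worker was last launched."""
    now = time.monotonic() if now is None else now
    last = last_start_times.get(relid)
    if last is not None and now - last < WAL_RETRIEVE_RETRY_INTERVAL:
        # Too soon: the previous attempt's temporary slot may still exist.
        return False
    last_start_times[relid] = now
    return True
```

In the failing run above, three sync attempts for the same table started within about 25 ms of each other, which is exactly the window a working throttle of this shape should have closed.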
{
"msg_contents": "On Fri, Apr 3, 2020 at 11:06 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-04-03 20:48:09 -0400, Robert Haas wrote:\n> > 'serinus' is also failing. This is less obviously related:\n>\n> Hm. Tests passed once since then.\n\nYeah, but conchuela also failed once in what I think was a similar\nway. I suspect the fix I pushed last night\n(3e0d80fd8d3dd4f999e0d3aa3e591f480d8ad1fd) may have been enough to\nclear this up.\n\n> That already seems suspicious. I checked the following (successful) run\n> and I did not see that in the stage's logs.\n\nYeah, the behavior of the test case doesn't seem to be entirely deterministic.\n\n> I, again, have to say that the amount of stuff that was done as part of\n>\n> commit 7c4f52409a8c7d85ed169bbbc1f6092274d03920\n> Author: Peter Eisentraut <peter_e@gmx.net>\n> Date: 2017-03-23 08:36:36 -0400\n>\n> Logical replication support for initial data copy\n>\n> is insane. Adding support for running sql over replication connections\n> and extending CREATE_REPLICATION_SLOT with new options (without even\n> mentioning that in the commit message!) as part of a commit described as\n> \"Logical replication support for initial data copy\" shouldn't happen.\n\nI agreed then and still do.\n\n> So I'm a bit confused here. The best approach is probably to try to\n> reproduce this by adding an artifical delay into backend shutdown.\n\nI was able to reproduce an assertion failure by starting a\ntransaction, running a replication command that failed, and then\nexiting the backend. 3e0d80fd8d3dd4f999e0d3aa3e591f480d8ad1fd made\nthat go away. I had wrongly assumed that there was no other way for a\nwalsender to have a ResourceOwner, and in the face of SQL commands\nalso being executed by walsenders, that's clearly not true. 
I'm not\nsure *precisely* how that lead to the BF failures, but it was really\nclear that it was wrong.\n\n> > (I still really dislike the fact that we have this evil hack allowing\n> > one connection to mix and match those sets of commands...)\n>\n> FWIW, I think the opposite. We should get rid of the difference as much\n> as possible.\n\nWell, that's another approach. It's OK to have one system and it's OK\nto have two systems, but one and a half is not ideal.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 4 Apr 2020 09:20:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Fri, Apr 3, 2020 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, some of the buildfarm is showing a simpler portability problem:\n> they think you were too cavalier about the difference between time_t\n> and pg_time_t. (On a platform with 32-bit time_t, that's an actual\n> bug, probably.) lapwing is actually failing:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2020-04-03%2021%3A41%3A49\n>\n> ccache gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -Werror -I. -I. -I../../../src/include -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include/et -c -o basebackup.o basebackup.c\n> basebackup.c: In function 'AddFileToManifest':\n> basebackup.c:1199:10: error: passing argument 1 of 'pg_gmtime' from incompatible pointer type [-Werror]\n> In file included from ../../../src/include/access/xlog_internal.h:26:0,\n> from basebackup.c:20:\n> ../../../src/include/pgtime.h:49:22: note: expected 'const pg_time_t *' but argument is of type 'time_t *'\n> cc1: all warnings being treated as errors\n> make[3]: *** [basebackup.o] Error 1\n>\n> but some others are showing it as a warning.\n>\n> I suppose that judicious s/time_t/pg_time_t/ would fix this.\n\nI think you sent this email just after I pushed\ndb1531cae00941bfe4f6321fdef1e1ef355b6bed, or maybe after I'd committed\nit locally and just before I pushed it. If you prefer a different fix\nthan what I did there, I can certainly whack it around some more.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 4 Apr 2020 09:34:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Apr 3, 2020 at 10:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think I've done about as much as I can do for tonight, though. Most\n> things are green now, and the ones that aren't are failing because of\n> stuff that is at least plausibly fixed. By morning it should be\n> clearer how much broken stuff is left, although that will be somewhat\n> complicated by at least sidewinder and seawasp needing manual\n> intervention to get back on track.\n\nTaking stock of the situation this morning, most of the buildfarm is\nnow green. There are three failures, on eelpout (6 hours ago),\nfairywren (17 hours ago), and hyrax (3 days, 7 hours ago).\n\neelpout is unhappy because:\n\n+WARNING: could not remove shared memory segment\n\"/PostgreSQL.248989127\": No such file or directory\n+WARNING: could not remove shared memory segment\n\"/PostgreSQL.1450751626\": No such file or directory\n multibatch\n ------------\n f\n@@ -861,22 +863,15 @@\n\n select length(max(s.t))\n from wide left join (select id, coalesce(t, '') || '' as t from wide)\ns using (id);\n- length\n---------\n- 320000\n-(1 row)\n-\n+ERROR: could not open shared memory segment \"/PostgreSQL.605707657\":\nNo such file or directory\n+CONTEXT: parallel worker\n\nI'm not sure what caused that exactly, but it sorta looks like\noperator intervention. Thomas, any ideas?\n\nfairywren's last run was on 21dc488, and commit\n460314db08e8688e1a54a0a26657941e058e45c5 was an attempt to fix what\nbroken there. I guess we'll find out whether that worked the next time\nit runs.\n\nhyrax's last run was before any of this happened, so it seems to have\nan unrelated problem. The last two runs, three and six days ago, both\nfailed like this:\n\n-ERROR: stack depth limit exceeded\n+ERROR: stack depth limit exceeded at character 8\n\nNot sure what that's about.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 4 Apr 2020 09:36:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Apr 3, 2020 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I suppose that judicious s/time_t/pg_time_t/ would fix this.\n\n> I think you sent this email just after I pushed\n> db1531cae00941bfe4f6321fdef1e1ef355b6bed, or maybe after I'd committed\n> it locally and just before I pushed it. If you prefer a different fix\n> than what I did there, I can certainly whack it around some more.\n\nYeah, that commit showed up moments after I sent this. Your fix\nseems fine -- at least prairiedog and gaur are OK with it.\n(I did verify that gaur was reproducibly crashing at that new\npg_strftime call, so we know it was that and not some on-again-\noff-again issue.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Apr 2020 10:43:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> hyrax's last run was before any of this happened, so it seems to have\n> an unrelated problem. The last two runs, three and six days ago, both\n> failed like this:\n\n> -ERROR: stack depth limit exceeded\n> +ERROR: stack depth limit exceeded at character 8\n\n> Not sure what that's about.\n\nWhat it looks like is that hyrax is managing to detect stack overflow\nat a point where an errcontext callback is active that adds an error\ncursor to the failure.\n\nIt's not so surprising that we could get a different result that way\nfrom a CLOBBER_CACHE_ALWAYS animal like hyrax, since CCA-forced\ncache reloads would cause extra stack expenditure at a lot of places.\nAnd it could vary depending on totally random details, like the number\nof local variables in seemingly unrelated code. What is odd is that\n(AFAIR) we've never seen this before. Maybe somebody recently added\nan error cursor callback in a place that didn't have it before, and\nis involved in SQL-function processing? None of the commits leading\nup to the earlier failure look promising for that, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Apr 2020 10:57:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Sat, Apr 4, 2020 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's not so surprising that we could get a different result that way\n> from a CLOBBER_CACHE_ALWAYS animal like hyrax, since CCA-forced\n> cache reloads would cause extra stack expenditure at a lot of places.\n> And it could vary depending on totally random details, like the number\n> of local variables in seemingly unrelated code.\n\nOh, yeah. That's unfortunate.\n\n> What is odd is that\n> (AFAIR) we've never seen this before. Maybe somebody recently added\n> an error cursor callback in a place that didn't have it before, and\n> is involved in SQL-function processing? None of the commits leading\n> up to the earlier failure look promising for that, though.\n\nThe relevant range of commits (e8b1774fc2 to a7b9d24e4e) includes an\nereport change (bda6dedbea) and a couple of \"simple expression\"\nchanges (8f59f6b9c0, fbc7a71608) but I don't know exactly why they\nwould have caused this. It seems at least possible, though, that\nchanging the return type of functions involved in error reporting\nwould slightly change the amount of stack space used; and the others\nare related to SQL-function processing. Other than experimenting on\nthat machine, I'm not sure how we could really determine the relevant\nfactors here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 4 Apr 2020 13:05:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Apr 4, 2020 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What is odd is that\n>> (AFAIR) we've never seen this before. Maybe somebody recently added\n>> an error cursor callback in a place that didn't have it before, and\n>> is involved in SQL-function processing? None of the commits leading\n>> up to the earlier failure look promising for that, though.\n\n> The relevant range of commits (e8b1774fc2 to a7b9d24e4e) includes an\n> ereport change (bda6dedbea) and a couple of \"simple expression\"\n> changes (8f59f6b9c0, fbc7a71608) but I don't know exactly why they\n> would have caused this.\n\nWhen I first noticed hyrax's failure, some days ago, I immediately\nthought of the \"simple expression\" patch. But that should not have\naffected SQL-function processing in any way: the bulk of the changes\nwere in plpgsql, and even the changes in plancache could not be\nrelevant, because functions.c does not use the plancache.\n\nAs for ereport, you'd think that that would only matter once you were\nalready doing an ereport. The point at which the stack overflow\ncheck triggers should be in normal code, not error recovery.\n\n> It seems at least possible, though, that\n> changing the return type of functions involved in error reporting\n> would slightly change the amount of stack space used;\n\nRight, but if it's down to that sort of phase-of-the-moon codegen\ndifference, you'd think this failure would have been coming and\ngoing for years. I still suppose that some fairly recent change\nmust be contributing to this, but haven't had time to investigate.\n\n> Other than experimenting on\n> that machine, I'm not sure how we could really determine the relevant\n> factors here.\n\nWe don't have a lot of CCA buildfarm machines, so I'm suspecting that\nit's probably not that hard to repro if you build with CCA.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Apr 2020 14:36:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Sun, Apr 5, 2020 at 2:36 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> eelpout is unhappy because:\n>\n> +WARNING: could not remove shared memory segment\n> \"/PostgreSQL.248989127\": No such file or directory\n> +WARNING: could not remove shared memory segment\n> \"/PostgreSQL.1450751626\": No such file or directory\n\nSeems to have fixed itself while I was sleeping. I did happen run\napt-get upgrade on that box some time yesterday-ish, but I don't\nunderstand what mechanism would trash my /dev/shm in that process.\n/me eyes systemd with suspicion\n\n\n",
"msg_date": "Sun, 5 Apr 2020 09:54:11 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On 2020-04-04 04:43, Robert Haas wrote:\n\n> I think I've done about as much as I can do for tonight, though. Most\n> things are green now, and the ones that aren't are failing because of\n> stuff that is at least plausibly fixed. By morning it should be\n> clearer how much broken stuff is left, although that will be somewhat\n> complicated by at least sidewinder and seawasp needing manual\n> intervention to get back on track.\n\nI fixed sidewinder I think. Should clear up the next time it runs.\n\nIt was the mode on the directory it couldn't handle- A regular rm -rf \ndidn't work I had to do a chmod -R 700 on all directories to be able to \nmanually remove it.\n\n/Mikael\n\n\n",
"msg_date": "Sun, 5 Apr 2020 15:10:15 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Hi,\n\nOn 2020-04-03 15:22:23 -0400, Robert Haas wrote:\n> I've pushed all the patches.\n\nSeeing new warnings in an optimized build\n\n/home/andres/src/postgresql-master/src/bin/pg_validatebackup/parse_manifest.c: In function 'json_manifest_object_end':\n/home/andres/src/postgresql-master/src/bin/pg_validatebackup/parse_manifest.c:591:2: warning: 'end_lsn' may be used uninitialized in this function [-Wmaybe-uninitialized]\n 591 | context->perwalrange_cb(context, tli, start_lsn, end_lsn);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/home/andres/src/postgresql-master/src/bin/pg_validatebackup/parse_manifest.c:567:5: note: 'end_lsn' was declared here\n 567 | end_lsn;\n | ^~~~~~~\n/home/andres/src/postgresql-master/src/bin/pg_validatebackup/parse_manifest.c:591:2: warning: 'start_lsn' may be used uninitialized in this function [-Wmaybe-uninitialized]\n 591 | context->perwalrange_cb(context, tli, start_lsn, end_lsn);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/home/andres/src/postgresql-master/src/bin/pg_validatebackup/parse_manifest.c:566:13: note: 'start_lsn' was declared here\n 566 | XLogRecPtr start_lsn,\n | ^~~~~~~~~\n\nThe warnings don't seem too unreasonable. The compiler can't see that\nthe error_cb inside json_manifest_parse_failure() is not expected to\nreturn. Probably worth adding a wrapper around the calls to\ncontext->error_cb and mark that as noreturn.\n\n- Andres\n\n\n",
"msg_date": "Sun, 5 Apr 2020 12:31:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "\nOn 4/5/20 9:10 AM, Mikael Kjellström wrote:\n> On 2020-04-04 04:43, Robert Haas wrote:\n>\n>> I think I've done about as much as I can do for tonight, though. Most\n>> things are green now, and the ones that aren't are failing because of\n>> stuff that is at least plausibly fixed. By morning it should be\n>> clearer how much broken stuff is left, although that will be somewhat\n>> complicated by at least sidewinder and seawasp needing manual\n>> intervention to get back on track.\n>\n> I fixed sidewinder I think. Should clear up the next time it runs.\n>\n> It was the mode on the directory it couldn't handle- A regular rm -rf\n> didn't work I had to do a chmod -R 700 on all directories to be able\n> to manually remove it.\n>\n>\n\n\nHmm, the buildfarm client does this at the beginning of each run to\nremove anything that might be left over from a previous run:\n\n\n rmtree(\"inst\");\n rmtree(\"$pgsql\") unless ($from_source && !$use_vpath);\n\n\nDo I need to precede those with some recursive chmod commands? Perhaps\nthe client should refuse to run if there is still something left after\nthese.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 5 Apr 2020 16:06:58 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Hmm, the buildfarm client does this at the beginning of each run to\n> remove anything that might be left over from a previous run:\n\n> rmtree(\"inst\");\n> rmtree(\"$pgsql\") unless ($from_source && !$use_vpath);\n\nRight, the point is precisely that some versions of rmtree() fail\nto remove a mode-0 subdirectory.\n\n> Do I need to precede those with some recursive chmod commands? Perhaps\n> the client should refuse to run if there is still something left after\n> these.\n\nI think the latter would be a very good idea, just so that this sort of\nfailure is less obscure. Not sure about whether a recursive chmod is\nreally going to be worth the cycles. (On the other hand, the normal\ncase should be that there's nothing there anyway, so maybe it's not\ngoing to be costly.) \n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 Apr 2020 16:12:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "\nHello,\n\n>> Do I need to precede those with some recursive chmod commands? Perhaps\n>> the client should refuse to run if there is still something left after\n>> these.\n>\n> I think the latter would be a very good idea, just so that this sort of\n> failure is less obscure. Not sure about whether a recursive chmod is\n> really going to be worth the cycles. (On the other hand, the normal\n> case should be that there's nothing there anyway, so maybe it's not\n> going to be costly.)\n\nCould it be a two-stage process to minimize cost but still be resilient?\n\n rmtree\n if (-d $DIR) {\n emit warning\n chmodtree\n rmtree again\n if (-d $DIR)\n emit error\n }\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 6 Apr 2020 07:18:10 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Sun, Apr 5, 2020 at 4:07 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> Do I need to precede those with some recursive chmod commands?\n\n+1.\n\n> Perhaps\n> the client should refuse to run if there is still something left after\n> these.\n\n+1 to that, too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 Apr 2020 07:53:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "\nOn 4/6/20 7:53 AM, Robert Haas wrote:\n> On Sun, Apr 5, 2020 at 4:07 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> Do I need to precede those with some recursive chmod commands?\n> +1.\n>\n>> Perhaps\n>> the client should refuse to run if there is still something left after\n>> these.\n> +1 to that, too.\n>\n\n\nSee\nhttps://github.com/PGBuildFarm/client-code/commit/0ef76bb1e2629713898631b9a3380d02d41c60ad\n\n\nThis will be in the next release, probably fairly soon.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 6 Apr 2020 16:06:52 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Taking stock of the situation this morning, most of the buildfarm is\n> now green. There are three failures, on eelpout (6 hours ago),\n> fairywren (17 hours ago), and hyrax (3 days, 7 hours ago).\n\nfairywren has now done this twice in the pg_validatebackupCheck step:\n\nexec failed: Bad address at /home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/perl/TestLib.pm line 340.\n at /home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/perl/TestLib.pm line 340.\n\nI'm a tad suspicious that it needs another perl2host()\nsomewhere, but the log isn't very clear as to where.\n\nMore generally, I wonder if we ought to be trying to\ncentralize those perl2host() calls instead of sticking\nthem into individual test cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Apr 2020 00:37:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Mon, Apr 6, 2020 at 1:18 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello,\n>\n> >> Do I need to precede those with some recursive chmod commands? Perhaps\n> >> the client should refuse to run if there is still something left after\n> >> these.\n> >\n> > I think the latter would be a very good idea, just so that this sort of\n> > failure is less obscure. Not sure about whether a recursive chmod is\n> > really going to be worth the cycles. (On the other hand, the normal\n> > case should be that there's nothing there anyway, so maybe it's not\n> > going to be costly.)\n>\n> Could it be a two-stage process to minimize cost but still be resilient?\n>\n> rmtree\n> if (-d $DIR) {\n> emit warning\n> chmodtree\n> rmtree again\n> if (-d $DIR)\n> emit error\n> }\n>\n\n\nI thought about doing that. However, it's not really necessary. In the\nnormal course of events these directories should have been removed at\nthe end of the previous run, so we're only dealing with exceptional\ncases here.\n\ncheers\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 7 Apr 2020 09:11:02 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Tue, Apr 7, 2020 at 12:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Taking stock of the situation this morning, most of the buildfarm is\n> > now green. There are three failures, on eelpout (6 hours ago),\n> > fairywren (17 hours ago), and hyrax (3 days, 7 hours ago).\n>\n> fairywren has now done this twice in the pg_validatebackupCheck step:\n>\n> exec failed: Bad address at /home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/perl/TestLib.pm line 340.\n> at /home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/perl/TestLib.pm line 340.\n>\n> I'm a tad suspicious that it needs another perl2host()\n> somewhere, but the log isn't very clear as to where.\n>\n> More generally, I wonder if we ought to be trying to\n> centralize those perl2host() calls instead of sticking\n> them into individual test cases.\n>\n>\n\n\nNot sure about that. I'll see if I can run it by hand and get some\nmore info. What's quite odd is that jacana (a very similar setup) is\npassing this happily.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 7 Apr 2020 09:42:09 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On 2020/04/04 4:22, Robert Haas wrote:\n> On Thu, Apr 2, 2020 at 4:34 PM David Steele <david@pgmasters.net> wrote:\n>> +1. These would be great tests to have and a win for pg_basebackup\n>> overall but I don't think they should be a prerequisite for this commit.\n> \n> Not to mention the server. I can't say that I have a lot of confidence\n> that all of the server behavior in this area is well-understood and\n> sane.\n> \n> I've pushed all the patches.\n\nWhen there is a backup_manifest in the database cluster, it's included in\nthe backup even when --no-manifest is specified. ISTM that this is problematic\nbecause the backup_manifest is obviously not valid for the backup.\nSo, isn't it better to always exclude the *existing* backup_manifest in the\ncluster from the backup, like backup_label/tablespace_map? Patch attached.\n\nAlso I found the typo in the document. Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 8 Apr 2020 14:15:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Apr 8, 2020 at 1:15 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> When there is a backup_manifest in the database cluster, it's included in\n> the backup even when --no-manifest is specified. ISTM that this is problematic\n> because the backup_manifest is obviously not valid for the backup.\n> So, isn't it better to always exclude the *existing* backup_manifest in the\n> cluster from the backup, like backup_label/tablespace_map? Patch attached.\n>\n> Also I found the typo in the document. Patch attached.\n\nBoth patches look good. The second one is definitely a mistake on my\npart, and the first one seems like a totally reasonable change.\nThanks!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Apr 2020 13:35:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 4/7/20 9:42 AM, Andrew Dunstan wrote:\n> On Tue, Apr 7, 2020 at 12:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> Taking stock of the situation this morning, most of the buildfarm is\n>>> now green. There are three failures, on eelpout (6 hours ago),\n>>> fairywren (17 hours ago), and hyrax (3 days, 7 hours ago).\n>> fairywren has now done this twice in the pg_validatebackupCheck step:\n>>\n>> exec failed: Bad address at /home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/perl/TestLib.pm line 340.\n>> at /home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/perl/TestLib.pm line 340.\n>>\n>> I'm a tad suspicious that it needs another perl2host()\n>> somewhere, but the log isn't very clear as to where.\n>>\n>> More generally, I wonder if we ought to be trying to\n>> centralize those perl2host() calls instead of sticking\n>> them into individual test cases.\n>>\n>>\n>\n> Not sure about that. I'll see if I can run it by hand and get some\n> more info. What's quite odd is that jacana (a very similar setup) is\n> passing this happily.\n>\n\n\nOK, tricky, but here's what I did to get this working on fairywren.\n\n\nFirst, on Msys2 there is a problem with name mangling. We've had to fix\nthis before by telling it to ignore certain argument prefixes.\n\n\nSecond, once that was fixed rmdir was failing on the tablespace. On\nWindows this is a junction, so unlink is the correct thing to do, I\nbelieve, just as it is on Unix where it's a symlink.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 8 Apr 2020 13:45:39 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> OK, tricky, but here's what I did to get this working on fairywren.\n> First, on Msys2 there is a problem with name mangling. We've had to fix\n> this before by telling it to ignore certain argument prefixes.\n> Second, once that was fixed rmdir was failing on the tablespace. On\n> Windows this is a junction, so unlink is the correct thing to do, I\n> believe, just as it is on Unix where it's a symlink.\n\nHmm, no opinion about the name mangling business, but the other part\nseems like it might break jacana and/or bowerbird, which are currently\nhappy with this test? (AFAICS we only have four Windows animals\nrunning the TAP tests, and the fourth (drongo) hasn't reported in\nfor awhile.)\n\nI guess we could commit it and find out. I'm all for the simpler\ncoding if it works.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Apr 2020 13:59:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "On Wed, Apr 8, 2020 at 1:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I guess we could commit it and find out. I'm all for the simpler\n> coding if it works.\n\nI don't understand what the local $ENV{MSYS2_ARG_CONV_EXCL} =\n$source_ts_prefix does, but the remove/unlink condition was suggested\nby Amit Kapila on the basis of testing on his Windows development\nenvironment, so I suspect that's actually needed on at least some\nsystems. I just work here, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Apr 2020 15:41:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "\nOn 4/8/20 3:41 PM, Robert Haas wrote:\n> On Wed, Apr 8, 2020 at 1:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I guess we could commit it and find out. I'm all for the simpler\n>> coding if it works.\n> I don't understand what the local $ENV{MSYS2_ARG_CONV_EXCL} =\n> $source_ts_prefix does, \n\n\nYou don't want to know ....\n\n\nSee <https://www.msys2.org/wiki/Porting/#filesystem-namespaces> for the\ngory details.\n\n\nIt's the tablespace map parameter that is upsetting it.\n\n\n\n> but the remove/unlink condition was suggested\n> by Amit Kapila on the basis of testing on his Windows development\n> environment, so I suspect that's actually needed on at least some\n> systems. I just work here, though.\n>\n\nYeah, drongo doesn't like it, so we'll have to tweak the logic.\n\n\nI'll update after some more testing.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 8 Apr 2020 15:48:40 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 4/8/20 3:41 PM, Robert Haas wrote:\n>> I don't understand what the local $ENV{MSYS2_ARG_CONV_EXCL} =\n>> $source_ts_prefix does, \n\n> You don't want to know ....\n> See <https://www.msys2.org/wiki/Porting/#filesystem-namespaces> for the\n> gory details.\n\nI don't want to know either, but maybe that reference should be cited\nsomewhere near where we use this sort of hack.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Apr 2020 16:30:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests and contemporaneous buildfarm failures"
},
{
"msg_contents": "\n\nOn 2020/04/09 2:35, Robert Haas wrote:\n> On Wed, Apr 8, 2020 at 1:15 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> When there is a backup_manifest in the database cluster, it's included in\n>> the backup even when --no-manifest is specified. ISTM that this is problematic\n>> because the backup_manifest is obviously not valid for the backup.\n>> So, isn't it better to always exclude the *existing* backup_manifest in the\n>> cluster from the backup, like backup_label/tablespace_map? Patch attached.\n>>\n>> Also I found the typo in the document. Patch attached.\n> \n> Both patches look good. The second one is definitely a mistake on my\n> part, and the first one seems like a totally reasonable change.\n> Thanks!\n\nThanks for reviewing them! I pushed them.\n\nPlease note that the commit messages have not been delivered to\npgsql-committers yet.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 9 Apr 2020 23:06:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Greetings,\n\n* Fujii Masao (masao.fujii@oss.nttdata.com) wrote:\n> On 2020/04/09 2:35, Robert Haas wrote:\n> >On Wed, Apr 8, 2020 at 1:15 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>When there is a backup_manifest in the database cluster, it's included in\n> >>the backup even when --no-manifest is specified. ISTM that this is problematic\n> >>because the backup_manifest is obviously not valid for the backup.\n> >>So, isn't it better to always exclude the *existing* backup_manifest in the\n> >>cluster from the backup, like backup_label/tablespace_map? Patch attached.\n> >>\n> >>Also I found the typo in the document. Patch attached.\n> >\n> >Both patches look good. The second one is definitely a mistake on my\n> >part, and the first one seems like a totally reasonable change.\n> >Thanks!\n> \n> Thanks for reviewing them! I pushed them.\n> \n> Please note that the commit messages have not been delivered to\n> pgsql-committers yet.\n\nThey've been released and your address whitelisted.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 Apr 2020 10:10:55 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "\n\nOn 2020/04/09 23:10, Stephen Frost wrote:\n> Greetings,\n> \n> * Fujii Masao (masao.fujii@oss.nttdata.com) wrote:\n>> On 2020/04/09 2:35, Robert Haas wrote:\n>>> On Wed, Apr 8, 2020 at 1:15 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> When there is a backup_manifest in the database cluster, it's included in\n>>>> the backup even when --no-manifest is specified. ISTM that this is problematic\n>>>> because the backup_manifest is obviously not valid for the backup.\n>>>> So, isn't it better to always exclude the *existing* backup_manifest in the\n>>>> cluster from the backup, like backup_label/tablespace_map? Patch attached.\n>>>>\n>>>> Also I found the typo in the document. Patch attached.\n>>>\n>>> Both patches look good. The second one is definitely a mistake on my\n>>> part, and the first one seems like a totally reasonable change.\n>>> Thanks!\n>>\n>> Thanks for reviewing them! I pushed them.\n>>\n>> Please note that the commit messages have not been delivered to\n>> pgsql-committers yet.\n> \n> They've been released and your address whitelisted.\n\nMany thanks!!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 9 Apr 2020 23:11:58 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 2020/04/09 23:06, Fujii Masao wrote:\n> \n> \n> On 2020/04/09 2:35, Robert Haas wrote:\n>> On Wed, Apr 8, 2020 at 1:15 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> When there is a backup_manifest in the database cluster, it's included in\n>>> the backup even when --no-manifest is specified. ISTM that this is problematic\n>>> because the backup_manifest is obviously not valid for the backup.\n>>> So, isn't it better to always exclude the *existing* backup_manifest in the\n>>> cluster from the backup, like backup_label/tablespace_map? Patch attached.\n>>>\n>>> Also I found the typo in the document. Patch attached.\n>>\n>> Both patches look good. The second one is definitely a mistake on my\n>> part, and the first one seems like a totally reasonable change.\n>> Thanks!\n> \n> Thanks for reviewing them! I pushed them.\n\nI found other minor issues.\n\n+ When this option is specified with a value of <literal>yes</literal>\n+ or <literal>force-escape</literal>, a backup manifest is created\n\nforce-escape should be force-encode.\nPatch attached.\n\n-\twhile ((c = getopt_long(argc, argv, \"CD:F:r:RS:T:X:l:nNzZ:d:c:h:p:U:s:wWkvP\",\n+\twhile ((c = getopt_long(argc, argv, \"CD:F:r:RS:T:X:l:nNzZ:d:c:h:p:U:s:wWkvPm:\",\n\n\"m:\" seems unnecessary, so should be removed?\nPatch attached.\n\n+\tif (strcmp(basedir, \"-\") == 0)\n+\t{\n+\t\tchar\t\theader[512];\n+\t\tPQExpBufferData\tbuf;\n+\n+\t\tinitPQExpBuffer(&buf);\n+\t\tReceiveBackupManifestInMemory(conn, &buf);\n\nbackup_manifest should be received only when the manifest is enabled,\nso ISTM that the flag \"manifest\" should be checked in the above if-condition.\nThought? Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Mon, 13 Apr 2020 11:09:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Mon, Apr 13, 2020 at 11:09:34AM +0900, Fujii Masao wrote:\n> -\twhile ((c = getopt_long(argc, argv, \"CD:F:r:RS:T:X:l:nNzZ:d:c:h:p:U:s:wWkvP\",\n> +\twhile ((c = getopt_long(argc, argv, \"CD:F:r:RS:T:X:l:nNzZ:d:c:h:p:U:s:wWkvPm:\",\n> \n> \"m:\" seems unnecessary, so should be removed?\n> Patch attached.\n\nSmells like some remnant diff from a previous version.\n\n> +\tif (strcmp(basedir, \"-\") == 0)\n> +\t{\n> +\t\tchar\t\theader[512];\n> +\t\tPQExpBufferData\tbuf;\n> +\n> +\t\tinitPQExpBuffer(&buf);\n> +\t\tReceiveBackupManifestInMemory(conn, &buf);\n> \n> backup_manifest should be received only when the manifest is enabled,\n> so ISTM that the flag \"manifest\" should be checked in the above if-condition.\n> Thought? Patch attached.\n>\n> -\tif (strcmp(basedir, \"-\") == 0)\n> +\tif (strcmp(basedir, \"-\") == 0 && manifest)\n> \t{\n> \t\tchar\t\theader[512];\n> \t\tPQExpBufferData\tbuf;\n\nIndeed. Using the tar format with --no-manifest causes a failure:\npg_basebackup -D - --format=t --wal-method=none \\\n --no-manifest > /dev/null\n\nThe doc changes look right to me. Nice catches.\n--\nMichael",
"msg_date": "Mon, 13 Apr 2020 12:25:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Sun, Apr 12, 2020 at 10:09 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> I found other minor issues.\n\nI think these are all correct fixes. Thanks for the post-commit\nreview, and sorry for these mistakes.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 Apr 2020 11:15:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't like having a file format that's intended to be used by external\n> tools too that's undocumented except for code that assembles it in a\n> piecemeal fashion. Do you mean in a follow-on patch this release, or\n> later? I don't have a problem with the former.\n\nHere is a patch for that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 13 Apr 2020 13:40:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "documenting the backup manifest file format"
},
{
"msg_contents": "On Mon, Apr 13, 2020 at 01:40:56PM -0400, Robert Haas wrote:\n> On Fri, Mar 27, 2020 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't like having a file format that's intended to be used by external\n> > tools too that's undocumented except for code that assembles it in a\n> > piecemeal fashion. Do you mean in a follow-on patch this release, or\n> > later? I don't have a problem with the former.\n> \n> Here is a patch for that.\n\ntypos:\nmanifes\nhexademical (twice)\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 13 Apr 2020 12:55:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Mon, Apr 13, 2020 at 1:55 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> typos:\n> manifes\n> hexademical (twice)\n\nThanks. v2 attached.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 13 Apr 2020 14:08:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 2020-04-13 20:08, Robert Haas wrote:\n> [v2-0001-Document-the-backup-manifest-file-format.patch]\n\nCan you double check this sentence? Seems strange to me but I don't \nknow why; it may well be that my english is not good enough. Maybe a \ncomma after 'required' makes reading easier?\n\n The timeline from which this range of WAL records will be required in\n order to make use of this backup. The value is an integer.\n\n\nOne typo:\n\n'when making using' should be\n'when making use'\n\n\n\nErik Rijkers\n\n\n\n",
"msg_date": "Mon, 13 Apr 2020 20:28:31 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "+ The LSN at which replay must begin on the indicated timeline in order to\n+ make use of this backup. The LSN is stored in the format normally used\n+ by <productname>PostgreSQL</productname>; that is, it is a string\n+ consisting of two strings of hexademical characters, each with a length\n+ of between 1 and 8, separated by a slash.\n\ntypo \"hexademical\"\n\nAre these hex figures upper or lower case? No leading zeroes? This\nwould normally not matter, but the toplevel checksum will care. Also, I\nsee no mention of prettification-chars such as newlines or indentation.\nI suppose if I pass a manifest file through prettification (or Windows\nnewline conversion), the checksum may break.\n\nAs for Last-Modification, I think the spec should indicate the exact\nformat that's used, because it'll also be critical for checksumming.\n\nWhy is the top-level checksum only allowed to be SHA-256, if the files\ncan use up to SHA-512? (Also, did we intentionally omit the dash in\nhash names, so \"SHA-256\" to make it SHA256? This will also be critical\nfor checksumming the manifest itself.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 Apr 2020 15:34:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Mon, Apr 13, 2020 at 2:28 PM Erik Rijkers <er@xs4all.nl> wrote:\n> Can you double check this sentence? Seems strange to me but I don't\n> know why; it may well be that my english is not good enough. Maybe a\n> comma after 'required' makes reading easier?\n>\n> The timeline from which this range of WAL records will be required in\n> order to make use of this backup. The value is an integer.\n\nIt sounds a little awkward to me, but not outright wrong. I'm not\nexactly sure how to rephrase it, though. Maybe just shorten it to \"the\ntimeline for this range of WAL records\"?\n\n> One typo:\n>\n> 'when making using' should be\n> 'when making use'\n\nRight, thanks, fixed in my local copy.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 Apr 2020 15:51:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "\nOn 4/13/20 1:40 PM, Robert Haas wrote:\n> On Fri, Mar 27, 2020 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n>> I don't like having a file format that's intended to be used by external\n>> tools too that's undocumented except for code that assembles it in a\n>> piecemeal fashion. Do you mean in a follow-on patch this release, or\n>> later? I don't have a problem with the former.\n> Here is a patch for that.\n>\n\n\nSeems ok. A tiny example, or an excerpt, might be nice.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 13 Apr 2020 16:10:20 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Mon, Apr 13, 2020 at 3:34 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Are these hex figures upper or lower case? No leading zeroes? This\n> would normally not matter, but the toplevel checksum will care.\n\nNot really. You just feed the whole file except for the last line\nthrough shasum and you get the answer.\n\nIt so happens that the server generates lower-case, but\npg_verifybackup will accept either.\n\nLeading zeroes are not omitted. If the checksum's not the right\nlength, it ain't gonna work. If SHA is used, it's the same output you\nwould get from running shasum -a<whatever> on the file, which is\ncertainly a fixed length. I assumed that this followed from the\nstatement that there are two characters per byte in the checksum, and\nfrom the fact that no checksum algorithm I know about drops leading\nzeroes in the output.\n\n> Also, I\n> see no mention of prettification-chars such as newlines or indentation.\n> I suppose if I pass a manifest file through prettification (or Windows\n> newline conversion), the checksum may break.\n\nIt would indeed break. I'm not sure what you want me to say here,\nthough. If you're trying to parse a manifest, you shouldn't care about\nhow the whitespace is arranged. If you're trying to generate one, you\ncan arrange it any way you like, as long as you also include it in the\nchecksum.\n\n> As for Last-Modification, I think the spec should indicate the exact\n> format that's used, because it'll also be critical for checksumming.\n\nAgain, I don't think it really matters for checksumming, but it's\n\"YYYY-MM-DD HH:MM:SS TZ\" format, where TZ is always GMT.\n\n> Why is the top-level checksum only allowed to be SHA-256, if the files\n> can use up to SHA-512?\n\nIf we allowed the top-level checksum to be changed to something else,\nthen we'd probably want to indicate which kind of checksum is being\nused at the beginning of the file, so as to enable incremental parsing\nwith checksum verification at the end. pg_verifybackup doesn't\ncurrently do incremental parsing, but I'd like to add that sometime,\nif I get time to hash out the details. I think the use case for\nvarying the checksum type of the manifest itself is much less than for\nvarying it for the files. The big problem with checksumming the files\nis that it can be slow, because the files can be big. However, unless\nyou have a truckload of empty files in the database, the manifest is\ngoing to be very small compared to the sizes of all the files, so it\nseemed harmless to use a stronger checksum algorithm for the manifest\nitself. Maybe someone with a ton of empty or nearly-empty relations\nwill complain, but they can always use --no-manifest if they want.\n\nI agree that it's a little bit weird that you can have a stronger\nchecksum for the files instead of the manifest itself, but I also\nwonder what the use case would be for using a stronger checksum on the\nmanifest. David Steele argued that strong checksums on the files could\nbe useful to software that wants to rifle through all the backups\nyou've ever taken and find another copy of that file by looking for\nsomething with a matching checksum. CRC-32C wouldn't be strong enough\nfor that, because eventually you could have enough files that you\nstart to have collisions. The SHA algorithms output enough bits to\nmake that quite unlikely. But this argument only makes sense for the\nfiles, not the manifest.\n\nNaturally, all this is arguable, though, and a good deal of arguing\nabout it has been done, as you have probably noticed. I am still of\nthe opinion that if somebody's goal is to use this facility for its\nintended purpose, which is to find out whether your backup got\ncorrupted, any of these algorithms are fine, and are highly likely to\ntell you that you have a problem if, in fact, you do. In fact, I bet\nthat even a checksum algorithm considerably stupider than anything I'd\nactually consider using would accomplish that goal in a high\npercentage of cases. But not everybody agrees with me, to the point\nwhere I am starting to wonder if I really understand how computers\nwork.\n\n> (Also, did we intentionally omit the dash in\n> hash names, so \"SHA-256\" to make it SHA256? This will also be critical\n> for checksumming the manifest itself.)\n\nI debated this with myself, settled on this spelling, and nobody\ncomplained until now. It could be changed, though. I didn't have any\nparticular reason for choosing it except the feeling that people would\nprobably prefer to type --manifest-checksum=sha256 rather than\n--manifest-checksum=sha-256.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 Apr 2020 16:14:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Mon, Apr 13, 2020 at 4:10 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> Seems ok. A tiny example, or an excerpt, might be nice.\n\nAn empty database produces a manifest about 1200 lines long, so a full\nexample seems like too much to include in the documentation. An\nexcerpt could be included, I suppose.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 Apr 2020 16:16:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 4/13/20 4:14 PM, Robert Haas wrote:\n> On Mon, Apr 13, 2020 at 3:34 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n>> Also, I\n>> see no mention of prettification-chars such as newlines or indentation.\n>> I suppose if I pass a manifest file through prettification (or Windows\n>> newline conversion), the checksum may break.\n> \n> It would indeed break. I'm not sure what you want me to say here,\n> though. If you're trying to parse a manifest, you shouldn't care about\n> how the whitespace is arranged. If you're trying to generate one, you\n> can arrange it any way you like, as long as you also include it in the\n> checksum.\n\npgBackRest ignores whitespace but this is a legacy of the way Perl \ncalculated checksums, not an intentional feature. This worked well when \nthe manifest was loaded as a whole, converted to JSON, and checksummed, \nbut it is a major pain for the streaming code we now have in C.\n\nI guarantee that our next manifest version will do a simple \nchecksum of bytes as Robert has done in this feature.\n\nSo, I'm +1 as implemented.\n\n>> Why is the top-level checksum only allowed to be SHA-256, if the files\n>> can use up to SHA-512?\n\n<snip>\n\n> I agree that it's a little bit weird that you can have a stronger\n> checksum for the files instead of the manifest itself, but I also\n> wonder what the use case would be for using a stronger checksum on the\n> manifest. David Steele argued that strong checksums on the files could\n> be useful to software that wants to rifle through all the backups\n> you've ever taken and find another copy of that file by looking for\n> something with a matching checksum. CRC-32C wouldn't be strong enough\n> for that, because eventually you could have enough files that you\n> start to have collisions. The SHA algorithms output enough bits to\n> make that quite unlikely. But this argument only makes sense for the\n> files, not the manifest.\n\nAgreed. I think SHA-256 is *more* than enough to protect the manifest \nagainst corruption. That said, since the cost of SHA-256 vs. SHA-512 in \nthe context of the manifest is negligible we could just use the stronger \nalgorithm to deflect a similar question going forward.\n\nThat choice might not age well, but we could always say, well, we picked \nit because it was the strongest available at the time. Allowing a choice \nof which algorithm to use for the manifest checksum seems like it will \njust make verifying the file harder with no tangible benefit.\n\nMaybe just a comment in the docs about why SHA-256 was used would be fine.\n\n>> (Also, did we intentionally omit the dash in\n>> hash names, so \"SHA-256\" to make it SHA256? This will also be critical\n>> for checksumming the manifest itself.)\n> \n> I debated this with myself, settled on this spelling, and nobody\n> complained until now. It could be changed, though. I didn't have any\n> particular reason for choosing it except the feeling that people would\n> probably prefer to type --manifest-checksum=sha256 rather than\n> --manifest-checksum=sha-256.\n\n+1 for sha256 rather than sha-256.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 13 Apr 2020 16:42:03 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 2020-Apr-13, Robert Haas wrote:\n\n> On Mon, Apr 13, 2020 at 3:34 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Are these hex figures upper or lower case? No leading zeroes? This\n> > would normally not matter, but the toplevel checksum will care.\n> \n> Not really. You just feed the whole file except for the last line\n> through shasum and you get the answer.\n> \n> It so happens that the server generates lower-case, but\n> pg_verifybackup will accept either.\n> \n> Leading zeroes are not omitted. If the checksum's not the right\n> length, it ain't gonna work. If SHA is used, it's the same output you\n> would get from running shasum -a<whatever> on the file, which is\n> certainly a fixed length. I assumed that this followed from the\n> statement that there are two characters per byte in the checksum, and\n> from the fact that no checksum algorithm I know about drops leading\n> zeroes in the output.\n\nEh, apologies, I was completely unclear -- I was looking at the LSN\nfields when writing the above. So the leading zeroes and letter case\ncomment refers to those in the LSN values. I agree that it doesn't\nmatter as long as the same tool generates the json file and writes the\nchecksum.\n\n> > Also, I see no mention of prettification-chars such as newlines or\n> > indentation. I suppose if I pass a manifest file through\n> > prettification (or Windows newline conversion), the checksum may\n> > break.\n> \n> It would indeed break. I'm not sure what you want me to say here,\n> though. If you're trying to parse a manifest, you shouldn't care about\n> how the whitespace is arranged. If you're trying to generate one, you\n> can arrange it any way you like, as long as you also include it in the\n> checksum.\n\nYeah, I guess I'm just saying that it feels brittle to have a file\nformat that's supposed to be good for data exchange and then make it\nitself depend on representation details such as the order that fields\nappear in, the letter case, or the format of newlines. Maybe this isn't\nreally of concern, but it seemed strange.\n\n> > As for Last-Modification, I think the spec should indicate the exact\n> > format that's used, because it'll also be critical for checksumming.\n> \n> Again, I don't think it really matters for checksumming, but it's\n> \"YYYY-MM-DD HH:MM:SS TZ\" format, where TZ is always GMT.\n\nI agree that whatever format you use will work as long as it isn't\nmodified.\n\nI think strict ISO 8601 might be preferable (with the T in the middle\nand ending in Z instead of \" GMT\").\n\n> > Why is the top-level checksum only allowed to be SHA-256, if the\n> > files can use up to SHA-512?\n\nThanks for the discussion. I think you mostly want to make sure that\nthe manifest is sensible (not corrupt) rather than defend against\nsomebody maliciously giving you an attacking manifest (??). I incline\nto agree that any SHA-2 hash is going to serve that purpose and have no\nfurther comment to make.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 Apr 2020 17:42:56 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Mon, Apr 13, 2020 at 5:43 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Yeah, I guess I'm just saying that it feels brittle to have a file\n> format that's supposed to be good for data exchange and then make it\n> itself depend on representation details such as the order that fields\n> appear in, the letter case, or the format of newlines. Maybe this isn't\n> really of concern, but it seemed strange.\n\nI didn't want to use JSON for this at all, but I got outvoted. When I\nraised this issue, it was suggested that I deal with it in this way,\nso I did. I can't really defend it too far beyond that, although I do\nthink that one nice thing about this is that you can verify the\nchecksum using shell commands if you want. Just figure out the number\nof lines in the file, minus one, and do head -n$LINES backup_manifest\n| shasum -a256 and boom. If there were some whitespace-skipping thing\nfiguring out how to reproduce the checksum calculation would be hard.\n\n> I think strict ISO 8601 might be preferable (with the T in the middle\n> and ending in Z instead of \" GMT\").\n\nHmm, did David suggest that before? I don't recall for sure. I think\nhe had some suggestion, but I'm not sure if it was the same one.\n\n> > > Why is the top-level checksum only allowed to be SHA-256, if the\n> > > files can use up to SHA-512?\n>\n> Thanks for the discussion. I think you mostly want to make sure that\n> the manifest is sensible (not corrupt) rather than defend against\n> somebody maliciously giving you an attacking manifest (??). I incline\n> to agree that any SHA-2 hash is going to serve that purpose and have no\n> further comment to make.\n\nThe code has other sanity checks against the manifest failing to parse\nproperly, so you can't (I hope) crash it or anything even if you\nfalsify the checksum. But suppose that there is a gremlin running\naround your system flipping occasional bits. If said gremlin flips a\nbit in a \"0\" that appears in a file's checksum string, it could become\na \"1\", a \"3\", or a \"7\", all of which are still valid characters for a\nhex string. When you then tried to verify the backup, verification for\nthat file would fail, but you'd think it was a problem with the file,\nrather than a problem with the manifest. The manifest checksum\nprevents that: you'll get a complaint about the manifest checksum\nbeing wrong rather than a complaint about the file not matching the\nmanifest checksum. A sufficiently smart gremlin could figure out the\nexpected checksum for the revised manifest and flip bits to make the\nactual value match the expected one, but I think we're worried about\n\"chaotic neutral\" gremlins, not \"lawful evil\" ones.\n\nThat having been said, there was some discussion on the original\nthread about keeping your backup on regular storage and your manifest\nchecksum in a concrete bunker at the bottom of the ocean; in that\nscenario, it should be possible to detect tampering in either the\nmanifest itself or in non-WAL data files, as long as the adversary\ncan't break SHA-256. But I'm not sure how much we should really worry\nabout that. For me, the design center for this feature is a user who\nuntars base.tar and forgets about 43965.tar. If that person runs\npg_verifybackup, it's gonna tell them that things are broken, and\nthat's good enough for me. It may not be good enough for everybody,\nbut it's good enough for me.\n\nI think I'm going to go ahead and push this now, maybe with a small\nwording tweak as discussed upthread with Andrew. The rest of this\ndiscussion is really about whether the patch needs any design changes\nrather than about whether the documentation describes what the patch\ndoes, so it makes sense to me to commit this first and then if\nsomebody wants to argue for a change they certainly can.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 14 Apr 2020 12:56:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 4/14/20 12:56 PM, Robert Haas wrote:\n> On Mon, Apr 13, 2020 at 5:43 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> Yeah, I guess I'm just saying that it feels brittle to have a file\n>> format that's supposed to be good for data exchange and then make it\n>> itself depend on representation details such as the order that fields\n>> appear in, the letter case, or the format of newlines. Maybe this isn't\n>> really of concern, but it seemed strange.\n> \n> I didn't want to use JSON for this at all, but I got outvoted. When I\n> raised this issue, it was suggested that I deal with it in this way,\n> so I did. I can't really defend it too far beyond that, although I do\n> think that one nice thing about this is that you can verify the\n> checksum using shell commands if you want. Just figure out the number\n> of lines in the file, minus one, and do head -n$LINES backup_manifest\n> | shasum -a256 and boom. If there were some whitespace-skipping thing\n> figuring out how to reproduce the checksum calculation would be hard.\n> \n>> I think strict ISO 8601 might be preferable (with the T in the middle\n>> and ending in Z instead of \" GMT\").\n> \n> Hmm, did David suggest that before? I don't recall for sure. I think\n> he had some suggestion, but I'm not sure if it was the same one.\n\n\"I'm also partial to using epoch time in the manifest because it is \ngenerally easier for programs to work with. But, human-readable doesn't \nsuck, either.\"\n\nAlso you don't need to worry about time-zone conversion errors -- even \nif the source time is UTC this can easily happen if you are not careful. \nIt also saves a parsing step.\n\nThe downside is it is not human-readable but this is intended to be a \nmachine-readable format so I don't think it's a big deal (encoded \nfilenames will be just as opaque). If a user really needs to know what \ntime some file is (rare, I think) they can paste it with a web tool to \nfind out.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 14 Apr 2020 13:12:51 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 2020-Apr-14, David Steele wrote:\n\n> On 4/14/20 12:56 PM, Robert Haas wrote:\n>\n> > Hmm, did David suggest that before? I don't recall for sure. I think\n> > he had some suggestion, but I'm not sure if it was the same one.\n> \n> \"I'm also partial to using epoch time in the manifest because it is\n> generally easier for programs to work with. But, human-readable doesn't\n> suck, either.\"\n\nUgh. If you go down that road, why write human-readable contents at\nall? You may as well just use a binary format. But that's a very\nslippery slope and you won't like to be in the bottom -- I don't see\nwhat that gains you. It's not like it's a lot of work to parse a\ntimestamp in a non-internationalized well-defined human-readable format.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 Apr 2020 13:27:31 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 4/14/20 1:27 PM, Alvaro Herrera wrote:\n> On 2020-Apr-14, David Steele wrote:\n> \n>> On 4/14/20 12:56 PM, Robert Haas wrote:\n>>\n>>> Hmm, did David suggest that before? I don't recall for sure. I think\n>>> he had some suggestion, but I'm not sure if it was the same one.\n>>\n>> \"I'm also partial to using epoch time in the manifest because it is\n>> generally easier for programs to work with. But, human-readable doesn't\n>> suck, either.\"\n> \n> Ugh. If you go down that road, why write human-readable contents at\n> all? You may as well just use a binary format. But that's a very\n> slippery slope and you won't like to be in the bottom -- I don't see\n> what that gains you. It's not like it's a lot of work to parse a\n> timestamp in a non-internationalized well-defined human-readable format.\n\nWell, times are a special case because they are so easy to mess up. Try \nconverting ISO-8601 to epoch time using the standard C functions on a \nsystem where TZ != UTC. Fun times.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 14 Apr 2020 13:33:44 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "\nOn 4/14/20 1:33 PM, David Steele wrote:\n> On 4/14/20 1:27 PM, Alvaro Herrera wrote:\n>> On 2020-Apr-14, David Steele wrote:\n>>\n>>> On 4/14/20 12:56 PM, Robert Haas wrote:\n>>>\n>>>> Hmm, did David suggest that before? I don't recall for sure. I think\n>>>> he had some suggestion, but I'm not sure if it was the same one.\n>>>\n>>> \"I'm also partial to using epoch time in the manifest because it is\n>>> generally easier for programs to work with. But, human-readable\n>>> doesn't\n>>> suck, either.\"\n>>\n>> Ugh. If you go down that road, why write human-readable contents at\n>> all? You may as well just use a binary format. But that's a very\n>> slippery slope and you won't like to be in the bottom -- I don't see\n>> what that gains you. It's not like it's a lot of work to parse a\n>> timestamp in a non-internationalized well-defined human-readable format.\n>\n> Well, times are a special case because they are so easy to mess up.\n> Try converting ISO-8601 to epoch time using the standard C functions\n> on a system where TZ != UTC. Fun times.\n>\n>\n\n\nEven if it's a zulu time? That would be pretty damn sad.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 14 Apr 2020 15:03:12 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 4/14/20 3:03 PM, Andrew Dunstan wrote:\n> \n> On 4/14/20 1:33 PM, David Steele wrote:\n>> On 4/14/20 1:27 PM, Alvaro Herrera wrote:\n>>> On 2020-Apr-14, David Steele wrote:\n>>>\n>>>> On 4/14/20 12:56 PM, Robert Haas wrote:\n>>>>\n>>>>> Hmm, did David suggest that before? I don't recall for sure. I think\n>>>>> he had some suggestion, but I'm not sure if it was the same one.\n>>>>\n>>>> \"I'm also partial to using epoch time in the manifest because it is\n>>>> generally easier for programs to work with. But, human-readable\n>>>> doesn't\n>>>> suck, either.\"\n>>>\n>>> Ugh. If you go down that road, why write human-readable contents at\n>>> all? You may as well just use a binary format. But that's a very\n>>> slippery slope and you won't like to be in the bottom -- I don't see\n>>> what that gains you. It's not like it's a lot of work to parse a\n>>> timestamp in a non-internationalized well-defined human-readable format.\n>>\n>> Well, times are a special case because they are so easy to mess up.\n>> Try converting ISO-8601 to epoch time using the standard C functions\n>> on a system where TZ != UTC. Fun times.\n> \n> Even if it's a zulu time? That would be pretty damn sad.\nZULU/GMT/UTC are all fine. But if the server timezone is EDT for example \n(not that I recommend this) you are likely to get the wrong result. \nResults vary based on your platform. For instance, we found MacOS was \nmore likely to work the way you would expect and Linux was hopeless.\n\nThere are all kinds of fun tricks to get around this (sort of). One is \nto temporarily set TZ=UTC which sucks if an error happens before it gets \nset back. There are some hacks to try to determine your offset which \nhave inherent race conditions around DST changes.\n\nAfter some experimentation we just used the Posix definition for epoch \ntime and used that to do our conversions:\n\nhttps://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_16\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 14 Apr 2020 15:19:23 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "\nOn 4/14/20 3:19 PM, David Steele wrote:\n> On 4/14/20 3:03 PM, Andrew Dunstan wrote:\n>>\n>> On 4/14/20 1:33 PM, David Steele wrote:\n>>> On 4/14/20 1:27 PM, Alvaro Herrera wrote:\n>>>> On 2020-Apr-14, David Steele wrote:\n>>>>\n>>>>> On 4/14/20 12:56 PM, Robert Haas wrote:\n>>>>>\n>>>>>> Hmm, did David suggest that before? I don't recall for sure. I think\n>>>>>> he had some suggestion, but I'm not sure if it was the same one.\n>>>>>\n>>>>> \"I'm also partial to using epoch time in the manifest because it is\n>>>>> generally easier for programs to work with. But, human-readable\n>>>>> doesn't\n>>>>> suck, either.\"\n>>>>\n>>>> Ugh. If you go down that road, why write human-readable contents at\n>>>> all? You may as well just use a binary format. But that's a very\n>>>> slippery slope and you won't like to be in the bottom -- I don't see\n>>>> what that gains you. It's not like it's a lot of work to parse a\n>>>> timestamp in a non-internationalized well-defined human-readable\n>>>> format.\n>>>\n>>> Well, times are a special case because they are so easy to mess up.\n>>> Try converting ISO-8601 to epoch time using the standard C functions\n>>> on a system where TZ != UTC. Fun times.\n>>\n>> Even if it's a zulu time? That would be pretty damn sad.\n> ZULU/GMT/UTC are all fine. But if the server timezone is EDT for\n> example (not that I recommend this) you are likely to get the wrong\n> result. Results vary based on your platform. For instance, we found\n> MacOS was more likely to work the way you would expect and Linux was\n> hopeless.\n>\n> There are all kinds of fun tricks to get around this (sort of). One is\n> to temporarily set TZ=UTC which sucks if an error happens before it\n> gets set back. 
There are some hacks to try to determine your offset\n> which have inherent race conditions around DST changes.\n>\n> After some experimentation we just used the Posix definition for epoch\n> time and used that to do our conversions:\n>\n> https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_16\n>\n>\n>\n\nOK, but I think if we're putting a timestamp string in ISO-8601 format\nin the manifest it should be in UTC / Zulu time, precisely to avoid\nthese issues. If that's too much trouble then yes an epoch time will\nprobably do.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 14 Apr 2020 15:55:55 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 4/14/20 3:55 PM, Andrew Dunstan wrote:\n> \n> OK, but I think if we're putting a timestamp string in ISO-8601 format\n> in the manifest it should be in UTC / Zulu time, precisely to avoid\n> these issues. If that's too much trouble then yes an epoch time will\n> probably do.\n\nHappily ISO-8601 is always UTC. The problem I'm referring to is the \ntimezone setting on the host system when doing conversions in C.\n\nTo be fair most languages handle this well and C is C so I'm not sure we \nneed to make a big deal of it. In JSON/XML it's pretty common to use \nISO-8601 so that seems like a rational choice.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 14 Apr 2020 16:01:40 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 2020-Apr-14, Andrew Dunstan wrote:\n\n> OK, but I think if we're putting a timestamp string in ISO-8601 format\n> in the manifest it should be in UTC / Zulu time, precisely to avoid\n> these issues. If that's too much trouble then yes an epoch time will\n> probably do.\n\nThe timestamp is always specified and always UTC (except the code calls\nit GMT).\n\n+ /*\n+ * Convert last modification time to a string and append it to the\n+ * manifest. Since it's not clear what time zone to use and since time\n+ * zone definitions can change, possibly causing confusion, use GMT\n+ * always.\n+ */\n+ appendStringInfoString(&buf, \"\\\"Last-Modified\\\": \\\"\");\n+ enlargeStringInfo(&buf, 128);\n+ buf.len += pg_strftime(&buf.data[buf.len], 128, \"%Y-%m-%d %H:%M:%S %Z\",\n+ pg_gmtime(&mtime));\n+ appendStringInfoString(&buf, \"\\\"\");\n\nI was merely saying that it's trivial to make this iso-8601 compliant as\n\n buf.len += pg_strftime(&buf.data[buf.len], 128, \"%Y-%m-%dT%H:%M:%SZ\",\n\nie. omit the \"GMT\" string and replace it with a literal Z, and remove\nthe space and replace it with a T.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 Apr 2020 16:09:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 2020-Apr-14, David Steele wrote:\n\n> Happily ISO-8601 is always UTC.\n\nUh, it is not --\nhttps://en.wikipedia.org/wiki/ISO_8601#Time_zone_designators\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 Apr 2020 16:11:00 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "\nOn 4/14/20 4:09 PM, Alvaro Herrera wrote:\n> On 2020-Apr-14, Andrew Dunstan wrote:\n>\n>> OK, but I think if we're putting a timestamp string in ISO-8601 format\n>> in the manifest it should be in UTC / Zulu time, precisely to avoid\n>> these issues. If that's too much trouble then yes an epoch time will\n>> probably do.\n> The timestamp is always specified and always UTC (except the code calls\n> it GMT).\n>\n> + /*\n> + * Convert last modification time to a string and append it to the\n> + * manifest. Since it's not clear what time zone to use and since time\n> + * zone definitions can change, possibly causing confusion, use GMT\n> + * always.\n> + */\n> + appendStringInfoString(&buf, \"\\\"Last-Modified\\\": \\\"\");\n> + enlargeStringInfo(&buf, 128);\n> + buf.len += pg_strftime(&buf.data[buf.len], 128, \"%Y-%m-%d %H:%M:%S %Z\",\n> + pg_gmtime(&mtime));\n> + appendStringInfoString(&buf, \"\\\"\");\n>\n> I was merely saying that it's trivial to make this iso-8601 compliant as\n>\n> buf.len += pg_strftime(&buf.data[buf.len], 128, \"%Y-%m-%dT%H:%M:%SZ\",\n>\n> ie. omit the \"GMT\" string and replace it with a literal Z, and remove\n> the space and replace it with a T.\n>\n\n+1\n\n\ncheers\n\n\nandre\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 14 Apr 2020 16:33:31 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 4/14/20 4:11 PM, Alvaro Herrera wrote:\n> On 2020-Apr-14, David Steele wrote:\n> \n>> Happily ISO-8601 is always UTC.\n> \n> Uh, it is not --\n> https://en.wikipedia.org/wiki/ISO_8601#Time_zone_designators\n\nWhoops, you are correct. I've just never seen non-UTC in the wild yet.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 14 Apr 2020 16:40:12 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "\n\nOn 2020/04/14 0:15, Robert Haas wrote:\n> On Sun, Apr 12, 2020 at 10:09 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> I found other minor issues.\n> \n> I think these are all correct fixes. Thanks for the post-commit\n> review, and sorry for this mistakes.\n\nThanks for the review, Michael and Robert. Pushed the patches!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 15 Apr 2020 11:18:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On 2020/04/14 2:40, Robert Haas wrote:\n> On Fri, Mar 27, 2020 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n>> I don't like having a file format that's intended to be used by external\n>> tools too that's undocumented except for code that assembles it in a\n>> piecemeal fashion. Do you mean in a follow-on patch this release, or\n>> later? I don't have a problem with the former.\n> \n> Here is a patch for that.\n\nWhile reading the document that you pushed, I thought that it's better\nto define index term for backup manifest, so that we can easily reach\nthis document from the index page. Thought? Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 15 Apr 2020 12:49:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Tue, Apr 14, 2020 at 11:49 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> While reading the document that you pushed, I thought that it's better\n> to define index term for backup manifest, so that we can easily reach\n> this document from the index page. Thought? Patch attached.\n\nFine with me. I tend not to think about the index very much, so I'm\nglad you are. :-)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 15 Apr 2020 09:24:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Tue, 14 Apr 2020 12:56:49 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Apr 13, 2020 at 5:43 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> > Yeah, I guess I'm just saying that it feels brittle to have a file\n> > format that's supposed to be good for data exchange and then make it\n> > itself depend on representation details such as the order that fields\n> > appear in, the letter case, or the format of newlines. Maybe this isn't\n> > really of concern, but it seemed strange. \n> \n> I didn't want to use JSON for this at all, but I got outvoted. When I\n> raised this issue, it was suggested that I deal with it in this way,\n> so I did. I can't really defend it too far beyond that, although I do\n> think that one nice thing about this is that you can verify the\n> checksum using shell commands if you want. Just figure out the number\n> of lines in the file, minus one, and do head -n$LINES backup_manifest\n> | shasum -a256 and boom. If there were some whitespace-skipping thing\n> figuring out how to reproduce the checksum calculation would be hard.\n\nFWIW, shell commands (md5sum and sha*sum) read checksums from a separate file\nwith a very simple format: one file per line with format \"CHECKSUM FILEPATH\".\n\nThanks to json, it is fairly easy to extract checksums and filenames from the\ncurrent manifest file format and check them all with one command:\n\n jq -r '.Files|.[]|.Checksum+\" \"+.Path' backup_manifest > checksums.sha256\n sha256sum --check --quiet checksums.sha256\n\nYou can even pipe these commands together to avoid the intermediary file.\n\nBut for backup_manifest, it's kind of shame we have to check the checksum\nagainst an transformed version of the file. Did you consider creating eg. a\nseparate backup_manifest.sha256 file?\n\nI'm very sorry in advance if this has been discussed previously.\n\nRegards,\n\n\n",
"msg_date": "Wed, 15 Apr 2020 17:23:21 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Wed, Apr 15, 2020 at 11:23 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> But for backup_manifest, it's kind of shame we have to check the checksum\n> against an transformed version of the file. Did you consider creating eg. a\n> separate backup_manifest.sha256 file?\n>\n> I'm very sorry in advance if this has been discussed previously.\n\nIt was briefly mentioned in the original (lengthy) discussion, but I\nthink there was one vote in favor and two votes against or something\nlike that, so it didn't go anywhere. I didn't realize that there were\nhandy command-line tools for manipulating json like that, or I\nprobably would have considered that idea more strongly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 15 Apr 2020 12:03:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Wed, 15 Apr 2020 12:03:28 -0400\nRobert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Apr 15, 2020 at 11:23 AM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> > But for backup_manifest, it's kind of shame we have to check the checksum\n> > against an transformed version of the file. Did you consider creating eg. a\n> > separate backup_manifest.sha256 file?\n> >\n> > I'm very sorry in advance if this has been discussed previously. \n> \n> It was briefly mentioned in the original (lengthy) discussion, but I\n> think there was one vote in favor and two votes against or something\n> like that, so it didn't go anywhere.\n\nArgh.\n\n> I didn't realize that there were handy command-line tools for manipulating\n> json like that, or I probably would have considered that idea more strongly.\n\nThat was indeed a lengthy thread with various details discussed. I'm sorry I\ndidn't catch the ball back then.\n\nRegards,\n\n\n",
"msg_date": "Thu, 16 Apr 2020 00:43:15 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 4/15/20 6:43 PM, Jehan-Guillaume de Rorthais wrote:\n> On Wed, 15 Apr 2020 12:03:28 -0400\n> Robert Haas <robertmhaas@gmail.com> wrote:\n> \n>> On Wed, Apr 15, 2020 at 11:23 AM Jehan-Guillaume de Rorthais\n>> <jgdr@dalibo.com> wrote:\n>>> But for backup_manifest, it's kind of shame we have to check the checksum\n>>> against an transformed version of the file. Did you consider creating eg. a\n>>> separate backup_manifest.sha256 file?\n>>>\n>>> I'm very sorry in advance if this has been discussed previously.\n>>\n>> It was briefly mentioned in the original (lengthy) discussion, but I\n>> think there was one vote in favor and two votes against or something\n>> like that, so it didn't go anywhere.\n> \n> Argh.\n> \n>> I didn't realize that there were handy command-line tools for manipulating\n>> json like that, or I probably would have considered that idea more strongly.\n> \n> That was indeed a lengthy thread with various details discussed. I'm sorry I\n> didn't catch the ball back then.\n\nOne of the reasons to use JSON was to be able to use command line tools \nlike jq to do tasks (I use it myself). But I think only the \npg_verifybackup tool should be used to verify the internal checksum.\n\nTwo thoughts:\n\n1) You can always generate an external checksum when you generate the \nbackup if you want to do your own verification without running \npg_verifybackup.\n\n2) Perhaps it would be good if the pg_verifybackup command had a \n--verify-manifest-checksum option (or something) to check that the \nmanifest file looks valid without checking any files. That's not going \nto happen for PG13, but it's possible for PG14.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 15 Apr 2020 18:54:14 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Wed, 15 Apr 2020 18:54:14 -0400\nDavid Steele <david@pgmasters.net> wrote:\n\n> On 4/15/20 6:43 PM, Jehan-Guillaume de Rorthais wrote:\n> > On Wed, 15 Apr 2020 12:03:28 -0400\n> > Robert Haas <robertmhaas@gmail.com> wrote:\n> > \n> >> On Wed, Apr 15, 2020 at 11:23 AM Jehan-Guillaume de Rorthais\n> >> <jgdr@dalibo.com> wrote: \n> >>> But for backup_manifest, it's kind of shame we have to check the checksum\n> >>> against an transformed version of the file. Did you consider creating eg.\n> >>> a separate backup_manifest.sha256 file?\n> >>>\n> >>> I'm very sorry in advance if this has been discussed previously. \n> >>\n> >> It was briefly mentioned in the original (lengthy) discussion, but I\n> >> think there was one vote in favor and two votes against or something\n> >> like that, so it didn't go anywhere. \n> > \n> > Argh.\n> > \n> >> I didn't realize that there were handy command-line tools for manipulating\n> >> json like that, or I probably would have considered that idea more\n> >> strongly. \n> > \n> > That was indeed a lengthy thread with various details discussed. I'm sorry I\n> > didn't catch the ball back then. \n> \n> One of the reasons to use JSON was to be able to use command line tools \n> like jq to do tasks (I use it myself).\n\nThat's perfectly fine. I was only wondering about having the manifest checksum\noutside of the manifest itself.\n\n> But I think only the pg_verifybackup tool should be used to verify the\n> internal checksum.\n\ntrue.\n\n> Two thoughts:\n> \n> 1) You can always generate an external checksum when you generate the \n> backup if you want to do your own verification without running \n> pg_verifybackup.\n\nSure, but by the time I want to produce an external checksum, the manifest\nwould have travel around quite a bit with various danger on its way to corrupt\nit. 
Checksuming it from the original process that produced it sounds safer.\n\n> 2) Perhaps it would be good if the pg_verifybackup command had a \n> --verify-manifest-checksum option (or something) to check that the \n> manifest file looks valid without checking any files. That's not going \n> to happen for PG13, but it's possible for PG14.\n\nSure.\n\nI just liked the idea to be able to check the manifest using an external\ncommand line implementing the same standardized checksum algo. Without editing\nthe manifest first. But I understand it's too late to discuss this now.\n\nRegards,\n\n\n",
"msg_date": "Fri, 17 Apr 2020 00:23:27 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "\n\nOn 2020/04/15 22:24, Robert Haas wrote:\n> On Tue, Apr 14, 2020 at 11:49 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> While reading the document that you pushed, I thought that it's better\n>> to define index term for backup manifest, so that we can easily reach\n>> this document from the index page. Thought? Patch attached.\n> \n> Fine with me. I tend not to think about the index very much, so I'm\n> glad you are. :-)\n\nPushed! Thanks!\n\nRegards,\n \n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 17 Apr 2020 18:39:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 2020/04/15 11:18, Fujii Masao wrote:\n> \n> \n> On 2020/04/14 0:15, Robert Haas wrote:\n>> On Sun, Apr 12, 2020 at 10:09 PM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>> I found other minor issues.\n>>\n>> I think these are all correct fixes. Thanks for the post-commit\n>> review, and sorry for this mistakes.\n> \n> Thanks for the review, Michael and Robert. Pushed the patches!\n\nI found three minor issues in pg_verifybackup.\n\n+\t\t{\"print-parse-wal\", no_argument, NULL, 'p'},\n\nThis is unused option, so this line should be removed.\n\n+\tprintf(_(\" -m, --manifest=PATH use specified path for manifest\\n\"));\n\nTypo: --manifest should be --manifest-path\n\npg_verifybackup accepts --quiet option, but its usage() doesn't\nprint any message for --quiet option.\n\nAttached is the patch that fixes those issues.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 23 Apr 2020 01:21:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Wed, Apr 22, 2020 at 12:21 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> I found three minor issues in pg_verifybackup.\n>\n> + {\"print-parse-wal\", no_argument, NULL, 'p'},\n>\n> This is unused option, so this line should be removed.\n>\n> + printf(_(\" -m, --manifest=PATH use specified path for manifest\\n\"));\n>\n> Typo: --manifest should be --manifest-path\n>\n> pg_verifybackup accepts --quiet option, but its usage() doesn't\n> print any message for --quiet option.\n>\n> Attached is the patch that fixes those issues.\n\nThanks; LGTM.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 22 Apr 2020 12:28:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "\n\nOn 2020/04/23 1:28, Robert Haas wrote:\n> On Wed, Apr 22, 2020 at 12:21 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> I found three minor issues in pg_verifybackup.\n>>\n>> + {\"print-parse-wal\", no_argument, NULL, 'p'},\n>>\n>> This is unused option, so this line should be removed.\n>>\n>> + printf(_(\" -m, --manifest=PATH use specified path for manifest\\n\"));\n>>\n>> Typo: --manifest should be --manifest-path\n>>\n>> pg_verifybackup accepts --quiet option, but its usage() doesn't\n>> print any message for --quiet option.\n>>\n>> Attached is the patch that fixes those issues.\n> \n> Thanks; LGTM.\n\nThanks for the review! Pushed.\n\nRegards, \n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 23 Apr 2020 11:33:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Sun, Apr 5, 2020 at 3:31 PM Andres Freund <andres@anarazel.de> wrote:\n> The warnings don't seem too unreasonable. The compiler can't see that\n> the error_cb inside json_manifest_parse_failure() is not expected to\n> return. Probably worth adding a wrapper around the calls to\n> context->error_cb and mark that as noreturn.\n\nEh, how? The callback is declared as:\n\ntypedef void (*json_manifest_error_callback)(JsonManifestParseContext *,\n char\n*fmt, ...) pg_attribute_printf(2, 3);\n\nI don't know of a way to create a wrapper around that, because of the\nvariable argument list. We could change the callback to take va_list,\nI guess.\n\nDoes it work for you to just add pg_attribute_noreturn() to this\ntypedef, as in the attached?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 23 Apr 2020 08:57:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "Hi,\n\nOn 2020-04-23 08:57:39 -0400, Robert Haas wrote:\n> On Sun, Apr 5, 2020 at 3:31 PM Andres Freund <andres@anarazel.de> wrote:\n> > The warnings don't seem too unreasonable. The compiler can't see that\n> > the error_cb inside json_manifest_parse_failure() is not expected to\n> > return. Probably worth adding a wrapper around the calls to\n> > context->error_cb and mark that as noreturn.\n> \n> Eh, how? The callback is declared as:\n> \n> typedef void (*json_manifest_error_callback)(JsonManifestParseContext *,\n> char\n> *fmt, ...) pg_attribute_printf(2, 3);\n> \n> I don't know of a way to create a wrapper around that, because of the\n> variable argument list.\n\nDidn't think that far...\n\n\n> We could change the callback to take va_list, I guess.\n\nI'd argue that that'd be a good idea anyway, otherwise there's no way to\nwrap the invocation anywhere in the code. But that's an independent\nconsideration, as:\n\n> Does it work for you to just add pg_attribute_noreturn() to this\n> typedef, as in the attached?\n\ndoes fix the problem for me, cool.\n\nDo you not see a warning when compiling with optimizations enabled?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 Apr 2020 14:16:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "On Thu, Apr 23, 2020 at 5:16 PM Andres Freund <andres@anarazel.de> wrote:\n> Do you not see a warning when compiling with optimizations enabled?\n\nNo, I don't. I tried it with -O{0,1,2,3} and I always use -Wall\n-Werror. No warnings.\n\n[rhaas pgsql]$ clang -v\nclang version 5.0.2 (tags/RELEASE_502/final)\nTarget: x86_64-apple-darwin19.4.0\nThread model: posix\nInstalledDir: /opt/local/libexec/llvm-5.0/bin\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 24 Apr 2020 08:03:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: backup manifests"
},
{
"msg_contents": "\n\nOn 2020/04/15 5:33, Andrew Dunstan wrote:\n> \n> On 4/14/20 4:09 PM, Alvaro Herrera wrote:\n>> On 2020-Apr-14, Andrew Dunstan wrote:\n>>\n>>> OK, but I think if we're putting a timestamp string in ISO-8601 format\n>>> in the manifest it should be in UTC / Zulu time, precisely to avoid\n>>> these issues. If that's too much trouble then yes an epoch time will\n>>> probably do.\n>> The timestamp is always specified and always UTC (except the code calls\n>> it GMT).\n>>\n>> + /*\n>> + * Convert last modification time to a string and append it to the\n>> + * manifest. Since it's not clear what time zone to use and since time\n>> + * zone definitions can change, possibly causing confusion, use GMT\n>> + * always.\n>> + */\n>> + appendStringInfoString(&buf, \"\\\"Last-Modified\\\": \\\"\");\n>> + enlargeStringInfo(&buf, 128);\n>> + buf.len += pg_strftime(&buf.data[buf.len], 128, \"%Y-%m-%d %H:%M:%S %Z\",\n>> + pg_gmtime(&mtime));\n>> + appendStringInfoString(&buf, \"\\\"\");\n>>\n>> I was merely saying that it's trivial to make this iso-8601 compliant as\n>>\n>> buf.len += pg_strftime(&buf.data[buf.len], 128, \"%Y-%m-%dT%H:%M:%SZ\",\n>>\n>> ie. omit the \"GMT\" string and replace it with a literal Z, and remove\n>> the space and replace it with a T.\n\nI have one question related to this; Why don't we use log_timezone,\nlike backup_label? log_timezone is used for \"START TIME\" field in\nbackup_label. 
Sorry if this was already discussed.\n\n\t\t/* Use the log timezone here, not the session timezone */\n\t\tstamp_time = (pg_time_t) time(NULL);\n\t\tpg_strftime(strfbuf, sizeof(strfbuf),\n\t\t\t\t\t\"%Y-%m-%d %H:%M:%S %Z\",\n\t\t\t\t\tpg_localtime(&stamp_time, log_timezone));\n\nOTOH, *if* we want to use the same timezone for backup-related files because\nbackup can be used in different environements and timezone setting\nmay be different there or for other reasons, backup_label also should use\nGMT or something for the sake of consistency?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 15 May 2020 15:10:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On Fri, May 15, 2020 at 2:10 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> I have one question related to this; Why don't we use log_timezone,\n> like backup_label? log_timezone is used for \"START TIME\" field in\n> backup_label. Sorry if this was already discussed.\n>\n> /* Use the log timezone here, not the session timezone */\n> stamp_time = (pg_time_t) time(NULL);\n> pg_strftime(strfbuf, sizeof(strfbuf),\n> \"%Y-%m-%d %H:%M:%S %Z\",\n> pg_localtime(&stamp_time, log_timezone));\n>\n> OTOH, *if* we want to use the same timezone for backup-related files because\n> backup can be used in different environements and timezone setting\n> may be different there or for other reasons, backup_label also should use\n> GMT or something for the sake of consistency?\n\nIt's a good question. My inclination was to think that GMT would be\nthe clearest thing, but I also didn't realize that the result would\nthus be inconsistent with backup_label. Not sure what's best here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 15 May 2020 09:14:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It's a good question. My inclination was to think that GMT would be\n> the clearest thing, but I also didn't realize that the result would\n> thus be inconsistent with backup_label. Not sure what's best here.\n\nI vote for following the backup_label precedent; that's stood for quite\nsome years now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 May 2020 09:34:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 5/15/20 9:34 AM, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> It's a good question. My inclination was to think that GMT would be\n>> the clearest thing, but I also didn't realize that the result would\n>> thus be inconsistent with backup_label. Not sure what's best here.\n> \n> I vote for following the backup_label precedent; that's stood for quite\n> some years now.\n\nI'd rather keep it GMT. The timestamps in the backup label are purely \ninformational, but the timestamps in the manifest are useful, e.g. to \nset the mtime on a restore to the original value.\n\nForcing the user to do timezone conversions is prone to error. Some \nlanguages, like C, simply aren't good at it.\n\nOf course, my actual preference is to use epoch time which is easy to \nwork with and eliminates the possibility of conversion errors. It is \nalso compact.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 15 May 2020 10:06:52 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 5/15/20 9:34 AM, Tom Lane wrote:\n>> I vote for following the backup_label precedent; that's stood for quite\n>> some years now.\n\n> Of course, my actual preference is to use epoch time which is easy to \n> work with and eliminates the possibility of conversion errors. It is \n> also compact.\n\nWell, if we did that then it'd be sufficiently different from the backup\nlabel as to remove any risk of confusion. But \"easy to work with\" is in\nthe eye of the beholder; do we really want a format that's basically\nunreadable to the naked eye?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 May 2020 10:17:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
},
{
"msg_contents": "On 5/15/20 10:17 AM, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> On 5/15/20 9:34 AM, Tom Lane wrote:\n>>> I vote for following the backup_label precedent; that's stood for quite\n>>> some years now.\n> \n>> Of course, my actual preference is to use epoch time which is easy to\n>> work with and eliminates the possibility of conversion errors. It is\n>> also compact.\n> \n> Well, if we did that then it'd be sufficiently different from the backup\n> label as to remove any risk of confusion. But \"easy to work with\" is in\n> the eye of the beholder; do we really want a format that's basically\n> unreadable to the naked eye?\n\nWell, I lost this argument before so it seems I'm in the minority on \neasy-to-use. We use epoch time in the pgBackRest manifests which has \nbeen easy to deal with in both C and Perl, so experience tells me it \nreally is easy, at least for programs.\n\nThe manifest (to me, at least) is generally intended to be \nmachine-processed. For instance, it contains checksums which are not all \nthat useful unless they are checked programmatically -- they can't just \nbe eye-balled.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 15 May 2020 11:05:02 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: documenting the backup manifest file format"
}
] |
[
{
"msg_contents": "In my experience it's not immediately obvious (even after reading the\ndocumentation) the implications of how concurrent index builds manage\ntransactions with respect to multiple concurrent index builds in\nflight at the same time.\n\nSpecifically, as I understand multiple concurrent index builds running\nat the same time will all return at the same time as the longest\nrunning one.\n\nI've attached a small patch to call this caveat out specifically in\nthe documentation. I think the description in the patch is accurate,\nbut please let me know if there's some intricacies around how the\nvarious stages might change the results.\n\nJames Coleman",
"msg_date": "Wed, 18 Sep 2019 13:51:00 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "[DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 01:51:00PM -0400, James Coleman wrote:\n> In my experience it's not immediately obvious (even after reading the\n> documentation) the implications of how concurrent index builds manage\n> transactions with respect to multiple concurrent index builds in\n> flight at the same time.\n> \n> Specifically, as I understand multiple concurrent index builds running\n> at the same time will all return at the same time as the longest\n> running one.\n> \n> I've attached a small patch to call this caveat out specifically in\n> the documentation. I think the description in the patch is accurate,\n> but please let me know if there's some intricacies around how the\n> various stages might change the results.\n\nThe CREATE INDEX docs already say:\n\n In a concurrent index build, the index is actually entered into\n the system catalogs in one transaction, then two table scans occur in\n two more transactions. Before each table scan, the index build must\n wait for existing transactions that have modified the table to terminate.\n After the second scan, the index build must wait for any transactions\n--> that have a snapshot (see <xref linkend=\"mvcc\"/>) predating the second\n--> scan to terminate. Then finally the index can be marked ready for use,\n\nSo, having multiple concurrent index scans is just a special case of\nhaving to \"wait for any transactions that have a snapshot\", no? 
I am\nnot sure adding a doc mention of other index builds really is helpful.\n\n---------------------------------------------------------------------------\n\n> commit 9e28e704820eebb81ff94c1c3cbfb7db087b2c45\n> Author: James Coleman <jtc331@gmail.com>\n> Date: Wed Sep 18 13:36:22 2019 -0400\n> \n> Document concurrent indexes waiting on each other\n> \n> It's not immediately obvious that because concurrent index building\n> waits on previously running transactions to complete, running multiple\n> concurrent index builds at the same time will result in each of them\n> taking as long to return as the longest takes, so, document this caveat.\n> \n> diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml\n> index 629a31ef79..35f15abb0e 100644\n> --- a/doc/src/sgml/ref/create_index.sgml\n> +++ b/doc/src/sgml/ref/create_index.sgml\n> @@ -616,6 +616,13 @@ Indexes:\n> cannot.\n> </para>\n> \n> + <para>\n> + Because the second table scan must wait for any transactions having a\n> + snapshot preceding the start of that scan to finish before completing the\n> + scan, concurrent index builds on multiple tables at the same time will\n> + not return on any one table until all have completed.\n> + </para>\n> +\n> <para>\n> Concurrent builds for indexes on partitioned tables are currently not\n> supported. However, you may concurrently build the index on each\n\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 28 Sep 2019 12:18:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2019-Sep-28, Bruce Momjian wrote:\n\n> The CREATE INDEX docs already say:\n> \n> In a concurrent index build, the index is actually entered into\n> the system catalogs in one transaction, then two table scans occur in\n> two more transactions. Before each table scan, the index build must\n> wait for existing transactions that have modified the table to terminate.\n> After the second scan, the index build must wait for any transactions\n> --> that have a snapshot (see <xref linkend=\"mvcc\"/>) predating the second\n> --> scan to terminate. Then finally the index can be marked ready for use,\n> \n> So, having multiple concurrent index scans is just a special case of\n> having to \"wait for any transactions that have a snapshot\", no? I am\n> not sure adding a doc mention of other index builds really is helpful.\n\nI always thought that create index concurrently was prevented from\nrunning concurrently in a table by the ShareUpdateExclusive lock that's\nheld during the operation.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 28 Sep 2019 22:22:28 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Sat, Sep 28, 2019 at 9:22 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Sep-28, Bruce Momjian wrote:\n>\n> > The CREATE INDEX docs already say:\n> >\n> > In a concurrent index build, the index is actually entered into\n> > the system catalogs in one transaction, then two table scans occur in\n> > two more transactions. Before each table scan, the index build must\n> > wait for existing transactions that have modified the table to terminate.\n> > After the second scan, the index build must wait for any transactions\n> > --> that have a snapshot (see <xref linkend=\"mvcc\"/>) predating the second\n> > --> scan to terminate. Then finally the index can be marked ready for use,\n> >\n> > So, having multiple concurrent index scans is just a special case of\n> > having to \"wait for any transactions that have a snapshot\", no? I am\n> > not sure adding a doc mention of other index builds really is helpful.\n>\n> I always thought that create index concurrently was prevented from\n> running concurrently in a table by the ShareUpdateExclusive lock that's\n> held during the operation.\n\nYou mean multiple CICs on a single table at the same time? Yes, that\n(unfortunately) isn't possible, but I'm concerned in the patch with\nthe fact that CIC on table X blocks CIC on table Y.\n\nJames\n\n\n",
"msg_date": "Sat, 28 Sep 2019 21:54:48 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Sat, Sep 28, 2019 at 09:54:48PM -0400, James Coleman wrote:\n> On Sat, Sep 28, 2019 at 9:22 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2019-Sep-28, Bruce Momjian wrote:\n> >\n> > > The CREATE INDEX docs already say:\n> > >\n> > > In a concurrent index build, the index is actually entered into\n> > > the system catalogs in one transaction, then two table scans occur in\n> > > two more transactions. Before each table scan, the index build must\n> > > wait for existing transactions that have modified the table to terminate.\n> > > After the second scan, the index build must wait for any transactions\n> > > --> that have a snapshot (see <xref linkend=\"mvcc\"/>) predating the second\n> > > --> scan to terminate. Then finally the index can be marked ready for use,\n> > >\n> > > So, having multiple concurrent index scans is just a special case of\n> > > having to \"wait for any transactions that have a snapshot\", no? I am\n> > > not sure adding a doc mention of other index builds really is helpful.\n> >\n> > I always thought that create index concurrently was prevented from\n> > running concurrently in a table by the ShareUpdateExclusive lock that's\n> > held during the operation.\n> \n> You mean multiple CICs on a single table at the same time? Yes, that\n> (unfortunately) isn't possible, but I'm concerned in the patch with\n> the fact that CIC on table X blocks CIC on table Y.\n\nI think any open transaction will block CIC, which is my point.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 28 Sep 2019 21:56:24 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Sat, Sep 28, 2019 at 9:56 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sat, Sep 28, 2019 at 09:54:48PM -0400, James Coleman wrote:\n> > On Sat, Sep 28, 2019 at 9:22 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > >\n> > > On 2019-Sep-28, Bruce Momjian wrote:\n> > >\n> > > > The CREATE INDEX docs already say:\n> > > >\n> > > > In a concurrent index build, the index is actually entered into\n> > > > the system catalogs in one transaction, then two table scans occur in\n> > > > two more transactions. Before each table scan, the index build must\n> > > > wait for existing transactions that have modified the table to terminate.\n> > > > After the second scan, the index build must wait for any transactions\n> > > > --> that have a snapshot (see <xref linkend=\"mvcc\"/>) predating the second\n> > > > --> scan to terminate. Then finally the index can be marked ready for use,\n> > > >\n> > > > So, having multiple concurrent index scans is just a special case of\n> > > > having to \"wait for any transactions that have a snapshot\", no? I am\n> > > > not sure adding a doc mention of other index builds really is helpful.\n\nWhile that may be technically true, as a co-worker of mine likes to\npoint out, being \"technically correct\" is the worst kind of correct.\n\nHere's what I mean:\n\nFirst, I believe the docs should aim to be as useful as possible to\neven those with more entry-level understanding of PostgreSQL. The fact\nthe paragraph you cite actually links to the entire chapter on\nconcurrency control in Postgres demonstrates that there's some\nnot-so-immediate stuff here to consider. 
For one: is it obvious to all\nusers that the transaction held by CIC (or even that all transactions)\nhas an open snapshot?\n\nSecond, this is a difference from a regular CREATE INDEX, and we\nalready call out as caveats differences between CREATE INDEX\nCONCURRENTLY and regular CREATE INDEX as I point out below re:\nAlvaro's comment.\n\nThird, related to the above point, many DDL commands only block DDL\nagainst the table being operated on. The fact that CIC here is\ndifferent is, in my opinion, a fairly surprising break from that\npattern, and as such likely to catch users off guard. I can attest\nthat this surprised at least one entire database team a while back :)\nincluding many people who've been operating Postgres at a large scale\nfor a long time.\n\nI believe caveats like this are worth calling out rather than\nexpecting users to have to understand the implementation details and\nwork out the implications on their own.\n\n> > > I always thought that create index concurrently was prevented from\n> > > running concurrently in a table by the ShareUpdateExclusive lock that's\n> > > held during the operation.\n> >\n> > You mean multiple CICs on a single table at the same time? Yes, that\n> > (unfortunately) isn't possible, but I'm concerned in the patch with\n> > the fact that CIC on table X blocks CIC on table Y.\n>\n> I think any open transaction will block CIC, which is my point.\n\nI read Alvaro as referring to the fact that the docs already call out\nthe following:\n\n> Regular index builds permit other regular index builds on the same table to occur simultaneously, but only one concurrent index build can occur on a table at a time.\n\nJames\n\n\n",
"msg_date": "Sat, 28 Sep 2019 22:08:21 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2019-Sep-28, James Coleman wrote:\n\n> I believe caveats like this are worth calling out rather than\n> expecting users to have to understand the implementation details an\n> work out the implications on their own.\n\nI agree.\n\n> I read Alvaro as referring to the fact that the docs already call out\n> the following:\n> \n> > Regular index builds permit other regular index builds on the same\n> > table to occur simultaneously, but only one concurrent index build\n> > can occur on a table at a time.\n\nYeah, that's what I was understanding.\n\nBTW I think there's an approach that could alleviate part of this\nproblem, at least some of the time: whenever CIC runs for an index\nthat's not on expression and not partial, we could set the\nPROC_IN_VACUUM flag. That would cause it to get ignored by other\nprocesses for snapshot purposes (including CIC itself), as well as by\nvacuum. I need to take some time to research the safety of this, but\nintuitively it seems safe.\n\nEven further, I think we could also do it for regular CREATE INDEX\n(under the same conditions) provided that it's not run in a transaction\nblock. But that requires even more research/proof.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 29 Sep 2019 12:27:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Sat, Sep 28, 2019 at 10:22:28PM -0300, Alvaro Herrera wrote:\n> I always thought that create index concurrently was prevented from\n> running concurrently in a table by the ShareUpdateExclusive lock that's\n> held during the operation.\n\nREINDEX CONCURRENTLY and CIC can deadlock while waiting for each other\nto finish after their validation phase, see:\nhttps://www.postgresql.org/message-id/20190507030756.GD1499@paquier.xyz\nhttps://www.postgresql.org/message-id/20190507032543.GH1499@paquier.xyz\n--\nMichael",
"msg_date": "Mon, 30 Sep 2019 10:24:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "I went ahead and registered this in the current only CF as\nhttps://commitfest.postgresql.org/27/2454/\n\nAlvaro: Would you like to be added as a reviewer?\n\nJames\n\n\n",
"msg_date": "Fri, 14 Feb 2020 16:09:30 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Sun, Sep 29, 2019 at 9:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Sep 28, 2019 at 10:22:28PM -0300, Alvaro Herrera wrote:\n> > I always thought that create index concurrently was prevented from\n> > running concurrently in a table by the ShareUpdateExclusive lock that's\n> > held during the operation.\n>\n> REINDEX CONCURRENTLY and CIC can deadlock while waiting for each other\n> to finish after their validation phase, see:\n> https://www.postgresql.org/message-id/20190507030756.GD1499@paquier.xyz\n> https://www.postgresql.org/message-id/20190507032543.GH1499@paquier.xyz\n\nMichael,\n\nThanks for the cross-link. Do you think this would be valuable to\ndocument at the same time? Or did you just want to ensure we were also\naware of this particular downfall? If the latter, I appreciate it,\nit's helpful info. If the latter, let me know, and I'll try to update\nthe patch.\n\nThanks,\nJames\n\n\n",
"msg_date": "Fri, 14 Feb 2020 16:10:35 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-18 13:51:00 -0400, James Coleman wrote:\n> In my experience it's not immediately obvious (even after reading the\n> documentation) the implications of how concurrent index builds manage\n> transactions with respect to multiple concurrent index builds in\n> flight at the same time.\n> \n> Specifically, as I understand multiple concurrent index builds running\n> at the same time will all return at the same time as the longest\n> running one.\n> \n> I've attached a small patch to call this caveat out specifically in\n> the documentation. I think the description in the patch is accurate,\n> but please let me know if there's some intricacies around how the\n> various stages might change the results.\n> \n> James Coleman\n\nI'd much rather see effort spent fixing this issue as far as it relates\nto concurrent CICs. For the snapshot waits we can add a procarray flag\n(alongside PROCARRAY_VACUUM_FLAG) indicating that the backend is\ndoing. Which WaitForOlderSnapshots() can then use to ignore those CICs,\nwhich is safe, because those transactions definitely don't insert into\nrelations targeted by CIC. The change to WaitForOlderSnapshots() would\njust be to pass the new flag to GetCurrentVirtualXIDs, I think.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Mar 2020 12:19:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 3:19 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-09-18 13:51:00 -0400, James Coleman wrote:\n> > In my experience it's not immediately obvious (even after reading the\n> > documentation) the implications of how concurrent index builds manage\n> > transactions with respect to multiple concurrent index builds in\n> > flight at the same time.\n> >\n> > Specifically, as I understand multiple concurrent index builds running\n> > at the same time will all return at the same time as the longest\n> > running one.\n> >\n> > I've attached a small patch to call this caveat out specifically in\n> > the documentation. I think the description in the patch is accurate,\n> > but please let me know if there's some intricacies around how the\n> > various stages might change the results.\n> >\n> > James Coleman\n>\n> I'd much rather see effort spent fixing this issue as far as it relates\n> to concurrent CICs. For the snapshot waits we can add a procarray flag\n> (alongside PROCARRAY_VACUUM_FLAG) indicating that the backend is\n> doing. Which WaitForOlderSnapshots() can then use to ignore those CICs,\n> which is safe, because those transactions definitely don't insert into\n> relations targeted by CIC. The change to WaitForOlderSnapshots() would\n> just be to pass the new flag to GetCurrentVirtualXIDs, I think.\n\nAlvaro: I think you had some ideas on this too; any chance you've know\nof a patch that anyone's got cooking?\n\nAndres: If we got this fixed in current PG would you be opposed to\ndocumenting the caveat in previous versions?\n\nThanks,\nJames\n\n\n",
"msg_date": "Wed, 25 Mar 2020 15:24:44 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2020-Mar-25, James Coleman wrote:\n\n> Alvaro: I think you had some ideas on this too; any chance you've know\n> of a patch that anyone's got cooking?\n\nI posted this in November\nhttps://postgr.es/m/20191101203310.GA12239@alvherre.pgsql but I didn't\nput time to go through the issues there. I don't know if my approach is\nexactly what Andres has in mind, but I was discouraged by the number of\ngotchas for which the optimization I propose has to be turned off.\n\nMaybe that preliminary patch can serve as a discussion starter, if\nnothing else.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Mar 2020 16:30:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-25 15:24:44 -0400, James Coleman wrote:\n> Andres: If we got this fixed in current PG would you be opposed to\n> documenting the caveat in previous versions?\n\nNot really. I'm just not confident it's going to be useful, given the\namount of details needed to be provided to really make sense of the\nissue (the earlier CIC phases don't wait for snapshots, but just\nrelation locks etc).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Mar 2020 12:51:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-25 16:30:10 -0300, Alvaro Herrera wrote:\n> I posted this in November\n> https://postgr.es/m/20191101203310.GA12239@alvherre.pgsql but I didn't\n> put time to go through the issues there.\n\nOh, missed that.\n\n\n> I don't know if my approach is exactly what Andres has in mind\n\nNot quite. I don't think it's generally correct for CIC to set\nPROC_IN_VACUUM. I'm doubtful it's the case even just for plain indexes -\nwe don't want rows to be pruned away from under us. I also think we'd\nwant to set such a flag during all of the CIC phases?\n\nWhat I was thinking of was a new flag, with a distinct value from\nPROC_IN_VACUUM. It'd currently just be specified in the\nGetCurrentVirtualXIDs() calls in WaitForOlderSnapshots(). That'd avoid\nneeding to wait for other CICs on different relations. Since CIC is not\npermitted on system tables, and CIC doesn't do DML on normal tables, it\nseems fairly obviously correct to exclude other CICs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Mar 2020 12:58:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2020-Mar-25, Andres Freund wrote:\n\n> > I don't know if my approach is exactly what Andres has in mind\n> \n> Not quite. I don't think it's generally correct for CIC to set\n> PROC_IN_VACUUM. I'm doubtful it's the case even just for plain indexes -\n> we don't want rows to be pruned away from under us. I also think we'd\n> want to set such a flag during all of the CIC phases?\n> \n> What I was thinking of was a new flag, with a distinct value from\n> PROC_IN_VACUUM. It'd currently just be specified in the\n> GetCurrentVirtualXIDs() calls in WaitForOlderSnapshots(). That'd avoid\n> needing to wait for other CICs on different relations. Since CIC is not\n> permitted on system tables, and CIC doesn't do DML on normal tables, it\n> seems fairly obviously correct to exclude other CICs.\n\nHmm, that sounds more promising.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Mar 2020 17:12:48 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 05:12:48PM -0300, Alvaro Herrera wrote:\n> Hmm, that sounds more promising.\n\nHaven't looked at that myself in details. But as I doubt that this\nwould be backpatched, wouldn't it be better to document the issue for\nnow?\n--\nMichael",
"msg_date": "Thu, 26 Mar 2020 15:52:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 3:58 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-03-25 16:30:10 -0300, Alvaro Herrera wrote:\n> > I posted this in November\n> > https://postgr.es/m/20191101203310.GA12239@alvherre.pgsql but I didn't\n> > put time to go through the issues there.\n>\n> Oh, missed that.\n>\n>\n> > I don't know if my approach is exactly what Andres has in mind\n>\n> Not quite. I don't think it's generally correct for CIC to set\n> PROC_IN_VACUUM. I'm doubtful it's the case even just for plain indexes -\n> we don't want rows to be pruned away from under us. I also think we'd\n> want to set such a flag during all of the CIC phases?\n>\n> What I was thinking of was a new flag, with a distinct value from\n> PROC_IN_VACUUM. It'd currently just be specified in the\n> GetCurrentVirtualXIDs() calls in WaitForOlderSnapshots(). That'd avoid\n> needing to wait for other CICs on different relations. Since CIC is not\n> permitted on system tables, and CIC doesn't do DML on normal tables, it\n> seems fairly obviously correct to exclude other CICs.\n\nThat would keep CIC from blocking other CICs, but it wouldn't solve\nthe problem of CIC blocking vacuum on unrelated tables, right? Perhaps\nthat's orthogonal though.\n\nJames\n\n\n",
"msg_date": "Wed, 15 Apr 2020 09:31:58 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "Hi,\n\nOn 2020-04-15 09:31:58 -0400, James Coleman wrote:\n> On Wed, Mar 25, 2020 at 3:58 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2020-03-25 16:30:10 -0300, Alvaro Herrera wrote:\n> > > I posted this in November\n> > > https://postgr.es/m/20191101203310.GA12239@alvherre.pgsql but I didn't\n> > > put time to go through the issues there.\n> >\n> > Oh, missed that.\n> >\n> >\n> > > I don't know if my approach is exactly what Andres has in mind\n> >\n> > Not quite. I don't think it's generally correct for CIC to set\n> > PROC_IN_VACUUM. I'm doubtful it's the case even just for plain indexes -\n> > we don't want rows to be pruned away from under us. I also think we'd\n> > want to set such a flag during all of the CIC phases?\n> >\n> > What I was thinking of was a new flag, with a distinct value from\n> > PROC_IN_VACUUM. It'd currently just be specified in the\n> > GetCurrentVirtualXIDs() calls in WaitForOlderSnapshots(). That'd avoid\n> > needing to wait for other CICs on different relations. Since CIC is not\n> > permitted on system tables, and CIC doesn't do DML on normal tables, it\n> > seems fairly obviously correct to exclude other CICs.\n> \n> That would keep CIC from blocking other CICs, but it wouldn't solve\n> the problem of CIC blocking vacuum on unrelated tables, right? Perhaps\n> that's orthogonal though.\n\nI am not sure what blocking you are referring to here? CIC shouldn't\nblock vacuum on other tables from running? Or do you just mean that\nvacuum will not be able to remove some rows due to the snapshot from the\nCIC? That'd be an orthogonal problem, yes.\n\nIf it's about the xmin horizon for vacuum: I think we could probably\navoid that using the same flag. As vacuum cannot be run against a table\nthat has a CIC running (although it'd theoretically be possible to allow\nthat), it should be safe to ignore PROC_IN_CIC backends in vacuum's\nGetOldestXmin() call. 
That might not be true for system relations, but\nwe don't allow CIC on those.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Apr 2020 15:31:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Apr 15, 2020 at 6:31 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-04-15 09:31:58 -0400, James Coleman wrote:\n> > On Wed, Mar 25, 2020 at 3:58 PM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2020-03-25 16:30:10 -0300, Alvaro Herrera wrote:\n> > > > I posted this in November\n> > > > https://postgr.es/m/20191101203310.GA12239@alvherre.pgsql but I didn't\n> > > > put time to go through the issues there.\n> > >\n> > > Oh, missed that.\n> > >\n> > >\n> > > > I don't know if my approach is exactly what Andres has in mind\n> > >\n> > > Not quite. I don't think it's generally correct for CIC to set\n> > > PROC_IN_VACUUM. I'm doubtful it's the case even just for plain indexes -\n> > > we don't want rows to be pruned away from under us. I also think we'd\n> > > want to set such a flag during all of the CIC phases?\n> > >\n> > > What I was thinking of was a new flag, with a distinct value from\n> > > PROC_IN_VACUUM. It'd currently just be specified in the\n> > > GetCurrentVirtualXIDs() calls in WaitForOlderSnapshots(). That'd avoid\n> > > needing to wait for other CICs on different relations. Since CIC is not\n> > > permitted on system tables, and CIC doesn't do DML on normal tables, it\n> > > seems fairly obviously correct to exclude other CICs.\n> >\n> > That would keep CIC from blocking other CICs, but it wouldn't solve\n> > the problem of CIC blocking vacuum on unrelated tables, right? Perhaps\n> > that's orthogonal though.\n>\n> I am not sure what blocking you are referring to here? CIC shouldn't\n> block vacuum on other tables from running? Or do you just mean that\n> vacuum will not be able to remove some rows due to the snapshot from the\n> CIC? That'd be an orthogonal problem, yes.\n>\n> If it's about the xmin horizon for vacuum: I think we could probably\n> avoid that using the same flag. 
As vacuum cannot be run against a table\n> that has a CIC running (although it'd theoretically be possible to allow\n> that), it should be safe to ignore PROC_IN_CIC backends in vacuum's\n> GetOldestXmin() call. That might not be true for system relations, but\n> we don't allow CIC on those.\n\nYeah, I mean that if I have a CIC running on table X then vacuum can't\nremove dead tuples (from after the CIC's snapshot) on table Y.\n\nThat's a pretty significant danger, given the combination of:\n1. Index builds on very large tables can take many days, and\n2. The well understood problems of high update tables with dead tuples\nand poor plans.\n\nI've previously discussed this with other hackers and the reasoning\nthey'd understood was that we couldn't always safely ignore\nPROC_IN_CIC backends in the vacuum's oldest xmin call because of\nfunction indexes, and the fact that (despite clear recommendations to\nthe contrary), there's nothing actually preventing someone from adding\na function index on table X that queries table Y.\n\nI'm not sure I buy that we should care about people doing something\nclearly so dangerous, but...I grant that it'd be nice not to cause new\ncrashes.\n\nJames\n\n\n",
"msg_date": "Wed, 15 Apr 2020 21:44:48 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "Hi,\n\nOn 2020-04-15 21:44:48 -0400, James Coleman wrote:\n> On Wed, Apr 15, 2020 at 6:31 PM Andres Freund <andres@anarazel.de> wrote:\n> > If it's about the xmin horizon for vacuum: I think we could probably\n> > avoid that using the same flag. As vacuum cannot be run against a table\n> > that has a CIC running (although it'd theoretically be possible to allow\n> > that), it should be safe to ignore PROC_IN_CIC backends in vacuum's\n> > GetOldestXmin() call. That might not be true for system relations, but\n> > we don't allow CIC on those.\n>\n> Yeah, I mean that if I have a CIC running on table X then vacuum can't\n> remove dead tuples (from after the CIC's snapshot) on table Y.\n\nFor me \"blocking\" evokes waiting for a lock, which is why I thought\nyou'd not mean that issue.\n\n\n> That's a pretty significant danger, given the combination of:\n> 1. Index builds on very large tables can take many days, and\n\nWe at least don't hold a single snapshot over the multiple phases...\n\n\n> 2. The well understood problems of high update tables with dead tuples\n> and poor plans.\n\nWhich specific problem are you referring to? The planner probing the end\nof the index for values outside of the histogram? 
I'd hope\n3ca930fc39ccf987c1c22fd04a1e7463b5dd0dfd improved the situation there a\nbit?\n\n\n> > [description why we could ignore CIC for vacuum horizon on other tables ]\n\n> I've previously discussed this with other hackers and the reasoning\n> they'd understood was that we couldn't always safely ignore\n> PROC_IN_CIC backends in the vacuum's oldest xmin call because of\n> function indexes, and the fact that (despite clear recommendations to\n> the contrary), there's nothing actually preventing someone from adding\n> a function index on table X that queries table Y.\n\nWell, even if we consider this an actual problem, we could still use\nPROC_IN_CIC for non-expression non-partial indexes (index operators\nthemselves better ensure this isn't a problem, or they're ridiculously\nbroken already - they can get called during vacuum).\n\nEven when expressions are involved, I don't think that necessarily would\nhave to mean that we need to use the same snapshot to run expressions in\nfor the whole scan. So we could occasionally take a new snapshot for the\npurpose of computing expressions.\n\nThe hard part presumably would be that we'd need to advertise one xmin\nfor the expression snapshot to protect tuples potentially accessed from\nbeing removed, but at the same time we also need to advertise the xmin\nof the snapshot used by CIC, to avoid HOT pruning in other sessions from\nremoving tuple versions from the table the index is being created\non.\n\nThere's not really infrastructure for doing so. I think we'd basically\nhave to start publicizing multiple xmin values (as long as PGXACT->xmin\nis <= new xmin for expressions, only GetOldestXmin() would need to care,\nand it's not that performance critical). Not pretty.\n\n\n> I'm not sure I buy that we should care about people doing something\n> clearly so dangerous, but...I grant that it'd be nice not to cause new\n> crashes.\n\nI don't think it's just dangerous expressions that would be\naffected. 
Normal expression indexes need to be able to do syscache\nlookups etc, and they can't safely do so if tuple versions can be\nremoved in the middle of a scan. We could avoid that by not ignoring\nPROC_IN_CIC backend in GetOldestXmin() calls for catalog tables (yuck).\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Thu, 16 Apr 2020 15:12:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Thu, Apr 16, 2020 at 6:12 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-04-15 21:44:48 -0400, James Coleman wrote:\n> > On Wed, Apr 15, 2020 at 6:31 PM Andres Freund <andres@anarazel.de> wrote:\n> > > If it's about the xmin horizon for vacuum: I think we could probably\n> > > avoid that using the same flag. As vacuum cannot be run against a table\n> > > that has a CIC running (although it'd theoretically be possible to allow\n> > > that), it should be safe to ignore PROC_IN_CIC backends in vacuum's\n> > > GetOldestXmin() call. That might not be true for system relations, but\n> > > we don't allow CIC on those.\n> >\n> > Yeah, I mean that if I have a CIC running on table X then vacuum can't\n> > remove dead tuples (from after the CIC's snapshot) on table Y.\n>\n> For me \"blocking\" evokes waiting for a lock, which is why I thought\n> you'd not mean that issue.\n\nIt was sloppy choice of language on my part; for better or worse at\nwork we've taken to talking about \"blocking vacuum\" when that's really\nshorthand for \"blocking [or you'd prefer preventing] vacuuming dead\ntuples\".\n\n> > That's a pretty significant danger, given the combination of:\n> > 1. Index builds on very large tables can take many days, and\n>\n> We at least don't hold a single snapshot over the multiple phases...\n\nFor sure. And text sorting improvements have made this better also,\nstill, as you often point out re: xid size, databases are only getting\nlarger (and more TPS).\n\n> > 2. The well understood problems of high update tables with dead tuples\n> > and poor plans.\n>\n> Which specific problem are you referring to? The planner probing the end\n> of the index for values outside of the histogram? 
I'd hope\n> 3ca930fc39ccf987c1c22fd04a1e7463b5dd0dfd improved the situation there a\n> bit?\n\nYes, and other commits too, IIRC from the time we spent debugging\nexactly the scenario mentioned in that commit.\n\nBut by \"poor plans\" I don't mean specifically \"poor planning time\" but\nthat we can still end up choosing the \"wrong\" plan, right? And dead\ntuples can make an index scan be significantly worse than it would\notherwise be. Same for a seq scan: you can end up looking at millions\nof dead tuples in a nominally 500 row table.\n\n> > > [description why we could ignore CIC for vacuum horizon on other tables ]\n>\n> > I've previously discussed this with other hackers and the reasoning\n> > they'd understood way that we couldn't always safely ignore\n> > PROC_IN_CIC backends in the vacuum's oldest xmin call because of\n> > function indexes, and the fact that (despite clear recommendations to\n> > the contrary), there's nothing actually preventing someone from adding\n> > a function index on table X that queries table Y.\n>\n> Well, even if we consider this an actual problem, we could still use\n> PROC_IN_CIC for non-expression non-partial indexes (index operator\n> themselves better ensure this isn't a problem, or they're ridiculously\n> broken already - they can get called during vacuum).\n\nAgreed. It'd be unfortunate to have to limit it though.\n\n> Even when expressions are involved, I don't think that necessarily would\n> have to mean that we need to use the same snapshot to run expressions in\n> for the hole scan. 
So we could occasionally take a new snapshot for the\n> purpose of computing expressions.\n>\n> The hard part presumably would be that we'd need to advertise one xmin\n> for the expression snapshot to protect tuples potentially accessed from\n> being removed, but at the same time we also need to advertise the xmin\n> of the snapshot used by CIC, to avoid HOT pruning in other session from\n> removing tuple versions from the table the index is being created\n> on.\n>\n> There's not really infrastructure for doing so. I think we'd basically\n> have to start publicizing multiple xmin values (as long as PGXACT->xmin\n> is <= new xmin for expressions, only GetOldestXmin() would need to care,\n> and it's not that performance critical). Not pretty.\n\nIn other words, pretty invasive.\n\n> > I'm not sure I buy that we should care about people doing something\n> > clearly so dangerous, but...I grant that it'd be nice not to cause new\n> > crashes.\n>\n> I don't think it's just dangerous expressions that would be\n> affected. Normal expression indexes need to be able to do syscache\n> lookups etc, and they can't safely do so if tuple versions can be\n> removed in the middle of a scan. We could avoid that by not ignoring\n> PROC_IN_CIC backend in GetOldestXmin() calls for catalog tables (yuck).\n\nAt first glance this sounds a lot less invasive, but I also agree it's gross.\n\nJames\n\n\n",
"msg_date": "Thu, 16 Apr 2020 21:04:41 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\nJames,\r\n\r\nI'm on board with the point of pointing out explicitly the \"concurrent index builds on multiple tables at the same time will not return on any one table until all have completed\", with back-patching. I do not believe the new paragraph is necessary though. I'd suggest trying to weave it into the existing paragraph ending \"Even then, however, the index may not be immediately usable for queries: in the worst case, it cannot be used as long as transactions exist that predate the start of the index build.\" Adding \"Notably, \" in front of the existing sentence fragment above and tacking it onto the end probably suffices.\r\n\r\nI don't actually know whether this is true behavior though. Is it something our tests do, or could, demonstrate?\r\n\r\nIt is sorta weird to say \"one will not return until all have completed\", though, since usually people think return means completed. That whole paragraph is a bit unclear for the inexperienced DBA, in particular the notion of an index being marked ready to use but not actually usable.\r\n\r\nThat isn't really on this patch to fix though, and the clarity around concurrent CIC seems worthwhile to add, even if imprecise - IMO it doesn't make that whole section any less clear and points out what seems to be a unique dynamic. IOW I would send the simple fix (inline, not a new paragraph) to a committer. The bigger doc reworking or actual behavioral improvements shouldn't hold up such a simple improvement.\r\n\r\nDavid J.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Thu, 16 Jul 2020 23:33:23 +0000",
"msg_from": "David Johnston <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Thu, Jul 16, 2020 at 7:34 PM David Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: not tested\n> Implements feature: not tested\n> Spec compliant: not tested\n> Documentation: tested, passed\n>\n> James,\n>\n> I'm on board with the point of pointing out explicitly the \"concurrent index builds on multiple tables at the same time will not return on any one table until all have completed\", with back-patching. I do not believe the new paragraph is necessary though. I'd suggest trying to weave it into the existing paragraph ending \"Even then, however, the index may not be immediately usable for queries: in the worst case, it cannot be used as long as transactions exist that predate the start of the index build.\" Adding \"Notably, \" in front of the existing sentence fragment above and tacking it onto the end probably suffices.\n\nI'm not sure \"the index may not be immediately usable for queries\" is\nreally accurate/sufficient: it seems to imply the CREATE INDEX has\nreturned but for some reason the index isn't yet valid. The issue I'm\ntrying to describe here is that the CREATE INDEX query itself will not\nreturn until all preceding queries have completed *including*\nconcurrent index creations on unrelated tables.\n\n> I don't actually don't whether this is true behavior though. Is it something our tests do, or could, demonstrate?\n\nIt'd take tests that exercise parallelism, but it's pretty simple to\ndemonstrate (but you do have to catch the first index build in a scan\nphase, so you either need lots of data or a hack). 
Here's an example\nthat uses a bit of a hack to simulate a slow scan phase:\n\nSetup:\ncreate table items(i int);\ncreate table others(i int);\ncreate function slow_expr() returns text as $$ select pg_sleep(15);\nselect '5'; $$ language sql immutable;\ninsert into items(i) values (1), (2);\ninsert into others(i) values (1), (2);\n\nThen the following in order:\n1. In session A: create index concurrently on items((i::text || slow_expr()));\n2. In session B (at the same time): create index concurrently on others(i);\n\nYou'll notice that the 2nd command, which should be practically\ninstantaneous, waits on the first ~30s scan phase of (1) before it\nreturns. The same is true if after (2) completes you immediately run\nit again -- it waits on the second ~30s scan phase of (1).\n\nThat does reveal a bit of complexity though that the current\npatch doesn't address, which is that this can be phase dependent (and\nthat complexity gets a lot more non-obvious when there's real live\nactivity (particularly long-running transactions) in the system as\nwell).\n\nI've attached a new patch series with two items:\n1. A simpler (and I believe more correct) doc change for \"cic blocks\ncic on other tables\".\n2. A patch to document that all index builds can prevent tuples from\nbeing vacuumed away on other tables.\n\nIf it's preferable we could commit the first and discuss the second\nseparately, but since that limitation was also discussed up-thread, I\ndecided to include them both here for now.\n\nJames",
"msg_date": "Fri, 31 Jul 2020 14:51:09 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2020-Mar-25, Andres Freund wrote:\n\n> What I was thinking of was a new flag, with a distinct value from\n> PROC_IN_VACUUM. It'd currently just be specified in the\n> GetCurrentVirtualXIDs() calls in WaitForOlderSnapshots(). That'd avoid\n> needing to wait for other CICs on different relations. Since CIC is not\n> permitted on system tables, and CIC doesn't do DML on normal tables, it\n> seems fairly obviously correct to exclude other CICs.\n\nHmm, that does work, and seems a pretty small patch -- attached. Of\ncourse, some more commentary is necessary, but the theory of operation\nis as Andres says. (It does not solve the vacuuming problem I was\ndescribing in the other thread, only the spurious waiting that James is\ncomplaining about in this thread.)\n\nI'm going to try and poke holes on this now ... (Expression indexes with\nfalsely immutable functions?)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 4 Aug 2020 22:11:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2020-Aug-04, Alvaro Herrera wrote:\n\n> diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\n> index b20e2ad4f6..43c8ea3e31 100644\n> --- a/src/include/storage/proc.h\n> +++ b/src/include/storage/proc.h\n> @@ -53,6 +53,8 @@ struct XidCache\n> #define\t\tPROC_IS_AUTOVACUUM\t0x01\t/* is it an autovac worker? */\n> #define\t\tPROC_IN_VACUUM\t\t0x02\t/* currently running lazy vacuum */\n> #define\t\tPROC_IN_ANALYZE\t\t0x04\t/* currently running analyze */\n> +#define\t\tPROC_IN_CIC\t\t\t0x40\t/* currently running CREATE INDEX\n> +\t\t\t\t\t\t\t\t\t\t CONCURRENTLY */\n> #define\t\tPROC_VACUUM_FOR_WRAPAROUND\t0x08\t/* set by autovac only */\n> #define\t\tPROC_IN_LOGICAL_DECODING\t0x10\t/* currently doing logical\n> \t\t\t\t\t\t\t\t\t\t\t\t * decoding outside xact */\n\nHah, missed to add new bit to PROC_VACUUM_STATE_MASK here.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 4 Aug 2020 22:14:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "Back in the 8.3 cycle (2007) when the autovacuum launcher/worker split\nwas done, we annoyed people because it blocked DDL. That led to an\neffort to cancel autovac automatically when that was detected, by Simon\nRiggs.\nhttps://postgr.es/m/1191526327.4223.204.camel@ebony.site\nhttps://postgr.es/m/1192129897.4233.433.camel@ebony.site\n\nI was fixated on only cancelling when it was ANALYZE, to avoid losing\nany VACUUM work.\nhttps://postgr.es/m/20071025164150.GF23566@alvh.no-ip.org\nThat turned into some flags for PGPROC to detect whether a process was\nANALYZE, and cancel only those.\nhttps://postgr.es/m/20071024151328.GG6559@alvh.no-ip.org\nCommit:\nhttps://postgr.es/m/20071024205536.CB425754229@cvs.postgresql.org\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=745c1b2c2ab\n\nHowever, I was outvoted, so we do not limit cancellation to analyze.\nPatch and discussion: https://postgr.es/m/20071025164150.GF23566@alvh.no-ip.org\nCommit:\nhttps://postgr.es/m/20071026204510.AA02E754229@cvs.postgresql.org\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=acac68b2bca\n\n... which means the flag I had added two days earlier has never been\nused for anything. We've carried the flag forward to this day for\nalmost 13 years, dutifully turning it on and off ... but never checking\nit anywhere.\n\nI propose to remove it, as in the attached patch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 5 Aug 2020 19:55:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-05 19:55:49 -0400, Alvaro Herrera wrote:\n> Back in the 8.3 cycle (2007) when the autovacuum launcher/worker split\n> was done, we annoyed people because it blocked DDL. That led to an\n> effort to cancel autovac automatically when that was detected, by Simon\n> Riggs.\n> https://postgr.es/m/1191526327.4223.204.camel@ebony.site\n> https://postgr.es/m/1192129897.4233.433.camel@ebony.site\n> \n> I was fixated on only cancelling when it was ANALYZE, to avoid losing\n> any VACUUM work.\n> https://postgr.es/m/20071025164150.GF23566@alvh.no-ip.org\n> That turned into some flags for PGPROC to detect whether a process was\n> ANALYZE, and cancel only those.\n> https://postgr.es/m/20071024151328.GG6559@alvh.no-ip.org\n> Commit:\n> https://postgr.es/m/20071024205536.CB425754229@cvs.postgresql.org\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=745c1b2c2ab\n> \n> However, I was outvoted, so we do not limit cancellation to analyze.\n> Patch and discussion: https://postgr.es/m/20071025164150.GF23566@alvh.no-ip.org\n> Commit:\n> https://postgr.es/m/20071026204510.AA02E754229@cvs.postgresql.org\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=acac68b2bca\n> \n> ... which means the flag I had added two days earlier has never been\n> used for anything. We've carried the flag forward to this day for\n> almost 13 years, dutifully turning it on and off ... but never checking\n> it anywhere.\n> \n> I propose to remove it, as in the attached patch.\n\nI'm mildly against that, because I'd really like to start making use of\nthe flag. Not so much for cancellations, but to avoid the drastic impact\nanalyze has on bloat. 
In OLTP workloads with big tables, and without\ndisabled cost limiting for analyze (or slow IO), the snapshot that\nanalyze holds is often by far the transaction with the oldest xmin.\n\nIt's not entirely trivial to fix (just ignoring it could lead to\ndetoasting issues), but also not that hard.\n\nOnly mildly against because it'd not be hard to reintroduce once we need\nit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Aug 2020 18:07:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "On Thu, 6 Aug 2020 at 02:07, Andres Freund <andres@anarazel.de> wrote:\n\n>\n> On 2020-08-05 19:55:49 -0400, Alvaro Herrera wrote:\n> > ... which means the flag I had added two days earlier has never been\n> > used for anything. We've carried the flag forward to this day for\n> > almost 13 years, dutifully turning it on and off ... but never checking\n> > it anywhere.\n> >\n> > I propose to remove it, as in the attached patch.\n>\n> I'm mildly against that, because I'd really like to start making use of\n> the flag. Not so much for cancellations, but to avoid the drastic impact\n> analyze has on bloat. In OLTP workloads with big tables, and without\n> disabled cost limiting for analyze (or slow IO), the snapshot that\n> analyze holds is often by far the transaction with the oldest xmin.\n>\n> It's not entirely trivial to fix (just ignoring it could lead to\n> detoasting issues), but also not that.\n>\n> Only mildly against because it'd not be hard to reintroduce once we need\n> it.\n>\n\nGood points, both.\n\nThe most obvious way to avoid long analyze snapshots is to make the\nanalysis take multiple snapshots as it runs, rather than try to invent some\nclever way of ignoring the analyze snapshots (which as Alvaro points out,\nwe never did). All we need to do is to have an analyze snapshot last for at\nmost N rows, but keep scanning until we have the desired sample size. Doing\nthat would mean the analyze sample wouldn't come from a single snapshot,\nbut then who cares? There is no requirement for consistency - the sample\nwould be arguably *more* stable because it comes from multiple points in\ntime, not just one.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nMission Critical Databases",
"msg_date": "Thu, 6 Aug 2020 09:17:44 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "On Wed, Aug 5, 2020 at 9:07 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm mildly against that, because I'd really like to start making use of\n> the flag. Not so much for cancellations, but to avoid the drastic impact\n> analyze has on bloat. In OLTP workloads with big tables, and without\n> disabled cost limiting for analyze (or slow IO), the snapshot that\n> analyze holds is often by far the transaction with the oldest xmin.\n>\n> It's not entirely trivial to fix (just ignoring it could lead to\n> detoasting issues), but also not that.\n>\n> Only mildly against because it'd not be hard to reintroduce once we need\n> it.\n\nI think we should nuke it. It's trivial to reintroduce the flag if we\nneed it later, if and when somebody's willing to do the associated\nwork. In the meantime, it adds confusion.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 6 Aug 2020 14:25:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Aug 5, 2020 at 9:07 PM Andres Freund <andres@anarazel.de> wrote:\n>> Only mildly against because it'd not be hard to reintroduce once we need\n>> it.\n\n> I think we should nuke it. It's trivial to reintroduce the flag if we\n> need it later, if and when somebody's willing to do the associated\n> work. In the meantime, it adds confusion.\n\n+1 for removal. It's not clear to me that we'd ever put it back.\nLong-running ANALYZE snapshots are indeed a problem, but Simon's proposal\nupthread to just take a new one every so often seems like a much cleaner\nand simpler answer than having onlookers assume that it's safe to ignore\nANALYZE processes. (Given that ANALYZE can invoke user-defined functions,\nand can be invoked from inside user transactions, any such assumption\nseems horribly dangerous.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Aug 2020 14:37:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "On Thu, Aug 6, 2020 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> +1 for removal. It's not clear to me that we'd ever put it back.\n> Long-running ANALYZE snapshots are indeed a problem, but Simon's proposal\n> upthread to just take a new one every so often seems like a much cleaner\n> and simpler answer than having onlookers assume that it's safe to ignore\n> ANALYZE processes. (Given that ANALYZE can invoke user-defined functions,\n> and can be invoked from inside user transactions, any such assumption\n> seems horribly dangerous.\n\nNot to get too far from the proposal on the table of just removing\nsomething that's been unused for a really long time, which stands on\nits own merits, but if a particular ANALYZE doesn't invoke any\nuser-defined functions and isn't run inside a transaction, could we\nskip acquiring a snapshot altogether? That's an extremely common case,\nthough by no means universal.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 6 Aug 2020 14:48:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Not to get too far from the proposal on the table of just removing\n> something that's been unused for a really long time, which stands on\n> its own merits, but if a particular ANALYZE doesn't invoke any\n> user-defined functions and isn't run inside a transaction, could we\n> skip acquiring a snapshot altogether? That's an extremely common case,\n> though by no means universal.\n\nI'm inclined to think not.\n\n(1) Without a snapshot it's hard to make any non-bogus decisions about\nwhich tuples are live and which are dead. Admittedly, with Simon's\nproposal the final totals would be spongy anyhow, but at least the\nindividual decisions produce meaningful answers.\n\n(2) I'm pretty sure there are places in the system that assume that any\nreader of a table is using an MVCC snapshot. For instance, didn't you\nintroduce some such assumptions along with or just after getting rid of\nSnapshotNow for catalog scans?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Aug 2020 15:11:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "On Thu, Aug 6, 2020 at 3:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (1) Without a snapshot it's hard to make any non-bogus decisions about\n> which tuples are live and which are dead. Admittedly, with Simon's\n> proposal the final totals would be spongy anyhow, but at least the\n> individual decisions produce meaningful answers.\n\nI don't think I believe this. It's impossible to make *consistent*\ndecisions, but it's not difficult to make *non-bogus* decisions.\nHeapTupleSatisfiesVacuum() and HeapTupleSatisfiesUpdate() both make\nsuch decisions, and neither takes a snapshot argument.\n\n> (2) I'm pretty sure there are places in the system that assume that any\n> reader of a table is using an MVCC snapshot. For instance, didn't you\n> introduce some such assumptions along with or just after getting rid of\n> SnapshotNow for catalog scans?\n\nSnapshotSelf still exists and is still used, and IIRC, it has very\nsimilar semantics to the old SnapshotNow, so I don't think that we\nintroduced any really general assumptions of this sort. I think the\nimportant part of those changes was that all the code that had\npreviously used SnapshotNow to examine system catalog tuples for DDL\npurposes and catcache lookups and so forth started using an MVCC scan,\nwhich removed one (of many) impediments to concurrent DDL. I think the\nfact that we removed SnapshotNow outright rather than just ceasing to\nuse it for that purpose was mostly so that nobody would accidentally\nreintroduce code that used it for the sorts of purposes for which it\nhad been used previously, and secondarily for code cleanliness.\nThere's nothing wrong with it fundamentally AFAIK.\n\nIt's worth mentioning, I think, that the main problem with SnapshotNow\nwas that it provided no particular stability. If you did an index scan\nunder SnapshotNow you might find two copies or no copies of a row\nbeing concurrently updated, rather than exactly one. 
And that in turn\ncould cause problems like failure to build a relcache entry. Now, how\nimportant is stability to ANALYZE? If you *either* retake your MVCC\nsnapshots periodically as you re-scan the table *or* use a non-MVCC\nsnapshot for the scan, you can get those same kinds of artifacts: you\nmight see two copies of a just-updated row, or none. Maybe this would\nactually *break* something - e.g. could there be code that would get\nconfused if we sample multiple rows for the same value in a column\nthat has a UNIQUE index? But I think mostly the consequences would be\nthat you might get somewhat different results from the statistics.\n\nIt's not clear to me that it would even be correct to categorize those\nsomewhat-different results as \"less accurate.\" Tuples that are\ninvisible to a query often have performance consequences very similar\nto visible tuples, in terms of the query run time.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 6 Aug 2020 16:22:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-06 14:48:52 -0400, Robert Haas wrote:\n> On Thu, Aug 6, 2020 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > +1 for removal. It's not clear to me that we'd ever put it back.\n> > Long-running ANALYZE snapshots are indeed a problem, but Simon's proposal\n> > upthread to just take a new one every so often seems like a much cleaner\n> > and simpler answer than having onlookers assume that it's safe to ignore\n> > ANALYZE processes. (Given that ANALYZE can invoke user-defined functions,\n> > and can be invoked from inside user transactions, any such assumption\n> > seems horribly dangerous.\n> \n> Not to get too far from the proposal on the table of just removing\n> something that's been unused for a really long time, which stands on\n> its own merits, but if a particular ANALYZE doesn't invoke any\n> user-defined functions and isn't run inside a transaction, could we\n> skip acquiring a snapshot altogether? That's an extremely common case,\n> though by no means universal.\n\nI don't think so, at least not in very common situations. E.g. as long\nas there's a toast table we need to hold a snapshot to ensure that we\ndon't get failures looking up toasted datums. IIRC there were some other\nsimilar issues that I can't quite recall right now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Aug 2020 14:26:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... how\n> important is stability to ANALYZE? If you *either* retake your MVCC\n> snapshots periodically as you re-scan the table *or* use a non-MVCC\n> snapshot for the scan, you can get those same kinds of artifacts: you\n> might see two copies of a just-updated row, or none. Maybe this would\n> actually *break* something - e.g. could there be code that would get\n> confused if we sample multiple rows for the same value in a column\n> that has a UNIQUE index? But I think mostly the consequences would be\n> that you might get somewhat different results from the statistics.\n\nYeah, that's an excellent point. I can imagine somebody complaining\n\"this query clearly matches a unique index, why is the planner estimating\nmultiple rows out?\". But most of the time it wouldn't matter much.\n(And I think you can get cases like that anyway today.)\n\n> It's not clear to me that it would even be correct to categorize those\n> somewhat-different results as \"less accurate.\"\n\nEstimating two rows where the correct answer is one row is clearly\n\"less accurate\". But I suspect you'd have to be quite unlucky to\nget such a result in practice from Simon's proposal, as long as we\nweren't super-aggressive about changing ANALYZE's snapshot a lot.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Aug 2020 17:35:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-06 16:22:23 -0400, Robert Haas wrote:\n> On Thu, Aug 6, 2020 at 3:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > (1) Without a snapshot it's hard to make any non-bogus decisions about\n> > which tuples are live and which are dead. Admittedly, with Simon's\n> > proposal the final totals would be spongy anyhow, but at least the\n> > individual decisions produce meaningful answers.\n> \n> I don't think I believe this. It's impossible to make *consistent*\n> decisions, but it's not difficult to make *non-bogus* decisions.\n> HeapTupleSatisfiesVacuum() and HeapTupleSatifiesUpdate() both make\n> such decisions, and neither takes a snapshot argument.\n\nYea, I don't think that's a big problem for the main table. As I just\nmentioned in an email a few minutes ago, toast is a bit of a different\ntopic.\n\nIn fact using conceptually like a new snapshot for each sample tuple\nactually seems like it'd be somewhat of an improvement over using a\nsingle snapshot. Given that it's a sample it's not like have very\nprecise expectations of the precise sample, and publishing one that\nsolely consists of pretty old rows by the time we're done doesn't seem\nlike it's a meaningful improvement. I guess there's some danger of\ndistinctness estimates getting worse, by seeing multiple versions of the\nsame tuple multiple times - but they're notoriously inaccurate already,\ndon't think this changes much.\n\n\n> > (2) I'm pretty sure there are places in the system that assume that any\n> > reader of a table is using an MVCC snapshot. For instance, didn't you\n> > introduce some such assumptions along with or just after getting rid of\n> > SnapshotNow for catalog scans?\n> \n> SnapshotSelf still exists and is still used, and IIRC, it has very\n> similar semantics to the old SnapshotNow, so I don't think that we\n> introduced any really general assumptions of this sort. 
I think the\n> important part of those changes was that all the code that had\n> previously used SnapshotNow to examine system catalog tuples for DDL\n> purposes and catcache lookups and so forth started using an MVCC scan,\n> which removed one (of many) impediments to concurrent DDL. I think the\n> fact that we removed SnapshotNow outright rather than just ceasing to\n> use it for that purpose was mostly so that nobody would accidentally\n> reintroduce code that used it for the sorts of purposes for which it\n> had been used previously, and secondarily for code cleanliness.\n> There's nothing wrong with it fundamentally AFAIK.\n\nSome preaching to the choir:\n\nIDK, there's not really much it (along with Self, Any, ...) can safely\nbe used for, unless you have pretty heavyweight additional locking, or\nlook explicitly at exactly one tuple version. Except that it's probably\nunnecessary, and that there's some disaster recovery benefits, I'd be in\nfavor of prohibiting most snapshot types for [sys]table scans.\n\n\nI'm doubtful that using the term \"snapshot\" for any of these is a good\nchoice, and I don't think there's benefit in actually going through the\nsnapshot APIs. Especially not when, like *Dirty, they abuse fields\ninside SnapshotData to return data that can't be returned through the\nnormal API. It'd probably be better to have more explicit APIs for\nthese, rather than going through snapshot.\n\n\n> It's not clear to me that it would even be correct to categorize those\n> somewhat-different results as \"less accurate.\" Tuples that are\n> invisible to a query often have performance consequences very similar\n> to visible tuples, in terms of the query run time.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Aug 2020 14:45:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> In fact using conceptually like a new snapshot for each sample tuple\n> actually seems like it'd be somewhat of an improvement over using a\n> single snapshot.\n\nDunno, that feels like a fairly bad idea to me. It seems like it would\noveremphasize the behavior of whatever queries happened to be running\nconcurrently with the ANALYZE. I do follow the argument that using a\nsingle snapshot for the whole ANALYZE overemphasizes a single instant\nin time, but I don't think that leads to the conclusion that we shouldn't\nuse a snapshot at all.\n\nAnother angle that would be worth considering, aside from the issue\nof whether the sample used for pg_statistic becomes more or less\nrepresentative, is what impact all this would have on the tuple count\nestimates that go to the stats collector and pg_class.reltuples.\nRight now, we don't have a great story at all on how the stats collector's\ncount is affected by combining VACUUM/ANALYZE table-wide counts with\nthe incremental deltas reported by transactions happening concurrently\nwith VACUUM/ANALYZE. Would changing this behavior make that better,\nor worse, or about the same?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Aug 2020 18:02:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "On Thu, 6 Aug 2020 at 22:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > ... how\n> > important is stability to ANALYZE? If you *either* retake your MVCC\n> > snapshots periodically as you re-scan the table *or* use a non-MVCC\n> > snapshot for the scan, you can get those same kinds of artifacts: you\n> > might see two copies of a just-updated row, or none. Maybe this would\n> > actually *break* something - e.g. could there be code that would get\n> > confused if we sample multiple rows for the same value in a column\n> > that has a UNIQUE index? But I think mostly the consequences would be\n> > that you might get somewhat different results from the statistics.\n>\n> Yeah, that's an excellent point. I can imagine somebody complaining\n> \"this query clearly matches a unique index, why is the planner estimating\n> multiple rows out?\". But most of the time it wouldn't matter much.\n> (And I think you can get cases like that anyway today.)\n>\n> > It's not clear to me that it would even be correct to categorize those\n> > somewhat-different results as \"less accurate.\"\n>\n> Estimating two rows where the correct answer is one row is clearly\n> \"less accurate\". But I suspect you'd have to be quite unlucky to\n> get such a result in practice from Simon's proposal, as long as we\n> weren't super-aggressive about changing ANALYZE's snapshot a lot.\n>\n\nSeems like we're agreed we can use more than one snapshot, the only\ndiscussion is \"how many?\"\n\nThe more you take the more weirdness you will see, so adopting an approach\nof one-snapshot-per-row seems like the worst case for accuracy, even if it\ndoes make analyze faster.\n\n(If we do want to speed up ANALYZE, we should use the system block sampling\napproach, but the argument against that is less independence of rows.)\n\nKeeping the discussion on reducing the impact of bernoulli sampled analyze, I\nwas imagining we would do one snapshot for each block of rows with default\nstatistics_target, so that default behavior would be unaffected. Larger\nsettings would be chunked according to the default, so\nstats_target=10k(max) would take a 10000/100 = 100 snapshots. That approach\nallows people to vary that using an existing parameter if needed.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nMission Critical Databases",
"msg_date": "Fri, 7 Aug 2020 05:54:19 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "On Thu, Aug 6, 2020 at 5:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > It's not clear to me that it would even be correct to categorize those\n> > somewhat-different results as \"less accurate.\"\n>\n> Estimating two rows where the correct answer is one row is clearly\n> \"less accurate\" [ than estimating one row ].\n\nThat's a tautology, so I can't argue with it as far as it goes.\nThinking about it more, there are really two ways to think about an\nestimated row count.\n\nOn the one hand, if you think of the row count estimate as the number\nof rows that are going to pop out of a node, then it's always right to\nthink of a unique index as limiting the number of occurrences of a\ngiven value to 1. But, if you think of the row count estimate as a way\nof estimating the amount of work that the node has to do to produce\nthat output, then it isn't.\n\nIf a table has a lot of inserts and deletes, or a lot of updates,\nindex scans might have to do a lot of extra work chasing down index\npointers to tuples that end up being invisible to our scan. The scan\nmay not have any filter quals at all, and even if it does, they are\nlikely cheap to evaluate compared to the cost of finding and locking\nbuffers and checking visibility, so the dominant cost of the scan is\nreally based on the total number of rows that are present, not the\nnumber that are visible. Ideally, the presence of those rows would\naffect the cost estimate for the node in a way very similar to\nexpecting to find more rows. At the same time, it doesn't work to just\nbump up the row count estimate for the node, because then you'll think\nmore rows will be output, which might cause poor planning decisions at\nhigher levels.\n\nIt doesn't seem easy to get this 100% right. Tuple visibility can\nchange very quickly, much faster than the inter-ANALYZE interval. And\nsometimes tuples can be pruned away very quickly, too, and the index\npointers may be opportunistically removed very quickly, too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Aug 2020 14:03:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Thinking about it more, there are really two ways to think about an\n> estimated row count.\n\n> On the one hand, if you think of the row count estimate as the number\n> of rows that are going to pop out of a node, then it's always right to\n> think of a unique index as limiting the number of occurrences of a\n> given value to 1. But, if you think of the row count estimate as a way\n> of estimating the amount of work that the node has to do to produce\n> that output, then it isn't.\n\nThe planner intends its row counts to be interpreted in the first way.\nWe do have a rather indirect way of accounting for the cost of scanning\ndead tuples and such, which is that we scale scanning costs according\nto the measured physical size of the relation. That works better for\nI/O costs than it does for CPU costs, but it's not completely useless\nfor the latter. In any case, we'd certainly not want to increase the\nscan's row count estimate for that, because that would falsely inflate\nour estimate of how much work upper plan levels have to do. Whatever\nhappens at the scan level, the upper levels aren't going to see those\ndead tuples.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Aug 2020 14:41:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "On 2020-Aug-05, Andres Freund wrote:\n\n> I'm mildly against that, because I'd really like to start making use of\n> the flag. Not so much for cancellations, but to avoid the drastic impact\n> analyze has on bloat. In OLTP workloads with big tables, and without\n> disabled cost limiting for analyze (or slow IO), the snapshot that\n> analyze holds is often by far the transaction with the oldest xmin.\n\nI pushed despite the objection because it seemed that downstream\ndiscussion was largely favorable to the change, and there's a different\nproposal to solve the bloat problem for analyze; and also:\n\n> Only mildly against because it'd not be hard to reintroduce once we need\n> it.\n\nThanks for the discussion!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Aug 2020 17:35:44 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-06 18:02:26 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > In fact using conceptually like a new snapshot for each sample tuple\n> > actually seems like it'd be somewhat of an improvement over using a\n> > single snapshot.\n> \n> Dunno, that feels like a fairly bad idea to me. It seems like it would\n> overemphasize the behavior of whatever queries happened to be running\n> concurrently with the ANALYZE. I do follow the argument that using a\n> single snapshot for the whole ANALYZE overemphasizes a single instant\n> in time, but I don't think that leads to the conclusion that we shouldn't\n> use a snapshot at all.\n\nI didn't actually want to suggest that we should take a separate\nsnapshot for every sampled row - that'd be excessively costly. What I\nwanted to say was that I don't see a clear accuracy\nbenefit. E.g. not seeing any of the values inserted more recently will\nunder-emphasize those in the histogram.\n\nWhat precisely do you mean with \"overemphasize\" above? I mean those will\nbe the rows most likely to live after the analyze is done, so including\nthem doesn't seem like a bad thing to me?\n\n\n> Another angle that would be worth considering, aside from the issue\n> of whether the sample used for pg_statistic becomes more or less\n> representative, is what impact all this would have on the tuple count\n> estimates that go to the stats collector and pg_class.reltuples.\n> Right now, we don't have a great story at all on how the stats collector's\n> count is affected by combining VACUUM/ANALYZE table-wide counts with\n> the incremental deltas reported by transactions happening concurrently\n> with VACUUM/ANALYZE. Would changing this behavior make that better,\n> or worse, or about the same?\n\nHm. Vacuum already counts rows that are inserted concurrently with the\nvacuum scan, if it encounters them. Analyze doesn't. Seems like we'd at\nleast be wrong in a more consistent manner than before...\n\nIIUC both analyze and vacuum will overwrite concurrent changes to\nn_live_tuples. So taking concurrently committed changes into account\nseems like it'd be the right thing?\n\nWe probably could make this more accurate by accounting separately for\n\"recently inserted and committed\" rows, and taking the difference of\nn_live_tuples before/after into account. But I'm a bit doubtful that\nit's worth it?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Aug 2020 14:37:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I pushed despite the objection because it seemed that downstream\n> discussion was largely favorable to the change, and there's a different\n> proposal to solve the bloat problem for analyze; and also:\n\nNote that this quasi-related patch has pretty thoroughly hijacked\nthe CF entry for James' original docs patch proposal. The cfbot\nthinks that that's the latest patch in the original thread, and\nunsurprisingly is failing to apply it.\n\nSince the discussion was all over the place, I'm not sure whether\nthere's still a live docs patch proposal or not; but if so, somebody\nshould repost that patch (and go back to the original thread title).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 29 Aug 2020 20:06:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "On Fri, Jul 31, 2020 at 2:51 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Thu, Jul 16, 2020 at 7:34 PM David Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: not tested\n> > Implements feature: not tested\n> > Spec compliant: not tested\n> > Documentation: tested, passed\n> >\n> > James,\n> >\n> > I'm on board with the point of pointing out explicitly the \"concurrent index builds on multiple tables at the same time will not return on any one table until all have completed\", with back-patching. I do not believe the new paragraph is necessary though. I'd suggest trying to weave it into the existing paragraph ending \"Even then, however, the index may not be immediately usable for queries: in the worst case, it cannot be used as long as transactions exist that predate the start of the index build.\" Adding \"Notably, \" in front of the existing sentence fragment above and tacking it onto the end probably suffices.\n>\n> I'm not sure \"the index may not be immediately usable for queries\" is\n> really accurate/sufficient: it seems to imply the CREATE INDEX has\n> returned but for some reason the index isn't yet valid. The issue I'm\n> trying to describe here is that the CREATE INDEX query itself will not\n> return until all preceding queries have completed *including*\n> concurrent index creations on unrelated tables.\n>\n> > I don't actually don't whether this is true behavior though. Is it something our tests do, or could, demonstrate?\n>\n> It'd take tests that exercise parallelism, but it's pretty simple to\n> demonstrate (but you do have to catch the first index build in a scan\n> phase, so you either need lots of data or a hack). 
Here's an example\n> that uses a bit of a hack to simulate a slow scan phase:\n>\n> Setup:\n> create table items(i int);\n> create table others(i int);\n> create function slow_expr() returns text as $$ select pg_sleep(15);\n> select '5'; $$ language sql immutable;\n> insert into items(i) values (1), (2);\n> insert into others(i) values (1), (2);\n>\n> Then the following in order:\n> 1. In session A: create index concurrently on items((i::text || slow_expr()));\n> 2. In session B (at the same time): create index concurrently on others(i);\n>\n> You'll notice that the 2nd command, which should be practically\n> instantaneous, waits on the first ~30s scan phase of (1) before it\n> returns. The same is true if after (2) completes you immediately run\n> it again -- it waits on the second ~30s scan phase of (1).\n>\n> That does reveal a bit of complexity though that that the current\n> patch doesn't address, which is that this can be phase dependent (and\n> that complexity gets a lot more non-obvious when there's real live\n> activity (particularly long-running transactions) in the system as\n> well.\n>\n> I've attached a new patch series with two items:\n> 1. A simpler (and I believe more correct) doc changes for \"cic blocks\n> cic on other tables\".\n> 2. A patch to document that all index builds can prevent tuples from\n> being vacuumed away on other tables.\n>\n> If it's preferable we could commit the first and discuss the second\n> separately, but since that limitation was also discussed up-thread, I\n> decided to include them both here for now.\n\nÁlvaro's patch confused the current state of this thread, so I'm\nreattaching (rebased) v2 as v3.\n\nJames",
"msg_date": "Tue, 8 Sep 2020 13:25:21 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Sat, Aug 29, 2020 at 8:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I pushed despite the objection because it seemed that downstream\n> > discussion was largely favorable to the change, and there's a different\n> > proposal to solve the bloat problem for analyze; and also:\n>\n> Note that this quasi-related patch has pretty thoroughly hijacked\n> the CF entry for James' original docs patch proposal. The cfbot\n> thinks that that's the latest patch in the original thread, and\n> unsurprisingly is failing to apply it.\n>\n> Since the discussion was all over the place, I'm not sure whether\n> there's still a live docs patch proposal or not; but if so, somebody\n> should repost that patch (and go back to the original thread title).\n\nI replied to the original email thread with reposted patches.\n\nJames\n\n\n",
"msg_date": "Tue, 8 Sep 2020 13:27:50 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PROC_IN_ANALYZE stillborn 13 years ago"
},
{
"msg_contents": "On Tue, Sep 08, 2020 at 01:25:21PM -0400, James Coleman wrote:\n> Álvaro's patch confused the current state of this thread, so I'm\n> reattaching (rebased) v2 as v3.\n\n+ <para>\n+ <command>CREATE INDEX</command> (including the <literal>CONCURRENTLY</literal>\n+ option) commands are included when <command>VACUUM</command> calculates what\n+ dead tuples are safe to remove even on tables other than the one being indexed.\n+ </para>\nFWIW, this is true as well for REINDEX CONCURRENTLY because both use\nthe same code paths for index builds and validation, with basically\nthe same waiting phases. But is CREATE INDEX the correct place for\nthat? Wouldn't it be better to tell about such things on the VACUUM\ndoc?\n\n0001 sounds fine to me.\n--\nMichael",
"msg_date": "Wed, 30 Sep 2020 18:10:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Sep 30, 2020 at 2:10 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Sep 08, 2020 at 01:25:21PM -0400, James Coleman wrote:\n> > Álvaro's patch confused the current state of this thread, so I'm\n> > reattaching (rebased) v2 as v3.\n>\n> + <para>\n> + <command>CREATE INDEX</command> (including the\n> <literal>CONCURRENTLY</literal>\n> + option) commands are included when <command>VACUUM</command>\n> calculates what\n> + dead tuples are safe to remove even on tables other than the one being\n> indexed.\n> + </para>\n> FWIW, this is true as well for REINDEX CONCURRENTLY because both use\n> the same code paths for index builds and validation, with basically\n> the same waiting phases. But is CREATE INDEX the correct place for\n> that? Wouldn't it be better to tell about such things on the VACUUM\n> doc?\n>\n> 0001 sounds fine to me.\n>\n>\nv3-0002 needs a rebase over the create_index.sgml page due to the change of\nthe nearby xref to link. Attached as v4-0002 along with the original\nv3-0001.\n\nI resisted the temptation to commit my word-smithing thoughts to the\naffected paragraph. The word \"phase\" appearing out of nowhere struck me a\nbit oddly. \"Then finally the\" feels like it is missing a couple of commas\n- or just drop the finally. \"then two table scans occur in separate\ntransactions\" reads better than \"two more transactions\" IMO.\n\nFor 0002 maybe focus on the fact that CREATE INDEX is a global concern even\nthough it only names a single table in any one invocation. As a\nconsequence, while it is running, vacuum cannot bring the system's oldest\nxid more current than the oldest xid on any index-in-progress table (I\ndon't know exactly how this works). And, rehashing 0001, all concurrent\nindexing will finish at the same time.\n\nIn short maybe focus less on procedure and specific waiting states and more\non the user-visible consequences. 0001 didn't really clear things up much\nin that regard. It reads like we are introducing a deadlock situation even\nthough that evidently is not the case.\n\nI concur that vacuum's perspective on the create index global reach needs\nto be addressed there if it is not already.\n\n<starts looking at vacuum>\n\nI'm a bit confused as to why/whether create index transactions are somehow\nspecial in this regard, compared to other transactions. I infer from the\nexistence of 0002 that they somehow are...\n\nMy conclusion thus far is that with respect to the original complaint:\n\nOn 2019-09-18 13:51:00 -0400, James Coleman wrote:\n> In my experience it's not immediately obvious (even after reading the\n> documentation) the implications of how concurrent index builds manage\n> transactions with respect to multiple concurrent index builds in\n> flight at the same time.\n\nThese two limited scope patches have not materially moved the needle in\nunderstanding. They are too technical when the underlying issue is\ncomprehension by non-technical people in terms of how they see their system\nbehave.\n\nDavid J.",
"msg_date": "Wed, 21 Oct 2020 15:25:51 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Oct 21, 2020 at 3:25 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n>\n> v3-0002 needs a rebase over the create_index.sgml page due to the change\n> of the nearby xref to link. Attached as v4-0002 along with the original\n> v3-0001.\n>\n>\nattached...\n\nReading the commit message on 0002 - vacuum isn't a transaction-taking\ncommand so it wouldn't interfere with itself, create index does use\ntransactions and thus it's not surprising that it interferes with vacuum -\nwhich looks at transactions, not commands (as most of the internals would\nI'd presume).\n\nDavid J.",
"msg_date": "Wed, 21 Oct 2020 15:32:19 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "Status update for a commitfest entry.\r\n\r\nThe commitfest is nearing the end and I wonder what is this discussion waiting for.\r\nIt looks like the proposed patch received its fair share of review, so I mark it as ReadyForCommitter and lay responsibility for the final decision on them.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Mon, 30 Nov 2020 20:03:33 +0000",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2020-Nov-30, Anastasia Lubennikova wrote:\n\n> The commitfest is nearing the end and I wonder what is this discussion waiting for.\n> It looks like the proposed patch received its fair share of review, so\n> I mark it as ReadyForCommitter and lay responsibility for the final\n> decision on them.\n\nI'll get these pushed now, thanks for the reminder.\n\n\n",
"msg_date": "Mon, 30 Nov 2020 17:05:43 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2020-Sep-30, Michael Paquier wrote:\n\n> + <para>\n> + <command>CREATE INDEX</command> (including the <literal>CONCURRENTLY</literal>\n> + option) commands are included when <command>VACUUM</command> calculates what\n> + dead tuples are safe to remove even on tables other than the one being indexed.\n> + </para>\n> FWIW, this is true as well for REINDEX CONCURRENTLY because both use\n> the same code paths for index builds and validation, with basically\n> the same waiting phases. But is CREATE INDEX the correct place for\n> that? Wouldn't it be better to tell about such things on the VACUUM\n> doc?\n\nYeah, I think it might be more sensible to document this in\nmaintenance.sgml, as part of the paragraph that discusses removing\ntuples \"to save space\". But making it inline with the rest of the flow,\nit seems to distract from higher-level considerations, so I suggest to\nmake it a footnote instead.\n\nI'm not sure on the wording to use; what about this?",
"msg_date": "Mon, 30 Nov 2020 18:52:55 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 4:53 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Sep-30, Michael Paquier wrote:\n>\n> > + <para>\n> > + <command>CREATE INDEX</command> (including the <literal>CONCURRENTLY</literal>\n> > + option) commands are included when <command>VACUUM</command> calculates what\n> > + dead tuples are safe to remove even on tables other than the one being indexed.\n> > + </para>\n> > FWIW, this is true as well for REINDEX CONCURRENTLY because both use\n> > the same code paths for index builds and validation, with basically\n> > the same waiting phases. But is CREATE INDEX the correct place for\n> > that? Wouldn't it be better to tell about such things on the VACUUM\n> > doc?\n>\n> Yeah, I think it might be more sensible to document this in\n> maintenance.sgml, as part of the paragraph that discusses removing\n> tuples \"to save space\". But making it inline with the rest of the flow,\n> it seems to distract from higher-level considerations, so I suggest to\n> make it a footnote instead.\n\nI have mixed feelings about wholesale moving it; users aren't likely\nto read the vacuum doc when considering how running CIC might impact\ntheir system, though I do understand why it otherwise fits there. Even\nif the primary details are in the vacuum, I tend to think a reference\nnote (or link to the vacuum docs) in the create index docs would be\nuseful. The principle here is that 1.) vacuum is automatic/part of the\nbackground of the system, not just something people trigger manually,\nand 2.) we ought to document things where the user action triggering\nthe behavior is documented.\n\n> I'm not sure on the wording to use; what about this?\n\nThe wording seems fine to me.\n\nThis is a replacement for what was 0002 earlier? And 0001 from earlier\nstill seems to be a useful standalone patch?\n\nJames\n\n\n",
"msg_date": "Mon, 30 Nov 2020 19:29:27 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2020-Nov-30, James Coleman wrote:\n\n> On Mon, Nov 30, 2020 at 4:53 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2020-Sep-30, Michael Paquier wrote:\n\n> > Yeah, I think it might be more sensible to document this in\n> > maintenance.sgml, as part of the paragraph that discusses removing\n> > tuples \"to save space\". But making it inline with the rest of the flow,\n> > it seems to distract from higher-level considerations, so I suggest to\n> > make it a footnote instead.\n> \n> I have mixed feelings about wholesale moving it; users aren't likely\n> to read the vacuum doc when considering how running CIC might impact\n> their system, though I do understand why it otherwise fits there.\n\nMakes sense.  ISTM that if we want to have a cautionary blurb CIC docs,\nit should go in REINDEX CONCURRENTLY as well.\n\n> > I'm not sure on the wording to use; what about this?\n> \n> The wording seems fine to me.\n\nGreat, thanks.\n\n> This is a replacement for what was 0002 earlier? And 0001 from earlier\n> still seems to be a useful standalone patch?\n\n0001 is the one that I got pushed yesterday, I think -- correct?\nsrc/tools/git_changelog says:\n\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nBranch: master [58ebe967f] 2020-11-30 18:24:55 -0300\nBranch: REL_13_STABLE [3fe0e7c3f] 2020-11-30 18:24:55 -0300\nBranch: REL_12_STABLE [b2603f16a] 2020-11-30 18:24:55 -0300\nBranch: REL_11_STABLE [ed9c9b033] 2020-11-30 18:24:55 -0300\nBranch: REL_10_STABLE [d3bd36a63] 2020-11-30 18:24:55 -0300\nBranch: REL9_6_STABLE [b3d33bf59] 2020-11-30 18:24:55 -0300\nBranch: REL9_5_STABLE [968a537b4] 2020-11-30 18:24:55 -0300\n\n    Document concurrent indexes waiting on each other\n    \n    Because regular CREATE INDEX commands are independent, and there's no\n    logical data dependency, it's not immediately obvious that transactions\n    held by concurrent index builds on one table will block the second phase\n    of concurrent index creation on an unrelated table, so document this\n    caveat.\n    \n    Backpatch this all the way back.  In branch master, mention that only\n    some indexes are involved.\n    \n    Author: James Coleman <jtc331@gmail.com>\n    Reviewed-by: David Johnston <david.g.johnston@gmail.com>\n    Discussion: https://postgr.es/m/CAAaqYe994=PUrn8CJZ4UEo_S-FfRr_3ogERyhtdgHAb2WG_Ufg@mail.gmail.com\n\n\n\n",
"msg_date": "Tue, 1 Dec 2020 20:51:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 6:51 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Nov-30, James Coleman wrote:\n>\n> > On Mon, Nov 30, 2020 at 4:53 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > On 2020-Sep-30, Michael Paquier wrote:\n>\n> > > Yeah, I think it might be more sensible to document this in\n> > > maintenance.sgml, as part of the paragraph that discusses removing\n> > > tuples \"to save space\".  But making it inline with the rest of the flow,\n> > > it seems to distract from higher-level considerations, so I suggest to\n> > > make it a footnote instead.\n> >\n> > I have mixed feelings about wholesale moving it; users aren't likely\n> > to read the vacuum doc when considering how running CIC might impact\n> > their system, though I do understand why it otherwise fits there.\n>\n> Makes sense.  ISTM that if we want to have a cautionary blurb CIC docs,\n> it should go in REINDEX CONCURRENTLY as well.\n\nAgreed. Or, alternatively, a blurb something like \"Please note how CIC\ninteracts with VACUUM <link>...\", and then the primary language in\nmaintenance.sgml. That would have the benefit of maintaining the core\nlanguage in only one place.\n\n> > > I'm not sure on the wording to use; what about this?\n> >\n> > The wording seems fine to me.\n>\n> Great, thanks.\n>\n> > This is a replacement for what was 0002 earlier? And 0001 from earlier\n> > still seems to be a useful standalone patch?\n>\n> 0001 is the one that I got pushed yesterday, I think -- correct?\n> src/tools/git_changelog says:\n>\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Branch: master [58ebe967f] 2020-11-30 18:24:55 -0300\n> Branch: REL_13_STABLE [3fe0e7c3f] 2020-11-30 18:24:55 -0300\n> Branch: REL_12_STABLE [b2603f16a] 2020-11-30 18:24:55 -0300\n> Branch: REL_11_STABLE [ed9c9b033] 2020-11-30 18:24:55 -0300\n> Branch: REL_10_STABLE [d3bd36a63] 2020-11-30 18:24:55 -0300\n> Branch: REL9_6_STABLE [b3d33bf59] 2020-11-30 18:24:55 -0300\n> Branch: REL9_5_STABLE [968a537b4] 2020-11-30 18:24:55 -0300\n>\n>     Document concurrent indexes waiting on each other\n>\n>     Because regular CREATE INDEX commands are independent, and there's no\n>     logical data dependency, it's not immediately obvious that transactions\n>     held by concurrent index builds on one table will block the second phase\n>     of concurrent index creation on an unrelated table, so document this\n>     caveat.\n>\n>     Backpatch this all the way back.  In branch master, mention that only\n>     some indexes are involved.\n>\n>     Author: James Coleman <jtc331@gmail.com>\n>     Reviewed-by: David Johnston <david.g.johnston@gmail.com>\n>     Discussion: https://postgr.es/m/CAAaqYe994=PUrn8CJZ4UEo_S-FfRr_3ogERyhtdgHAb2WG_Ufg@mail.gmail.com\n\nAh, yes, somehow I'd missed that that had been pushed.\n\nJames\n\n\n",
"msg_date": "Tue, 1 Dec 2020 20:05:35 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Tue, Dec 1, 2020 at 8:05 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Tue, Dec 1, 2020 at 6:51 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2020-Nov-30, James Coleman wrote:\n> >\n> > > On Mon, Nov 30, 2020 at 4:53 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > >\n> > > > On 2020-Sep-30, Michael Paquier wrote:\n> >\n> > > > Yeah, I think it might be more sensible to document this in\n> > > > maintenance.sgml, as part of the paragraph that discusses removing\n> > > > tuples \"to save space\". But making it inline with the rest of the flow,\n> > > > it seems to distract from higher-level considerations, so I suggest to\n> > > > make it a footnote instead.\n> > >\n> > > I have mixed feelings about wholesale moving it; users aren't likely\n> > > to read the vacuum doc when considering how running CIC might impact\n> > > their system, though I do understand why it otherwise fits there.\n> >\n> > Makes sense. ISTM that if we want to have a cautionary blurb CIC docs,\n> > it should go in REINDEX CONCURRENTLY as well.\n>\n> Agreed. Or, alternatively, a blurb something like \"Please note how CIC\n> interacts with VACUUM <link>...\", and then the primary language in\n> maintenance.sgml. That would have the benefit of maintaining the core\n> language in only one place.\n\nAny thoughts on this?\n\nJames\n\n\n",
"msg_date": "Fri, 18 Dec 2020 11:03:16 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2020-Dec-01, James Coleman wrote:\n\n> On Tue, Dec 1, 2020 at 6:51 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Makes sense.  ISTM that if we want to have a cautionary blurb CIC docs,\n> > it should go in REINDEX CONCURRENTLY as well.\n> \n> Agreed. Or, alternatively, a blurb something like \"Please note how CIC\n> interacts with VACUUM <link>...\", and then the primary language in\n> maintenance.sgml. That would have the benefit of maintaining the core\n> language in only one place.\n\nI looked into this again, and I didn't like what I had added to\nmaintenance.sgml at all.  It seems out of place where I put it; and I\ncouldn't find any great spots.  Going back to your original proposal,\nwhat about something like this?  It's just one more para in the \"notes\"\nsection in CREATE INDEX and REINDEX pages, without any additions to the\nVACUUM pages.\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W",
"msg_date": "Tue, 12 Jan 2021 16:51:39 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Tue, Jan 12, 2021 at 04:51:39PM -0300, Alvaro Herrera wrote:\n> I looked into this again, and I didn't like what I had added to\n> maintenance.sgml at all. It seems out of place where I put it; and I\n> couldn't find any great spots. Going back to your original proposal,\n> what about something like this? It's just one more para in the \"notes\"\n> section in CREATE INDEX and REINDEX pages, without any additions to the\n> VACUUM pages.\n\n+1.\n--\nMichael",
"msg_date": "Wed, 13 Jan 2021 14:58:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 12:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 12, 2021 at 04:51:39PM -0300, Alvaro Herrera wrote:\n> > I looked into this again, and I didn't like what I had added to\n> > maintenance.sgml at all. It seems out of place where I put it; and I\n> > couldn't find any great spots. Going back to your original proposal,\n> > what about something like this? It's just one more para in the \"notes\"\n> > section in CREATE INDEX and REINDEX pages, without any additions to the\n> > VACUUM pages.\n>\n> +1.\n\nI think one more para in the notes is good. But shouldn't we still\nclarify the issue is specific to CONCURRENTLY?\n\nAlso that it's not just the table being indexed seems fairly significant.\n\nHow about something like:\n\n---\nLike any long-running transaction, <command>REINDEX CONCURRENTLY</command> can\naffect which tuples can be removed by concurrent\n<command>VACUUM</command> on any table.\n---\n\nJames\n\n\n",
"msg_date": "Wed, 13 Jan 2021 10:16:27 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2021-Jan-13, James Coleman wrote:\n\n> On Wed, Jan 13, 2021 at 12:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Jan 12, 2021 at 04:51:39PM -0300, Alvaro Herrera wrote:\n> > > I looked into this again, and I didn't like what I had added to\n> > > maintenance.sgml at all.  It seems out of place where I put it; and I\n> > > couldn't find any great spots.  Going back to your original proposal,\n> > > what about something like this?  It's just one more para in the \"notes\"\n> > > section in CREATE INDEX and REINDEX pages, without any additions to the\n> > > VACUUM pages.\n> >\n> > +1.\n> \n> I think one more para in the notes is good. But shouldn't we still\n> clarify the issue is specific to CONCURRENTLY?\n\nHow is it specific to concurrent builds?  What we're documenting here is\nthe behavior of vacuum, and that one is identical in both regular builds\nand concurrent builds (since vacuum has to avoid removing rows from\nunder either of them).  The only reason concurrent builds are\ninteresting is because they take longer.\n\nWhat was specific to concurrent builds was the fact that you can't have\nmore than one at a time, and that one is what was added in 58ebe967f.\n\n> Also that it's not just the table being indexed seems fairly significant.\n\nThis is true.  So I propose\n\n  Like any long-running transaction, <command>REINDEX</command> can\n  affect which tuples can be removed by concurrent <command>VACUUM</command>\n  on any table.\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Wed, 13 Jan 2021 14:33:43 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 12:33 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jan-13, James Coleman wrote:\n>\n> > On Wed, Jan 13, 2021 at 12:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Tue, Jan 12, 2021 at 04:51:39PM -0300, Alvaro Herrera wrote:\n> > > > I looked into this again, and I didn't like what I had added to\n> > > > maintenance.sgml at all. It seems out of place where I put it; and I\n> > > > couldn't find any great spots. Going back to your original proposal,\n> > > > what about something like this? It's just one more para in the \"notes\"\n> > > > section in CREATE INDEX and REINDEX pages, without any additions to the\n> > > > VACUUM pages.\n> > >\n> > > +1.\n> >\n> > I think one more para in the notes is good. But shouldn't we still\n> > clarify the issue is specific to CONCURRENTLY?\n>\n> How is it specific to concurrent builds? What we're documenting here is\n> the behavior of vacuum, and that one is identical in both regular builds\n> and concurrent builds (since vacuum has to avoid removing rows from\n> under either of them). The only reason concurrent builds are\n> interesting is because they take longer.\n>\n> What was specific to concurrent builds was the fact that you can't have\n> more than one at a time, and that one is what was added in 58ebe967f.\n\nAh, right. I've mixed those up at least once on this thread already.\n\n> > Also that it's not just the table being indexed seems fairly significant.\n>\n> This is true. So I propose\n>\n> Like any long-running transaction, <command>REINDEX</command> can\n> affect which tuples can be removed by concurrent <command>VACUUM</command>\n> on any table.\n\nThat sounds good to me.\n\nJames\n\n\n",
"msg_date": "Wed, 13 Jan 2021 13:42:17 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2021-Jan-13, James Coleman wrote:\n\n> On Wed, Jan 13, 2021 at 12:33 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > This is true.  So I propose\n> >\n> >   Like any long-running transaction, <command>REINDEX</command> can\n> >   affect which tuples can be removed by concurrent <command>VACUUM</command>\n> >   on any table.\n> \n> That sounds good to me.\n\nGreat, pushed with one more wording tweak: \"REINDEX on any table can\naffect ... on any other table\".  To pg12 and up.\n\nI wondered about noting whether only processes in the current database\nare affected, but then I noticed that the current code since commit\ndc7420c2c927 uses a completely different algorithm than what we had with\nGetOldestXmin() and does not consider database boundaries at all.\nThis doesn't sound great to me, since a misbehaved database can now\naffect others ... Maybe I misunderstand that code.\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W\n\"This is what I like so much about PostgreSQL.  Most of the surprises\nare of the \"oh wow!  That's cool\" Not the \"oh shit!\" kind.  :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php",
"msg_date": "Wed, 13 Jan 2021 18:05:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 4:05 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jan-13, James Coleman wrote:\n>\n> > On Wed, Jan 13, 2021 at 12:33 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > > This is true. So I propose\n> > >\n> > > Like any long-running transaction, <command>REINDEX</command> can\n> > > affect which tuples can be removed by concurrent <command>VACUUM</command>\n> > > on any table.\n> >\n> > That sounds good to me.\n>\n> Great, pushed with one more wording tweak: \"REINDEX on any table can\n> affect ... on any other table\". To pg12 and up.\n\nLooks like what got committed is \"REINDEX on a table\" not \"on any\",\nbut I'm not sure that matters too much.\n\nJames\n\n\n",
"msg_date": "Wed, 13 Jan 2021 16:08:33 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2021-Jan-13, Alvaro Herrera wrote:\n\n> I wondered about noting whether only processes in the current database\n> are affected, but then I noticed that the current code since commit\n> dc7420c2c927 uses a completely different algorithm than what we had with\n> GetOldestXmin() and does not consider database boundaries at all.\n> This doesn't sound great to me, since a misbehaved database can now\n> affect others ... Maybe I misunderstand that code.\n\nThis appears to be false, per ComputeXidHorizons.\n\n-- \nÁlvaro Herrera       39°49'30\"S 73°17'W\n\"Ni aún el genio muy grande llegaría muy lejos\nsi tuviera que sacarlo todo de su propio interior\" (Goethe)\n\n\n",
"msg_date": "Wed, 13 Jan 2021 18:14:25 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On 2021-Jan-13, James Coleman wrote:\n\n> On Wed, Jan 13, 2021 at 4:05 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Jan-13, James Coleman wrote:\n> >\n> > > On Wed, Jan 13, 2021 at 12:33 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > > > This is true.  So I propose\n> > > >\n> > > >   Like any long-running transaction, <command>REINDEX</command> can\n> > > >   affect which tuples can be removed by concurrent <command>VACUUM</command>\n> > > >   on any table.\n> > >\n> > > That sounds good to me.\n> >\n> > Great, pushed with one more wording tweak: \"REINDEX on any table can\n> > affect ... on any other table\".  To pg12 and up.\n> \n> Looks like what got committed is \"REINDEX on a table\" not \"on any\",\n> but I'm not sure that matters too much.\n\nOuch.  The difference seems slight enough that it doesn't matter; is it\nungrammatical?\n\nEither way I'm gonna close this CF entry now, finally.  Thank you for\nyour patience!\n\n-- \nÁlvaro Herrera       Valdivia, Chile\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu.  Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)\n\n\n",
"msg_date": "Wed, 13 Jan 2021 18:16:04 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jan-13, James Coleman wrote:\n>>>> This is true. So I propose\n>>>> Like any long-running transaction, <command>REINDEX</command> can\n>>>> affect which tuples can be removed by concurrent <command>VACUUM</command>\n>>>> on any table.\n\n>> Looks like what got committed is \"REINDEX on a table\" not \"on any\",\n>> but I'm not sure that matters too much.\n\n> Ouch. The difference seems slight enough that it doesn't matter; is it\n> ungrammatical?\n\nI'd personally have written \"on other tables\" or \"on another table\",\nor left out that clause altogether and just said \"concurrent\n<command>VACUUM</command>\". I'm not sure it's ungrammatical exactly,\nbut the antecedent of \"a table\" is a bit unclear; people might\nwonder if it means the table being reindexed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jan 2021 16:29:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 4:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2021-Jan-13, James Coleman wrote:\n> >>>> This is true. So I propose\n> >>>> Like any long-running transaction, <command>REINDEX</command> can\n> >>>> affect which tuples can be removed by concurrent <command>VACUUM</command>\n> >>>> on any table.\n>\n> >> Looks like what got committed is \"REINDEX on a table\" not \"on any\",\n> >> but I'm not sure that matters too much.\n>\n> > Ouch. The difference seems slight enough that it doesn't matter; is it\n> > ungrammatical?\n>\n> I'd personally have written \"on other tables\" or \"on another table\",\n> or left out that clause altogether and just said \"concurrent\n> <command>VACUUM</command>\". I'm not sure it's ungrammatical exactly,\n> but the antecedent of \"a table\" is a bit unclear; people might\n> wonder if it means the table being reindexed.\n\nIt does mean the table being reindexed; the last phrase says \"any\ntable\" meaning \"any other table\".\n\nJames\n\n\n",
"msg_date": "Wed, 13 Jan 2021 16:48:37 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> On Wed, Jan 13, 2021 at 4:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> but the antecedent of \"a table\" is a bit unclear; people might\n>> wonder if it means the table being reindexed.\n\n> It does mean the table being reindexed; the last phrase says \"any\n> table\" meaning \"any other table\".\n\n[ raised eyebrow ] Surely REINDEX and VACUUM can't run on the same\ntable at the same time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jan 2021 17:00:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 5:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Coleman <jtc331@gmail.com> writes:\n> > On Wed, Jan 13, 2021 at 4:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> but the antecedent of \"a table\" is a bit unclear; people might\n> >> wonder if it means the table being reindexed.\n>\n> > It does mean the table being reindexed; the last phrase says \"any\n> > table\" meaning \"any other table\".\n>\n> [ raised eyebrow ] Surely REINDEX and VACUUM can't run on the same\n> table at the same time.\n\n+ Like any long-running transaction, <command>CREATE INDEX</command> on a\n+ table can affect which tuples can be removed by concurrent\n+ <command>VACUUM</command> on any other table.\n\nThe \"on a table\" is the table on which the REINDEX/CREATE INDEX is\noccurring. The \"any other table\" is where VACUUM might run.\n\nJames\n\n\n",
"msg_date": "Wed, 13 Jan 2021 17:52:16 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> On Wed, Jan 13, 2021 at 5:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> [ raised eyebrow ] Surely REINDEX and VACUUM can't run on the same\n>> table at the same time.\n\n> + Like any long-running transaction, <command>CREATE INDEX</command> on a\n> + table can affect which tuples can be removed by concurrent\n> + <command>VACUUM</command> on any other table.\n\n> The \"on a table\" is the table on which the REINDEX/CREATE INDEX is\n> occurring. The \"any other table\" is where VACUUM might run.\n\nI still think it'd be just as clear without the auxiliary clauses,\nsay\n\n+ Like any long-running transaction, <command>CREATE INDEX</command>\n+ can affect which tuples can be removed by concurrent\n+ <command>VACUUM</command> operations.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jan 2021 18:49:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [DOC] Document concurrent index builds waiting on each other"
}
] |
[
{
"msg_contents": "There is a small but eye catching glitch in the v12 (and master) docs\nfor \"CREATE TABLE AS\".\n\nhttps://www.postgresql.org/docs/12/sql-createtableas.html\n\nindex b5c4ce6959..56d06838f1 100644\n--- a/doc/src/sgml/ref/create_table_as.sgml\n+++ b/doc/src/sgml/ref/create_table_as.sgml\n@@ -146,7 +146,6 @@\n clause for a table can also include <literal>OIDS=FALSE</literal> to\n specify that rows of the new table should contain no OIDs (object\n identifiers), <literal>OIDS=TRUE</literal> is not supported anymore.\n- OIDs.\n </para>\n </listitem>\n </varlistentry>\n\n\nSincerely,\nFilip\n\n\n",
"msg_date": "Thu, 19 Sep 2019 10:10:15 +0200",
"msg_from": "Filip Rembiałkowski <filip.rembialkowski@gmail.com>",
"msg_from_op": true,
"msg_subject": "one line doc patch for v12"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 1:40 PM Filip Rembiałkowski\n<filip.rembialkowski@gmail.com> wrote:\n>\n> There is a small but eye catching glitch in the v12 (and master) docs\n> for \"CREATE TABLE AS\".\n>\n> https://www.postgresql.org/docs/12/sql-createtableas.html\n>\n> index b5c4ce6959..56d06838f1 100644\n> --- a/doc/src/sgml/ref/create_table_as.sgml\n> +++ b/doc/src/sgml/ref/create_table_as.sgml\n> @@ -146,7 +146,6 @@\n> clause for a table can also include <literal>OIDS=FALSE</literal> to\n> specify that rows of the new table should contain no OIDs (object\n> identifiers), <literal>OIDS=TRUE</literal> is not supported anymore.\n> - OIDs.\n> </para>\n> </listitem>\n> </varlistentry>\n>\n\nLooks good to me, will take care of pushing this change in some time\nunless someone else takes care of it before me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Sep 2019 14:15:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: one line doc patch for v12"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 2:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Sep 19, 2019 at 1:40 PM Filip Rembiałkowski\n> <filip.rembialkowski@gmail.com> wrote:\n> >\n> > There is a small but eye catching glitch in the v12 (and master) docs\n> > for \"CREATE TABLE AS\".\n> >\n> > https://www.postgresql.org/docs/12/sql-createtableas.html\n> >\n> > index b5c4ce6959..56d06838f1 100644\n> > --- a/doc/src/sgml/ref/create_table_as.sgml\n> > +++ b/doc/src/sgml/ref/create_table_as.sgml\n> > @@ -146,7 +146,6 @@\n> > clause for a table can also include <literal>OIDS=FALSE</literal> to\n> > specify that rows of the new table should contain no OIDs (object\n> > identifiers), <literal>OIDS=TRUE</literal> is not supported anymore.\n> > - OIDs.\n> > </para>\n> > </listitem>\n> > </varlistentry>\n> >\n>\n> Looks good to me, will take care of pushing this change in some time\n> unless someone else takes care of it before me.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Sep 2019 15:01:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: one line doc patch for v12"
}
] |
[
{
"msg_contents": "Hi,\n\ncurrently, libpq does SSL cerificate validation only against the defined \n`PGSSLROOTCERT` file.\n\nIs there any specific reason, why the system truststore ( at least under \nunixoid systems) is not considered for the validation?\n\nWe would like to contribute a patch to allow certificate validation against \nthe system truststore. Are there any opinions against it?\n\n\nA little bit background for this:\n\nInternally we sign the certificates for our systems with our own CA. The CA \nroot certificates and revocation lists are distributed via puppet and/or \npackages on all of our internal systems.\n\nValidating the certificate against this CA requires to either override the \nPGSSLROOTCERT location via the environment or provide a copy of the file for \neach user that connects with libpq or libpq-like connectors.\n\nWe would like to simplify this.\n\n\n-- \nThomas Berger\n\nPostgreSQL DBA\nDatabase Operations\n\n1&1 Telecommunication SE | Ernst-Frey-Straße 10 | 76135 Karlsruhe | Germany\n\n\n",
"msg_date": "Thu, 19 Sep 2019 14:54:22 +0000",
"msg_from": "Thomas Berger <thomas.berger@1und1.de>",
"msg_from_op": true,
"msg_subject": "Usage of the system truststore for SSL certificate validation"
},
{
"msg_contents": "If we're going to open this up, can we add an option to say \"this key is\nallowed to log in to this account\", SSH style?\n\nI like the idea of using keys rather than .pgpass, but I like the\n~/.ssh/authorized_keys model and don't like the \"set up an entire\ncertificate infrastructure\" approach.\n\nOn Thu, 19 Sep 2019 at 10:54, Thomas Berger <thomas.berger@1und1.de> wrote:\n\n> Hi,\n>\n> currently, libpq does SSL cerificate validation only against the defined\n> `PGSSLROOTCERT` file.\n>\n> Is there any specific reason, why the system truststore ( at least under\n> unixoid systems) is not considered for the validation?\n>\n> We would like to contribute a patch to allow certificate validation\n> against\n> the system truststore. Are there any opinions against it?\n>\n>\n> A little bit background for this:\n>\n> Internally we sign the certificates for our systems with our own CA. The\n> CA\n> root certificates and revocation lists are distributed via puppet and/or\n> packages on all of our internal systems.\n>\n> Validating the certificate against this CA requires to either override the\n> PGSSLROOTCERT location via the environment or provide a copy of the file\n> for\n> each user that connects with libpq or libpq-like connectors.\n>\n> We would like to simplify this.\n>\n>\n> --\n> Thomas Berger\n>\n> PostgreSQL DBA\n> Database Operations\n>\n> 1&1 Telecommunication SE | Ernst-Frey-Straße 10 | 76135 Karlsruhe | Germany\n>\n>\n>\n\nIf we're going to open this up, can we add an option to say \"this key is allowed to log in to this account\", SSH style?I like the idea of using keys rather than .pgpass, but I like the ~/.ssh/authorized_keys model and don't like the \"set up an entire certificate infrastructure\" approach.On Thu, 19 Sep 2019 at 10:54, Thomas Berger <thomas.berger@1und1.de> wrote:Hi,\n\ncurrently, libpq does SSL cerificate validation only against the defined \n`PGSSLROOTCERT` file.\n\nIs there any specific reason, why the system 
truststore ( at least under \nunixoid systems) is not considered for the validation?\n\nWe would like to contribute a patch to allow certificate validation against \nthe system truststore. Are there any opinions against it?\n\n\nA little bit background for this:\n\nInternally we sign the certificates for our systems with our own CA. The CA \nroot certificates and revocation lists are distributed via puppet and/or \npackages on all of our internal systems.\n\nValidating the certificate against this CA requires to either override the \nPGSSLROOTCERT location via the environment or provide a copy of the file for \neach user that connects with libpq or libpq-like connectors.\n\nWe would like to simplify this.\n\n\n-- \nThomas Berger\n\nPostgreSQL DBA\nDatabase Operations\n\n1&1 Telecommunication SE | Ernst-Frey-Straße 10 | 76135 Karlsruhe | Germany",
"msg_date": "Thu, 19 Sep 2019 12:26:27 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Usage of the system truststore for SSL certificate validation"
},
{
"msg_contents": "This certainly looks like a good addition to me that can be\nimplemented on both client and server side. It is always good to have\na common location where the list of all the certificates from various\nCA's can be placed for validation.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Sep 19, 2019 at 8:24 PM Thomas Berger <thomas.berger@1und1.de> wrote:\n>\n> Hi,\n>\n> currently, libpq does SSL cerificate validation only against the defined\n> `PGSSLROOTCERT` file.\n>\n> Is there any specific reason, why the system truststore ( at least under\n> unixoid systems) is not considered for the validation?\n>\n> We would like to contribute a patch to allow certificate validation against\n> the system truststore. Are there any opinions against it?\n>\n>\n> A little bit background for this:\n>\n> Internally we sign the certificates for our systems with our own CA. The CA\n> root certificates and revocation lists are distributed via puppet and/or\n> packages on all of our internal systems.\n>\n> Validating the certificate against this CA requires to either override the\n> PGSSLROOTCERT location via the environment or provide a copy of the file for\n> each user that connects with libpq or libpq-like connectors.\n>\n> We would like to simplify this.\n>\n>\n> --\n> Thomas Berger\n>\n> PostgreSQL DBA\n> Database Operations\n>\n> 1&1 Telecommunication SE | Ernst-Frey-Straße 10 | 76135 Karlsruhe | Germany\n>\n>\n\n\n",
"msg_date": "Fri, 20 Sep 2019 10:50:03 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Usage of the system truststore for SSL certificate validation"
},
{
"msg_contents": "On Thu, 19 Sep 2019 at 12:26, Isaac Morland <isaac.morland@gmail.com> wrote:\n\n> If we're going to open this up, can we add an option to say \"this key is\n> allowed to log in to this account\", SSH style?\n>\n> I like the idea of using keys rather than .pgpass, but I like the\n> ~/.ssh/authorized_keys model and don't like the \"set up an entire\n> certificate infrastructure\" approach.\n>\n\n Sorry for the top-post.\n\nOn Thu, 19 Sep 2019 at 12:26, Isaac Morland <isaac.morland@gmail.com> wrote:If we're going to open this up, can we add an option to say \"this key is allowed to log in to this account\", SSH style?I like the idea of using keys rather than .pgpass, but I like the ~/.ssh/authorized_keys model and don't like the \"set up an entire certificate infrastructure\" approach. Sorry for the top-post.",
"msg_date": "Fri, 20 Sep 2019 10:40:45 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Usage of the system truststore for SSL certificate validation"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 12:26:27PM -0400, Isaac Morland wrote:\n> If we're going to open this up, can we add an option to say \"this key is\n> allowed to log in to this account\", SSH style?\n> \n> I like the idea of using keys rather than .pgpass, but I like the ~/.ssh/\n> authorized_keys model and don't like the \"set up an entire certificate\n> infrastructure\" approach.\n\nThis is actually a good question --- why does ssh do it that way and\nPostgres does it another, more like a web server/client. Maybe it is\nbecause ssh allows the user to create one key pair, and use it for\nseveral independent servers, while Postgres assumes the client will only\nconnect to multiple related servers controlled by the same CA. With the\nPostgres approach, you can change the client certificate with no changes\non the server, while with the ssh model, changing the client certificate\nrequires sending the public key to the ssh server to be added to\n~/.ssh/authorized_keys.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 28 Sep 2019 15:59:00 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Usage of the system truststore for SSL certificate validation"
},
{
"msg_contents": "On Sat, Sep 28, 2019 at 9:59 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Sep 19, 2019 at 12:26:27PM -0400, Isaac Morland wrote:\n> > If we're going to open this up, can we add an option to say \"this key is\n> > allowed to log in to this account\", SSH style?\n> >\n> > I like the idea of using keys rather than .pgpass, but I like the ~/.ssh/\n> > authorized_keys model and don't like the \"set up an entire certificate\n> > infrastructure\" approach.\n>\n> This is actually a good question --- why does ssh do it that way and\n> Postgres does it another, more like a web server/client. Maybe it is\n> because ssh allows the user to create one key pair, and use it for\n> several independent servers, while Postgres assumes the client will only\n> connect to multiple related servers controlled by the same CA. With the\n> Postgres approach, you can change the client certificate with no changes\n> on the server, while with the ssh model, changing the client certificate\n> requires sending the public key to the ssh server to be added to\n> ~/.ssh/authorized_keys.\n>\n\nThe big difference between the two methods in general is the CA yes. In the\nSSL based method, you have a central authority that says \"these keys are\nOK\" by means of certificates. In the ssh key model, there's an individual\nkeypair.\n\nIt would make no sense to extend the cert model of authentication to\nsupport ssh style keys, IMO. However, it might make perfect sense to add a\nseparate pure key based login method. And re-using the way ssh handles keys\nthere would make sense. But the question is, would you really want to\nre-use the ssh *keys*? 
You couldn't do it server-side anyway (PostgreSQL\nwon't have access to authorized_keys files for other users than itself, as\nunlike ssh it doesn't run as root), and since you need a separate keyspace\nyou probably wouldn't want to use .ssh/identity either.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sat, Sep 28, 2019 at 9:59 PM Bruce Momjian <bruce@momjian.us> wrote:On Thu, Sep 19, 2019 at 12:26:27PM -0400, Isaac Morland wrote:\n> If we're going to open this up, can we add an option to say \"this key is\n> allowed to log in to this account\", SSH style?\n> \n> I like the idea of using keys rather than .pgpass, but I like the ~/.ssh/\n> authorized_keys model and don't like the \"set up an entire certificate\n> infrastructure\" approach.\n\nThis is actually a good question --- why does ssh do it that way and\nPostgres does it another, more like a web server/client. Maybe it is\nbecause ssh allows the user to create one key pair, and use it for\nseveral independent servers, while Postgres assumes the client will only\nconnect to multiple related servers controlled by the same CA. With the\nPostgres approach, you can change the client certificate with no changes\non the server, while with the ssh model, changing the client certificate\nrequires sending the public key to the ssh server to be added to\n~/.ssh/authorized_keys.The big difference between the two methods in general is the CA yes. In the SSL based method, you have a central authority that says \"these keys are OK\" by means of certificates. In the ssh key model, there's an individual keypair.It would make no sense to extend the cert model of authentication to support ssh style keys, IMO. However, it might make perfect sense to add a separate pure key based login method. And re-using the way ssh handles keys there would make sense. 
But the question is, would you really want to re-use the ssh *keys*? You couldn't do it server-side anyway (PostgreSQL won't have access to authorized_keys files for other users than itself, as unlike ssh it doesn't run as root), and since you need a separate keyspace you probably wouldn't want to use .ssh/identity either. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 30 Sep 2019 11:13:51 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Usage of the system truststore for SSL certificate validation"
}
] |
[
{
"msg_contents": "Hi people,\n\nI have written language plugins for .spec files used in isolation tests. They are available for Vim and Visual Studio Code. I hope they will make reading the tests easier for you. If you find a problem, please open an issue!\n\n\n\nhttps://github.com/onlined/pgspec.vim<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fonlined%2Fpgspec.vim&data=02%7C01%7CEkin.Dursun%40microsoft.com%7C57ec872ba3f1461807cd08d73c3a6a56%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637044094001701907&sdata=TcaIi4QzmWTX4u4G%2FZ8agNcqHK1IzW%2BuZs3kQ6E%2FQkg%3D&reserved=0>\n\nhttps://github.com/onlined/pgspec-vsc<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fonlined%2Fpgspec-vsc&data=02%7C01%7CEkin.Dursun%40microsoft.com%7C57ec872ba3f1461807cd08d73c3a6a56%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637044094001701907&sdata=iMeh5t99CauvpudaxzaiR%2B%2BIx6ZdgE0nC%2F%2BSowm%2F8is%3D&reserved=0>\n\n\n\n\n\n\n\n\n\n\nHi people,\nI have written language plugins for .spec files used in isolation tests. They are available for Vim and Visual Studio Code. I hope they will make reading the tests easier for you. If you find a problem, please open an issue!\n \nhttps://github.com/onlined/pgspec.vim\nhttps://github.com/onlined/pgspec-vsc",
"msg_date": "Thu, 19 Sep 2019 16:38:14 +0000",
"msg_from": "Ekin Dursun <Ekin.Dursun@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Syntax highlighting for Postgres spec files"
},
{
"msg_contents": "I didn't try as waiting to see if for emacs as well shows up :-) Do we want\nto get these in src/tools/editors?\n\nOn Thu, Sep 19, 2019 at 10:15 AM Ekin Dursun <Ekin.Dursun@microsoft.com>\nwrote:\n\n> Hi people,\n>\n> I have written language plugins for .spec files used in isolation tests.\n> They are available for Vim and Visual Studio Code. I hope they will make\n> reading the tests easier for you. If you find a problem, please open an\n> issue!\n>\n>\n>\n> https://github.com/onlined/pgspec.vim\n> <https://urldefense.proofpoint.com/v2/url?u=https-3A__nam06.safelinks.protection.outlook.com_-3Furl-3Dhttps-253A-252F-252Fgithub.com-252Fonlined-252Fpgspec.vim-26data-3D02-257C01-257CEkin.Dursun-2540microsoft.com-257C57ec872ba3f1461807cd08d73c3a6a56-257C72f988bf86f141af91ab2d7cd011db47-257C1-257C0-257C637044094001701907-26sdata-3DTcaIi4QzmWTX4u4G-252FZ8agNcqHK1IzW-252BuZs3kQ6E-252FQkg-253D-26reserved-3D0&d=DwMFAg&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=gxIaqms7ncm0pvqXLI_xjkgwSStxAET2rnZQpzba2KM&m=vaHw1t3-NfB3kXpC9YOqFFEg4yxuP45iVKPJiYWftVQ&s=UukqqC47b3fa2B_x-ukj6g7QGzVLxVDw36GzytEJu6A&e=>\n>\n> https://github.com/onlined/pgspec-vsc\n> <https://urldefense.proofpoint.com/v2/url?u=https-3A__nam06.safelinks.protection.outlook.com_-3Furl-3Dhttps-253A-252F-252Fgithub.com-252Fonlined-252Fpgspec-2Dvsc-26data-3D02-257C01-257CEkin.Dursun-2540microsoft.com-257C57ec872ba3f1461807cd08d73c3a6a56-257C72f988bf86f141af91ab2d7cd011db47-257C1-257C0-257C637044094001701907-26sdata-3DiMeh5t99CauvpudaxzaiR-252B-252BIx6ZdgE0nC-252F-252BSowm-252F8is-253D-26reserved-3D0&d=DwMFAg&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=gxIaqms7ncm0pvqXLI_xjkgwSStxAET2rnZQpzba2KM&m=vaHw1t3-NfB3kXpC9YOqFFEg4yxuP45iVKPJiYWftVQ&s=qEEFiRj0gkZLCK0YJ45eN-XJu4ZPUquODuHTYMNUfCs&e=>\n>\n>\n>\n\nI didn't try as waiting to see if for emacs as well shows up :-) Do we want to get these in src/tools/editors?On Thu, Sep 19, 2019 at 10:15 AM Ekin Dursun <Ekin.Dursun@microsoft.com> wrote:\n\n\nHi 
people,\nI have written language plugins for .spec files used in isolation tests. They are available for Vim and Visual Studio Code. I hope they will make reading the tests easier for you. If you find a problem, please open an issue!\n \nhttps://github.com/onlined/pgspec.vim\nhttps://github.com/onlined/pgspec-vsc",
"msg_date": "Thu, 19 Sep 2019 17:14:20 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Syntax highlighting for Postgres spec files"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 05:14:20PM -0700, Ashwin Agrawal wrote:\n> I didn't try as waiting to see if for emacs as well shows up :-) Do we want\n> to get these in src/tools/editors?\n\nA full complex plugin may be hard to justify, especially as I suspect\nthat there are very few hackers able to create their own isolation\ntests in Visual. But my take is that if you can have something\nwhich can be directly plugged into emacs.samples and vim.samples then\nthere is room for it.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2019 10:42:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Syntax highlighting for Postgres spec files"
}
] |
[
{
"msg_contents": "Hi all,\n\nThis is a new thread related to the bug analyzed here:\nhttps://www.postgresql.org/message-id/20190919083203.GC21144@paquier.xyz\n\nAnd in short, if you attempt to do an ALTER TABLE with a custom\nreloptions the command burns itself, like that for example this\nsequence:\ncreate extension bloom;\ncreate table aa (a int);\ncreate index aa_bloom ON aa USING bloom (a);\nalter index aa_bloom set (length = 20);\n\nWhich results in the following error:\nERROR: XX000: unrecognized lock mode: 2139062143\nLOCATION: LockAcquireExtended, lock.c:756\n\nThe root of the problem is that the set of relation options loaded\nfinds properly the custom options set when looking for the lock mode\nto use in AlterTableGetRelOptionsLockLevel(), but we never set the\nlock mode this option should use when allocating it, resulting in a\nfailure. The current set of APIs does not allow either to set the\nlock mode associated with a custom reloption.\n\nHence attached is a patch set to address those issues:\n- 0001 makes sure that any existing module creating a custom reloption\nhas the lock mode set to AccessExclusiveMode, which would be a sane\ndefault anyway. I think that we should just back-patch that and close\nany holes.\n- 0002 is a patch which we could use to extend the existing reloption\nAPIs to set the lock mode used. I am aware of the recent work done by\nNikolay in CC to rework this API set, but I am unsure where we are\ngoing there, and the resulting patch is actually very short, being\n20-line long with the current infrastructure. That could go into\nHEAD. Table AMs have been added in v12 so custom reloptions could\ngain more in popularity, but as we are very close to the release it\nwould not be cool to break those APIs. 
The patch simplicity could\nalso be a reason sufficient for a back-patch, and I don't think that\nthere are many users of them yet.\n\nMy take would be to use 0001 on all branches (or I am missing\nsomething related to custom relopts manipulation?), and consider 0002\non HEAD.\n\nThoughts?\n--\nMichael",
"msg_date": "Fri, 20 Sep 2019 10:38:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Custom reloptions and lock modes"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 10:38:31AM +0900, Michael Paquier wrote:\n> Hi all,\n> \n> This is a new thread related to the bug analyzed here:\n> https://www.postgresql.org/message-id/20190919083203.GC21144@paquier.xyz\n> \n> And in short, if you attempt to do an ALTER TABLE with a custom\n> reloptions the command burns itself, like that for example this\n> sequence:\n> create extension bloom;\n> create table aa (a int);\n> create index aa_bloom ON aa USING bloom (a);\n> alter index aa_bloom set (length = 20);\n> \n> Which results in the following error:\n> - 0002 is a patch which we could use to extend the existing reloption\n> APIs to set the lock mode used. I am aware of the recent work done by\n> Nikolay in CC to rework this API set, but I am unsure where we are\n> going there, and the resulting patch is actually very short, being\n> 20-line long with the current infrastructure. That could go into\n> HEAD. Table AMs have been added in v12 so custom reloptions could\n> gain more in popularity, but as we are very close to the release it\n> would not be cool to break those APIs. The patch simplicity could\n> also be a reason sufficient for a back-patch, and I don't think that\n> there are many users of them yet.\n\nI mean a back-patch down to v12 for this part, but not further down.\nSorry for the possible confusion.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2019 10:44:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Custom reloptions and lock modes"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 7:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hence attached is a patch set to address those issues:\n> - 0001 makes sure that any existing module creating a custom reloption\n> has the lock mode set to AccessExclusiveMode, which would be a sane\n> default anyway. I think that we should just back-patch that and close\n> any holes.\nLooks good to me. The patch solves the issue and passes with\nregression tests. IMHO, it should be back-patched to all the branches.\n\n> - 0002 is a patch which we could use to extend the existing reloption\n> APIs to set the lock mode used. I am aware of the recent work done by\n> Nikolay in CC to rework this API set, but I am unsure where we are\n> going there, and the resulting patch is actually very short, being\n> 20-line long with the current infrastructure. That could go into\n> HEAD. Table AMs have been added in v12 so custom reloptions could\n> gain more in popularity, but as we are very close to the release it\n> would not be cool to break those APIs. The patch simplicity could\n> also be a reason sufficient for a back-patch, and I don't think that\n> there are many users of them yet.\n>\nI think this is good approach for now and can be committed on the HEAD only.\n\nOne small thing:\n\n add_int_reloption(bl_relopt_kind, buf,\n \"Number of bits generated for each index column\",\n- DEFAULT_BLOOM_BITS, 1, MAX_BLOOM_BITS);\n+ DEFAULT_BLOOM_BITS, 1, MAX_BLOOM_BITS,\n+ AccessExclusiveLock);\nDo we need a comment to explain why we're using AccessExclusiveLock in\nthis case?\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Sep 2019 11:59:13 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom reloptions and lock modes"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 11:59:13AM +0530, Kuntal Ghosh wrote:\n> On Fri, Sep 20, 2019 at 7:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Hence attached is a patch set to address those issues:\n>> - 0001 makes sure that any existing module creating a custom reloption\n>> has the lock mode set to AccessExclusiveMode, which would be a sane\n>> default anyway. I think that we should just back-patch that and close\n>> any holes.\n>\n> Looks good to me. The patch solves the issue and passes with\n> regression tests. IMHO, it should be back-patched to all the branches.\n\nThat's the plan but...\n\n>> - 0002 is a patch which we could use to extend the existing reloption\n>> APIs to set the lock mode used. I am aware of the recent work done by\n>> Nikolay in CC to rework this API set, but I am unsure where we are\n>> going there, and the resulting patch is actually very short, being\n>> 20-line long with the current infrastructure. That could go into\n>> HEAD. Table AMs have been added in v12 so custom reloptions could\n>> gain more in popularity, but as we are very close to the release it\n>> would not be cool to break those APIs. The patch simplicity could\n>> also be a reason sufficient for a back-patch, and I don't think that\n>> there are many users of them yet.\n>>\n>\n> I think this is good approach for now and can be committed on the\n> HEAD only.\n\nLet's wait a couple of days to see if others have any objections to\noffer on the matter. My plan would be to revisit this patch set after\nRC1 is tagged next week to at least fix the bug. 
I don't predict any\nstrong objections to the patch for HEAD, but who knows..\n\n> One small thing:\n> \n> add_int_reloption(bl_relopt_kind, buf,\n> \"Number of bits generated for each index column\",\n> - DEFAULT_BLOOM_BITS, 1, MAX_BLOOM_BITS);\n> + DEFAULT_BLOOM_BITS, 1, MAX_BLOOM_BITS,\n> + AccessExclusiveLock);\n> Do we need a comment to explain why we're using AccessExclusiveLock in\n> this case?\n\nBecause that's the safest default to use here? That seemed obvious to\nme.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2019 16:08:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Custom reloptions and lock modes"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 12:38 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > One small thing:\n> >\n> > add_int_reloption(bl_relopt_kind, buf,\n> > \"Number of bits generated for each index column\",\n> > - DEFAULT_BLOOM_BITS, 1, MAX_BLOOM_BITS);\n> > + DEFAULT_BLOOM_BITS, 1, MAX_BLOOM_BITS,\n> > + AccessExclusiveLock);\n> > Do we need a comment to explain why we're using AccessExclusiveLock in\n> > this case?\n>\n> Because that's the safest default to use here? That seemed obvious to\n> me.\nOkay. Sounds good.\n\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Sep 2019 12:40:51 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom reloptions and lock modes"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 12:40:51PM +0530, Kuntal Ghosh wrote:\n> Okay. Sounds good.\n\nThanks for the review. Attached is the patch set I am planning to\ncommit. I'll wait after the tag of this week as the first patch needs\nto go down to 9.6, the origin of the bug being 47167b7. The second\npatch would go only to HEAD, as discussed.\n--\nMichael",
"msg_date": "Tue, 24 Sep 2019 11:33:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Custom reloptions and lock modes"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 11:33:35AM +0900, Michael Paquier wrote:\n> Thanks for the review. Attached is the patch set I am planning to\n> commit. I'll wait after the tag of this week as the first patch needs\n> to go down to 9.6, the origin of the bug being 47167b7. The second\n> patch would go only to HEAD, as discussed.\n\nAnd applied both.\n--\nMichael",
"msg_date": "Wed, 25 Sep 2019 10:23:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Custom reloptions and lock modes"
}
] |
[
{
"msg_contents": "While testing something else (whether \"terminating walsender process due to\nreplication timeout\" was happening spuriously), I had logical replication\nset up streaming a default pgbench transaction load, with the publisher\nbeing 13devel-e1c8743 and subscriber being 12BETA4. Eventually I started\ngetting errors about requested wal segments being already removed:\n\n10863 sub idle 00000 2019-09-19 17:14:58.140 EDT LOG: starting logical\ndecoding for slot \"sub\"\n10863 sub idle 00000 2019-09-19 17:14:58.140 EDT DETAIL: Streaming\ntransactions committing after 79/EB0B17A0, reading WAL from 79/E70736A0.\n10863 sub idle 58P01 2019-09-19 17:14:58.140 EDT ERROR: requested WAL\nsegment 0000000100000079000000E7 has already been removed\n10863 sub idle 00000 2019-09-19 17:14:58.144 EDT LOG: disconnection:\nsession time: 0:00:00.030 user=jjanes database=jjanes host=10.0.2.2\nport=40830\n\nIt had been streaming for about 50 minutes before the error showed up, and\nit showed right when streaming was restarting after one of the replication\ntimeouts.\n\nIs there an innocent explanation for this? I thought logical replication\nslots provided an iron-clad guarantee that WAL would be retained until it\nwas no longer needed. I am just using pub/sub, none of the lower level\nstuff.\n\nCheers,\n\nJeff\n\nWhile testing something else (whether \"terminating walsender process due to replication timeout\" was happening spuriously), I had logical replication set up streaming a default pgbench transaction load, with the publisher being 13devel-e1c8743 and subscriber being 12BETA4. 
Eventually I started getting errors about requested wal segments being already removed:10863 sub idle 00000 2019-09-19 17:14:58.140 EDT LOG: starting logical decoding for slot \"sub\"10863 sub idle 00000 2019-09-19 17:14:58.140 EDT DETAIL: Streaming transactions committing after 79/EB0B17A0, reading WAL from 79/E70736A0.10863 sub idle 58P01 2019-09-19 17:14:58.140 EDT ERROR: requested WAL segment 0000000100000079000000E7 has already been removed10863 sub idle 00000 2019-09-19 17:14:58.144 EDT LOG: disconnection: session time: 0:00:00.030 user=jjanes database=jjanes host=10.0.2.2 port=40830It had been streaming for about 50 minutes before the error showed up, and it showed right when streaming was restarting after one of the replication timeouts.Is there an innocent explanation for this? I thought logical replication slots provided an iron-clad guarantee that WAL would be retained until it was no longer needed. I am just using pub/sub, none of the lower level stuff.Cheers,Jeff",
"msg_date": "Fri, 20 Sep 2019 08:45:34 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "WAL recycled despite logical replication slot"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 08:45:34AM -0400, Jeff Janes wrote:\n>While testing something else (whether \"terminating walsender process due to\n>replication timeout\" was happening spuriously), I had logical replication\n>set up streaming a default pgbench transaction load, with the publisher\n>being 13devel-e1c8743 and subscriber being 12BETA4. Eventually I started\n>getting errors about requested wal segments being already removed:\n>\n>10863 sub idle 00000 2019-09-19 17:14:58.140 EDT LOG: starting logical\n>decoding for slot \"sub\"\n>10863 sub idle 00000 2019-09-19 17:14:58.140 EDT DETAIL: Streaming\n>transactions committing after 79/EB0B17A0, reading WAL from 79/E70736A0.\n>10863 sub idle 58P01 2019-09-19 17:14:58.140 EDT ERROR: requested WAL\n>segment 0000000100000079000000E7 has already been removed\n>10863 sub idle 00000 2019-09-19 17:14:58.144 EDT LOG: disconnection:\n>session time: 0:00:00.030 user=jjanes database=jjanes host=10.0.2.2\n>port=40830\n>\n>It had been streaming for about 50 minutes before the error showed up, and\n>it showed right when streaming was restarting after one of the replication\n>timeouts.\n>\n>Is there an innocent explanation for this? I thought logical replication\n>slots provided an iron-clad guarantee that WAL would be retained until it\n>was no longer needed. I am just using pub/sub, none of the lower level\n>stuff.\n>\n\nI think you're right - this should not happen with replication slots.\nCan you provide more detailed setup instructions, so that I can try to\nreproduce and investigate the isssue?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 20 Sep 2019 17:27:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL recycled despite logical replication slot"
},
{
"msg_contents": "Hi, \n\nOn September 20, 2019 5:45:34 AM PDT, Jeff Janes <jeff.janes@gmail.com> wrote:\n>While testing something else (whether \"terminating walsender process\n>due to\n>replication timeout\" was happening spuriously), I had logical\n>replication\n>set up streaming a default pgbench transaction load, with the publisher\n>being 13devel-e1c8743 and subscriber being 12BETA4. Eventually I\n>started\n>getting errors about requested wal segments being already removed:\n>\n>10863 sub idle 00000 2019-09-19 17:14:58.140 EDT LOG: starting logical\n>decoding for slot \"sub\"\n>10863 sub idle 00000 2019-09-19 17:14:58.140 EDT DETAIL: Streaming\n>transactions committing after 79/EB0B17A0, reading WAL from\n>79/E70736A0.\n>10863 sub idle 58P01 2019-09-19 17:14:58.140 EDT ERROR: requested WAL\n>segment 0000000100000079000000E7 has already been removed\n>10863 sub idle 00000 2019-09-19 17:14:58.144 EDT LOG: disconnection:\n>session time: 0:00:00.030 user=jjanes database=jjanes host=10.0.2.2\n>port=40830\n>\n>It had been streaming for about 50 minutes before the error showed up,\n>and\n>it showed right when streaming was restarting after one of the\n>replication\n>timeouts.\n>\n>Is there an innocent explanation for this? I thought logical\n>replication\n>slots provided an iron-clad guarantee that WAL would be retained until\n>it\n>was no longer needed. I am just using pub/sub, none of the lower level\n>stuff.\n\nIt indeed should. What's the content of\npg_replication_slot for that slot?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Fri, 20 Sep 2019 15:16:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL recycled despite logical replication slot"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 11:27 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> >\n> >Is there an innocent explanation for this? I thought logical replication\n> >slots provided an iron-clad guarantee that WAL would be retained until it\n> >was no longer needed. I am just using pub/sub, none of the lower level\n> >stuff.\n> >\n>\n> I think you're right - this should not happen with replication slots.\n> Can you provide more detailed setup instructions, so that I can try to\n> reproduce and investigate the isssue?\n>\n\nIt is a bit messy, because this isn't what I was trying to test.\n\nThe basic set up is pretty simple:\n\nOn master:\n\npgbench -i -s 100\ncreate publication pgbench for table pgbench_accounts, pgbench_branches,\npgbench_history , pgbench_tellers;\npgbench -R200 -c4 -j4 -P60 -T360000 -n\n\non replica:\n\npgbench -i -s 1\ntruncate pgbench_history , pgbench_accounts, pgbench_branches,\npgbench_tellers;\ncreate subscription sub CONNECTION 'host=192.168.0.15' publication pgbench;\n\nThe messy part: It looked like the synch was never going to finish, so\nfirst I cut the rate down to -R20. Then what I thought I did was drop the\nprimary key on pgbench_accounts (manually doing a kill -15 on the synch\nworker to release the lock), wait for the copy to start again and then\nfinish and then start getting \"ERROR: logical replication target relation\n\"public.pgbench_accounts\" has neither REPLICA IDENTITY index nor PRIMARY\nKEY and published relation does not have REPLICA IDENTITY FULL\" log\nmessages, then I re-added the primary key. 
Then I increased the -R back to\n200, and about 50 minutes later got the WAL already removed error.\n\nBut now I can't seem to reproduce this, as the next time I tried to do the\nsynch with no primary key there doesn't seem to be a commit after the COPY\nfinishes so once it tries to replay the first update, it hits the above \"no\nprimary key\" error and then rolls back **the entire COPY** as well as\nthe single-row update, and starts the entire COPY over again before you\nhave a chance to intervene and build the index. So I'm guessing now that\neither the lack of a commit (which itself seems like a spectacularly bad\nidea) is situation dependent, or the very slow COPY had finished between\nthe time I had decided to drop the primary key, and the time I actually\nimplemented the drop.\n\nPerhaps important here is that the replica is rather underpowered. Write\nIO and fsyncs periodically become painfully slow, which is probably why\nthere are replication timeouts, and since the problem happened when trying\nto reestablish after a timeout I would guess that that is critical to the\nissue.\n\nI was running the master with fsync=off, but since the OS never crashed\nthat should not be the source of corruption.\n\n\nI'll try some more to reproduce this, but I wanted to make sure there was\nactually something here to reproduce, and not just my misunderstanding of\nhow things are supposed to work.\n\nCheers,\n\nJeff",
"msg_date": "Sun, 22 Sep 2019 11:32:17 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL recycled despite logical replication slot"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 6:25 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On September 20, 2019 5:45:34 AM PDT, Jeff Janes <jeff.janes@gmail.com>\n> wrote:\n> >While testing something else (whether \"terminating walsender process\n> >due to\n> >replication timeout\" was happening spuriously), I had logical\n> >replication\n> >set up streaming a default pgbench transaction load, with the publisher\n> >being 13devel-e1c8743 and subscriber being 12BETA4. Eventually I\n> >started\n> >getting errors about requested wal segments being already removed:\n> >\n> >10863 sub idle 00000 2019-09-19 17:14:58.140 EDT LOG: starting logical\n> >decoding for slot \"sub\"\n> >10863 sub idle 00000 2019-09-19 17:14:58.140 EDT DETAIL: Streaming\n> >transactions committing after 79/EB0B17A0, reading WAL from\n> >79/E70736A0.\n> >10863 sub idle 58P01 2019-09-19 17:14:58.140 EDT ERROR: requested WAL\n> >segment 0000000100000079000000E7 has already been removed\n> >10863 sub idle 00000 2019-09-19 17:14:58.144 EDT LOG: disconnection:\n> >session time: 0:00:00.030 user=jjanes database=jjanes host=10.0.2.2\n> >port=40830\n> >\n> >It had been streaming for about 50 minutes before the error showed up,\n> >and\n> >it showed right when streaming was restarting after one of the\n> >replication\n> >timeouts.\n> >\n> >Is there an innocent explanation for this? I thought logical\n> >replication\n> >slots provided an iron-clad guarantee that WAL would be retained until\n> >it\n> >was no longer needed. I am just using pub/sub, none of the lower level\n> >stuff.\n>\n> It indeed should. What's the content of\n> pg_replication_slot for that slot?\n>\n\nUnfortunately I don't think I have that preserved. 
If I can reproduce the\nissue, would preserving data/pg_replslot/sub/state help as well?\n\nCheers,\n\nJeff",
"msg_date": "Sun, 22 Sep 2019 11:45:05 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL recycled despite logical replication slot"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-22 11:45:05 -0400, Jeff Janes wrote:\n> On Fri, Sep 20, 2019 at 6:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > Hi,\n> > >Is there an innocent explanation for this? I thought logical\n> > >replication\n> > >slots provided an iron-clad guarantee that WAL would be retained until\n> > >it\n> > >was no longer needed. I am just using pub/sub, none of the lower level\n> > >stuff.\n> >\n> > It indeed should. What's the content of\n> > pg_replication_slot for that slot?\n> >\n> \n> Unfortunately I don't think I have that preserved. If I can reproduce the\n> issue, would preserving data/pg_replslot/sub/state help as well?\n\nCan't hurt. Best together with other slots, if they exists.\n\nCould you describe the system a bit?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 Sep 2019 16:37:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL recycled despite logical replication slot"
}
] |
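[Editorial note for this thread: a quick way to sanity-check an error like the one Jeff quotes is to map the slot's reading position back to a WAL segment file name. The sketch below is illustrative Python, not PostgreSQL code; it assumes the default 16MB `wal_segment_size` and timeline 1, matching the file name in the log.]

```python
WAL_SEG_SIZE = 16 * 1024 * 1024                # default wal_segment_size (16MB)
SEGS_PER_XLOGID = 0x100000000 // WAL_SEG_SIZE  # segments per "log" id (256)

def lsn_to_walfile(lsn: str, timeline: int = 1) -> str:
    """Map an LSN such as '79/E70736A0' to its WAL segment file name."""
    hi, lo = (int(part, 16) for part in lsn.split("/"))
    byte_pos = (hi << 32) | lo
    segno = byte_pos // WAL_SEG_SIZE
    return f"{timeline:08X}{segno // SEGS_PER_XLOGID:08X}{segno % SEGS_PER_XLOGID:08X}"

# The slot was reading WAL from 79/E70736A0 -- exactly the segment the
# server reported as already removed:
print(lsn_to_walfile("79/E70736A0"))
```

Running it on the LSN from the DETAIL line (79/E70736A0) yields the segment named in the ERROR, confirming the slot's read position really did fall inside the recycled file.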
[
{
"msg_contents": "Hello,\n\nI have come around a strange situation when using a unicode string\nthat has non normalized characters. The attached script 'initcap.sql'\ncan reproduce the problem.\n\nThe attached patch can fix the issue.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 20 Sep 2019 22:44:11 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Wrong results using initcap() with non normalized string"
},
{
"msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> I have come around a strange situation when using a unicode string\n> that has non normalized characters. The attached script 'initcap.sql'\n> can reproduce the problem.\n> The attached patch can fix the issue.\n\nIf we're going to start worrying about non-normalized characters,\nI suspect there are far more places than this one that we'd have\nto consider buggy :-(.\n\nAs for the details of the patch, it seems overly certain that\nit's working with UTF8 data.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Sep 2019 19:20:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results using initcap() with non normalized string"
},
{
"msg_contents": "On 2019-Sep-20, Tom Lane wrote:\n\n> =?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> > I have come around a strange situation when using a unicode string\n> > that has non normalized characters. The attached script 'initcap.sql'\n> > can reproduce the problem.\n\nFor illustration purposes:\n\nSELECT initcap('ŞUB');\n initcap \n─────────\n Şub\n(1 fila)\n\nSELECT initcap('ŞUB');\n initcap \n─────────\n ŞUb\n(1 fila)\n\n> If we're going to start worrying about non-normalized characters,\n> I suspect there are far more places than this one that we'd have\n> to consider buggy :-(.\n\nI would think that we have to start somewhere, rather than take the\nposition that we can never do anything about it.\n\n(ref: https://www.postgresql.org/message-id/flat/53E179E1.3060404%402ndquadrant.com )\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Sep 2019 21:42:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results using initcap() with non normalized string"
},
{
"msg_contents": "On Sat, Sep 21, 2019 at 2:42 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Sep-20, Tom Lane wrote:\n>\n> > If we're going to start worrying about non-normalized characters,\n> > I suspect there are far more places than this one that we'd have\n> > to consider buggy :-(.\n>\n> I would think that we have to start somewhere, rather than take the\n> position that we can never do anything about it.\n>\n> (ref: https://www.postgresql.org/message-id/flat/53E179E1.3060404%402ndquadrant.com )\n\nThis conversation is prior to having the normalization code available\n[1]. Nowadays this particular issue seems like low hanging fruit, but\nI agree it would be problematic if it was the only normalization-aware\nfunction, although most functions are sure to be troubleless if\nnothing has been reported before.\n\nThe attached patch addresses the comment about assuming UTF8.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=60f11b87a2349985230c08616fa8a34ffde934c8",
"msg_date": "Sun, 22 Sep 2019 13:15:38 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Wrong results using initcap() with non normalized string"
},
{
"msg_contents": "On 2019-Sep-22, Juan Jos� Santamar�a Flecha wrote:\n\n> The attached patch addresses the comment about assuming UTF8.\n\nThe UTF8 bits looks reasonable to me. I guess the other part of that\nquestion is whether we support any other multibyte encoding that\nsupports combining characters. Maybe for cases other than UTF8 we can\ntest for 0-width chars (using pg_encoding_dsplen() perhaps?) and drive\nthe upper/lower decision off that? (For the UTF8 case, I don't know if\nJuanjo's proposal is better than pg_encoding_dsplen. Both seem to boil\ndown to a bsearch, though unicode_norm.c's table seems much larger than\nwchar.c's).\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 28 Sep 2019 22:38:08 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results using initcap() with non normalized string"
},
{
"msg_contents": "On Sun, Sep 29, 2019 at 3:38 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> The UTF8 bits looks reasonable to me. I guess the other part of that\n> question is whether we support any other multibyte encoding that\n> supports combining characters. Maybe for cases other than UTF8 we can\n> test for 0-width chars (using pg_encoding_dsplen() perhaps?) and drive\n> the upper/lower decision off that? (For the UTF8 case, I don't know if\n> Juanjo's proposal is better than pg_encoding_dsplen. Both seem to boil\n> down to a bsearch, though unicode_norm.c's table seems much larger than\n> wchar.c's).\n>\n\nUsing pg_encoding_dsplen() looks like the way to go. The normalizarion\nlogic included in ucs_wcwidth() already does what is need to avoid the\nissue, so there is no need to use unicode_norm_table.h. UTF8 is the\nonly multibyte encoding that can return a 0-width dsplen, so this\napproach would also works for all the other encodings that do not use\ncombining characters.\n\nPlease find attached a patch with this approach.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 3 Oct 2019 20:39:18 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Wrong results using initcap() with non normalized string"
}
] |
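[Editorial note for this thread: the failure mode under discussion is easy to reproduce outside PostgreSQL, because any title-casing routine that treats a combining mark as a word boundary makes the same mistake. A purely illustrative Python sketch — not the proposed patch — where 'Ş' is spelled as S followed by COMBINING CEDILLA (U+0327):]

```python
import unicodedata

decomposed = "S\u0327UB"                                # 'Ş' as S + U+0327
precomposed = unicodedata.normalize("NFC", decomposed)  # single code point U+015E

# U+0327 is a mark, not a cased character, so naive word-boundary logic
# treats the following 'U' as the start of a new word:
print(decomposed.title())    # 'ŞUb' -- the same wrong result as initcap()
print(precomposed.title())   # 'Şub' -- correct once the input is normalized
```

This mirrors Álvaro's two SELECTs: the input that initcaps correctly is precomposed, the one that yields 'ŞUb' is decomposed.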
[
{
"msg_contents": "Hi,\n\nI wonder if you guys can help me with this, I've been struggling with this query for almost a week and I haven't been able to tune it, it runs forever and I need it to run fast.\n\nRegards.\n\nSteven Castillo",
"msg_date": "Fri, 20 Sep 2019 21:21:59 +0000",
"msg_from": "\"Castillo, Steven (Agile)\" <Steven.Castillo@umusic.com>",
"msg_from_op": true,
"msg_subject": "Hi guys, HELP please"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 09:21:59PM +0000, Castillo, Steven (Agile)\nwrote:\n>Hi,\n>\n>I wonder if you guys can help me with this, I've been struggling with\n>this query for almost a week and I haven't been able to tune it, it\n>runs forever and I need it to run fast.\n>\n\nHard to say, because all we have is an explain without any additional\ninformation (like amount of data, PostgreSQL version, settings like\nwork_mem). Maybe look at [1] which explains what to try, and also what\nto include in your question.\n\n[1] https://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nNow, if I had to guess, I'd say this is a case of underestimate, causing\na choice of nested loops. That's fairly deadly.\n\nIn particular, I'm talking about this:\n\n -> Seq Scan on t_territory_common tc (cost=0.00..6494012.54 rows=49 width=232)\n Filter: (((source)::text = 'DSCHED'::text) AND ... many conditions .... \n\nHow many rows does this return when you query just this table (with all\nthe conditions)? Chances are those conditions are correlated, in which\ncase the number of rows is much higher than 49 (possibly by orders of\nmagnitude).\n\nIf that's the case, you have multiple options:\n\n1) create a temporary table, and then joining it (can be analyzed,\nestimates are likely much better)\n\n2) disable nested loops for this query (useful for testing/investigation)\n\n3) create extended statistics on those correlated columns (depends on\nwhich PostgreSQL version you use)\n\n4) redo the table schema (e.g. have a special column representing\ncombination of those columns), so that there's just a single condition\n(thus no misestimate due to correlation)\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 23 Sep 2019 13:42:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Hi guys, HELP please"
}
] |
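[Editorial note for this thread: Tomas's point about correlated filter conditions can be shown numerically. The sketch below is plain Python and purely illustrative — no real schema involved. It builds two perfectly correlated columns and compares the row count a planner would estimate under the independence assumption with the true count:]

```python
n = 100_000
distinct = 100
# Two perfectly correlated columns: col_b always equals col_a.
rows = [(i % distinct, i % distinct) for i in range(n)]

sel_a = sum(1 for a, b in rows if a == 0) / n          # selectivity of a = 0
sel_b = sum(1 for a, b in rows if b == 0) / n          # selectivity of b = 0
independent_estimate = n * sel_a * sel_b               # what independence predicts
actual = sum(1 for a, b in rows if a == 0 and b == 0)  # what really comes back

print(independent_estimate, actual)   # the estimate is off by a factor of 100
```

An underestimate of this shape (49 rows expected where thousands come back) is what pushes the planner into nested loops; `CREATE STATISTICS ... (dependencies)` on the correlated columns, option 3 in Tomas's list, is the in-database fix.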
[
{
"msg_contents": "The step to reproduce this issue.\n1. Create a table\n create table gist_point_tbl(id int4, p point);\n create index gist_pointidx on gist_point_tbl using gist(p);\n2. Insert data\n insert into gist_point_tbl (id, p) select g, point(g*10, g*10) from generate_series(1, 1000000) g;\n3. Delete data\n delete from gist_point_bl;\n4. Vacuum table\n vacuum gist_point_tbl;\n -- Send SIGINT to vacuum process after WAL-log of the truncation is flushed and the truncation is not finished\n -- We will receive error message \"ERROR: canceling statement due to user request\"\n5. Vacuum table again\n vacuum gist_point tbl;\n -- The standby node crashed and the PANIC log is \"PANIC: WAL contains references to invalid pages\"\n\n\nThe standby node succeed to replay truncate log but master node doesn't truncate the file, it will be crashed if master node writes to these blocks which truncated in standby node.\nI try to fix issue to prevent query cancel interrupts during truncating.\n\n\ndiff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c\nindex 5df4382b7e..04b696ae01 100644\n--- a/src/backend/catalog/storage.c\n+++ b/src/backend/catalog/storage.c\n@@ -26,6 +26,7 @@\n #include \"access/xlogutils.h\"\n #include \"catalog/storage.h\"\n #include \"catalog/storage_xlog.h\"\n+#include \"miscadmin.h\"\n #include \"storage/freespace.h\"\n #include \"storage/smgr.h\"\n #include \"utils/memutils.h\"\n@@ -248,6 +249,14 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n if (vm)\n visibilitymap_truncate(rel, nblocks);\n\n\n+ /*\n+ * When master node flush WAL-log of the truncation and then receive SIGINT signal to cancel\n+ * this transaction before the truncation, if standby receive this WAL-log and do the truncation,\n+ * standby node will crash when master node writes to these blocks which are truncated in standby node.\n+ * So we prevent query cancel interrupts.\n+ */\n+ HOLD_CANCEL_INTERRUPTS();\n+\n /*\n * We WAL-log the truncation before 
actually truncating, which means\n * trouble if the truncation fails. If we then crash, the WAL replay\n@@ -288,6 +297,8 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n\n\n /* Do the real work */\n smgrtruncate(rel->rd_smgr, MAIN_FORKNUM, nblocks);\n+\n+ RESUME_CANCEL_INTERRUPTS();\n }\nThe step to reproduce this issue.1. Create a table create table gist_point_tbl(id int4, p point); create index gist_pointidx on gist_point_tbl using gist(p);2. Insert data insert into gist_point_tbl (id, p) select g, point(g*10, g*10) from generate_series(1, 1000000) g;3. Delete data delete from gist_point_bl;4. Vacuum table vacuum gist_point_tbl; -- Send SIGINT to vacuum process after WAL-log of the truncation is flushed and the truncation is not finished -- We will receive error message \"ERROR: canceling statement due to user request\"5. Vacuum table again vacuum gist_point tbl; -- The standby node crashed and the PANIC log is \"PANIC: WAL contains references to invalid pages\"The standby node succeed to replay truncate log but master node doesn't truncate the file, it will be crashed if master node writes to these blocks which truncated in standby node.I try to fix issue to prevent query cancel interrupts during truncating.diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.cindex 5df4382b7e..04b696ae01 100644--- a/src/backend/catalog/storage.c+++ b/src/backend/catalog/storage.c@@ -26,6 +26,7 @@ #include \"access/xlogutils.h\" #include \"catalog/storage.h\" #include \"catalog/storage_xlog.h\"+#include \"miscadmin.h\" #include \"storage/freespace.h\" #include \"storage/smgr.h\" #include \"utils/memutils.h\"@@ -248,6 +249,14 @@ RelationTruncate(Relation rel, BlockNumber nblocks) if (vm) visibilitymap_truncate(rel, nblocks);+ /*+ * When master node flush WAL-log of the truncation and then receive SIGINT signal to cancel+ * this transaction before the truncation, if standby receive this WAL-log and do the truncation,+ * standby node will crash when 
master node writes to these blocks which are truncated in standby node.+ * So we prevent query cancel interrupts.+ */+ HOLD_CANCEL_INTERRUPTS();+ /* * We WAL-log the truncation before actually truncating, which means * trouble if the truncation fails. If we then crash, the WAL replay@@ -288,6 +297,8 @@ RelationTruncate(Relation rel, BlockNumber nblocks) /* Do the real work */ smgrtruncate(rel->rd_smgr, MAIN_FORKNUM, nblocks);++ RESUME_CANCEL_INTERRUPTS(); }",
"msg_date": "Sun, 22 Sep 2019 00:38:03 +0800 (CST)",
"msg_from": "Thunder <thunder1@126.com>",
"msg_from_op": true,
"msg_subject": "PATCH: standby crashed when replay block which truncated in standby\n but failed to truncate in master node"
},
{
"msg_contents": "Is this an issue? \nCan we fix like this?\nThanks!\n\n\n\n\n\n\nAt 2019-09-22 00:38:03, \"Thunder\" <thunder1@126.com> wrote:\n\nThe step to reproduce this issue.\n1. Create a table\n create table gist_point_tbl(id int4, p point);\n create index gist_pointidx on gist_point_tbl using gist(p);\n2. Insert data\n insert into gist_point_tbl (id, p) select g, point(g*10, g*10) from generate_series(1, 1000000) g;\n3. Delete data\n delete from gist_point_bl;\n4. Vacuum table\n vacuum gist_point_tbl;\n -- Send SIGINT to vacuum process after WAL-log of the truncation is flushed and the truncation is not finished\n -- We will receive error message \"ERROR: canceling statement due to user request\"\n5. Vacuum table again\n vacuum gist_point tbl;\n -- The standby node crashed and the PANIC log is \"PANIC: WAL contains references to invalid pages\"\n\n\nThe standby node succeed to replay truncate log but master node doesn't truncate the file, it will be crashed if master node writes to these blocks which truncated in standby node.\nI try to fix issue to prevent query cancel interrupts during truncating.\n\n\ndiff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c\nindex 5df4382b7e..04b696ae01 100644\n--- a/src/backend/catalog/storage.c\n+++ b/src/backend/catalog/storage.c\n@@ -26,6 +26,7 @@\n #include \"access/xlogutils.h\"\n #include \"catalog/storage.h\"\n #include \"catalog/storage_xlog.h\"\n+#include \"miscadmin.h\"\n #include \"storage/freespace.h\"\n #include \"storage/smgr.h\"\n #include \"utils/memutils.h\"\n@@ -248,6 +249,14 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n if (vm)\n visibilitymap_truncate(rel, nblocks);\n\n\n+ /*\n+ * When master node flush WAL-log of the truncation and then receive SIGINT signal to cancel\n+ * this transaction before the truncation, if standby receive this WAL-log and do the truncation,\n+ * standby node will crash when master node writes to these blocks which are truncated in standby 
node.\n+ * So we prevent query cancel interrupts.\n+ */\n+ HOLD_CANCEL_INTERRUPTS();\n+\n /*\n * We WAL-log the truncation before actually truncating, which means\n * trouble if the truncation fails. If we then crash, the WAL replay\n@@ -288,6 +297,8 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n\n\n /* Do the real work */\n smgrtruncate(rel->rd_smgr, MAIN_FORKNUM, nblocks);\n+\n+ RESUME_CANCEL_INTERRUPTS();\n }",
"msg_date": "Mon, 23 Sep 2019 15:48:50 +0800 (CST)",
"msg_from": "Thunder <thunder1@126.com>",
"msg_from_op": false,
"msg_subject": "Re:PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 03:48:50PM +0800, Thunder wrote:\n>Is this an issue?\n>Can we fix like this?\n>Thanks!\n>\n\nI do think it is a valid issue. No opinion on the fix yet, though.\nThe report was sent on saturday, so patience ;-)\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 23 Sep 2019 13:45:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 01:45:14PM +0200, Tomas Vondra wrote:\n> On Mon, Sep 23, 2019 at 03:48:50PM +0800, Thunder wrote:\n>> Is this an issue?\n>> Can we fix like this?\n>> Thanks!\n>> \n> \n> I do think it is a valid issue. No opinion on the fix yet, though.\n> The report was sent on saturday, so patience ;-)\n\nAnd for some others it was even a longer weekend. Anyway, the problem\ncan be reproduced if you apply the attached which introduces a failure\npoint, and then if you run the following commands:\ncreate table aa as select 1;\ndelete from aa;\n\\! touch /tmp/truncate_flag\nvacuum aa;\n\\! rm /tmp/truncate_flag\nvacuum aa; -- panic on standby\n\nThis also points out that there are other things to worry about than\ninterruptions, as for example DropRelFileNodeLocalBuffers() could lead\nto an ERROR, and this happens before the physical truncation is done\nbut after the WAL record is replayed on the standby, so any failures\nhappening at the truncation phase before the work is done would be a\nproblem. However we are talking about failures which should not\nhappen and these are elog() calls. It would be tempting to add a\ncritical section here, but we could still have problems if we have a\nfailure after the WAL record has been flushed, which means that it\nwould be replayed on the standby, and the surrounding comments are\nclear about that. In short, as a matter of safety I'd like to think\nthat what you are suggesting is rather acceptable (aka hold interrupts\nbefore the WAL record is written and release after the physical\ntruncate), so as truncation avoids failures possible to avoid.\n\nDo others have thoughts to share on the matter?\n--\nMichael",
"msg_date": "Tue, 24 Sep 2019 10:40:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "Hello.\n\nAt Tue, 24 Sep 2019 10:40:19 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190924014019.GB2012@paquier.xyz>\n> On Mon, Sep 23, 2019 at 01:45:14PM +0200, Tomas Vondra wrote:\n> > On Mon, Sep 23, 2019 at 03:48:50PM +0800, Thunder wrote:\n> >> Is this an issue?\n> >> Can we fix like this?\n> >> Thanks!\n> >> \n> > \n> > I do think it is a valid issue. No opinion on the fix yet, though.\n> > The report was sent on saturday, so patience ;-)\n> \n> And for some others it was even a longer weekend. Anyway, the problem\n> can be reproduced if you apply the attached which introduces a failure\n> point, and then if you run the following commands:\n> create table aa as select 1;\n> delete from aa;\n> \\! touch /tmp/truncate_flag\n> vacuum aa;\n> \\! rm /tmp/truncate_flag\n> vacuum aa; -- panic on standby\n> \n> This also points out that there are other things to worry about than\n> interruptions, as for example DropRelFileNodeLocalBuffers() could lead\n> to an ERROR, and this happens before the physical truncation is done\n> but after the WAL record is replayed on the standby, so any failures\n> happening at the truncation phase before the work is done would be a\n\nIndeed.\n\n> problem. However we are talking about failures which should not\n> happen and these are elog() calls. It would be tempting to add a\n> critical section here, but we could still have problems if we have a\n> failure after the WAL record has been flushed, which means that it\n> would be replayed on the standby, and the surrounding comments are\n\nAgreed.\n\n> clear about that. In short, as a matter of safety I'd like to think\n> that what you are suggesting is rather acceptable (aka hold interrupts\n> before the WAL record is written and release after the physical\n> truncate), so as truncation avoids failures possible to avoid.\n> \n> Do others have thoughts to share on the matter?\n\nAgreed for the concept, but does the patch work as described? 
It\nseems that query cancel doesn't fire during the holded-off\nsection since no CHECK_FOR_INTERRUPTS() there.\n\nregares.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 Sep 2019 12:46:19 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "At Tue, 24 Sep 2019 12:46:19 +0900 (Tokyo Standard Time), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in <20190924.124619.248088532.horikyota.ntt@gmail.com>\n> > clear about that. In short, as a matter of safety I'd like to think\n> > that what you are suggesting is rather acceptable (aka hold interrupts\n> > before the WAL record is written and release after the physical\n> > truncate), so as truncation avoids failures possible to avoid.\n> > \n> > Do others have thoughts to share on the matter?\n> \n> Agreed for the concept, but does the patch work as described? It\n> seems that query cancel doesn't fire during the holded-off\n> section since no CHECK_FOR_INTERRUPTS() there.\n\nOf course I found no *explicit* ones. But I found one\nereport(DEBUG1 in register_dirty_segment. So it will work at\nleast for the case where fsync request queue is full.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 Sep 2019 14:48:16 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 02:48:16PM +0900, Kyotaro Horiguchi wrote:\n> Of course I found no *explicit* ones. But I found one\n> ereport(DEBUG1 in register_dirty_segment. So it will work at\n> least for the case where fsync request queue is full.\n\nExactly. I have not checked the patch in details, but I think that\nwe should not rely on the assumption that no code paths in this area do\nnot check after CHECK_FOR_INTERRUPTS() as smgrtruncate() does much\nmore than just the physical segment truncation.\n--\nMichael",
"msg_date": "Wed, 25 Sep 2019 12:24:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "At Wed, 25 Sep 2019 12:24:03 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190925032403.GF1815@paquier.xyz>\n> On Tue, Sep 24, 2019 at 02:48:16PM +0900, Kyotaro Horiguchi wrote:\n> > Of course I found no *explicit* ones. But I found one\n> > ereport(DEBUG1 in register_dirty_segment. So it will work at\n> > least for the case where fsync request queue is full.\n> \n> Exactly. I have not checked the patch in details, but I think that\n> we should not rely on the assumption that no code paths in this area do\n> not check after CHECK_FOR_INTERRUPTS() as smgrtruncate() does much\n> more than just the physical segment truncation.\n\nAgreed to the point. Just I doubted that it really fixes the\nauthor's problem. And confirmed that it can be.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Sep 2019 15:55:46 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 10:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 23, 2019 at 01:45:14PM +0200, Tomas Vondra wrote:\n> > On Mon, Sep 23, 2019 at 03:48:50PM +0800, Thunder wrote:\n> >> Is this an issue?\n> >> Can we fix like this?\n> >> Thanks!\n> >>\n> >\n> > I do think it is a valid issue. No opinion on the fix yet, though.\n> > The report was sent on saturday, so patience ;-)\n>\n> And for some others it was even a longer weekend. Anyway, the problem\n> can be reproduced if you apply the attached which introduces a failure\n> point, and then if you run the following commands:\n> create table aa as select 1;\n> delete from aa;\n> \\! touch /tmp/truncate_flag\n> vacuum aa;\n> \\! rm /tmp/truncate_flag\n> vacuum aa; -- panic on standby\n>\n> This also points out that there are other things to worry about than\n> interruptions, as for example DropRelFileNodeLocalBuffers() could lead\n> to an ERROR, and this happens before the physical truncation is done\n> but after the WAL record is replayed on the standby, so any failures\n> happening at the truncation phase before the work is done would be a\n> problem. However we are talking about failures which should not\n> happen and these are elog() calls. It would be tempting to add a\n> critical section here, but we could still have problems if we have a\n> failure after the WAL record has been flushed, which means that it\n> would be replayed on the standby, and the surrounding comments are\n> clear about that.\n\nCould you elaborate what problem adding a critical section there occurs?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 26 Sep 2019 01:13:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 01:13:56AM +0900, Fujii Masao wrote:\n> On Tue, Sep 24, 2019 at 10:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> This also points out that there are other things to worry about than\n>> interruptions, as for example DropRelFileNodeLocalBuffers() could lead\n>> to an ERROR, and this happens before the physical truncation is done\n>> but after the WAL record is replayed on the standby, so any failures\n>> happening at the truncation phase before the work is done would be a\n>> problem. However we are talking about failures which should not\n>> happen and these are elog() calls. It would be tempting to add a\n>> critical section here, but we could still have problems if we have a\n>> failure after the WAL record has been flushed, which means that it\n>> would be replayed on the standby, and the surrounding comments are\n>> clear about that.\n> \n> Could you elaborate what problem adding a critical section there occurs?\n\nWrapping the call of smgrtruncate() within RelationTruncate() to use a\ncritical section would make things worse from the user perspective on\nthe primary, no? If the physical truncation fails, we would still\nfail WAL replay on the standby, but instead of generating an ERROR in\nthe session of the user attempting the TRUNCATE, the whole primary\nwould be taken down.\n--\nMichael",
"msg_date": "Fri, 27 Sep 2019 15:14:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 3:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 26, 2019 at 01:13:56AM +0900, Fujii Masao wrote:\n> > On Tue, Sep 24, 2019 at 10:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> This also points out that there are other things to worry about than\n> >> interruptions, as for example DropRelFileNodeLocalBuffers() could lead\n> >> to an ERROR, and this happens before the physical truncation is done\n> >> but after the WAL record is replayed on the standby, so any failures\n> >> happening at the truncation phase before the work is done would be a\n> >> problem. However we are talking about failures which should not\n> >> happen and these are elog() calls. It would be tempting to add a\n> >> critical section here, but we could still have problems if we have a\n> >> failure after the WAL record has been flushed, which means that it\n> >> would be replayed on the standby, and the surrounding comments are\n> >> clear about that.\n> >\n> > Could you elaborate what problem adding a critical section there occurs?\n>\n> Wrapping the call of smgrtruncate() within RelationTruncate() to use a\n> critical section would make things worse from the user perspective on\n> the primary, no? If the physical truncation fails, we would still\n> fail WAL replay on the standby, but instead of generating an ERROR in\n> the session of the user attempting the TRUNCATE, the whole primary\n> would be taken down.\n\nThanks for elaborating that! Understood.\n\nBut this can cause subsequent recovery to always fail with invalid-pages error\nand the server not to start up. This is bad. So, to allviate the situation,\nI'm thinking it would be worth adding something like igore_invalid_pages\ndeveloper parameter. When this parameter is set to true, the startup process\nalways ignores invalid-pages errors. Thought?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 3 Oct 2019 13:49:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Thu, Oct 03, 2019 at 01:49:34PM +0900, Fujii Masao wrote:\n> But this can cause subsequent recovery to always fail with invalid-pages error\n> and the server not to start up. This is bad. So, to allviate the situation,\n> I'm thinking it would be worth adding something like igore_invalid_pages\n> developer parameter. When this parameter is set to true, the startup process\n> always ignores invalid-pages errors. Thought?\n\nThat could be helpful.\n--\nMichael",
"msg_date": "Thu, 3 Oct 2019 13:57:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Thu, Oct 3, 2019 at 1:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Oct 03, 2019 at 01:49:34PM +0900, Fujii Masao wrote:\n> > But this can cause subsequent recovery to always fail with invalid-pages error\n> > and the server not to start up. This is bad. So, to allviate the situation,\n> > I'm thinking it would be worth adding something like igore_invalid_pages\n> > developer parameter. When this parameter is set to true, the startup process\n> > always ignores invalid-pages errors. Thought?\n>\n> That could be helpful.\n\nSo attached patch adds new developer GUC \"ignore_invalid_pages\".\nSetting ignore_invalid_pages to true causes the system\nto ignore the failure (but still report a warning), and continue recovery.\n\nI will add this to next CommitFest.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Thu, 3 Oct 2019 17:54:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Thu, Oct 03, 2019 at 05:54:40PM +0900, Fujii Masao wrote:\n> On Thu, Oct 3, 2019 at 1:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Oct 03, 2019 at 01:49:34PM +0900, Fujii Masao wrote:\n> > > But this can cause subsequent recovery to always fail with invalid-pages error\n> > > and the server not to start up. This is bad. So, to allviate the situation,\n> > > I'm thinking it would be worth adding something like igore_invalid_pages\n> > > developer parameter. When this parameter is set to true, the startup process\n> > > always ignores invalid-pages errors. Thought?\n> >\n> > That could be helpful.\n> \n> So attached patch adds new developer GUC \"ignore_invalid_pages\".\n> Setting ignore_invalid_pages to true causes the system\n> to ignore the failure (but still report a warning), and continue recovery.\n> \n> I will add this to next CommitFest.\n\nNo actual objections against this patch from me as a dev option.\n\n+ Detection of WAL records having references to invalid pages during\n+ recovery causes <productname>PostgreSQL</productname> to report\n+ an error, aborting the recovery. Setting\nWell, that's not really an error. This triggers a PANIC, aka crashes\nthe server. And in this case the actual problem is that you may not\nbe able to move on with recovery when restarting the server again,\nexcept if luck is on your side because you would continuously face\nit..\n\n+ recovery. This behavior may <emphasis>cause crashes, data loss,\n+ propagate or hide corruption, or other serious problems</emphasis>.\nNit: indentation on the second line here.\n\n+ However, it may allow you to get past the error, finish the recovery,\n+ and cause the server to start up.\nFor consistency here I would suggest the second part of the sentence\nto be \"TO finish recovery, and TO cause the server to start up\".\n\n+ The default setting is off, and it can only be set at server start.\nNit^2: Missing a <literal> markup for \"off\"?\n--\nMichael",
"msg_date": "Fri, 29 Nov 2019 11:39:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 11:39 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Oct 03, 2019 at 05:54:40PM +0900, Fujii Masao wrote:\n> > On Thu, Oct 3, 2019 at 1:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Thu, Oct 03, 2019 at 01:49:34PM +0900, Fujii Masao wrote:\n> > > > But this can cause subsequent recovery to always fail with invalid-pages error\n> > > > and the server not to start up. This is bad. So, to allviate the situation,\n> > > > I'm thinking it would be worth adding something like igore_invalid_pages\n> > > > developer parameter. When this parameter is set to true, the startup process\n> > > > always ignores invalid-pages errors. Thought?\n> > >\n> > > That could be helpful.\n> >\n> > So attached patch adds new developer GUC \"ignore_invalid_pages\".\n> > Setting ignore_invalid_pages to true causes the system\n> > to ignore the failure (but still report a warning), and continue recovery.\n> >\n> > I will add this to next CommitFest.\n>\n> No actual objections against this patch from me as a dev option.\n\nThanks for the review! Attached is the updated version of the patch.\n\n> + Detection of WAL records having references to invalid pages during\n> + recovery causes <productname>PostgreSQL</productname> to report\n> + an error, aborting the recovery. Setting\n> Well, that's not really an error. This triggers a PANIC, aka crashes\n> the server. And in this case the actual problem is that you may not\n> be able to move on with recovery when restarting the server again,\n> except if luck is on your side because you would continuously face\n> it..\n\nSo you're thinking that \"report an error\" should be changed to\n\"trigger a PANIC\"? Personally \"report an error\" sounds ok because\nPANIC is one of \"error\", I think. But if that misleads people,\nI will change the sentence.\n\n> + recovery. 
This behavior may <emphasis>cause crashes, data loss,\n> + propagate or hide corruption, or other serious problems</emphasis>.\n> Nit: indentation on the second line here.\n\nYes, I fixed that.\n\n> + However, it may allow you to get past the error, finish the recovery,\n> + and cause the server to start up.\n> For consistency here I would suggest the second part of the sentence\n> to be \"TO finish recovery, and TO cause the server to start up\".\n\nYes, I fixed that.\n\n> + The default setting is off, and it can only be set at server start.\n> Nit^2: Missing a <literal> markup for \"off\"?\n\nYes, I fixed that.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Mon, 16 Dec 2019 12:22:18 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 12:22:18PM +0900, Fujii Masao wrote:\n> > + Detection of WAL records having references to invalid pages during\n> > + recovery causes <productname>PostgreSQL</productname> to report\n> > + an error, aborting the recovery. Setting\n> > Well, that's not really an error. This triggers a PANIC, aka crashes\n> > the server. And in this case the actual problem is that you may not\n> > be able to move on with recovery when restarting the server again,\n> > except if luck is on your side because you would continuously face\n> > it..\n> \n> So you're thinking that \"report an error\" should be changed to\n> \"trigger a PANIC\"? Personally \"report an error\" sounds ok because\n> PANIC is one of \"error\", I think. But if that misleads people,\n> I will change the sentence.\n\nIn the context of a recovery, an ERROR is promoted to a FATAL, but\nhere are talking about something that bypasses the crash of the\nserver. So this could bring confusion. I think that the\ndocumentation should be crystal clear about that, with two aspects\noutlined when the parameter is disabled, somewhat like data_sync_retry\nactually:\n- A PANIC-level error is triggered.\n- It crashes the cluster.\n--\nMichael",
"msg_date": "Tue, 17 Dec 2019 14:19:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 2:19 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 16, 2019 at 12:22:18PM +0900, Fujii Masao wrote:\n> > > + Detection of WAL records having references to invalid pages during\n> > > + recovery causes <productname>PostgreSQL</productname> to report\n> > > + an error, aborting the recovery. Setting\n> > > Well, that's not really an error. This triggers a PANIC, aka crashes\n> > > the server. And in this case the actual problem is that you may not\n> > > be able to move on with recovery when restarting the server again,\n> > > except if luck is on your side because you would continuously face\n> > > it..\n> >\n> > So you're thinking that \"report an error\" should be changed to\n> > \"trigger a PANIC\"? Personally \"report an error\" sounds ok because\n> > PANIC is one of \"error\", I think. But if that misleads people,\n> > I will change the sentence.\n>\n> In the context of a recovery, an ERROR is promoted to a FATAL, but\n> here are talking about something that bypasses the crash of the\n> server. So this could bring confusion. I think that the\n> documentation should be crystal clear about that, with two aspects\n> outlined when the parameter is disabled, somewhat like data_sync_retry\n> actually:\n> - A PANIC-level error is triggered.\n> - It crashes the cluster.\n\nOK, I updated the patch that way.\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Thu, 16 Jan 2020 23:17:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 11:17:36PM +0900, Fujii Masao wrote:\n> OK, I updated the patch that way.\n> Attached is the updated version of the patch.\n\nThanks. I have few tweaks to propose to the docs.\n\n+ raise a PANIC-level error, aborting the recovery. Setting\nInstead of \"PANIC-level error\", I would just use \"PANIC error\", and\ninstead of \"aborting the recovery\" just \"crashing the server\".\n\n+ causes the system to ignore those WAL records\nWAL records are not ignored, but errors caused by incorrect page\nreferences in those WAL records are. The current phrasing sounds like\nthe WAL records are not applied.\n\nAnother thing that I just recalled. Do you think that it would be\nbetter to mention that invalid page references can only be seen after\nreaching the consistent point during recovery? The information given\nlooks enough, but I was just wondering if that's worth documenting or\nnot.\n--\nMichael",
"msg_date": "Fri, 17 Jan 2020 13:47:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 1:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jan 16, 2020 at 11:17:36PM +0900, Fujii Masao wrote:\n> > OK, I updated the patch that way.\n> > Attached is the updated version of the patch.\n>\n> Thanks. I have few tweaks to propose to the docs.\n>\n> + raise a PANIC-level error, aborting the recovery. Setting\n> Instead of \"PANIC-level error\", I would just use \"PANIC error\", and\n\nI have no strong opinion about this, but I used \"PANIC-level error\"\nbecause the description for data_sync_retry has already used it.\n\n> instead of \"aborting the recovery\" just \"crashing the server\".\n\nPANIC implies server crash, so IMO \"crashing the server\" is\na bit redundant, and \"aborting the recovery\" is better because\n\"continue the recovery\" is used later.\n\n> + causes the system to ignore those WAL records\n> WAL records are not ignored, but errors caused by incorrect page\n> references in those WAL records are. The current phrasing sounds like\n> the WAL records are not applied.\n\nSo, what about\n\n---------------\ncauses the system to ignore invalid page references in WAL records\n(but still report a warning), and continue the recovery.\n---------------\n\n> Another thing that I just recalled. Do you think that it would be\n> better to mention that invalid page references can only be seen after\n> reaching the consistent point during recovery? The information given\n> looks enough, but I was just wondering if that's worth documenting or\n> not.\n\nISTM that this is not the information that users should understand...\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 17 Jan 2020 19:36:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 07:36:51PM +0900, Fujii Masao wrote:\n> On Fri, Jan 17, 2020 at 1:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Thanks. I have few tweaks to propose to the docs.\n>>\n>> + raise a PANIC-level error, aborting the recovery. Setting\n>> Instead of \"PANIC-level error\", I would just use \"PANIC error\", and\n> \n> I have no strong opinion about this, but I used \"PANIC-level error\"\n> because the description for data_sync_retry has already used it.\n\nOkay. Fine with what you think is good.\n\n>> instead of \"aborting the recovery\" just \"crashing the server\".\n> \n> PANIC implies server crash, so IMO \"crashing the server\" is\n> a bit redundant, and \"aborting the recovery\" is better because\n> \"continue the recovery\" is used later.\n\nOkay. I see your point here.\n\n> So, what about\n> \n> ---------------\n> causes the system to ignore invalid page references in WAL records\n> (but still report a warning), and continue the recovery.\n> ---------------\n\nAnd that sounds good to me. Switching the patch as ready for\ncommitter.\n--\nMichael",
"msg_date": "Sat, 18 Jan 2020 12:48:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 9:18 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 17, 2020 at 07:36:51PM +0900, Fujii Masao wrote:\n> > So, what about\n> >\n> > ---------------\n> > causes the system to ignore invalid page references in WAL records\n> > (but still report a warning), and continue the recovery.\n> > ---------------\n>\n> And that sounds good to me. Switching the patch as ready for\n> committer.\n>\n\nAre we planning to do something about the original problem reported in\nthis thread?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 Jan 2020 14:13:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 02:13:53PM +0530, Amit Kapila wrote:\n> Are we planning to do something about the original problem reported in\n> this thread?\n\nWe should. This is on my TODO list, though seeing that it involved\nfull_page_writes=off I drifted a bit away from it.\n--\nMichael",
"msg_date": "Tue, 21 Jan 2020 09:35:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 6:05 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jan 20, 2020 at 02:13:53PM +0530, Amit Kapila wrote:\n> > Are we planning to do something about the original problem reported in\n> > this thread?\n>\n> We should. This is on my TODO list, though seeing that it involved\n> full_page_writes=off I drifted a bit away from it.\n>\n\nThe original email doesn't say so. I might be missing something, but\ncan you explain what makes you think so.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Jan 2020 08:45:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 08:45:14AM +0530, Amit Kapila wrote:\n> The original email doesn't say so. I might be missing something, but\n> can you explain what makes you think so.\n\nOops. Incorrect thread, I was thinking about this one previously:\nhttps://www.postgresql.org/message-id/822113470.250068.1573246011818@connect.xfinity.com\n\nRe-reading the root of the thread, I am still not sure what we could\ndo, as that's rather tricky. See here:\nhttps://www.postgresql.org/message-id/20190927061414.GF8485@paquier.xyz\n--\nMichael",
"msg_date": "Tue, 21 Jan 2020 13:39:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "\n\nOn 2020/01/21 13:39, Michael Paquier wrote:\n> On Tue, Jan 21, 2020 at 08:45:14AM +0530, Amit Kapila wrote:\n>> The original email doesn't say so. I might be missing something, but\n>> can you explain what makes you think so.\n> \n> Oops. Incorrect thread, I was thinking about this one previously:\n> https://www.postgresql.org/message-id/822113470.250068.1573246011818@connect.xfinity.com\n> \n> Re-reading the root of the thread, I am still not sure what we could\n> do, as that's rather tricky. See here:\n> https://www.postgresql.org/message-id/20190927061414.GF8485@paquier.xyz\n\nThe original proposal, i.e., holding the interrupts during\nthe truncation, is worth considering? It is not a perfect\nsolution but might improve the situation a bit.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 21 Jan 2020 15:41:54 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "\n\nOn 2020/01/18 12:48, Michael Paquier wrote:\n> On Fri, Jan 17, 2020 at 07:36:51PM +0900, Fujii Masao wrote:\n>> On Fri, Jan 17, 2020 at 1:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>> Thanks. I have few tweaks to propose to the docs.\n>>>\n>>> + raise a PANIC-level error, aborting the recovery. Setting\n>>> Instead of \"PANIC-level error\", I would just use \"PANIC error\", and\n>>\n>> I have no strong opinion about this, but I used \"PANIC-level error\"\n>> because the description for data_sync_retry has already used it.\n> \n> Okay. Fine with what you think is good.\n> \n>>> instead of \"aborting the recovery\" just \"crashing the server\".\n>>\n>> PANIC implies server crash, so IMO \"crashing the server\" is\n>> a bit redundant, and \"aborting the recovery\" is better because\n>> \"continue the recovery\" is used later.\n> \n> Okay. I see your point here.\n> \n>> So, what about\n>>\n>> ---------------\n>> causes the system to ignore invalid page references in WAL records\n>> (but still report a warning), and continue the recovery.\n>> ---------------\n> \n> And that sounds good to me. Switching the patch as ready for\n> committer.\n\nThanks! Committed!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 22 Jan 2020 11:59:52 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-21 15:41:54 +0900, Fujii Masao wrote:\n> On 2020/01/21 13:39, Michael Paquier wrote:\n> > On Tue, Jan 21, 2020 at 08:45:14AM +0530, Amit Kapila wrote:\n> > > The original email doesn't say so. I might be missing something, but\n> > > can you explain what makes you think so.\n> >\n> > Oops. Incorrect thread, I was thinking about this one previously:\n> > https://www.postgresql.org/message-id/822113470.250068.1573246011818@connect.xfinity.com\n> >\n> > Re-reading the root of the thread, I am still not sure what we could\n> > do, as that's rather tricky.\n\nDid anybody consider the proposal at\nhttps://www.postgresql.org/message-id/20191223005430.yhf4n3zr4ojwbcn2%40alap3.anarazel.de ?\nI think we're going to have to do something like that to actually fix\nthe problem, rather than polish around the edges.\n\n\n> See here:\n> https://www.postgresql.org/message-id/20190927061414.GF8485@paquier.xyz\n\nOn 2019-09-27 15:14:14 +0900, Michael Paquier wrote:\n> Wrapping the call of smgrtruncate() within RelationTruncate() to use a\n> critical section would make things worse from the user perspective on\n> the primary, no? If the physical truncation fails, we would still\n> fail WAL replay on the standby, but instead of generating an ERROR in\n> the session of the user attempting the TRUNCATE, the whole primary\n> would be taken down.\n\nFWIW, to me this argument just doesn't make any sense - even if a few\npeople have argued it.\n\nA failure in the FS truncate currently yields to a cluster in a\ncorrupted state in multiple ways:\n1) Dirty buffer contents were thrown away, and going forward their old\n contents will be read back.\n2) We have WAL logged something that we haven't done. That's *obviously*\n something *completely* violating WAL logging rules. 
And break WAL\n replay (including locally, should we crash before the next\n checkpoint - there could be subsequent WAL records relying on the\n block's existance).\n\nThat's so obviously worse than a PANIC restart, that I really don't\nunderstand the \"worse from the user perspective\" argument from your\nemail above. Obviously it sucks that the error might re-occur during\nrecovery. But that's something that usually actually can be fixed -\nwhereas the data corruption can't.\n\n\n> The original proposal, i.e., holding the interrupts during\n> the truncation, is worth considering? It is not a perfect\n> solution but might improve the situation a bit.\n\nI don't think it's useful in isolation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Feb 2020 05:49:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: standby crashed when replay block which truncated in\n standby but failed to truncate in master node"
}
] |
[
{
"msg_contents": "I find the documentation in\nhttps://www.postgresql.org/docs/12/functions-json.html very confusing.\n\nIn table 9.44 take the first entry,\n\nExample JSON\n {\"x\": [2.85, -14.7, -9.4]}\n\nExample Query\n + $.x.floor()\n\nResult\n2, -15, -10\n\nThere are no end to end examples here. How do I apply the example query to\nthe example json to obtain the given result?\n\nTable 9.47 only gives two operators which apply a jsonpath to a json(b)\nobject: @? and @@; and neither one of those yields the indicated result from\nthe first line in 9.44. What does?\n\nAlso, I can't really figure out what the descriptions of @? and @@ mean.\nDoes @? return true if an item exists, even if the value of that item is\nfalse, while @@ returns the truth value of the existing item?\n\nhttps://www.postgresql.org/docs/12/datatype-json.html#DATATYPE-JSONPATH\n\n\"The SQL/JSON path language is fully integrated into the SQL engine\". What\ndoes that mean? If it were only partially integrated, what would that\nmean? Is this providing me with any useful information? Is this just\nsaying that this is not a contrib extension module?\n\nWhat is the difference between \"SQL/JSON Path Operators And Methods\"\nand \"jsonpath Accessors\" and why are they not described in the same place,\nor at least nearby each other?\n\nCheers,\n\nJeff",
"msg_date": "Sun, 22 Sep 2019 14:18:04 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "JSONPATH documentation"
},
{
"msg_contents": "On Sun, Sep 22, 2019 at 2:18 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n\n> I find the documentation in\n> https://www.postgresql.org/docs/12/functions-json.html very confusing.\n>\n> In table 9.44 take the first entry,\n>\n> Example JSON\n> {\"x\": [2.85, -14.7, -9.4]}\n>\n> Example Query\n> + $.x.floor()\n>\n> Result\n> 2, -15, -10\n>\n> There are no end to end examples here. How do I apply the example query to\n> the example json to obtain the given result?\n>\n\nOK, never mind here. After digging in the regression tests, I did find\njsonb_path_query and friends, and they are in the docs with examples in\ntable 9.49. I don't know how I overlooked that in the first place, I guess\nI was fixated on operators. Or maybe by the time I was down in those\nfunctions, I thought I had cycled back up and was looking at 9.44 again.\nBut I think it would make sense to move the description of jsonpath to its\nown page. It is confusing to have operators within the jsonpath language,\nand operators which apply to jsonpath \"from the outside\", together in the\nsame page.\n\nCheers,\n\nJeff",
"msg_date": "Sun, 22 Sep 2019 16:36:28 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: JSONPATH documentation"
},
{
"msg_contents": "Hi!\n\nOn Sun, Sep 22, 2019 at 9:18 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n> I find the documentation in https://www.postgresql.org/docs/12/functions-json.html very confusing.\n>\n> In table 9.44 take the first entry,\n>\n> Example JSON\n> {\"x\": [2.85, -14.7, -9.4]}\n>\n> Example Query\n> + $.x.floor()\n>\n> Result\n> 2, -15, -10\n>\n> There are no end to end examples here. How do I apply the example query to the example json to obtain the given result?\n\nYes, I agree this looks unclear. I can propose two possible solutions.\n1) Include full queries into the table. For instance, it could be\n\"SELECT jsonb_path_query_array('{\"x\": [2.85, -14.7, -9.4]}', '+\n$.x.floor()');\". Or at least full SQL expressions, e.g.\n\"jsonb_path_query_array('{\"x\": [2.85, -14.7, -9.4]}', '+\n$.x.floor()')\".\n2) Add a note clarifying which functions to use to run the examples.\n\nWhat do you think?\n\n> Table 9.47 only gives two operators which apply a jsonpath to a json(b) object: @? and @@; and neither one of those yield the indicated result from the first line in 9.44. What does?\n\nOperators don't produce these results. These results may be produced\nby the jsonb_path_query() or jsonb_path_query_array() functions described\nin table 9.49.\n\n> Also, I can't really figure out what the descriptions of @? and @@ mean. Does @? return true if an item exists, even if the value of that item is false, while @@ returns the truth value of the existing item?\n\nI see @? and @@ are lacking examples. And the description given in the\ntable is a bit vague.\n\n@? checks if the jsonpath returns at least one item.\n\n# SELECT '{\"x\": [2.85, -14.7, -9.4]}' @? '$.x[*] ? (@ > 2)';\n ?column?\n----------\n t\n\n# SELECT '{\"x\": [2.85, -14.7, -9.4]}' @? '$.x[*] ? 
(@ > 3)';\n ?column?\n----------\n f\n\n@@ checks if the first item returned by the jsonpath is true.\n\n# SELECT '{\"x\": [2.85, -14.7, -9.4]}' @@ '$.x.size() == 3';\n ?column?\n----------\n t\n\n# SELECT '{\"x\": [2.85, -14.7, -9.4]}' @@ '$.x.size() == 4';\n ?column?\n----------\n f\n\n> https://www.postgresql.org/docs/12/datatype-json.html#DATATYPE-JSONPATH\n>\n> \"The SQL/JSON path language is fully integrated into the SQL engine\". What does that mean? If it were only partially integrated, what would that mean? Is this providing me with any useful information? Is this just saying that this is not a contrib extension module?\n\nI guess this sentence comes from an uncommitted patch, which implements\nSQL/JSON clauses. I see that now we can only use jsonpath in\nfunctions and operators. So, we can't say it's fully integrated.\n\n> What is the difference between \"SQL/JSON Path Operators And Methods\" and and \"jsonpath Accessors\" and why are they not described in the same place, or at least nearby each other?\n\nAccessors are used to access parts of json objects/arrays, while\noperators manipulate the accessed parts. This terminology comes from the SQL\nstandard. In principle we could call accessors and operators the same\nname, but we follow the standard terminology.\n\nCurrently the description of jsonpath is divided between the datatypes section\nand the functions and operators section. And yes, this looks cumbersome.\nI think we should move the whole description to one section.\nProbably we should move the jsonpath description to the datatypes section\n(assuming jsonpath is a datatype) leaving the functions and operators\nsection with just SQL-level functions and operators. What do you\nthink?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sun, 22 Sep 2019 23:56:27 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
},
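The end-to-end invocation Jeff was looking for uses the jsonb_path_query functions from table 9.49. A minimal sketch, assuming a PostgreSQL 12 or later session (the example JSON and path are the ones from table 9.44):

```sql
-- Apply the table 9.44 example path to the example JSON using the
-- SQL-level functions from table 9.49 (not the @? / @@ operators).
SELECT jsonb_path_query('{"x": [2.85, -14.7, -9.4]}', '+ $.x.floor()');
-- three rows: 2, -15, -10

-- Or collect the result items into a single jsonb array:
SELECT jsonb_path_query_array('{"x": [2.85, -14.7, -9.4]}', '+ $.x.floor()');
-- one row: [2, -15, -10]
```

jsonb_path_query() returns a set, so each resulting item comes back as its own row, which is why the documentation table can only show the flattened sequence 2, -15, -10.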
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Sun, Sep 22, 2019 at 9:18 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n> Currently description of jsonpath is divided between datatypes section\n> and functions and operators section. And yes, this looks cumbersome.\n\nAgreed, but ...\n\n> I think we should move the whole description to the one section.\n> Probably we should move jsonpath description to datatypes section\n> (assuming jsonpath is a datatype) leaving functions and operators\n> section with just SQL-level functions and operators. What do you\n> think?\n\n... I don't think that's an improvement. We don't document detailed\nbehavior of a datatype's functions in datatype.sgml, and this seems\nlike it would be contrary to that layout. If anything, I'd merge\nthe other way, with only a very minimal description of jsonpath\n(perhaps none?) in datatype.sgml.\n\nWhile we're whining about this, I find it very off-putting that\nthe jsonpath stuff was inserted in the JSON functions section\nahead of the actual JSON functions. I think it should have\ngone after them, because it feels like a barely-related interjection\nas it stands. Maybe there's even a case that it should be\nits own <sect1>, after the \"functions-json\" section.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Sep 2019 18:03:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 1:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > On Sun, Sep 22, 2019 at 9:18 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n> > Currently description of jsonpath is divided between datatypes section\n> > and functions and operators section. And yes, this looks cumbersome.\n>\n> Agreed, but ...\n>\n> > I think we should move the whole description to the one section.\n> > Probably we should move jsonpath description to datatypes section\n> > (assuming jsonpath is a datatype) leaving functions and operators\n> > section with just SQL-level functions and operators. What do you\n> > think?\n>\n> ... I don't think that's an improvement. We don't document detailed\n> behavior of a datatype's functions in datatype.sgml, and this seems\n> like it would be contrary to that layout. If anything, I'd merge\n> the other way, with only a very minimal description of jsonpath\n> (perhaps none?) in datatype.sgml.\n>\n> While we're whining about this, I find it very off-putting that\n> the jsonpath stuff was inserted in the JSON functions section\n> ahead of the actual JSON functions. I think it should have\n> gone after them, because it feels like a barely-related interjection\n> as it stands. Maybe there's even a case that it should be\n> its own <sect1>, after the \"functions-json\" section.\n\nYes, I think moving the jsonpath description to its own <sect1> is a good\nidea. But I still think we should have the complete jsonpath description\nin a single place. What about joining the jsonpath descriptions from\nboth the datatypes section and the functions and operators section into this\n<sect1>, leaving the datatypes section with something very brief?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 23 Sep 2019 02:03:14 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
},
{
"msg_contents": "JSON Containment, JSONPath, and Transforms are means to work with JSONB but\nnot the actual datatype itself. Doc should be split into\n1) Data type - how to declare, indexing, considerations when using it...\n2) Ways to work with the data type - functions, containment, JSONPath...\n\nThese can be separate pages or on the same page but they need to be\nlogically and physically separated\n\nThere should also be a link to some of the original JSONPath spec\nhttps://goessner.net/articles/JsonPath/\n\nThank you so much for putting so much work into the documentation! Please\nlet me know if there are other ways you would like me to help with the doc.\n\nOn Sun, Sep 22, 2019 at 4:03 PM Alexander Korotkov <\na.korotkov@postgrespro.ru> wrote:\n\n> On Mon, Sep 23, 2019 at 1:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > > On Sun, Sep 22, 2019 at 9:18 PM Jeff Janes <jeff.janes@gmail.com>\n> wrote:\n> > > Currently description of jsonpath is divided between datatypes section\n> > > and functions and operators section. And yes, this looks cumbersome.\n> >\n> > Agreed, but ...\n> >\n> > > I think we should move the whole description to the one section.\n> > > Probably we should move jsonpath description to datatypes section\n> > > (assuming jsonpath is a datatype) leaving functions and operators\n> > > section with just SQL-level functions and operators. What do you\n> > > think?\n> >\n> > ... I don't think that's an improvement. We don't document detailed\n> > behavior of a datatype's functions in datatype.sgml, and this seems\n> > like it would be contrary to that layout. If anything, I'd merge\n> > the other way, with only a very minimal description of jsonpath\n> > (perhaps none?) in datatype.sgml.\n> >\n> > While we're whining about this, I find it very off-putting that\n> > the jsonpath stuff was inserted in the JSON functions section\n> > ahead of the actual JSON functions. 
I think it should have\n> > gone after them, because it feels like a barely-related interjection\n> > as it stands. Maybe there's even a case that it should be\n> > its own <sect1>, after the \"functions-json\" section.\n>\n> Yes, it think moving jsonpath description to own <sect1> is a good\n> idea. But I still think we should have complete jsonpath description\n> in the single place. What about joining jsonpath description from\n> both datatypes section and functions and operators section into this\n> <sect1>, leaving datatypes section with something very brief?\n>\n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company",
"msg_date": "Mon, 23 Sep 2019 09:52:24 -0700",
"msg_from": "Steven Pousty <steve.pousty@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 7:52 PM Steven Pousty <steve.pousty@gmail.com> wrote:\n> JSON Containment, JSONPath, and Transforms are means to work with JSONB but not the actual datatype itself. Doc should be split into\n> 1) Data type - how do declare, indexing, considerations when using it...\n> 2) Ways to work with the data type - functions, containment, JSONPath...\n>\n> These can be separate pages or on the same page but they need to be logically and physically separated\n\nAccording to your proposal, where should jsonpath functions, operators and\naccessors be described? On the one hand, jsonpath functions\netc. are part of the jsonpath datatype. On the other hand, they are functions\nwe apply to jsonb documents.\n\n> There should also be a link to some of the original JSONPath spec\n> https://goessner.net/articles/JsonPath/\n\nWe implement JSONPath according to SQL Standard 2016. Your link\nprovides an earlier attempt to implement jsonpath. It's similar, but\nsome examples don't follow the standard (and don't work in our\nimplementation). For instance '$.store.book[(@.length-1)].title'\nshould be written as '$.store.book[last].title'.\n\n> Thank you so much for putting so much work into the documentation! Please let me know if there are others way you would like to me help with the doc.\n\nThank you! My main point is that we should put the description of\njsonpath into a single place. But we need to reach consensus on what\nthis place should be.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 23 Sep 2019 21:06:55 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
},
{
"msg_contents": "Hey there:\nThanks for the education on the path spec. Too bad it is in a zip doc - do\nyou know of a place where it is publicly available so we can link to it?\nPerhaps there is some document or page you think would be a good reference\nread for people who want to understand more?\nhttps://standards.iso.org/ittf/PubliclyAvailableStandards/c067367_ISO_IEC_TR_19075-6_2017.zip\n\nI am uncertain why JSONPath is considered part of the datatype any more so\nthan string functions are considered part of the character datatype\nhttps://www.postgresql.org/docs/11/functions-string.html\n\n\nOn Mon, Sep 23, 2019 at 11:07 AM Alexander Korotkov <\na.korotkov@postgrespro.ru> wrote:\n\n> On Mon, Sep 23, 2019 at 7:52 PM Steven Pousty <steve.pousty@gmail.com>\n> wrote:\n> > JSON Containment, JSONPath, and Transforms are means to work with JSONB\n> but not the actual datatype itself. Doc should be split into\n> > 1) Data type - how do declare, indexing, considerations when using it...\n> > 2) Ways to work with the data type - functions, containment, JSONPath...\n> >\n> > These can be separate pages or on the same page but they need to be\n> logically and physically separated\n>\n> According to your proposal, where jsonpath functions, operators and\n> accessors should be described in? On the one hand jsonpath functions\n> etc. are part of jsonpath datatype. On the other hand it's functions\n> we apply to jsonb documents.\n>\n> > There should also be a link to some of the original JSONPath spec\n> > https://goessner.net/articles/JsonPath/\n>\n> We implement JSONPath according to SQL Standard 2016. Your link\n> provides earlier attempt to implement jsonpath. It's similar, but\n> some examples don't follow standard (and don't work in our\n> implementation). 
For instance '$.store.book[(@.length-1)].title'\n> should be written as '$.store.book[last - 1] .title'.\n>\n> > Thank you so much for putting so much work into the documentation!\n> Please let me know if there are others way you would like to me help with\n> the doc.\n>\n> Thank you! My main point is that we should put description of\n> jsonpath into single place. But we need to reach consensus on what\n> this place should be.\n>\n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company",
"msg_date": "Mon, 23 Sep 2019 12:10:28 -0700",
"msg_from": "Steven Pousty <steve.pousty@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
},
{
"msg_contents": "Hi!\n\nOn Mon, Sep 23, 2019 at 10:10 PM Steven Pousty <steve.pousty@gmail.com> wrote:\n> Thanks for the education on the path spec. Too bad it is in a zip doc - do you know of a place where it is publicly available so we can link to it? Perhaps there is some document or page you think would be a good reference read for people who want to understand more?\n> https://standards.iso.org/ittf/PubliclyAvailableStandards/c067367_ISO_IEC_TR_19075-6_2017.zip\n\nYes, this link looks good to me. It's technical report, not standard\nitself. So, it may have some little divergences. But it seems to be\nthe best free resource available, assuming standard itself isn't free.\n\n> I am uncertain why JSONPath is considered part of the datatype any more so than string functions are considered part of the character datatype\n> https://www.postgresql.org/docs/11/functions-string.html\n\nLet me clarify my thoughts. SQL-level functions jsonb_path_*() (table\n9.49) are clearly not part of jsonpath datatype. But jsonpath\naccessors (table 8.25), functions (table 9.44) and operators (table\n9.45) are used inside jsonpath value. So, technically they are parts\nof jsonpath datatype.\n\nP.S. We don't use top posting in mailing lists. Please, use bottom\nposting. See https://en.wikipedia.org/wiki/Posting_style#Top-posting\nfor details.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 23 Sep 2019 22:29:02 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
},
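The behavioral difference between @? and @@ that the thread keeps circling around can be made concrete with a pair of queries. A minimal sketch, assuming a PostgreSQL 12 or later session:

```sql
-- @? asks: does the path yield at least one item?
-- It is true even when the matched value itself is false.
SELECT '{"flag": false}'::jsonb @? '$.flag';   -- t

-- @@ asks: is the first item produced by the path expression true?
SELECT '{"flag": false}'::jsonb @@ '$.flag';   -- f

-- With a predicate, @@ evaluates the whole expression as a boolean
-- (lax mode: true if any array element satisfies the comparison):
SELECT '{"x": [2.85, -14.7, -9.4]}'::jsonb @@ '$.x[*] > 2';   -- t
```

This directly answers Jeff's original question: @? reports existence regardless of the matched value's truth, while @@ reports the truth of the value the path produces.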
{
"msg_contents": "Privet :D\n\n\nOn Mon, Sep 23, 2019 at 12:29 PM Alexander Korotkov <\na.korotkov@postgrespro.ru> wrote:\n\n> Hi!\n>\n> On Mon, Sep 23, 2019 at 10:10 PM Steven Pousty <steve.pousty@gmail.com>\n> wrote:\n> >\n>\nhttps://standards.iso.org/ittf/PubliclyAvailableStandards/c067367_ISO_IEC_TR_19075-6_2017.zip\n>\n> Yes, this link looks good to me. It's technical report, not standard\n> itself. So, it may have some little divergences. But it seems to be\n> the best free resource available, assuming standard itself isn't free.\n>\n> Works for me if we can't find something better\n\n\n> > I am uncertain why JSONPath is considered part of the datatype any more\n> so than string functions are considered part of the character datatype\n> > https://www.postgresql.org/docs/11/functions-string.html\n>\n> Let me clarify my thoughts. SQL-level functions jsonb_path_*() (table\n> 9.49) are clearly not part of jsonpath datatype. But jsonpath\n> accessors (table 8.25), functions (table 9.44) and operators (table\n> 9.45) are used inside jsonpath value. So, technically they are parts\n> of jsonpath datatype.\n>\n>\nYes but the only time I would use those 8.25, 9.44, and 9.45 is to just\ncreate a jsonpath whose main purpose is to query or filter JSONB.\nAs a continued analogy, I think we rightly do not discuss anything but\ncreating and considerations when using character fields:\nhttps://www.postgresql.org/docs/11/datatype-character.html\n\nAnd then we have a separate page that talks about all the ways you can\nmanipulate and filter character fields.\n\nMy feeling is that JSONPath is only included as a way to work with JSONB,\nnot as a requirement of JSONB. Therefore JSONPath documentation belongs with\nall the other ways we work with JSONB, not as part of the datatype\ndefinition.\nJSONPath is important and complicated enough that it may warrant its own\npage, just not in the same page where we define JSON(B)\n\n\n\n> P.S. We don't use top posting in mailing lists. 
Please, use bottom\n> posting. See https://en.wikipedia.org/wiki/Posting_style#Top-posting\n> for details.\n>\n>\nThanks for the very KIND etiquette correction - I really appreciate you\nnot flaming me.\n\nThanks\nSteve",
"msg_date": "Mon, 23 Sep 2019 12:52:21 -0700",
"msg_from": "Steven Pousty <steve.pousty@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
},
{
"msg_contents": "On 2019-09-23 00:03, Tom Lane wrote:\n> While we're whining about this, I find it very off-putting that\n> the jsonpath stuff was inserted in the JSON functions section\n> ahead of the actual JSON functions. I think it should have\n> gone after them, because it feels like a barely-related interjection\n> as it stands. Maybe there's even a case that it should be\n> its own <sect1>, after the \"functions-json\" section.\n\nI'd just switch the sect2's around.\n\nThat would be similar to how the documentation of regular expressions is\nlaid out: functions first, then details of the contained mini-language.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Sep 2019 23:08:25 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
},
{
"msg_contents": "On 9/25/19 12:08 AM, Peter Eisentraut wrote:\n> On 2019-09-23 00:03, Tom Lane wrote:\n>> While we're whining about this, I find it very off-putting that\n>> the jsonpath stuff was inserted in the JSON functions section\n>> ahead of the actual JSON functions. I think it should have\n>> gone after them, because it feels like a barely-related interjection\n>> as it stands. Maybe there's even a case that it should be\n>> its own <sect1>, after the \"functions-json\" section.\n> I'd just switch the sect2's around.\n\nAs more SQL/JSON functionality gets added, I believe a separate sect1 is \nlikely to be more justified. However, for v12 I'd vote for moving sect2 \ndown. The patch is attached, it also fixes the ambiguous sentence that \nhas raised questions in this thread.\n\n-- \nLiudmila Mantrova\nTechnical writer at Postgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 25 Sep 2019 17:46:08 +0300",
"msg_from": "Liudmila Mantrova <l.mantrova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
},
{
"msg_contents": "On 2019-09-25 16:46, Liudmila Mantrova wrote:\n> On 9/25/19 12:08 AM, Peter Eisentraut wrote:\n>> On 2019-09-23 00:03, Tom Lane wrote:\n>>> While we're whining about this, I find it very off-putting that\n>>> the jsonpath stuff was inserted in the JSON functions section\n>>> ahead of the actual JSON functions. I think it should have\n>>> gone after them, because it feels like a barely-related interjection\n>>> as it stands. Maybe there's even a case that it should be\n>>> its own <sect1>, after the \"functions-json\" section.\n>> I'd just switch the sect2's around.\n> \n> As more SQL/JSON functionality gets added, I believe a separate sect1 is \n> likely to be more justified. However, for v12 I'd vote for moving sect2 \n> down. The patch is attached, it also fixes the ambiguous sentence that \n> has raised questions in this thread.\n\ncommitted, thanks\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Sep 2019 16:38:44 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: JSONPATH documentation"
}
]
[
{
"msg_contents": "\n\n\n\n\nOne-line Summary:\n Better query optimization for \"NOT IN\" clause\n\n\n\nBusiness Use-case:\n Using x NOT IN (SELECT y FROM target) on extremely large tables can\n be done very fast. This\n might be necessary in order to introduce foreign keys where old\n systems relied on\n a user to type in a reference manually. Typically I am useing NOT IN\n to find invalid\n foreign keys which was not protected by old database designs. More\n specifically, the\n following query would return what I am looking for:\n SELECT source, docno\n FROM tableA\n WHERE (source, docno) NOT IN (SELECT b.source, b.docno FROM tableB\n b)\n The query took a few hours (tableB contains 1.8m rows and tableA\n 7.8m rows)\n\n\n\nUser impact with the change:\n A search for invalid foreign keys can be reduced from hours or days\n to seconds. The\n reasoning behind this implementation is as follows: If I can return\n all rows\n from \"SELECT col1, col2, ... FROM tableA ORDER BY col1, col2\" in few\n seconds,\n then calculating missing / NOT IN (or existing / IN) values in a few\n seconds\n as well.\n\n\n\nImplementation details:\n I have written a general purpose plpgsql function which returns an\n SETOF text[]. The function\n takes two table names with two sets of column names and optionally\n two filters on the\n different tables. 
Unfortunately I had to cast values to text to have\n this work in\n a plpgsql function, but I got the results I needed.\n\n My required SQL:\n SELECT source, docno\n FROM tableA\n WHERE (source, docno) NOT IN (SELECT b.source, b.docno FROM tableB\n b)\n\n The query took a few hours (tableB contains 1.8m rows and tableA\n 7.8m rows)\n\n I used my function as follows:\n SELECT DISTINCT a[1], a[2] FROM util.notIn('tableA', 'source,\n docno', 'tableB', 'source, docno', null, null) a ORDER BY a[1], a[2]\n This function returned what I needed in 51 seconds, where the NOT IN\n statement ran for hours.\n\n Before including the plpgsql source, let me explain what it does:\n The function builds up two SQL statements using the given values\n (remember I did not specify\n any filter on either of the two tables - the last two parameters are\n null).\n Resulting statements are:\n\n Primary: SELECT DISTINCT ARRAY[source::text, docno::text] FROM\n tableA WHERE source IS NOT NULL AND docno IS NOT NULL ORDER BY\n ARRAY[source::text, docno::text]\n Secondary: SELECT DISTINCT ARRAY[source::text, docno::text] FROM\n tableB WHERE source IS NOT NULL AND docno IS NOT NULL ORDER BY\n ARRAY[source::text, docno::text]\n\n (As you can see, I stripped out NULLs, because that I can check for\n easily with a simple query)\n Now I open two cursors and fetch the first record of both statements\n into separate variables. Since\n the FOUND variable is overwritten with every FETCH, I store the\n states in my own variables.\n Ordering the results are important, since it is important for the\n working of the function.\n A loop now advances both or one of the two result sets based on\n whether the rows are equal or\n not (if not equal, return a row (if primary rows < secondary )\n and only fetch from the dataset\n which is lesser of the two. 
You can early terminate when the primary\n dataset end has been\n reached, alternatively if the secondary dataset reached its end, you\n can simply add all remaining\n rows from the primary dataset to the result.\n\n The best implementation for this would be to detect a NOT IN (or IN)\n WHERE clause during query optimization\n phase and use a sequential scan as done in the function. Important\n is that the IN / NOT IN select statement\n must not be dependent on the current row of the main SELECT.\n\n A real long term solution would actually be to extend the SQL\n definition language to allow for a different\n type of join. To optimize IN, you could rewrite your query as an\n INNER JOIN (providing you you join to\n a DISTINCT dataset). However, the NOT IN is not so easy (except for\n LEFT JOIN testing for NULL in secondary\n table). What I would propose is the following:\n SELECT a.colA\n FROM TableA\n [LEFT | RIGHT] MISSING JOIN TableB ON TableB.colB = TableA.colA\n\n CREATE OR REPLACE FUNCTION util.notIn(pTableA text, pFieldA text,\n pTableB text, pFieldB text, pFilterA text, pFilterB text) RETURNS\n SETOF text[] AS\n $BODY$\n DECLARE\n vFieldsA text[];\n vFieldsB text[];\n vValueA text[];\n vValueB text[];\n vRowA record;\n vRowB record;\n vSQLA text;\n vSQLB text;\n vWhereA text;\n vWhereB text;\n vSelectA text;\n vSelectB text;\n vCursorA refcursor;\n vCursorB refcursor;\n vFirst integer;\n vLast integer;\n vNdx integer;\n vMoreA boolean;\n vMoreB boolean;\n BEGIN\n IF pTableA IS NULL OR pTableB IS NULL OR pFieldA IS NULL OR\n pFieldB IS NULL THEN\n RAISE EXCEPTION 'pTableA, pTableB, pFieldA and pFieldB\n parameters are required';\n END IF;\n vFieldsA := regexp_split_to_array(pFieldA, '[,]');\n vFieldsB := regexp_split_to_array(pFieldB, '[,]');\n vFirst := array_lower(vFieldsA, 1);\n IF array_length(vFieldsA, 1) <> array_length(vFieldsB, 1)\n THEN\n RAISE EXCEPTION 'pFieldA and pFieldB field lists must contain\n the same number of field names';\n END IF;\n 
vLast := array_upper(vFieldsA, 1);\n vWhereA := ' WHERE ';\n vWhereB := ' WHERE ';\n vSelectA := '';\n vSelectB := '';\n FOR vNdx IN vFirst .. vLast LOOP\n vFieldsA[vNdx] := trim(vFieldsA[vNdx]);\n vFieldsB[vNdx] := trim(vFieldsB[vNdx]);\n IF vNdx > 1 THEN\n vSelectA := vSelectA || ', ';\n vSelectB := vSelectB || ', ';\n vWhereA := vWhereA || ' AND ';\n vWhereB := vWhereB || ' AND ';\n END IF;\n vSelectA := vSelectA || vFieldsA[vNdx] || '::text';\n vSelectB := vSelectB || vFieldsB[vNdx] || '::text';\n vWhereA := vWhereA || vFieldsA[vNdx] || ' IS NOT NULL';\n vWhereB := vWhereB || vFieldsB[vNdx] || ' IS NOT NULL';\n END LOOP;\n vSQLA := 'SELECT DISTINCT ARRAY[' || vSelectA || '] FROM ' ||\n pTableA || vWhereA || ' ORDER BY ARRAY[' || vSelectA || ']';\n vSQLB := 'SELECT DISTINCT ARRAY[' || vSelectB || '] FROM ' ||\n pTableB || vWhereB || ' ORDER BY ARRAY[' || vSelectB || ']';\n IF COALESCE(trim(pFilterA), '') <> '' THEN\n vSQLA := vSQLA || ' AND ' || pFilterA;\n END IF;\n IF COALESCE(trim(pFilterB), '') <> '' THEN\n vSQLB := vSQLB || ' AND ' || pFilterB;\n END IF;\n RAISE NOTICE 'Primary data: %', vSQLA;\n RAISE NOTICE 'Secondary data: %', vSQLB;\n OPEN vCursorA FOR EXECUTE vSQLA;\n OPEN vCursorB FOR EXECUTE vSQLB;\n FETCH vCursorA INTO vValueA;\n vMoreA := FOUND;\n FETCH vCursorB INTO vValueB;\n vMoreB := FOUND;\n WHILE vMoreA LOOP\n IF vMoreB THEN\n IF vValueA = vValueB THEN\n FETCH vCursorA INTO vValueA;\n vMoreA := FOUND;\n FETCH vCursorB INTO vValueB;\n vMoreB := FOUND;\n ELSEIF vValueA < vValueB THEN\n RETURN NEXT vValueA;\n FETCH vCursorA INTO vValueA;\n vMoreA := FOUND;\n ELSE\n FETCH vCursorB INTO vValueB;\n vMoreB := FOUND;\n END IF;\n ELSE\n RETURN NEXT vValueA;\n FETCH vCursorA INTO vValueA;\n vMoreA := FOUND;\n END IF;\n END LOOP;\n CLOSE vCursorA;\n CLOSE vCursorB;\n RETURN;\n END;\n $BODY$\n LANGUAGE plpgsql;\n\n\n\nEstimated Development Time:\n unknown\n\n\n\nOpportunity Window Period:\n none - PostgreSQL will have better performance whenever this 
is\n added\n\n\n\nBudget Money:\n none - I have solved my problem already, but I think it would be a\n great\n improvement on PostgreSQL performance\n\n\n\nContact Information:\njohn@softco.co.za\n\n\n\nCategory:\n [[Category:Proposals]]\n [[Category:Performance]]\n\n\n\n",
"msg_date": "Mon, 23 Sep 2019 15:43:34 +0200",
"msg_from": "John Bester <john@softco.co.za>",
"msg_from_op": true,
"msg_subject": "Proposal: Better query optimization for \"NOT IN\" clause"
},
{
"msg_contents": "Hello,\n\nJust for information there are some works regarding how to include this in\ncore,\nthat may interest you ;o)\n\nsee \"NOT IN subquery optimization\"\nhttps://www.postgresql.org/message-id/flat/1550706289606-0.post%40n3.nabble.com\n\ncommitfest entry: https://commitfest.postgresql.org/24/2023/\n\nand \"Converting NOT IN to anti-joins during planning\"\nhttps://www.postgresql.org/message-id/flat/CAKJS1f82pqjqe3WT9_xREmXyG20aOkHc-XqkKZG_yMA7JVJ3Tw%40mail.gmail.com\n\ncommitfest entry: https://commitfest.postgresql.org/24/2020/\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 23 Sep 2019 12:25:51 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Better query optimization for \"NOT IN\" clause"
}
]
[
{
"msg_contents": "Hi,\n\nI've been working on a custom aggregate, and I've ran into some fairly\nannoying overhead due to casting direct parameters over and over. I'm\nwondering if there's a way to eliminate this, somehow, without having to\ndo an explicit cast.\n\nImagine you have a simple aggregate:\n\n CREATE AGGREGATE tdigest_percentile(double precision, int, double precision[])\n (\n ...\n );\n\nwith two direct parameters (actually, I'm not sure that's the correct\nterm, becuse this is not an ordered-set aggregate and [1] only talks\nabout direct parameters in that context). Anyway, I'm talking about the\nextra parameters, after the 'double precision' value to aggregate.\n\nThe last parameter is an array of values in [0.0,1.0], representing\npercentiles (similarly to what we do in percentile_cont). It's annoying\nto write literal values, so let's use CTE to generate the array:\n\n WITH\n perc AS (SELECT array_agg(i/100.0) AS p\n FROM generate_series(1,99) s(i))\n SELECT\n SELECT tdigest_percentile(random(), 100, (SELECT p FROM perc))\n FROM generate_series(1,10000000);\n\nwhich does work, but it's running for ~180 seconds. When used with an\nexplicit array literal, it runs in ~1.6 second.\n\n SELECT tdigest_percentile(random(), 100, ARRAY[0.01, ..., 0.99]))\n FROM generate_series(1,10000000);\n\nAfter a while, I've realized that the issue is casting - the CTE\nproduces numeric[] array, and we do the cast to double precision[] on\nevery call to the state transition function (and we do ~10M of those).\nThe cast is fairly expensive - much more expensive than the aggregate\nitself. 
The explicit literal ends up being the right type, so the whole\nquery is much faster.\n\nAnd indeed, adding the explicit cast to the CTE query\n\n WITH\n perc AS (SELECT array_agg((i/100.0)::double precision) AS p\n FROM generate_series(1,99) s(i))\n SELECT\n SELECT tdigest_percentile(random(), 100, (SELECT p FROM perc))\n FROM generate_series(1,10000000);\n\ndoes the trick - the query is ~1.6s again.\n\nI wonder if there's a chance to detect and handle this without having to\ndo the cast over and over? I'm thinking that's not quite possible,\nbecause the value is not actually guaranteed to be the same for all\ncalls (even though it's the case for the example I've given).\n\nBut maybe we could flag the parameter somehow, to make it more like the\ndirect parameter (which is only evaluated once). I don't really need\nthose extra parameters in the transition function at all, it's fine to\njust get it to the final function (and there should be far fewer calls\nto those).\n\nregards\n\n\n[1] https://www.postgresql.org/docs/current/sql-createaggregate.html\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 23 Sep 2019 17:56:21 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "overhead due to casting extra parameters with aggregates (over and\n over)"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I've been working on a custom aggregate, and I've ran into some fairly\n> annoying overhead due to casting direct parameters over and over. I'm\n> wondering if there's a way to eliminate this, somehow, without having to\n> do an explicit cast.\n\n> Imagine you have a simple aggregate:\n\n> CREATE AGGREGATE tdigest_percentile(double precision, int, double precision[])\n> (\n> ...\n> );\n\n> with two direct parameters (actually, I'm not sure that's the correct\n> term, becuse this is not an ordered-set aggregate and [1] only talks\n> about direct parameters in that context). Anyway, I'm talking about the\n> extra parameters, after the 'double precision' value to aggregate.\n\nBut you're not telling the system that those are direct parameters,\nat least not if you mean that they can only legitimately have one value\nacross the whole query. As-is, they're just more aggregated arguments\nso we have to evaluate them again at each row.\n\nIt's fairly messy that the SQL spec ties direct arguments to ordered-set\naggregates; you'd think there'd be some value in treating those features\nas orthogonal. I'm not sure how we could wedge them into the syntax\notherwise, though :-(. You could perhaps convert your aggregate to\nan ordered-set aggregate, but then you'd be paying for a sort that\nyou don't need, IIUC.\n\n> After a while, I've realized that the issue is casting - the CTE\n> produces numeric[] array, and we do the cast to double precision[] on\n> every call to the state transition function (and we do ~10M of those).\n\nThe only reason that the CTE reference is cheap is that we understand\nthat it's stable so we don't have to recompute it each time; otherwise\nyou'd be moaning about that more than the cast. As you say, the short\nterm workaround is to do the casting inside the sub-select. 
I think the\nlong term fix is to generically avoid re-computing stable subexpressions.\nThere was a patch for that a year or so ago but the author never finished\nit, AFAIR.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Sep 2019 12:53:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: overhead due to casting extra parameters with aggregates (over\n and over)"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 12:53:36PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I've been working on a custom aggregate, and I've ran into some fairly\n>> annoying overhead due to casting direct parameters over and over. I'm\n>> wondering if there's a way to eliminate this, somehow, without having to\n>> do an explicit cast.\n>\n>> Imagine you have a simple aggregate:\n>\n>> CREATE AGGREGATE tdigest_percentile(double precision, int, double precision[])\n>> (\n>> ...\n>> );\n>\n>> with two direct parameters (actually, I'm not sure that's the correct\n>> term, becuse this is not an ordered-set aggregate and [1] only talks\n>> about direct parameters in that context). Anyway, I'm talking about the\n>> extra parameters, after the 'double precision' value to aggregate.\n>\n>But you're not telling the system that those are direct parameters,\n>at least not if you mean that they can only legitimately have one value\n>across the whole query. As-is, they're just more aggregated arguments\n>so we have to evaluate them again at each row.\n>\n\nUnderstood.\n\n>It's fairly messy that the SQL spec ties direct arguments to ordered-set\n>aggregates; you'd think there'd be some value in treating those features\n>as orthogonal. I'm not sure how we could wedge them into the syntax\n>otherwise, though :-(. You could perhaps convert your aggregate to\n>an ordered-set aggregate, but then you'd be paying for a sort that\n>you don't need, IIUC.\n>\n\nYeah, having to do the sort (and keep all the data) is exactly what the\ntdigest is meant to eliminate, so making it an ordered-set aggregate is\nexactly the thing I don't want to do. 
Also, it disables parallel query,\nwhich is another reason not to do that.\n\n>> After a while, I've realized that the issue is casting - the CTE\n>> produces numeric[] array, and we do the cast to double precision[] on\n>> every call to the state transition function (and we do ~10M of those).\n>\n>The only reason that the CTE reference is cheap is that we understand\n>that it's stable so we don't have to recompute it each time; otherwise\n>you'd be moaning about that more than the cast. As you say, the short\n>term workaround is to do the casting inside the sub-select. I think the\n>long term fix is to generically avoid re-computing stable subexpressions.\n>There was a patch for that a year or so ago but the author never finished\n>it, AFAIR.\n>\n\nHmmm, yeah. I'll dig throgh the archives, although it's not a very high\npriority - it's more a thing that surprised/bugged me while working on\nthe custom aggregate.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 23 Sep 2019 19:44:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: overhead due to casting extra parameters with aggregates (over\n and over)"
}
]
[
{
"msg_contents": "Hello all,\n\nCurrently pg_dump sorts most dumpable objects by priority, namespace, name\nand then object ID. Since triggers and RLS policies belong to tables, there\nmay be more than one with the same name within the same namespace, leading to\npotential sorting discrepancies between databases that only differ by object\nIDs.\n\nThe attached draft patch (made against `pg_dump_sort.c` on master) breaks\nties for trigger and policy objects by using the table name, increasing the\nsort order stability. I have compiled it and executed it against a number of\nlocal databases and it behaves as desired.\n\nI am new to PostgreSQL contribution and my C-skills are rusty, so please let\nme know if I can improve the patch, or if there are areas of PostgreSQL that\nI have overlooked.\n\nKind regards,\n\nBenjie Gillam",
"msg_date": "Mon, 23 Sep 2019 22:34:07 +0100",
"msg_from": "Benjie Gillam <benjie@jemjie.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Sort policies and triggers by table name in pg_dump."
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 10:34:07PM +0100, Benjie Gillam wrote:\n> The attached draft patch (made against `pg_dump_sort.c` on master) breaks\n> ties for trigger and policy objects by using the table name, increasing the\n> sort order stability. I have compiled it and executed it against a number of\n> local databases and it behaves as desired.\n\nCould you provide a simple example of schema (tables with some\npolicies and triggers), with the difference this generates for\npg_dump, which shows your point?\n\n> I am new to PostgreSQL contribution and my C-skills are rusty, so please let\n> me know if I can improve the patch, or if there are areas of PostgreSQL that\n> I have overlooked.\n\nYour patch has two warnings because you are trying to map a policy\ninfo pointer to a trigger info pointer:\npg_dump_sort.c:224:24: warning: initialization of ‘TriggerInfo *’ {aka\n‘struct _triggerInfo *’} from incompatible pointer type ‘PolicyInfo *\nconst’ {aka ‘struct _policyInfo * const’}\n[-Wincompatible-pointer-types]\n 224 | TriggerInfo *tobj2 = *(PolicyInfo *const *) p2; \n--\nMichael",
"msg_date": "Tue, 24 Sep 2019 11:02:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Sort policies and triggers by table name in pg_dump."
},
{
"msg_contents": "> Could you provide a simple example of schema (tables with some\n> policies and triggers), with the difference this generates for\n> pg_dump, which shows your point?\n\nCertainly; I've attached a bash script that can reproduce the issue\nand the diff that it produces, here's the important part:\n\n CREATE TRIGGER a BEFORE INSERT ON foo\n FOR EACH ROW EXECUTE PROCEDURE qux();\n CREATE POLICY a ON foo FOR SELECT USING (true);\n\n CREATE TRIGGER a BEFORE INSERT ON bar\n FOR EACH ROW EXECUTE PROCEDURE qux();\n CREATE POLICY a ON bar FOR SELECT USING (true);\n\nHere we create two identically named triggers and two identically\nnamed policies on tables foo and bar. If instead we ran these\nstatements in a different order (or if the object IDs were to wrap)\nthe order of the pg_dump would be different even though the\ndatabases are identical other than object IDs. The attached\npatch eliminates this difference.\n\n> Your patch has two warnings because you are trying to map a policy\n> info pointer to a trigger info pointer:\n\nAh, thank you for the pointer (aha); I've attached an updated patch\nthat addresses this copy/paste issue.",
"msg_date": "Tue, 24 Sep 2019 08:48:33 +0100",
"msg_from": "Benjie Gillam <benjie@jemjie.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Sort policies and triggers by table name in pg_dump."
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 08:48:33AM +0100, Benjie Gillam wrote:\n> Here we create two identically named triggers and two identically\n> named policies on tables foo and bar. If instead we ran these\n> statements in a different order (or if the object IDs were to wrap)\n> the order of the pg_dump would be different even though the\n> databases are identical other than object IDs. The attached\n> patch eliminates this difference.\n\nThanks. Perhaps you could add your patch to the next commit fest\nthen at https://commitfest.postgresql.org/25/?\n\nThis way, your patch gains more visibility for potential reviews.\nAnother key thing to remember is that one patch authored requires one\nother patch of equal difficulty to be reviewed.\n--\nMichael",
"msg_date": "Wed, 25 Sep 2019 16:15:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Sort policies and triggers by table name in pg_dump."
},
{
"msg_contents": "> Thanks. Perhaps you could add your patch to the next commit fest\n> then at https://commitfest.postgresql.org/25/?\n\nThanks, submitted.\n\n\n",
"msg_date": "Wed, 25 Sep 2019 08:36:24 +0100",
"msg_from": "Benjie Gillam <benjie@jemjie.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Sort policies and triggers by table name in pg_dump."
},
{
"msg_contents": "Benjie Gillam <benjie@jemjie.com> writes:\n>> Your patch has two warnings because you are trying to map a policy\n>> info pointer to a trigger info pointer:\n\n> Ah, thank you for the pointer (aha); I've attached an updated patch\n> that addresses this copy/paste issue.\n\nLGTM, pushed (with a bit of extra polishing of comments).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Nov 2019 16:26:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Sort policies and triggers by table name in pg_dump."
}
]
[
{
"msg_contents": "Hi,\n\nAs of v12, Append node is elided when there's a single subnode under\nit. An example in the partitioning documentation needs to be fixed to\naccount for that change. Attached a patch.\n\nThanks,\nAmit",
"msg_date": "Tue, 24 Sep 2019 10:52:30 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix example in partitioning documentation"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 10:52:30AM +0900, Amit Langote wrote:\n> As of v12, Append node is elided when there's a single subnode under\n> it. An example in the partitioning documentation needs to be fixed to\n> account for that change. Attached a patch.\n\nIndeed, using the same example as the docs:\nCREATE TABLE measurement (\n logdate date not null,\n peaktemp int,\n unitsales int\n ) PARTITION BY RANGE (logdate);\nCREATE TABLE measurement_y2016m07\n PARTITION OF measurement (\n unitsales DEFAULT 0\n ) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');\nSET enable_partition_pruning = on;\nEXPLAIN SELECT count(*) FROM measurement WHERE logdate = DATE\n'2016-07-02';\n\nI'll take care of committing that, however this will have to wait\nuntil v12 RC1 is tagged.\n--\nMichael",
"msg_date": "Tue, 24 Sep 2019 11:14:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix example in partitioning documentation"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 11:14 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Sep 24, 2019 at 10:52:30AM +0900, Amit Langote wrote:\n> > As of v12, Append node is elided when there's a single subnode under\n> > it. An example in the partitioning documentation needs to be fixed to\n> > account for that change. Attached a patch.\n>\n> Indeed, using the same example as the docs:\n> CREATE TABLE measurement (\n> logdate date not null,\n> peaktemp int,\n> unitsales int\n> ) PARTITION BY RANGE (logdate);\n> CREATE TABLE measurement_y2016m07\n> PARTITION OF measurement (\n> unitsales DEFAULT 0\n> ) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');\n> SET enable_partition_pruning = on;\n> EXPLAIN SELECT count(*) FROM measurement WHERE logdate = DATE\n> '2016-07-02';\n>\n> I'll take care of committing that, however this will have to wait\n> until v12 RC1 is tagged.\n\nSure, thank you.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Tue, 24 Sep 2019 11:36:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix example in partitioning documentation"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 11:36:40AM +0900, Amit Langote wrote:\n> Sure, thank you.\n\nAnd done with f5daf7f, back-patched down to 12.\n--\nMichael",
"msg_date": "Wed, 25 Sep 2019 13:47:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix example in partitioning documentation"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 1:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Sep 24, 2019 at 11:36:40AM +0900, Amit Langote wrote:\n> > Sure, thank you.\n>\n> And done with f5daf7f, back-patched down to 12.\n\nThanks again. :)\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 25 Sep 2019 14:36:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix example in partitioning documentation"
}
]
[
{
"msg_contents": "Hi,\n\nWhen backup_label exists, the startup process enters archive recovery mode\neven if recovery.signal file doesn't exist. In this case, the startup process\ntries to retrieve WAL files by using restore_command. Then, at the beginning\nof the archive recovery, the contents of backup_label are copied to pg_control\nand backup_label file is removed. This would be an intentional behavior.\n\nBut I think the problem is that, if the server shuts down during that\narchive recovery, the restart of the server may cause the recovery to fail\nbecause neither backup_label nor recovery.signal exist and the server\ndoesn't enter an archive recovery mode. Is this intentional, too? Seems No.\n\nSo the problematic scenario is;\n\n1. the server starts with backup_label, but not recovery.signal.\n2. the startup process enters an archive recovery mode because\n backup_label exists.\n3. the contents of backup_label are copied to pg_control and\n backup_label is deleted.\n4. the server shuts down..\n5. the server is restarted. neither backup_label nor recovery.signal exist.\n6. the startup process starts just crash recovery because neither backup_label\n nor recovery.signal exist. Since it cannot retrieve WAL files from archival\n area, it may fail.\n\nOne idea to fix this issue is to make the above step #3 remember that\nbackup_label existed, in pg_control. Then we should make the subsequent\nrecovery enter an archive recovery mode if pg_control indicates that\neven if neither backup_label nor recovery.signal exist. Thought?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 24 Sep 2019 14:25:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "recovery starting when backup_label exists, but not recovery.signal"
},
{
"msg_contents": "On 9/24/19 1:25 AM, Fujii Masao wrote:\n> \n> When backup_label exists, the startup process enters archive recovery mode\n> even if recovery.signal file doesn't exist. In this case, the startup process\n> tries to retrieve WAL files by using restore_command. Then, at the beginning\n> of the archive recovery, the contents of backup_label are copied to pg_control\n> and backup_label file is removed. This would be an intentional behavior.\n\n> But I think the problem is that, if the server shuts down during that\n> archive recovery, the restart of the server may cause the recovery to fail\n> because neither backup_label nor recovery.signal exist and the server\n> doesn't enter an archive recovery mode. Is this intentional, too? Seems No.\n> \n> So the problematic scenario is;\n> \n> 1. the server starts with backup_label, but not recovery.signal.\n> 2. the startup process enters an archive recovery mode because\n> backup_label exists.\n> 3. the contents of backup_label are copied to pg_control and\n> backup_label is deleted.\n\nDo you mean deleted or renamed to backup_label.old?\n\n> 4. the server shuts down..\n\nThis happens after the cluster has reached consistency?\n\n> 5. the server is restarted. neither backup_label nor recovery.signal exist.\n> 6. the startup process starts just crash recovery because neither backup_label\n> nor recovery.signal exist. Since it cannot retrieve WAL files from archival\n> area, it may fail.\n\nI tried a few ways to reproduce this but was not successful without\nmanually removing WAL. Probably I just needed a much larger set of WAL.\n\nI assume you have a repro? Can you give more details?\n\n> One idea to fix this issue is to make the above step #3 remember that\n> backup_label existed, in pg_control. Then we should make the subsequent\n> recovery enter an archive recovery mode if pg_control indicates that\n> even if neither backup_label nor recovery.signal exist. 
Thought?\n\nThat seems pretty invasive to me at this stage. I'd like to reproduce\nit and see if there are alternatives.\n\nAlso, are you sure this is a new behavior? I've been finding that some\nbehaviors that have existed for a long time are suddenly more apparent\nor easier to hit with the new mechanism. Examples of that are in [1].\n\n-- \n-David\ndavid@pgmasters.net\n\n[1]\nhttps://www.postgresql.org/message-id/5e6537c7-d10e-6a67-4813-bbd7455cfaf5%40pgmasters.net\n\n\n",
"msg_date": "Thu, 26 Sep 2019 14:36:33 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: recovery starting when backup_label exists, but not\n recovery.signal"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 3:36 AM David Steele <david@pgmasters.net> wrote:\n>\n> On 9/24/19 1:25 AM, Fujii Masao wrote:\n> >\n> > When backup_label exists, the startup process enters archive recovery mode\n> > even if recovery.signal file doesn't exist. In this case, the startup process\n> > tries to retrieve WAL files by using restore_command. Then, at the beginning\n> > of the archive recovery, the contents of backup_label are copied to pg_control\n> > and backup_label file is removed. This would be an intentional behavior.\n>\n> > But I think the problem is that, if the server shuts down during that\n> > archive recovery, the restart of the server may cause the recovery to fail\n> > because neither backup_label nor recovery.signal exist and the server\n> > doesn't enter an archive recovery mode. Is this intentional, too? Seems No.\n> >\n> > So the problematic scenario is;\n> >\n> > 1. the server starts with backup_label, but not recovery.signal.\n> > 2. the startup process enters an archive recovery mode because\n> > backup_label exists.\n> > 3. the contents of backup_label are copied to pg_control and\n> > backup_label is deleted.\n>\n> Do you mean deleted or renamed to backup_label.old?\n>\n> > 4. the server shuts down..\n>\n> This happens after the cluster has reached consistency?\n>\n> > 5. the server is restarted. neither backup_label nor recovery.signal exist.\n> > 6. the startup process starts just crash recovery because neither backup_label\n> > nor recovery.signal exist. Since it cannot retrieve WAL files from archival\n> > area, it may fail.\n>\n> I tried a few ways to reproduce this but was not successful without\n> manually removing WAL.\n\nHmm me too. I think that since we enter crash recovery at step #6 we\ndon't retrieve WAL files from archival area.\n\nBut I reproduced the problem Fujii-san mentioned that the restart of\nthe server during archive recovery causes to the crash recovery\ninstead of resuming the archive recovery. 
This behavior is different\nfrom version 11 and before, and I personally think it made the\nbehavior worse.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n",
"msg_date": "Fri, 27 Sep 2019 16:02:00 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery starting when backup_label exists,\n but not recovery.signal"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 3:36 AM David Steele <david@pgmasters.net> wrote:\n>\n> On 9/24/19 1:25 AM, Fujii Masao wrote:\n> >\n> > When backup_label exists, the startup process enters archive recovery mode\n> > even if recovery.signal file doesn't exist. In this case, the startup process\n> > tries to retrieve WAL files by using restore_command. Then, at the beginning\n> > of the archive recovery, the contents of backup_label are copied to pg_control\n> > and backup_label file is removed. This would be an intentional behavior.\n>\n> > But I think the problem is that, if the server shuts down during that\n> > archive recovery, the restart of the server may cause the recovery to fail\n> > because neither backup_label nor recovery.signal exist and the server\n> > doesn't enter an archive recovery mode. Is this intentional, too? Seems No.\n> >\n> > So the problematic scenario is;\n> >\n> > 1. the server starts with backup_label, but not recovery.signal.\n> > 2. the startup process enters an archive recovery mode because\n> > backup_label exists.\n> > 3. the contents of backup_label are copied to pg_control and\n> > backup_label is deleted.\n>\n> Do you mean deleted or renamed to backup_label.old?\n\nSorry for the confusing wording..\nI meant the following code that renames backup_label to .old, in StartupXLOG().\n\n /*\n * If there was a backup label file, it's done its job and the info\n * has now been propagated into pg_control. We must get rid of the\n * label file so that if we crash during recovery, we'll pick up at\n * the latest recovery restartpoint instead of going all the way back\n * to the backup start point. It seems prudent though to just rename\n * the file out of the way rather than delete it completely.\n */\n if (haveBackupLabel)\n {\n unlink(BACKUP_LABEL_OLD);\n durable_rename(BACKUP_LABEL_FILE, BACKUP_LABEL_OLD, FATAL);\n }\n\n> > 4. 
the server shuts down..\n>\n> This happens after the cluster has reached consistency?\n\nYou need to shutdown the server until WAL replay finishes,\nno matter whether it reaches the consistent point or not.\n\n> > 5. the server is restarted. neither backup_label nor recovery.signal exist.\n> > 6. the startup process starts just crash recovery because neither backup_label\n> > nor recovery.signal exist. Since it cannot retrieve WAL files from archival\n> > area, it may fail.\n>\n> I tried a few ways to reproduce this but was not successful without\n> manually removing WAL. Probably I just needed a much larger set of WAL.\n>\n> I assume you have a repro? Can you give more details?\n\nWhat I did is:\n\n1. Start PostgreSQL server with WAL archiving enabled.\n2. Take an online backup by using pg_basebackup, for example,\n $ pg_basebackup -D backup\n3. Execute many write SQL to generate lots of WAL files. During that execution,\n perform CHECKPOINT to remove some WAL files from pg_wal directory.\n You need to repeat these until you confirm that there are many WAL files\n that have already been removed from pg_wal but exist only in archive area.\n 4. Shutdown the server.\n 5. Remove PGDATA and restore it from backup.\n 6. Set up restore_command.\n 7. (Forget to put recovery.signal)\n That is, in this scenario, you want to recover the database up to\n the latest WAL records in the archive area. So you need to start archive\n recovery by setting restore_command and putting recovery.signal.\n But the problem happens when you forget to put recovery.signal.\n 8. Start PostgreSQL server.\n 9. Shutdown the server while it's restoring archived WAL files and replaying\n them. At this point, you will notice that the archive recovery starts\n even though recovery.signal doesn't exist. So even archived WAL files\n are successfully restored at this step.\n 10. Restart PostgreSQL server. 
Since neither backup_label nor recovery.signal\n    exist, crash recovery starts and fails to restore the archived WAL files.\n    So you fail to recover the database up to the latest WAL record\nin archive\n    directory. The recovery will finish at an early point.\n\n> > One idea to fix this issue is to make the above step #3 remember that\n> > backup_label existed, in pg_control. Then we should make the subsequent\n> > recovery enter an archive recovery mode if pg_control indicates that\n> > even if neither backup_label nor recovery.signal exist. Thought?\n>\n> That seems pretty invasive to me at this stage.  I'd like to reproduce\n> it and see if there are alternatives.\n>\n> Also, are you sure this is a new behavior?\n\nIn v11 or before, if backup_label exists but not recovery.conf,\nthe startup process doesn't enter an archive recovery mode. It starts\ncrash recovery in that case. So the behavior is somewhat different\nbetween versions.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 27 Sep 2019 17:34:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery starting when backup_label exists,\n but not recovery.signal"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 4:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Sep 27, 2019 at 3:36 AM David Steele <david@pgmasters.net> wrote:\n> >\n> > On 9/24/19 1:25 AM, Fujii Masao wrote:\n> > >\n> > > When backup_label exists, the startup process enters archive recovery mode\n> > > even if recovery.signal file doesn't exist. In this case, the startup process\n> > > tries to retrieve WAL files by using restore_command. Then, at the beginning\n> > > of the archive recovery, the contents of backup_label are copied to pg_control\n> > > and backup_label file is removed. This would be an intentional behavior.\n> >\n> > > But I think the problem is that, if the server shuts down during that\n> > > archive recovery, the restart of the server may cause the recovery to fail\n> > > because neither backup_label nor recovery.signal exist and the server\n> > > doesn't enter an archive recovery mode. Is this intentional, too? Seems No.\n> > >\n> > > So the problematic scenario is;\n> > >\n> > > 1. the server starts with backup_label, but not recovery.signal.\n> > > 2. the startup process enters an archive recovery mode because\n> > > backup_label exists.\n> > > 3. the contents of backup_label are copied to pg_control and\n> > > backup_label is deleted.\n> >\n> > Do you mean deleted or renamed to backup_label.old?\n> >\n> > > 4. the server shuts down..\n> >\n> > This happens after the cluster has reached consistency?\n> >\n> > > 5. the server is restarted. neither backup_label nor recovery.signal exist.\n> > > 6. the startup process starts just crash recovery because neither backup_label\n> > > nor recovery.signal exist. Since it cannot retrieve WAL files from archival\n> > > area, it may fail.\n> >\n> > I tried a few ways to reproduce this but was not successful without\n> > manually removing WAL.\n>\n> Hmm me too. 
I think that since we enter crash recovery at step #6 we\n> don't retrieve WAL files from archival area.\n>\n> But I reproduced the problem Fujii-san mentioned that the restart of\n> the server during archive recovery causes to the crash recovery\n> instead of resuming the archive recovery.\n\nYes, it's strange and unexpected to start crash recovery\nwhen restarting archive recovery. Archive recovery should\nstart again in that case, I think.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 27 Sep 2019 17:41:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery starting when backup_label exists,\n but not recovery.signal"
},
{
"msg_contents": "On 9/27/19 4:41 AM, Fujii Masao wrote:\n> On Fri, Sep 27, 2019 at 4:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Fri, Sep 27, 2019 at 3:36 AM David Steele <david@pgmasters.net> wrote:\n>>>\n>>> On 9/24/19 1:25 AM, Fujii Masao wrote:\n>>>>\n>>>> When backup_label exists, the startup process enters archive recovery mode\n>>>> even if recovery.signal file doesn't exist. In this case, the startup process\n>>>> tries to retrieve WAL files by using restore_command. Then, at the beginning\n>>>> of the archive recovery, the contents of backup_label are copied to pg_control\n>>>> and backup_label file is removed. This would be an intentional behavior.\n>>>\n>>>> But I think the problem is that, if the server shuts down during that\n>>>> archive recovery, the restart of the server may cause the recovery to fail\n>>>> because neither backup_label nor recovery.signal exist and the server\n>>>> doesn't enter an archive recovery mode. Is this intentional, too? Seems No.\n>>>>\n>>>> So the problematic scenario is;\n>>>>\n>>>> 1. the server starts with backup_label, but not recovery.signal.\n>>>> 2. the startup process enters an archive recovery mode because\n>>>> backup_label exists.\n>>>> 3. the contents of backup_label are copied to pg_control and\n>>>> backup_label is deleted.\n>>>\n>>> Do you mean deleted or renamed to backup_label.old?\n>>>\n>>>> 4. the server shuts down..\n>>>\n>>> This happens after the cluster has reached consistency?\n>>>\n>>>> 5. the server is restarted. neither backup_label nor recovery.signal exist.\n>>>> 6. the startup process starts just crash recovery because neither backup_label\n>>>> nor recovery.signal exist. Since it cannot retrieve WAL files from archival\n>>>> area, it may fail.\n>>>\n>>> I tried a few ways to reproduce this but was not successful without\n>>> manually removing WAL.\n>>\n>> Hmm me too. 
I think that since we enter crash recovery at step #6 we\n>> don't retrieve WAL files from archival area.\n>>\n>> But I reproduced the problem Fujii-san mentioned that the restart of\n>> the server during archive recovery causes to the crash recovery\n>> instead of resuming the archive recovery.\n> \n> Yes, it's strange and unexpected to start crash recovery\n> when restarting archive recovery. Archive recovery should\n> start again in that case, I think.\n\n+1\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Sep 2019 13:56:06 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: recovery starting when backup_label exists, but not\n recovery.signal"
},
{
"msg_contents": "On 9/27/19 4:34 AM, Fujii Masao wrote:\n> On Fri, Sep 27, 2019 at 3:36 AM David Steele <david@pgmasters.net> wrote:\n>>\n>> On 9/24/19 1:25 AM, Fujii Masao wrote:\n>>>\n>>> When backup_label exists, the startup process enters archive recovery mode\n>>> even if recovery.signal file doesn't exist. In this case, the startup process\n>>> tries to retrieve WAL files by using restore_command. Then, at the beginning\n>>> of the archive recovery, the contents of backup_label are copied to pg_control\n>>> and backup_label file is removed. This would be an intentional behavior.\n>>\n>>> But I think the problem is that, if the server shuts down during that\n>>> archive recovery, the restart of the server may cause the recovery to fail\n>>> because neither backup_label nor recovery.signal exist and the server\n>>> doesn't enter an archive recovery mode. Is this intentional, too? Seems No.\n>>>\n>>> So the problematic scenario is;\n>>>\n>>> 1. the server starts with backup_label, but not recovery.signal.\n>>> 2. the startup process enters an archive recovery mode because\n>>> backup_label exists.\n>>> 3. the contents of backup_label are copied to pg_control and\n>>> backup_label is deleted.\n>>\n>> Do you mean deleted or renamed to backup_label.old?\n> \n> Sorry for the confusing wording..\n> I meant the following code that renames backup_label to .old, in StartupXLOG().\n\nRight, that makes sense.\n\n>>\n>> I assume you have a repro? Can you give more details?\n> \n> What I did is:\n> \n> 1. Start PostgreSQL server with WAL archiving enabled.\n> 2. Take an online backup by using pg_basebackup, for example,\n> $ pg_basebackup -D backup\n> 3. Execute many write SQL to generate lots of WAL files. During that execution,\n> perform CHECKPOINT to remove some WAL files from pg_wal directory.\n> You need to repeat these until you confirm that there are many WAL files\n> that have already been removed from pg_wal but exist only in archive area.\n> 4. 
Shutdown the server.\n> 5. Remove PGDATA and restore it from backup.\n> 6. Set up restore_command.\n> 7. (Forget to put recovery.signal)\n>     That is, in this scenario, you want to recover the database up to\n>     the latest WAL records in the archive area. So you need to start archive\n>     recovery by setting restore_command and putting recovery.signal.\n>     But the problem happens when you forget to put recovery.signal.\n> 8. Start PostgreSQL server.\n> 9. Shutdown the server while it's restoring archived WAL files and replaying\n>     them. At this point, you will notice that the archive recovery starts\n>     even though recovery.signal doesn't exist. So even archived WAL files\n>     are successfully restored at this step.\n> 10. Restart PostgreSQL server. Since neither backup_label nor recovery.signal\n>     exist, crash recovery starts and fails to restore the archived WAL files.\n>     So you fail to recover the database up to the latest WAL record\n> in archive\n>     directory. The recovery will finish at an early point.\n\nYes, I see it now.  I did not have enough WAL to make it work before, as\nI suspected.\n\n>>> One idea to fix this issue is to make the above step #3 remember that\n>>> backup_label existed, in pg_control. Then we should make the subsequent\n>>> recovery enter an archive recovery mode if pg_control indicates that\n>>> even if neither backup_label nor recovery.signal exist. Thought?\n>>\n>> That seems pretty invasive to me at this stage.  I'd like to reproduce\n>> it and see if there are alternatives.\n>>\n>> Also, are you sure this is a new behavior?\n> \n> In v11 or before, if backup_label exists but not recovery.conf,\n> the startup process doesn't enter an archive recovery mode. It starts\n> crash recovery in that case. So the behavior is somewhat different\n> between versions.\n\nAgreed.  Since recovery options can be used in the presence of\nbackup_label *or* recovery.signal (or standby.signal for that matter)\nthis does represent a change in behavior. 
And it doesn't appear to be a\nbeneficial change.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Sep 2019 14:01:11 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: recovery starting when backup_label exists, but not\n recovery.signal"
},
{
"msg_contents": "On 2019-09-27 10:34, Fujii Masao wrote:\n>> Also, are you sure this is a new behavior?\n> In v11 or before, if backup_label exists but not recovery.conf,\n> the startup process doesn't enter an archive recovery mode. It starts\n> crash recovery in that case. So the bahavior is somewhat different\n> between versions.\n\nCan you bisect this? I have traced through xlog.c in both versions and\nI don't see how this logic is any different in any obvious way.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Sep 2019 21:35:33 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery starting when backup_label exists, but not\n recovery.signal"
},
{
"msg_contents": "Hi Peter,\n\nOn 9/27/19 3:35 PM, Peter Eisentraut wrote:\n> On 2019-09-27 10:34, Fujii Masao wrote:\n>>> Also, are you sure this is a new behavior?\n>> In v11 or before, if backup_label exists but not recovery.conf,\n>> the startup process doesn't enter an archive recovery mode. It starts\n>> crash recovery in that case. So the bahavior is somewhat different\n>> between versions.\n> \n> Can you bisect this? I have traced through xlog.c in both versions and\n> I don't see how this logic is any different in any obvious way.\n\nWhat I've been seeing is that the underlying logic isn't different but\nthere are more ways to get into it.\n\nPreviously, there was no archive/targeted recovery without\nrecovery.conf, but now there are several ways to get to archive/targeted\nrecovery, i.e., making the recovery settings GUCs has bypassed controls\nthat previously had limited how they could be used and when.\n\nThe issues on the other thread [1], at least, were all introduced in\n2dedf4d9.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n[1]\nhttps://www.postgresql.org/message-id/flat/e445616d-023e-a268-8aa1-67b8b335340c%40pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Sep 2019 15:57:27 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: recovery starting when backup_label exists, but not\n recovery.signal"
}
] |
[
{
"msg_contents": "Dear hackers,\n\nPostgreSQL 12 documentation states that the minimum required version of\nOpenSSL is 0.9.8. However, I was unable to compile current\nPGPRO_12_STABLE with OpenSSL 0.9.8j (from SLES 11sp4).\n\n\n-fno-strict-aliasing -fwrapv -g -O2 -I../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2  -c -o be-secure-openssl.o be-secure-openssl.c\nbe-secure-openssl.c: In function ‘SSL_CTX_set_min_proto_version’:\nbe-secure-openssl.c:1340: error: ‘SSL_OP_NO_TLSv1_1’ undeclared (first use in this function)\nbe-secure-openssl.c:1340: error: (Each undeclared identifier is reported only once\nbe-secure-openssl.c:1340: error: for each function it appears in.)\nbe-secure-openssl.c:1344: error: ‘SSL_OP_NO_TLSv1_2’ undeclared (first use in this function)\nbe-secure-openssl.c: In function ‘SSL_CTX_set_max_proto_version’:\nbe-secure-openssl.c:1361: error: ‘SSL_OP_NO_TLSv1_1’ undeclared (first use in this function)\nbe-secure-openssl.c:1365: error: ‘SSL_OP_NO_TLSv1_2’ undeclared (first use in this function)\nmake: *** [be-secure-openssl.o] Error 1\n\n\nThe problem is that some code in src/backend/libpq/be-secure-openssl.c\nassumes that if preprocessor symbols TLS1_1_VERSION and TLS1_2_VERSION\nare defined in the openssl headers, corresponding versions of TLS are\nsupported by the library.\n\nIt is not so. Here is an excerpt from the tls1.h header file of openssl\n0.9.8j\n\n#define TLS1_VERSION     0x0301\n#define TLS1_1_VERSION     0x0302\n#define TLS1_2_VERSION     0x0303\n/* TLS 1.1 and 1.2 are not supported by this version of OpenSSL, so\n * TLS_MAX_VERSION indicates TLS 1.0 regardless of the above\n * definitions. 
(s23_clnt.c and s23_srvr.c have an OPENSSL_assert()\n * check that would catch the error if TLS_MAX_VERSION was too low.)\n */\n#define TLS_MAX_VERSION TLS1_VERSION\n\nReplacing all \n\n#ifdef TLS1_1_VERSION\n\nwith\n\n#if defined(TLS1_1_VERSION) && TLS1_1_VERSION <= TLS_MAX_VERSION\n\nand analogue for TLS1_2_VERSION fixes the problem.\n\nReally, problem is that symbol SSL_OP_NO_TLSv1_1 (and 1_2 accordingly)\nmight be undefined even if TLS1_1_VERSION defined. \n\nReplacing \n\n#ifdef TLS1_1_VERSION\n\nwith \n\n#ifdef SSL_OP_NO_TLSv1_1 \n\nseems to be correct solution for two of three #ifdef TLS1_1_VERSION\nstatements in be-secure-openssl.c, because this symbol is used inside\n#ifdef block.\n\nBut there is third (first from start of file) one.\n...\n case PG_TLS1_1_VERSION:\n#ifdef TLS1_1_VERSION\n return TLS1_1_VERSION;\n#else\n break;\n#endif\n...\n(line 1290). In this case check for TLS1_1_VERSION <= TLS_MAX_VERSION\nseems to be more self-explanatory, than check for somewhat unrelated \nsymbol SSL_OP_NO_TLSv1_1\n \n\n-- \n\n\n\n",
"msg_date": "Tue, 24 Sep 2019 10:18:59 +0300",
"msg_from": "Victor Wagner <vitus@wagner.pp.ru>",
"msg_from_op": true,
"msg_subject": "PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 10:18:59AM +0300, Victor Wagner wrote:\n> PostgreSQL 12 documentation states that the minimum required version of\n> OpenSSL is 0.9.8. However, I was unable to compile current\n> PGPRO_12_STABLE with OpenSSL 0.9.8j (from SLES 11sp4).\n\nI can reproduce that with REL_12_STABLE and the top of\nOpenSSL_0_9_8-stable from OpenSSL's git.\n\n> It is not so. Here is an excerpt from the tls1.h header file of openssl\n> 0.9.8j\n> \n> #define TLS1_VERSION     0x0301\n> #define TLS1_1_VERSION     0x0302\n> #define TLS1_2_VERSION     0x0303\n> /* TLS 1.1 and 1.2 are not supported by this version of OpenSSL, so\n>  * TLS_MAX_VERSION indicates TLS 1.0 regardless of the above\n>  * definitions.  (s23_clnt.c and s23_srvr.c have an OPENSSL_assert()\n>  * check that would catch the error if TLS_MAX_VERSION was too low.)\n>  */\n> #define TLS_MAX_VERSION TLS1_VERSION\n\nIndeed, we rely currently on a false assumption that the version is\nsupported if the object is defined.  That's clearly wrong.\n\n> Replacing all \n> \n> #ifdef TLS1_1_VERSION\n> \n> with\n> \n> #if defined(TLS1_1_VERSION) && TLS1_1_VERSION <= TLS_MAX_VERSION\n> \n> and analogue for TLS1_2_VERSION fixes the problem.\n\nThat sounds like a plan. \n\n> Really, problem is that symbol SSL_OP_NO_TLSv1_1 (and 1_2 accordingly)\n> might be undefined even if TLS1_1_VERSION defined. \n> \n> Replacing \n> \n> #ifdef TLS1_1_VERSION\n> \n> with \n> \n> #ifdef SSL_OP_NO_TLSv1_1\n\nHmm.  Wouldn't it be better to check if the maximum version of TLS is\nsupported and if SSL_OP_NO_TLSv1_1 is defined (same for 1.2)?\n\n> But there is third (first from start of file) one.\n> ...\n>     case PG_TLS1_1_VERSION:\n> #ifdef TLS1_1_VERSION\n>         return TLS1_1_VERSION;\n> #else\n>         break;\n> #endif\n> ...\n> (line 1290). In this case check for TLS1_1_VERSION <= TLS_MAX_VERSION\n> seems to be more self-explanatory, than check for somewhat unrelated \n> symbol SSL_OP_NO_TLSv1_1\n\nThat sounds right. 
Victor, would you like to write a patch?\n--\nMichael",
"msg_date": "Tue, 24 Sep 2019 18:49:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On Tue, 24 Sep 2019 18:49:17 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Sep 24, 2019 at 10:18:59AM +0300, Victor Wagner wrote:\n> > PostgreSQL 12 documentation states that the minimum required version of\n> > OpenSSL is 0.9.8. However, I was unable to compile current\n> > PGPRO_12_STABLE with OpenSSL 0.9.8j (from SLES 11sp4).\n> \n> I can reproduce that with REL_12_STABLE and the top of\n> OpenSSL_0_9_8-stable from OpenSSL's git.\n> \n> > Replacing all \n> > \n> > #ifdef TLS1_1_VERSION\n> > \n> > with\n> > \n> > #if defined(TLS1_1_VERSION) && TLS1_1_VERSION <= TLS_MAX_VERSION\n> > \n> > and analogue for TLS1_2_VERSION fixes the problem.\n> \n> That sounds like a plan. \n[skip] \n> > ...\n> > (line 1290). In this case check for TLS1_1_VERSION <=\n> > TLS_MAX_VERSION seems to be more self-explanatory, than check for\n> > somewhat unrelated symbol SSL_OP_NO_TLSv1_1\n> \n> That sounds right.  Victor, would you like to write a patch?\n\nI'm attaching a patch which uses the solution mentioned above.\nIt seems that the check for SSL_OP_NO_TLSvX_Y is redundant if \nwe are checking for TLS_MAX_VERSION.\n--",
"msg_date": "Tue, 24 Sep 2019 13:07:31 +0300",
"msg_from": "Victor Wagner <vitus@wagner.pp.ru>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On 2019-Sep-24, Victor Wagner wrote:\n\n> Dear hackers,\n> \n> PostgreSQL 12 documentation states that the minimum required version of\n> OpenSSL is 0.9.8. However, I was unable to compile current\n> PGPRO_12_STABLE with OpenSSL 0.9.8j (from SLES 11sp4).\n\n(Nice branch name.)  I wonder if we should really continue to support\nOpenSSL 0.9.8.  That branch was abandoned by the OpenSSL dev group in\n2015 ... and I wouldn't want to assume that there are no security\nproblems fixed in the meantime.  Why shouldn't we drop support for that\ngoing forward, raising our minimum required OpenSSL version to be at\nleast something in the 1.0 branch?\n\n(I'm not entirely sure about minor version numbers in OpenSSL -- it\nseems 1.0.2 is still being maintained, but 1.0.0 itself was also\nabandoned in 2016, as was 1.0.1.  As far as I understand they use the\nalphabetical sequence *after* the three-part version number in the way\nwe use minor number; so 1.0.1u (2016) is the last there, and 1.0.2t is a\nrecent one in the maintained branch.\n\nAlong the same lines, 0.9.8j was released in Jan 2009.  The last in\n0.9.8 was 0.9.8zi in December 2015.)\n\nAnyway I suppose it's not impossible that third parties are still\nmaintaining their 1.0.0 branch, but I doubt anyone cares for 0.9.8 with\nPostgres 12 ... particularly since SUSE themselves suggest not to use\nthe packaged OpenSSL for their stuff but rather stick to NSS.  That\nsaid, in 2014 (!!) 
SUSE released OpenSSL 1.0.1 separately, for use with\nSLES 11:\nhttps://www.suse.com/c/introducing-the-suse-linux-enterprise-11-security-module/\nWho would use the already obsolete SLES 11 (general support ended in\nMarch 2019, though extended support ends in 2022) with Postgres 12?\nThat seems insane.\n\nAll that being said, I don't oppose to this patch, since it seems a\nquick way to get out of the immediate trouble.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Sep 2019 12:13:07 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> ... I wonder if we should really continue to support\n> OpenSSL 0.9.8.\n\nFair question, but post-rc1 is no time to be moving that goalpost\nfor the v12 branch.\n\n> Anyway I suppose it's not impossible that third parties are still\n> maintaining their 1.0.0 branch,\n\nAnother data point on that is that Red Hat is still supporting\n1.0.1e in RHEL6. I don't think we should assume that just because\nOpenSSL upstream has dropped support for a branch, it no longer\nexists in the wild.\n\nHaving said that, if it makes our lives noticeably easier to\ndrop support for 0.9.8 in HEAD, I won't stand in the way.\n\n(We should survey the buildfarm and see what the older critters\nare running, perhaps.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Sep 2019 11:25:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "Victor Wagner <vitus@wagner.pp.ru> writes:\n> I'm attaching a patch which uses the solution mentioned above.\n> It seems that the check for SSL_OP_NO_TLSvX_Y is redundant if \n> we are checking for TLS_MAX_VERSION.\n\nOne thing I'm wondering is if it's safe to assume that TLS_MAX_VERSION\nwill be defined whenever these other symbols are.  Looking in an\n0.9.8x install tree, that doesn't seem to define any of them; while\nin 1.0.1e I see\n\n./tls1.h:#define TLS1_1_VERSION     0x0302\n./tls1.h:#define TLS1_2_VERSION     0x0303\n./tls1.h:#define TLS_MAX_VERSION     TLS1_2_VERSION\n\nSo the patch seems okay for these two versions, but I have no data about\nintermediate OpenSSL versions.\n\nBTW, the spacing in this patch seems rather random.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Sep 2019 12:43:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On 2019-09-24 09:18, Victor Wagner wrote:\n> The problem is that some code in src/backend/libpq/be-secure-openssl.c\n> assumes that if preprocessor symbols TLS1_1_VERSION and TLS1_2_VERSION\n> are defined in the openssl headers, corresponding versions of TLS are\n> supported by the library.\n> \n> It is not so. Here is an excerpt from the tls1.h header file of openssl\n> 0.9.8j\n> \n> #define TLS1_VERSION     0x0301\n> #define TLS1_1_VERSION     0x0302\n> #define TLS1_2_VERSION     0x0303\n> /* TLS 1.1 and 1.2 are not supported by this version of OpenSSL, so\n>  * TLS_MAX_VERSION indicates TLS 1.0 regardless of the above\n>  * definitions.  (s23_clnt.c and s23_srvr.c have an OPENSSL_assert()\n>  * check that would catch the error if TLS_MAX_VERSION was too low.)\n>  */\n> #define TLS_MAX_VERSION TLS1_VERSION\n\nThat's not actually what this file looks like in the upstream release.\nIt looks like the packagers must have patched in the protocol codes for\nTLS 1.1 and 1.2 themselves.  Then they should also add the corresponding\nSSL_OP_NO_* flags.  AFAICT, these pairs of symbols are always added\ntogether in upstream commits.\n\n-- \nPeter Eisentraut              http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Sep 2019 23:52:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 11:25:30AM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> ... I wonder if we should really continue to support\n>> OpenSSL 0.9.8.\n> \n> Fair question, but post-rc1 is no time to be moving that goalpost\n> for the v12 branch.\n\nYeah.  I worked in the past with SUSE-based appliances, and I recall\nthat those folks have been maintaining their own patched version of\nOpenSSL 0.9.8 with a bunch of custom patches, some of them coming from\nnewer versions of upstream to take care of security issues with 0.9.8.\nSo even if they call their version 0.9.8j, I think that they include\nmany more security-related fixes than their version string suggests.\nI don't know to what extent, though.\n\n>> Anyway I suppose it's not impossible that third parties are still\n>> maintaining their 1.0.0 branch,\n> \n> Another data point on that is that Red Hat is still supporting\n> 1.0.1e in RHEL6.  I don't think we should assume that just because\n> OpenSSL upstream has dropped support for a branch, it no longer\n> exists in the wild.\n> \n> Having said that, if it makes our lives noticeably easier to\n> drop support for 0.9.8 in HEAD, I won't stand in the way.\n\nAgreed.  There is an argument for dropping support for OpenSSL 0.9.8\nin 13~, but I don't agree with doing that in 12.  Let's just fix the\nissue.\n--\nMichael",
"msg_date": "Wed, 25 Sep 2019 15:55:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 11:52:48PM +0200, Peter Eisentraut wrote:\n> That's not actually what this file looks like in the upstream release.\n> It looks like the packagers must have patched in the protocol codes for\n> TLS 1.1 and 1.2 themselves. Then they should also add the corresponding\n> SSL_OP_NO_* flags. AFAICT, these pairs of symbols are always added\n> together in upstream commits.\n\nYes, they did so. I see those three fields as of 6287fa5 from\nupstream which is the release tag for 0.9.8j:\n#define TLS1_VERSION 0x0301\n#define TLS1_VERSION_MAJOR 0x03\n#define TLS1_VERSION_MINOR 0x01\n\nHowever if you look at the top branch OpenSSL_0_9_8-stable (7474341),\nthen you would notice that ssl/tls1.h does something completely\ndifferent and defines TLS_MAX_VERSION. So everything is in line to\nsay that the version of OpenSSL in SUSE labelled 0.9.8j is using\nsomething compatible with the latest version of upstream 0.9.8zi. I\nthink that we should just stick for simplicity with the top of their\nbranch instead of trying to be compatible with 0.9.8j because Postgres\n12 has not been released yet, hence if one tries to compile Postgres\n12 with OpenSSL 0.9.8j then they would get a compilation failure, and\nwe could just tell them to switch to the latest version of upstream\nfor 0.9.8. That's something they should really do anyway to take\ncare of various security issues on this branch. Well, if that happens\nthey should rather upgrade to at least 1.0.2 anyway :)\n\nSo I agree with the proposal to rely on the presence of\nTLS_MAX_VERSION, and base our decision-making on that.\n--\nMichael",
"msg_date": "Thu, 26 Sep 2019 13:53:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 12:43:17PM -0400, Tom Lane wrote:\n> One thing I'm wondering is if it's safe to assume that TLS_MAX_VERSION\n> will be defined whenever these other symbols are. Looking in an\n> 0.9.8x install tree, that doesn't seem to define any of them; while\n> in 1.0.1e I see\n\nYeah, I could personally live with the argument of simplicity and just\nsay that trying to compile v12 with any version older than 0.9.8zc or\nany version that does not have those symbols just does not work, and\nthat one needs to use the top of the released versions.\n\n> ./tls1.h:#define TLS1_1_VERSION 0x0302\n> ./tls1.h:#define TLS1_2_VERSION 0x0303\n> ./tls1.h:#define TLS_MAX_VERSION TLS1_2_VERSION\n>\n> So the patch seems okay for these two versions, but I have no data about\n> intermediate OpenSSL versions.\n\nMore precisely, all those fields have been added by this upstream\ncommit, so the fields are present since 0.9.8zc:\ncommit: c6a876473cbff0fd323c8abcaace98ee2d21863d\nauthor: Bodo Moeller <bodo@openssl.org>\ndate: Wed, 15 Oct 2014 04:18:29 +0200\nSupport TLS_FALLBACK_SCSV.\n\n> BTW, the spacing in this patch seems rather random.\n\nIndeed.\n\nNow that I think about it, another method would be to rely on the fact\nthat a given version of OpenSSL does not support TLS 1.1 and 1.2. So\nwe could also just add checks based on OPENSSL_VERSION_NUMBER and be\ndone with it. And actually, looking at their tree TLS 1.1 and 2.2 are\nnot supported in 1.0.0 either. 1.0.1, 1.0.2, 1.1.0 and HEAD do\nsupport them, but not TLS 1.3.\n\nI would still prefer relying on TLS_MAX_VERSION though, as that's more\nportable for future decisions, like the introduction of TLS1_3_VERSION\nfor which we have already some logic in be-secure-openssl.c. 
And\nupdating this stuff would very likely get forgotten once OpenSSL adds\nsupport for TLS 1.3...\n\nThere is another issue in the patch:\n-#ifdef TLS1_3_VERSION\n+#if defined(TLS1_3_VERSION) && TLS1_2_VERSION <= TLS_MAX_VERSION\nThe second part of the if needs to use TLS1_3_VERSION.\n\nI would also add more brackets around the extra conditions for\nreadability.\n--\nMichael",
"msg_date": "Thu, 26 Sep 2019 14:25:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Now that I think about it, another method would be to rely on the fact\n> that a given version of OpenSSL does not support TLS 1.1 and 1.2. So\n> we could also just add checks based on OPENSSL_VERSION_NUMBER and be\n> done with it.\n\nNo, that way madness lies. We *know* that there are lots of\nvendor-patched versions of OpenSSL out there, so that the nominal\nversion number isn't really going to tell us what the package can do.\n\nWhat I'm concerned about at the moment is Peter's comment upthread\nthat what we seem to be dealing with here is a broken vendor patch,\nnot any officially-released OpenSSL version at all. Is it our job\nto work around that situation, rather than pushing the vendor to\nfix their patch?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Sep 2019 02:03:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 02:03:12AM -0400, Tom Lane wrote:\n> What I'm concerned about at the moment is Peter's comment upthread\n> that what we seem to be dealing with here is a broken vendor patch,\n> not any officially-released OpenSSL version at all. Is it our job\n> to work around that situation, rather than pushing the vendor to\n> fix their patch?\n\nYes, rather broken. SUSE got the header visibly right, at least the\nversion string is not. The best solution in our favor would be that\nthey actually fix their stuff :)\n\nAnd OpenSSL is also to blame by not handling those flags consistently\nin a stable branch..\n--\nMichael",
"msg_date": "Thu, 26 Sep 2019 15:56:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On 2019-09-26 06:53, Michael Paquier wrote:\n> So I agree with the proposal to rely on the presence of\n> TLS_MAX_VERSION, and base our decision-making on that.\n\nBut then there is this:\n\ncommit 04cd70c6899c6b36517b2b07d7a12b2cceba1bef\nAuthor: Kurt Roeckx <kurt@roeckx.be>\nDate: Tue Sep 18 22:17:14 2018\n\n Deprecate TLS_MAX_VERSION, DTLS_MAX_VERSION and DTLS_MIN_VERSION\n\n Fixes: #7183\n\n Reviewed-by: Matt Caswell <matt@openssl.org>\n GH: #7260\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Sep 2019 16:43:33 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "Here is my proposed patch, currently completely untested.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 26 Sep 2019 18:24:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 04:43:33PM +0200, Peter Eisentraut wrote:\n> On 2019-09-26 06:53, Michael Paquier wrote:\n> > So I agree with the proposal to rely on the presence of\n> > TLS_MAX_VERSION, and base our decision-making on that.\n> \n> But then there is this:\n> \n> commit 04cd70c6899c6b36517b2b07d7a12b2cceba1bef\n> Author: Kurt Roeckx <kurt@roeckx.be>\n> Date: Tue Sep 18 22:17:14 2018\n> \n> Deprecate TLS_MAX_VERSION, DTLS_MAX_VERSION and DTLS_MIN_VERSION\n> \n> Fixes: #7183\n> \n> Reviewed-by: Matt Caswell <matt@openssl.org>\n> GH: #7260\n\nOuch. I missed that part, thanks! That's included as part of current\nHEAD, so even 1.1.1 still has the flags.\n--\nMichael",
"msg_date": "Fri, 27 Sep 2019 09:37:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 06:24:22PM +0200, Peter Eisentraut wrote:\n> Here is my proposed patch, currently completely untested.\n\nI have tested compilation of REL_12_STABLE with the top of OpenSSL\n0.9.8, 1.0.0, 1.0.1, 1.0.2, 1.1.0 and 1.1.1. Our SSL tests also pass\nin all the setups I have tested.\n\nYour patch does not issue a ereport(LOG/FATAL) in the event of a\nfailure with SSL_CTX_set_max_proto_version(), which is something done\nwhen ssl_protocol_version_to_openssl()'s result is -1. Wouldn't it be\nbetter to report that properly to the user?\n\nSome more nits about the patch I have. Would it be worth copying the\ncomment from min_proto_version() to SSL_CTX_set_max_proto_version()?\nI would add a newline before the comment block as well.\n\nNote: We have a failure with ssl/t/002_scram.pl because of the\nintroduction of the recent channel_binding parameter if you try to run \nthe SSL tests on HEAD with at least 0.9.8 as we forgot to add a\nconditional check for HAVE_X509_GET_SIGNATURE_NID as c3d41cc did.\nI'll send a patch for that separately. That's why I have checked the\npatch only with REL_12_STABLE.\n--\nMichael",
"msg_date": "Fri, 27 Sep 2019 10:51:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On 2019-09-27 03:51, Michael Paquier wrote:\n> I have tested compilation of REL_12_STABLE with the top of OpenSSL\n> 0.9.8, 1.0.0, 1.0.1, 1.0.2, 1.1.0 and 1.1.1. Our SSL tests also pass\n> in all the setups I have tested.\n\ngreat\n\n> Your patch does not issue a ereport(LOG/FATAL) in the event of a\n> failure with SSL_CTX_set_max_proto_version(), which is something done\n> when ssl_protocol_version_to_openssl()'s result is -1. Wouldn't it be\n> better to report that properly to the user?\n\nOur SSL_CTX_set_max_proto_version() is a reimplementation of a function\nthat exists in newer versions of OpenSSL, so it has a specific error\nbehavior. Our implementation should probably not diverge from it too much.\n\n> Some more nits about the patch I have. Would it be worth copying the\n> comment from min_proto_version() to SSL_CTX_set_max_proto_version()?\n> I would add a newline before the comment block as well.\n\nok\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Sep 2019 15:50:57 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 03:50:57PM +0200, Peter Eisentraut wrote:\n> On 2019-09-27 03:51, Michael Paquier wrote:\n>> Your patch does not issue a ereport(LOG/FATAL) in the event of a\n>> failure with SSL_CTX_set_max_proto_version(), which is something done\n>> when ssl_protocol_version_to_openssl()'s result is -1. Wouldn't it be\n>> better to report that properly to the user?\n> \n> Our SSL_CTX_set_max_proto_version() is a reimplementation of a function\n> that exists in newer versions of OpenSSL, so it has a specific error\n> behavior. Our implementation should probably not diverge from it too much.\n\nI agree with this point. Now my argument is about logging LOG or\nFATAL within be_tls_init() after the two OpenSSL functions (or our\nwrappers) SSL_CTX_set_min/max_proto_version are called.\n--\nMichael",
"msg_date": "Fri, 27 Sep 2019 23:20:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On 2019-09-27 16:20, Michael Paquier wrote:\n> On Fri, Sep 27, 2019 at 03:50:57PM +0200, Peter Eisentraut wrote:\n>> On 2019-09-27 03:51, Michael Paquier wrote:\n>>> Your patch does not issue a ereport(LOG/FATAL) in the event of a\n>>> failure with SSL_CTX_set_max_proto_version(), which is something done\n>>> when ssl_protocol_version_to_openssl()'s result is -1. Wouldn't it be\n>>> better to report that properly to the user?\n>>\n>> Our SSL_CTX_set_max_proto_version() is a reimplementation of a function\n>> that exists in newer versions of OpenSSL, so it has a specific error\n>> behavior. Our implementation should probably not diverge from it too much.\n> \n> I agree with this point. Now my argument is about logging LOG or\n> FATAL within be_tls_init() after the two OpenSSL functions (or our\n> wrappers) SSL_CTX_set_min/max_proto_version are called.\n\ncommitted with that\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 28 Sep 2019 22:52:18 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
},
{
"msg_contents": "On Sat, Sep 28, 2019 at 10:52:18PM +0200, Peter Eisentraut wrote:\n> committed with that\n\nThanks, LGTM.\n--\nMichael",
"msg_date": "Sun, 29 Sep 2019 10:47:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL12 and older versions of OpenSSL"
}
]
[
{
"msg_contents": "Hi,\n\nsrc/backend/replication/logical/proto.c\naction = pq_getmsgbyte(in);\nif (action != 'N')\n elog(ERROR, \"expected new tuple but got %d\",\n action);\n\n\"%d\" in the above message should be \"%c\" because the type of\nthe variable \"action\" is char? There are other log messages that\n\"%c\" is used for such variable, in proto.c. Seems the above is\nonly message that \"%d\" is used for such variable.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Tue, 24 Sep 2019 18:41:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "log message in proto.c"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 5:41 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> src/backend/replication/logical/proto.c\n> action = pq_getmsgbyte(in);\n> if (action != 'N')\n> elog(ERROR, \"expected new tuple but got %d\",\n> action);\n>\n> \"%d\" in the above message should be \"%c\" because the type of\n> the variable \"action\" is char? There are other log messages that\n> \"%c\" is used for such variable, in proto.c. Seems the above is\n> only message that \"%d\" is used for such variable.\n\nThe potential problem with using %c to print characters is that the\ncharacter might be a null byte or something else that ends up making\nthe log file invalid under the relevant encoding.\n\nHowever, if the goal of using %d is to protect against such problems,\nit has to be done consistently.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 24 Sep 2019 13:15:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: log message in proto.c"
}
]
[
{
"msg_contents": "This one just came up on IRC:\n\ncreate table tltest(a integer, b text, c text, d text);\ninsert into tltest\n select i, repeat('foo',100), repeat('foo',100), repeat('foo',100)\n from generate_series(1,100000) i;\nset log_temp_files=0;\nset client_min_messages=log;\n\nselect count(a+c) from (select a, count(*) over () as c from tltest s1) s;\nLOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp82513.3\", size 92600000\n\nUsing 92MB of disk for one integer seems excessive; the reason is clear\nfrom the explain:\n\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=16250.00..16250.01 rows=1 width=8) (actual time=1236.260..1236.260 rows=1 loops=1)\n Output: count((tltest.a + (count(*) OVER (?))))\n -> WindowAgg (cost=0.00..14750.00 rows=100000 width=12) (actual time=1193.846..1231.216 rows=100000 loops=1)\n Output: tltest.a, count(*) OVER (?)\n -> Seq Scan on public.tltest (cost=0.00..13500.00 rows=100000 width=4) (actual time=0.006..14.361 rows=100000 loops=1)\n Output: tltest.a, tltest.b, tltest.c, tltest.d\n\nso the whole width of the table is being stored in the tuplestore used\nby the windowagg.\n\nIn create_windowagg_plan, we have:\n\n /*\n * WindowAgg can project, so no need to be terribly picky about child\n * tlist, but we do need grouping columns to be available\n */\n subplan = create_plan_recurse(root, best_path->subpath, CP_LABEL_TLIST);\n\nObviously we _do_ need to be more picky about this; it seems clear that\nusing CP_SMALL_TLIST | CP_LABEL_TLIST would be a win in many cases.\nOpinions?\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 24 Sep 2019 12:49:54 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Excessive disk usage in WindowAgg"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> Using 92MB of disk for one integer seems excessive; the reason is clear\n> from the explain:\n> ...\n> so the whole width of the table is being stored in the tuplestore used\n> by the windowagg.\n\n> In create_windowagg_plan, we have:\n\n> /*\n> * WindowAgg can project, so no need to be terribly picky about child\n> * tlist, but we do need grouping columns to be available\n> */\n> subplan = create_plan_recurse(root, best_path->subpath, CP_LABEL_TLIST);\n\n> Obviously we _do_ need to be more picky about this; it seems clear that\n> using CP_SMALL_TLIST | CP_LABEL_TLIST would be a win in many cases.\n> Opinions?\n\nSeems reasonable to me, do you want to do the honors?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Nov 2019 12:18:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Excessive disk usage in WindowAgg"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-04 12:18:48 -0500, Tom Lane wrote:\n> Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> > Using 92MB of disk for one integer seems excessive; the reason is clear\n> > from the explain:\n> > ...\n> > so the whole width of the table is being stored in the tuplestore used\n> > by the windowagg.\n> \n> > In create_windowagg_plan, we have:\n> \n> > /*\n> > * WindowAgg can project, so no need to be terribly picky about child\n> > * tlist, but we do need grouping columns to be available\n> > */\n> > subplan = create_plan_recurse(root, best_path->subpath, CP_LABEL_TLIST);\n> \n> > Obviously we _do_ need to be more picky about this; it seems clear that\n> > using CP_SMALL_TLIST | CP_LABEL_TLIST would be a win in many cases.\n> > Opinions?\n> \n> Seems reasonable to me, do you want to do the honors?\n\nI was briefly wondering if this ought to be backpatched. -0 here, but...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Nov 2019 10:06:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Excessive disk usage in WindowAgg"
},
{
"msg_contents": ">>>>> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n\n >>> Obviously we _do_ need to be more picky about this; it seems clear\n >>> that using CP_SMALL_TLIST | CP_LABEL_TLIST would be a win in many\n >>> cases. Opinions?\n\n >> Seems reasonable to me, do you want to do the honors?\n\n Andres> I was briefly wondering if this ought to be backpatched. -0\n Andres> here, but...\n\nUh, it seems obvious to me that it should be backpatched?\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 04 Nov 2019 19:04:52 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: Excessive disk usage in WindowAgg"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-04 19:04:52 +0000, Andrew Gierth wrote:\n> >>>>> \"Andres\" == Andres Freund <andres@anarazel.de> writes:\n> \n> >>> Obviously we _do_ need to be more picky about this; it seems clear\n> >>> that using CP_SMALL_TLIST | CP_LABEL_TLIST would be a win in many\n> >>> cases. Opinions?\n> \n> >> Seems reasonable to me, do you want to do the honors?\n> \n> Andres> I was briefly wondering if this ought to be backpatched. -0\n> Andres> here, but...\n> \n> Uh, it seems obvious to me that it should be backpatched?\n\nFine with me. But I don't think it's just plainly obvious, it's\nessentailly a change in query plans etc, and we've been getting more\nhesitant with those over time.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Nov 2019 11:11:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Excessive disk usage in WindowAgg"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-11-04 19:04:52 +0000, Andrew Gierth wrote:\n>> Uh, it seems obvious to me that it should be backpatched?\n\n> Fine with me. But I don't think it's just plainly obvious, it's\n> essentailly a change in query plans etc, and we've been getting more\n> hesitant with those over time.\n\nSince this is happening during create_plan(), it affects no planner\ndecisions; it's just a pointless inefficiency AFAICS. Back-patching\nseems fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Nov 2019 14:20:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Excessive disk usage in WindowAgg"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> On 2019-11-04 19:04:52 +0000, Andrew Gierth wrote:\n >>> Uh, it seems obvious to me that it should be backpatched?\n\n >> Fine with me. But I don't think it's just plainly obvious, it's\n >> essentailly a change in query plans etc, and we've been getting more\n >> hesitant with those over time.\n\n Tom> Since this is happening during create_plan(), it affects no\n Tom> planner decisions; it's just a pointless inefficiency AFAICS.\n Tom> Back-patching seems fine.\n\nI will deal with it then. (probably tomorrow or so)\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Mon, 04 Nov 2019 19:42:19 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: Excessive disk usage in WindowAgg"
}
]
[
{
"msg_contents": "I recently had to cut loose (pg_drop_replication_slot) a logical replica\nthat couldn't keep up and so was threatening to bring down the master.\n\nIn mopping up on the replica side, I couldn't just drop the subscription,\nbecause it couldn't drop the nonexistent slot on the master and so refused\nto work. So I had to do a silly little dance where I first disable the\nsubscription, then ALTER SUBSCRIPTION ... SET (slot_name = NONE), then drop\nit.\n\nWanting to clean up after itself is admirable, but if there is nothing to\nclean up, why should that be an error condition? Should this be an item on\nhttps://wiki.postgresql.org/wiki/Todo (to whatever extent that is still\nused).\n\nCheers,\n\nJeff\n\nI recently had to cut loose \n\n(pg_drop_replication_slot) a logical replica that couldn't keep up and so was threatening to bring down the master.In mopping up on the replica side, I couldn't just drop the subscription, because it couldn't drop the nonexistent slot on the master and so refused to work. So I had to do a silly little dance where I first disable the subscription, then ALTER SUBSCRIPTION ... SET (slot_name = NONE), then drop it.Wanting to clean up after itself is admirable, but if there is nothing to clean up, why should that be an error condition? Should this be an item on https://wiki.postgresql.org/wiki/Todo (to whatever extent that is still used).Cheers,Jeff",
"msg_date": "Tue, 24 Sep 2019 10:31:02 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "DROP SUBSCRIPTION with no slot"
},
{
"msg_contents": "On 2019-09-24 16:31, Jeff Janes wrote:\n> I recently had to cut loose (pg_drop_replication_slot) a logical replica\n> that couldn't keep up and so was threatening to bring down the master.\n> \n> In mopping up on the replica side, I couldn't just drop the\n> subscription, because it couldn't drop the nonexistent slot on the\n> master and so refused to work. So I had to do a silly little dance\n> where I first disable the subscription, then ALTER SUBSCRIPTION ... SET\n> (slot_name = NONE), then drop it.\n> \n> Wanting to clean up after itself is admirable, but if there is nothing\n> to clean up, why should that be an error condition?\n\nThe alternatives seem quite error prone to me. Better be explicit.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Sep 2019 23:25:12 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP SUBSCRIPTION with no slot"
},
{
"msg_contents": "This also seems to be a problem for somewhat fringe case of subscriptions created with connect=false option.\nThey cannot be dropped in an obvious way, without knowing the ALTER SUBSCRIPTION trick.\n\nFor example:\n\ncontrib_regression=# create subscription test_sub connection 'dbname=contrib_regression' publication test_pub with ( connect=false ); \nWARNING: tables were not subscribed, you will have to run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables\nCREATE SUBSCRIPTION\n\ncontrib_regression=# drop subscription test_sub; -- fails\nERROR: could not drop the replication slot \"test_sub\" on publisher\nDETAIL: The error was: ERROR: replication slot \"test_sub\" does not exist\n\ncontrib_regression=# alter subscription test_sub set ( slot_name=none );\nALTER SUBSCRIPTION\n\ncontrib_regression=# drop subscription test_sub; -- succeeds\nDROP SUBSCRIPTION\n\n\nNote that the publication was never refreshed.\nIt seems that the first DROP should succeed in the above case. \nOr at least some hint should be given how to fix this.\n\n\n\n\n> On 24 Sep 2019, at 23:25, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2019-09-24 16:31, Jeff Janes wrote:\n>> I recently had to cut loose (pg_drop_replication_slot) a logical replica\n>> that couldn't keep up and so was threatening to bring down the master.\n>> \n>> In mopping up on the replica side, I couldn't just drop the\n>> subscription, because it couldn't drop the nonexistent slot on the\n>> master and so refused to work. So I had to do a silly little dance\n>> where I first disable the subscription, then ALTER SUBSCRIPTION ... SET\n>> (slot_name = NONE), then drop it.\n>> \n>> Wanting to clean up after itself is admirable, but if there is nothing\n>> to clean up, why should that be an error condition?\n> \n> The alternatives seem quite error prone to me. 
Better be explicit.\n> \n> -- \n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n> \n> \n\n\nThis also seems to be a problem for somewhat fringe case of subscriptions created with connect=false option.They cannot be dropped in an obvious way, without knowing the ALTER SUBSCRIPTION trick.For example:contrib_regression=# create subscription test_sub connection 'dbname=contrib_regression' publication test_pub with ( connect=false ); WARNING: tables were not subscribed, you will have to run ALTER SUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tablesCREATE SUBSCRIPTIONcontrib_regression=# drop subscription test_sub; -- failsERROR: could not drop the replication slot \"test_sub\" on publisherDETAIL: The error was: ERROR: replication slot \"test_sub\" does not existcontrib_regression=# alter subscription test_sub set ( slot_name=none );ALTER SUBSCRIPTIONcontrib_regression=# drop subscription test_sub; -- succeedsDROP SUBSCRIPTIONNote that the publication was never refreshed.It seems that the first DROP should succeed in the above case. Or at least some hint should be given how to fix this.On 24 Sep 2019, at 23:25, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2019-09-24 16:31, Jeff Janes wrote:I recently had to cut loose (pg_drop_replication_slot) a logical replicathat couldn't keep up and so was threatening to bring down the master.In mopping up on the replica side, I couldn't just drop thesubscription, because it couldn't drop the nonexistent slot on themaster and so refused to work. So I had to do a silly little dancewhere I first disable the subscription, then ALTER SUBSCRIPTION ... SET(slot_name = NONE), then drop it.Wanting to clean up after itself is admirable, but if there is nothingto clean up, why should that be an error condition?The alternatives seem quite error prone to me. 
Better be explicit.-- Peter Eisentraut http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 25 Sep 2019 00:41:53 +0200",
"msg_from": "Ziga <ziga@ljudmila.org>",
"msg_from_op": false,
"msg_subject": "Re: DROP SUBSCRIPTION with no slot"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 5:25 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-09-24 16:31, Jeff Janes wrote:\n> > I recently had to cut loose (pg_drop_replication_slot) a logical replica\n> > that couldn't keep up and so was threatening to bring down the master.\n> >\n> > In mopping up on the replica side, I couldn't just drop the\n> > subscription, because it couldn't drop the nonexistent slot on the\n> > master and so refused to work. So I had to do a silly little dance\n> > where I first disable the subscription, then ALTER SUBSCRIPTION ... SET\n> > (slot_name = NONE), then drop it.\n> >\n> > Wanting to clean up after itself is admirable, but if there is nothing\n> > to clean up, why should that be an error condition?\n>\n> The alternatives seem quite error prone to me. Better be explicit.\n>\n\nIf you can connect to the master, and see that the slot already fails to\nexist, what is error prone about that?\n\nIf someone goes to the effort of setting up a different master, configures\nit to receive replica connections, and alters the subscription CONNECTION\nparameter on the replica to point to that poisoned master, will an error on\nthe DROP SUBSCRIPTION really force them to see the error of their ways, or\nwill they just succeed at explicitly doing the silly dance and finalize the\nprocess of shooting themselves in the foot via a roundabout mechanism?\n(And at the point the CONNECTION is changed, he is in the same boat even if\nhe doesn't try to drop the sub--either way the master has a dangling\nslot). I'm in favor of protecting a fool from his foolishness, except when\nit is annoying to the rest of us (Well, annoying to me, I guess. 
I don't\nknow yet about the rest of us).\n\nCheers,\n\nJeff\n\nOn Tue, Sep 24, 2019 at 5:25 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2019-09-24 16:31, Jeff Janes wrote:\n> I recently had to cut loose (pg_drop_replication_slot) a logical replica\n> that couldn't keep up and so was threatening to bring down the master.\n> \n> In mopping up on the replica side, I couldn't just drop the\n> subscription, because it couldn't drop the nonexistent slot on the\n> master and so refused to work. So I had to do a silly little dance\n> where I first disable the subscription, then ALTER SUBSCRIPTION ... SET\n> (slot_name = NONE), then drop it.\n> \n> Wanting to clean up after itself is admirable, but if there is nothing\n> to clean up, why should that be an error condition?\n\nThe alternatives seem quite error prone to me. Better be explicit.If you can connect to the master, and see that the slot already fails to exist, what is error prone about that?If someone goes to the effort of setting up a different master, configures it to receive replica connections, and alters the subscription CONNECTION parameter on the replica to point to that poisoned master, will an error on the DROP SUBSCRIPTION really force them to see the error of their ways, or will they just succeed at explicitly doing the silly dance and finalize the process of shooting themselves in the foot via a roundabout mechanism? (And at the point the CONNECTION is changed, he is in the same boat even if he doesn't try to drop the sub--either way the master has a dangling slot). I'm in favor of protecting a fool from his foolishness, except when it is annoying to the rest of us (Well, annoying to me, I guess. I don't know yet about the rest of us).Cheers,Jeff",
"msg_date": "Tue, 24 Sep 2019 19:07:02 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP SUBSCRIPTION with no slot"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 6:42 PM Ziga <ziga@ljudmila.org> wrote:\n\n> This also seems to be a problem for somewhat fringe case of subscriptions\n> created with connect=false option.\n> They cannot be dropped in an obvious way, without knowing the ALTER\n> SUBSCRIPTION trick.\n>\n> For example:\n>\n> contrib_regression=# create subscription test_sub connection\n> 'dbname=contrib_regression' publication test_pub with ( connect=false );\n>\n>\n> WARNING: tables were not subscribed, you will have to run ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables\n> CREATE SUBSCRIPTION\n>\n> contrib_regression=# drop subscription test_sub; -- fails\n> ERROR: could not drop the replication slot \"test_sub\" on publisher\n> DETAIL: The error was: ERROR: replication slot \"test_sub\" does not exist\n>\n> contrib_regression=# alter subscription test_sub set ( slot_name=none );\n> ALTER SUBSCRIPTION\n>\n> contrib_regression=# drop subscription test_sub; -- succeeds\n> DROP SUBSCRIPTION\n>\n>\n> Note that the publication was never refreshed.\n> It seems that the first DROP should succeed in the above case.\n> Or at least some hint should be given how to fix this.\n>\n\nThere is no HINT in the error message itself, but there is in the\ndocumentation, see note at end of\nhttps://www.postgresql.org/docs/current/sql-dropsubscription.html. I agree\nwith you that the DROP should just work in this case, even more so than in\nmy case. 
But if we go with the argument that doing that is too error\nprone, then do we want to include a HINT on how to be error prone more\nconveniently?\n\nCheers,\n\nJeff",
"msg_date": "Tue, 24 Sep 2019 19:22:03 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP SUBSCRIPTION with no slot"
},
{
"msg_contents": "> On 25 Sep 2019, at 01:22, Jeff Janes <jeff.janes@gmail.com> wrote:\n> \n> There is no HINT in the error message itself, but there is in the documentation, see note at end of https://www.postgresql.org/docs/current/sql-dropsubscription.html <https://www.postgresql.org/docs/current/sql-dropsubscription.html>. I agree with you that the DROP should just work in this case, even more so than in my case. But if we go with the argument that doing that is too error prone, then do we want to include a HINT on how to be error prone more conveniently?\n> \n> Cheers,\n> \n> Jeff\n\nAh. I missed that bit in the documentation!\n\nPerhaps a publication should remember, whether it actually created a replication slot and only try to remove it, if it did. Although that probably wouldn't help much in your case.\n\n\nŽ.\n\n\nOn 25 Sep 2019, at 01:22, Jeff Janes <jeff.janes@gmail.com> wrote:There is no HINT in the error message itself, but there is in the documentation, see note at end of https://www.postgresql.org/docs/current/sql-dropsubscription.html. I agree with you that the DROP should just work in this case, even more so than in my case. But if we go with the argument that doing that is too error prone, then do we want to include a HINT on how to be error prone more conveniently?Cheers,JeffAh. I missed that bit in the documentation!Perhaps a publication should remember, whether it actually created a replication slot and only try to remove it, if it did. Although that probably wouldn't help much in your case.Ž.",
"msg_date": "Wed, 25 Sep 2019 19:55:25 +0200",
"msg_from": "=?utf-8?Q?=C5=BDiga_Kranjec?= <ziga@ljudmila.org>",
"msg_from_op": false,
"msg_subject": "Re: DROP SUBSCRIPTION with no slot"
},
{
"msg_contents": "On Wed, 25 Sep 2019 at 13:55, Žiga Kranjec <ziga@ljudmila.org> wrote:\n\n>\n> Ah. I missed that bit in the documentation!\n>\n> Perhaps a publication should remember, whether it actually created a\n> replication slot and only try to remove it, if it did. Although that\n> probably wouldn't help much in your case.\n>\n\nWhat about issuing a NOTICE if the slot doesn't exist? I'm thinking if it\nis able to connect to the primary and issue the command to delete the slot,\nbut it doesn't exist, not in case of arbitrary errors. I believe there is\nprecedent for similar behaviour from all the DROP ... IF EXISTS commands.\n\nOn Wed, 25 Sep 2019 at 13:55, Žiga Kranjec <ziga@ljudmila.org> wrote:Ah. I missed that bit in the documentation!Perhaps a publication should remember, whether it actually created a replication slot and only try to remove it, if it did. Although that probably wouldn't help much in your case.What about issuing a NOTICE if the slot doesn't exist? I'm thinking if it is able to connect to the primary and issue the command to delete the slot, but it doesn't exist, not in case of arbitrary errors. I believe there is precedent for similar behaviour from all the DROP ... IF EXISTS commands.",
"msg_date": "Wed, 25 Sep 2019 14:41:53 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP SUBSCRIPTION with no slot"
},
{
"msg_contents": "Hi,\n\nOn 25/09/2019 01:07, Jeff Janes wrote:\n> On Tue, Sep 24, 2019 at 5:25 PM Peter Eisentraut \n> <peter.eisentraut@2ndquadrant.com \n> <mailto:peter.eisentraut@2ndquadrant.com>> wrote:\n> \n> On 2019-09-24 16:31, Jeff Janes wrote:\n> > I recently had to cut loose (pg_drop_replication_slot) a logical\n> replica\n> > that couldn't keep up and so was threatening to bring down the\n> master.\n> >\n> > In mopping up on the replica side, I couldn't just drop the\n> > subscription, because it couldn't drop the nonexistent slot on the\n> > master and so refused to work. So I had to do a silly little dance\n> > where I first disable the subscription, then ALTER SUBSCRIPTION\n> ... SET\n> > (slot_name = NONE), then drop it.\n> >\n> > Wanting to clean up after itself is admirable, but if there is\n> nothing\n> > to clean up, why should that be an error condition?\n> \n> The alternatives seem quite error prone to me. Better be explicit.\n> \n> \n> If you can connect to the master, and see that the slot already fails to \n> exist, what is error prone about that?\n> \n> If someone goes to the effort of setting up a different master, \n> configures it to receive replica connections, and alters the \n> subscription CONNECTION parameter on the replica to point to that \n> poisoned master, will an error on the DROP SUBSCRIPTION really force \n> them to see the error of their ways, or will they just succeed at \n> explicitly doing the silly dance and finalize the process of shooting \n> themselves in the foot via a roundabout mechanism? \n\nAll that needs to happen to get into this situation is to have \nreplication go through haproxy or some other loadbalancer or dns name \nthat points to different server after failover. 
So user really does not \nhave to touch the subscription\n\nWe should at least offer HINT though.\n\nHowever, I'd be in favor of removing this restriction once the patch \nwhich limits how much wal a slot can retain gets in.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Wed, 25 Sep 2019 23:14:18 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP SUBSCRIPTION with no slot"
}
] |
[
{
"msg_contents": "Thinking about the nearby thread[1] about overrunning MaxAllocSize\nduring encoding conversion, it struck me that another thing\nwe could usefully do to improve that situation is to be smarter\nabout what's the growth factor --- the existing one-size-fits-all\nchoice of MAX_CONVERSION_GROWTH = 4 is leaving a lot on the table.\n\nIn particular, it seems like we could usefully frame things as\nhaving a constant max growth factor associated with each target\nencoding, stored as a new field in pg_wchar_table[]. By definition,\nthe max growth factor cannot be more than the maximum character\nlength in the target encoding. So this approach immediately gives\nus a growth factor of 1 with any single-byte output encoding,\nand even many of the multibyte encodings would have max growth 2\nor 3 without having to think any harder than that.\n\nBut we can do better, I think, recognizing that all the supported\nencodings are ASCII extensions. The only possible way to expend\n4 output bytes per input byte is if there is some 1-byte character\nthat translates to a 4-byte character, and I think this is not the\ncase for converting any of our encodings to UTF8. If you need at\nleast a 2-byte character to produce a 3-byte or 4-byte UTF8 character,\nthen UTF8 has max growth 2. I'm not quite sure if that's true\nfor every source encoding, but I'm pretty certain it couldn't be\nworse than 3.\n\nIt might be worth getting a bit more complex and having a 2-D\narray indexed by both source and destination encodings to determine\nthe max growth factor. I haven't run tests to empirically verify\nwhat is the max growth factor.\n\nA fly in this ointment is: could a custom encoding conversion\nfunction violate our conclusions about what's the max growth\nfactor? 
Maybe it would be worth treating the growth factor\nas a property of a particular conversion (i.e., add a column\nto pg_conversion) rather than making it a hard-wired property.\n\nIn any case, it seems likely that we could end up with a\nmultiplier of 1, 2, or 3 rather than 4 in just about every\ncase of practical interest. That sure seems like a win\nwhen converting long strings.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/20190816181418.GA898@alvherre.pgsql\n\n\n",
"msg_date": "Tue, 24 Sep 2019 17:14:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Improving on MAX_CONVERSION_GROWTH"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 5:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In any case, it seems likely that we could end up with a\n> multiplier of 1, 2, or 3 rather than 4 in just about every\n> case of practical interest. That sure seems like a win\n> when converting long strings.\n\n+1. From what I've seen, I'd say this is a significant practical\nproblem for people who are trying to store large blobs of data in the\ndatabase.\n\nA lot of that is because they hit the 1GB allocation limit, and I\nwonder whether we shouldn't be trying harder to avoid imposing that\nlimit in multiple places. It's reasonable - and necessary - to impose\na limit on the size of an individual datum, but when that same limit\nis imposed on other things, like the worst-case size of the encoding\nconversion, the size of an individual message sent via the wire\nprotocol, etc., you end up with a situation where users have trouble\npredicting what the behavior is going to be. >=1GB definitely won't\nwork, but it'll probably break at some point before you even get that\nfar depending on a bunch of complex factors that are hard to\nunderstand, not really documented, and mostly the result of applying\n1GB limit to every single memory allocation across the whole backend\nwithout really thinking about what that does to the user-visible\nbehavior.\n\nNow, that's not to say we should abandon MaxAllocSize, which I agree\nserves as a useful backstop. But IMHO it would be smart to start with\nthe desired user-facing behavior -- we want to support datums up to X\nsize -- and then consider how we can get there while maintaining\nMaxAllocSize as a general-purpose backstop. 
Our current strategy seems\nto be mostly the reverse: write the code the way that feels natural,\nenforce MaxAllocSize everywhere, and if that breaks things for a user,\nwell then that means - by definition - that the user was trying to do\nsomething we don't support.\n\nOne approach I think we should consider is, for larger strings,\nactually scan the string and figure out how much memory we're going to\nneed for the conversion and then allocate exactly that amount (and\nfail if it's >=1GB). An extra scan over the string is somewhat costly,\nbut allocating hundreds of megabytes of memory on the theory that we\ncould hypothetically have needed it is costly in different way. Memory\nis more abundant today than it's ever been, but there are still plenty\nof systems where a couple of extra allocations in the multi-hundred-MB\nrange can make the whole thing fall over. And even if it doesn't make\nthe whole thing fall over, the CPU efficiency of avoiding an extra\npass over the string really ought to be compared with the memory\nefficiency of allocating extra storage. Getting down from a\nworst-case multiple of 4 to 2 is a great idea, but it still means that\nconverting a 100MB string will allocate 200MB when what you need will\nvery often be between 100MB and 105MB. That's not an insignificant\ncost, even though it's much better than allocating 400MB.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 27 Sep 2019 10:53:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving on MAX_CONVERSION_GROWTH"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-27 10:53:48 -0400, Robert Haas wrote:\n> A lot of that is because they hit the 1GB allocation limit, and I\n> wonder whether we shouldn't be trying harder to avoid imposing that\n> limit in multiple places.\n\n> It's reasonable - and necessary - to impose\n> a limit on the size of an individual datum, but when that same limit\n> is imposed on other things, like the worst-case size of the encoding\n> conversion, the size of an individual message sent via the wire\n> protocol, etc., you end up with a situation where users have trouble\n> predicting what the behavior is going to be. >=1GB definitely won't\n> work, but it'll probably break at some point before you even get that\n> far depending on a bunch of complex factors that are hard to\n> understand, not really documented, and mostly the result of applying\n> 1GB limit to every single memory allocation across the whole backend\n> without really thinking about what that does to the user-visible\n> behavior.\n\n+1 - that will be a long, piecemeal, project I think... But deciding\nthat we should do so is a good first step.\n\nNote that one of the additional reasons for the 1GB limit is that it\nprotects against int overflows. I'm somewhat unconvinced that that's a\nsensible approach, but ...\n\nI wonder if we shouldn't make stringinfos use size_t lengths, btw. Only\nsupporting INT32_MAX (not even UINT32_MAX) seems weird these days. But\nwe'd presumably have to make it opt-in.\n\n\n> One approach I think we should consider is, for larger strings,\n> actually scan the string and figure out how much memory we're going to\n> need for the conversion and then allocate exactly that amount (and\n> fail if it's >=1GB). 
An extra scan over the string is somewhat costly,\n> but allocating hundreds of megabytes of memory on the theory that we\n> could hypothetically have needed it is costly in different way.\n\nMy proposal for this is something like\nhttps://www.postgresql.org/message-id/20190924214204.mav4md77xg5u5wq6%40alap3.anarazel.de\nwhich should avoid the overallocation without a second pass, and\nhopefully without loosing much efficiency.\n\nIt's worthwhile to note that additional passes over data are often quite\nexpensive, memory latency hasn't shrunk that much in last decade or\nso. I have frequently seen all the memcpys from one StringInfo/char*\ninto another StringInfo show up in profiles.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Sep 2019 08:40:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improving on MAX_CONVERSION_GROWTH"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 11:40 AM Andres Freund <andres@anarazel.de> wrote:\n> Note that one of the additional reasons for the 1GB limit is that it\n> protects against int overflows. I'm somewhat unconvinced that that's a\n> sensible approach, but ...\n\nIt's not crazy. People using 'int' rather casually just as they use\n'palloc' rather casually, without necessarily thinking about what\ncould go wrong at the edges. I don't have any beef with that as a\ngeneral strategy; I just think we should be trying to do better in the\ncases where it negatively affects the user experience.\n\n> It's worthwhile to note that additional passes over data are often quite\n> expensive, memory latency hasn't shrunk that much in last decade or\n> so. I have frequently seen all the memcpys from one StringInfo/char*\n> into another StringInfo show up in profiles.\n\nOK.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 27 Sep 2019 11:53:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving on MAX_CONVERSION_GROWTH"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Sep 27, 2019 at 11:40 AM Andres Freund <andres@anarazel.de> wrote:\n>> Note that one of the additional reasons for the 1GB limit is that it\n>> protects against int overflows. I'm somewhat unconvinced that that's a\n>> sensible approach, but ...\n\n> It's not crazy. People using 'int' rather casually just as they use\n> 'palloc' rather casually, without necessarily thinking about what\n> could go wrong at the edges. I don't have any beef with that as a\n> general strategy; I just think we should be trying to do better in the\n> cases where it negatively affects the user experience.\n\nA small problem with doing anything very interesting here is that the\nint-is-enough-for-a-string-length approach is baked into the wire\nprotocol (read the DataRow message format spec and weep).\n\nWe could probably bend the COPY protocol enough to support multi-gig row\nvalues --- dropping the rule that the backend doesn't split rows across\nCopyData messages wouldn't break too many clients, hopefully. That would\nat least dodge some problems in dump/restore scenarios.\n\nIn the meantime, I still think we should commit what I proposed in the\nother thread (<974.1569356381@sss.pgh.pa.us>), or something close to it.\nAndres' proposal would perhaps be an improvement on that, but I don't\nthink it'll be ready anytime soon; and for sure we wouldn't risk\nback-patching it, while I think we could back-patch what I suggested.\nIn any case, that patch is small enough that dropping it would be no big\nloss if a better solution comes along.\n\nAlso, as far as the immediate subject of this thread is concerned,\nI'm inclined to get rid of MAX_CONVERSION_GROWTH in favor of using\nthe target encoding's max char length. The one use (in printtup.c)\nwhere we don't know the target encoding could use MAX_MULTIBYTE_CHAR_LEN\ninstead. 
Being smarter than that could help in some cases (mostly,\nconversion of ISO encodings to UTF8), but it's not that big a win.\n(I did some checks and found that some ISO encodings could provide a\nmax growth of 2x, but many are max 3x, so 4x isn't that far out of\nline.) If Andres' ideas don't pan out we could come back and work\nharder on this, but for now something simple and back-patchable\nseems like a useful stopgap improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Sep 2019 15:25:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Improving on MAX_CONVERSION_GROWTH"
},
{
"msg_contents": "I wrote:\n> In the meantime, I still think we should commit what I proposed in the\n> other thread (<974.1569356381@sss.pgh.pa.us>), or something close to it.\n> Andres' proposal would perhaps be an improvement on that, but I don't\n> think it'll be ready anytime soon; and for sure we wouldn't risk\n> back-patching it, while I think we could back-patch what I suggested.\n> In any case, that patch is small enough that dropping it would be no big\n> loss if a better solution comes along.\n\nNot having heard any objections, I'll proceed with that. Andres is\nwelcome to work on replacing it with his more-complicated idea...\n\n> Also, as far as the immediate subject of this thread is concerned,\n> I'm inclined to get rid of MAX_CONVERSION_GROWTH in favor of using\n> the target encoding's max char length.\n\nI realized after re-reading the comment for MAX_CONVERSION_GROWTH that\nthis thread is based on a false premise, namely that encoding conversions\nalways produce one \"character\" out per \"character\" in. In the presence of\ncombining characters and suchlike, that premise fails, and it becomes\nquite unclear just what the max growth ratio actually is. So I'm going\nto leave that alone for now. Maybe this point is an argument for pushing\nforward with Andres' approach, but I'm still dubious about the overall\ncost/benefit ratio of that concept.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Oct 2019 12:12:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Improving on MAX_CONVERSION_GROWTH"
},
{
"msg_contents": "Hi,\n\nOn 2019-10-03 12:12:40 -0400, Tom Lane wrote:\n> I wrote:\n> > In the meantime, I still think we should commit what I proposed in the\n> > other thread (<974.1569356381@sss.pgh.pa.us>), or something close to it.\n> > Andres' proposal would perhaps be an improvement on that, but I don't\n> > think it'll be ready anytime soon; and for sure we wouldn't risk\n> > back-patching it, while I think we could back-patch what I suggested.\n> > In any case, that patch is small enough that dropping it would be no big\n> > loss if a better solution comes along.\n> \n> Not having heard any objections, I'll proceed with that. Andres is\n> welcome to work on replacing it with his more-complicated idea...\n\nYea, what I'm proposing is clearly not backpatchable. So +1\n\n\n> Maybe this point is an argument for pushing forward with Andres'\n> approach, but I'm still dubious about the overall cost/benefit ratio\n> of that concept.\n\nI think if it were just for MAX_CONVERSION_GROWTH, I'd be inclined to\nagree. But I think it has other advantages, so I'm mildy positivie that\nit'll be an overall win...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 3 Oct 2019 09:20:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improving on MAX_CONVERSION_GROWTH"
}
] |
[
{
"msg_contents": "\nRelease 11 of the PostgreSQL Buildfarm client is now available\n\n\nApart from some bug fixes, there are to following features:\n\n\n. Allow a list of branches as positional arguments to run_branches.pl\n This overrides what is found in the config file. The list can't include\n metabranches like ALL, nor can it contain regexes.\n. improve diagnostic capture for git and fetching branches of interest\n. unify config.log and configure.log\n. add siginfo to gdb output\n. improve test coverage\n - run check for test modules marked NO_INSTALLCHECK\n - run TAP tests for test modules that have them\n - run TAP tests for contrib modules that have them\n. explicitly use \"trust\" with initdb\n\n\nDownload from\n<https://github.com/PGBuildFarm/client-code/archive/REL_11.tar.gz> or\n\n<https://buildfarm.postgresql.org/downloads/latest-client.tgz>\n\n\nEnjoy\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 24 Sep 2019 17:26:27 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Release 11 of PostgreSQL Buildfarm client"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached is a draft for the PostgreSQL 12 RC1 press release. Please let\nme know if you find any errors or notable omissions.\n\nI'd also like to take this opportunity as a chance to say thank you to\neveryone for your hard work to get PostgreSQL 12 to this point. I'm\npersonally very excited for what should be yet another fantastic release\nthat will provide a lot of great features and enhancements for our users!\n\nThanks,\n\nJonathan",
"msg_date": "Wed, 25 Sep 2019 06:42:23 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 12 RC1 Press Release Draft"
},
{
"msg_contents": "The \"Upgrading to PostgreSQL 12 RC 1\" references v11 rather than v12:\n\n\"To upgrade to PostgreSQL 11 RC 1 from Beta 4 or an earlier version of\nPostgreSQL 11, ...\"\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\nThe \"Upgrading to PostgreSQL 12 RC 1\" references v11 rather than v12:\"To upgrade to PostgreSQL 11 RC 1 from Beta 4 or an earlier version ofPostgreSQL 11, ...\"Regards,-- Sehrope SarkuniFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Wed, 25 Sep 2019 06:50:24 -0400",
"msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 RC1 Press Release Draft"
},
{
"msg_contents": "On 9/25/19 6:50 AM, Sehrope Sarkuni wrote:\n> The \"Upgrading to PostgreSQL 12 RC 1\" references v11 rather than v12:\n> \n> \"To upgrade to PostgreSQL 11 RC 1 from Beta 4 or an earlier version of\n> PostgreSQL 11, ...\"\n\nThanks! Fixed attached,\n\nJonathan",
"msg_date": "Wed, 25 Sep 2019 07:08:23 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12 RC1 Press Release Draft"
},
{
"msg_contents": "On 2019-09-25 13:08, Jonathan S. Katz wrote:\n> On 9/25/19 6:50 AM, Sehrope Sarkuni wrote:\n>> The \"Upgrading to PostgreSQL 12 RC 1\" references v11 rather than v12:\n>> \n>> \"To upgrade to PostgreSQL 11 RC 1 from Beta 4 or an earlier version of\n>> PostgreSQL 11, ...\"\n\nA small typo (or processing hickup):\n\n'pattern\\_ops' should be\n'pattern_ops'\n\n\n\n> \n> Thanks! Fixed attached,\n> \n> Jonathan\n\n\n",
"msg_date": "Wed, 25 Sep 2019 14:21:35 +0200",
"msg_from": "Erikjan Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 RC1 Press Release Draft"
},
{
"msg_contents": "On 9/25/19 8:21 AM, Erikjan Rijkers wrote:\n> On 2019-09-25 13:08, Jonathan S. Katz wrote:\n>> On 9/25/19 6:50 AM, Sehrope Sarkuni wrote:\n>>> The \"Upgrading to PostgreSQL 12 RC 1\" references v11 rather than v12:\n>>>\n>>> \"To upgrade to PostgreSQL 11 RC 1 from Beta 4 or an earlier version of\n>>> PostgreSQL 11, ...\"\n> \n> A small typo (or processing hickup):\n> \n> 'pattern\\_ops' should be\n> 'pattern_ops'\n\nThanks for noticing that -- because this is Markdown we need to escape\nthe underscore.\n\nThanks!\n\nJonathan",
"msg_date": "Wed, 25 Sep 2019 13:28:36 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12 RC1 Press Release Draft"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nI want to propose an extension to CREATE TABLE syntax to allow the creation\nof partition tables along with its parent table using a single statement.\n\nIn this proposal, I am proposing to specify the list of partitioned tables\nafter the PARTITION BY clause.\n\nCREATE TABLE table_name (..)\n PARTITION BY { RANGE | LIST | HASH } (..)\n (\n list of partitions\n) ;\n\nBelow are a few examples of the proposed syntax, in a nutshell, I am\nleveraging the syntax currently supported by Postgres for creating\npartitioned tables. The purpose of this proposal is to combine the creation\nof the parent partition table and its partitions in one SQL statement.\n\nCREATE TABLE Sales (salesman_id INT, salesman_name TEXT, sales_region TEXT,\nhiring_date DATE, sales_amount INT )\n PARTITION BY RANGE (hiring_date)\n (\n PARTITION part_one FOR VALUES FROM ('2008-02-01') TO ('2008-03-01'),\n PARTITION part_two FOR VALUES FROM ('2009-02-01') TO ('2009-03-01'),\n PARTITION part_def DEFAULT\n );\n\nCREATE TABLE Sales2 (salesman_id INT, salesman_name TEXT, sales_region\nTEXT, hiring_date DATE, sales_amount INT )\n PARTITION BY HASH (salesman_id)\n (\n PARTITION par_one FOR VALUES WITH (MODULUS 2, REMAINDER 0),\n PARTITION par_two FOR VALUES WITH (MODULUS 2, REMAINDER 1)\n );\n\nCREATE TABLE Sales3(salesman_id INT, salesman_name TEXT, sales_region TEXT,\nhiring_date DATE, sales_amount INT)\n PARTITION BY LIST (sales_region)\n (\n PARTITION pt_one FOR VALUES IN ('JAPAN','CHINA'),\n PARTITION pt_two FOR VALUES IN ('USA','CANADA'),\n PARTITION pt_def DEFAULT\n );\n\n-- Similarly for specifying subpartitions of partitioned tables\n\nCREATE TABLE All_Sales ( year INT, month INT, day INT, info TEXT)\n PARTITION BY RANGE(year)(\n PARTITION sale_2019_2020 FOR VALUES FROM (2019) TO (2021)\n PARTITION BY LIST(month)\n (\n PARTITION sale_2019_2020_1 FOR VALUES IN (1,2,3,4)\n PARTITION BY RANGE(day)(\n PARTITION sale_2019_2020_1_1 FOR VALUES FROM (1) TO (10)\n PARTITION 
BY HASH(info)\n (\n PARTITION sale_2019_2020_1_1_1 FOR VALUES WITH (MODULUS\n2,REMAINDER 0),\n PARTITION sale_2019_2020_1_1_2 FOR VALUES WITH (MODULUS\n2,REMAINDER 1)\n ),\n PARTITION sale_2019_2020_1_2 FOR VALUES FROM (10) TO (20),\n PARTITION sale_2019_2020_1_3 FOR VALUES FROM (20) TO (32)),\n PARTITION sale_2019_2020_2 FOR VALUES IN (5,6,7,8),\n PARTITION sale_2019_2020_3 FOR VALUES IN (9,10,11,12)\n ),\n PARTITION sale_2021_2022 FOR VALUES FROM (2021) TO (2023),\n PARTITION sale_2023_2024 FOR VALUES FROM (2023) TO (2025),\n PARTITION sale_default default\n );\n\nThis new syntax requires minimal changes in the code. I along with my\ncolleague Movead.li have drafted a rough POC patch attached to this email.\n\nPlease note that the patch is just to showcase the new syntax and get a\nconsensus on the overall design and approach.\n\nAs far as I know, there are already few ongoing discussions related to the\npartition syntax enhancements, but the proposed syntax will not interfere\nwith these ongoing proposals. Here is a link to one such discussion:\nhttps://www.postgresql.org/message-id/alpine.DEB.2.21.1907150711080.22273%40lancre\n\nPlease feel free to share your thoughts.\n\nBest Regards\n\n...\nMuhammad Usama\nHighgo Software Canada\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC",
"msg_date": "Wed, 25 Sep 2019 19:31:38 +0500",
"msg_from": "Muhammad Usama <m.usama@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal for syntax to support creation of partition tables when\n creating parent table"
},
{
"msg_contents": "\nHello Muhammad,\n\nI think that it may be better to have a partition spec which describes not \nthe list of partitions, but what is wanted, letting postgres to do some \nmore work.\n\nSee this thread:\n\nhttps://www.postgresql.org/message-id/alpine.DEB.2.21.1907150711080.22273@lancre\n\n> I want to propose an extension to CREATE TABLE syntax to allow the creation\n> of partition tables along with its parent table using a single statement.\n>\n> In this proposal, I am proposing to specify the list of partitioned tables\n> after the PARTITION BY clause.\n>\n> CREATE TABLE table_name (..)\n> PARTITION BY { RANGE | LIST | HASH } (..)\n> (\n> list of partitions\n> ) ;\n\n> Below are a few examples of the proposed syntax, in a nutshell, I am\n> leveraging the syntax currently supported by Postgres for creating\n> partitioned tables. The purpose of this proposal is to combine the creation\n> of the parent partition table and its partitions in one SQL statement.\n>\n> CREATE TABLE Sales (salesman_id INT, salesman_name TEXT, sales_region TEXT,\n> hiring_date DATE, sales_amount INT )\n> PARTITION BY RANGE (hiring_date)\n> (\n> PARTITION part_one FOR VALUES FROM ('2008-02-01') TO ('2008-03-01'),\n> PARTITION part_two FOR VALUES FROM ('2009-02-01') TO ('2009-03-01'),\n> PARTITION part_def DEFAULT\n> );\n>\n> CREATE TABLE Sales2 (salesman_id INT, salesman_name TEXT, sales_region\n> TEXT, hiring_date DATE, sales_amount INT )\n> PARTITION BY HASH (salesman_id)\n> (\n> PARTITION par_one FOR VALUES WITH (MODULUS 2, REMAINDER 0),\n> PARTITION par_two FOR VALUES WITH (MODULUS 2, REMAINDER 1)\n> );\n>\n> CREATE TABLE Sales3(salesman_id INT, salesman_name TEXT, sales_region TEXT,\n> hiring_date DATE, sales_amount INT)\n> PARTITION BY LIST (sales_region)\n> (\n> PARTITION pt_one FOR VALUES IN ('JAPAN','CHINA'),\n> PARTITION pt_two FOR VALUES IN ('USA','CANADA'),\n> PARTITION pt_def DEFAULT\n> );\n>\n> -- Similarly for specifying subpartitions of partitioned 
tables\n>\n> CREATE TABLE All_Sales ( year INT, month INT, day INT, info TEXT)\n> PARTITION BY RANGE(year)(\n> PARTITION sale_2019_2020 FOR VALUES FROM (2019) TO (2021)\n> PARTITION BY LIST(month)\n> (\n> PARTITION sale_2019_2020_1 FOR VALUES IN (1,2,3,4)\n> PARTITION BY RANGE(day)(\n> PARTITION sale_2019_2020_1_1 FOR VALUES FROM (1) TO (10)\n> PARTITION BY HASH(info)\n> (\n> PARTITION sale_2019_2020_1_1_1 FOR VALUES WITH (MODULUS\n> 2,REMAINDER 0),\n> PARTITION sale_2019_2020_1_1_2 FOR VALUES WITH (MODULUS\n> 2,REMAINDER 1)\n> ),\n> PARTITION sale_2019_2020_1_2 FOR VALUES FROM (10) TO (20),\n> PARTITION sale_2019_2020_1_3 FOR VALUES FROM (20) TO (32)),\n> PARTITION sale_2019_2020_2 FOR VALUES IN (5,6,7,8),\n> PARTITION sale_2019_2020_3 FOR VALUES IN (9,10,11,12)\n> ),\n> PARTITION sale_2021_2022 FOR VALUES FROM (2021) TO (2023),\n> PARTITION sale_2023_2024 FOR VALUES FROM (2023) TO (2025),\n> PARTITION sale_default default\n> );\n>\n> This new syntax requires minimal changes in the code. I along with my\n> colleague Movead.li have drafted a rough POC patch attached to this email.\n>\n> Please note that the patch is just to showcase the new syntax and get a\n> consensus on the overall design and approach.\n>\n> As far as I know, there are already few ongoing discussions related to the\n> partition syntax enhancements, but the proposed syntax will not interfere\n> with these ongoing proposals. Here is a link to one such discussion:\n> https://www.postgresql.org/message-id/alpine.DEB.2.21.1907150711080.22273%40lancre\n>\n> Please feel free to share your thoughts.\n>\n> Best Regards\n>\n> ...\n> Muhammad Usama\n> Highgo Software Canada\n> URL : http://www.highgo.ca\n> ADDR: 10318 WHALLEY BLVD, Surrey, BC\n>\n\n-- \nFabien Coelho - CRI, MINES ParisTech\n\n\n",
"msg_date": "Wed, 25 Sep 2019 17:22:19 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for syntax to support creation of partition tables when\n creating parent table"
},
{
"msg_contents": "Muhammad Usama <m.usama@gmail.com> writes:\n> I want to propose an extension to CREATE TABLE syntax to allow the creation\n> of partition tables along with its parent table using a single statement.\n\nTBH, I think this isn't a particularly good idea. It seems very\nreminiscent of the variant of CREATE SCHEMA that lets you create\na bunch of contained objects along with the schema. That variant\nis a mess to support and AFAIK it's practically unused in the\nreal world. (If it were used, we'd get requests to support more\nthan the small number of object types that the CREATE SCHEMA\ngrammar currently allows.)\n\nAs Fabien noted, there's been some related discussion about this\narea, but nobody was advocating a solution of this particular shape.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Sep 2019 11:53:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for syntax to support creation of partition tables when\n creating parent table"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 8:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Muhammad Usama <m.usama@gmail.com> writes:\n> > I want to propose an extension to CREATE TABLE syntax to allow the\n> creation\n> > of partition tables along with its parent table using a single statement.\n>\n> TBH, I think this isn't a particularly good idea. It seems very\n> reminiscent of the variant of CREATE SCHEMA that lets you create\n> a bunch of contained objects along with the schema. That variant\n> is a mess to support and AFAIK it's practically unused in the\n> real world. (If it were used, we'd get requests to support more\n> than the small number of object types that the CREATE SCHEMA\n> grammar currently allows.)\n>\n\nIMO creating auto-partitions shouldn't be viewed as creating bunch of\nschema objects with CREATE SCHEMA command. Most of the other RDBMS\nsolutions support the table partition syntax where parent partition table\nis specified with partitions and sub-partitions in same SQL statement. As I\nunderstand the proposal is not changing the syntax of creating partitions,\nit is providing the ease of creating parent partition table along with its\npartitions in same statement. 
I think it does make it easier when you are\ncreating a big partition table with lots of partitions and sub-partitions.\n\nThe would also benefit users migrating to postgres from Oracle or mysql etc\nwhere similar syntax is supported.\n\nAnd if not more I think it is a tick in the box with minimal code change.\n\n\n>\n> As Fabien noted, there's been some related discussion about this\n> area, but nobody was advocating a solution of this particular shape.\n>\n\nThe thread that Usama mentioned in his email is creating auto-partitions\njust for HASH partitions, this is trying to do similar for all types of\npartitions.\n\n\n> regards, tom lane\n>\n>\n>\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca",
"msg_date": "Wed, 25 Sep 2019 23:46:28 +0500",
"msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for syntax to support creation of partition tables when\n creating parent table"
},
{
"msg_contents": "Hi Ahsan, Usama\n\nThanks for starting work on this.\n\nOn Thu, Sep 26, 2019 at 3:46 AM Ahsan Hadi <ahsan.hadi@gmail.com> wrote:\n> On Wed, Sep 25, 2019 at 8:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> As Fabien noted, there's been some related discussion about this\n>> area, but nobody was advocating a solution of this particular shape.\n>\n> The thread that Usama mentioned in his email is creating auto-partitions just for HASH partitions, this is trying to do similar for all types of partitions.\n\nI agree that this proposal makes life easier for developers familiar\nwith the partitioning syntax and features of other databases.\nHowever, it adds little functionality over what users can already do,\neven though today it takes multiple commands rather than just one.\nThe problem is that the syntax proposed here is still verbose because\nusers still have to spell out all the partition bounds by themselves.\n\nThe focus of the other thread, as I understand it, is to implement the\nfunctionality to get the same thing done (create many partitions in\none command) in much less verbose manner. Fabien started the\ndiscussion for hash partitioning because the interface for it seems\nstraightforward -- just specify the number of partitions and that many\npartitions would get created without having to actually specify\nmodulus/remainder for each. Since the underlying functionality\nwouldn't be too different for other partitioning methods, we would\nonly have to come up with a suitable interface.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 26 Sep 2019 16:33:17 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for syntax to support creation of partition tables when\n creating parent table"
}
] |
[
{
"msg_contents": "While testing against PG12 I noticed the documentation states that\nrecovery targets are not valid when standby.signal is present.\n\nBut surely the exception is recovery_target_timeline? My testing\nconfirms that this works just as in prior versions with standy_mode=on.\n\nDocumentation patch is attached.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net",
"msg_date": "Wed, 25 Sep 2019 16:21:52 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 5:22 AM David Steele <david@pgmasters.net> wrote:\n>\n> While testing against PG12 I noticed the documentation states that\n> recovery targets are not valid when standby.signal is present.\n\nOr that description in the doc is not true? Other recovery target\nparameters seem to take effect even when standby.signal exists.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 26 Sep 2019 18:55:53 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 9/26/19 5:55 AM, Fujii Masao wrote:\n> On Thu, Sep 26, 2019 at 5:22 AM David Steele <david@pgmasters.net> wrote:\n>>\n>> While testing against PG12 I noticed the documentation states that\n>> recovery targets are not valid when standby.signal is present.\n> \n> Or that description in the doc is not true? Other recovery target\n> parameters seem to take effect even when standby.signal exists.\n\nYes, and this is true for or any combination of recovery.signal and\nstandby signal as far as I can see. We have been tracking down some\nstrange behaviors over the last few days as we have been adding PG12\nsupport to pgBackRest. Late in the day I know, but we just got the\nrelevant code migrated to C and we did not fancy coding it twice.\n\nThe main thing is if you set recovery_target_time in\npostgresql.auto.conf then recovery will always try to hit that target\nwith any combination of recovery.signal and standby.signal. But\ntarget_action is only active when recovery.signal, standby.signal, or\nboth are present.\n\nAll these tests were done on 12rc1.\n\nSo given this postgresql.auto.conf:\n\nrecovery_target_time = '2019-09-26 14:39:51.280711+00'\nrestore_command = 'cp /home/vagrant/test/archive/%f \"%p\"'\nrecovery_target_timeline = current\nrecovery_target_action = promote\n\nAnd these settings added to postgresql.conf:\n\nwal_level = replica\narchive_mode = on\narchive_command = 'test ! -f /home/vagrant/test/archive/%f && cp %p\n/home/vagrant/test/archive/%f'\n\nAnd this backup_label:\n\nSTART WAL LOCATION: 0/2000028 (file 000000010000000000000002)\nCHECKPOINT LOCATION: 0/2000060\nBACKUP METHOD: streamed\nBACKUP FROM: master\nSTART TIME: 2019-09-26 14:39:49 UTC\nLABEL: pg_basebackup base backup\nSTART TIMELINE: 1\n\nThe backup we are recovering contains a table that exists at the target\ntime but is dropped after that as an additional confirmation. 
In all\nthe recovery scenarios below the table exists after recovery.\n\nHere's what recovery looks like with recovery.signal:\n\n2019-09-26 14:49:52.758 UTC [25353] LOG: database system was\ninterrupted; last known up at 2019-09-26 14:39:49 UTC\n2019-09-26 14:49:52.824 UTC [25353] LOG: starting point-in-time\nrecovery to 2019-09-26 14:39:51.280711+00\n2019-09-26 14:49:52.836 UTC [25353] LOG: restored log file\n\"000000010000000000000002\" from archive\n2019-09-26 14:49:52.885 UTC [25353] LOG: redo starts at 0/2000028\n2019-09-26 14:49:52.894 UTC [25353] LOG: consistent recovery state\nreached at 0/2000100\n2019-09-26 14:49:52.894 UTC [25352] LOG: database system is ready to\naccept read only connections\n2019-09-26 14:49:52.905 UTC [25353] LOG: restored log file\n\"000000010000000000000003\" from archive\n2019-09-26 14:49:52.940 UTC [25353] LOG: recovery stopping before\ncommit of transaction 487, time 2019-09-26 14:39:54.981557+00\n2019-09-26 14:49:52.940 UTC [25353] LOG: redo done at 0/30096A0\ncp: cannot stat '/home/vagrant/test/archive/00000002.history': No such\nfile or directory\n2019-09-26 14:49:52.943 UTC [25353] LOG: selected new timeline ID: 2\n2019-09-26 14:49:52.998 UTC [25353] LOG: archive recovery complete\n2019-09-26 14:49:52.998 UTC [25353] LOG: database system is ready to\naccept connections\n\nThis is completely normal and what you would expect.\n\nNow without recovery.signal from a fresh restore:\n\n2019-09-26 14:52:29.491 UTC [25409] LOG: database system was\ninterrupted; last known up at 2019-09-26 14:39:49 UTC\n2019-09-26 14:52:29.574 UTC [25409] LOG: restored log file\n\"000000010000000000000002\" from archive\n2019-09-26 14:52:29.622 UTC [25409] LOG: redo starts at 0/2000028\n2019-09-26 14:52:29.631 UTC [25409] LOG: consistent recovery state\nreached at 0/2000100\n2019-09-26 14:52:29.642 UTC [25409] LOG: restored log file\n\"000000010000000000000003\" from archive\n2019-09-26 14:52:29.666 UTC [25409] LOG: recovery stopping before\ncommit of 
transaction 487, time 2019-09-26 14:39:54.981557+00\n2019-09-26 14:52:29.666 UTC [25409] LOG: redo done at 0/30096A0\n2019-09-26 14:52:29.716 UTC [25408] LOG: database system is ready to\naccept connections\n\nNow there is no \"starting point-in-time recovery\" message but we are\nstill stopping in the same place, \"recovery stopping before commit of\ntransaction 487\". There is no promotion so now we are now logging on\ntimeline 1 (so there are duplicate WAL errors as soon as archive_command\nruns). In PG < 12 you could do this by shutting down, removing\nrecovery.conf and restarting, but it is now much easier to end up on the\nsame timeline.\n\nNow with with standby.signal only from a fresh restore:\n\n2019-09-26 14:59:36.889 UTC [25522] LOG: database system was\ninterrupted; last known up at 2019-09-26 14:39:49 UTC\n2019-09-26 14:59:36.983 UTC [25522] LOG: entering standby mode\n2019-09-26 14:59:36.994 UTC [25522] LOG: restored log file\n\"000000010000000000000002\" from archive\n2019-09-26 14:59:37.038 UTC [25522] LOG: redo starts at 0/2000028\n2019-09-26 14:59:37.047 UTC [25522] LOG: consistent recovery state\nreached at 0/2000100\n2019-09-26 14:59:37.047 UTC [25521] LOG: database system is ready to\naccept read only connections\n2019-09-26 14:59:37.061 UTC [25522] LOG: restored log file\n\"000000010000000000000003\" from archive\n2019-09-26 14:59:37.093 UTC [25522] LOG: recovery stopping before\ncommit of transaction 487, time 2019-09-26 14:39:54.981557+00\n2019-09-26 14:59:37.093 UTC [25522] LOG: redo done at 0/30096A0\ncp: cannot stat '/home/vagrant/test/archive/00000002.history': No such\nfile or directory\n2019-09-26 14:59:37.097 UTC [25522] LOG: selected new timeline ID: 2\n2019-09-26 14:59:37.270 UTC [25522] LOG: archive recovery complete\ncp: cannot stat '/home/vagrant/test/archive/00000001.history': No such\nfile or directory\n2019-09-26 14:59:37.338 UTC [25521] LOG: database system is ready to\naccept connections\n\nThe cluster starts in standby 
mode, hits the recovery target,\nthen promotes even though no recovery.signal is present.\n\nAnd finally with both recovery.signal and standby.signal you get the\nsame as with standby.signal only.\n\nI was able to get the same results using an xid target:\n\nrecovery_target_xid = 487\nrecovery_target_inclusive = false\n\nAll of this is roughly analogous to use cases that were possible\nbefore, but there were fewer permutations then. You had no standby and\nno recovery target without recovery.conf so \"recovery.signal\" was always\nthere, more or less.\n\nAt the very least, according to the docs, none of the target options are\nsupposed to be active unless recovery.signal is in place. Since\noutdated entries in postgresql.auto.conf can have effect even in\nthe absence of recovery.signal, it seems pretty important to make sure\nthat mechanism is working correctly - or that the caveat is clearly\ndocumented.\n\nI do think this issue needs to be addressed before GA.\n\nFujii -- I just became aware of your email at [1] so I'll respond to\nthat as well.\n\n-- \n-David\ndavid@pgmasters.net\n\n[1]\nhttps://www.postgresql.org/message-id/CAHGQGwEYYg_Ng%2B03FtZczacCpYgJ2Pn%3DB_wPtWF%2BFFLYDgpa1g%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 26 Sep 2019 13:58:16 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 2019-09-25 22:21, David Steele wrote:\n> While testing against PG12 I noticed the documentation states that\n> recovery targets are not valid when standby.signal is present.\n> \n> But surely the exception is recovery_target_timeline? My testing\n> confirms that this works just as in prior versions with standy_mode=on.\n\nOr maybe we should move recovery_target_timeline to a different section?\n But which one?\n\nI don't know if recovery_target_timeline is actually useful to change in\nstandby mode.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Sep 2019 22:48:30 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 9/26/19 4:48 PM, Peter Eisentraut wrote:\n> On 2019-09-25 22:21, David Steele wrote:\n>> While testing against PG12 I noticed the documentation states that\n>> recovery targets are not valid when standby.signal is present.\n>>\n>> But surely the exception is recovery_target_timeline? My testing\n>> confirms that this works just as in prior versions with standy_mode=on.\n> \n> Or maybe we should move recovery_target_timeline to a different section?\n> But which one?\n\nNot sure. I think just noting it as an exception is OK, if it is the \nonly exception. But currently that does not seem to be the case.\n\n> I don't know if recovery_target_timeline is actually useful to change in\n> standby mode.\n\nIt is. I just dealt with a split-brain case that required the standbys \nto be rebuilt on a specific timeline (not latest).\n\nOf course, you could do recovery on that timeline, shutdown, and then \nbring the cluster back up as a standby, but that seems like a lot of \nextra work.\n\nBut as Fujii noted and I've demonstrated in the follow-up pretty much \nall target options are allowed for standby recovery. I don't think that \nmakes sense, personally, but apparently it was allowed in prior versions \nso we'll need to think carefully before disallowing it.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 26 Sep 2019 17:02:51 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 2019-09-26 23:02, David Steele wrote:\n> On 9/26/19 4:48 PM, Peter Eisentraut wrote:\n>> On 2019-09-25 22:21, David Steele wrote:\n>>> While testing against PG12 I noticed the documentation states that\n>>> recovery targets are not valid when standby.signal is present.\n>>>\n>>> But surely the exception is recovery_target_timeline? My testing\n>>> confirms that this works just as in prior versions with standy_mode=on.\n>>\n>> Or maybe we should move recovery_target_timeline to a different section?\n>> But which one?\n> \n> Not sure. I think just noting it as an exception is OK, if it is the \n> only exception. But currently that does not seem to be the case.\n> \n>> I don't know if recovery_target_timeline is actually useful to change in\n>> standby mode.\n\nOK, I have committed your original documentation patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Sep 2019 16:36:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "Hi Peter,\n\nOn 9/27/19 10:36 AM, Peter Eisentraut wrote:\n> On 2019-09-26 23:02, David Steele wrote:\n>> On 9/26/19 4:48 PM, Peter Eisentraut wrote:\n>>\n>>> I don't know if recovery_target_timeline is actually useful to change in\n>>> standby mode.\n> \n> OK, I have committed your original documentation patch.\n\nThanks, that's a good start.\n\nAs Fujii noticed and I have demonstrated upthread, just about any target\nsetting can be used in a standby restore. This matches the behavior of\nprior versions so it's not exactly a regression, but the old docs made\nno claim that standby_mode disabled targeted restore.\n\nIf fact, for both PG12 and before, setting a recovery target in standby\nmode causes the cluster to drop out of standby mode.\n\nAlso, the presence or absence of recovery.signal does not seem to have\nany effect on how targeted recovery proceeds, except as Fujii has\ndemonstrated in [1].\n\nI'm not sure what the best thing to do is. The docs are certainly\nincorrect, but fixing them would be weird. What do we say, setting\ntargets will exit standby mode? That certainly what happens, though.\n\nAlso, the fact that target settings are being used when recovery.signal\nis missing is contrary to the docs, and this is a new behavior in PG12.\n Prior to 12 you could not have target settings without recovery.conf\nbeing present by definition.\n\nI think, at the very least, the fact that targeted recovery proceeds in\nthe absence of recovery.signal represents a bug.\n\n-- \n-David\ndavid@pgmasters.net\n\n[1]\nhttps://www.postgresql.org/message-id/CAHGQGwEYYg_Ng%2B03FtZczacCpYgJ2Pn%3DB_wPtWF%2BFFLYDgpa1g%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 27 Sep 2019 11:14:03 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Sat, Sep 28, 2019 at 12:14 AM David Steele <david@pgmasters.net> wrote:\n>\n> Hi Peter,\n>\n> On 9/27/19 10:36 AM, Peter Eisentraut wrote:\n> > On 2019-09-26 23:02, David Steele wrote:\n> >> On 9/26/19 4:48 PM, Peter Eisentraut wrote:\n> >>\n> >>> I don't know if recovery_target_timeline is actually useful to change in\n> >>> standby mode.\n> >\n> > OK, I have committed your original documentation patch.\n>\n> Thanks, that's a good start.\n>\n> As Fujii noticed and I have demonstrated upthread, just about any target\n> setting can be used in a standby restore. This matches the behavior of\n> prior versions so it's not exactly a regression, but the old docs made\n> no claim that standby_mode disabled targeted restore.\n>\n> If fact, for both PG12 and before, setting a recovery target in standby\n> mode causes the cluster to drop out of standby mode.\n>\n> Also, the presence or absence of recovery.signal does not seem to have\n> any effect on how targeted recovery proceeds, except as Fujii has\n> demonstrated in [1].\n>\n> I'm not sure what the best thing to do is. The docs are certainly\n> incorrect, but fixing them would be weird. What do we say, setting\n> targets will exit standby mode? That certainly what happens, though.\n>\n> Also, the fact that target settings are being used when recovery.signal\n> is missing is contrary to the docs, and this is a new behavior in PG12.\n> Prior to 12 you could not have target settings without recovery.conf\n> being present by definition.\n>\n> I think, at the very least, the fact that targeted recovery proceeds in\n> the absence of recovery.signal represents a bug.\n\nYes, recovery target settings are used even when neither backup_label\nnor recovery.signal exist, i.e., just a crash recovery, in v12. 
This is\ncompletely different behavior from prior versions.\n\nIMO, since v12 is RC1 now, it's not good idea to change the logic to new.\nSo at least for v12, we basically should change the recovery logic so that\nit behaves in the same way as prior versions. That is,\n\n- Stop the recovery with an error if any recovery target is set in\n crash recovery\n- Use recovery target settings if set even when standby mode\n- Do not enter an archive recovery mode if recovery.signal is missing\n\nThought?\n\nIf we want new behavior in recovery, we can change the logic for v13.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Sat, 28 Sep 2019 00:58:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 9/27/19 11:58 AM, Fujii Masao wrote:\n> On Sat, Sep 28, 2019 at 12:14 AM David Steele <david@pgmasters.net> wrote:\n>>\n>> I think, at the very least, the fact that targeted recovery proceeds in\n>> the absence of recovery.signal represents a bug.\n> \n> Yes, recovery target settings are used even when neither backup_label\n> nor recovery.signal exist, i.e., just a crash recovery, in v12. This is\n> completely different behavior from prior versions.\n\nI'm not able to reproduce this. I only see recovery settings being used\nif backup_label, recovery.signal, or standby.signal is present.\n\nDo you have an example?\n\n> IMO, since v12 is RC1 now, it's not good idea to change the logic to new.\n> So at least for v12, we basically should change the recovery logic so that\n> it behaves in the same way as prior versions. That is,\n> \n> - Stop the recovery with an error if any recovery target is set in\n> crash recovery\n\nThis seems reasonable. I tried adding a recovery.signal and\nrestore_command for crash recovery and I just got an error that it\ncouldn't find 00000002.history in the archive.\n\n> - Use recovery target settings if set even when standby mode\n\nYes, this is weird, but it is present in current versions.\n\n> - Do not enter an archive recovery mode if recovery.signal is missing\n\nAgreed. Perhaps it's OK to use restore_command if a backup_label is\npresent, but we certainly should not be doing targeted recovery.\n\n> If we want new behavior in recovery, we can change the logic for v13.\n\nAgreed, but it's not at all clear to me how invasive these changes would be.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 27 Sep 2019 13:01:03 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Sat, Sep 28, 2019 at 2:01 AM David Steele <david@pgmasters.net> wrote:\n>\n> On 9/27/19 11:58 AM, Fujii Masao wrote:\n> > On Sat, Sep 28, 2019 at 12:14 AM David Steele <david@pgmasters.net> wrote:\n> >>\n> >> I think, at the very least, the fact that targeted recovery proceeds in\n> >> the absence of recovery.signal represents a bug.\n> >\n> > Yes, recovery target settings are used even when neither backup_label\n> > nor recovery.signal exist, i.e., just a crash recovery, in v12. This is\n> > completely different behavior from prior versions.\n>\n> I'm not able to reproduce this. I only see recovery settings being used\n> if backup_label, recovery.signal, or standby.signal is present.\n>\n> Do you have an example?\n\nYes, here is the example:\n\ninitdb -D data\npg_ctl -D data start\npsql -c \"select pg_create_restore_point('hoge')\"\npsql -c \"alter system set recovery_target_name to 'hoge'\"\npsql -c \"create table test as select num from generate_series(1, 100) num\"\npg_ctl -D data -m i stop\npg_ctl -D data start\n\nAfter restarting the server at the above final step, you will see\nthe following log messages indicating that recovery stops at\nrecovery_target_name.\n\n2019-09-28 22:42:04.849 JST [16944] LOG: recovery stopping at restore\npoint \"hoge\", time 2019-09-28 22:42:03.86558+09\n2019-09-28 22:42:04.849 JST [16944] FATAL: requested recovery stop\npoint is before consistent recovery point\n\n> > IMO, since v12 is RC1 now, it's not good idea to change the logic to new.\n> > So at least for v12, we basically should change the recovery logic so that\n> > it behaves in the same way as prior versions. That is,\n> >\n> > - Stop the recovery with an error if any recovery target is set in\n> > crash recovery\n>\n> This seems reasonable. 
I tried adding a recovery.signal and\n> restore_command for crash recovery and I just got an error that it\n> couldn't find 00000002.history in the archive.\n\nYou added recovery.signal, so it means that you started an archive recovery,\nnot crash recovery. Right?\n\nAnyway I'm thinking to apply something like attached patch, to emit an error\nif recovery target is set in crash recovery.\n\n> > - Use recovery target settings if set even when standby mode\n>\n> Yes, this is weird, but it is present in current versions.\n\nYes, and some users might be using this current behavior.\nIf we keep this behavior as it is in v12, the documentation\nneeds to be corrected.\n\n> > - Do not enter an archive recovery mode if recovery.signal is missing\n>\n> Agreed. Perhaps it's OK to use restore_command if a backup_label is\n> present\n\nYeah, it's maybe OK, but differenet behavior from current version.\nSo, at least for v12, I'm inclined to prevent crash recovery with backup_label\nfrom using restore_command, i.e., only WAL files in pg_wal will be replayed\nin this case.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Sat, 28 Sep 2019 23:54:10 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 9/28/19 10:54 AM, Fujii Masao wrote:\n> On Sat, Sep 28, 2019 at 2:01 AM David Steele <david@pgmasters.net> wrote:\n>> On 9/27/19 11:58 AM, Fujii Masao wrote:\n>>>\n>>> Yes, recovery target settings are used even when neither backup_label\n>>> nor recovery.signal exist, i.e., just a crash recovery, in v12. This is\n>>> completely different behavior from prior versions.\n>>\n>> I'm not able to reproduce this. I only see recovery settings being used\n>> if backup_label, recovery.signal, or standby.signal is present.\n>>\n>> Do you have an example?\n> \n> Yes, here is the example:\n> \n> initdb -D data\n> pg_ctl -D data start\n> psql -c \"select pg_create_restore_point('hoge')\"\n> psql -c \"alter system set recovery_target_name to 'hoge'\"\n> psql -c \"create table test as select num from generate_series(1, 100) num\"\n> pg_ctl -D data -m i stop\n> pg_ctl -D data start\n> \n> After restarting the server at the above final step, you will see\n> the following log messages indicating that recovery stops at\n> recovery_target_name.\n> \n> 2019-09-28 22:42:04.849 JST [16944] LOG: recovery stopping at restore\n> point \"hoge\", time 2019-09-28 22:42:03.86558+09\n> 2019-09-28 22:42:04.849 JST [16944] FATAL: requested recovery stop\n> point is before consistent recovery point\n\nThat's definitely not good behavior.\n\n>>> IMO, since v12 is RC1 now, it's not good idea to change the logic to new.\n>>> So at least for v12, we basically should change the recovery logic so that\n>>> it behaves in the same way as prior versions. That is,\n>>>\n>>> - Stop the recovery with an error if any recovery target is set in\n>>> crash recovery\n>>\n>> This seems reasonable. I tried adding a recovery.signal and\n>> restore_command for crash recovery and I just got an error that it\n>> couldn't find 00000002.history in the archive.\n> \n> You added recovery.signal, so it means that you started an archive recovery,\n> not crash recovery. 
Right?\n\nCorrect.\n\n> Anyway I'm thinking to apply something like attached patch, to emit an error\n> if recovery target is set in crash recovery.\n\nThe patch looks reasonable.\n\n>>> - Do not enter an archive recovery mode if recovery.signal is missing\n>>\n>> Agreed. Perhaps it's OK to use restore_command if a backup_label is\n>> present\n> \n> Yeah, it's maybe OK, but different behavior from current version.\n> So, at least for v12, I'm inclined to prevent crash recovery with backup_label\n> from using restore_command, i.e., only WAL files in pg_wal will be replayed\n> in this case.\n\nAgreed. Seems like that could be added to the patch above easily\nenough. More checks would be needed to prevent the behaviors I've been\nseeing in the other thread, but it should be possible to more or less\nmimic the old behavior with sufficient checks.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Sat, 28 Sep 2019 11:51:29 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Sun, Sep 29, 2019 at 12:51 AM David Steele <david@pgmasters.net> wrote:\n>\n> On 9/28/19 10:54 AM, Fujii Masao wrote:\n> > On Sat, Sep 28, 2019 at 2:01 AM David Steele <david@pgmasters.net> wrote:\n> >> On 9/27/19 11:58 AM, Fujii Masao wrote:\n> >>>\n> >>> Yes, recovery target settings are used even when neither backup_label\n> >>> nor recovery.signal exist, i.e., just a crash recovery, in v12. This is\n> >>> completely different behavior from prior versions.\n> >>\n> >> I'm not able to reproduce this. I only see recovery settings being used\n> >> if backup_label, recovery.signal, or standby.signal is present.\n> >>\n> >> Do you have an example?\n> >\n> > Yes, here is the example:\n> >\n> > initdb -D data\n> > pg_ctl -D data start\n> > psql -c \"select pg_create_restore_point('hoge')\"\n> > psql -c \"alter system set recovery_target_name to 'hoge'\"\n> > psql -c \"create table test as select num from generate_series(1, 100) num\"\n> > pg_ctl -D data -m i stop\n> > pg_ctl -D data start\n> >\n> > After restarting the server at the above final step, you will see\n> > the following log messages indicating that recovery stops at\n> > recovery_target_name.\n> >\n> > 2019-09-28 22:42:04.849 JST [16944] LOG: recovery stopping at restore\n> > point \"hoge\", time 2019-09-28 22:42:03.86558+09\n> > 2019-09-28 22:42:04.849 JST [16944] FATAL: requested recovery stop\n> > point is before consistent recovery point\n>\n> That's definitely not good behavior.\n>\n> >>> IMO, since v12 is RC1 now, it's not good idea to change the logic to new.\n> >>> So at least for v12, we basically should change the recovery logic so that\n> >>> it behaves in the same way as prior versions. That is,\n> >>>\n> >>> - Stop the recovery with an error if any recovery target is set in\n> >>> crash recovery\n> >>\n> >> This seems reasonable. 
I tried adding a recovery.signal and\n> >> restore_command for crash recovery and I just got an error that it\n> >> couldn't find 00000002.history in the archive.\n> >\n> > You added recovery.signal, so it means that you started an archive recovery,\n> > not crash recovery. Right?\n>\n> Correct.\n>\n> > Anyway I'm thinking to apply something like attached patch, to emit an error\n> > if recovery target is set in crash recovery.\n>\n> The patch looks reasonable.\n>\n> >>> - Do not enter an archive recovery mode if recovery.signal is missing\n> >>\n> >> Agreed. Perhaps it's OK to use restore_command if a backup_label is\n> >> present\n> >\n> > Yeah, it's maybe OK, but differenet behavior from current version.\n> > So, at least for v12, I'm inclined to prevent crash recovery with backup_label\n> > from using restore_command, i.e., only WAL files in pg_wal will be replayed\n> > in this case.\n>\n> Agreed. Seems like that could be added to the patch above easily\n> enough. More checks would be needed to prevent the behaviors I've been\n> seeing in the other thread, but it should be possible to more or less\n> mimic the old behavior with sufficient checks.\n\nYeah, more checks would be necessary. IMO easy fix is to forbid not only\nrecovery target parameters but also any recovery parameters (specified\nin recovery.conf in previous versions) in crash recovery.\n\nIn v11 or before, any parameters in recovery.conf cannot take effect in\ncrash recovery because crash recovery always starts without recovery.conf.\nBut in v12, those parameters are specified in postgresql.conf,\nso they may take effect even in crash recovery (i.e., when both\nrecovery.signal and standby.signal are missing). This would be the root\ncause of the problems that we are discussing, I think.\n\nThere might be some recovery parameters that we can safely use\neven in crash recovery, e.g., maybe recovery_end_command\n(now, you can see that recovery_end_command is executed in\ncrash recovery in v12). 
But at this stage of v12, it's worth considering\njust causing crash recovery to exit with an error when any recovery\nparameter is set. Thoughts?\n\nOr, if that change is overkill, we can alternatively make crash recovery\n\"ignore\" any recovery parameters, e.g., by forcibly disabling\nthe parameters.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Sun, 29 Sep 2019 02:26:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "Fujii Masao <masao.fujii@gmail.com> writes:\n>> Agreed. Seems like that could be added to the patch above easily\n>> enough. More checks would be needed to prevent the behaviors I've been\n>> seeing in the other thread, but it should be possible to more or less\n>> mimic the old behavior with sufficient checks.\n\n> Yeah, more checks would be necessary. IMO easy fix is to forbid not only\n> recovery target parameters but also any recovery parameters (specified\n> in recovery.conf in previous versions) in crash recovery.\n\n> In v11 or before, any parameters in recovery.conf cannot take effect in\n> crash recovery because crash recovery always starts without recovery.conf.\n> But in v12, those parameters are specified in postgresql.conf,\n> so they may take effect even in crash recovery (i.e., when both\n> recovery.signal and standby.signal are missing). This would be the root\n> cause of the problems that we are discussing, I think.\n\nSo ... what I'm wondering about here is what happens during *actual* crash\nrecovery, eg a postmaster-driven restart of the startup process after\na backend crash in hot standby. The direction you guys are going in\nseems likely to cause the startup process to refuse to function until\nthose parameters are removed from postgresql.conf, which seems quite\nuser-unfriendly.\n\nMaybe I'm misunderstanding, but I think that rather than adding error\nchecks that were not there before, the right path to fixing this is\nto cause these settings to be ignored if we're doing crash recovery.\nNot make the user take them out (and possibly later put them back).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Sep 2019 13:45:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 9/28/19 1:26 PM, Fujii Masao wrote:\n> On Sun, Sep 29, 2019 at 12:51 AM David Steele <david@pgmasters.net> wrote:\n> \n> Yeah, more checks would be necessary. IMO easy fix is to forbid not only\n> recovery target parameters but also any recovery parameters (specified\n> in recovery.conf in previous versions) in crash recovery.\n> \n> In v11 or before, any parameters in recovery.conf cannot take effect in\n> crash recovery because crash recovery always starts without recovery.conf.\n> But in v12, those parameters are specified in postgresql.conf,\n> so they may take effect even in crash recovery (i.e., when both\n> recovery.signal and standby.signal are missing). This would be the root\n> cause of the problems that we are discussing, I think.\n> \n> There might be some recovery parameters that we can safely use\n> even in crash recovery, e.g., maybe recovery_end_command\n> (now, you can see that recovery_end_command is executed in\n> crash recovery in v12). But at this stage of v12, it's worth thinking to\n> just cause crash recovery to exit with an error when any recovery\n> parameter is set. Thought?\n\nI dislike the idea of crash recovery throwing fatal errors because there\nare recovery settings in postgresql.auto.conf. Since there is no\ndefined mechanism for cleaning out old recovery settings we have to\nassume that they will persist (and accumulate) more or less forever.\n\n> Or if that change is overkill, alternatively we can make crash recovery\n> \"ignore\" any recovery parameters, e.g., by forcibly disabling\n> the parameters.\n\nI'd rather load recovery settings *only* if recovery.signal or\nstandby.signal is present and do this only after crash recovery is\ncomplete, i.e. 
in the absence of backup_label.\n\nI think blindly loading recovery settings then trying to ignore them\nlater is pretty much why we are having these issues in the first place.\nI'd rather not extend that pattern if possible.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Sat, 28 Sep 2019 13:49:29 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 2019-09-28 19:45, Tom Lane wrote:\n> Maybe I'm misunderstanding, but I think that rather than adding error\n> checks that were not there before, the right path to fixing this is\n> to cause these settings to be ignored if we're doing crash recovery.\n\nThat makes sense to me. Something like this (untested)?\n\ndiff --git a/src/backend/access/transam/xlog.c\nb/src/backend/access/transam/xlog.c\nindex 0daab3ff4b..25cae57131 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -5618,6 +5618,13 @@ recoveryStopsBefore(XLogReaderState *record)\n \tTimestampTz recordXtime = 0;\n \tTransactionId recordXid;\n\n+\t/*\n+\t * Ignore recovery target settings when not in archive recovery (meaning\n+\t * we are in crash recovery).\n+\t */\n+\tif (!InArchiveRecovery)\n+\t\treturn false;\n+\n \t/* Check if we should stop as soon as reaching consistency */\n \tif (recoveryTarget == RECOVERY_TARGET_IMMEDIATE && reachedConsistency)\n \t{\n@@ -5759,6 +5766,13 @@ recoveryStopsAfter(XLogReaderState *record)\n \tuint8\t\trmid;\n \tTimestampTz recordXtime;\n\n+\t/*\n+\t * Ignore recovery target settings when not in archive recovery (meaning\n+\t * we are in crash recovery).\n+\t */\n+\tif (!InArchiveRecovery)\n+\t\treturn false;\n+\n \tinfo = XLogRecGetInfo(record) & ~XLR_INFO_MASK;\n \trmid = XLogRecGetRmid(record);\n\n\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 28 Sep 2019 23:07:59 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Sun, Sep 29, 2019 at 6:08 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-09-28 19:45, Tom Lane wrote:\n> > Maybe I'm misunderstanding, but I think that rather than adding error\n> > checks that were not there before, the right path to fixing this is\n> > to cause these settings to be ignored if we're doing crash recovery.\n>\n> That makes sense to me.\n\n+1\n\n> Something like this (untested)?\n\nYes, but ArchiveRecoveryRequested should be checked instead of\nInArchiveRecovery, I think. Otherwise recovery targets would take effect\nwhen recovery.signal is missing but backup_label exists. In this case,\nInArchiveRecovery is set to true though ArchiveRecoveryRequested is\nfalse because recovery.signal is missing.\n\nWith the attached patch, I checked that the steps that I described\nupthread didn't reproduce the issue.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Mon, 30 Sep 2019 01:36:55 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 2019-09-27 17:14, David Steele wrote:\n> On 9/27/19 10:36 AM, Peter Eisentraut wrote:\n>> On 2019-09-26 23:02, David Steele wrote:\n>>> On 9/26/19 4:48 PM, Peter Eisentraut wrote:\n>>>\n>>>> I don't know if recovery_target_timeline is actually useful to change in\n>>>> standby mode.\n>> OK, I have committed your original documentation patch.\n> Thanks, that's a good start.\n> \n> As Fujii noticed and I have demonstrated upthread, just about any target\n> setting can be used in a standby restore. This matches the behavior of\n> prior versions so it's not exactly a regression, but the old docs made\n> no claim that standby_mode disabled targeted restore.\n\nI have further fixed the documentation.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 29 Sep 2019 23:11:12 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 2019-09-29 18:36, Fujii Masao wrote:\n> Yes, but ArchiveRecoveryRequested should be checked instead of\n> InArchiveRecovery, I think. Otherwise recovery targets would take effect\n> when recovery.signal is missing but backup_label exists. In this case,\n> InArchiveRecovery is set to true though ArchiveRecoveryRequested is\n> false because recovery.signal is missing.\n> \n> With the attached patch, I checked that the steps that I described\n> upthread didn't reproduce the issue.\n\nYour patch looks correct to me.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 29 Sep 2019 23:59:08 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 6:59 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-09-29 18:36, Fujii Masao wrote:\n> > Yes, but ArchiveRecoveryRequested should be checked instead of\n> > InArchiveRecovery, I think. Otherwise recovery targets would take effect\n> > when recovery.signal is missing but backup_label exists. In this case,\n> > InArchiveRecovery is set to true though ArchiveRecoveryRequested is\n> > false because recovery.signal is missing.\n> >\n> > With the attached patch, I checked that the steps that I described\n> > upthread didn't reproduce the issue.\n>\n> Your patch looks correct to me.\n\nThanks! So I committed the patch.\n\nAlso we need to do the same thing for other recovery options like\nrestore_command. Attached is the patch which makes crash recovery\nignore restore_command and recovery_end_command.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Mon, 30 Sep 2019 10:48:53 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "Greetings,\n\n* David Steele (david@pgmasters.net) wrote:\n> On 9/28/19 1:26 PM, Fujii Masao wrote:\n> > On Sun, Sep 29, 2019 at 12:51 AM David Steele <david@pgmasters.net> wrote:\n> > \n> > Yeah, more checks would be necessary. IMO easy fix is to forbid not only\n> > recovery target parameters but also any recovery parameters (specified\n> > in recovery.conf in previous versions) in crash recovery.\n> > \n> > In v11 or before, any parameters in recovery.conf cannot take effect in\n> > crash recovery because crash recovery always starts without recovery.conf.\n> > But in v12, those parameters are specified in postgresql.conf,\n> > so they may take effect even in crash recovery (i.e., when both\n> > recovery.signal and standby.signal are missing). This would be the root\n> > cause of the problems that we are discussing, I think.\n> > \n> > There might be some recovery parameters that we can safely use\n> > even in crash recovery, e.g., maybe recovery_end_command\n> > (now, you can see that recovery_end_command is executed in\n> > crash recovery in v12). But at this stage of v12, it's worth thinking to\n> > just cause crash recovery to exit with an error when any recovery\n> > parameter is set. Thought?\n> \n> I dislike the idea of crash recovery throwing fatal errors because there\n> are recovery settings in postgresql.auto.conf. Since there is no\n> defined mechanism for cleaning out old recovery settings we have to\n> assume that they will persist (and accumulate) more or less forever.\n> \n> > Or if that change is overkill, alternatively we can make crash recovery\n> > \"ignore\" any recovery parameters, e.g., by forcibly disabling\n> > the parameters.\n> \n> I'd rather load recovery settings *only* if recovery.signal or\n> standby.signal is present and do this only after crash recovery is\n> complete, i.e. 
in the absence of backup_label.\n> \n> I think blindly loading recovery settings then trying to ignore them\n> later is pretty much why we are having these issues in the first place.\n> I'd rather not extend that pattern if possible.\n\nAgreed.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 2 Oct 2019 03:30:38 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Wed, Oct 02, 2019 at 03:30:38AM -0400, Stephen Frost wrote:\n> * David Steele (david@pgmasters.net) wrote:\n>> On 9/28/19 1:26 PM, Fujii Masao wrote:\n>>> There might be some recovery parameters that we can safely use\n>>> even in crash recovery, e.g., maybe recovery_end_command\n>>> (now, you can see that recovery_end_command is executed in\n>>> crash recovery in v12). But at this stage of v12, it's worth thinking to\n>>> just cause crash recovery to exit with an error when any recovery\n>>> parameter is set. Thought?\n>> \n>> I dislike the idea of crash recovery throwing fatal errors because there\n>> are recovery settings in postgresql.auto.conf. Since there is no\n>> defined mechanism for cleaning out old recovery settings we have to\n>> assume that they will persist (and accumulate) more or less forever.\n\nYeah, I don't think that's a good thing either. That's a recipe to\nmake the user experience more confusing.\n\n>>> Or if that change is overkill, alternatively we can make crash recovery\n>>> \"ignore\" any recovery parameters, e.g., by forcibly disabling\n>>> the parameters.\n>> \n>> I'd rather load recovery settings *only* if recovery.signal or\n>> standby.signal is present and do this only after crash recovery is\n>> complete, i.e. in the absence of backup_label.\n>> \n>> I think blindly loading recovery settings then trying to ignore them\n>> later is pretty much why we are having these issues in the first place.\n>> I'd rather not extend that pattern if possible.\n> \n> Agreed.\n\nThat would mean that you need to create some new special handling,\nwhile making sure that the process reading the parameters is not\nconfused either by default values. It seems to me that if we are not\nin recovery, the best thing we can do now is just to not process\nanything those parameters trigger, instead of \"ignoring\" these (you\nare referring to using SetConfigOption in the startup process here?).\n--\nMichael",
"msg_date": "Fri, 4 Oct 2019 18:09:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Fri, Oct 4, 2019 at 6:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 02, 2019 at 03:30:38AM -0400, Stephen Frost wrote:\n> > * David Steele (david@pgmasters.net) wrote:\n> >> On 9/28/19 1:26 PM, Fujii Masao wrote:\n> >>> There might be some recovery parameters that we can safely use\n> >>> even in crash recovery, e.g., maybe recovery_end_command\n> >>> (now, you can see that recovery_end_command is executed in\n> >>> crash recovery in v12). But at this stage of v12, it's worth thinking to\n> >>> just cause crash recovery to exit with an error when any recovery\n> >>> parameter is set. Thought?\n> >>\n> >> I dislike the idea of crash recovery throwing fatal errors because there\n> >> are recovery settings in postgresql.auto.conf. Since there is no\n> >> defined mechanism for cleaning out old recovery settings we have to\n> >> assume that they will persist (and accumulate) more or less forever.\n>\n> Yeah, I don't think that's a good thing either. That's a recipe to\n> make the user experience more confusing.\n>\n> >>> Or if that change is overkill, alternatively we can make crash recovery\n> >>> \"ignore\" any recovery parameters, e.g., by forcibly disabling\n> >>> the parameters.\n> >>\n> >> I'd rather load recovery settings *only* if recovery.signal or\n> >> standby.signal is present and do this only after crash recovery is\n> >> complete, i.e. in the absence of backup_label.\n> >>\n> >> I think blindly loading recovery settings then trying to ignore them\n> >> later is pretty much why we are having these issues in the first place.\n> >> I'd rather not extend that pattern if possible.\n> >\n> > Agreed.\n\nAgreed, too. Do you have any idea to implement that? 
I've not found a\n\"smart\" way to do that yet.\n\nOne idea is, as Michael suggested, to use SetConfigOption() for all the\narchive recovery parameters at the beginning of the startup process as follows,\nto forcibly set the default values if crash recovery is running. But this\nseems not smart to me.\n\nSetConfigOption(\"restore_command\", ...);\nSetConfigOption(\"archive_cleanup_command\", ...);\nSetConfigOption(\"recovery_end_command\", ...);\n...\n\nMaybe we should make the GUC mechanism notice the signal files and ignore\narchive recovery-related parameters when none of those files exist?\nThis change seems overkill, at least in v12, though.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Mon, 7 Oct 2019 22:13:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Mon, Oct 7, 2019 at 9:14 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> Agreed, too. Do you have any idea to implement that? I've not found out\n> \"smart\" way to do that yet.\n>\n> One idea is, as Michael suggested, to use SetConfigOption() for all the\n> archive recovery parameters at the beginning of the startup process as follows,\n> to forcibly set the default values if crash recovery is running. But this\n> seems not smart for me.\n>\n> SetConfigOption(\"restore_command\", ...);\n> SetConfigOption(\"archive_cleanup_command\", ...);\n> SetConfigOption(\"recovery_end_command\", ...);\n> ...\n>\n> Maybe we should make GUC mechanism notice signal files and ignore\n> archive recovery-related parameters when none of those files exist?\n> This change seems overkill at least in v12, though.\n\nI think this approach is going in the wrong direction. In every other\npart of the system, it's the job of the code around the GUC system to\nuse parameters when they're relevant and ignore them when they should\nbe ignored. Deciding that the parameters that were formerly part of\nrecovery.conf are an exception to that rule and that the GUC system is\nresponsible for making sure they're set only when we pay attention to\nthem seems like it's bringing back or exacerbating a code-level split\nbetween recovery.conf parameters and postgresql.conf parameters when,\nmeanwhile, we've been wanting to eradicate that split so that the\nthings we allow for postgresql.conf parameters -- e.g. changing them\nwhile they are running -- can be applied to these parameters also.\n\nI don't particularly like the use of SetConfigOption() either,\nalthough it does have some precedent in autovacuum, for example.\nGenerally, it's right and proper that the GUC system sets the\nvariables to which the parameters it controls are tied -- and then the\nrest of the code has to do the right thing around that. 
It sounds like\nthe patch that got rid of recovery.conf wasn't considered carefully\nenough, and missed the fact that it was introducing some inadvertent\nbehavior changes. That's too bad, but let's not overreact. It seems\ntotally fine to me to just add ad-hoc checks that rule out\ninappropriately relying on these parameters while performing crash\nrecovery - and be done with it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 7 Oct 2019 11:40:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Oct 7, 2019 at 9:14 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > Agreed, too. Do you have any idea to implement that? I've not found out\n> > \"smart\" way to do that yet.\n> >\n> > One idea is, as Michael suggested, to use SetConfigOption() for all the\n> > archive recovery parameters at the beginning of the startup process as follows,\n> > to forcibly set the default values if crash recovery is running. But this\n> > seems not smart for me.\n> >\n> > SetConfigOption(\"restore_command\", ...);\n> > SetConfigOption(\"archive_cleanup_command\", ...);\n> > SetConfigOption(\"recovery_end_command\", ...);\n> > ...\n> >\n> > Maybe we should make GUC mechanism notice signal files and ignore\n> > archive recovery-related parameters when none of those files exist?\n> > This change seems overkill at least in v12, though.\n> \n> I think this approach is going in the wrong direction. In every other\n> part of the system, it's the job of the code around the GUC system to\n> use parameters when they're relevant and ignore them when they should\n> be ignored. Deciding that the parameters that were formerly part of\n> recovery.conf are an exception to that rule and that the GUC system is\n> responsible for making sure they're set only when we pay attention to\n> them seems like it's bringing back or exacerbating a code-level split\n> between recovery.conf parameters and postgresql.conf parameters when,\n> meanwhile, we've been wanting to eradicate that split so that the\n> things we allow for postgresql.conf parameters -- e.g. 
changing them\n> while they are running -- can be applied to these parameters also.\n\nI don't think we necessarily need to be thinking about trying to\neliminate all differences between certain former recovery.conf settings\nand things like work_mem, even as we make it such that those former\nsettings can be changed while we're running.\n\n> I don't particularly like the use of SetConfigOption() either,\n> although it does have some precedent in autovacuum, for example.\n\nIt's pretty explicitly the job of SetConfigOption to manage the fact\nthat only certain options can be set at certain times, as noted at the\ntop of guc.h where we're talking about GUC contexts (and which\nSetConfigOption references as being what its job is to manage-\nguc.c:6776 currently).\n\n> Generally, it's right and proper that the GUC system sets the\n> variables to which the parameters it controls are tied -- and then the\n> rest of the code has to do the right thing around that. It sounds like\n> the patch that got rid of recovery.conf wasn't considered carefully\n> enough, and missed the fact that it was introducing some inadvertent\n> behavior changes. That's too bad, but let's not overreact. It seems\n> totally fine to me to just add ad-hoc checks that rule out\n> inappropriately relying on these parameters while performing crash\n> recovery - and be done with it.\n\nThe patch that got rid of recovery.conf also removed the inherent\nunderstanding and knowledge that there are certain options that can only\nbe set (and make sense ...) at certain times- namely, when we're doing\nrecovery. Having these options set at other times is entirely wrong and\nwill be confusing to both users, and, as seen, code. 
How is that sensible?\n\nThose options should only be set when we're actually doing recovery,\nwhich is governed by the signal file. Recovery is absolutely a specific\nkind of state that the system is in, not unlike postmaster, we've even\ngot a specific pg_is_in_recovery() function for it.\n\nHaving these options end up set but then hacking all of the other code\nthat looks at them to check if we're actually in recovery or not would\nend up being both confusing to users as well as an ongoing source of\nbugs (which has already been made clear by the fact that we're having\nthis discussion...). Wouldn't that also mean we would need to hack the\n'show' code, to blank out the recovery_target_name variable if we aren't\nin recovery? Ugh.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 8 Oct 2019 09:58:03 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Tue, Oct 8, 2019 at 10:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > On Mon, Oct 7, 2019 at 9:14 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > > Agreed, too. Do you have any idea to implement that? I've not found out\n> > > \"smart\" way to do that yet.\n> > >\n> > > One idea is, as Michael suggested, to use SetConfigOption() for all the\n> > > archive recovery parameters at the beginning of the startup process as follows,\n> > > to forcibly set the default values if crash recovery is running. But this\n> > > seems not smart for me.\n> > >\n> > > SetConfigOption(\"restore_command\", ...);\n> > > SetConfigOption(\"archive_cleanup_command\", ...);\n> > > SetConfigOption(\"recovery_end_command\", ...);\n> > > ...\n> > >\n> > > Maybe we should make GUC mechanism notice signal files and ignore\n> > > archive recovery-related parameters when none of those files exist?\n> > > This change seems overkill at least in v12, though.\n> >\n> > I think this approach is going in the wrong direction. In every other\n> > part of the system, it's the job of the code around the GUC system to\n> > use parameters when they're relevant and ignore them when they should\n> > be ignored. Deciding that the parameters that were formerly part of\n> > recovery.conf are an exception to that rule and that the GUC system is\n> > responsible for making sure they're set only when we pay attention to\n> > them seems like it's bringing back or exacerbating a code-level split\n> > between recovery.conf parameters and postgresql.conf parameters when,\n> > meanwhile, we've been wanting to eradicate that split so that the\n> > things we allow for postgresql.conf parameters -- e.g. 
changing them\n> > while they are running -- can be applied to these parameters also.\n>\n> I don't think we necessairly need to be thinking about trying to\n> eliminate all differences between certain former recovery.conf settings\n> and things like work_mem, even as we make it such that those former\n> settings can be changed while we're running.\n>\n> > I don't particularly like the use of SetConfigOption() either,\n> > although it does have some precedent in autovacuum, for example.\n>\n> It's pretty explicitly the job of SetConfigOption to manage the fact\n> that only certain options can be set at certain times, as noted at the\n> top of guc.h where we're talking about GUC contexts (and which\n> SetConfigOption references as being what it's job is to manage-\n> guc.c:6776 currently).\n>\n> > Generally, it's right and proper that the GUC system sets the\n> > variables to which the parameters it controls are tied -- and then the\n> > rest of the code has to do the right thing around that. It sounds like\n> > the patch that got rid of recovery.conf wasn't considered carefully\n> > enough, and missed the fact that it was introducing some inadvertent\n> > behavior changes. That's too bad, but let's not overreact. It seems\n> > totally fine to me to just add ad-hoc checks that rule out\n> > inappropriately relying on these parameters while performing crash\n> > recovery - and be done with it.\n\nYeah, I agree.\n\n> The patch that got rid of recovery.conf also removed the inherent\n> understanding and knowledge that there are certain options that can only\n> be set (and make sense ...) at certain times- namely, when we're doing\n> recovery. Having these options set at other times is entirely wrong and\n> will be confusing to both users, and, as seen, code. 
From a user\n> perspective, what happens when you've started up PG as a primary, since\n> you don't have a signal file in place to indicate that you're doing\n> recovery, and you have a recovery_target set, so some user does\n> \"show recovery_target_name\" and sees a value? How is that sensible?\n>\n> Those options should only be set when we're actually doing recovery,\n> which is governed by the signal file. Recovery is absolutely a specific\n> kind of state that the system is in, not unlike postmaster, we've even\n> got a specific pg_is_in_recovery() function for it.\n>\n> Having these options end up set but then hacking all of the other code\n> that looks at them to check if we're actually in recovery or not would\n> end up being both confusing to users as well as an ongoing source of\n> bugs (which has already been made clear by the fact that we're having\n> this discussion...). Wouldn't that also mean we would need to hack the\n> 'show' code, to blank out the recovery_target_name variable if we aren't\n> in recovery? Ugh.\n\nIsn't this overkill? This doesn't seem the problem only for recovery-related\nsettings. We have already have the similar issue with other settings.\nFor example, log_directory parameter is ignored when logging_collector is\nnot enabled. But SHOW log_directory reports the setting value even when\nlogging_collector is disabled. This seems the similar issue and might be\nconfusing, but we could live with that.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 9 Oct 2019 00:48:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "Greetings,\n\n* Fujii Masao (masao.fujii@gmail.com) wrote:\n> On Tue, Oct 8, 2019 at 10:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Having these options end up set but then hacking all of the other code\n> > that looks at them to check if we're actually in recovery or not would\n> > end up being both confusing to users as well as an ongoing source of\n> > bugs (which has already been made clear by the fact that we're having\n> > this discussion...). Wouldn't that also mean we would need to hack the\n> > 'show' code, to blank out the recovery_target_name variable if we aren't\n> > in recovery? Ugh.\n> \n> Isn't this overkill? This doesn't seem the problem only for recovery-related\n> settings. We have already have the similar issue with other settings.\n> For example, log_directory parameter is ignored when logging_collector is\n> not enabled. But SHOW log_directory reports the setting value even when\n> logging_collector is disabled. This seems the similar issue and might be\n> confusing, but we could live with that.\n\nI agree it's a similar issue. I disagree that it's actually sensible\nfor us to do so and would rather contend that it's confusing and not\ngood.\n\nWe certainly do a lot of smart things in PG, but showing the value of\nvariables that aren't accurate, and we *know* they aren't, hardly seems\nlike something we should be saying \"this is good and ok, so let's do\nmore of this.\"\n\nI'd rather argue that this just shows that we need to come up with a\nsolution in this area. 
I don't think it's *as* big of a deal when it\ncomes to logging_collector/log_directory because, at least there, you\ndon't even start to get into the same code paths where it matters, like\nyou end up doing with the recovery targets and crash recovery, so the\nchances of bugs creeping in are less in the log_directory case.\n\nI still don't think it's great though and, yes, would prefer that we\navoid having log_directory set when logging_collector is in use.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 8 Oct 2019 12:02:06 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Wed, Oct 9, 2019 at 1:02 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Fujii Masao (masao.fujii@gmail.com) wrote:\n> > On Tue, Oct 8, 2019 at 10:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Having these options end up set but then hacking all of the other code\n> > > that looks at them to check if we're actually in recovery or not would\n> > > end up being both confusing to users as well as an ongoing source of\n> > > bugs (which has already been made clear by the fact that we're having\n> > > this discussion...). Wouldn't that also mean we would need to hack the\n> > > 'show' code, to blank out the recovery_target_name variable if we aren't\n> > > in recovery? Ugh.\n> >\n> > Isn't this overkill? This doesn't seem the problem only for recovery-related\n> > settings. We have already have the similar issue with other settings.\n> > For example, log_directory parameter is ignored when logging_collector is\n> > not enabled. But SHOW log_directory reports the setting value even when\n> > logging_collector is disabled. This seems the similar issue and might be\n> > confusing, but we could live with that.\n>\n> I agree it's a similar issue. I disagree that it's actually sensible\n> for us to do so and would rather contend that it's confusing and not\n> good.\n>\n> We certainly do a lot of smart things in PG, but showing the value of\n> variables that aren't accurate, and we *know* they aren't, hardly seems\n> like something we should be saying \"this is good and ok, so let's do\n> more of this.\"\n>\n> I'd rather argue that this just shows that we need to come up with a\n> solution in this area. 
I don't think it's *as* big of a deal when it\n> comes to logging_collector/log_directory because, at least there, you\n> don't even start to get into the same code paths where it matters, like\n> you end up doing with the recovery targets and crash recovery, so the\n> chances of bugs creeping in are less in the log_directory case.\n>\n> I still don't think it's great though and, yes, would prefer that we\n> avoid having log_directory set when logging_collector is in use.\n\nThere are other parameters having the similar issue, for example,\n- parameters for SSL connection when ssl is disabled\n- parameters for autovacuum activity when autovacuum is disabled\n- parameters for Hot Standby when hot_standby is disabled\netc\n\nYeah, it's better to make SHOW command handle these parameters\n\"less confusing\". But I cannot wait for the solution for them before\nfixing the original issue in v12 (i.e., the issue where restore_command\ncan be executed even in crash recovery). So, barring any objection,\nI'd like to commit the patch that I attached upthread, soon.\nThe patch prevents restore_command and recovery_end_command\nfrom being executed in crash recovery. Thought?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 9 Oct 2019 15:16:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Tue, Oct 8, 2019 at 9:58 AM Stephen Frost <sfrost@snowman.net> wrote:\n> From a user\n> perspective, what happens when you've started up PG as a primary, since\n> you don't have a signal file in place to indicate that you're doing\n> recovery, and you have a recovery_target set, so some user does\n> \"show recovery_target_name\" and sees a value? How is that sensible?\n\nI'd argue that not only is it sensible, but it's the only correct\nanswer. If I put a value in postgresql.conf and it doesn't show up in\nthe output of SHOW, I'm going to be confused. That just seems flat\nwrong to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 9 Oct 2019 07:50:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Oct 8, 2019 at 9:58 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > From a user\n> > perspective, what happens when you've started up PG as a primary, since\n> > you don't have a signal file in place to indicate that you're doing\n> > recovery, and you have a recovery_target set, so some user does\n> > \"show recovery_target_name\" and sees a value? How is that sensible?\n> \n> I'd argue that not only is it sensible, but it's the only correct\n> answer. If I put a value in postgresql.conf and it doesn't show up in\n> the output of SHOW, I'm going to be confused. That just seems flat\n> wrong to me.\n\nYou're going to be really confused when you realize that, sure, it's\nset, but we just completely ignored it ...\n\nHow about we look at things like listen_addresses or shared_buffers?\nLet's make a similar argument there- some day, in the future, we make PG\nautomagically realize when shared_buffers is too high to be able to\nstart up, so we lower it to some other value just to get the database\nonline, with the hope that the user will realize and fix the setting\n(this isn't a joke- having shared_buffers be too high through an ALTER\nSYSTEM setting is a real problem and it'd be nice if we had a way to\ndeal with that...), you think we should keep the shared_buffers variable\nshowing whatever was in the config file because, well, that's what was\nin the config file?\n\nIf anything, what we should be doing here is throwing a WARNING or\nsimilar that these settings don't make sense, because we aren't going\nthrough recovery, and blank them out. If we want to be really cute, we\ncould have the show return something like: <not in recovery>, or\nsimilar, but just returning an invalid value because that's what was in\nthe config file is nonsense. SHOW isn't a view of what's in\npostgresql.conf, it's telling the user what the current system state is.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 9 Oct 2019 08:42:53 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "Greetings,\n\n* Fujii Masao (masao.fujii@gmail.com) wrote:\n> On Wed, Oct 9, 2019 at 1:02 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Fujii Masao (masao.fujii@gmail.com) wrote:\n> > > On Tue, Oct 8, 2019 at 10:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > > Having these options end up set but then hacking all of the other code\n> > > > that looks at them to check if we're actually in recovery or not would\n> > > > end up being both confusing to users as well as an ongoing source of\n> > > > bugs (which has already been made clear by the fact that we're having\n> > > > this discussion...). Wouldn't that also mean we would need to hack the\n> > > > 'show' code, to blank out the recovery_target_name variable if we aren't\n> > > > in recovery? Ugh.\n> > >\n> > > Isn't this overkill? This doesn't seem the problem only for recovery-related\n> > > settings. We have already have the similar issue with other settings.\n> > > For example, log_directory parameter is ignored when logging_collector is\n> > > not enabled. But SHOW log_directory reports the setting value even when\n> > > logging_collector is disabled. This seems the similar issue and might be\n> > > confusing, but we could live with that.\n> >\n> > I agree it's a similar issue. I disagree that it's actually sensible\n> > for us to do so and would rather contend that it's confusing and not\n> > good.\n> >\n> > We certainly do a lot of smart things in PG, but showing the value of\n> > variables that aren't accurate, and we *know* they aren't, hardly seems\n> > like something we should be saying \"this is good and ok, so let's do\n> > more of this.\"\n> >\n> > I'd rather argue that this just shows that we need to come up with a\n> > solution in this area. 
I don't think it's *as* big of a deal when it\n> > comes to logging_collector/log_directory because, at least there, you\n> > don't even start to get into the same code paths where it matters, like\n> > you end up doing with the recovery targets and crash recovery, so the\n> > chances of bugs creeping in are less in the log_directory case.\n> >\n> > I still don't think it's great though and, yes, would prefer that we\n> > avoid having log_directory set when logging_collector is in use.\n> \n> There are other parameters having the similar issue, for example,\n> - parameters for SSL connection when ssl is disabled\n> - parameters for autovacuum activity when autovacuum is disabled\n> - parameters for Hot Standby when hot_standby is disabled\n\nI agree that those would also be nice to improve with some indication\nthat those features are disabled, but, again, as I said above, while\nthey're confusing they at least don't *also* lead to bugs where the code\nitself is confused about the state of the system, because we don't have\ntwo different major ways of getting to the same code.\n\n> Yeah, it's better to make SHOW command handle these parameters\n> \"less confusing\". But I cannot wait for the solution for them before\n> fixing the original issue in v12 (i.e., the issue where restore_command\n> can be executed even in crash recovery). So, barring any objection,\n> I'd like to commit the patch that I attached upthread, soon.\n> The patch prevents restore_command and recovery_end_command\n> from being executed in crash recovery. Thought?\n\nI'm not suggesting that we fix everything in this area in a patch that\nwe back-patch, and I haven't intended to imply that at all throughout\nthis, so this argument doesn't really hold. I do think we should fix\nthis issue, where we've seen bugs from the confusion, in the right way\nby realizing that this is a direction that's prone to cause bugs.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 9 Oct 2019 08:50:18 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Wed, Oct 9, 2019 at 8:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n> You're going to be really confused when you realize that, sure, it's\n> set, but we just completely ignored it ...\n\nNo, I'm not, because I expect that settings will only take effect for\noperations to which they apply. As Fujii Masao also pointed out, there\nare lots of other settings that are ignored (but still shown as set)\nin situations where they don't apply.\n\n> How about we look at things like listen_addresses or shared_buffers?\n> Let's make a similar argument there- some day, in the future, we make PG\n> automagically realize when shared_buffers is too high to be able to\n> start up, so we lower it to some other value just to get the database\n> online, with the hope that the user will realize and fix the setting\n> (this isn't a joke- having shared_buffers be too high through an ALTER\n> SYSTEM setting is a real problem and it'd be nice if we had a way to\n> deal with that...), you think we should keep the shared_buffers variable\n> showing whatever was in the config file because, well, that's what was\n> in the config file?\n\nYes. I mean, I assume that if we did such a thing, we might rename the\nGUC in the config file to max_shared_buffers or shared_buffers_limit\nor something like that to make it more clear, and shared_buffers\nitself might become a PGC_INTERNAL setting that users can't modify but\nwhich can still be viewed using SHOW. But if a value is set in the\nconfiguration file and not overridden somewhere else (ALTER USER,\nALTER FUNCTION, etc.) I expect the SHOW command to display that value.\n\n> If anything, what we should be doing here is throwing a WARNING or\n> similar that these settings don't make sense, because we aren't going\n> through recovery, and blank them out. 
If we want to be really cute, we\n> could have the show return something like: <not in recovery>, or\n> similar, but just returning an invalid value because that's what was in\n> the config file is nonsense. SHOW isn't a view of what's in\n> postgresql.conf, it's telling the user what the current system state is.\n\nSHOW is telling the user the value that is configured in the current\nsession, which may or may not be the value configured in\npostgresql.conf, but is the value that was configured somewhere. For\nthe most part, it's not trying to tell the user what the current\nsystem state is, although we have a few weird exceptions that behave\notherwise.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 9 Oct 2019 09:57:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Oct 9, 2019 at 8:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > If anything, what we should be doing here is throwing a WARNING or\n> > similar that these settings don't make sense, because we aren't going\n> > through recovery, and blank them out. If we want to be really cute, we\n> > could have the show return something like: <not in recovery>, or\n> > similar, but just returning an invalid value because that's what was in\n> > the config file is nonsense. SHOW isn't a view of what's in\n> > postgresql.conf, it's telling the user what the current system state is.\n> \n> SHOW is telling the user the value that is configured in the current\n> session, which may or may not be the value configured in\n> postgresql.conf, but is the value that was configured somewhere. For\n> the most part, it's not trying to tell the user what the current\n> system state is, although we have a few weird exceptions that behave\n> otherwise.\n\nI don't understand this argument and so it seems pretty likely that\nwe're just not going to agree here. The idea that the GUC system isn't\nsomething that someone can depend on to find out what the current state\nof a variable is is just utter nonsense to me. Yes, we have exceptions\nto that, and I don't think they're good ones, but the argument that SHOW\ndoesn't actually return what the state is and instead returns \"this is\nwhat this variable was configured to at some point\" does not make any\nsense to me.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 9 Oct 2019 10:38:03 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On 2019-09-30 03:48, Fujii Masao wrote:\n> Also we need to do the same thing for other recovery options like\n> restore_command. Attached is the patch which makes crash recovery\n> ignore restore_command and recovery_end_command.\n\nThis patch looks correct to me.\n\nDo we need to handle archive_cleanup_command? Perhaps not.\n\nA check in recoveryApplyDelay() might be necessary.\n\nThat should cover everything then.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 9 Oct 2019 22:52:30 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
},
{
"msg_contents": "On Thu, Oct 10, 2019 at 5:52 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-09-30 03:48, Fujii Masao wrote:\n> > Also we need to do the same thing for other recovery options like\n> > restore_command. Attached is the patch which makes crash recovery\n> > ignore restore_command and recovery_end_command.\n>\n> This patch looks correct to me.\n\nThanks for the review! I committed the patch.\n\n> Do we need to handle archive_cleanup_command? Perhaps not.\n\nNo, because archive_command is basically executed by checkpointer\nand this process cannot be invoked in crash recovery case.\n\n> A check in recoveryApplyDelay() might be necessary.\n\nYes! We are discussing this issue at\nhttps://www.postgresql.org/message-id/CAHGQGwEyD6HdZLfdWc+95g=VQFPR4zQL4n+yHxQgGEGjaSVheQ@mail.gmail.com\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 11 Oct 2019 16:48:22 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby accepts recovery_target_timeline setting?"
}
] |
[
{
"msg_contents": "While rechecking another patch, I found that 709d003fbd forgot to\nedit a comment mentioning three members removed from\nXLogReaderState.\n\nSee the attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 26 Sep 2019 11:08:09 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "A comment fix in xlogreader.c"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 11:08:09AM +0900, Kyotaro Horiguchi wrote:\n> While rechecking another patch, I found that 709d003fbd forgot to\n> edit a comment mentioning three members removed from\n> XLogReaderState.\n>\n@@ -103,8 +103,7 @@ XLogReaderAllocate(int wal_segment_size, const char *waldir,\n state->read_page = pagereadfunc;\n /* system_identifier initialized to zeroes above */\n state->private_data = private_data;\n- /* ReadRecPtr and EndRecPtr initialized to zeroes above */\n- /* readSegNo, readOff, readLen, readPageTLI initialized to zeroes above */\n+ /* ReadRecPtr, EndRecPtr and readLen initialized to zeroes above */\n state->errormsg_buf = palloc_extended(MAX_ERRORMSG_LEN + 1,\n MCXT_ALLOC_NO_OOM);\n if (!state->errormsg_buf)\n\nI see. readSegNo and readOff have been moved to WALOpenSegment and\nreplaced by new, equivalent fields, still all those three fields are\nstill initialized for the palloc_extended() call to allocate\nXLogReaderState, while the two others are now part of\nWALOpenSegmentInit(). Your change is correct, so applied.\n--\nMichael",
"msg_date": "Thu, 26 Sep 2019 11:57:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: A comment fix in xlogreader.c"
},
{
"msg_contents": "At Thu, 26 Sep 2019 11:57:59 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190926025759.GB2115@paquier.xyz>\n> On Thu, Sep 26, 2019 at 11:08:09AM +0900, Kyotaro Horiguchi wrote:\n> > While rechecking another patch, I found that 709d003fbd forgot to\n> > edit a comment mentioning three members removed from\n> > XLogReaderState.\n> >\n> @@ -103,8 +103,7 @@ XLogReaderAllocate(int wal_segment_size, const char *waldir,\n> state->read_page = pagereadfunc;\n> /* system_identifier initialized to zeroes above */\n> state->private_data = private_data;\n> - /* ReadRecPtr and EndRecPtr initialized to zeroes above */\n> - /* readSegNo, readOff, readLen, readPageTLI initialized to zeroes above */\n> + /* ReadRecPtr, EndRecPtr and readLen initialized to zeroes above */\n> state->errormsg_buf = palloc_extended(MAX_ERRORMSG_LEN + 1,\n> MCXT_ALLOC_NO_OOM);\n> if (!state->errormsg_buf)\n> \n> I see. readSegNo and readOff have been moved to WALOpenSegment and\n> replaced by new, equivalent fields, still all those three fields are\n> still initialized for the palloc_extended() call to allocate\n> XLogReaderState, while the two others are now part of\n> WALOpenSegmentInit(). Your change is correct, so applied.\n\nExactly. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 01 Oct 2019 12:24:21 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A comment fix in xlogreader.c"
}
] |
[
{
"msg_contents": "I have started to learn postgresql. While going through the WAL dump saw\nthe below records,\n\nrmgr: Heap2 len (rec/tot): 60/ 60, tx: 0, lsn:\n4F/CFF1F0F8, prev 4F/CFF1EB70, desc: CLEAN remxid 0, blkref #0: rel\n1663/16385/1259 blk 1301\n\nrmgr: Heap2 len (rec/tot): 60/ 60, tx: 0, lsn:\n4F/CF26F808, prev 4F/CF26F7C0, desc: CLEAN remxid 0, blkref #0: rel\n1663/16385/3312732 blk 2634\n\nrmgr: Heap2 len (rec/tot): 56/ 56, tx: 79718109, lsn:\n4F/CF96D970, prev 4F/CF96D930, desc: CLEAN remxid 0, blkref #0: rel\n1663/16385/209012 blk 6621\n\n\nWanted to know more about these,\n\n 1. Are \"CLEAN\" vacuum records ? If not, what are those ?\n 2. Why is the transaction id \"0\" ?\n 3. What does these records do ?\n\nPostgresql version : 10.6\n\nI have started to learn postgresql. While going through the WAL dump saw the below records, rmgr: Heap2 len (rec/tot): 60/ 60, tx: 0, \nlsn: 4F/CFF1F0F8, prev 4F/CFF1EB70, desc: CLEAN remxid 0, blkref #0: rel\n 1663/16385/1259 blk 1301rmgr: Heap2 \n len (rec/tot): 60/ 60, tx: 0, lsn: 4F/CF26F808, prev \n4F/CF26F7C0, desc: CLEAN remxid 0, blkref #0: rel 1663/16385/3312732 blk\n 2634rmgr: Heap2 len (rec/tot): \n56/ 56, tx: 79718109, lsn: 4F/CF96D970, prev 4F/CF96D930, desc: \nCLEAN remxid 0, blkref #0: rel 1663/16385/209012 blk 6621Wanted to know more about these, 1. Are \"CLEAN\" vacuum records ? If not, what are those ? 2. Why is the transaction id \"0\" ? 3. What does these records do ?Postgresql version : 10.6",
"msg_date": "Thu, 26 Sep 2019 11:14:53 +0530",
"msg_from": "Looserof7 <looserof7@gmail.com>",
"msg_from_op": true,
"msg_subject": "WAL records"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 11:14:53AM +0530, Looserof7 wrote:\n>I have started to learn postgresql. While going through the WAL dump saw\n>the below records,\n>\n>rmgr: Heap2 len (rec/tot): 60/ 60, tx: 0, lsn:\n>4F/CFF1F0F8, prev 4F/CFF1EB70, desc: CLEAN remxid 0, blkref #0: rel\n>1663/16385/1259 blk 1301\n>\n>rmgr: Heap2 len (rec/tot): 60/ 60, tx: 0, lsn:\n>4F/CF26F808, prev 4F/CF26F7C0, desc: CLEAN remxid 0, blkref #0: rel\n>1663/16385/3312732 blk 2634\n>\n>rmgr: Heap2 len (rec/tot): 56/ 56, tx: 79718109, lsn:\n>4F/CF96D970, prev 4F/CF96D930, desc: CLEAN remxid 0, blkref #0: rel\n>1663/16385/209012 blk 6621\n>\n>\n>Wanted to know more about these,\n>\n> 1. Are \"CLEAN\" vacuum records ? If not, what are those ?\n> 2. Why is the transaction id \"0\" ?\n> 3. What does these records do ?\n>\n>Postgresql version : 10.6\n\nIt's an information for the replica (hot-standby) with information about\nrecent data removed from the table by vacuum, so that the standby may\nabort user queries that'd need the data etc. See heap_xlog_clean()\nfunction in heapam.c.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Sep 2019 20:47:39 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL records"
}
] |
[
{
"msg_contents": "Hi,\n\nI have just started to read the PostgreSQL code and found a lack of\ncomments for a postgres backend program in bootstrap mode.\nWhen I saw the --boot option implemented in src/backend/main/main.c at\nfirst time, I did not understand why the --boot option is not documented\nand what it is used for.\nThe only way to know these things is to type `grep -r '\\--boot' .` on a\nproject root.\nIt is easy to see that the --boot option is used in initdb for some\nhistorical reasons, but it is painful for a beginner like me.\nI believe the attached patch which adds a few comments might help a\nbeginner.\n\nMany thanks,\n\n-- \nYouki Shiraishi\nNTT Software Innovation Center\nPhone: +81-(0)3-5860-5115\nEmail: shiraishi@computer.org",
"msg_date": "Thu, 26 Sep 2019 15:06:37 +0900",
"msg_from": "Youki Shiraishi <shiraishi@computer.org>",
"msg_from_op": true,
"msg_subject": "Add comments for a postgres program in bootstrap mode"
},
{
"msg_contents": "Hi Shiraishi-san,\n\nOn Thu, Sep 26, 2019 at 3:06 PM Youki Shiraishi <shiraishi@computer.org> wrote:\n>\n> Hi,\n>\n> I have just started to read the PostgreSQL code and found a lack of comments for a postgres backend program in bootstrap mode.\n> When I saw the --boot option implemented in src/backend/main/main.c at first time, I did not understand why the --boot option is not documented and what it is used for.\n> The only way to know these things is to type `grep -r '\\--boot' .` on a project root.\n> It is easy to see that the --boot option is used in initdb for some historical reasons, but it is painful for a beginner like me.\n> I believe the attached patch which adds a few comments might help a beginner.\n\nThanks for the patch. It might be a good idea to demystify this\nsecret --boot option.\n\n+ /* Bootstrap mode for initdb */\n if (argc > 1 && strcmp(argv[1], \"--boot\") == 0)\n AuxiliaryProcessMain(argc, argv); /* does not return */\n else if (argc > 1 && strcmp(argv[1], \"--describe-config\") == 0)\n\nHow about expanding that comment just a little bit, say:\n\n /*\n * Bootstrapping is handled by AuxiliaryProcessMain() for historic\n * reasons.\n */\n\n@@ -190,7 +190,8 @@ static IndexList *ILHead = NULL;\n * AuxiliaryProcessMain\n *\n * The main entry point for auxiliary processes, such as the bgwriter,\n- * walwriter, walreceiver, bootstrapper and the shared memory checker code.\n+ * walwriter, walreceiver, postgres program in bootstrap mode and the\n+ * shared memory checker code.\n\nThis change may not be necessary, because, bootstrapper is a good\nshort name for 'postgres program in bootstrap mode'. Also, this name\nis similar in style to the names of other auxiliary processes.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 26 Sep 2019 17:37:43 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add comments for a postgres program in bootstrap mode"
},
{
"msg_contents": "Hello Amit,\n\nOn Thu, Sep 26, 2019 at 5:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Shiraishi-san,\n>\n> On Thu, Sep 26, 2019 at 3:06 PM Youki Shiraishi <shiraishi@computer.org> wrote:\n> >\n> > Hi,\n> >\n> > I have just started to read the PostgreSQL code and found a lack of comments for a postgres backend program in bootstrap mode.\n> > When I saw the --boot option implemented in src/backend/main/main.c at first time, I did not understand why the --boot option is not documented and what it is used for.\n> > The only way to know these things is to type `grep -r '\\--boot' .` on a project root.\n> > It is easy to see that the --boot option is used in initdb for some historical reasons, but it is painful for a beginner like me.\n> > I believe the attached patch which adds a few comments might help a beginner.\n>\n> Thanks for the patch. It might be a good idea to demystify this\n> secret --boot option.\n>\n> + /* Bootstrap mode for initdb */\n> if (argc > 1 && strcmp(argv[1], \"--boot\") == 0)\n> AuxiliaryProcessMain(argc, argv); /* does not return */\n> else if (argc > 1 && strcmp(argv[1], \"--describe-config\") == 0)\n>\n> How about expanding that comment just a little bit, say:\n>\n> /*\n> * Bootstrapping is handled by AuxiliaryProcessMain() for historic\n> * reasons.\n> */\n>\n> @@ -190,7 +190,8 @@ static IndexList *ILHead = NULL;\n> * AuxiliaryProcessMain\n> *\n> * The main entry point for auxiliary processes, such as the bgwriter,\n> - * walwriter, walreceiver, bootstrapper and the shared memory checker code.\n> + * walwriter, walreceiver, postgres program in bootstrap mode and the\n> + * shared memory checker code.\n>\n> This change may not be necessary, because, bootstrapper is a good\n> short name for 'postgres program in bootstrap mode'. 
Also, this name\n> is similar in style to the names of other auxiliary processes.\n\nThank you for reviewing my patch.\nMy concern is that the word 'bootstrapper' is ambiguous.\nIf the word is obvious to hackers, please use the v2 patch attached to\nthis email.\n\nThanks,\n\n--\nYouki Shiraishi\nNTT Software Innovation Center\nPhone: +81-(0)3-5860-5115\nEmail: shiraishi@computer.org",
"msg_date": "Thu, 26 Sep 2019 18:32:18 +0900",
"msg_from": "Youki Shiraishi <shiraishi@computer.org>",
"msg_from_op": true,
"msg_subject": "Re: Add comments for a postgres program in bootstrap mode"
},
{
"msg_contents": "Hi Shiraishi-san,\n\nOn Thu, Sep 26, 2019 at 6:32 PM Youki Shiraishi <shiraishi@computer.org> wrote:\n> On Thu, Sep 26, 2019 at 5:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Sep 26, 2019 at 3:06 PM Youki Shiraishi <shiraishi@computer.org> wrote:\n> > > I have just started to read the PostgreSQL code and found a lack of comments for a postgres backend program in bootstrap mode.\n> > > When I saw the --boot option implemented in src/backend/main/main.c at first time, I did not understand why the --boot option is not documented and what it is used for.\n> > > The only way to know these things is to type `grep -r '\\--boot' .` on a project root.\n> > > It is easy to see that the --boot option is used in initdb for some historical reasons, but it is painful for a beginner like me.\n> > > I believe the attached patch which adds a few comments might help a beginner.\n> >\n> > Thanks for the patch. It might be a good idea to demystify this\n> > secret --boot option.\n> >\n> > + /* Bootstrap mode for initdb */\n> > if (argc > 1 && strcmp(argv[1], \"--boot\") == 0)\n> > AuxiliaryProcessMain(argc, argv); /* does not return */\n> > else if (argc > 1 && strcmp(argv[1], \"--describe-config\") == 0)\n> >\n> > How about expanding that comment just a little bit, say:\n> >\n> > /*\n> > * Bootstrapping is handled by AuxiliaryProcessMain() for historic\n> > * reasons.\n> > */\n\nDo you have any thoughts on this suggestion?\n\n> > @@ -190,7 +190,8 @@ static IndexList *ILHead = NULL;\n> > * AuxiliaryProcessMain\n> > *\n> > * The main entry point for auxiliary processes, such as the bgwriter,\n> > - * walwriter, walreceiver, bootstrapper and the shared memory checker code.\n> > + * walwriter, walreceiver, postgres program in bootstrap mode and the\n> > + * shared memory checker code.\n> >\n> > This change may not be necessary, because, bootstrapper is a good\n> > short name for 'postgres program in bootstrap mode'. Also, this name\n> > is similar in style to the names of other auxiliary processes.\n>\n> Thank you for reviewing my patch.\n> My concern is that the word 'bootstrapper' is ambiguous.\n\nI was saying that 'bootstrapper' sounds like 'bgwriter', 'walwriter',\netc., so fits well in that sentence. It would've been OK if those\nthings were also written as 'postgres program that does background\nbuffer writing', etc.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 27 Sep 2019 00:09:45 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add comments for a postgres program in bootstrap mode"
},
{
"msg_contents": "Youki Shiraishi <shiraishi@computer.org> writes:\n> On Thu, Sep 26, 2019 at 5:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> * The main entry point for auxiliary processes, such as the bgwriter,\n>> - * walwriter, walreceiver, bootstrapper and the shared memory checker code.\n>> + * walwriter, walreceiver, postgres program in bootstrap mode and the\n>> + * shared memory checker code.\n>> \n>> This change may not be necessary, because, bootstrapper is a good\n>> short name for 'postgres program in bootstrap mode'. Also, this name\n>> is similar in style to the names of other auxiliary processes.\n\n> Thank you for reviewing my patch.\n> My concern is that the word 'bootstrapper' is ambiguous.\n> If the word is obvious to hackers, please use the v2 patch attached to\n> this email.\n\nA quick grep through the sources finds that \"bootstrapper\" is used\nin exactly two places (here, and one comment in initdb.c). I don't\nthink it's accepted jargon at all, and would vote to get rid of it.\nWhat I think *is* the usual phrasing is \"bootstrap-mode backend\"\nor variants of that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Sep 2019 11:22:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add comments for a postgres program in bootstrap mode"
},
{
"msg_contents": "Hello Tom,\n\nOn Fri, Sep 27, 2019 at 12:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Youki Shiraishi <shiraishi@computer.org> writes:\n> > On Thu, Sep 26, 2019 at 5:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> * The main entry point for auxiliary processes, such as the bgwriter,\n> >> - * walwriter, walreceiver, bootstrapper and the shared memory checker code.\n> >> + * walwriter, walreceiver, postgres program in bootstrap mode and the\n> >> + * shared memory checker code.\n> >>\n> >> This change may not be necessary, because, bootstrapper is a good\n> >> short name for 'postgres program in bootstrap mode'. Also, this name\n> >> is similar in style to the names of other auxiliary processes.\n>\n> > Thank you for reviewing my patch.\n> > My concern is that the word 'bootstrapper' is ambiguous.\n> > If the word is obvious to hackers, please use the v2 patch attached to\n> > this email.\n>\n> A quick grep through the sources finds that \"bootstrapper\" is used\n> in exactly two places (here, and one comment in initdb.c). I don't\n> think it's accepted jargon at all, and would vote to get rid of it.\n> What I think *is* the usual phrasing is \"bootstrap-mode backend\"\n> or variants of that.\n\nI also vote to get rid of such ambiguous stuff.\nAs you can see by grepping, \"bootstrap-mode backend\" (and something\nlike that) is also referred to in the sources as:\n\n- bootstrap backend\n- (basic) bootstrap process\n- backend running in bootstrap mode\n- postgres (backend) program in bootstrap mode\n- bootstrapper\n\nI think \"bootstrap backend\" is a strong candidate as an alternative\nto \"bootstrapper\" because it is used in the official documentation of\ninitdb.\n--\nYouki Shiraishi\nNTT Software Innovation Center\nPhone: +81-(0)3-5860-5115\nEmail: shiraishi@computer.org\n\n\n",
"msg_date": "Fri, 27 Sep 2019 12:29:08 +0900",
"msg_from": "Youki Shiraishi <shiraishi@computer.org>",
"msg_from_op": true,
"msg_subject": "Re: Add comments for a postgres program in bootstrap mode"
},
{
"msg_contents": "Hi Amit,\n\nOn Fri, Sep 27, 2019 at 12:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Shiraishi-san,\n>\n> On Thu, Sep 26, 2019 at 6:32 PM Youki Shiraishi <shiraishi@computer.org> wrote:\n> > On Thu, Sep 26, 2019 at 5:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Thu, Sep 26, 2019 at 3:06 PM Youki Shiraishi <shiraishi@computer.org> wrote:\n> > > > I have just started to read the PostgreSQL code and found a lack of comments for a postgres backend program in bootstrap mode.\n> > > > When I saw the --boot option implemented in src/backend/main/main.c at first time, I did not understand why the --boot option is not documented and what it is used for.\n> > > > The only way to know these things is to type `grep -r '\\--boot' .` on a project root.\n> > > > It is easy to see that the --boot option is used in initdb for some historical reasons, but it is painful for a beginner like me.\n> > > > I believe the attached patch which adds a few comments might help a beginner.\n> > >\n> > > Thanks for the patch. It might be a good idea to demystify this\n> > > secret --boot option.\n> > >\n> > > + /* Bootstrap mode for initdb */\n> > > if (argc > 1 && strcmp(argv[1], \"--boot\") == 0)\n> > > AuxiliaryProcessMain(argc, argv); /* does not return */\n> > > else if (argc > 1 && strcmp(argv[1], \"--describe-config\") == 0)\n> > >\n> > > How about expanding that comment just a little bit, say:\n> > >\n> > > /*\n> > > * Bootstrapping is handled by AuxiliaryProcessMain() for historic\n> > > * reasons.\n> > > */\n>\n> Do you any thoughts on this suggestion?\n\nSorry, I missed your suggestion.\nThe purpose of the comment here is to direct hackers to initdb.c because\nthe --boot option is used only by initdb.\ninitdb.c describes why it uses the --boot option (i.e., historical\nreason), so I think it should not be described in main.c.\n\nRegards,\n\n-- \nYouki Shiraishi\nNTT Software Innovation Center\nPhone: +81-(0)3-5860-5115\nEmail: shiraishi@computer.org\n\n\n",
"msg_date": "Fri, 27 Sep 2019 12:51:55 +0900",
"msg_from": "Youki Shiraishi <shiraishi@computer.org>",
"msg_from_op": true,
"msg_subject": "Re: Add comments for a postgres program in bootstrap mode"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 12:29:08PM +0900, Youki Shiraishi wrote:\n> I also vote to get rid of such ambiguous stuff.\n> As you can see by grepping, \"bootstrap-mode backend\" (and something\n> like that) is also called in the sources as:\n> \n> - bootstrap backend\n> - (basic) bootstrap process\n> - backend running in bootstrap mode\n> - postgres (backend) program in bootstrap mode\n> - bootstrapper\n> \n> I think \"bootstrap backend\" is a strong candidate for an alternative\n> of \"bootstrapper\" because it is used in the official documentation of\n> initdb.\n\nIt seems to me that \"backend running in bootstrap mode\" would be the\nmost consistent way to define that state in a backend process:\n$ git grep -i \"bootstrap mode backend\" | wc -l\n0\n$ git grep -i \"bootstrap-mode\" | wc -l\n0\n$ git grep -i \"bootstrap mode\" | wc -l\n68\n$ git grep -i \"bootstrap process\" | wc -l\n9\n\n\"bootstrapper\" sounds weird. My 2c.\n--\nMichael",
"msg_date": "Fri, 27 Sep 2019 13:03:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add comments for a postgres program in bootstrap mode"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 1:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 27, 2019 at 12:29:08PM +0900, Youki Shiraishi wrote:\n> > I also vote to get rid of such ambiguous stuff.\n> > As you can see by grepping, \"bootstrap-mode backend\" (and something\n> > like that) is also called in the sources as:\n> >\n> > - bootstrap backend\n> > - (basic) bootstrap process\n> > - backend running in bootstrap mode\n> > - postgres (backend) program in bootstrap mode\n> > - bootstrapper\n> >\n> > I think \"bootstrap backend\" is a strong candidate for an alternative\n> > of \"bootstrapper\" because it is used in the official documentation of\n> > initdb.\n>\n> It seems to me that \"backend running in bootstrap mode\" would be the\n> most consistent way to define that state in a backend process:\n> $ git grep -i \"bootstrap mode backend\" | wc -l\n> 0\n> $ git grep -i \"bootstrap-mode\" | wc -l\n> 0\n> $ git grep -i \"bootstrap mode\" | wc -l\n> 68\n> $ git grep -i \"bootstrap process\" | wc -l\n> 9\n>\n> \"bootstrapper\" sounds weird. My 2c.\n\nAlright, count me in on the \"bootstrap mode backend\" side too. :)\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 27 Sep 2019 13:12:24 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add comments for a postgres program in bootstrap mode"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 12:52 PM Youki Shiraishi <shiraishi@computer.org> wrote:\n> On Fri, Sep 27, 2019 at 12:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Sep 26, 2019 at 6:32 PM Youki Shiraishi <shiraishi@computer.org> wrote:\n> > > On Thu, Sep 26, 2019 at 5:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > + /* Bootstrap mode for initdb */\n> > > > if (argc > 1 && strcmp(argv[1], \"--boot\") == 0)\n> > > > AuxiliaryProcessMain(argc, argv); /* does not return */\n> > > > else if (argc > 1 && strcmp(argv[1], \"--describe-config\") == 0)\n> > > >\n> > > > How about expanding that comment just a little bit, say:\n> > > >\n> > > > /*\n> > > > * Bootstrapping is handled by AuxiliaryProcessMain() for historic\n> > > > * reasons.\n> > > > */\n> >\n> > Do you any thoughts on this suggestion?\n>\n> Sorry, I missed your suggestion.\n> The purpose of a comment here is to direct hackers to initdb.c because\n> the --boot option is used only by initdb.\n> initdb.c describes why it uses the --boot option (i.e., historical\n> reason), so I think it should not be described in main.c.\n\nSorry, I didn't really mean to take out the \"for initdb\" part. So, I\nshould've suggested this\n\n /*\n * Bootstrap mode for initdb. Bootstrapping is handled by\n * AuxiliaryProcessMain() for historical reasons.\n */\n\nIMO, it would be good for this comment to say why\nAuxiliaryProcessMain() is invoked here instead of, say,\nPostgresMain(). \"for historical reasons\" may not be enough but maybe\nit is.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 27 Sep 2019 13:59:24 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add comments for a postgres program in bootstrap mode"
},
{
"msg_contents": "Hello folks,\n\nOn Fri, Sep 27, 2019 at 1:59 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Sep 27, 2019 at 12:52 PM Youki Shiraishi <shiraishi@computer.org> wrote:\n> > On Fri, Sep 27, 2019 at 12:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Thu, Sep 26, 2019 at 6:32 PM Youki Shiraishi <shiraishi@computer.org> wrote:\n> > > > On Thu, Sep 26, 2019 at 5:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > > + /* Bootstrap mode for initdb */\n> > > > > if (argc > 1 && strcmp(argv[1], \"--boot\") == 0)\n> > > > > AuxiliaryProcessMain(argc, argv); /* does not return */\n> > > > > else if (argc > 1 && strcmp(argv[1], \"--describe-config\") == 0)\n> > > > >\n> > > > > How about expanding that comment just a little bit, say:\n> > > > >\n> > > > > /*\n> > > > > * Bootstrapping is handled by AuxiliaryProcessMain() for historic\n> > > > > * reasons.\n> > > > > */\n> > >\n> > > Do you any thoughts on this suggestion?\n> >\n> > Sorry, I missed your suggestion.\n> > The purpose of a comment here is to direct hackers to initdb.c because\n> > the --boot option is used only by initdb.\n> > initdb.c describes why it uses the --boot option (i.e., historical\n> > reason), so I think it should not be described in main.c.\n>\n> Sorry, I didn't really mean to take out the \"for initdb\" part. So, I\n> should've suggested this\n>\n> /*\n> * Bootstrap mode for initdb. Bootstrapping is handled by\n> * AuxiliaryProcessMain() for historical reasons.\n> */\n>\n> IMO, it would be good for this comment to say why\n> AuxiliaryProcessMain() is invoked here instead of, say,\n> PostgresMain(). \"for historical reasons\" may not be enough but maybe\n> it is.\n\nI totally agree with that.\nIt might also help to mention that AuxiliaryProcessMain() is a special\nentry point compared to the others.\n\nAccording to the discussion so far, I revised my patch as attached.\nI believe the patch might help beginners.\n\nMany thanks,\n\n-- \nYouki Shiraishi\nNTT Software Innovation Center\nPhone: +81-(0)3-5860-5115\nEmail: shiraishi@computer.org",
"msg_date": "Fri, 27 Sep 2019 14:37:19 +0900",
"msg_from": "Youki Shiraishi <shiraishi@computer.org>",
"msg_from_op": true,
"msg_subject": "Re: Add comments for a postgres program in bootstrap mode"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen we do archive recovery from a database cluster whose\ntimeline ID is more than 2, pg_wal/RECOVERYHISTORY remains even\nafter archive recovery has completed.\n\nThe cause of this seems to be cbc55da556b, which moved exitArchiveRecovery()\nto before writeTimeLineHistory(). writeTimeLineHistory() restores the\nhistory file from the archive directory and therefore creates a\nRECOVERYHISTORY file in the pg_wal directory. We used to remove such\ntemporary files in exitArchiveRecovery(), but with this commit the order\nof calling these functions is reversed. Therefore we create the\nRECOVERYHISTORY file after exiting archive recovery mode and it\nremains.\n\nTo fix it I think that we can remove the RECOVERYHISTORY file before the\nhistory file is archived in writeTimeLineHistory(). The commit\ncbc55da556b is intended to minimize the window between the moment the\nfile is written and the end-of-recovery record is generated. So I\nthink it's not good to put exitArchiveRecovery() after\nwriteTimeLineHistory().\n\nThis issue seems to exist in all supported versions as far as I read\nthe code, although I haven't tested all of them yet.\n\nI've attached a draft patch to fix this issue. A regression test might\nbe required. Feedback and suggestions are very welcome.\n\nRegards,\n\n--\nMasahiko Sawada",
"msg_date": "Thu, 26 Sep 2019 10:14:44 +0200",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 5:15 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> When we do archive recovery from the database cluster of which\n> timeline ID is more than 2 pg_wal/RECOVERYHISTORY is remained even\n> after archive recovery completed.\n>\n> The cause of this seems cbc55da556b that moved exitArchiveRecovery()\n> to before writeTimeLineHistory(). writeTimeLineHIstory() restores the\n> history file from archive directory and therefore creates\n> RECOVERYHISTORY file in pg_wal directory. We used to remove such\n> temporary file by exitArchiveRecovery() but with this commit the order\n> of calling these functions is reversed. Therefore we create\n> RECOVERYHISTORY file after exited from archive recovery mode and\n> remain it.\n>\n> To fix it I think that we can remove RECOVERYHISTORY file before the\n> history file is archived in writeTimeLineHIstory(). The commit\n> cbc55da556b is intended to minimize the window between the moment the\n> file is written and the end-of-recovery record is generated. So I\n> think it's not good to put exitArchiveRecovery() after\n> writeTimeLineHIstory().\n>\n> This issue seems to exist in all supported version as far as I read\n> the code, although I don't test all of them yet.\n>\n> I've attached the draft patch to fix this issue. Regression test might\n> be required. Feedback and suggestion are very welcome.\n\nWhat about moving the logic that removes RECOVERYXLOG and\nRECOVERYHISTORY from exitArchiveRecovery() and performing it\njust before/after RemoveNonParentXlogFiles()? That looks simple.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 26 Sep 2019 18:23:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 6:23 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Thu, Sep 26, 2019 at 5:15 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > When we do archive recovery from the database cluster of which\n> > timeline ID is more than 2 pg_wal/RECOVERYHISTORY is remained even\n> > after archive recovery completed.\n> >\n> > The cause of this seems cbc55da556b that moved exitArchiveRecovery()\n> > to before writeTimeLineHistory(). writeTimeLineHIstory() restores the\n> > history file from archive directory and therefore creates\n> > RECOVERYHISTORY file in pg_wal directory. We used to remove such\n> > temporary file by exitArchiveRecovery() but with this commit the order\n> > of calling these functions is reversed. Therefore we create\n> > RECOVERYHISTORY file after exited from archive recovery mode and\n> > remain it.\n> >\n> > To fix it I think that we can remove RECOVERYHISTORY file before the\n> > history file is archived in writeTimeLineHIstory(). The commit\n> > cbc55da556b is intended to minimize the window between the moment the\n> > file is written and the end-of-recovery record is generated. So I\n> > think it's not good to put exitArchiveRecovery() after\n> > writeTimeLineHIstory().\n> >\n> > This issue seems to exist in all supported version as far as I read\n> > the code, although I don't test all of them yet.\n> >\n> > I've attached the draft patch to fix this issue. Regression test might\n> > be required. Feedback and suggestion are very welcome.\n>\n> What about moving the logic that removes RECO VERYXLOG and\n> RECOVERYHISTORY from exitArchiveRecovery() and performing it\n> just before/after RemoveNonParentXlogFiles()? Which looks simple.\n>\n\nAgreed. Attached the updated patch.\n\nRegards,\n\n--\nMasahiko Sawada",
"msg_date": "Fri, 27 Sep 2019 13:51:25 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 01:51:25PM +0900, Masahiko Sawada wrote:\n> On Thu, Sep 26, 2019 at 6:23 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>> What about moving the logic that removes RECO VERYXLOG and\n>> RECOVERYHISTORY from exitArchiveRecovery() and performing it\n>> just before/after RemoveNonParentXlogFiles()? Which looks simple.\n>\n> Agreed. Attached the updated patch.\n\nMea culpa here, I have just noticed the thread. Fujii-san, would you\nprefer if I take care of it? And also, what's the issue with not\ndoing the removal of both files just after writeTimeLineHistory()?\nThat's actually what happened before cbc55da5.\n--\nMichael",
"msg_date": "Fri, 27 Sep 2019 17:09:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 5:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 27, 2019 at 01:51:25PM +0900, Masahiko Sawada wrote:\n> > On Thu, Sep 26, 2019 at 6:23 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> >> What about moving the logic that removes RECO VERYXLOG and\n> >> RECOVERYHISTORY from exitArchiveRecovery() and performing it\n> >> just before/after RemoveNonParentXlogFiles()? Which looks simple.\n> >\n> > Agreed. Attached the updated patch.\n>\n> Mea culpa here, I have just noticed the thread. Fujii-san, would you\n> prefer if I take care of it? And also, what's the issue with not\n> doing the removal of both files just after writeTimeLineHistory()?\n> That's actually what happened before cbc55da5.\n\nSo you think that it's better to remove them just after writeTimeLineHistory()?\nPer Sawada-san's comment, I was thinking that idea is basically\nnot good. And, RemoveNonParentXlogFiles() also removes garbage files from\npg_wal. It's simpler if similar code exists nearby. Thoughts?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 27 Sep 2019 17:58:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 05:58:21PM +0900, Fujii Masao wrote:\n> So you think that it's better to remove them just after writeTimeLineHistory()?\n> Per the following Sawada-san's comment, I was thinking that idea is basically\n> not good. And, RemoveNonParentXlogFiles() also removes garbage files from\n> pg_wal. It's simpler if similar codes exist near. Thought?\n\nSawada-san's argument upthread is that it is not good to put\nexitArchiveRecovery() after writeTimeLineHistory(), which is what\ncbc55da has done per the reasons mentioned in the commit log, and we\nshould not change that.\n\nMy argument is we know that RECOVERYXLOG and RECOVERYHISTORY are not\nneeded anymore at this stage of recovery, hence we had better remove\nthem as soon as possible. I am not convinced that it is a good idea\nto move the cleanup close to RemoveNonParentXlogFiles(). First, this\nis an entirely different part of the logic where the startup process\nhas already switched to a new timeline. Second, we add more steps\nbetween the moment the two files are not needed and the moment they\nare removed, so any failure in-between would cause those files to\nstill be there (we cannot say either that we will not manipulate this\ncode later on) and we don't want those files to lie around. So,\nmentioning that we do the cleanup just after writeTimeLineHistory()\nbecause we don't need them anymore is more consistent with what has\nbeen done for ages for the end of archive recovery, something that\ncbc55da unfortunately broke.\n--\nMichael",
"msg_date": "Fri, 27 Sep 2019 19:15:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 7:16 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 27, 2019 at 05:58:21PM +0900, Fujii Masao wrote:\n> > So you think that it's better to remove them just after writeTimeLineHistory()?\n> > Per the following Sawada-san's comment, I was thinking that idea is basically\n> > not good. And, RemoveNonParentXlogFiles() also removes garbage files from\n> > pg_wal. It's simpler if similar codes exist near. Thought?\n>\n> Sawada-san's argument of upthread is that it is not good to put\n> exitArchiveRecovery() after writeTimeLineHIstory(), which is what\n> cbc55da has done per the reasons mentioned in the commit log, and we\n> should not change that.\n>\n> My argument is we know that RECOVERYXLOG and RECOVERYHISTORY are not\n> needed anymore at this stage of recovery, hence we had better remove\n> them as soon as possible. I am not convinced that it is a good idea\n> to move the cleanup close to RemoveNonParentXlogFiles(). First, this\n> is an entirely different part of the logic where the startup process\n> has already switched to a new timeline. Second, we add more steps\n> between the moment the two files are not needed and the moment they\n> are removed, so any failure in-between would cause those files to\n> still be there (we cannot say either that we will not manipulate this\n> code later on) and we don't want those files to lie around. So,\n> mentioning that we do the cleanup just after writeTimeLineHIstory()\n> because we don't need them anymore is more consistent with what has\n> been done for ages for the end of archive recovery, something that\n> cbc55da unfortunately broke.\n\nOk, I have no objection to remove them just after writeTimeLineHistory().\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 27 Sep 2019 20:41:26 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 8:41 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Fri, Sep 27, 2019 at 7:16 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Sep 27, 2019 at 05:58:21PM +0900, Fujii Masao wrote:\n> > > So you think that it's better to remove them just after writeTimeLineHistory()?\n> > > Per the following Sawada-san's comment, I was thinking that idea is basically\n> > > not good. And, RemoveNonParentXlogFiles() also removes garbage files from\n> > > pg_wal. It's simpler if similar codes exist near. Thought?\n> >\n> > Sawada-san's argument of upthread is that it is not good to put\n> > exitArchiveRecovery() after writeTimeLineHIstory(), which is what\n> > cbc55da has done per the reasons mentioned in the commit log, and we\n> > should not change that.\n> >\n> > My argument is we know that RECOVERYXLOG and RECOVERYHISTORY are not\n> > needed anymore at this stage of recovery, hence we had better remove\n> > them as soon as possible. I am not convinced that it is a good idea\n> > to move the cleanup close to RemoveNonParentXlogFiles(). First, this\n> > is an entirely different part of the logic where the startup process\n> > has already switched to a new timeline. Second, we add more steps\n> > between the moment the two files are not needed and the moment they\n> > are removed, so any failure in-between would cause those files to\n> > still be there (we cannot say either that we will not manipulate this\n> > code later on) and we don't want those files to lie around. So,\n> > mentioning that we do the cleanup just after writeTimeLineHIstory()\n> > because we don't need them anymore is more consistent with what has\n> > been done for ages for the end of archive recovery, something that\n> > cbc55da unfortunately broke.\n>\n> Ok, I have no objection to remove them just after writeTimeLineHistory().\n>\n\nI had initially abandoned moving the removal code to between\nwriteTimeLineHistory() and the timeline switch because it expands the\nwindow, but since unlink itself completes within a very short time\nit should not be very problematic.\n\nAttached is the updated patch that just moves the removal code.\n\nRegards,\n\n--\nMasahiko Sawada",
"msg_date": "Fri, 27 Sep 2019 22:00:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 10:00:16PM +0900, Masahiko Sawada wrote:\n> I abandoned once to move the removal code to between\n> writeTimeLineHistory() and timeline switching because of expanding the\n> window but since unlink itself will complete within a very short time\n> it would not be problamatic much.\n> \n> Attached the updated patch that just moves the removal code.\n\nThat's not quite it, as you forgot to move the declaration of\nrecoveryPath so the patch fails to compile.\n\nAdding some tests would be nice, so I updated your patch to include\nsomething. One place where we recover files from archives is\n002_archiving.pl, still the files get renamed to the segment names\nwhen recovered so that's difficult to make that part 100%\ndeterministic yet. Still as a reminder of the properties behind those\nfiles it does not sound bad to document it in the test either, that's\ncheap, and we get the future covered.\n--\nMichael",
"msg_date": "Mon, 30 Sep 2019 10:10:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 10:10 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 27, 2019 at 10:00:16PM +0900, Masahiko Sawada wrote:\n> > I abandoned once to move the removal code to between\n> > writeTimeLineHistory() and timeline switching because of expanding the\n> > window but since unlink itself will complete within a very short time\n> > it would not be problamatic much.\n> >\n> > Attached the updated patch that just moves the removal code.\n>\n> That's not quite it, as you forgot to move the declaration of\n> recoveryPath so the patch fails to compile.\n\nOops, thanks.\n\n>\n> Adding some tests would be nice, so I updated your patch to include\n> something. One place where we recover files from archives is\n> 002_archiving.pl, still the files get renamed to the segment names\n> when recovered so that's difficult to make that part 100%\n> deterministic yet. Still as a reminder of the properties behind those\n> files it does not sound bad to document it in the test either, that's\n> cheap, and we get the future covered.\n\nThank you for updating the patch!\n\n+1 to add tests, but even the current postgres passes these tests\nfor two reasons: one is that $node_standby tries to restore\n00000001.history but fails and therefore RECOVERYHISTORY isn't\ncreated. The other is that, to reproduce this issue, the new\ntimeline ID of the recovered database needs to be more than 3.\n\n+isnt(\n+ -f \"$node_standby_data/pg_wal/RECOVERYHISTORY\",\n+ \"RECOVERYHISTORY removed after promotion\");\n+isnt(\n+ -f \"$node_standby_data/pg_wal/RECOVERYXLOG\",\n+ \"RECOVERYXLOG removed after promotion\");\n\nI think that the above checks are always true because the isnt() function\nchecks whether the 1st argument and the 2nd argument are not the same.\n\nI've attached an updated version of the patch including the tests. Please review it.\n\nRegards,\n\n--\nMasahiko Sawada",
"msg_date": "Mon, 30 Sep 2019 12:53:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 12:53:58PM +0900, Masahiko Sawada wrote:\n> I think that the above checks are always true because isnt() function\n> checks if the 1st argument and 2nd argument are not the same.\n\nDammit. I overlooked this part of the module's doc.\n\n> I've attached the updated version patch including the tests. Please\n> review it.\n\nThanks, your test makes it possible to reproduce the original problem, so that's\nnice. I don't have much to say, except some improvements to the\ncomments of the test as per the attached. What do you think?\n--\nMichael",
"msg_date": "Mon, 30 Sep 2019 17:03:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 5:03 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 30, 2019 at 12:53:58PM +0900, Masahiko Sawada wrote:\n> > I think that the above checks are always true because isnt() function\n> > checks if the 1st argument and 2nd argument are not the same.\n>\n> Dammit. I overlooked this part of the module's doc.\n>\n> > I've attached the updated version patch including the tests. Please\n> > review it.\n>\n> Thanks, your test allows to reproduce the original problem, so that's\n> nice. I don't have much to say, except some improvements to the\n> comments of the test as per the attached. What do you think?\n\nThank you for updating! The comment in your patch is much better.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n",
"msg_date": "Mon, 30 Sep 2019 17:07:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 05:07:08PM +0900, Masahiko Sawada wrote:\n> Thank you for updating! The comment in your patch is much better.\n\nThanks, done and back-patched down to 9.5.\n--\nMichael",
"msg_date": "Wed, 2 Oct 2019 15:58:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
},
{
"msg_contents": "On Wed, Oct 2, 2019 at 3:58 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 30, 2019 at 05:07:08PM +0900, Masahiko Sawada wrote:\n> > Thank you for updating! The comment in your patch is much better.\n>\n> Thanks, done and back-patched down to 9.5.\n\nThank you!\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n",
"msg_date": "Wed, 2 Oct 2019 16:01:38 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_wal/RECOVERYHISTORY file remains after archive recovery"
}
] |
[
{
"msg_contents": "Building the 12rc1 package on Ubuntu eoan/amd64, I got this\nregression diff:\n\n12:06:27 diff -U3 /<<PKGBUILDDIR>>/build/../src/test/regress/expected/select_parallel.out /<<PKGBUILDDIR>>/build/src/bin/pg_upgrade/tmp_check/regress/results/select_parallel.out\n12:06:27 --- /<<PKGBUILDDIR>>/build/../src/test/regress/expected/select_parallel.out\t2019-09-23 20:24:42.000000000 +0000\n12:06:27 +++ /<<PKGBUILDDIR>>/build/src/bin/pg_upgrade/tmp_check/regress/results/select_parallel.out\t2019-09-26 10:06:21.171683801 +0000\n12:06:27 @@ -21,8 +21,8 @@\n12:06:27 Workers Planned: 3\n12:06:27 -> Partial Aggregate\n12:06:27 -> Parallel Append\n12:06:27 - -> Parallel Seq Scan on d_star\n12:06:27 -> Parallel Seq Scan on f_star\n12:06:27 + -> Parallel Seq Scan on d_star\n12:06:27 -> Parallel Seq Scan on e_star\n12:06:27 -> Parallel Seq Scan on b_star\n12:06:27 -> Parallel Seq Scan on c_star\n12:06:27 @@ -75,8 +75,8 @@\n12:06:27 Workers Planned: 3\n12:06:27 -> Partial Aggregate\n12:06:27 -> Parallel Append\n12:06:27 - -> Seq Scan on d_star\n12:06:27 -> Seq Scan on f_star\n12:06:27 + -> Seq Scan on d_star\n12:06:27 -> Seq Scan on e_star\n12:06:27 -> Seq Scan on b_star\n12:06:27 -> Seq Scan on c_star\n12:06:27 @@ -103,7 +103,7 @@\n12:06:27 -----------------------------------------------------\n12:06:27 Finalize Aggregate\n12:06:27 -> Gather\n12:06:27 - Workers Planned: 1\n12:06:27 + Workers Planned: 3\n12:06:27 -> Partial Aggregate\n12:06:27 -> Append\n12:06:27 -> Parallel Seq Scan on a_star\n\nRetriggering the build worked, though.\n\nChristoph\n\n\n",
"msg_date": "Thu, 26 Sep 2019 13:04:26 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Unstable select_parallel regression output in 12rc1"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Building the 12rc1 package on Ubuntu eoan/amd64, I got this\n> regression diff:\n\nThe append-order differences have been seen before, per this thread:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKG%2B0CxrKRWRMf5ymN3gm%2BBECHna2B-q1w8onKBep4HasUw%40mail.gmail.com\n\nWe haven't seen it in quite some time in HEAD, though I fear that's\njust due to bad luck or change of timing of unrelated tests. I've\nbeen hoping to catch it in HEAD to validate the theory I posited in\n<22315.1563378828@sss.pgh.pa.us>, but your report doesn't help because\nthe additional checking queries aren't there in the v12 branch :-(\n\n> 12:06:27 @@ -103,7 +103,7 @@\n> 12:06:27 -----------------------------------------------------\n> 12:06:27 Finalize Aggregate\n> 12:06:27 -> Gather\n> 12:06:27 - Workers Planned: 1\n> 12:06:27 + Workers Planned: 3\n> 12:06:27 -> Partial Aggregate\n> 12:06:27 -> Append\n> 12:06:27 -> Parallel Seq Scan on a_star\n\nWe've also seen this on a semi-regular basis, and I've been intending\nto bitch about it, though it didn't seem very useful to do so as long\nas there were other instabilities in the regression tests. What we\ncould do, perhaps, is feed the plan output through a filter that\nsuppresses the exact number-of-workers value. There's precedent\nfor such plan-filtering elsewhere in the tests already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Sep 2019 11:12:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unstable select_parallel regression output in 12rc1"
},
{
"msg_contents": "Re: Tom Lane 2019-09-26 <12685.1569510771@sss.pgh.pa.us>\n> We haven't seen it in quite some time in HEAD, though I fear that's\n> just due to bad luck or change of timing of unrelated tests.\n\nThe v13 package builds that are running every 6h here haven't seen a\nproblem yet either, so the probability of triggering it seems very\nlow. So it's not a pressing problem. (There's some extension modules\nwhere the testsuite fails at a much higher rate, getting all targets\nto pass at the same time is next to impossible there :(. )\n\nChristoph\n\n\n",
"msg_date": "Fri, 27 Sep 2019 09:20:55 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Unstable select_parallel regression output in 12rc1"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Re: Tom Lane 2019-09-26 <12685.1569510771@sss.pgh.pa.us>\n>> We haven't seen it in quite some time in HEAD, though I fear that's\n>> just due to bad luck or change of timing of unrelated tests.\n\n> The v13 package builds that are running every 6h here haven't seen a\n> problem yet either, so the probability of triggering it seems very\n> low. So it's not a pressing problem.\n\nI've pushed some changes to try to ameliorate the issue.\n\n> (There's some extension modules\n> where the testsuite fails at a much higher rate, getting all targets\n> to pass at the same time is next to impossible there :(. )\n\nI feel your pain, believe me. Used to fight the same kind of problems\nwhen I was at Red Hat. Are any of those extension modules part of\nPostgres?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Sep 2019 13:36:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unstable select_parallel regression output in 12rc1"
},
{
"msg_contents": "Re: Tom Lane 2019-09-28 <24917.1569692191@sss.pgh.pa.us>\n> > (There's some extension modules\n> > where the testsuite fails at a much higher rate, getting all targets\n> > to pass at the same time is next to impossible there :(. )\n> \n> I feel your pain, believe me. Used to fight the same kind of problems\n> when I was at Red Hat. Are any of those extension modules part of\n> Postgres?\n\nNo, external ones. The main offenders at the moment are pglogical and\npatroni (admittedly not an extension in the strict sense). Both have\nextensive testsuites that exercise replication scenarios that are\nprone to race conditions. (Maybe we should just run less tests for the\npackaging.)\n\nChristoph\n\n\n",
"msg_date": "Sun, 29 Sep 2019 21:31:25 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Unstable select_parallel regression output in 12rc1"
}
] |
[
{
"msg_contents": "Every so often the partition_prune test falls over, for example\nhere, here, and here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2019-08-15%2021%3A45%3A00\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-08-21%2022%3A19%3A23\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2019-09-11%2018%3A46%3A47\n\nThe reason for the failures is quite apparent: we sometimes don't\nget as many workers as we hoped for.  The test script is not quite\n100% naive about that, but it's only designed to filter out the\n\"loops\" counts of parallel scan nodes.  As these examples show,\nthat's utterly inadequate.  The \"Workers Launched\" field is variable\ntoo, obviously, and so are the rows and loops counts for every plan\nnode up to the Gather.\n\nI experimented with adjusting explain_parallel_append() to filter\nmore fields, but soon realized that we'd have to filter out basically\neverything that makes it useful to run EXPLAIN ANALYZE at all.\n\nTherefore, I think it's time to give up this testing methodology\nas a bad idea, and fall back to the time-honored way of running a\nplain EXPLAIN and then the actual query, as per the attached patch.\n\n(Note: there's some roughly similar code in select_parallel.sql,\nbut as far as I could find it fails seldom if at all.  Likely that\nis because we don't run select_parallel in parallel with other\ntest scripts.  So perhaps an argument could be made to leave\npartition_prune.sql alone and just run it by itself.  I do not\ncare for that answer though, as it will make the regression test\nsuite slower, plus I do not see any argument that this testing method\nactually provides any info we don't get the traditional way.)\n\nBTW, another aspect of this test script that could stand to be\nnuked from orbit is this method for getting a custom plan:\n\n-- Execute query 5 times to allow choose_custom_plan\n-- to start considering a generic plan.\nexecute ab_q4 (1, 8);\nexecute ab_q4 (1, 8);\nexecute ab_q4 (1, 8);\nexecute ab_q4 (1, 8);\nexecute ab_q4 (1, 8);\n\nWe should drop that in favor of plan_cache_mode = force_custom_plan,\nIMO.  But I didn't include that change in this patch.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 26 Sep 2019 18:25:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Instability of partition_prune regression test results"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 7:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Every so often the partition_prune test falls over, for example\n> here, here, and here:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2019-08-15%2021%3A45%3A00\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-08-21%2022%3A19%3A23\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2019-09-11%2018%3A46%3A47\n>\n> The reason for the failures is quite apparent: we sometimes don't\n> get as many workers as we hoped for. The test script is not quite\n> 100% naive about that, but it's only designed to filter out the\n> \"loops\" counts of parallel scan nodes. As these examples show,\n> that's utterly inadequate. The \"Workers Launched\" field is variable\n> too, obviously, and so are the rows and loops counts for every plan\n> node up to the Gather.\n>\n> I experimented with adjusting explain_parallel_append() to filter\n> more fields, but soon realized that we'd have to filter out basically\n> everything that makes it useful to run EXPLAIN ANALYZE at all.\n>\n> Therefore, I think it's time to give up this testing methodology\n> as a bad idea, and fall back to the time-honored way of running a\n> plain EXPLAIN and then the actual query, as per the attached patch.\n\nIsn't the point of using ANALYZE here to show that the exec-param\nbased run-time pruning is working (those \"never executed\" strings)?\n\n> BTW, another aspect of this test script that could stand to be\n> nuked from orbit is this method for getting a custom plan:\n>\n> -- Execute query 5 times to allow choose_custom_plan\n> -- to start considering a generic plan.\n> execute ab_q4 (1, 8);\n> execute ab_q4 (1, 8);\n> execute ab_q4 (1, 8);\n> execute ab_q4 (1, 8);\n> execute ab_q4 (1, 8);\n>\n> We should drop that in favor of plan_cache_mode = force_custom_plan,\n> IMO.\n\n+1\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 27 Sep 2019 11:42:32 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Instability of partition_prune regression test results"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Sep 27, 2019 at 7:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I experimented with adjusting explain_parallel_append() to filter\n>> more fields, but soon realized that we'd have to filter out basically\n>> everything that makes it useful to run EXPLAIN ANALYZE at all.\n>> Therefore, I think it's time to give up this testing methodology\n>> as a bad idea, and fall back to the time-honored way of running a\n>> plain EXPLAIN and then the actual query, as per the attached patch.\n\n> Isn't the point of using ANALYZE here to show that the exec-param\n> based run-time pruning is working (those \"never executed\" strings)?\n\nHm. Well, if you want to see those, we could do it as attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 27 Sep 2019 11:59:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Instability of partition_prune regression test results"
},
{
"msg_contents": "On Sat, Sep 28, 2019 at 12:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Fri, Sep 27, 2019 at 7:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I experimented with adjusting explain_parallel_append() to filter\n> >> more fields, but soon realized that we'd have to filter out basically\n> >> everything that makes it useful to run EXPLAIN ANALYZE at all.\n> >> Therefore, I think it's time to give up this testing methodology\n> >> as a bad idea, and fall back to the time-honored way of running a\n> >> plain EXPLAIN and then the actual query, as per the attached patch.\n>\n> > Isn't the point of using ANALYZE here to show that the exec-param\n> > based run-time pruning is working (those \"never executed\" strings)?\n>\n> Hm. Well, if you want to see those, we could do it as attached.\n\nPerfect, thanks.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Sat, 28 Sep 2019 16:20:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Instability of partition_prune regression test results"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Sat, Sep 28, 2019 at 12:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Amit Langote <amitlangote09@gmail.com> writes:\n>>> Isn't the point of using ANALYZE here to show that the exec-param\n>>> based run-time pruning is working (those \"never executed\" strings)?\n\n>> Hm. Well, if you want to see those, we could do it as attached.\n\n> Perfect, thanks.\n\nOK, pushed that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Sep 2019 13:34:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Instability of partition_prune regression test results"
}
] |
[
{
"msg_contents": "Here's to hoping this is the worst omission in v12.\n\nJustin",
"msg_date": "Thu, 26 Sep 2019 21:20:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "tab complete for explain SETTINGS"
},
{
"msg_contents": "On 2019/09/27 11:20, Justin Pryzby wrote:\n> Here's to hoping this is the worst omission in v12.\n> \n> Justin\n> \n\nHi Justin,\n\nI share my test result of your patch.\n\nI used two commits REL_12_RC1 and Head, and got a Hunk below:\n\n#REL_12_RC1 (17822c0e4f5ab8093e78f665c9e44766ae648a44)\n=============================\n$ patch -p1 <v1-0001-tab-completion-for-explain-SETTINGS.patch\n(Stripping trailing CRs from patch; use --binary to disable.)\npatching file src/bin/psql/tab-complete.c\nHunk #1 succeeded at 2886 (offset -57 lines).\n=============================\n\n#Head (fbfa5664882c9b61428266e6fb0d48b0147c421a)\n=============================\n$ patch -p1 <v1-0001-tab-completion-for-explain-SETTINGS.patch\n(Stripping trailing CRs from patch; use --binary to disable.)\npatching file src/bin/psql/tab-complete.c\nHunk #1 succeeded at 2940 (offset -3 lines).\n=============================\n\n\nAnyway, I tested the patch and it looks fine. :)\n\n#Test result of tab-completion on Head\n=============================\n# explain (\nANALYZE BUFFERS COSTS FORMAT SETTINGS SUMMARY TIMING VERBOSE\n\n# explain (s\nsettings summary\n\n# explain (settings ON ) select * from pg_class;\n QUERY PLAN\n-------------------------------------------------------------\n Seq Scan on pg_class (cost=0.00..16.95 rows=395 width=265)\n Settings: geqo_threshold = '10'\n(2 rows)\n=============================\n\n\nThanks,\nTatsuro Yamada\n\n\n\n\n",
"msg_date": "Fri, 27 Sep 2019 12:18:17 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: tab complete for explain SETTINGS"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 12:18:17PM +0900, Tatsuro Yamada wrote:\n> Anyway, I tested the patch and it looks fine. :)\n\nThanks Justin and Yamada-san. The order of the options in the list to\ndisplay and in the check did not match the order of the documentation,\nwhich is the intention here, so fixed and committed this way.\n\n(The list of options displayed would be alphabetically ordered for the\ncompletion but it is good to keep the code consistent with the docs,\nthis makes easier future checks when adding new options).\n--\nMichael",
"msg_date": "Fri, 27 Sep 2019 12:55:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: tab complete for explain SETTINGS"
}
] |
[
{
"msg_contents": "Hi all,\n(Jeff Davis in CC)\n\nAs $subject tells, any version of OpenSSL not including\nX509_get_signature_nid() (version <= 1.0.1) causes the SSL tests to\nfail. This has been introduced by d6e612f.\n\nWe need to do something similar to c3d41cc for the test, as per the\nattached. I have tested that with OpenSSL 1.0.1 and 1.0.2 to stress\nboth scenarios.\n\nAny objections to this fix?\n\nThanks,\n--\nMichael",
"msg_date": "Fri, 27 Sep 2019 11:44:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "SSL tests failing for channel_binding with OpenSSL <= 1.0.1"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 11:44:57AM +0900, Michael Paquier wrote:\n> We need to do something similar to c3d41cc for the test, as per the\n> attached. I have tested that with OpenSSL 1.0.1 and 1.0.2 to stress\n> both scenarios.\n> \n> Any objections to this fix?\n\nCommitted as a12c75a1.\n--\nMichael",
"msg_date": "Mon, 30 Sep 2019 14:35:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: SSL tests failing for channel_binding with OpenSSL <= 1.0.1"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Sep 27, 2019 at 11:44:57AM +0900, Michael Paquier wrote:\n>> We need to do something similar to c3d41cc for the test, as per the\n>> attached. I have tested that with OpenSSL 1.0.1 and 1.0.2 to stress\n>> both scenarios.\n>> Any objections to this fix?\n\n> Committed as a12c75a1.\n\nThe committed fix looks odd: isn't the number of executed tests the\nsame in both code paths? (I didn't try it yet.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Sep 2019 09:37:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SSL tests failing for channel_binding with OpenSSL <= 1.0.1"
},
{
"msg_contents": "On Mon, 2019-09-30 at 09:37 -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Fri, Sep 27, 2019 at 11:44:57AM +0900, Michael Paquier wrote:\n> > > We need to do something similar to c3d41cc for the test, as per\n> > > the\n> > > attached. I have tested that with OpenSSL 1.0.1 and 1.0.2 to\n> > > stress\n> > > both scenarios.\n> > > Any objections to this fix?\n> > Committed as a12c75a1.\n> \n> The committed fix looks odd: isn't the number of executed tests the\n> same in both code paths? (I didn't try it yet.)\n\ntest_connect_fails actually runs two tests, one for the failing exit\ncode and one for the error message.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 30 Sep 2019 11:08:20 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: SSL tests failing for channel_binding with OpenSSL <= 1.0.1"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 11:08:20AM -0700, Jeff Davis wrote:\n> On Mon, 2019-09-30 at 09:37 -0400, Tom Lane wrote:\n>> The committed fix looks odd: isn't the number of executed tests the\n>> same in both code paths? (I didn't try it yet.)\n>\n> test_connect_fails actually runs two tests, one for the failing exit\n> code and one for the error message.\n\nYes. The committed code still works as I would expect. With OpenSSL\n<= 1.0.1, I get 10 tests, and 9 with OpenSSL >= 1.0.2. You can check\nthe difference from test 5 \"SCRAM with SSL and channel_binding=require\".\n--\nMichael",
"msg_date": "Tue, 1 Oct 2019 09:13:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: SSL tests failing for channel_binding with OpenSSL <= 1.0.1"
}
] |
[
{
"msg_contents": "Hello,\n\nI found the problem that clang compiler introduces warnings when building\nPostgreSQL. Attached patch fixes it.\n\n===\nCompiler version\n===\nclang version 10.0.0-svn372772-1~exp1+0~20190924181208.2504~1.gbpb209ff\n(trunk)\n\nOlder versions of clang may not generate this warning.\n\n===\nWarning\n===\n\ntimestamp.c:3236:22: warning: implicit conversion from 'long' to 'double'\nchanges value from 9223372036854775807 to 9223372036854775808\n[-Wimplicit-int-float-conversion]\n if (result_double > PG_INT64_MAX || result_double < PG_INT64_MIN)\n ~ ^~~~~~~~~~~~\n../../../../src/include/c.h:444:22: note: expanded from macro 'PG_INT64_MAX'\n#define PG_INT64_MAX INT64CONST(0x7FFFFFFFFFFFFFFF)\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../../src/include/c.h:381:25: note: expanded from macro 'INT64CONST'\n#define INT64CONST(x) (x##L)\n ^~~~\n<scratch space>:234:1: note: expanded from here\n0x7FFFFFFFFFFFFFFFL\n^~~~~~~~~~~~~~~~~~~\n1 warning generated.\npgbench.c:1657:30: warning: implicit conversion from 'long' to 'double'\nchanges value from 9223372036854775807 to 9223372036854775808\n[-Wimplicit-int-float-conversion]\n if (dval < PG_INT64_MIN || PG_INT64_MAX < dval)\n ^~~~~~~~~~~~ ~\n../../../src/include/c.h:444:22: note: expanded from macro 'PG_INT64_MAX'\n#define PG_INT64_MAX INT64CONST(0x7FFFFFFFFFFFFFFF)\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../src/include/c.h:381:25: note: expanded from macro 'INT64CONST'\n#define INT64CONST(x) (x##L)\n ^~~~\n<scratch space>:252:1: note: expanded from here\n0x7FFFFFFFFFFFFFFFL\n^~~~~~~~~~~~~~~~~~~\n1 warning generated.\n\n===\n\nThis warning is due to implicit conversion from PG_INT64_MAX to double,\nwhich drops the precision as described in the warning. This drop is not a\nproblem in this case, but we have to get rid of useless warnings. Attached\npatch casts PG_INT64_MAX explicitly.\n\nThanks,\nYuya Watari\nNTT Software Innovation Center\nwatari.yuya@gmail.com",
"msg_date": "Fri, 27 Sep 2019 12:00:15 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Keep compiler silence (clang 10, implicit conversion from 'long' to\n 'double' )"
},
{
"msg_contents": "Hello,\n\nI add further information. This issue also has a problem about\n*overflow checking*.\n\nThe original code is as follows.\n\nsrc/backend/utils/adt/timestamp.c:3222\n-----\n if (result_double > PG_INT64_MAX || result_double < PG_INT64_MIN)\n ereport(ERROR,\n (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n errmsg(\"interval out of range\")));\n result->time = (int64) result_double;\n-----\n\nHere, the code checks whether \"result_double\" fits 64-bit integer size\nbefore casting it.\n\nHowever, as I have mentioned in the previous email, PG_INT64_MAX is\ncast to double and the value becomes 9223372036854775808 due to lack\nof precision.\nTherefore, the above code is identical to \"result_double >\n9223372036854775808.0\". This checking does not cover the case when\nresult_double is equal to 9223372036854775808. In this case, \"(int64)\nresult_double\" will be -9223372036854775808, which is wrong.\n\nThe next code confirms what I explained.\n\n===\n#include <stdio.h>\n#include <stdint.h>\nint main(void)\n{\n double value = (double) INT64_MAX;\n printf(\"INT64_MAX = %ld\\n\", INT64_MAX);\n printf(\"value = %lf\\n\", value);\n printf(\"(value > (double) INT64_MAX) == %d\\n\", value > (double) INT64_MAX);\n printf(\"(long int) value == %ld\\n\", (long int) value);\n}\n===\nOutput:\nINT64_MAX = 9223372036854775807\nvalue = 9223372036854775808.000000\n(value > (double) INT64_MAX) == 0\n(long int) value == -9223372036854775808\n===\n\nI think the code should be \"result_double >= (double) PG_INT64_MAX\",\nthat is we have to use >= rather than >. I attached the modified\npatch.\n\nThanks,\nYuya Watari\nNTT Software Innovation Center\nwatari.yuya@gmail.com\n\n2019年9月27日(金) 12:00 Yuya Watari <watari.yuya@gmail.com>:\n>\n> Hello,\n>\n> I found the problem that clang compiler introduces warnings when building PostgreSQL. 
Attached patch fixes it.\n>\n> ===\n> Compiler version\n> ===\n> clang version 10.0.0-svn372772-1~exp1+0~20190924181208.2504~1.gbpb209ff (trunk)\n>\n> Older versions of clang may not generate this warning.\n>\n> ===\n> Warning\n> ===\n>\n> timestamp.c:3236:22: warning: implicit conversion from 'long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-int-float-conversion]\n> if (result_double > PG_INT64_MAX || result_double < PG_INT64_MIN)\n> ~ ^~~~~~~~~~~~\n> ../../../../src/include/c.h:444:22: note: expanded from macro 'PG_INT64_MAX'\n> #define PG_INT64_MAX INT64CONST(0x7FFFFFFFFFFFFFFF)\n> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> ../../../../src/include/c.h:381:25: note: expanded from macro 'INT64CONST'\n> #define INT64CONST(x) (x##L)\n> ^~~~\n> <scratch space>:234:1: note: expanded from here\n> 0x7FFFFFFFFFFFFFFFL\n> ^~~~~~~~~~~~~~~~~~~\n> 1 warning generated.\n> pgbench.c:1657:30: warning: implicit conversion from 'long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-int-float-conversion]\n> if (dval < PG_INT64_MIN || PG_INT64_MAX < dval)\n> ^~~~~~~~~~~~ ~\n> ../../../src/include/c.h:444:22: note: expanded from macro 'PG_INT64_MAX'\n> #define PG_INT64_MAX INT64CONST(0x7FFFFFFFFFFFFFFF)\n> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> ../../../src/include/c.h:381:25: note: expanded from macro 'INT64CONST'\n> #define INT64CONST(x) (x##L)\n> ^~~~\n> <scratch space>:252:1: note: expanded from here\n> 0x7FFFFFFFFFFFFFFFL\n> ^~~~~~~~~~~~~~~~~~~\n> 1 warning generated.\n>\n> ===\n>\n> This warning is due to implicit conversion from PG_INT64_MAX to double, which drops the precision as described in the warning. This drop is not a problem in this case, but we have to get rid of useless warnings. Attached patch casts PG_INT64_MAX explicitly.\n>\n> Thanks,\n> Yuya Watari\n> NTT Software Innovation Center\n> watari.yuya@gmail.com",
"msg_date": "Fri, 27 Sep 2019 16:43:53 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from 'long'\n to 'double' )"
},
{
"msg_contents": "Hello.\n\nAt Fri, 27 Sep 2019 16:43:53 +0900, Yuya Watari <watari.yuya@gmail.com> wrote in <CAJ2pMkaLTOxFjTim=GV8u=jG++sb9W6GNSgyFxPVDSQMVfRv5g@mail.gmail.com>\n> Hello,\n> \n> I add further information. This issue also has a problem about\n> *overflow checking*.\n> \n> The original code is as follows.\n> \n> src/backend/utils/adt/timestamp.c:3222\n> -----\n> if (result_double > PG_INT64_MAX || result_double < PG_INT64_MIN)\n> ereport(ERROR,\n> (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> errmsg(\"interval out of range\")));\n> result->time = (int64) result_double;\n> -----\n>\n> I think the code should be \"result_double >= (double) PG_INT64_MAX\",\n> that is we have to use >= rather than >. I attached the modified\n> patch.\n\nYeah, good catch! It is surely a bogus comparison. But I suspect\nthat how a number is rounded off depends on architechture. (But\nnot sure.) There seems be a variant of (double/float)->int32,\nsay, in interval_div.\n\nI found a trick seems workable generically (*1). (2.0 *\n(PG_INT64_MAX/2 + 1)) will generate the value next to the\nPG_INT64_MAX based on some assumptions\n(*1). IS_DOUBLE_SAFE_IN_INT64() below would be able to check if\nthe value can be converted into int64 safely or not.\n\n====\n/*\n * Check if a double value can be casted into int64.\n *\n * This macro is assuming that FLT_RADIX == 2 so that the * 2.0 trick works,\n * PG_INT64_MAX is so below DBL_MAX that the doubled value can be represented\n * in double and DBL_MANT_DIG is equal or smaller than DBL_MAX_EXP so that\n * ceil() returns expected result.\n*/\n#define MIN_DOUBLE_OVER_INT64_MAX (2.0 * (PG_INT64_MAX / 2 + 1))\n#define MAX_DOUBLE_UNDER_INT64_MIN (2.0 * (PG_INT64_MIN / 2 - 1))\n\n#if -PG_INT64_MAX != PG_INT64_MIN\n#define IS_DOUBLE_SAFE_IN_INT64(x)\t\t \\\n ((x) < MIN_DOUBLE_OVER_INT64_MAX && ceil(x) >= PG_INT64_MIN)\n#else\n#define IS_DOUBLE_SAFE_IN_INT64(x)\t\t\t\t\\\n ((x) < MIN_DOUBLE_OVER_INT64_MAX && (x) > MAX_DOUBLE_UNDER_INT64_MIN)\n#endif\n====\n\nI haven't fully confirmed if it is really right.\n\n*1: https://stackoverflow.com/questions/526070/handling-overflow-when-casting-doubles-to-integers-in-c\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 01 Oct 2019 15:41:48 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from\n 'long' to 'double' )"
},
{
"msg_contents": "Horiguchi-san,\n\nOn Tue, Oct 1, 2019 at 3:41 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I found a trick seems workable generically (*1). (2.0 *\n> (PG_INT64_MAX/2 + 1)) will generate the value next to the\n> PG_INT64_MAX based on some assumptions\n> (*1). IS_DOUBLE_SAFE_IN_INT64() below would be able to check if\n> the value can be converted into int64 safely or not.\n\nThanks for sharing a nice way of checking overflow. I tested your\nIS_DOUBLE_SAFE_IN_INT64() macro in my environment by the simple code\n(attached to this email) and confirmed that it appropriately handled\nthe overflow. However, further consideration is needed for different\narchitectures.\n\nI attached the modified patch. In the patch, I placed the macro in\n\"src/include/c.h\", but this may not be a good choice because c.h is\nwidely included from a lot of files. Do you have any good ideas about\nits placement?\n\nThanks,\nYuya Watari\nNTT Software Innovation Center\nwatari.yuya@gmail.com",
"msg_date": "Wed, 2 Oct 2019 15:56:07 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from 'long'\n to 'double' )"
},
{
"msg_contents": "Yuya Watari <watari.yuya@gmail.com> writes:\n> I attached the modified patch. In the patch, I placed the macro in\n> \"src/include/c.h\", but this may not be a good choice because c.h is\n> widely included from a lot of files. Do you have any good ideas about\n> its placement?\n\nI agree that there's an actual bug here; it can be demonstrated with\n\n# select extract(epoch from '256 microseconds'::interval * (2^55)::float8);\n date_part \n--------------------\n -9223372036854.775\n(1 row)\n\nwhich clearly is a wrong answer.\n\nI do not however like any of the proposed patches. We already have one\nplace that deals with this problem correctly, in int8.c's dtoi8():\n\n /*\n * Range check. We must be careful here that the boundary values are\n * expressed exactly in the float domain. We expect PG_INT64_MIN to be an\n * exact power of 2, so it will be represented exactly; but PG_INT64_MAX\n * isn't, and might get rounded off, so avoid using it.\n */\n if (unlikely(num < (float8) PG_INT64_MIN ||\n num >= -((float8) PG_INT64_MIN) ||\n isnan(num)))\n ereport(ERROR,\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n errmsg(\"bigint out of range\")));\n\nWe should adopt that coding technique not invent new ones.\n\nI do concur with creating a macro that encapsulates a correct version\nof this test, maybe like\n\n#define DOUBLE_FITS_IN_INT64(num) \\\n\t((num) >= (double) PG_INT64_MIN && \\\n\t (num) < -((double) PG_INT64_MIN))\n\n(or s/double/float8/ ?)\n\nc.h is probably a reasonable place, seeing that we define the constants\nthere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Nov 2019 12:53:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Keep compiler silence (clang 10,\n implicit conversion from 'long' to 'double' )"
},
{
"msg_contents": "At Mon, 04 Nov 2019 12:53:48 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Yuya Watari <watari.yuya@gmail.com> writes:\n> > I attached the modified patch. In the patch, I placed the macro in\n> > \"src/include/c.h\", but this may not be a good choice because c.h is\n> > widely included from a lot of files. Do you have any good ideas about\n> > its placement?\n> \n> I agree that there's an actual bug here; it can be demonstrated with\n> \n> # select extract(epoch from '256 microseconds'::interval * (2^55)::float8);\n> date_part \n> --------------------\n> -9223372036854.775\n> (1 row)\n> \n> which clearly is a wrong answer.\n> \n> I do not however like any of the proposed patches. We already have one\n> place that deals with this problem correctly, in int8.c's dtoi8():\n> \n> /*\n> * Range check. We must be careful here that the boundary values are\n> * expressed exactly in the float domain. We expect PG_INT64_MIN to be an\n> * exact power of 2, so it will be represented exactly; but PG_INT64_MAX\n> * isn't, and might get rounded off, so avoid using it.\n> */\n> if (unlikely(num < (float8) PG_INT64_MIN ||\n> num >= -((float8) PG_INT64_MIN) ||\n> isnan(num)))\n> ereport(ERROR,\n> (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> errmsg(\"bigint out of range\")));\n> \n> We should adopt that coding technique not invent new ones.\n> \n> I do concur with creating a macro that encapsulates a correct version\n> of this test, maybe like\n> \n> #define DOUBLE_FITS_IN_INT64(num) \\\n> \t((num) >= (double) PG_INT64_MIN && \\\n> \t (num) < -((double) PG_INT64_MIN))\n\n# I didn't noticed the existing bit above.\n\nAgreed. it is equivalent to the trick AFAICS thus no need to add\nanother one to warry with.\n\n> (or s/double/float8/ ?)\n\nMaybe.\n\n> c.h is probably a reasonable place, seeing that we define the constants\n> there.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 05 Nov 2019 13:59:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from\n 'long' to 'double' )"
},
{
"msg_contents": "Hello Tom and Horiguchi-san,\n\nOn Tue, Nov 5, 2019 at 1:59 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Mon, 04 Nov 2019 12:53:48 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > I do concur with creating a macro that encapsulates a correct version\n> > of this test, maybe like\n> >\n> > #define DOUBLE_FITS_IN_INT64(num) \\\n> > ((num) >= (double) PG_INT64_MIN && \\\n> > (num) < -((double) PG_INT64_MIN))\n\nThank you for your comments. The proposed macro \"DOUBLE_FITS_IN_INT64\"\nis a good and simple way to check the overflow. According to that, I\nrevised the patch, which includes regression tests.\n\nIn the patch, I additionally modified other occurrences as follows.\n\n=========\n\n+#define FLOAT8_FITS_IN_INT32(num) \\\n+ ((num) >= (float8) PG_INT32_MIN && (num) < -((float8) PG_INT32_MIN))\n\n=========\n\n- if (unlikely(num < (float8) PG_INT32_MIN ||\n- num >= -((float8) PG_INT32_MIN) ||\n- isnan(num)))\n+ /* Range check */\n+ if (unlikely(!FLOAT8_FITS_IN_INT32(num)))\n\n=========\n\nThe added macro FLOAT8_FITS_IN_INT32() does not check NaN explicitly,\nbut it sufficiently handles the case.\n\nBest regards,\nYuya Watari\nNTT Software Innovation Center\nwatari.yuya@gmail.com",
"msg_date": "Tue, 5 Nov 2019 20:43:38 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from 'long'\n to 'double' )"
},
{
"msg_contents": "Yuya Watari <watari.yuya@gmail.com> writes:\n> The added macro FLOAT8_FITS_IN_INT32() does not check NaN explicitly,\n> but it sufficiently handles the case.\n\nReally? I don't think anything is guaranteed about how a NaN will\ncompare when using C's non-NaN-aware comparison operators.\n\nMy thought about this was to annotate the macros with a reminder\nto also check for NaN if there's any possibility that the value\nis NaN.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Nov 2019 10:04:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Keep compiler silence (clang 10,\n implicit conversion from 'long' to 'double' )"
},
{
"msg_contents": "Hello Tom,\n\nThank you for replying.\n\nOn Wed, Nov 6, 2019 at 12:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Yuya Watari <watari.yuya@gmail.com> writes:\n> > The added macro FLOAT8_FITS_IN_INT32() does not check NaN explicitly,\n> > but it sufficiently handles the case.\n>\n> Really? I don't think anything is guaranteed about how a NaN will\n> compare when using C's non-NaN-aware comparison operators.\n>\n> My thought about this was to annotate the macros with a reminder\n> to also check for NaN if there's any possibility that the value\n> is NaN.\n\nI agree with your opinion. Thank you for pointing it out.\n\nIf the platform satisfies IEEE-754 standard, all comparisons (except\nfor \"not equals\") between NaN and other floating values are \"false\".\n[1] In this case, the proposed FLOAT8_FITS_IN_INT32() macro handles\nNaN.\n\n[1] https://en.wikipedia.org/wiki/NaN#Comparison_with_NaN\n\nHowever, this behavior depends on the platform architecture. As you\nhave said, C language does not always follow IEEE-754. I think adding\nexplicit checking of NaN is necessary.\n\nI modified the patch and attached it.\n\nBest regards,\nYuya Watari\nNTT Software Innovation Center\nwatari.yuya@gmail.com",
"msg_date": "Wed, 6 Nov 2019 11:32:38 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from 'long'\n to 'double' )"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 3:33 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n> However, this behavior depends on the platform architecture. As you\n> have said, C language does not always follow IEEE-754. I think adding\n> explicit checking of NaN is necessary.\n\nI'm curious about this point. C may not require IEEE 754 (for\nexample, on current IBM mainframe and POWER hardware you can opt for\nIBM hex floats, and on some IBM platforms that is the default, and the\nC compiler isn't breaking any rules by doing that; the only other\nfloating point format I've heard of is VAX format, long gone, but\nperhaps allowed by C). But PostgreSQL effectively requires IEEE 754\nsince commit 02ddd499322ab6f2f0d58692955dc9633c2150fc, right?\n\n\n",
"msg_date": "Wed, 6 Nov 2019 15:43:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from 'long'\n to 'double' )"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Nov 6, 2019 at 3:33 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n>> However, this behavior depends on the platform architecture. As you\n>> have said, C language does not always follow IEEE-754. I think adding\n>> explicit checking of NaN is necessary.\n\n> I'm curious about this point. C may not require IEEE 754 (for\n> example, on current IBM mainframe and POWER hardware you can opt for\n> IBM hex floats, and on some IBM platforms that is the default, and the\n> C compiler isn't breaking any rules by doing that; the only other\n> floating point format I've heard of is VAX format, long gone, but\n> perhaps allowed by C). But PostgreSQL effectively requires IEEE 754\n> since commit 02ddd499322ab6f2f0d58692955dc9633c2150fc, right?\n\nThat commit presumes that floats follow the IEEE bitwise representation,\nI think; but it's a long way from there to assuming that float comparisons\ndo something that is explicitly *not* promised by C99. The C spec goes no\nfurther than to state that comparisons on NaNs might raise an exception,\nand that's already bad enough. I believe that the assumption Yuya-san was\nmaking about \"comparisons on NaNs return false\" is only guaranteed by C99\nif you use the new-in-C99 macros isless(x, y) and so on, not if you write\nx < y.\n\nThere's a separate discussion to be had here about whether\n\t!isnan(x) && !isnan(y) && x < y\nis more or less efficient, or portable, than\n\tisless(x, y)\nbut I'm not really in any hurry to start using the latter macros.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Nov 2019 22:21:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Keep compiler silence (clang 10,\n implicit conversion from 'long' to 'double' )"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> But PostgreSQL effectively requires IEEE 754 since commit\n >> 02ddd499322ab6f2f0d58692955dc9633c2150fc, right?\n\n Tom> That commit presumes that floats follow the IEEE bitwise\n Tom> representation, I think;\n\nCorrect. (It notably does _not_ make any assumptions about how floating\npoint arithmetic or comparisons work - all the computation is done in\nintegers.)\n\n Tom> but it's a long way from there to assuming that float comparisons\n Tom> do something that is explicitly *not* promised by C99.\n\nI agree.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Wed, 06 Nov 2019 04:08:33 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Keep compiler silence (clang 10,\n implicit conversion from 'long' to 'double' )"
},
{
"msg_contents": "Hello Tom, Thomas, and Andrew,\n\n> Tom> That commit presumes that floats follow the IEEE bitwise\n> Tom> representation, I think;\n>\n> Correct. (It notably does _not_ make any assumptions about how floating\n> point arithmetic or comparisons work - all the computation is done in\n> integers.)\n>\n> Tom> but it's a long way from there to assuming that float comparisons\n> Tom> do something that is explicitly *not* promised by C99.\n>\n> I agree.\n\nThank you for your comments. I agree that we should not assume\nanything that is not guaranteed in the language specification. The\nmodified patch (attached in the previous e-mail) checks NaN explicitly\nif needed.\n\nBest regards,\nYuya Watari\nNTT Software Innovation Center\nwatari.yuya@gmail.com\n\n\n",
"msg_date": "Wed, 6 Nov 2019 13:56:46 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from 'long'\n to 'double' )"
},
{
"msg_contents": "At Wed, 6 Nov 2019 13:56:46 +0900, Yuya Watari <watari.yuya@gmail.com> wrote in \n> Hello Tom, Thomas, and Andrew,\n> \n> > Tom> That commit presumes that floats follow the IEEE bitwise\n> > Tom> representation, I think;\n> >\n> > Correct. (It notably does _not_ make any assumptions about how floating\n> > point arithmetic or comparisons work - all the computation is done in\n> > integers.)\n> >\n> > Tom> but it's a long way from there to assuming that float comparisons\n> > Tom> do something that is explicitly *not* promised by C99.\n> >\n> > I agree.\n> \n> Thank you for your comments. I agree that we should not assume\n> anything that is not guaranteed in the language specification. The\n> modified patch (attached in the previous e-mail) checks NaN explicitly\n> if needed.\n\nMmm? See the bit in the patch cited below (v5).\n\n+\t/* Range check */\n+\tif (unlikely(!FLOAT8_FITS_IN_INT32(num)) || isnan(num))\n\nIf compiler doesn't any fancy, num is fed to an arithmetic before\nchecking if it is NaN. That seems have a chance of exception.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 07 Nov 2019 15:09:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from\n 'long' to 'double' )"
},
{
"msg_contents": "Hello Horiguchi-san,\n\nOn Thu, Nov 7, 2019 at 3:10 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Mmm? See the bit in the patch cited below (v5).\n>\n> + /* Range check */\n> + if (unlikely(!FLOAT8_FITS_IN_INT32(num)) || isnan(num))\n>\n> If compiler doesn't any fancy, num is fed to an arithmetic before\n> checking if it is NaN. That seems have a chance of exception.\n\nThank you for pointing it out. That's my mistake. I fixed it and\nattached the patch.\n\nBest regards,\nYuya Watari\nNTT Software Innovation Center\nwatari.yuya@gmail.com",
"msg_date": "Thu, 7 Nov 2019 17:21:06 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from 'long'\n to 'double' )"
},
{
"msg_contents": "Yuya Watari <watari.yuya@gmail.com> writes:\n> On Thu, Nov 7, 2019 at 3:10 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> + if (unlikely(!FLOAT8_FITS_IN_INT32(num)) || isnan(num))\n>> If compiler doesn't any fancy, num is fed to an arithmetic before\n>> checking if it is NaN. That seems have a chance of exception.\n\n> Thank you for pointing it out. That's my mistake. I fixed it and\n> attached the patch.\n\nActually, that mistake is very old --- the existing functions tested\nisnan() last for a long time. I agree that testing isnan() first\nis safer, but it seems that the behavior of throwing an exception\nfor comparisons on NaN is rarer than one might guess from the C spec.\n\nAnother issue in the patch as it stands is that the FITS_IN_ macros\nrequire the input to have already been rounded with rint(), else they'll\ngive the wrong answer for values just a bit smaller than -PG_INTnn_MIN.\nThe existing uses of the technique did that, and interval_mul already\ndid too, but I had to adjust pgbench. This is largely a documentation\nfailure: not only did you fail to add any commentary about the new macros,\nbut you removed most of the commentary that had been in-line in the\nexisting usages.\n\nI fixed those things and pushed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Nov 2019 11:30:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Keep compiler silence (clang 10,\n implicit conversion from 'long' to 'double' )"
},
{
"msg_contents": "Hello Tom,\n\nThank you for your comments.\n\nOn Fri, Nov 8, 2019 at 1:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> failure: not only did you fail to add any commentary about the new macros,\n> but you removed most of the commentary that had been in-line in the\n> existing usages.\n\nI apologize for the insufficient comments. I had to add more\ninformation about these macros.\n\n> I fixed those things and pushed it.\n\nThank you very much for the commit!\n\nBest regards,\nYuya Watari\nNTT Software Innovation Center\nwatari.yuya@gmail.com\n\n\n",
"msg_date": "Fri, 8 Nov 2019 14:58:33 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Keep compiler silence (clang 10, implicit conversion from 'long'\n to 'double' )"
}
] |